
Veritas Storage Foundation Scalable File Server Administrator's Guide

5.5



The software described in this book is furnished under a license agreement and may be used only in accordance with the terms of the agreement.

Documentation version 5.5.1

Legal Notice
Copyright © 2009 Symantec Corporation. All rights reserved. Symantec and the Symantec Logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners. This Symantec product may contain third party software for which Symantec is required to provide attribution to the third party (Third Party Programs). Some of the Third Party Programs are available under open source or free software licenses. The License Agreement accompanying the Software does not alter any rights or obligations you may have under those open source or free software licenses. Please see the Third Party Legal Notice Appendix to this Documentation or TPIP ReadMe File accompanying this Symantec product for more information on the Third Party Programs. The product described in this document is distributed under licenses restricting its use, copying, distribution, and decompilation/reverse engineering. No part of this document may be reproduced in any form by any means without prior written authorization of Symantec Corporation and its licensors, if any. THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19 "Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in Commercial Computer Software or Commercial Computer Software Documentation", as applicable, and any successor regulations. Any use, modification, reproduction release, performance, display or disclosure of the Licensed Software and Documentation by the U.S. Government shall be solely in accordance with the terms of this Agreement.

Symantec Corporation 350 Ellis Street Mountain View, CA 94043 http://www.symantec.com

Technical Support
Symantec Technical Support maintains support centers globally. Technical Support's primary role is to respond to specific queries about product features and functionality. The Technical Support group also creates content for our online Knowledge Base. The Technical Support group works collaboratively with the other functional areas within Symantec to answer your questions in a timely fashion. For example, the Technical Support group works with Product Engineering and Symantec Security Response to provide alerting services and virus definition updates. Symantec's maintenance offerings include the following:

■ A range of support options that give you the flexibility to select the right amount of service for any size organization
■ Telephone and Web-based support that provides rapid response and up-to-the-minute information
■ Upgrade assurance that delivers automatic software upgrade protection
■ Global support that is available 24 hours a day, 7 days a week
■ Advanced features, including Account Management Services

For information about Symantec's Maintenance Programs, you can visit our Web site at the following URL: www.symantec.com/techsupp/

Contacting Technical Support


Customers with a current maintenance agreement may access Technical Support information at the following URL: www.symantec.com/techsupp/

Before contacting Technical Support, make sure you have satisfied the system requirements that are listed in your product documentation. Also, you should be at the computer on which the problem occurred, in case it is necessary to replicate the problem. When you contact Technical Support, please have the following information available:

■ Product release level
■ Hardware information
■ Available memory, disk space, and NIC information
■ Operating system
■ Version and patch level
■ Network topology
■ Router, gateway, and IP address information
■ Problem description:
    ■ Error messages and log files
    ■ Troubleshooting that was performed before contacting Symantec
    ■ Recent software configuration changes and network changes
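It can save time to gather the items above into a single file before opening a case. The following sketch is illustrative only, not an SFS tool; it uses the Python standard library, and the output path is an arbitrary choice.

```python
import platform
import shutil
import socket

def gather_support_info(path="/tmp/support_info.txt"):
    """Collect basic system details that Technical Support typically asks for."""
    usage = shutil.disk_usage("/")
    lines = [
        "Hostname: %s" % socket.gethostname(),
        "Operating system: %s %s" % (platform.system(), platform.release()),
        "Version and patch level: %s" % platform.version(),
        "Hardware: %s" % platform.machine(),
        # Disk space on the root file system, reported in megabytes.
        "Disk space on /: %d MB free of %d MB"
        % (usage.free // 2**20, usage.total // 2**20),
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return path

print(gather_support_info())
```

Network topology, error messages, and recent configuration changes still need to be described by hand; the script only covers the machine-readable items.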

Licensing and registration


If your Symantec product requires registration or a license key, access our technical support Web page at the following URL: www.symantec.com/techsupp/

Customer service
Customer service information is available at the following URL: www.symantec.com/techsupp/

Customer Service is available to assist with the following types of issues:

■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and maintenance contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals

Maintenance agreement resources


If you want to contact Symantec regarding an existing maintenance agreement, please contact the maintenance agreement administration team for your region as follows:
■ Asia-Pacific and Japan: customercare_apac@symantec.com
■ Europe, Middle-East, and Africa: semea@symantec.com
■ North America and Latin America: supportsolutions@symantec.com

Additional enterprise services


Symantec offers a comprehensive set of services that allow you to maximize your investment in Symantec products and to develop your knowledge, expertise, and global insight, which enable you to manage your business risks proactively. Enterprise services that are available include the following:
■ Symantec Early Warning Solutions: These solutions provide early warning of cyber attacks, comprehensive threat analysis, and countermeasures to prevent attacks before they occur.

■ Managed Security Services: These services remove the burden of managing and monitoring security devices and events, ensuring rapid response to real threats.

■ Consulting Services: Symantec Consulting Services provide on-site technical expertise from Symantec and its trusted partners. Symantec Consulting Services offer a variety of prepackaged and customizable options that include assessment, design, implementation, monitoring, and management capabilities. Each is focused on establishing and maintaining the integrity and availability of your IT resources.

■ Educational Services: Educational Services provide a full array of technical training, security education, security certification, and awareness communication programs.
To access more information about Enterprise services, please visit our Web site at the following URL: www.symantec.com

Select your country or language from the site index.

Contents

Technical Support ............................................................................................... 4

Chapter 1        Introducing the Veritas Storage Foundation Scalable File Server ....................................................... 15

    About Storage Foundation Scalable File Server .................................. 15
    About the core strengths of SFS ...................................................... 16
    About SFS features ....................................................................... 17
        Simple installation ................................................................. 17
        Administration ...................................................................... 17
        Scalable NFS ......................................................................... 18
        NFS Lock Management (NLM) ................................................... 18
        Active/Active CIFS ................................................................. 18
        Storage tiering ...................................................................... 18
    SFS key benefits and other applications ........................................... 19
        High performance scaling and seamless growth ........................... 19
        High availability .................................................................... 20
        Consolidating and reducing costs of storage ................................ 20
        Enabling scale-out compute clusters and heterogeneous sharing of data ........................................................................... 21

Chapter 2        Creating users based on roles .......................................... 23

    About user roles and privileges ....................................................... 23
    About the naming requirements for adding new users ........................ 24
    About using the SFS command-line interface .................................... 25
    Logging in to the SFS CLI ............................................................... 25
    About accessing the online man pages ............................................. 30
    About creating Master, System Administrator, and Storage Administrator users ............................................................... 32
        Creating Master, System Administrator, and Storage Administrator users ......................................................... 33
    About the support user .................................................................. 35
        Configuring the support user account ........................................ 36
    Displaying the command history ..................................................... 37


Chapter 3        Displaying and adding nodes to a cluster ...................... 39

    About the cluster commands ......................................................... 39
    Displaying the nodes in the cluster ................................................. 40
    About adding a new node to the cluster ........................................... 43
    Installing the SFS software onto a new node ..................................... 43
    Adding a node to the cluster .......................................................... 44
    Deleting a node from the cluster ..................................................... 45
    Shutting down the cluster nodes ..................................................... 47
    Rebooting the nodes in the cluster .................................................. 47

Chapter 4        Configuring SFS network settings ................................... 49

    About network mode commands ..................................................... 50
    Displaying the network configuration and statistics ........................... 51
    About bonding Ethernet interfaces .................................................. 52
        Bonding Ethernet interfaces .................................................... 53
    About DNS .................................................................................. 54
        Configuring DNS settings ........................................................ 56
    About IP commands ...................................................................... 58
    About configuring IP addresses ...................................................... 58
        Configuring IP addresses ........................................................ 60
    About configuring Ethernet interfaces ............................................. 64
        Configuring Ethernet interfaces ............................................... 65
    About configuring routing tables .................................................... 67
        Configuring routing tables ...................................................... 69
    About LDAP ................................................................................ 72
    Before configuring LDAP settings ................................................... 72
    About configuring LDAP server settings ........................................... 73
        Configuring LDAP server settings ............................................. 75
    About administering SFS cluster's LDAP client .................................. 79
        Administering the SFS cluster's LDAP client ............................... 80
    About NIS ................................................................................... 81
        Configuring the NIS-related commands ..................................... 82
    About NSS .................................................................................. 84
        Configuring NSS lookup order .................................................. 84
    About VLAN ................................................................................ 85
        Configuring VLAN .................................................................. 86

Chapter 5        Configuring your NFS server ............................................. 89

    About NFS server commands .......................................................... 89
        Accessing the NFS server ......................................................... 90
        Displaying NFS statistics ......................................................... 92


Displaying file systems and snapshots that can be exported ........... 93

Chapter 6        Configuring storage ............................................................ 95

    About storage provisioning and management .................................... 95
    About configuring storage pools ..................................................... 96
        Configuring storage pools ........................................................ 99
    About configuring disks ............................................................... 101
        Configuring disks ................................................................. 103
    About displaying information for all disk devices ............................. 105
        Displaying information for all disk devices associated with nodes in a cluster .................................................................... 106
    Increasing the storage capacity of a LUN ........................................ 108
    Printing WWN information .......................................................... 109
    Initiating SFS host discovery of LUNs ............................................ 110
    About I/O fencing ....................................................................... 111
        Configuring I/O fencing ......................................................... 113

Chapter 7        Creating and maintaining file systems ........................ 117

    About creating and maintaining file systems ................................... 117
    Listing all file systems and associated information ........................... 120
    About creating file systems .......................................................... 120
        Creating a file system ........................................................... 121
    Adding or removing a mirror to a file system ................................... 124
    Configuring FastResync for a file system ........................................ 126
    Disabling the FastResync option for a file system ............................. 127
    Increasing the size of a file system ................................................. 127
    Decreasing the size of a file system ................................................ 129
    Checking and repairing a file system .............................................. 130
    Changing the status of a file system ............................................... 131
    Destroying a file system .............................................................. 133
    About snapshots ........................................................................ 133
        Configuring snapshots .......................................................... 134
    About snapshot schedules ............................................................ 138
        Configuring snapshot schedules .............................................. 140

Chapter 8        Creating and maintaining NFS shares ......................... 143

    About NFS file sharing ................................................................ 143
        Displaying exported file systems ............................................. 144
        Adding an NFS share ............................................................ 145
        Sharing file systems using CIFS and NFS protocols ..................... 148
        Unexporting a file system or deleting NFS options ...................... 151



Chapter 9        Using SFS as a CIFS server .............................................. 153

    About configuring SFS for CIFS ..................................................... 154
    About configuring CIFS for standalone mode ................................... 155
        Configuring CIFS server status for standalone mode ................... 156
    About configuring CIFS for NT domain mode ................................... 159
        Configuring CIFS for the NT domain mode ................................ 160
    About leaving an NT domain ........................................................ 163
    Changing NT domain settings ....................................................... 163
    Changing security settings ........................................................... 165
    Changing security settings after the CIFS server is stopped ................ 165
    About configuring CIFS for AD domain mode ................................... 165
        Configuring CIFS for the AD domain mode ................................ 167
    Leaving an AD domain ................................................................. 170
    Changing domain settings for AD domain mode ............................... 171
    Removing the AD interface .......................................................... 173
    About setting NTLM .................................................................... 173
        Setting NTLM ...................................................................... 175
    About setting trusted domains ...................................................... 176
        Setting AD trusted domains .................................................... 176
    About storing account information ................................................ 177
        Storing user and group accounts ............................................. 179
    About reconfiguring the CIFS service ............................................. 180
        Reconfiguring the CIFS service ............................................... 181
    About managing CIFS shares ........................................................ 183
        Setting share properties ........................................................ 184
    Sharing file systems using CIFS and NFS protocols ........................... 188
    About SFS cluster and load balancing ............................................. 191
        Splitting a share .................................................................. 192
    About managing home directories ................................................. 194
        Setting the home directory file systems .................................... 195
        Enabling quotas on home directory file systems ......................... 196
        Setting up home directories and use of quotas ........................... 197
        Displaying home directory usage information ............................ 199
        Deleting home directories and disabling creation of home directories ..................................................................... 200
    About managing local users and groups .......................................... 201
        Creating a local CIFS user ...................................................... 202
    About configuring local groups ..................................................... 204
        Configuring a local group ...................................................... 205



Chapter 10       Using FTP ............................................................................ 207

    About FTP ................................................................................. 207
    Displaying FTP server ................................................................. 208
    About FTP server commands ........................................................ 208
        Using the FTP server commands ............................................. 209
    About FTP set commands ............................................................. 210
        Using the set commands ........................................................ 213
    About FTP session commands ....................................................... 216
        Using the FTP session commands ............................................ 217
    Using the logupload command ...................................................... 219

Chapter 11       Configuring event notifications ...................................... 221

    About configuring event notifications ............................................ 221
    About severity levels and filters .................................................... 222
    About email groups ..................................................................... 223
        Configuring an email group .................................................... 225
    About syslog event logging ........................................................... 229
        Configuring a syslog server .................................................... 230
    Displaying events ....................................................................... 231
    About SNMP notifications ............................................................ 232
        Configuring an SNMP management server ................................ 233
    Configuring events for event reporting ........................................... 236
    Exporting events in syslog format to a given URL ............................. 237

Chapter 12       Configuring backup ........................................................... 239

    About backup ............................................................................ 239
    Configuring backups using NetBackup or other third-party backup applications ........................................................................ 240
    About NetBackup ....................................................................... 241
    Adding a NetBackup master server to work with SFS ........................ 243
    Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation .......................................... 244
    Configuring the virtual name of NetBackup ..................................... 245
    About Network Data Management Protocol ..................................... 246
    About NDMP supported configurations .......................................... 247
    About the NDMP policies ............................................................. 249
        Configuring the NDMP policies ............................................... 250
    Displaying all NDMP policies ........................................................ 255
    About retrieving the NDMP data ................................................... 255
        Retrieving the NDMP data ...................................................... 257
    Restoring the default NDMP policies .............................................. 259



    About backup configurations ........................................................ 259
        Configuring backup .............................................................. 260

Chapter 13       Configuring SFS Dynamic Storage Tiering ................... 263

    About SFS Dynamic Storage Tiering (DST) ...................................... 263
    How SFS uses Dynamic Storage Tiering .......................................... 266
    About policies ............................................................................ 267
    About adding tiers to file systems .................................................. 268
        Adding tiers to a file system ................................................... 268
    Removing a tier from a file system ................................................. 270
    About configuring a mirror on the tier of a file system ...................... 271
        Configuring a mirror to a tier of a file system ............................ 271
    Listing all of the files on the specified tier ....................................... 273
    Displaying a list of DST file systems ............................................... 274
    Displaying the tier location of a specified file ................................... 274
    About configuring the policy of each tiered file system ...................... 274
        Configuring the policy of each tiered file system ........................ 275
    Relocating a file or directory of a tiered file system ........................... 277
    About configuring schedules for all tiered file systems ...................... 277
        Configuring schedules for all tiered file systems ......................... 279
    Displaying files that will be moved by running a policy ...................... 280

Chapter 14       Configuring system information .................................... 283

    About system commands ............................................................. 283
    About setting the clock commands ................................................ 284
        Setting the clock commands ................................................... 285
    About configuring the locally saved configuration files ..................... 288
        Configuring the locally saved configuration files ........................ 289
    Using the more command ............................................................ 292
    About coordinating cluster nodes to work with NTP servers ............... 292
        Coordinating cluster nodes to work with NTP servers .................. 293
    Displaying the system statistics .................................................... 294
    Using the swap command ............................................................ 295
    About the option commands ......................................................... 296
        Using the option commands ................................................... 299

Chapter 15       Upgrading Storage Foundation Scalable File Server ............................................................................. 305

    About upgrading drivers .............................................................. 305
    Displaying the current version of SFS ............................................ 307
    About installing patches .............................................................. 308



Installing patches ................................................................. 310

Chapter 16       Troubleshooting ................................................................. 313

    About troubleshooting commands ................................................. 313
    Retrieving and sending debugging information ................................ 314
    About the iostat command ........................................................... 315
        Generating CPU and device utilization reports ........................... 316
    About excluding the PCI ID prior to the SFS installation .................... 317
        Excluding the PCI IDs from the cluster ..................................... 319
    Testing network connectivity ....................................................... 321
    About the services command ........................................................ 321
        Using the services command ................................................... 323
    Using the support login ............................................................... 325
    About network traffic details ....................................................... 325
        Exporting and displaying the network traffic details .................... 326
    Accessing processor activity ........................................................ 327
    Using the traceroute command ..................................................... 328

Glossary ............................................................................................................. 331

Index ................................................................................................................... 335



Chapter 1

Introducing the Veritas Storage Foundation Scalable File Server


This chapter includes the following topics:

■ About Storage Foundation Scalable File Server
■ About the core strengths of SFS
■ About SFS features
■ SFS key benefits and other applications

About Storage Foundation Scalable File Server


Storage Foundation Scalable File Server (SFS) is a highly scalable and highly available clustered Network Attached Storage (NAS) software appliance. It is based on the Storage Foundation Cluster File System technology, and is a complete solution for multi-protocol file serving. SFS provides an open storage gateway model, including a highly available and scalable Network File System (NFS), CIFS and FTP file serving platform and an easy-to-use administrative interface. The product includes the following features:

■ Backup operations using both NDMP and/or the built-in NetBackup client
■ Active/Active CIFS, including integration with Active Directory operations
■ Global cluster administration through a single interface
■ Active/Active shared data NFS sharing including shared read/write and LDAP/NIS support



■ Simple administration of Fibre Channel Host Bus Adapters (HBAs), file systems, disks, snapshots, and Dynamic Storage Tiering (DST)
■ SNMP, syslog, and email notification
■ Seamless upgrade and patch management
■ Support information
■ Online man pages
■ Simple help

SFS provides sharing of NFS and CIFS file systems in a simple, highly scalable, and highly available manner. The components of SFS include a security-hardened, custom-install SLES 10 SP2 operating system, core Storage Foundation services including Cluster File System, and the SFS software platform. These components are provided on a single DVD or DVD ISO image.

About the core strengths of SFS


SFS leverages all the capabilities and strengths of the Storage Foundation family of products. SFS contains all the key features of Storage Foundation Cluster File System 5.0 MP3 including:

■ Dynamic Multipathing (DMP)
■ Cluster Volume Manager
■ Cluster File System (CFS)
■ Veritas Cluster Server (VCS)
■ Dynamic Storage Tiering (DST)
■ I/O Fencing

DMP provides Fibre Channel Host Bus Adapter load balancing policies and tight integration with array vendors to provide in-depth failure detection and path failover logic. DMP is compatible with more hardware than any similar product. Cluster Volume Manager provides a cluster-wide consistent virtualization layer that leverages all the strengths of Veritas Volume Manager (VxVM) including online re-layout and resizing of volumes, and online array migrations. You can mirror your underlying SFS file systems across separate physical frames to ensure maximum availability on the storage tier, and seamlessly add or remove storage, whether single drives or entire arrays.



Cluster File System complies with the Portable Operating System Interface (POSIX) standard. It also provides full cache consistency and global lock management at a file or sub-file level. CFS lets all nodes in the cluster perform metadata or data transactions. This allows linear scalability in terms of NFS operations per second. VCS monitors communication, and failover for all nodes in the cluster and their associated critical resources. This includes virtual IP addressing failover for all client connections regardless of the client protocol. DST dynamically and transparently moves files to different storage tiers to respond to changing business needs. DST is used in Storage Foundation Scalable File Server as SFS Storage Tiering. I/O fencing further helps to guarantee data integrity in the event of a multiple network failure by using the SFS storage to ensure that cluster membership can be determined correctly. This virtually eliminates the chance of a cluster split-brain from occurring.

About SFS features


SFS has new features specific to being a clustered NAS product. A partial list of these features is discussed in the following sections.

Simple installation
A single node in the cluster is booted from a DVD containing the operating system image, core Storage Foundation, and SFS modules. While the node boots, the other nodes are defined using IP addresses. After you install SFS and the first node is up and running, the rest of the cluster nodes are automatically installed with all necessary components. The key services are then automatically started to allow the cluster to begin discovering storage and creating file shares.

Administration
SFS contains a role-based administration model consisting of the following key roles:

Master
System
Storage

These roles are consistent with the operational roles in many data centers.


For each role, the administrator uses a simple menu-driven text interface. This interface provides a single point of administration for the entire cluster. A user logs in as one of those roles on one of the nodes in the cluster and runs commands that perform the same tasks on all nodes in the cluster. You do not need to have any knowledge of the Veritas Storage Foundation technology to install or administer an SFS cluster. If you are currently familiar with core SFCFS or Storage Foundation in general, you will be familiar with the basic management concepts.

Scalable NFS
With SFS, all nodes in the cluster can serve the same NFS shares for both read and write. This creates very high aggregated throughput rates, because you can use the sum of the bandwidth of all nodes. Cache coherency is maintained throughout the cluster.

NFS Lock Management (NLM)


The NFS Lock Management (NLM) module allows a customer to use NFS advisory client locking in parallel with core SFCFS global lock management. The module fails over locks among SFS nodes and forwards all NFS client lock requests to a single NFS lock master. The result is that no data corruption occurs if a user or application needs to use NFS client locking with an SFS cluster.

Active/Active CIFS
CIFS is active on all nodes within the SFS cluster. The specific shares are read/write on the node where they reside, but can fail over to any other node in the cluster. SFS supports CIFS home directory shares.

Storage tiering
SFS's built-in Dynamic Storage Tiering (DST) feature can reduce the cost of storage by moving data to lower cost storage. SFS storage tiering also facilitates the moving of data between different drive architectures. DST lets you do the following:

Create each file in its optimal storage tier, based on pre-defined rules and policies.
Relocate files between storage tiers automatically as optimal storage changes, to take advantage of storage economies.


Retain original file access paths to minimize operational disruption, for applications, backup procedures, and other custom scripts.
Handle millions of files that are typical in large data centers.
Automate these features quickly and accurately.

SFS key benefits and other applications


SFS can be used with any application that requires sharing files over the NFS v3, CIFS, or FTP protocols. Use cases such as home directories, decision support applications that require sequential shared access, Web pages, and applications are all ideal for SFS. SFS is also applicable when you want general-purpose, high-throughput scale-out processing for your data, together with enterprise-class, highly available cluster functionality.

High performance scaling and seamless growth


SFS lets you scale storage and processing independently, seamlessly, and online. Because an application may need to scale storage, processing, or both, this capability gives you a great deal of flexibility. SFS automates the installation of new nodes into the running cluster, configures those nodes, and adds their capacity into the processing tier. SFS can scale from 1 to 16 nodes with near-linear performance scaling. You can add processing one node at a time, rather than buying a large, expensive independent appliance.

A storage administrator can configure a new array, or add new LUNs from an existing array, into the SFS cluster. SFS can then scan the storage, automatically see the new LUNs, and place them under SFS control for use in the cluster. All of this is performed online. Resizing of existing file systems is also performed online with no interruption of service. A simple command both adds space to an existing file system and reduces (dynamically shrinks) the size of a specified file system.

The product provides nearly linear scaling in terms of NFS operations per second and total I/O throughput. Figure 1-1 depicts this scaling capability.


Figure 1-1    Example of near-linear performance scaling with SFS

When using 16-node clusters, extremely high throughput performance numbers can be obtained. This is due to the benefits of near linear SFS cluster scalability.
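The near-linear scaling described above can be sketched with a simple throughput model. The per-node operation rate and the per-node scaling efficiency used below are illustrative assumptions, not figures from this guide.

```python
# Illustrative model of near-linear NFS throughput scaling.
# per_node_ops and efficiency are assumed values for illustration only.
def aggregate_ops(nodes, per_node_ops=10000, efficiency=0.95):
    """The first node contributes its full rate; each additional node
    contributes a fixed fraction of a full node's rate, which keeps
    aggregate throughput close to, but below, the linear ideal."""
    return per_node_ops * (1 + efficiency * (nodes - 1))

print(aggregate_ops(1))    # 10000.0
print(aggregate_ops(16))   # 152500.0, about 95% of the 160000 linear ideal
```

Under these assumed values a 16-node cluster delivers roughly 15.25 times the throughput of a single node, which is the kind of near-linear curve Figure 1-1 depicts.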

High availability
SFS has an "always on" file service that provides zero interruption of file services for company-critical data. The loss of a single node, or even multiple nodes, does not interrupt I/O operations on the client tier. This is in stark contrast to the traditional NFS active/passive failover paradigm. The SFS architecture provides transparent failover for other key services such as NFS lock state, CIFS and FTP daemons, reporting, logging, and backup/restore operations. The console service that provides access to the centralized menu-driven interface automatically fails over to another node. The installation service is also highly available and can seamlessly recover if the initially installed node fails during the installation of the remaining nodes in the cluster. The use of Veritas Cluster Server technology and software within SFS is key to the ability of SFS to provide best-of-breed high availability, in addition to class-leading scale-out performance.

Consolidating and reducing costs of storage


The value of consolidating several independent islands of NFS or NAS appliances into fewer, larger shared pools has many cost benefits. A typical enterprise uses 30-40% of its storage. This low storage utilization rate results in excessive spending on new storage when there is more than adequate free space in the data center.


With SFS, you can group storage assets into fewer, larger shared pools. This increases the use of backend LUNs and overall storage utilization. SFS also has built-in, pre-configured heterogeneous storage tiering. This lets you use different types of storage in a primary and secondary tier configuration. Using simple policies, data can be transparently moved from the primary storage tier to the secondary tier. This is ideal when mixing drive types and architectures, such as high-speed SAS drives with cheaper storage such as SATA-based drives. Furthermore, data can be stored initially on the secondary tier and then promoted to the primary tier dynamically, based on a pattern of I/O. This creates an optimal scenario when you use Solid State Disks (SSDs), because there is often a significant difference between the amount of SSD storage available and the amount of other storage available, such as SATA drives. Data and files that are promoted to the primary tier are transferred back to the secondary tier in accordance with the configured access-time policy. All of this results in substantially increased efficiency, and it can save you money because you make better use of the storage you already have.

Enabling scale-out compute clusters and heterogeneous sharing of data


The trend toward scale-out, or grid, computing continues to gain pace. There are significant performance and cost advantages in moving applications away from large UNIX Symmetrical Multi-Processing (SMP) or mainframe environments and toward a farm of commodity servers running a distributed application. One of the key inhibitors to scale-out computing is the requirement to provide a shared storage infrastructure for the compute nodes, one that enables heterogeneous sharing and scales up as performance requires. SFS solves both of these issues by providing a highly scalable, shared storage platform at the storage tier and by facilitating heterogeneous sharing on the compute tier. SFS can provide the performance and availability you need for a large-scale NFS compute and storage tier. It provides enough throughput and seamless failover for this type of architecture, whether with a few dozen compute nodes or several hundred.


Chapter 2

Creating users based on roles


This chapter includes the following topics:

About user roles and privileges
About the naming requirements for adding new users
About using the SFS command-line interface
Logging in to the SFS CLI
About accessing the online man pages
About creating Master, System Administrator, and Storage Administrator users
About the support user
Displaying the command history

About user roles and privileges


Your privileges within Storage Foundation Scalable File Server (SFS) are based on what user role (Master, System Administrator, or Storage Administrator) you have been assigned. The following table provides an overview of the user roles within SFS.


Table 2-1    User roles within SFS

User role                Description
Master                   Masters are responsible for adding or deleting users,
                         displaying users, and managing passwords. Only Masters
                         can add or delete other administrators.
System Administrator     System Administrators are responsible for configuring
                         and maintaining the file system, NFS sharing,
                         networking, clustering, setting the current date and
                         time, and creating reports.
Storage Administrator    Storage Administrators are responsible for provisioning
                         storage and exporting and reviewing reports.

The Support account is reserved for Technical Support use only, and it cannot be created by administrators. See Using the support login on page 325.

About the naming requirements for adding new users


The following table provides the naming requirements for adding new SFS users.

Table 2-2    Naming requirements for adding new users

Guideline       Description
Starts with     A letter or an underscore (_). The name must be drawn from the
                following POSIX portable character set:
                ([A-Za-z_][A-Za-z0-9_-.]*[A-Za-z0-9_-.$]).
Length          Can be up to 31 characters. If a user name is longer than 31
                characters, you receive the error "Invalid user name."
Case            User names are not case-sensitive: username and USERNAME are
                the same. However, user-provided variables are case-sensitive.
Can contain     Hyphens (-) and underscores (_) are allowed.
Valid syntax    Valid user names include: a.b, a_b, ______-
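The naming rules in Table 2-2 can be expressed as a short validation sketch. This is an illustration based on the character set and the 31-character limit described above, not a utility shipped with SFS.

```python
import re

# Sketch of the Table 2-2 naming rules: the name starts with a letter
# or underscore, the remaining characters come from the POSIX portable
# set shown above, and the total length is at most 31 characters.
_NAME_RE = re.compile(r'^[A-Za-z_][A-Za-z0-9_.$-]{0,30}$')

def valid_username(name):
    return bool(_NAME_RE.match(name))

print(valid_username("a.b"))      # True, listed as valid syntax
print(valid_username("a_b"))      # True
print(valid_username("1abc"))     # False, must not start with a digit
print(valid_username("x" * 32))   # False, longer than 31 characters
```

A name such as `______-` from the valid-syntax row also passes, since it starts with an underscore and uses only allowed characters.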


See Creating Master, System Administrator, and Storage Administrator users on page 33.

About using the SFS command-line interface


You can enter SFS commands at the system console or from any host that can access SFS through a Secure Shell (SSH) session. SFS provides the following features to help you when you enter commands on the command line:

Command-line help, by typing a command and then a question mark (?)
Command-line manual (man) pages, by typing man and the name of the command you want to find

Table 2-3    Conventions used in the SFS online command-line man pages

Symbol           Description
| (pipe)         Indicates that you must choose one of the elements on either
                 side of the pipe.
[ ] (brackets)   Indicates that the element inside the brackets is optional.
{ } (braces)     Indicates that the element inside the braces is part of a
                 group.
<>               Indicates a variable for which you need to supply a value.
                 Variables are shown in italics in the man pages.

Logging in to the SFS CLI


When you first log in to the SFS CLI, use the default username/password of master/master. After you have logged in successfully, change your password. See To change a user's password on page 34. By default, the initial password for any user is the same as the username. For example, if you logged in as user1, your default password would also be user1. To use any of the CLI commands, first log in by using the user role you have been assigned. Then enter the correct mode. These two steps must be performed before you can use any of the commands.


To log in to the SFS CLI

Log in to SFS using the appropriate user role, System Admin, Storage Admin, or Master. See Logging in to the SFS CLI on page 25.

Enter the name of the mode you want to enter. For example, to enter the admin mode, you would enter the following:
admin

You can tell you are in the admin mode because you will see the following:
Admin>

The following tables describe all the available modes, the commands associated with each mode, and which roles can use each command.

Table 2-4    Admin mode commands

Admin mode commands    System Admin    Storage Admin    Master
passwd                 X               X                X
show                   X               X                X
supportuser                                             X
user                                                    X

Table 2-5    Backup mode commands

Backup mode commands   System Admin    Storage Admin    Master
ndmp                   X                                X
netbackup              X                                X
show                   X                                X
start                  X                                X
status                 X                                X
stop                   X                                X
virtual-ip             X                                X
virtual-name           X                                X

Table 2-6    CIFS mode commands

CIFS mode commands     System Admin    Storage Admin    Master
homedir                X                                X
local                  X                                X
server                 X                                X
set                    X                                X
share                  X                                X
show                   X                                X
split                  X                                X

Table 2-7    Cluster mode commands

Cluster mode commands  System Admin    Storage Admin    Master
add                    X                                X
delete                 X                                X
reboot                 X                                X
show                   X                                X
shutdown               X                                X

Table 2-8    FTP mode commands

FTP mode commands      System Admin    Storage Admin    Master
logupload              X                                X
server                 X                                X
session                X                                X
set                    X                                X
show                   X                                X

Table 2-9    History mode commands

History mode commands  System Admin    Storage Admin    Master
history                X               X                X

Table 2-10    Network mode commands

Network mode commands  System Admin    Storage Admin    Master
bond                   X                                X
dns                    X                                X
ip                     X                                X
ldap                   X                                X
nis                    X                                X
nsswitch               X                                X
ping                   X                                X
show                   X                                X
vlan                   X                                X

Table 2-11    NFS mode commands

NFS mode commands      System Admin    Storage Admin    Master
server                 X                                X
share                  X                                X
show fs                X                                X
stat                   X                                X

Table 2-12    Report mode commands

Report mode commands   System Admin    Storage Admin    Master
email                  X                                X
event                  X                                X
exportevents           X                                X
showevents             X                                X
snmp                   X                                X
syslog                 X                                X

Table 2-13    Storage mode commands

Storage mode commands  System Admin    Storage Admin    Master
disk grow              X               X                X
disk list              X               X                X
fencing                X               X                X
fs                     X               X                X
hba                    X               X                X
pool                   X               X                X
scanbus                X               X                X
snapshot               X               X                X
tier                   X               X                X

Table 2-14    Support mode commands

Support mode commands  System Admin    Storage Admin    Master
debuginfo                                               X
iostat                                                  X
pciexclusion                                            X
services                                                X
tethereal                                               X
top                                                     X
traceroute                                              X

Table 2-15    System mode commands

System mode commands   System Admin    Storage Admin    Master
clock                  X                                X
config                 X                                X
more                   X                                X
ntp                    X                                X
option                 X                                X
stat                   X                                X
swap                   X                                X

Table 2-16    Upgrade mode commands

Upgrade mode commands  System Admin    Storage Admin    Master
patch                  X                                X
show                   X                                X
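The mode tables above amount to a role-to-command lookup, which is how a script might decide whether a given role may run a command. The mapping below covers only a few rows of the Admin and Cluster mode tables and is an illustrative sketch, not product code.

```python
# Sketch of a role/mode permission check mirroring a few rows of the
# tables above. This mapping is an illustration, not product code.
MODE_ACCESS = {
    ("admin", "passwd"):      {"master", "system-admin", "storage-admin"},
    ("admin", "show"):        {"master", "system-admin", "storage-admin"},
    ("admin", "supportuser"): {"master"},
    ("admin", "user"):        {"master"},
    ("cluster", "add"):       {"master", "system-admin"},
    ("cluster", "delete"):    {"master", "system-admin"},
}

def can_run(role, mode, command):
    """Return True if the role may run the command in the given mode;
    unknown mode/command pairs are denied."""
    return role in MODE_ACCESS.get((mode, command), set())

print(can_run("master", "admin", "user"))          # True
print(can_run("storage-admin", "admin", "user"))   # False
```

Denying unknown pairs by default matches the tables' convention that an empty cell means the role has no access to the command.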

About accessing the online man pages


You access the online man pages by typing man name_of_command at the command line. The example shows the result of entering the Network> man ldap command.


Network> man ldap
NAME
    ldap - configure LDAP client for authentication
SYNOPSIS
    ldap enable
    ldap disable
    ldap show [users|groups|netgroups]
    ldap set {server|port|basedn|binddn|ssl|rootbinddn|users-basedn|
        groups-basedn|netgroups-basedn|password-hash} value
    ldap get {server|port|basedn|binddn|ssl|rootbinddn|
        users-basedn|groups-basedn|netgroups-basedn|password-hash}

You can also type a question mark (?) at the prompt for a list of all the commands that are available for the command mode that you are in. For example, if you are within the admin mode, if you type a question mark (?), you will see a list of the available commands for the admin mode.
sfs> admin
Entering admin mode...
sfs.Admin> ?
exit          --return to the previous menus
logout        --logout of the current CLI session
man           --display on-line reference manuals
passwd        --change the administrator password
show          --show the administrator details
supportuser   --enable or disable the support user
user          --add or delete an administrator

To exit the command mode, enter the following: exit. For example:
sfs.Admin> exit sfs>

To exit the system console, enter the following: logout. For example:
sfs> logout


About creating Master, System Administrator, and Storage Administrator users


The Admin> user commands add or delete a user, display user settings, and change a user's password.

Note: By default, the password of a new user is the same as the username.

Table 2-17    Creating users

Command       Definition
user add      Creates the different levels of administrator. You must have
              master privilege. A Master user has all permissions, including
              adding and deleting users. A Storage Administrator has access to
              only the storage commands and is responsible for upgrading the
              cluster and applying patches. A System Administrator is
              responsible for configuring the NFS server and exporting the
              file system, adding and deleting nodes in the cluster, and
              configuring other network parameters such as DNS and NIS.
              See To create a Master user on page 33.
passwd        Changes a password. Passwords should be eight characters or
              less. If you enter a password that exceeds eight characters, the
              password is truncated, and you need to specify the truncated
              password when re-entering it. For example, if you enter
              "elephants" as the password, the password is truncated to
              "elephant," and you must enter "elephant" instead of "elephants"
              for the system to accept your password. By default, the initial
              password for any user is the same as the username. For example,
              if you log in as user1, your default password is also user1.
              You are not prompted to supply the old password.
              See To change a user's password on page 34.
show          Displays a list of current users, or, if you specify a
              particular username, displays that username and its associated
              privilege.
              See To display a list of current users on page 34.
user delete   Deletes a user.
              See To delete a user from SFS on page 35.
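The eight-character truncation that the passwd command applies can be illustrated as follows. This models the documented behavior; it is not the product's implementation.

```python
# Sketch of the documented passwd behavior: passwords longer than
# eight characters are silently truncated, and the truncated form is
# what you must type from then on.
def effective_password(entered, limit=8):
    return entered[:limit]

print(effective_password("elephants"))  # 'elephant', as the guide describes
print(effective_password("short"))      # 'short', unchanged
```

The practical consequence is the one the table warns about: after setting "elephants", logging in requires "elephant".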


Creating Master, System Administrator, and Storage Administrator users


To create the different levels of administrator, you must have master privilege. To create a Master user

To create a Master user, enter the following:


Admin> user add username master

For example:
Admin> user add master1 master
Creating Master: master1
Success: User master1 created successfully

To create a System Administrator user

To create a System Administrator user, enter the following:


Admin> user add username system-admin

For example:
Admin> user add systemadmin1 system-admin
Creating System Admin: systemadmin1
Success: User systemadmin1 created successfully

To create a Storage Administrator user

To create a Storage Administrator user, enter the following:


Admin> user add username storage-admin

For example:
Admin> user add storageadmin1 storage-admin
Creating Storage Admin: storageadmin1
Success: User storageadmin1 created successfully


To change a user's password

To change the password for the current user, enter the following command:
Admin> passwd

You will be prompted to enter the new password for the current user.

To change the password for a user other than the current user, enter the following command:
Admin> passwd [username]

You will be prompted to enter the new password for the user. To display a list of current users

To display the current user, enter the following:


Admin> show [username]

To display a list of all the current users, enter the following:


Admin> show

For example:
Admin> show
List of Users
-------------
master
user1
user2

To display the details of the administrator with the username master, enter the following:
Admin> show master
Username   : master
Privileges : Master
Admin>


To delete a user from SFS

If you want to display the list of all the current users prior to deleting a user, enter the following:
Admin> show

To delete a user from SFS, enter the following:


Admin> user delete username

For example:
Admin> user delete user1
Deleting User: user1
Success: User user1 deleted successfully

About the support user


The supportuser commands are used to enable, disable, or view the status of the support user. Only an administrator logged in as master has the privilege to enable, disable, change the password, or check the status of the support user. You log in to the system console and enter the Admin> mode to access the commands. See About using the SFS command-line interface on page 25 for login instructions.

Table 2-18    Support user commands

Command                 Definition
supportuser enable      Enables the support user for the tracing and debugging
                        of any node. The enable command lets the support user
                        log in remotely.
                        See To enable the support user account on page 36.
supportuser password    Changes the support user password. The password can be
                        changed at any time.
                        See To change the support user password on page 36.
supportuser status      Checks the status of the support user (whether it is
                        enabled or disabled).
                        Note: You must have master privilege to use this
                        command.
                        See To check the support user status on page 37.
supportuser disable     Disables the support user without permanently removing
                        it from the system. By default, the support user is
                        disabled when SFS is installed.
                        See To disable the support user account on page 37.

Configuring the support user account


To enable the support user account

If you want to enable the support user, enter the following:


Admin> supportuser enable

For example:
Admin> supportuser enable
Enabling support user.
support user enabled.
Admin>

To change the support user password

If you want to change the support user password, enter the following:
Admin> supportuser password

For example:
Admin> supportuser password
Changing password for support.
New password:
Re-enter new password:
Password changed
Admin>


To check the support user status

If you want to check the status of the support user, enter the following:
Admin> supportuser status

For example:
Admin> supportuser status
support user status : Enabled
Admin>

To disable the support user account

If you want to disable the support user, enter the following:


Admin> supportuser disable

For example:
Admin> supportuser disable
Disabling support user.
support user disabled.
Admin>

Displaying the command history


The history command displays the commands that you have executed. You can also view commands executed by another user. You must be logged in to the system to view the command history. For login instructions, go to About using the SFS command-line interface.


To display command history

To display the command history, enter the following:


SFS> history [username] [number_of_lines]

username            Displays the command history for a particular user.
number_of_lines     Specifies the number of lines of history you want to view.

For example:
SFS> history master 7
Username   : master
Privileges : Master

Time              Status    Message                      Command
02-12-2009 11:09  Success   NFS> server status           (server status)
02-12-2009 11:10  Success   NFS> server start            (server start)
02-12-2009 11:19  Success   NFS> server stop             (server stop)
02-12-2009 11:28  Success   NFS> fs show                 (show fs)
02-12-2009 15:00  SUCCESS   Disk list stats completed    (disk list)
02-12-2009 15:31  Success   Network shows success        (show)
02-12-2009 15:49  Success   Network shows success        (show)
SFS>

The information displayed from using the history command is:


Time       Displays the time stamp as MM-DD-YYYY HH:MM.
Status     Displays the status of the command as Success, Error, or Warning.
Message    Displays the command description.
Command    Displays the actual commands that were executed by you or another
           user.
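Each line of history output carries the four fields described above. The following sketch parses one such line; the whitespace-and-parentheses layout it assumes matches the example output but is not a documented format contract.

```python
import re

# Parse one line of `history` output into its four documented fields:
# Time (MM-DD-YYYY HH:MM), Status, Message, and the trailing (command).
# The layout assumption here is illustrative, based on the example above.
_LINE_RE = re.compile(
    r'^(\d{2}-\d{2}-\d{4} \d{2}:\d{2})\s+(\S+)\s+(.*?)\s*\((.*?)\)\s*$'
)

def parse_history_line(line):
    m = _LINE_RE.match(line)
    if not m:
        return None
    time, status, message, command = m.groups()
    return {"time": time, "status": status,
            "message": message, "command": command.strip()}

rec = parse_history_line(
    "02-12-2009 11:09 Success NFS> server status (server status)")
print(rec["status"], rec["command"])   # Success server status
```

A parser like this could, for example, filter a user's history for Error or Warning entries before a support call.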

Chapter 3

Displaying and adding nodes to a cluster


This chapter includes the following topics:

About the cluster commands
Displaying the nodes in the cluster
About adding a new node to the cluster
Installing the SFS software onto a new node
Adding a node to the cluster
Deleting a node from the cluster
Shutting down the cluster nodes
Rebooting the nodes in the cluster

About the cluster commands


This chapter discusses the SFS cluster commands. You use these commands to add or delete nodes to your cluster. The cluster commands are defined in Table 3-1. To access the commands, log into the administrative console (for master, system-admin, or storage-admin) and enter Cluster> mode. For login instructions, go to About using the SFS command-line interface.


Table 3-1    Cluster mode commands

Commands               Definition
cluster> show          Displays the nodes in the SFS cluster, their states,
                       CPU load, and network load during the past 15 minutes.
                       See Displaying the nodes in the cluster on page 40.
network> ip addr add   Installs the SFS software onto the new node.
                       See Installing the SFS software onto a new node on
                       page 43.
cluster> add           Adds a new node to the SFS cluster.
                       See Adding a node to the cluster on page 44.
cluster> delete        Deletes a node from the SFS cluster.
                       See Deleting a node from the cluster on page 45.
cluster> shutdown      Shuts down one or all of the nodes in the SFS cluster.
                       See Shutting down the cluster nodes on page 47.
cluster> reboot        Reboots a single node or all of the nodes in the SFS
                       cluster. Use the node name that is displayed in the
                       show command.
                       See Rebooting the nodes in the cluster on page 47.

Displaying the nodes in the cluster


You can display all the nodes in the cluster, their states, CPU load, and network load during the past 15 minutes. If you use the currentload option, you can display the CPU and network loads collected over the next five seconds.


To display a list of nodes in the cluster

To display a list of nodes that are part of a cluster, and the systems that are available to add to the cluster, enter the following:
Cluster> show

For nodes already in the cluster, the following is displayed:


Node    State     CPU(15 min)  pubeth0(15 min)      pubeth1(15 min)
                  %            rx(MB/s)  tx(MB/s)   rx(MB/s)  tx(MB/s)
----    -----     -----------  --------  --------   --------  --------
sfs_1   RUNNING   1.35         0.00      0.00       0.00      0.00
sfs_2   RUNNING   1.96         0.00      0.00       0.00      0.00

For the nodes not yet added to the cluster, they are displayed with unique identifiers.
Node                                    State
----                                    -----
4dd5a565-de6c-4904-aa27-3645cf557119    INSTALLED 5.0SP2 (172.16.113.118)
bafd13c1-536a-411a-b3ab-3e3253006209    INSTALLING-Stage-4-of-4


To display the CPU and network loads collected from now to the next five seconds, enter the following:
Cluster> show [currentload]

Example output:

Node    State     CPU(5 sec)  pubeth0(5 sec)       pubeth1(5 sec)
                  %           rx(MB/s)  tx(MB/s)   rx(MB/s)  tx(MB/s)
----    -----     ----------  --------  --------   --------  --------
sfs_1   RUNNING   0.26        0.01      0.00       0.01      0.00
sfs_2   RUNNING   0.87        0.01      0.00       0.01      0.00
sfs_3   RUNNING   10.78       27.83     12.54      0.01      0.00

Node       Displays the node name if the node has already been added to the
           cluster, or the unique identifier for the node if it has not been
           added to the cluster.
           Example: node_1 or 35557d4c-6c05-4718-8691-a2224b621920
State      Displays the state of the node, or the installation state of the
           system along with an IP address of the system if it is installed.
           Example: INSTALLED (172.16.113.118), RUNNING, FAULTED, EXITED,
           LEAVING, UNKNOWN
CPU        Indicates the CPU load.
pubeth0    Indicates the network load for Public Interface 0.
pubeth1    Indicates the network load for Public Interface 1.
If a system is physically removed from the cluster, or if you power off the system, you will not see the unique identifier for the system, installation state, and IP address for the system when you issue the cluster> show


command. If you power the system back on, you will see the unique identifier for the system, the installation state, and the IP address for the system. You can then use the IP address to add the node back to the cluster. See About adding a new node to the cluster on page 43.

About adding a new node to the cluster


After you have installed the first node of the cluster, you need to complete two separate procedures to install additional nodes and add them to the cluster. Procedures to install and add additional nodes:

You first need to install the SFS software binaries on the node.
You then add the node to your existing cluster.

After the SFS software has been installed, the node enters the INSTALLED state. It can then be added to the cluster and become operational.

Note: Before proceeding, make sure that all of the nodes are physically connected to the private and public networks. This allows the software installation to run concurrently on each node. See the Veritas Storage Foundation Scalable File Server Installation Guide for more information.

Installing the SFS software onto a new node


To install the SFS software onto the new node

Log in to the master account through the SFS console and access the network mode. To log in to the SFS console:

Use ssh master@consoleipaddr where consoleipaddr is the console IP address. For the password, enter the default password for the master account, master. You can change the password later by using the Admin> passwd command.

If the nodes have not been preconfigured, you need to preconfigure them. To preconfigure nodes:


Obtain the IP address ranges, as described in the Veritas Storage Foundation Scalable File Server Installation Guide, for the public network interfaces of the nodes to be installed. Add each IP address using the following command:
Network> ip addr add ipaddr netmask type

IP is a protocol that allows addresses to be attached to an Ethernet interface. Each Ethernet interface must have at least one address to use the protocol. Several different addresses can be attached to one Ethernet interface. Specify the ipaddr and the netmask; type is the type of IP address (virtual or physical).

Power up each new node and press F12 to initiate a network boot. The SFS software is automatically installed on all of the nodes.

Enter Cluster> show to display the status of the node installation as it progresses.
Cluster> show

The following is an example of the status messages that appear.


INSTALLING (Stage 1 of 4: Installing Linux)
INSTALLING (Stage 2 of 4: Copying SFS installation sources)
INSTALLING (Stage 3 of 4: First Boot)
INSTALLING (Stage 4 of 4: Installing SFS)

Installed/Installing Nodes
Node                                  State
----                                  -----
4dd5a565-de6c-4904-aa27-3645cf557119  INSTALLED 5.0SP2 (172.16.113.118)

Adding a node to the cluster


After the SFS software is installed on a new node, the node is assigned a temporary IP address. The address is displayed in the State field in the output for Cluster> show. In the example in Installing the SFS software onto a new node, the temporary IP address is 172.16.113.118. The temporary IP address is only used to add the


node to the cluster. Only the nodes in the INSTALLED state can be added to the cluster.

Note: This command is not supported in a single-node cluster.

The coordinator disks must be visible on the newly added node as a prerequisite for I/O fencing to be configured successfully. Without the coordinator disks, I/O fencing does not load properly and the node cannot obtain cluster membership. For more information about I/O fencing, see About I/O fencing.

To add the new node to the cluster

1 Log in to SFS using the master user role.
2 Enter the cluster mode.
3 To add the new node to the cluster, enter the following:
Cluster> add nodeip

where nodeip is the IP address assigned to the INSTALLED node. For example:
Cluster> add 172.16.113.118
Checking ssh communication with 172.16.113.118 ...done
Configuring the new node .....done
Adding node to the cluster.........done
Node added to the cluster
New node's name is: sfs_1

Deleting a node from the cluster


This command deletes a node from the cluster. Use the nodename that is displayed in the Cluster> show command.

Note: This command is not supported in a single-node cluster.

If the deleted node was in the RUNNING state prior to deletion, that node is assigned an IP address that can be used to add the node back to the cluster. See About adding a new node to the cluster on page 43. If the deleted node was not in the RUNNING state prior to deletion, reboot the deleted node to assign it an IP address that can be used to add the node back


into the cluster. You must first reinstall the operating system and SFS software (using the PXE installation) onto the node before adding it back to the cluster. Refer to the Veritas Storage Foundation Scalable File Server Installation Guide. After the node is deleted from the cluster, that node's IP address is free for use by the cluster for new nodes. The state of each node can be one of the following:

RUNNING
FAULTED
EXITED
LEAVING
UNKNOWN

To delete a node from the cluster

To show the current state of all nodes in the cluster, enter the following:
Cluster> show

To delete a node from a cluster, enter the following:


Cluster> delete nodename

where nodename is the nodename that appeared in the listing from the show command. For example:
Cluster> delete sfs_1
Stopping Cluster processes on sfs_1 ...........done
deleting sfs_1's configuration from the cluster .....done
Node sfs_1 deleted from the cluster

If you try to delete a node that is unreachable, you will receive the following warning message:
This SFS node is not reachable, you have to re-install the SFS software via PXE boot after deleting it. Do you want to delete it now? (y/n)


Shutting down the cluster nodes


You can shut down a single node or all of the nodes in the cluster. Use the nodenames that are displayed by the Cluster> show command. To shut down a node or all the nodes in a cluster

To shut down a node, enter the following:


Cluster> shutdown nodename

nodename indicates the name of the node you want to shut down. For example:
Cluster> shutdown sfs_1
Stopping Cluster processes on sfs_1 .......done
Sent shutdown command to sfs_1

To shut down all of the nodes in the cluster, enter the following:
Cluster> shutdown all

Use all as the nodename if you want to shut down all of the nodes in the cluster. For example:
Cluster> shutdown all
Stopping Cluster processes on all ...done
Sent shutdown command to sfs_1
Sent shutdown command to sfs_2

Rebooting the nodes in the cluster


You can reboot a single node or all of the nodes in the cluster. Use the nodenames that are displayed by the Cluster> show command.


To reboot a node

To reboot a node, enter the following:


Cluster> reboot nodename

nodename indicates the name of the node you want to reboot. For example:
Cluster> reboot sfs_1
Stopping Cluster processes on sfs_1 .......done
Sent reboot command to sfs_1

To reboot all of the nodes in the cluster, enter the following:


Cluster> reboot all

Use all as the nodename if you want to reboot all of the nodes in the cluster. For example:
Cluster> reboot all
Stopping Cluster processes on all ...done
Sent reboot command to sfs_1
Sent reboot command to sfs_2

Chapter 4

Configuring SFS network settings


This chapter includes the following topics:

About network mode commands
Displaying the network configuration and statistics
About bonding Ethernet interfaces
About DNS
About IP commands
About configuring IP addresses
About configuring Ethernet interfaces
About configuring routing tables
About LDAP
Before configuring LDAP settings
About configuring LDAP server settings
About administering SFS cluster's LDAP client
About NIS
About NSS
About VLAN


About network mode commands


SFS network-mode commands let you specify and check the status of network parameters for the SFS cluster.

Note: Before you use SFS network mode commands, you must have a general understanding of IP addresses and networking. If you are not familiar with the terms or output, contact your Network Administrator for help.

As shown in Table 4-1, network mode commands are organized into functional groups or submodes. To access the commands, log in to your administrative console (master, system-admin, or storage-admin) and enter the Network> mode. For information on how to log in, see About using the SFS command-line interface.

Table 4-1    Network submodes

Network submode    Function

Bond    Creates a logical association between two or more Ethernet interfaces. See About bonding Ethernet interfaces on page 52.

DNS     Identifies enterprise DNS servers for SFS use. See About DNS on page 54.

IP      Manages the SFS cluster IP addresses. See About IP commands on page 58.

LDAP    Identifies the LDAP servers that SFS can use. See About LDAP on page 72.

NIS     Identifies the NIS server that SFS can use. See About NIS on page 81.

NSS     Provides a single configuration location to identify the services (such as NIS or LDAP) for network information such as hosts, groups, or passwords. See About NSS on page 84.

VLAN    Views, adds, or deletes VLAN interfaces. See Configuring VLAN on page 86.


Displaying the network configuration and statistics


You can use the Network> show command to display the current network configuration and related statistics of the cluster. To display the network configuration and statistics

To display the cluster's network configuration and statistics, enter the following:
Network> show
Interface Statistics
--------------------
sfs_1
-------------
Interfaces  MTU    Metric  RX-OK     RX-DROP  RX-ERR  RX-FRAME  TX-OK   TX-DROP  TX-ERR  TX-CAR  Flag
lo          16436  1       13766     0        0       0         13766   0        0       0       LRU
priveth0    1500   1       452390    0        0       0         953273  0        0       0       BMR
priveth1    1500   1       325940    0        0       0         506641  0        0       0       BMRU
pubeth0     1500   1       25806318  0        0       0         152817  0        0       0       BMRU
pubeth1     1500   1       25755262  0        0       0         673     0        0       0       BMRU

Routing Table
-------------
sfs_1
-------------
Destination  Gateway      Genmask        Flags  MSS  Window  irtt  Iface
172.27.75.0  0.0.0.0      255.255.255.0  U      0    0       0     priveth0
10.182.96.0  0.0.0.0      255.255.240.0  U      0    0       0     pubeth0
10.182.96.0  0.0.0.0      255.255.240.0  U      0    0       0     pubeth1
127.0.0.0    0.0.0.0      255.0.0.0      U      0    0       0     lo
0.0.0.0      10.182.96.1  0.0.0.0        UG     0    0       0     pubeth0

For definitions of the column headings in the Routing Table, see To display the routing tables of the nodes in the cluster.


About bonding Ethernet interfaces


Bond commands associate each set of two or more Ethernet interfaces with one IP address. This association improves network performance on each SFS cluster node by increasing the potential bandwidth available on an IP address beyond the limits of a single Ethernet interface and by providing redundancy for higher availability. For example, you can bond two 1-gigabit Ethernet interfaces together to provide up to 2 gigabits per second of throughput to a single IP address. Moreover, if one of the interfaces fails, communication continues using the remaining Ethernet interface.

Bond commands let you create, remove, and display a cluster's bonds. When you create or delete a bond, it affects the corresponding Ethernet interfaces on the SFS cluster nodes. Every node in the cluster has pubeth0 and pubeth1 interfaces. You can only bond public Ethernet interfaces.

Note: When you create or remove a bond, all of the SSH connections with the Ethernet interfaces may be dropped. When the operation is complete, you must restore the SSH connections.

Table 4-2    Bond commands

Command    Definition

show      Displays a bond and the algorithm used to distribute traffic among the bonded interfaces. See To display a bond on page 53.

create    Creates a bond between sets of two or more correspondingly named Ethernet interfaces on all SFS cluster nodes. See To create a bond on page 53.

remove    Removes a bond between two or more correspondingly named Ethernet interfaces on all SFS cluster nodes. The bond show command displays the names. See To remove a bond on page 54.


Bonding Ethernet interfaces


To display a bond

To display a bond and the algorithm used to distribute traffic among the bonded interfaces, enter the following:
Network> bond show

In this example, DEVICES refers to Ethernet interfaces.


BONDNAME -------bond0 MODE ----1 DEVICES ------pubeth1 pubeth2

To create a bond

To create a bond between sets of two or more Ethernet interfaces on all SFS cluster nodes, enter the following:
Network> bond create interfacelist mode

interfacelist    Specifies a comma-separated list of public Ethernet interfaces to bond. Bonds are created on correspondingly named sets of Ethernet interfaces on each cluster node.

mode             Specifies how the bonded Ethernet interfaces divide the traffic.

For example:

Network> bond create pubeth1,pubeth2 broadcast
100% [#] Bonding interfaces. Please wait...
bond created, the bond name is: bond0

You can specify a mode either as a number or a character string, as follows:

0  balance-rr      This mode provides fault tolerance and load balancing. It transmits packets in order from the first available slave through the last.

1  active-backup   Only one slave in the bond is active. If the active slave fails, a different slave becomes active. To avoid confusing the switch, the bond's MAC address is externally visible on only one port (network adapter).

2  balance-xor     Transmits based on the selected transmit hash policy. The default policy is a simple XOR of the source and destination MAC addresses. You can use the xmit_hash_policy option to select alternate transmit policies. This mode provides load balancing and fault tolerance.

3  broadcast       Transmits everything on all slave interfaces and provides fault tolerance.

4  802.3ad         Creates aggregation groups with the same speed and duplex settings. It uses all slaves in the active aggregator based on the 802.3ad specification.

5  balance-tlb     Provides channel bonding that does not require special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. The current slave receives incoming traffic. If the receiving slave fails, another slave takes over its MAC address.

6  balance-alb     Includes balance-tlb plus Receive Load Balancing (RLB) for IPv4 traffic. This mode does not require any special switch support. The receive load balancing is achieved by ARP negotiation.
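Because the CLI accepts the mode as either the number or the name, a quick-reference mapping can help when reading bond show output. The sketch below pairs numbers with names following the standard Linux bonding driver numbering (an assumption for this illustration; confirm against your SFS release):

```python
# Bond mode numbers and names, per the standard Linux bonding driver.
BOND_MODES = {
    0: "balance-rr",
    1: "active-backup",
    2: "balance-xor",
    3: "broadcast",
    4: "802.3ad",
    5: "balance-tlb",
    6: "balance-alb",
}

def normalize_mode(mode):
    """Return the canonical mode name for a number or a name string."""
    if isinstance(mode, int):
        return BOND_MODES[mode]
    if mode in BOND_MODES.values():
        return mode
    raise ValueError("unknown bond mode: %r" % (mode,))
```

For example, a bond created with mode 3 and one created with the string broadcast are the same configuration.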

To remove a bond

To remove a bond from all of the nodes in a cluster, enter the following:
Network> bond remove bondname

where bondname is the name of the bond configuration. For example:


Network> bond remove bond0
100% [#] Removing Bond bond0. Please wait...
bond removed : bond0

About DNS
The Domain Name System (DNS) service translates between numeric IP addresses and their associated host names. The DNS commands let you view or change an SFS cluster's DNS settings. You can configure an SFS cluster's DNS lookup service to use up to three DNS servers.


You must enable the SFS cluster's DNS name service before you specify the DNS servers it is to use for lookups.

Table 4-3    DNS commands

Command    Definition

dns show    Displays the current settings of an SFS cluster's DNS lookup service. See To display DNS settings on page 56.

dns enable    Enables SFS to perform DNS lookups. When DNS is enabled, the SFS cluster's DNS service uses the data center's DNS server(s) to determine the IP addresses of network entities such as SNMP, NTP, LDAP, and NIS servers with which the cluster must communicate. See To enable DNS settings on page 56.

dns disable    Disables DNS lookups. If the DNS services are already disabled, the command does not respond. See To disable DNS settings on page 56.

dns set nameservers    Specifies the IP addresses of DNS name servers to be used by the SFS DNS lookup service. The order of the IP addresses is the order in which the name servers are to be used. See To specify IP addresses of DNS name servers on page 57.

dns clear nameservers    Removes the IP addresses of DNS name servers from the cluster's DNS lookup service database. See To remove name servers list used by DNS on page 57.

dns set domainname    Sets the domain name that the SFS cluster is in. For the required information, contact your Network Administrator. This command clears any previously set domain name. Before you use this procedure, you must enable the DNS server. See To set the domain name for the DNS server on page 57.

dns clear domainname    Removes the DNS domain name. See To remove domain name used by DNS on page 58.


Configuring DNS settings


To display DNS settings

To display DNS settings, enter the following:


Network> dns show
DNS Status : Disabled
nameserver : 172.16.113.118
domain : symantec.com

To enable DNS settings

To enable DNS settings to allow SFS hosts to do lookups and verify the results, enter the following commands:
Network> dns enable
Network>
Network> dns show
DNS Status : Enabled
domain : cluster1.com
nameserver : 10.216.50.132

To disable DNS settings

To disable DNS settings, enter the following:


Network> dns disable
Network>
Network> dns show
DNS Status : Disabled
Old Settings
------------
domain : cluster1.com
nameserver : 10.216.50.132


To specify IP addresses of DNS name servers

To specify the IP addresses of DNS name servers to be used by the SFS DNS service and verify the results, enter the following commands:
Network> dns set nameservers nameserver1 [nameserver2] [nameserver3]

For example:
Network> dns set nameservers 10.216.50.199 10.216.50.200
Network>
Network> dns show
DNS Status : Enabled
nameserver : 10.216.50.199
nameserver : 10.216.50.200
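Conceptually, the domain name and the ordered name-server list correspond to ordinary resolver settings. The sketch below renders equivalent resolv.conf-style lines and enforces the three-server limit noted earlier (an illustration only — how SFS manages its resolver configuration internally is not documented here):

```python
def resolv_conf(nameservers, domainname=None):
    """Render resolver settings in resolv.conf form. SFS supports up to
    three DNS name servers, tried in the order listed."""
    if len(nameservers) > 3:
        raise ValueError("at most three DNS name servers are supported")
    lines = []
    if domainname:
        lines.append("domain %s" % domainname)
    lines.extend("nameserver %s" % ns for ns in nameservers)
    return "\n".join(lines)
```

The order of the `nameserver` lines matters: the first server is queried first, and later servers are consulted only if earlier ones fail.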

To remove name servers list used by DNS

To remove the name servers list used by DNS and verify the results, enter the following commands:
Network> dns clear nameservers
Network>
Network> dns show
DNS Status : Enabled

To set the domain name for the DNS server

To set the domain name for the DNS server, enter the following:
Network> dns set domainname domainname

where domainname is the domain name for the DNS server. For example:
Network> dns set domainname example.com
Network>
Network> dns show
DNS Status : Enabled
domain : example.com
nameserver : 10.216.50.132


To remove domain name used by DNS

To remove the domain name used by DNS, enter the following:


Network> dns clear domainname
Network>
Network> dns show
DNS Status : Enabled
nameserver : 10.216.50.132

About IP commands
Internet Protocol (IP) commands configure your routing tables, Ethernet interfaces, and IP addresses, and display the settings. The following sections describe how to configure the IP commands:

About configuring IP addresses
About configuring Ethernet interfaces
Configuring routing tables

About configuring IP addresses


Each Ethernet interface must have a physical IP address associated with it. These are usually supplied when the SFS software is installed. Each Ethernet interface also requires one or more virtual IP addresses to communicate with other cluster nodes and with the rest of the enterprise network.

Note: The operating system requires physical IP addresses. You should only add the physical IP addresses when the cluster's hardware configuration changes.

Table 4-4 lists the commands you can use to configure your IP addresses.

Table 4-4    IP commands

Command    Definition

ip addr show    Displays the IP addresses, the devices (Ethernet interfaces) they are assigned to, and their attributes.
    Note: Any Ethernet interfaces excluded during the initial SFS installation will not be displayed.
    See To display all the IP addresses for the cluster on page 60.

ip addr add    Adds a virtual or physical IP address to the SFS cluster. SFS assigns the newly added IP address to an Ethernet interface on one of its nodes. Virtual IP addresses are used for communication among cluster nodes and with clients on the enterprise network. By default, this command does not use VLAN Ethernet interfaces unless they are specified in the device option. SFS determines the node to which the IP address will be assigned. After you add a virtual IP address, it takes a few seconds for it to come online. If you enter an IP address that is already used in the cluster, an error message is displayed. You cannot enter an invalid IP address (one that is not four bytes or has a byte value greater than 255).
    Note: An IP address that does not go online may indicate a problem with the SFS cluster. For help, see To display the state of the services or contact Symantec Technical Support.
    See To add an IP address to a cluster on page 62.

ip addr online    Brings an IP address online on any running node in the cluster. The IP address does not need to be in the offline mode for this command to work. You can use this command to switch the IP address from an online node to another specified node. You can change an IP address to the online mode if it is in the OFFLINE/FAULTED state. This command also displays any faults for the IP address on the specified node. If the command succeeds, you do not receive a response at the prompt.
    Note: An IP address that does not go online may indicate a problem with the SFS cluster. For help, see To display the state of the services or contact Symantec Technical Support.
    See To change an IP address to the online mode on a specified node on page 63.

ip addr modify    Modifies an IP protocol address used by the cluster. You can change both the physical IP addresses and virtual IP addresses. If you change a virtual IP address, the NFS connection on oldipaddr is terminated.
    See To modify an IP address on page 64.

ip addr del    Deletes an IP protocol address from the cluster. You can only delete physical IP addresses if they are not being used by any interface of the cluster. You can also delete virtual IP addresses, except for the console IP address. When you add or delete an IP address from the cluster, the cluster automatically evens out the number of virtual IP addresses on each node.
    See To remove an IP address from the cluster on page 64.

Configuring IP addresses
To configure your IP addresses, use the following procedures. To display all the IP addresses for the cluster

To display all of a cluster's IP addresses, enter the following:


Network> ip addr show
IP              Netmask        Device   Node   Type      Status
--              -------        ------   ----   ----      ------
10.182.107.53   255.255.240.0  pubeth0  sfs_1  Physical
10.182.107.54   255.255.240.0  pubeth1  sfs_1  Physical
10.182.107.55   255.255.240.0  pubeth0  sfs_2  Physical
10.182.107.56   255.255.240.0  pubeth1  sfs_2  Physical
10.182.107.65   255.255.240.0  pubeth0  sfs_1  Virtual   ONLINE (Con IP)
10.182.107.201  255.255.240.0  pubeth0  sfs_2  Virtual   ONLINE
10.182.107.202  255.255.240.0  pubeth0  sfs_1  Virtual   ONLINE
10.182.107.203  255.255.240.0  pubeth1  sfs_2  Virtual   ONLINE
10.182.107.204  255.255.240.0  pubeth1  sfs_1  Virtual   ONLINE

The output headings are:

IP         Displays the IP addresses for the cluster.
Netmask    Displays the netmask for the IP address.
Device     Displays the names of the Ethernet interfaces for the IP address.
Node       Displays the node names associated with the interface.
Type       Displays the type of the IP address: physical or virtual.
Status     Displays the status of the IP addresses:
           ONLINE
           ONLINE (console IP)
           OFFLINE
           FAULTED
           A virtual IP can be in the FAULTED state if it is already being used. It can also be in the FAULTED state if the corresponding device is not working on all nodes in the cluster (for example, a disconnected cable).
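If you script against this output, the whitespace-separated columns can be parsed as follows. This is an illustrative sketch only: it assumes the column layout shown above and is not an SFS-provided tool.

```python
def parse_ip_addr_show(text):
    """Parse 'ip addr show'-style output into a list of dicts.
    Assumes two header lines followed by whitespace-separated rows;
    the Status column may be absent for physical addresses."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    rows = []
    for line in lines[2:]:  # skip the heading and underline rows
        parts = line.split(None, 5)
        ip, netmask, device, node, addr_type = parts[:5]
        status = parts[5] if len(parts) > 5 else ""
        rows.append({"ip": ip, "netmask": netmask, "device": device,
                     "node": node, "type": addr_type, "status": status})
    return rows
```

Splitting with a maximum of five splits keeps multi-word statuses such as "ONLINE (Con IP)" intact in the status field.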


To add an IP address to a cluster

To add an IP address to a cluster, enter the following:


Network> ip addr add ipaddr netmask type [device]

ipaddr     Specifies the IP address to add to the cluster. Do not use physical IP addresses to access the SFS cluster: in case of a failure (a node failure, an Ethernet interface failure, or a storage failure), physical IP addresses cannot move between nodes.
netmask    Specifies the netmask for the IP address.
type       Specifies the IP type, either virtual or physical.
device     Only use this option if you entered virtual for the type.

For example, to add a virtual IP address on a normal device, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual pubeth0
SFS ip addr Success V-288-0 ip addr add successful.
Network>

For example, to add a virtual IP address on a bond device, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual bond0
SFS ip addr Success V-288-0 ip addr add successful.
Network>

For example, to add a virtual IP address on a VLAN device created over a normal device with VLAN ID 3, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual pubeth0.3
SFS ip addr Success V-288-0 ip addr add successful.
Network>

For example, to add a virtual IP address on a VLAN device created over a bond device with VLAN ID 3, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual bond0.3
SFS ip addr Success V-288-0 ip addr add successful.
Network>


To change an IP address to the online mode on a specified node

To change an IP address to the online mode on a specified node, enter the following:
Network> ip addr online ipaddr nodename

ipaddr      Specifies the IP address that needs to be brought online.
nodename    Specifies the node on which the IP address needs to be brought online. If you do not want to enter a specific nodename, enter any with the IP address.

For example:
Network> ip addr online 10.10.10.15 node5_2
Network> ip addr show
IP              Netmask        Device   Node     Type      Status
--              -------        ------   ----     ----      ------
10.216.114.212  255.255.248.0  pubeth0  node5_1  Physical
10.216.114.213  255.255.248.0  pubeth1  node5_1  Physical
10.216.114.214  255.255.248.0  pubeth0  node5_2  Physical
10.216.114.215  255.255.248.0  pubeth1  node5_2  Physical
10.216.114.217  255.255.248.0  pubeth0  node5_1  Virtual   ONLINE (Con IP)
10.10.10.10     255.255.248.0  pubeth0  node5_1  Virtual   ONLINE
10.10.10.11     255.255.248.0  pubeth1  node5_1  Virtual   ONLINE
10.10.10.12     255.255.248.0  pubeth0  node5_2  Virtual   ONLINE
10.10.10.13     255.255.248.0  pubeth1  node5_2  Virtual   ONLINE
10.10.10.15     255.255.248.0  pubeth0  node5_2  Virtual   ONLINE


To modify an IP address

To modify an IP address, enter the following:


Network> ip addr modify oldipaddr newipaddr netmask

oldipaddr    Specifies the old IP address to be modified.
newipaddr    Specifies what the new IP address will be.
netmask      Specifies the netmask for the new IP address.

A valid netmask is a contiguous run of "1" bits followed by "0" bits in bitwise form (for example, 255.255.240.0). If the specified oldipaddr is not assigned to the cluster, an error message is displayed. If you enter an invalid IP address (one that is not four bytes or has a byte value greater than 255), an error message is displayed. If the new IP address is already being used, an error message is displayed. For example:
Network> ip addr modify 10.10.10.15 10.10.10.16 255.255.240.0
SFS ip addr Success V-288-0 ip addr modify successful.
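The address and netmask checks described above can be sketched as follows. This is an illustration of the validation rules (four bytes, each 0-255; contiguous netmask bits), not the actual SFS implementation:

```python
def valid_ipv4(addr):
    """True if addr is a dotted quad of four bytes, each 0-255."""
    parts = addr.split(".")
    if len(parts) != 4:
        return False
    try:
        return all(0 <= int(p) <= 255 for p in parts)
    except ValueError:
        return False

def valid_netmask(mask):
    """True if mask is a contiguous run of 1 bits followed by 0 bits."""
    if not valid_ipv4(mask):
        return False
    bits = 0
    for p in mask.split("."):
        bits = (bits << 8) | int(p)
    # For a contiguous mask, the inverted value must be 2^k - 1.
    inv = ~bits & 0xFFFFFFFF
    return (inv & (inv + 1)) == 0
```

For example, 255.255.240.0 passes the contiguity test, while 255.0.255.0 does not.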

To remove an IP address from the cluster

To remove an IP address from the cluster, enter the following:


Network> ip addr del ipaddr

where ipaddr is the IP address to remove from the cluster. For example:
Network> ip addr del 10.10.10.15
SFS ip addr Success V-288-0 ip addr del successful.
Network>

About configuring Ethernet interfaces


For the public Ethernet interfaces (pubeth0 and pubeth1), you can display and change whether a link is up or down, and the Ethernet interface's Maximum Transmission Unit (MTU) value.

Configuring SFS network settings About configuring Ethernet interfaces

65

Table 4-5    Ethernet interface commands

Command    Definition

ip link show    Displays each Ethernet interface's (device's) status, whether it is connected on each node in the cluster, the speed, and the MTU.
    Note: Any Ethernet interfaces excluded during the initial SFS installation will not be displayed.
    See To display current Ethernet interfaces and states on page 65.

ip link set    Changes the network Ethernet interface's attributes or states.
    See To change an Ethernet interface on page 66.

Configuring Ethernet interfaces


To display current Ethernet interfaces and states

To display current configurations, enter the following:


Network> ip link show [nodename] [device]

nodename    Specifies which node of the cluster to display the attributes for. Enter all to display all IP links.
device      Specifies which Ethernet interface on the node to display the attributes for.

For example:
Network> ip link show sfs_1 pubeth0
Nodename  Device   Status  MTU   Detect  Speed
--------  ------   ------  ---   ------  -----
sfs_1     pubeth0  UP      1500  yes     100Mb/s

To display all configurations, enter the following:


Nodename -------sfs_1 sfs_1 sfs_2 sfs_2 Device Status ------ -----pubeth0 UP pubeth1 UP pubeth0 UP pubeth1 UP MTU Detect --- -----1500 yes 1500 yes 1500 yes 1500 yes Speed -----100Mb/s 100Mb/s 100Mb/s 100Mb/s


To change an Ethernet interface

To change an Ethernet interface's configuration, enter the following:


Network> ip link set nodename device operation [argument]

nodename     Specifies which node of the cluster to configure. If the node specified is not part of the cluster, an error message is displayed. To configure all nodes at once, use the all option in the nodename field.
device       Specifies the Ethernet interface to configure. If you enter an Ethernet interface that cannot be configured, an error message is displayed.
operation    Enter one of the following operations:
             up - Brings the Ethernet interface online.
             down - Brings the Ethernet interface offline.
             mtu MTU - Changes the Ethernet interface's Maximum Transmission Unit (MTU) to the value that is specified in the argument field.
             detect - Displays whether the Ethernet interface is physically connected or not.
             speed - Displays the device speed.
argument     The argument field is used only when you enter mtu in the operation field. It specifies what the MTU of the specified Ethernet interface on the specified node should be changed to. The MTU value must be an unsigned integer between 46 and 9216. Setting an incorrect MTU value causes the console IP to become unavailable. If you enter the argument field but do not enter mtu in the operation field, the argument is ignored.

For example:
Network> ip link set all pubeth0 mtu 1600
sfs_1 : mtu updated on pubeth0
sfs_2 : mtu updated on pubeth0
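The MTU argument rule described above (an unsigned integer between 46 and 9216) can be sketched as a guard function. This is an illustration of the documented range check, not SFS code:

```python
def check_mtu(value):
    """Validate an MTU argument: an unsigned integer in [46, 9216]."""
    try:
        mtu = int(value)
    except (TypeError, ValueError):
        raise ValueError("MTU must be an unsigned integer")
    if not 46 <= mtu <= 9216:
        raise ValueError("MTU must be between 46 and 9216")
    return mtu
```

Checking the value before running ip link set avoids the failure mode noted above, where an incorrect MTU makes the console IP unavailable.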


Network> ip link show
Nodename  Device   Status  MTU   Detect  Speed
--------  ------   ------  ---   ------  -----
sfs_1     pubeth0  UP      1600  yes     100Mb/s
sfs_1     pubeth1  UP      1500  yes     100Mb/s
sfs_2     pubeth0  UP      1600  yes     100Mb/s
sfs_2     pubeth1  UP      1500  yes     100Mb/s

About configuring routing tables


Sometimes an SFS cluster must communicate with network services (for example, LDAP) using specific gateways in the public network. In these cases, you must define routing table entries. These entries consist of the following:

The target network node's IP address and accompanying netmask.
The gateway's IP address.
Optionally, a specific Ethernet interface via which to communicate with the target. This is useful, for example, if the demands of multiple remote clients are likely to exceed a single gateway's throughput capacity.

You add or remove routing table entries using the ip route command in the Network> mode. Table 4-6 lists the commands used to configure the routing tables of the nodes in the cluster.

Table 4-6    Routing table commands

Command    Definition

route show    Displays the routing table of the nodes in the cluster. You can enter a specific nodename or use all to display the routing tables for all nodes in the cluster.
    See To display the routing tables of the nodes in the cluster on page 69.

route add    Adds a new route for the cluster. The routing table contains information about paths to other networked nodes. You can make routing table changes on each node of the cluster. Use all for the nodename to add the route to all of the nodes in the cluster. Use a netmask value of 255.255.255.255 to add a host route to ipaddr. Use a value of 0.0.0.0 for the gateway to add a route that does not use any gateway. The dev device is an optional argument. Use any of the public Ethernet interfaces for the device (pubeth0, pubeth1, or any).
    See To add to the route table on page 70.

route del    Deletes a route used by the cluster. Use all for nodename to delete the route from all of the nodes in the cluster. The combination of ipaddr and netmask specifies the network or host for which the route is deleted. Use a netmask value of 255.255.255.255 to delete a host route to ipaddr.
    See To delete route entries from the routing tables of nodes in the cluster on page 72.

Configuring routing tables


To display the routing tables of the nodes in the cluster

To display the routing tables of the nodes in the cluster, enter the following:
Network> ip route show [nodename]

where nodename is the node whose routing tables you want to display. To see the routing table for all of the nodes in the cluster, enter all. For example:
Network> ip route show all
sfs_1
-------------
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
172.27.75.0     0.0.0.0         255.255.255.0   U     0   0      0    priveth0
10.182.96.0     0.0.0.0         255.255.240.0   U     0   0      0    pubeth0
10.182.96.0     0.0.0.0         255.255.240.0   U     0   0      0    pubeth1
127.0.0.0       0.0.0.0         255.0.0.0       U     0   0      0    lo
0.0.0.0         10.182.96.1     0.0.0.0         UG    0   0      0    pubeth0

sfs_2
-------------
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
172.27.75.0     0.0.0.0         255.255.255.0   U     0   0      0    priveth0
10.182.96.0     0.0.0.0         255.255.240.0   U     0   0      0    pubeth0
10.182.96.0     0.0.0.0         255.255.240.0   U     0   0      0    pubeth1
127.0.0.0       0.0.0.0         255.0.0.0       U     0   0      0    lo
0.0.0.0         10.182.96.1     0.0.0.0         UG    0   0      0    pubeth0

Destination
    Displays the destination network or destination host for which the
    route is defined.
Gateway
    Displays a network node equipped for interfacing with another network.
Genmask
    Displays the netmask.

Flags
    The flags are as follows:
    U - Route is up
    H - Target is a host
    G - Use gateway
MSS
    Displays the maximum segment size. The default is 0. You cannot modify
    this attribute.
Window
    Displays the maximum amount of data the system accepts in a single
    burst from the remote host. The default is 0. You cannot modify this
    attribute.
irtt
    Displays the initial round trip time with which TCP connections start.
    The default is 0. You cannot modify this attribute.
Iface
    Displays the interface. On UNIX systems, the device name lo refers to
    the loopback interface.

To add to the route table

To add a route entry to the routing table of nodes in the cluster, enter the following:
Network> ip route add nodename ipaddr netmask via gateway [dev device]

nodename
    Specifies the node to whose routing table the route is to be added. To
    add a route path to all the nodes, use all in the nodename field. If
    you enter a node that is not a part of the cluster, an error message
    is displayed.
ipaddr
    Specifies the destination IP address. If you enter an invalid IP
    address, a message notifies you before you fill in other fields.
netmask
    Specifies the netmask associated with the IP address entered in the
    ipaddr field. Use a netmask value of 255.255.255.255 to add a host
    route to ipaddr.
via
    A required keyword. You must type in the word via.

gateway
    Specifies the gateway IP address used for the route. If you enter an
    invalid gateway IP address, an error message is displayed. To add a
    route that does not use a gateway, enter a value of 0.0.0.0.
dev
    Specifies the route device option. You must type in the word dev.
device
    Specifies which Ethernet interface on the node the route path is added
    to. This variable is optional. You can specify the following values:

    any - default
    pubeth0 - public Ethernet interface
    pubeth1 - public Ethernet interface

    The device field is required only when you specify dev. If you omit
    the dev and device fields, SFS uses a default Ethernet interface.

For example:
Network> ip route add sfs_1 10.10.10.10 255.255.255.255 via 0.0.0.0 dev pubeth0
sfs_1: Route added successfully

To delete route entries from the routing tables of nodes in the cluster

To delete route entries from the routing tables of nodes in the cluster, enter the following:
Network> ip route del nodename ipaddr netmask

nodename
    Specifies the node from whose routing table the route entry is
    deleted. To delete the route entry from all nodes, use the all option
    in this field.
ipaddr
    Specifies the destination IP address of the route entry to be deleted.
    If you enter an invalid IP address, a message notifies you before you
    enter other fields.
netmask
    Specifies the netmask associated with the IP address entered in the
    ipaddr field. Use a value of 255.255.255.255 to delete a host route to
    ipaddr.

For example:
Network> ip route del sfs_1 10.216.128.0 255.255.255.255
sfs_1: Route deleted successfully

About LDAP
The Lightweight Directory Access Protocol (LDAP) is the protocol used to communicate with LDAP servers. The LDAP servers are the entities that perform the service. In SFS the most common use of LDAP is user authentication. For sites that use an LDAP server for access or authentication, SFS provides a simple LDAP client configuration interface.

Before configuring LDAP settings


Before you configure SFS LDAP settings, obtain the following LDAP configuration information from your system administrator:

- IP address or host name of the LDAP server. You also need the port number of the LDAP server.
- Base (or root) distinguished name (DN), for example, cn=employees,c=us. LDAP database searches start here.

- Bind distinguished name (DN) and password, for example, ou=engineering,c=us. This allows read access to portions of the LDAP database to search for information.
- Base DN for users, for example, ou=users,dc=com. This allows access to the LDAP directory to search for and authenticate users.
- Base DN for groups, for example, ou=groups,dc=com. This allows access to the LDAP database, to search for groups.
- Root bind DN and password. This allows write access to the LDAP database, to modify information, such as changing a user's password.
- Secure Sockets Layer (SSL). Configures an SFS cluster to use the Secure Sockets Layer (SSL) protocol to communicate with the LDAP server.
- Password hash algorithm, for example md5, if a specific password encryption method is used with your LDAP server.
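Each of these names is a distinguished name (DN): a comma-separated chain of relative DNs (RDNs) read from the entry up toward the root of the directory tree. The helper below is a simplified sketch (it ignores escaped commas) showing how a DN such as ou=users,dc=example,dc=com decomposes, and how a users or groups base DN typically sits underneath the base DN:

```python
def parse_dn(dn):
    """Split a DN into (attribute, value) RDN pairs. Naive: ignores escaped commas."""
    return [tuple(rdn.strip().split("=", 1)) for rdn in dn.split(",")]

def is_under(child_dn, base_dn):
    """True if child_dn lies at or below base_dn in the directory tree."""
    child, base = parse_dn(child_dn), parse_dn(base_dn)
    # A child DN ends with the full RDN chain of its base DN.
    return len(child) >= len(base) and child[len(child) - len(base):] == base
```

For example, ou=users,dc=example,dc=com is under dc=example,dc=com, so a search rooted at the base DN also covers the users subtree.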

The following sections describe how to configure LDAP:


- Configuring LDAP server settings
- Administering the SFS cluster's LDAP client

About configuring LDAP server settings


Table 4-7 lists the LDAP commands used to configure the LDAP server settings.

Table 4-7        LDAP commands

Command          Definition

set basedn
    Sets the base DN value for the LDAP server.
    Note: Setting the base DN for the LDAP server is required.
    See To set the base DN for the LDAP server on page 75.

set server
    Sets the hostname or IP address for the LDAP server.
    See To set the LDAP server hostname or IP address on page 76.

set port
    Sets the port number for the LDAP server.
    See To set the LDAP server port number on page 76.

Table 4-7        LDAP commands (continued)

Command          Definition

set ssl
    Configures an SFS cluster to use the Secure Sockets Layer (SSL)
    protocol to communicate with the LDAP server. If your LDAP server does
    not use SSL for authentication, set this value to off (the default
    value). Consult your system administrator for confirmation. If your
    LDAP server supports SSL, you must set SSL to on. This setting is
    mandatory. The certificates that are required for SSL are
    auto-negotiated with the LDAP server when the session is established.
    See To set SFS to use LDAP over SSL on page 76.

set binddn
    Sets the bind Distinguished Name (DN) and its password for the LDAP
    server. This DN is used to bind with the LDAP server for read access.
    For LDAP authentication, most attributes need read access.
    Note: Use the LDAP server password. Contact your Network Administrator
    for assistance.
    See To set the bind DN for the LDAP server on page 77.

set rootbinddn
    Sets the LDAP root bind DN and its password. This DN is used to bind
    with the LDAP server for write access to the LDAP directory. This
    setting is not required for authentication. To change some attributes
    of an LDAP entry, the root bind DN is required. For example, if a root
    user wants to change a user's password, the root user must have
    administrative privileges to write to the LDAP directory.
    Note: Use the LDAP server password. Contact your Network Administrator
    for assistance.
    See To set the root bind DN for the LDAP server on page 77.

set users-basedn, set groups-basedn, set netgroups-basedn
    Sets the LDAP users, groups, and netgroups base Distinguished Name
    (DN). PAM/NSS uses this DN to search LDAP groups.
    Note: You must set the LDAP users, groups, and netgroups base DN.
    See To set the LDAP users, groups, or netgroups base DN on page 78.

Table 4-7        LDAP commands (continued)

Command          Definition

set password-hash
    Sets the LDAP password hash algorithm used when you set or change the
    LDAP user's password. The password is encrypted with the configured
    hash algorithm before it is sent to the LDAP server and stored in the
    LDAP directory.
    Note: Setting the LDAP password hash algorithm is optional.
    See To set the password hash algorithm on page 78.

get
    Displays the configured LDAP settings.
    See To display the LDAP configured settings on page 79.

clear
    Clears a configured setting.
    See To clear the LDAP setting on page 79.

Configuring LDAP server settings


You can set the LDAP base Distinguished Name (base DN). LDAP records are structured in a hierarchical tree. You access records through a particular path, in this case, a Distinguished Name, or DN. The base DN indicates where in the LDAP directory hierarchy you want to start your search.

Note: For SFS to access an LDAP directory service, you must specify the LDAP server DNS name or IP address.

To set the base DN for the LDAP server

To set the base DN for the LDAP server, enter the following:
Network> ldap set basedn value

where value is the LDAP base DN in the following format:


dc=yourorg,dc=com

For example:
Network> ldap set basedn dc=example,dc=com
OK Completed

To set the LDAP server hostname or IP address

To set the LDAP server hostname or IP address, enter the following:


Network> ldap set server value

where value is the LDAP server hostname or IP address. For example:


Network> ldap set server ldap-server.example.com
OK Completed

If you enter an IP address for the value, you get the following message:

Network> ldap set server 10.10.10.10
OK Completed

To set the LDAP server port number

To set the LDAP server port number, enter the following:


Network> ldap set port value

where value is the LDAP server port number. For example:


Network> ldap set port 555
OK Completed

To set SFS to use LDAP over SSL

To set SFS to use LDAP over SSL, enter the following:


Network> ldap set ssl {on|off}

For example:
Network> ldap set ssl on
OK Completed

To set the bind DN for the LDAP server

To set the bind DN for the LDAP server, enter the following:
Network> ldap set binddn value

where value is the LDAP bind DN in the following format:


cn=binduser,dc=yourorg,dc=com

The value setting is mandatory. You are prompted to supply a password. You must use your LDAP server password. For example:
Network> ldap set binddn cn
Enter password for 'cn': ***
OK Completed

To set the root bind DN for the LDAP server

To set the root bind DN for the LDAP server, enter the following:
Network> ldap set rootbinddn value

where value is the LDAP root bind DN in the following format:


cn=admin,dc=yourorg,dc=com

You are prompted to supply a password. You must use your LDAP server password. For example:
Network> ldap set rootbinddn dc
Enter password for 'dc': ***
OK Completed

To set the LDAP users, groups, or netgroups base DN

To set the LDAP users, groups, or netgroups base DN, enter the following:
Network> ldap set users-basedn value
Network> ldap set groups-basedn value
Network> ldap set netgroups-basedn value

users-basedn value
    Specifies the value for the users-basedn. For example:
    ou=users,dc=example,dc=com (default)
groups-basedn value
    Specifies the value for the groups-basedn. For example:
    ou=groups,dc=example,dc=com (default)
netgroups-basedn value
    Specifies the value for the netgroups-basedn. For example:
    ou=netgroups,dc=example,dc=com (default)

For example:
Network> ldap set users-basedn ou=Users,dc=example,dc=com
OK Completed

To set the password hash algorithm

To set the password hash algorithm, enter the following:


Network> ldap set password-hash {clear|crypt|md5}

For example:
Network> ldap set password-hash clear
OK Completed
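For reference, LDAP servers conventionally store these hashes in the userPassword attribute with an RFC 2307-style scheme prefix. The sketch below is an illustration of that convention, not SFS code, and covers two of the three choices:

```python
import base64
import hashlib

def user_password(password, scheme="md5"):
    """Illustrative RFC 2307-style userPassword values for two of the schemes."""
    if scheme == "clear":
        return password  # stored verbatim
    if scheme == "md5":
        # {MD5} prefix followed by the base64-encoded raw MD5 digest.
        digest = hashlib.md5(password.encode("utf-8")).digest()
        return "{MD5}" + base64.b64encode(digest).decode("ascii")
    raise ValueError(f"unsupported scheme: {scheme}")
```

The crypt choice similarly produces a {CRYPT} prefix followed by the output of the platform's crypt(3).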

To display the LDAP configured settings

To display the LDAP configured settings, enter the following:


Network> ldap get {server|port|basedn|binddn|ssl|rootbinddn|users-basedn|groups-basedn|netgroups-basedn|password-hash}

For example:
Network> ldap get server
LDAP server: ldap-server.example.com
OK Completed

To clear the LDAP setting

To clear the previously configured LDAP setting, enter the following:


Network> ldap clear {server|port|basedn|binddn|ssl|rootbinddn|users-basedn|groups-basedn|netgroups-basedn|password-hash}

For example:
Network> ldap clear binddn
OK Completed

About administering SFS cluster's LDAP client


You can display the Lightweight Directory Access Protocol (LDAP) client configurations. LDAP clients use the LDAPv3 protocol to communicate with the server.

Table 4-8        LDAP client commands

Command          Definition

ldap show
    Displays the SFS cluster's LDAP client configuration.
    See To display LDAP client configuration on page 80.

ldap enable
    Enables the LDAP client configuration.
    See To enable LDAP client configuration on page 81.

ldap disable
    Disables the LDAP client configuration. This command stops SFS from
    querying the LDAP service.
    See To disable LDAP client configuration on page 81.

Administering the SFS cluster's LDAP client


To display LDAP client configuration

To display LDAP client configuration, enter the following:


Network> ldap show [users|groups|netgroups]

users
    Displays the LDAP users that are available in the Name Service Switch
    (NSS) database.
groups
    Displays the LDAP groups that are available in the NSS database.
netgroups
    Displays the LDAP netgroups that are available in the NSS database.

If you do not include one of the optional variables, the command displays all the configured settings for the LDAP client. For example:
Network> ldap show
LDAP client is enabled.
=======================
LDAP server:             ldap_server
LDAP port:               389 (default)
LDAP base DN:            dc=example,dc=com
LDAP over SSL:           on
LDAP bind DN:            cn=binduser,dc=example,dc=com
LDAP root bind DN:       cn=admin,dc=example,dc=com
LDAP password hash:      md5
LDAP users base DN:      ou=Users,dc=example,dc=com
LDAP groups base DN:     ou=Groups,dc=example,dc=com
LDAP netgroups base DN:  ou=Netgroups,dc=example,dc=com
OK Completed
Network>

LDAP clients use the LDAPv3 protocol for communicating with the server. Enabling the LDAP client configures the Pluggable Authentication Module (PAM) files to use LDAP. PAM is the standard authentication framework for Linux.

To enable LDAP client configuration

To enable LDAP client configuration, enter the following:


Network> ldap enable

For example:
Network> ldap enable
Network>

Disabling the LDAP client configures the PAM configuration files so that they do not use LDAP.

To disable LDAP client configuration

To disable LDAP client configuration, enter the following:


Network> ldap disable

For example:
Network> ldap disable
Network>

About NIS
SFS supports Network Information Service (NIS), implemented in a NIS server, as an authentication authority. You can use NIS to authenticate computers. If your environment uses NIS, enable the NIS-based authentication on the SFS cluster.

Table 4-9        NIS commands

Command          Definition

nis show
    Displays the NIS server name, domain name, the NIS users, groups, and
    netgroups that are available in the NIS database.
    See To display NIS-related settings on page 82.

nis set domainname
    Sets the NIS domain name in the SFS cluster.
    See To set the NIS domain name on all nodes in the cluster on page 82.

nis set servername
    Sets the NIS server name in the SFS cluster.
    See To set NIS server name on all nodes in the cluster on page 83.

Table 4-9        NIS commands (continued)

Command          Definition

nis enable
    Enables the NIS clients in the SFS cluster. You must set the NIS
    domain name and NIS server name before you can enable NIS.
    See To enable NIS clients on page 83.

nis disable
    Disables the NIS clients in the SFS cluster.
    See To disable NIS clients on page 83.

Configuring the NIS-related commands


To display NIS-related settings

To display NIS-related settings, enter the following:


Network> nis show [users|groups|netgroups]

users
    Displays the NIS users that are available in the SFS cluster's NIS
    database.
groups
    Displays the NIS groups that are available in the SFS cluster's NIS
    database.
netgroups
    Displays the NIS netgroups that are available in the SFS cluster's NIS
    database.

For example:
Network> nis show
NIS Status : Disabled
domain     :
NIS Server :

To set the NIS domain name on all nodes in the cluster

To set the NIS domain name on the cluster nodes, enter the following:
Network> nis set domainname [domainname]

where domainname is the domain name. For example:


Network> nis set domainname domain_1
Setting domainname: "domain_1"

To set NIS server name on all nodes in the cluster

To set the NIS server name on all cluster nodes, enter the following:
Network> nis set servername servername

where servername is the NIS server name. You can use the server's name or IP address. For example:
Network> nis set servername 10.10.10.10
Setting NIS Server "10.10.10.10"

To enable NIS clients

To enable NIS clients, enter the following:


Network> nis enable

For example:
Network> nis enable
Enabling NIS Client on all the nodes..... Done.
Please enable NIS in nsswitch settings for required services.

To view the new settings, enter the following:


Network> nis show
NIS Status : Enabled
domain     : domain_1
NIS Server : 10.10.10.10

To disable NIS clients

To disable NIS clients, enter the following:


Network> nis disable

For example:
Network> nis disable
Disabling NIS Client on all nodes
Please disable NIS in nsswitch settings for required services.

About NSS
Name Service Switch (NSS) is an SFS cluster service which provides a single configuration location to identify the services (such as NIS or LDAP) for network information such as hosts, groups, or passwords. For example, host information may be on an NIS server. Group information may be in an LDAP database. The NSS configuration specifies which network services the SFS cluster should use to authenticate hosts, users, groups, and netgroups. The configuration also specifies the order in which multiple services should be queried.

Table 4-10       NSS commands

Command          Definition

nsswitch show
    Displays the NSS configuration.
    See To display the NSS configuration on page 84.

nsswitch conf
    Configures the order of the NSS services.
    See To configure the NSS lookup order on page 84.

Configuring NSS lookup order


To display the NSS configuration

To display the NSS configuration, enter the following:


Network> nsswitch show
group:    files nis winbind
hosts:    files nis dns
netgroup: nis ldap
passwd:   files nis winbind
shadow:   files winbind ldap
Network>
To configure the NSS lookup order

To configure the NSS lookup order, enter the following:


Network> nsswitch conf {group|hosts|netgroups|passwd|shadow} value1 [value2] [value3] [value4]

group
    Selects the group file.

Configuring SFS network settings About VLAN

85

hosts
    Selects the hosts file.
netgroups
    Selects the netgroups file.
passwd
    Selects the passwd file.
shadow
    Selects the shadow file.
value
    Specifies the NSS lookup order with the following values:

    value1 (required) - { files/nis/winbind/ldap }
    value2 (optional) - { files/nis/winbind/ldap }
    value3 (optional) - { files/nis/winbind/ldap }
    value4 (optional) - { files/nis/winbind/ldap }

To select DNS, you must use the following command:

Network> nsswitch conf hosts
nsswitch conf hosts <value1> [value2] [value3] -- select hosts file
value1 : Choose the type (files) (files)
value2 : Type the type (files/nis/dns) []
value3 : Type the type (files/nis/dns) []

For example:
Network> nsswitch conf shadow files ldap
Network> nsswitch show
group:    files nis winbind
hosts:    files nis dns
netgroup: nis ldap
passwd:   files nis winbind
shadow:   files ldap

About VLAN
The virtual LAN (VLAN) feature lets you create VLAN interfaces on the SFS nodes and administer them as any other VLAN interfaces. The VLAN interfaces are created using Linux support for VLAN interfaces. The Network> vlan commands view, add, or delete VLAN interfaces.

86

Configuring SFS network settings About VLAN

Table 4-11       VLAN commands

Command          Definition

vlan show
    Displays the VLAN interfaces.
    See To display the VLAN interfaces on page 86.

vlan add
    Adds a VLAN interface.
    See To add a VLAN interface on page 87.

vlan del
    Deletes a VLAN interface.
    See To delete a VLAN interface on page 87.

Configuring VLAN
To display the VLAN interfaces

To display the VLAN interfaces, enter the following:


Network> vlan show

For example:
VLAN        DEVICE     VLAN id
----        ------     -------
pubeth0.2   pubeth0    2

Configuring SFS network settings About VLAN

87

To add a VLAN interface

To add a VLAN interface, enter the following:


Network> vlan add device vlan_id

device
    Specifies the Ethernet interface on which the VLAN interface will be
    added.
vlan_id
    Specifies the VLAN ID which the new VLAN interface uses. Valid values
    range from 1 to 4095.

For example:
Network> vlan add pubeth1 2
Network> vlan show
VLAN        DEVICE     VLAN id
----        ------     -------
pubeth0.2   pubeth0    2
pubeth1.2   pubeth1    2

To delete a VLAN interface

To delete a VLAN interface, enter the following:


Network> vlan del vlan_device

where the vlan_device name combines the interface on which the VLAN is based and the VLAN ID separated by '.'. For example:
Network> vlan del pubeth0.2
Network> vlan show
VLAN        DEVICE     VLAN id
----        ------     -------
pubeth1.2   pubeth1    2
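The vlan_device argument follows the Linux convention of interface and VLAN ID joined by a dot. A small sketch of the naming rule, including the 1-4095 ID range from the vlan add command:

```python
def parse_vlan_device(name):
    """Split "<interface>.<vlan_id>" and validate the VLAN ID range (1-4095)."""
    interface, sep, vid = name.rpartition(".")
    if not sep or not interface or not vid.isdigit():
        raise ValueError(f"invalid VLAN device name: {name!r}")
    vlan_id = int(vid)
    if not 1 <= vlan_id <= 4095:
        raise ValueError(f"VLAN ID out of range: {vlan_id}")
    return interface, vlan_id
```

So pubeth0.2 names VLAN 2 on pubeth0, while something like pubeth0.5000 is rejected because the ID exceeds 4095.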

Chapter 5

Configuring your NFS server


This chapter includes the following topics:

About NFS server commands

About NFS server commands


The clustered NFS server provides file access services to UNIX and Linux client computers via the Network File System (NFS) protocol. You use the NFS commands to start and stop your NFS server. The NFS commands are defined in Table 5-1.

Note: For the NFS> share commands, go to About NFS file sharing.

To access the commands, log into the administrative console (for master, system-admin, or storage-admin) and enter NFS> mode. For login instructions, go to About using the SFS command-line interface.

Table 5-1        NFS mode commands

Command          Definition

server status
    Displays the status of the NFS server.
    See To check on the NFS server status on page 90.

server start
    Starts the NFS server.
    See Starting the NFS server on page 91.

server stop
    Stops the NFS server.
    See To stop the NFS server on page 91.

Table 5-1        NFS mode commands (continued)

Command          Definition

stat
    Prints the NFS statistics.
    See To display statistics for all the nodes in the cluster on the NFS
    server on page 92.

show fs
    Displays all of the online file systems and snapshots that can be
    exported.
    See To display a file system and snapshots that can be exported on
    page 93.

Accessing the NFS server


To check on the NFS server status

Prior to starting the NFS server, check on the status of the server by entering:
NFS> server status

For example:
NFS> server status
NFS Status on sfs_1 : OFFLINE
NFS Status on sfs_2 : OFFLINE

The states (ONLINE, OFFLINE, and FAULTED) correspond to each SFS node identified by the node name. The states of the node may vary depending on the situation for that particular node. The possible states of the NFS> server status command are:

ONLINE      Indicates that the node can serve NFS protocols to the client.
OFFLINE     Indicates the NFS services on that node are down.
FAULTED     Indicates something is wrong with the NFS service on the node.

You can run the NFS> server start command to restart the NFS services; only the nodes where the NFS services have problems are restarted.

Starting the NFS server

To start the NFS server, enter the following:


NFS> server start

You can use the NFS> server start command to clear an OFFLINE state from the NFS> server status output by only restarting the services that are offline. You can run the NFS> server start command multiple times without it affecting the already-started NFS server. For example:
NFS> server start
..Success.

Run the NFS> server status command again to confirm the change.
NFS> server status
NFS Status on sfs_1 : ONLINE
NFS Status on sfs_2 : ONLINE

To stop the NFS server

To stop the NFS server, enter the following:


NFS> server stop

For example:
NFS> server stop
..Success.

You will receive an error if you try to stop an already stopped NFS server.

Displaying NFS statistics


To display statistics for all the nodes in the cluster on the NFS server

To display NFS statistics, enter the following:


NFS> stat [nodename]

where nodename specifies the node name for which you are trying to obtain the statistical information. If the nodename is not specified, statistics for all the nodes in the cluster are displayed. For example:
NFS> stat sfs_01
sfs_01
----------------
Server rpc stats:
calls      badcalls   badauth    badclnt    xdrcall
52517      0          0          0          0

Server nfs v2:
null         getattr      setattr      root         lookup       readlink
10     100%  0       0%   0       0%   0       0%   0       0%   0       0%
read         wrcache      write        create       remove       rename
0       0%   0       0%   0       0%   0       0%   0       0%   0       0%
link         symlink      mkdir        rmdir        readdir      fsstat
0       0%   0       0%   0       0%   0       0%   0       0%   0       0%

Server nfs v3:
null         getattr      setattr      lookup       access       readlink
11      0%   17973  35%   0       0%   5951   11%   6997   13%   1034    2%
read         write        create       mkdir        symlink      mknod
4138    8%   4137    8%   3251    6%   1255    2%   1034    2%   0       0%
remove       rmdir        rename       link         readdir      readdirplus
0       0%   1       0%   0       0%   0       0%   0       0%   1361    2%
fsstat       fsinfo       pathconf     commit
0       0%   2       0%   0       0%   3067    6%
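The percentage printed next to each counter appears to be that operation's share of the total calls for the protocol version, truncated to a whole percent. Checking that against the v3 counters in the sample output:

```python
# v3 counters transcribed from the sample NFS> stat output above.
v3_calls = {
    "null": 11, "getattr": 17973, "setattr": 0, "lookup": 5951,
    "access": 6997, "readlink": 1034, "read": 4138, "write": 4137,
    "create": 3251, "mkdir": 1255, "symlink": 1034, "mknod": 0,
    "remove": 0, "rmdir": 1, "rename": 0, "link": 0,
    "readdir": 0, "readdirplus": 1361, "fsstat": 0, "fsinfo": 2,
    "pathconf": 0, "commit": 3067,
}

def pct(op):
    """Operation's share of all v3 calls, truncated to a whole percent."""
    return v3_calls[op] * 100 // sum(v3_calls.values())
```

For instance, getattr works out to 35% and commit to 6%, matching the sample output.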

Displaying file systems and snapshots that can be exported


To display a file system and snapshots that can be exported

To display online file systems and the snapshots that can be exported, enter the following:
NFS> show fs

For example:
NFS> show fs
FS/Snapshot
===========
fs1

Chapter 6

Configuring storage
This chapter includes the following topics:

- About storage provisioning and management
- About configuring storage pools
- About configuring disks
- About displaying information for all disk devices
- Increasing the storage capacity of a LUN
- Printing WWN information
- Initiating SFS host discovery of LUNs
- About I/O fencing

About storage provisioning and management


Storage provisioning in SFS focuses on the storage pool, which is comprised of a set of disks. The file system commands accept a set of pools as an argument. For example, creating a file system takes one or more pools, and creates a file system over some or all of the pools. A mirrored file system takes multiple pools as an argument and creates a file system such that each copy of the data resides on a different pool.

To provision SFS storage, verify that the Logical Unit Numbers (LUNs) or meta-LUNs in your physical storage arrays have been zoned for use with the SFS cluster. The storage array administrator normally allocates and zones this physical storage.

Use the SFS Storage> pool commands to create storage pools using disks (the named LUNs). Each disk can only belong to one storage pool. If you try to add a disk that is already in use, an error message is displayed. With these storage pools, use the Storage> fs commands to create file systems with different layouts (for example mirrored, striped, striped-mirror). The storage commands are defined in Table 6-1. To access the commands, log into the administrative console (master, system-admin, or storage-admin) and enter the Storage> mode. For login instructions, go to About using the SFS command-line interface.

Table 6-1        Storage mode commands

Command          Definition

pool
    Configures storage pools.
    See About configuring storage pools on page 96.

pool adddisk, pool mvdisk, pool rmdisk
    Configures the disk(s) in the pool.
    See About configuring disks on page 101.

hba
    Prints the World Wide Name (WWN) information for all of the nodes in
    the cluster.
    See Printing WWN information on page 109.

scanbus
    Scans all of the SCSI devices connected to all of the nodes in the
    cluster.
    See Initiating SFS host discovery of LUNs on page 110.

fencing
    Protects the data integrity if the split-brain condition occurs.
    See About I/O fencing on page 111.

disk list
    Lists all of the available disks, and identifies which ones you want
    to assign to which pools.
    See About displaying information for all disk devices on page 105.

About configuring storage pools


A storage pool is a group of disks from which SFS allocates capacity when you create or expand file systems. During the initial configuration, you use the Storage> commands to create storage pools, to discover disks, and to assign
them to pools. Disk discovery and pool assignment are done once. SFS propagates disk information to all cluster nodes. You must first create storage pools that can be used to build file systems on. Disks and pools can be specified in the same command provided the disks are part of an existing storage pool. The pool and disk specified first are allocated space before other pools and disks. If the specified disk is larger than the space allocated, the remainder of the space is still utilized when another file system is created spanning the same disk.

Table 6-2        Configure storage pool commands

Command          Definition

pool create
    Creates storage pools. You can build file systems on top of them.
    Note: Disks being used for the pool create command must support SCSI-3
    PGR registrations if I/O fencing is enabled.
    Note: The minimum size of disks required for creating a pool or adding
    a disk to the pool is 10 MB.
    See To create the storage pool used to create a file system on
    page 99.

pool list
    Lists all of the available disks, and identifies which ones you want
    to assign to which pools. A storage pool is a collection of disks from
    shared storage; the pool is used as the source for adding file system
    capacity as needed.
    Note: Your output for the pool list command depends upon which node
    console is running.
    See To list your pools on page 100.

pool rename
    Renames a pool.
    See To rename a pool on page 100.

pool destroy
    Destroys storage pools used to create file systems. Destroying a pool
    does not delete the data on the disks that make up the storage pool.
    See To destroy a storage pool on page 101.

Table 6-2        Configure storage pool commands (continued)

Command          Definition

pool free
    Lists the free space in each of the pools. Free space information
    includes:
    - Disk name
    - Free space
    - Total space
    - Use %
    See To list free space for pools on page 101.

Configuring storage pools


To create the storage pool used to create a file system

1   List all of the available disks, and identify which ones you want to assign to which pools.

    Storage> disk list
    Disk    sfs_01
    ====    ========
    disk1   OK

2   To create a storage pool, enter the following:

    Storage> pool create pool_name disk1[,disk2,...]

    pool_name         Specifies the name of the storage pool to create. The storage pool name should be a string.

    disk1,disk2,...   Specifies the disks to include in the storage pool. If a specified disk does not exist, an error message is displayed. Use the Storage> disk list command to view the available disks. Each disk can belong to only one storage pool. If you try to add a disk that is already in use, an error message is displayed. To specify additional disks as part of the storage pool, separate them with a comma with no space in between.

    For example:

    Storage> pool create pool1 Disk_0,Disk_1
    SFS pool Success V-288-1015 Pool pool1 created successfully
    100% [#] Creating pool pool1


To list your pools

To list your pools, enter the following:


Storage> pool list

For example:
Storage> pool list
Pool    List of disks
----    -------------
pool1   Disk_0 Disk_1
pool2   Disk_2 Disk_3
pool3   Disk_4 Disk_5

To rename a pool

To rename a pool, enter the following:


Storage> pool rename old_name new_name

old_name   Specifies the name of the existing pool to be changed. If old_name is not the name of an existing pool, an error message is displayed.

new_name   Specifies the new name for the pool. If the specified new name is already being used by another pool, an error message is displayed.

For example:

Storage> pool rename pool1 p01
SFS pool Success V-288-0 Pool rename successful.


To destroy a storage pool

To destroy a storage pool, enter the following:


Storage> pool destroy pool_name

where pool_name specifies the storage pool to delete. If the specified pool_name is not an existing storage pool, an error message is displayed.

For example:

Storage> pool destroy pool1
SFS pool Success V-288-988 Pool pool1 is destroyed.

Because you cannot destroy an Unallocated storage pool, you must remove the disk from the storage pool using the Storage> pool rmdisk command before trying to destroy the storage pool. See "To remove a disk." If you want to move the disk from the Unallocated pool to another existing pool, you can use the Storage> pool mvdisk command. See "To move disks from one pool to another."

To list free space for pools

To list free space for your pool, enter the following:


Storage> pool free [pool_name]

where pool_name specifies the pool for which you want to display free space information. If a specified pool does not exist, an error message is displayed. If pool_name is omitted, the free space for every pool is displayed, but information for specific disks is not displayed. For example:
storage> pool free
Pool     Free Space   Total Space   Use%
====     ==========   ===========   ====
pool_1   0 KB         165.49M       100%
pool_2   0 KB         165.49M       100%
pool_3   57.46M       165.49M       65%

About configuring disks


Disks and pools can be specified in the same command provided the disks are part of an existing storage pool.


The pool and disk that are specified first are allocated space before other pools and disks. If the specified disk is larger than the space allocated, the remainder of the space is still utilized when another file system is created spanning the same disk.

Table 6-3           Configure disks commands

Command             Definition

pool adddisk        Adds a new disk to an existing pool. A disk can belong to only one pool. The minimum size of disks required for creating a pool or adding a disk to the pool is 10 MB.

                    Note: Disks used with the pool adddisk command must support SCSI-3 PGR registrations if I/O fencing is enabled.

                    See "To add a disk" on page 103.

pool mvdisk         Moves disks from one storage pool to another.

                    Note: You cannot move a disk from one storage pool to another if the disk has data on it.

                    See "To move disks from one pool to another" on page 104.

pool rmdisk         Removes a disk from a pool.

                    Note: You cannot remove a disk from a pool if the disk has data on it.

                    If a specified disk does not exist, an error message is displayed. If one of the disks does not exist, then none of the disks are removed. A pool cannot exist if there are no disks assigned to it; if the disk to be removed is the only disk for that pool, the pool is removed along with the disk. If the specified disk is being used by a file system, the disk is not removed.

                    See "To remove a disk" on page 105.


Configuring disks
To add a disk

To add a new disk to an existing pool, enter the following:


Storage> pool adddisk pool_name disk1[,disk2,...]

pool_name         Specifies the pool to which the disks are added. If the specified pool name is not an existing pool, an error message is displayed.

disk1,disk2,...   Specifies the disks to be added to the pool. To add multiple disks, separate them with a comma with no space in between. A disk can be added to only one pool, so if an entered disk is already in a pool, an error message is displayed.

For example:

Storage> pool adddisk pool2 Disk_2
SFS pool Success V-288-0 Disk(s) Disk_2 are added to pool2 successfully.


To move disks from one pool to another

To move a disk from one pool to another, or from an unallocated pool to an existing pool, enter the following:
Storage> pool mvdisk src_pool dest_pool disk1[,disk2,...]

src_pool          Specifies the source pool to move the disks from. If the specified source pool does not exist, an error message is displayed.

dest_pool         Specifies the destination pool to move the disks to. If the specified destination pool does not exist, a new pool is created with the specified name, and the disks are moved to that pool.

disk1,disk2,...   Specifies the disks to be moved. To specify multiple disks, separate them with a comma with no space in between. If a specified disk is not part of the source pool or does not exist, an error message is displayed. If one of the disks to be moved does not exist, none of the specified disks are moved. If all of the disks in the source pool are moved, the pool is removed (deleted from the system), because there are no disks associated with it.

For example:

Storage> pool mvdisk p01 pool2 Disk_0
SFS pool Success V-288-0 Disk(s) moved successfully.


To remove a disk

To remove a disk from a pool, enter the following:


Storage> pool rmdisk disk1[,disk2,...]

where disk1,disk2,... specifies the disk(s) to be removed from the pool. An Unallocated pool is a reserved pool for holding disks that have been removed from other pools. For example:

Storage> pool list
Pool Name     List of disks
---------     -------------
pool1         Disk_0 Disk_1
pool2         Disk_2 Disk_5
pool3         Disk_3 Disk_4
Unallocated   Disk_6

Storage> pool rmdisk Disk_6
SFS pool Success V-288-987 Disk(s) Disk_6 are removed successfully.

Storage> pool list
Pool Name     List of disks
---------     -------------
pool1         Disk_0 Disk_1
pool2         Disk_2 Disk_5
pool3         Disk_3 Disk_4

The Disk_6 disk no longer appears in the output.

To remove additional disks, use a comma with no spaces in between. For example:
Storage> pool rmdisk disk1,disk2
Storage>

About displaying information for all disk devices


The Storage> disk list command displays the aggregated information of the disk devices connected to all of the nodes in the cluster.


Table 6-4           Disk devices commands

Command             Definition

disk list stats     Displays a list of disks and nodes in tabular form. Each row
(default)           corresponds to a disk, and each column corresponds to a node. An OK in the table indicates that the disk in that row is accessible by the node in that column. An ERR indicates that the disk in that row is inaccessible by the node in that column. This list does not include the internal disks of each node.

                    See "To display a list of disks and nodes in tabular form" on page 107.

disk list detail    Displays the disk information, including a list of disks and their properties. If the console server is unable to access a disk, but another node in the cluster can access that disk, the disk is shown as "---".

                    See "To display the disk information" on page 108.

disk list paths     Displays the list of multiple paths of disks connected to all of the nodes in the cluster. It also shows the status of each path on each node in the cluster.

                    See "To display the disk list paths" on page 108.

disk list types     Displays the enclosure name, array name, and array type for each disk present on all of the nodes in the cluster.

                    See "To display information for all disk devices associated with nodes in a cluster" on page 108.

Displaying information for all disk devices associated with nodes in a cluster
Depending on which command variable you use, the column headings differ.

Disk            Indicates the disk name.

Serial Number   Indicates the serial number for the disk.

Enclosure       Indicates the type of storage enclosure.

Size            Indicates the size of the disk.

Use%            Indicates the percentage of the disk that is being used.


ID              The ID column consists of the following four fields, separated by ":".

                VendorID - Specifies the name of the storage vendor, for example, NETAPP, HITACHI, IBM, EMC, HP, and so on.

                ProductID - Specifies the ProductID, which is based on the vendor. Each vendor manufactures different products. For example, HITACHI has HDS5700, HDS5800, and HDS9200 products. These products have ProductIDs such as DF350, DF400, and DF500.

                TargetID - Specifies the TargetID. Each port of an array is a target. Two different arrays, or two ports of the same array, have different TargetIDs. TargetIDs start from 0.

                LunID - Specifies the ID of the LUN. This should not be confused with the LUN serial number. A LUN serial number uniquely identifies a LUN in a target, whereas a LunID uniquely identifies a LUN in an initiator group (or host group). Two LUNs in the same initiator group cannot have the same LunID. For example, if a LUN is assigned to two clusters, the LunID of that LUN can be different in each cluster, but the serial number is the same.

Enclosure       Indicates the name of the enclosure, which distinguishes between arrays having the same array name.

Array Name      Indicates the name of the storage array.

Array Type      Indicates the type of storage array; it contains one of three values: Disk (for JBODs), Active-Active, or Active-Passive.
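The four-field ID format above can be split mechanically. The sketch below assumes the percent-encoded fields seen in the disk list detail sample output in this chapter (for example, %20 for a space and %2C for a comma); the helper name is illustrative, not part of SFS:

```python
from urllib.parse import unquote

def parse_disk_id(disk_id: str) -> dict:
    """Split an ID column value into its four ':'-separated fields.

    The vendor and product fields appear percent-encoded in the CLI
    output, so each is URL-decoded here (an assumption based on the
    sample output, e.g. "VMware%2C" -> "VMware,").
    """
    vendor, product, target, lun = disk_id.split(":")
    return {
        "VendorID": unquote(vendor),
        "ProductID": unquote(product),
        "TargetID": int(target),
        "LunID": int(lun),
    }

# Sample ID from the Storage> disk list detail output:
print(parse_disk_id("VMware%2C:VMware%20Virtual%20S:0:0"))
```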

To display a list of disks and nodes in tabular form

To display a list of disks and nodes in tabular form, enter the following:
Storage> disk list stats
Disk    sfs_1      sfs_2
====    ========   ========
disk1   OK         OK


To display the disk information

To display the disk information, enter the following:


Storage> disk list detail
Disk    Pool   Enclosure     Size     Use%   ID                                   Serial Number
====    ====   ==========    ====     ====   ==                                   =============
disk1   p2     OTHER_DISKS   10.00G   0.0%   VMware%2C:VMware%20Virtual%20S:0:0   -

To display the disk list paths

To display the disks multiple paths, enter the following:


Storage> disk list paths
Disk    Paths    sfs_1            sfs_2
====    =====    ========         ========
disk1   Path 1   enabled,active   enabled,active

To display information for all disk devices associated with nodes in a cluster

To display information for all of the disk devices connected to all of the nodes in a cluster, enter the following:
Storage> disk list types
Disk     Enclosure    Array Name   Array Type
====     ==========   ==========   ==========
Disk_0   Disk         Disk         Disk
Disk_1   Disk         Disk         Disk
Disk_3   Disk         Disk         Disk
Disk_4   Disk         Disk         Disk
Disk_5   Disk         Disk         Disk

Increasing the storage capacity of a LUN


The Storage> disk grow command lets you increase the storage capacity of a previously created LUN on a storage array disk.


Warning: When increasing the storage capacity of a disk, make sure that the storage array does not reformat it. Reformatting destroys the data. For help, contact your Storage Administrator.

To increase the storage capacity of a LUN

1   Increase the storage capacity of the disk on your storage array. Contact your Storage Administrator for assistance.

2   Run the SFS Storage> scanbus command to make sure that the disk is connected to the SFS cluster.

    See "Initiating SFS host discovery of LUNs" on page 110.

3   To increase the storage capacity of the LUN, enter the following:

    Storage> disk grow disk_name

    where disk_name is the name of the disk.

    For example:

    Storage> disk grow Disk_0
    SFS disk SUCCESS V-288-0 disk grow Disk_0 completed successfully

Printing WWN information


The Storage> hba (host bus adapter) command prints World Wide Name (WWN) information for all of the nodes in the cluster. If you want to find the WWN information for a particular node, specify the node name.


To print WWN information

To print the WWN information, enter the following:


Storage> hba [host_name]

where you can use the host_name variable if you want to find WWN information for a particular node. Example output:
Storage> hba
Node    Host Initiator HBA WWNs
====    =======================
sfs_1   21:00:00:e0:8b:9d:85:27, 21:01:00:e0:8b:bd:85:27
sfs_2   21:00:00:e0:8b:9d:65:1c, 21:01:00:e0:8b:bd:65:1c
sfs_3   21:00:00:e0:8b:9d:88:27, 21:01:00:e0:8b:bd:88:27

There are two WWNs on each row, representing the two HBAs for each node.

Initiating SFS host discovery of LUNs


The Storage> scanbus command scans all of the SCSI devices connected to all of the nodes in the cluster. When you add new storage to your devices, you must scan for new SCSI devices. You only need to issue the command once; all of the nodes then discover the newly added disks. The command updates the device configurations without interrupting existing I/O activity. The scan does not inform you if there is a change in the storage configuration; you can see the latest storage configuration using the Storage> disk list command. You do not need to reboot after scanbus has completed.

To scan SCSI devices

To scan the SCSI devices connected to all of the nodes in the cluster, enter the following:
Storage> scanbus

For example:
Storage> scanbus
100% [#] Scanning the bus for disks
Storage>


About I/O fencing


In the SFS cluster, one method of communication between the nodes is conducted through heartbeats over private links. If two nodes cannot verify each other's state because they cannot communicate, neither node can distinguish whether the failed communication is caused by a failed link or a failed partner node. The network breaks into two networks that cannot communicate with each other but do communicate with the central storage. This condition is referred to as the "split-brain" condition.

I/O fencing (also referred to as disk fencing) protects data integrity if the split-brain condition occurs. I/O fencing determines which nodes retain access to the shared storage and which nodes are removed from the cluster, to prevent possible data corruption.

To protect the data on the shared disks, each system in the cluster must be configured to use I/O fencing by making use of special-purpose disks called coordinator disks. These are standard disks or LUNs that are set aside for use by the I/O fencing driver. You can specify three (or an odd number greater than three) disks as coordinator disks. The coordinator disks act as a global lock device during a cluster reconfiguration. This lock mechanism determines which node is allowed to fence off data drives from other nodes. A system must eject a peer from the coordinator disks before it can fence the peer from the data drives. Racing for control of the coordinator disks is how fencing helps prevent split-brain.

Coordinator disks cannot be used for any other purpose. You cannot store data on them or include them in a disk group for user data. To use the I/O fencing feature, you need to create a separate coordinator disk group, which contains the three coordinator disks. Your minimum configuration must be a two-node cluster with SFS software installed and more than five shared disks.

For the list of storage commands needed to perform I/O fencing related operations, see Table 6-5.

Table 6-5           I/O fencing commands

Command             Definition

fencing status      Checks the status of I/O fencing. It shows whether the coordinator disk group is currently enabled or disabled. It also shows the status of the individual coordinator disks.

                    See "To check status of I/O fencing" on page 113.


Table 6-5           I/O fencing commands (continued)

Command             Definition

fencing on          Checks if the coordinator disk group has three disks. If not, you need to add disks to the coordinator disk pool until three are present. The minimum LUN size is 10 MB.

                    See "To add disks to coordinator disk group" on page 114.

fencing replace     Replaces a coordinator disk with another disk. The command first checks whether the replacement disk is in a failed state; if it is, an error appears. After the command verifies that the replacement disk is not in a failed state, it checks whether the replacement disk is already being used by an existing pool (storage or coordinator). If it is not being used by any pool, the original disk is replaced.

                    See "To replace an existing coordinator disk" on page 115.

fencing off         Disables I/O fencing on all of the nodes. This command does not free up the coordinator disks.

                    See "To disable I/O fencing" on page 115.

fencing destroy     Destroys the coordinator pool if I/O fencing is disabled. This command is not supported on a single-node setup.

                    See "To destroy the coordinator pool" on page 115.
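The race for the coordinator disks described in this section can be pictured with a toy model (a conceptual sketch only, not SFS code): because the number of coordinator disks is odd, at most one subcluster can eject its peer from a majority of them, so exactly one side of a split-brain survives.

```python
def survives(disks_won: int, total_coordinator_disks: int = 3) -> bool:
    """A subcluster stays in the cluster only if it wins the race on a
    majority of the coordinator disks; the losing side removes itself."""
    return disks_won > total_coordinator_disks // 2

# In a two-way split over 3 coordinator disks, the wins partition 3,
# so in every possible outcome exactly one subcluster holds a majority:
for a in range(4):
    b = 3 - a
    assert survives(a) != survives(b)  # never both survive, never neither
print("exactly one subcluster survives in every outcome")
```

This is also why an even number of coordinator disks is not allowed: with an even count, both sides could win half the disks and neither would hold a majority.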


Configuring I/O fencing


To check status of I/O fencing

To check the status of I/O fencing, enter the following:


Storage> fencing status

In the following example, I/O fencing is configured on the three disks Disk_0, Disk_1, and Disk_2. The column header Coord Flag On indicates that the coordinator disk group is in an imported state and these disks are in good condition. If you check the Storage> disk list output, they are in the OK state.

IO Fencing Status
=================
Disabled

Disk Name        Coord Flag On
==============   ==============
Disk_0           Yes
Disk_1           Yes
Disk_2           Yes


To add disks to coordinator disk group

To add disks to the coordinator disk group, enter the following:


Storage> fencing on [disk1,disk2,disk3]

The three disks are optional arguments and are required only if the coordinator pool does not contain any disks. You may still provide three disks for fencing even when the coordinator pool already contains three disks; however, this removes the three disks previously used for fencing from the coordinator pool and configures I/O fencing on the new disks.

For example:

Storage> fencing on
SFS fencing Success V-288-0 IO Fencing feature now Enabled
100% [#] Enabling fencing

Storage> fencing status
IO Fencing Status
=================
Enabled

Disk Name        Coord Flag On
==============   ==============
Disk_0           Yes
Disk_1           Yes
Disk_2           Yes


To replace an existing coordinator disk

To replace the existing coordinator disk, enter the following:


Storage> fencing replace src_disk dest_disk

where src_disk is the source disk and dest_disk is the destination disk. For example:
Storage> fencing replace Disk_2 Disk_3
SFS fencing Success V-288-0 Replaced disk Disk_2 with Disk_3 successfully.
100% [#] Replacing disk Disk_2 with Disk_3

Storage> fencing status
IO Fencing Status
=================
Enabled

Disk Name        Coord Flag On
==============   ==============
Disk_0           Yes
Disk_1           Yes
Disk_3           Yes

To disable I/O fencing

To disable I/O fencing, enter the following:


Storage> fencing off

For example, to disable fencing if it's already enabled:


Storage> fencing off
SFS fencing Success V-288-0 IO Fencing feature now Disabled
100% [#] Disabling fencing

To destroy the coordinator pool

To destroy the coordinator pool, enter the following:


Storage> fencing destroy
Storage>


Chapter

Creating and maintaining file systems


This chapter includes the following topics:

About creating and maintaining file systems
Listing all file systems and associated information
About creating file systems
Adding or removing a mirror to a file system
Configuring FastResync for a file system
Disabling the FastResync option for a file system
Increasing the size of a file system
Decreasing the size of a file system
Checking and repairing a file system
Changing the status of a file system
Destroying a file system
About snapshots
About snapshot schedules

About creating and maintaining file systems


This chapter discusses the SFS file system commands. You use these commands to configure your file system.


For more information on the fs commands, see Table 7-1 on page 118.

File systems consist of both metadata and file system data. Metadata contains information such as the last modification date, creation time, permissions, and so on. The total amount of space required for the metadata depends on the number of files in the file system. A file system with many small files requires more space to store metadata. A file system with fewer, larger files requires less space for handling the metadata.

When you create a file system, you need to set aside some space for handling the metadata. The space required is generally proportional to the size of the file system. For this reason, after you create the file system, the Storage> fs list output includes non-zero use percentages. The space set aside for handling metadata may increase or decrease as needed. For example, a file system on a 1 GB volume takes approximately 35 MB (about 3%) initially for storing metadata. In contrast, a file system of 10 MB requires approximately 3.3 MB (30%) initially for storing the metadata.

To access the commands, log into the administrative console (as a master, system-admin, or storage-admin) and enter Storage> mode. For login instructions, see "About using the SFS command-line interface."

Table 7-1           Storage mode commands

Command             Definition

fs list             Lists all file systems and associated information.

                    See "To list all file systems and associated information" on page 120.

fs create           Creates a file system.

                    See "About creating file systems" on page 120.

fs addmirror        Adds a mirror to a file system.

                    See "To add a mirror to a file system" on page 124.

fs rmmirror         Removes a mirror from a file system.

                    See "To remove a mirror from a file system" on page 126.

fs setfastresync    Keeps the mirrors in the file system in a consistent state.

                    See "To enable the FastResync option" on page 127.

fs unsetfastresync  Disables the FastResync option for a file system.

                    See "To disable the FastResync option" on page 127.


Table 7-1           Storage mode commands (continued)

Command             Definition

fs growto           Increases the size of a file system to a specified size.

                    See "To increase the size of a file system to a specified size" on page 128.

fs growby           Increases the size of a file system by a specified size.

                    See "To increase the size of a file system by a specified size" on page 128.

fs shrinkto         Decreases the size of a file system to a specified size.

                    See "To decrease the size of a file system to a specified size" on page 129.

fs shrinkby         Decreases the size of a file system by a specified size.

                    See "To decrease the size of a file system by a specified size" on page 130.

fs fsck             Checks and repairs a file system.

                    See "To check and repair a file system" on page 131.

fs online           Mounts (places online) a file system.

                    See "To change the status of a file system" on page 132.

fs offline          Unmounts (places offline) a file system.

                    See "To change the status of a file system" on page 132.

fs destroy          Destroys a file system.

                    See "To destroy a file system" on page 133.

snapshot            Copies a set of files and directories as they were at a particular point in the past.

                    See "About snapshots" on page 133.

snapshot schedule   Creates or removes a snapshot schedule.

                    See "About snapshot schedules" on page 138.


Listing all file systems and associated information


To list all file systems and associated information

To list all file systems and associated information, enter the following:
Storage> fs list [fs_name]

where fs_name is optional. If you enter a file system that does not exist, an error message is displayed. If you do not enter a specified file system, a list of file systems is displayed. For example:
Storage> fs list fs1
General Info:
===============
Block Size:    1024 Bytes

Primary Tier
============
Size:          5.00G
Use%:          11%
Layout:        simple
Mirrors:
Columns:
Stripe Unit:   0.00 K
FastResync:    Disabled

Mirror 1:
List of pools: p2
List of disks: sda
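The non-zero Use% shown for a freshly created file system comes from the initial metadata allocation discussed earlier (about 35 MB on a 1 GB volume, about 3.3 MB on a 10 MB file system). A quick calculation makes the proportion visible; this is illustrative arithmetic only, since the actual overhead varies with the number of files:

```python
def metadata_overhead_pct(fs_size_mb: float, metadata_mb: float) -> float:
    """Initial metadata space as a percentage of the file system size."""
    return 100.0 * metadata_mb / fs_size_mb

# Figures quoted in the text above:
print(round(metadata_overhead_pct(1024, 35), 1))  # 1 GB volume -> about 3%
print(round(metadata_overhead_pct(10, 3.3)))      # 10 MB file system -> about 30%
```

The smaller the file system, the larger the fraction consumed by this fixed initial metadata.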

About creating file systems


The Storage> fs commands manage file system operations.


Table 7-2           Create file systems commands

Command             Definition

fs create simple    Creates a simple file system of a specified size. You can specify a block size for the file system. The default block size is determined based on the size of the file system when the file system is created. For example, 1 KB is the default block size for up to a 2 TB file system size. There are other default block sizes (2 KB, 4 KB, and 8 KB) for different ranges of file system sizes. If you create a 1 TB file system and then increase it to 3 TB, the file system block size remains 1 KB.

                    See "To create a simple file system of a specified size" on page 121.

fs create mirrored  Creates a mirrored file system with a specified number of mirrors, a list of pools, and online status. Each mirror uses the disks from the corresponding pools as listed.

                    See "To create a mirrored file system" on page 122.

fs create           Creates a mirrored-stripe file system with a specified number of
mirrored-stripe     columns, mirrors, pools, and protection options.

                    See "To create a mirrored-stripe file system" on page 122.

fs create           Creates a striped-mirror file system with a specified number of
striped-mirror      mirrors and stripes.

                    See "To create a striped-mirror file system" on page 122.

fs create striped   Creates a striped file system. A striped file system is a file system that stores its data across multiple disks rather than storing the data on one disk.

                    See "To create a striped file system" on page 122.

Creating a file system


To create a simple file system of a specified size

To create a simple file system with a specified size, enter the following:
Storage> fs create simple fs_name size pool1[,disk1,...] [blksize=bytes]

For example:
Storage> fs create simple fs2 10m sda
100% [#] Creating simple filesystem


To create a mirrored file system

To create a mirrored file system, enter the following:


Storage> fs create mirrored fs_name size nmirrors pool1[,disk1,...] [protection=disk|pool] [blksize=bytes]

For example:
Storage> fs create mirrored fs1 100M 2 pool1,pool2
100% [#] Creating mirrored filesystem

To create a mirrored-stripe file system

To create a mirrored-stripe file system, enter the following:


Storage> fs create mirrored-stripe fs_name size nmirrors ncolumns pool1[,disk1,...] [protection=disk|pool] [stripeunit=kilobytes] [blksize=bytes]

To create a striped-mirror file system

To create a striped-mirror file system, enter the following:


Storage> fs create striped-mirror fs_name size nmirrors ncolumns pool1[,disk1,...] [protection=disk|pool] [stripeunit=kilobytes] [blksize=bytes]

To create a striped file system

To create a striped file system, enter the following:


Storage> fs create striped fs_name size ncolumns pool1[,disk1,...] [stripeunit=kilobytes] [blksize=bytes]

fs_name             Specifies the name of the file system being created. The file system name should be a string. If you enter the name of a file system that already exists, you receive an error message and the file system is not created.

size                Specifies the size of the file system. To create a file system, you need at least 10 MB of space. Available units are the following:

                    MB
                    GB
                    TB

                    You can enter the units with either uppercase (10M) or lowercase (10m) letters. To see how much space is available on a pool, use the Storage> pool free command.

                    See "About configuring storage pools" on page 96.

nmirrors            Specifies the number of mirrors the file system has. You must enter a positive integer.

ncolumns            Specifies the number of columns for the striped file system. The number of columns represents the number of disks to stripe the information across. If the number of columns exceeds the number of disks for the entered pools, an error message is displayed indicating that there is not enough space to create the striped file system.

pool1[,disk1,...]   Specifies the pool(s) or disk(s) for the file system. If you specify a pool or disk that does not exist, you receive an error message. Specify more than one pool or disk by separating the names with a comma; however, do not include a space between the comma and the name. To find a list of pools, use the Storage> pool list command. To find a list of disks, use the Storage> disk list command. The disk must be part of the pool or an error message is displayed.

protection          If you do not specify a protection option, the default is "disk." The available options for this field are:

                    disk - Creates mirrors on separate disks.
                    pool - Creates mirrors in separate pools.

                    If there is not enough space to create the mirrors, an error message is displayed, and the file system is not created.


stripeunit=         Specifies a stripe width (in kilobytes). Possible values are the
kilobytes           following:

                    128
                    256
                    512 (default)
                    1024
                    2048

blksize=bytes       Specifies the block size for the file system. Possible values of bytes are the following:

                    1024 (default)
                    2048
                    4096
                    8192
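The column layout of a striped file system can be pictured with a toy round-robin model: data is cut into stripe-unit-sized chunks and chunk i goes to column i modulo ncolumns. This sketch is purely illustrative of the striping concept, not SFS code:

```python
def stripe_chunks(data: bytes, ncolumns: int, stripe_unit: int):
    """Assign consecutive stripe-unit-sized chunks to columns round-robin."""
    layout = [bytearray() for _ in range(ncolumns)]
    for i in range(0, len(data), stripe_unit):
        layout[(i // stripe_unit) % ncolumns].extend(data[i:i + stripe_unit])
    return [bytes(col) for col in layout]

# 24 bytes as 6 chunks of 4 bytes over 3 columns:
# chunks 0,3 -> column 0; chunks 1,4 -> column 1; chunks 2,5 -> column 2
cols = stripe_chunks(bytes(range(24)), ncolumns=3, stripe_unit=4)
print([len(c) for c in cols])  # [8, 8, 8]
```

This is why more columns can increase throughput: consecutive chunks land on different disks and can be read or written in parallel.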

Adding or removing a mirror to a file system


A mirrored file system is one that has copies of itself on other disks or pools.

To add a mirror to a file system

To add a mirror to a file system, enter the following:


Storage> fs addmirror fs_name pool1[,disk1,...] [protection=disk|pool]

fs_name             Specifies the file system to which to add the mirror. If the specified file system does not exist, an error message is displayed.


pool1[,disk1,...]   Specifies the pool(s) or disk(s) to use for the file system. If the specified pool or disk does not exist, an error message is displayed, and the mirror is not added. You can specify more than one pool or disk by separating the names with a comma, but do not include a space between the comma and the name. To find a list of existing pools and disks, use the Storage> pool list command. See "About configuring storage pools" on page 96. To find a list of the existing disks, use the Storage> disk list command. See "About displaying information for all disk devices" on page 105. The disk needs to be part of the pool or an error message is displayed.

protection          The default value for the protection field is "disk." Available options are:
                    disk - Creates mirrors on separate disks.
                    pool - Uses pools from any available pool.

For example:
Storage> fs addmirror fs1 pool3,pool4
Storage>


To remove a mirror from a file system

To remove a mirror from a file system, enter the following:


Storage> fs rmmirror fs_name [pool_or_disk_name]

fs_name             Specifies the file system from which to remove the mirror. If you specify a file system that does not exist, an error message is displayed.

pool_or_disk_name   Specifies the pool or disk name to remove from the mirrored file system that is spanning on the specified pool or disk. If you specify a pool or disk that is not part of the mirrored file system, an error message is displayed, and no action is taken.

For a striped-mirror file system, if any of the disks are bad, the Storage> fs rmmirror command disables the mirrors on the disks that have failed. If no disks have failed, SFS chooses a mirror to remove. For example:
Storage> fs rmmirror fs1 AMS_WMS0_0
Storage>

Configuring FastResync for a file system


If the power fails or a switch fails, mirrors in a file system may not be in a consistent state. The Storage> fs setfastresync (Fast Mirror Resynchronization (FastResync)) command keeps the mirrors in the file system in a consistent state.

Note: You must have at least two mirrors on the file system to enable FastResync. The setfastresync command is enabled by default.


To enable the FastResync option

To enable FastResync, enter the following:


Storage> fs setfastresync fs_name [pool_or_disk_name]

fs_name             Specifies the name of the file system for which to enable FastResync. If you specify a file system that does not exist, an error message is displayed. If the specified file system already has FastResync enabled, an error message is displayed, and no action is taken.

pool_or_disk_name   Specifies the pool or disk name to remove from the mirrored file system that is spanning the specified pool or disk. If you specify a pool or disk that is not part of the mirrored file system, an error message is displayed, and no action is taken.

For example, to enable FastResync for a file system, enter the following:


Storage> fs setfastresync fs6
Storage>

Disabling the FastResync option for a file system


You can disable the FastResync option for a file system.

To disable the FastResync option

To disable the FastResync option, enter the following:


Storage> fs unsetfastresync fs_name

where fs_name specifies the name of the file system for which to disable FastResync. If you specify a file system that does not exist, an error message is displayed. For example:
Storage> fs unsetfastresync fs6
Storage>

Increasing the size of a file system


To increase the size of a file system, it must be online. If the file system is not online, an error message is displayed, and no action is taken.


To increase the size of a file system to a specified size

To increase the size of a file system to a specified size, enter the following:
Storage> fs growto {primary|secondary} fs_name new_length [pool1[,disk1,...]] [protection=disk|pool]

For example:
Storage> fs growto primary fs1 1G
Storage>

To increase the size of a file system by a specified size

To increase the size of a file system by a specified size, enter the following:
Storage> fs growby {primary|secondary} fs_name length_change [pool1[,disk1,...]] [protection=disk|pool]

For example:
Storage> fs growby primary fs1 50M
Storage>

primary|secondary   Specifies the primary or secondary tier.

fs_name             Specifies the file system whose size will be increased. If you specify a file system that does not exist, an error message is displayed.

new_length          Expands the file system to a specified size. This variable is used with the Storage> fs growto command. The size specified must be a positive number, and it must be bigger than the size of the existing file system. If the new size is not larger than the size of the existing file system, an error message is displayed, and no action is taken.

length_change       Expands the file system by a specified size. This variable is used with the Storage> fs growby command. The size specified must be a positive number. If the resulting size is not larger than the size of the existing file system, an error message is displayed, and no action is taken.


pool1[,disk1,...]   Specifies the pool(s) or disk(s) to use for the file system. If you specify a pool or disk that does not exist, an error message is displayed, and the file system is not resized. You can specify more than one pool or disk by separating the names with a comma, but do not include a space between the comma and the name. To find a list of existing pools and disks, use the Storage> pool list command. See "About configuring storage pools" on page 96. To find a list of the existing disks, use the Storage> disk list command. See "About displaying information for all disk devices" on page 105. The disk needs to be part of the pool or an error message is displayed.

protection          The default value for the protection field is "disk." Available options are:
                    disk - New disks required for increasing the size of the file system must come from the same pool.
                    pool - Pools are used from any available pool.
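The distinction between growto (a target size) and growby (an increment) can be summarized in a small sketch. This is illustrative only, not SFS code; sizes are plain numbers rather than 10M-style strings, and the function names are hypothetical.

```python
# Hypothetical model of the growto/growby distinction described above.
def growto(current_size, new_length):
    """Grow to an absolute size; the target must exceed the current size."""
    if new_length <= current_size:
        raise ValueError("new size must be larger than the existing file system")
    return new_length

def growby(current_size, length_change):
    """Grow by an increment; the increment must be positive."""
    if length_change <= 0:
        raise ValueError("increment must be positive")
    return current_size + length_change
```

So growto(5120, 10240) resizes to 10240, while growby(5120, 50) resizes to 5170.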

Decreasing the size of a file system


You can decrease the size of a file system. To decrease the size of a file system, it must be online. If the file system is not online, an error message is displayed, and no action is taken.

To decrease the size of a file system to a specified size

To decrease the size of a file system, enter the following:


Storage> fs shrinkto {primary|secondary} fs_name new_length

For example:
Storage> fs shrinkto primary fs1 10M
Storage>


To decrease the size of a file system by a specified size

To decrease the size of a file system, enter the following:


Storage> fs shrinkby {primary|secondary} fs_name length_change

For example:
Storage> fs shrinkby primary fs1 10M
Storage>

primary|secondary   Specifies the primary or secondary tier.

fs_name             Specifies the file system whose size will decrease. If you specify a file system that does not exist, an error message is displayed.

new_length          Specifies the size to decrease the file system to. This variable is used with the Storage> fs shrinkto command. The size specified must be a positive number, and it must be smaller than the size of the existing file system. If the new size is not smaller than the size of the existing file system, an error message is displayed, and no action is taken.

length_change       Decreases the file system by a specified size. This variable is used with the Storage> fs shrinkby command. The size specified must be a positive number, and it must be smaller than the size of the existing file system. If the new size is not smaller than the size of the existing file system, an error message is displayed, and no action is taken.

Checking and repairing a file system


The Storage> fs fsck command lets you check and repair a file system.

Warning: Using the Storage> fs fsck command on an online file system can damage the data on the file system. Only use the Storage> fs fsck command on a file system that is offline.


To check and repair a file system

To check and repair a file system, enter the following:


Storage> fs fsck fs_name

where fs_name specifies the file system to check and repair. For example:
Storage> fs fsck fs1
SFS fs ERROR V-288-693 fs1 must be offline to perform fsck.

Changing the status of a file system


The Storage> fs online or Storage> fs offline command lets you mount (online) or unmount (offline) a file system. You cannot access an offline file system from a client.


To change the status of a file system

To change the status of a file system, enter one of the following, depending on which status you are using:
Storage> fs online fs_name
Storage> fs offline fs_name

where fs_name specifies the name of the file system that you want to mount (online) or unmount (offline). If you specify a file system that does not exist, an error message is displayed. For example, to bring a file system online:
Storage> fs list
FS   STATUS   SIZE    LAYOUT  MIRRORS  COLUMNS  USE%  NFS SHARED  CIFS SHARED  SECONDARY TIER
===  ======   ====    ======  =======  =======  ====  ==========  ===========  ==============
fs1  online   5.00G   simple  -        -        10%   no          no           no
fs2  offline  10.00M  simple  -        -        -     no          no           no

Storage> fs online fs2
100% [#] Online filesystem

Storage> fs list
FS   STATUS  SIZE    LAYOUT  MIRRORS  COLUMNS  USE%  NFS SHARED  CIFS SHARED  SECONDARY TIER
===  ======  ====    ======  =======  =======  ====  ==========  ===========  ==============
fs1  online  5.00G   simple  -        -        10%   no          no           no
fs2  online  10.00M  simple  -        -        100%  no          no           no

For example, to place a file system offline:


Storage> fs offline fs1
100% [#] Offline filesystem


Destroying a file system


The Storage> fs destroy command unmounts a file system and releases its storage back to the storage pool. You can only destroy an unshared file system. If a file system is shared by using the NFS> share add filesystem command, you must delete the share before you can destroy the file system.

To destroy a file system

To destroy a file system, enter the following:


Storage> fs destroy fs_name

where fs_name specifies the name of the file system that you want to destroy. For example:
Storage> fs destroy fs1
100% [#] Destroy filesystem

About snapshots
A snapshot is a virtual image of the entire file system. You can create snapshots of a parent file system on demand. Physically, a snapshot contains only data that corresponds to changes made in the parent, and so consumes significantly less space than a detachable full mirror.

Snapshots are used to recover from data corruption. If files, or an entire file system, are deleted or become corrupted, you can replace them from the latest uncorrupted snapshot. You can mount a snapshot and export it as if it were a complete file system. Users can then recover their own deleted or corrupted files.

You can limit the space consumed by snapshots by setting a quota on them. If the total space consumed by snapshots exceeds the quota, SFS rejects attempts to create additional ones.

You can create a snapshot either by using the snapshot create command directly or by creating a schedule that calls the snapshot create command at the intervals you specify. The scheduled method automatically creates the snapshot by storing the following values in the crontab: minutes, hour, day-of-month, month, and day-of-week.


Table 7-3    Snapshot commands

Command            Definition

snapshot create    A storage snapshot is a copy of a set of files and directories as they were at a particular point in the past. SFS supports file system level snapshots. SFS limits the space a snapshot can use. Snapshots use free space in the file system from which they were taken. See "To create a snapshot" on page 134.

snapshot list          Lists all the snapshots for the specified file system. If you do not specify a file system, snapshots of all the file systems are displayed. See "To list snapshots" on page 136.

snapshot destroy       Deletes a snapshot. See "To destroy a snapshot" on page 137.

snapshot online        Mounts a snapshot. See "To mount or unmount snapshots" on page 137.

snapshot offline       Unmounts a snapshot. See "To mount or unmount snapshots" on page 137.

snapshot quota list    Displays snapshot quota information for all the file systems. See "To display snapshot quotas" on page 137.

snapshot quota on      Enables the quota limit, which disallows creation of new snapshots on the given file system when the space used by all of the snapshots of that file system exceeds a given capacity. See "To enable or disable a quota limit" on page 138.

snapshot quota off     Disables the quota limit. The space used by the snapshots is not restricted. See "To enable or disable a quota limit" on page 138.

Configuring snapshots
To create a snapshot

To create a snapshot, enter the following:


Storage> snapshot create snapshot_name fs_name [removable]


snapshot_name   Specifies the name for the snapshot.

fs_name         Specifies the name of the file system.

removable       Valid values are: yes, no. If the removable attribute is yes, and the file system is offline, the snapshot is removed automatically if the file system runs out of space. The default value is removable=no.

For example:
Storage> snapshot create snapshot1 fs1
100% [#] Create snapshot


To list snapshots

To list snapshots, enter the following:


Storage> snapshot list [fs_name] [schedule_name]

fs_name         Displays all of the snapshots of the specified file system. If you do not specify a file system, snapshots of all of the file systems are displayed.

schedule_name   Displays the snapshots created under the specified schedule name. If you do not specify a schedule name, all snapshots created under fs_name are displayed.

For example:
Storage> snapshot list
Snapshot                                 FS   Status   ctime                 mtime                 Removable  Preserved
========                                 ==   ======   =====                 =====                 =========  =========
schedule2_26_Feb_2009_00_15_01           fs2  offline  2009.Feb.26.00:15:04  2009.Feb.26.00:15:04  no         No
schedule2_26_Feb_2009_00_10_01           fs2  offline  2009.Feb.26.00:10:03  2009.Feb.26.00:10:03  no         No
presnap_schedule2_25_Feb_2009_18_00_02   fs2  offline  2009.Feb.25.18:00:04  2009.Feb.25.18:00:04  no         Yes

Snapshot    Displays the name of the created snapshots.

FS          Displays the file system that corresponds to each created snapshot.

Status      Displays whether or not the snapshot is mounted (that is, online or offline).

ctime       Displays the time the snapshot was created.

mtime       Displays the time the snapshot was modified.

Removable   Determines if the snapshot should be automatically removed in case the underlying file system runs out of space. You entered either yes or no in the snapshot create snapshot_name fs_name [removable] command.

Preserved   Determines if the snapshot is preserved when all of the automated snapshots are destroyed.


To destroy a snapshot

To destroy a snapshot, enter the following:


Storage> snapshot destroy snapshot_name fs_name

snapshot_name   Specifies the name of the snapshot to be destroyed.

fs_name         Specifies the name of the file system to which the snapshot belongs.

For example:
Storage> snapshot destroy snapshot1 fs1
100% [#] Destroy snapshot

To mount or unmount snapshots

To mount or unmount snapshots, enter one of the following commands, depending on which operation you want to perform:
Storage> snapshot online|offline snapshot_name fs_name

snapshot_name   Specifies the name of the snapshot.

fs_name         Specifies the name of the file system.

For example, to bring a snapshot online, enter the following:


Storage> snapshot online snapshot1 fs1
100% [#] Online snapshot
Storage>

For example, to place a snapshot offline, enter the following:


Storage> snapshot offline snapshot fs1
100% [#] Offline snapshot

To display snapshot quotas

To display snapshot quotas, enter the following:


Storage> snapshot quota list
FS   Quota  Capacity Limit
==   =====  ==============
fs1  on     1G
fs2  off    0
fs3  off    0


To enable or disable a quota limit

To enable or disable a quota limit, enter the following:


Storage> snapshot quota on fs_name [capacity_limit]
Storage> snapshot quota off [fs_name]

on               Enables the quota limit, which disallows creation of snapshots on the given file system when the space used by all the snapshots of that file system exceeds the given capacity limit.

fs_name          Specifies the name of the file system.

capacity_limit   Specifies a limit on the number of blocks used by all the snapshots of the specified file system. Enter a number followed by K, M, G, or T (for kilo, mega, giga, or terabytes).

off              Disables the quota capacity limit for the specified file system. The space used by the snapshots is not restricted.

For example, to enable the snapshot quota, enter the following:


Storage> snapshot quota on fs1 1024K
Storage> snapshot quota list
FS   Quota  Capacity Limit
==   =====  ==============
fs1  ON     1024K

For example, to disable the snapshot quota, enter the following:


Storage> snapshot quota off fs1
Storage>
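The quota rule described above — snapshot creation refused once the space used by existing snapshots exceeds the capacity limit, and unrestricted when the quota is off — can be sketched as follows. This is an illustrative model only, not SFS code; the function name and plain-number sizes are hypothetical.

```python
# Hypothetical model of the snapshot quota rule. Sizes are plain numbers
# (e.g. blocks); the real CLI accepts K/M/G/T-suffixed limits.
def may_create_snapshot(quota_on, capacity_limit, snapshot_sizes):
    """Return True if a new snapshot may be created under the quota rule."""
    if not quota_on:
        return True  # quota off: space used by snapshots is not restricted
    # quota on: refuse once existing snapshots already exceed the limit
    return sum(snapshot_sizes) <= capacity_limit
```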

About snapshot schedules


The Storage> snapshot schedule commands let you automatically create or remove a snapshot that stores the values for minutes, hour, day-of-the-month, month, and day-of-the-week in the crontab along with the name of the file system. To distinguish the automated snapshots, a time stamp corresponding to their time of creation is appended to the schedule name. For example, if a snapshot is created using the name schedule1 on February 27, 2009 at 11:00 AM, the name becomes: schedule1_Feb_27_2009_11_00_01_IST.
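The naming convention — schedule name plus a creation time stamp and time zone — can be reproduced with strftime. This is an illustrative sketch, not SFS code; the field order is inferred from the schedule1_Feb_27_2009_11_00_01_IST example above, and the function name is hypothetical.

```python
from datetime import datetime

# Hypothetical reconstruction of the automated snapshot naming scheme:
# <schedule>_<Mon>_<DD>_<YYYY>_<HH>_<MM>_<SS>_<TZ>.
def automated_snapshot_name(schedule, created, tz="IST"):
    """Append a creation time stamp and time zone to the schedule name."""
    stamp = created.strftime("%b_%d_%Y_%H_%M_%S")
    return f"{schedule}_{stamp}_{tz}"
```

For a snapshot of schedule1 created on February 27, 2009 at 11:00:01 AM, this yields schedule1_Feb_27_2009_11_00_01_IST, matching the documented example.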


The crontab interprets the numeric values in a different manner than SFS interprets the same values. For example, snapshot schedule create schedule1 fs1 30 2 * * * automatically creates a snapshot every day at 2:30 AM, and does not create snapshots every two and a half hours. If you wanted to create a snapshot every two and a half hours with at most 50 snapshots per schedule name, then run snapshot schedule create schedule1 fs1 50 */30 */2 * * *, where the value */2 implies that the schedule runs every two hours. You can also specify a step value for the other parameters (day-of-month, month, and day-of-week), and you can use a range along with a step value. Specifying a range in addition to the numeric value implies the number of times the crontab skips for a given parameter. For example, to create a snapshot every two and a half hours with no restrictions on the maximum number of snapshots per schedule name, run the following command: snapshot schedule create schedule1 fs1 0 0-59/30 0-23/2 * * *. The crontab interprets a step value and a step-and-range combination in a similar manner.

Table 7-4    Snapshot schedule commands

Command                        Definition

snapshot schedule create       Creates a schedule to automatically create a snapshot of a particular file system. See "To create a snapshot schedule" on page 140.

snapshot schedule modify       Modifies the snapshot schedule of a particular file system. See "To modify a snapshot schedule" on page 141.

snapshot schedule destroyall   Destroys all of the automated snapshots created under a given schedule. This excludes the preserved and online snapshots. See "To remove all snapshots" on page 141.

snapshot schedule preserve     Preserves a limited number of snapshots corresponding to an existing schedule and specific file system name. These snapshots are not removed as part of the automated snapshot removal. See "To preserve snapshots" on page 142.

snapshot schedule show         Displays all schedules that have been set for automatically creating snapshots. See "To display a snapshot schedule" on page 142.

snapshot schedule delete       Deletes the schedule set for automatically creating snapshots for a particular file system or for a particular schedule. See "To delete a snapshot schedule" on page 142.
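The crontab field forms used above — an asterisk, a plain number, a range such as 23-43, a step such as */30, or a range with a step such as 0-23/2 — can be checked with a small matcher. This is an illustrative sketch of standard cron field semantics, not SFS code; for simplicity the bare-asterisk bound is hard-coded at 0-59 rather than varying per field.

```python
# Minimal sketch of how crontab interprets one schedule field:
# "*", a number, a step "*/n", a range "a-b", or a range with a step "a-b/n".
def field_matches(spec, value):
    """Return True if `value` satisfies the cron field `spec`."""
    if spec == "*":
        return True
    base, _, step = spec.partition("/")
    step = int(step) if step else 1
    if base == "*":
        lo, hi = 0, 59          # widest bound; real cron bounds vary per field
    elif "-" in base:
        lo, hi = map(int, base.split("-"))
    else:
        lo = hi = int(base)
    return lo <= value <= hi and (value - lo) % step == 0

# "30" for the minute field matches only minute 30 -- hence "30 2 * * *"
# fires once a day at 2:30 AM, not every two and a half hours.
```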


Configuring snapshot schedules


To create a snapshot schedule

To create a snapshot schedule, enter the following:


Storage> snapshot schedule create schedule_name fs_name max_snapshot_limit minute [hour] [day_of_the_month] [month] [day_of_the_week]

For example, to create a schedule for an automated snapshot creation of a given file system every 3 hours on a daily basis, enter the following:
Storage> snapshot schedule create schedule1 fs1 * 3 * * *
Storage>

When an automated snapshot is created, the entire date value is appended, including the time zone.
schedule_name        Specifies the name of the schedule corresponding to the automatically created snapshot. The schedule_name cannot contain an underscore ('_') as part of its value. For example, sch_1 is not allowed.

fs_name              Specifies the name of the file system. The file system name should be a string.

max_snapshot_limit   Specifies the number of snapshots that can be created for a given file system and schedule name. This field only accepts numeric input. Entering 0 implies that snapshots can be created on a given file system and schedule name without any restriction. Any other value implies that only that number of snapshots can be created for the given file system and schedule name. If the number of snapshots corresponding to the schedule name is equal to or greater than this value, then snapshots that are more than an hour old are automatically destroyed until the number of snapshots is less than the maximum snapshot limit value. The range allowed for this parameter is 0-999.

minute               This parameter may contain either an asterisk (*), which implies "every minute," or a numeric value between 0-59. You can enter */(0-59), a range such as 23-43, or just the *.

hour                 This parameter may contain either an asterisk (*), which implies "every hour," or a numeric value between 0-23. You can enter */(0-23), a range such as 12-21, or just the *.


day_of_the_month   This parameter may contain either an asterisk (*), which implies "run every day of the month," or a numeric value between 1-31. You can enter */(1-31), a range such as 3-22, or just the *.

month              This parameter may contain either an asterisk (*), which implies "run every month," or a numeric value between 1-12. You can enter */(1-12), a range such as 1-5, or just the *. You can also enter the first three letters of any month (must use lowercase letters).

day_of_the_week    This parameter may contain either an asterisk (*), which implies "run every day of the week," or a numeric value between 0-6. Crontab interprets 0 as Sunday. You can also enter the first three letters of the day of the week (must use lowercase letters).
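The max_snapshot_limit auto-removal rule described above can be sketched as follows. This is an illustrative model only, not SFS code: ages are in minutes, 0 means unrestricted, and snapshots older than an hour are destroyed, oldest first, until the count falls below the limit.

```python
# Hypothetical model of the max_snapshot_limit auto-removal rule.
def prune_snapshots(ages_minutes, limit):
    """Return the snapshot ages kept after applying the limit rule."""
    kept = sorted(ages_minutes)        # ascending: youngest first
    if limit == 0:                     # 0 means no restriction
        return kept
    while len(kept) >= limit and kept[-1] > 60:
        kept.pop()                     # destroy the oldest, if over an hour old
    return kept
```

Note that snapshots an hour old or younger are never removed by this rule, even when the count is at the limit.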

To modify a snapshot schedule

To modify a snapshot schedule, enter the following:


Storage> snapshot schedule modify schedule_name fs_name max_snapshot_limit minute [hour] [day_of_the_month] [month] [day_of_the_week]

For example, to modify the existing schedule so that a snapshot is created every 2 hours on the first day of the week, enter the following:
Storage> snapshot schedule modify schedule1 fs1 * 2 * * 1
Storage>

To remove all snapshots

To automatically remove all of the snapshots created under a given schedule and file system name (excluding the preserved and online snapshots), enter the following:
Storage> snapshot schedule destroyall schedule_name fs_name

For example:
Storage> snapshot schedule destroyall schedule1 fs1
Storage>


To preserve snapshots

To preserve a number of snapshots corresponding to an existing schedule and specific file system name, enter the following:
Storage> snapshot schedule preserve schedule_name fs_name snapshot_name

For example, to preserve a snapshot created according to a given schedule and file system name, enter the following:
Storage> snapshot schedule preserve schedule fs1 schedule1_Feb_27_16_42_IST
Storage>

To display a snapshot schedule

To display all of the schedules for automated snapshots, enter the following:
Storage> snapshot schedule show [fs_name] [schedule_name]

fs_name         Displays all of the schedules of the specified file system. If no file system is specified, schedules of all of the file systems are displayed.

schedule_name   Displays the specified schedule. If no schedule name is specified, all of the schedules created under fs_name are displayed.

For example, to display all of the schedules for creating or removing snapshots to an existing file system, enter the following:
Storage> snapshot schedule show fs2
FS   Schedule Name  Max Snapshot  Minute  Hour  Day  Month  WeekDay
==   =============  ============  ======  ====  ===  =====  =======
fs2  schedule2      0             0       2     *    *      *
fs2  schedule2      10            5       *     *    *      *
fs2  schedule1      20            30      16    *    *      5

To delete a snapshot schedule

To delete a snapshot schedule, enter the following:


Storage> snapshot schedule delete fs_name [schedule_name]

For example:
Storage> snapshot schedule delete fs1
Storage>

Chapter 8

Creating and maintaining NFS shares


This chapter includes the following topics:

About NFS file sharing

About NFS file sharing


The Network File System (NFS) protocol enables files hosted by an NFS server to be accessed by multiple UNIX and Linux client systems. Using NFS, a local system can mount and use a disk partition or file system from a remote system (an NFS server), as if it were local. The SFS NFS server exports a disk partition or file system, with selected permissions and options, and makes it available to NFS clients. The selected permissions and options can also be updated, to restrict or expand the permitted use. To remove sharing, unexport the NFS file system. The SFS NFS service is clustered. The NFS clients continuously retry during a failover transition. Even if the TCP connection is broken for a short time, the failover is transparent to NFS clients, and NFS clients regain access transparently as soon as the failover is complete. However, depending on client configuration and the nature of the failure, a client operation may time out, resulting in an error message such as: NFS server not responding, still trying. You use NFS commands to export or unexport your file systems. The NFS> share commands are defined in Table 8-1.


To access the commands, log into the administrative console (for master, system-admin, or storage-admin) and enter the NFS> mode. For login instructions, see "About using the SFS command-line interface."

Table 8-1    NFS mode commands

Command        Definition

share show     Display exported file systems. See "To display exported file systems" on page 144.

share add      Export a file system. See "Adding an NFS share" on page 145.

share delete   Unexport the exported file system. See "Unexporting a file system or deleting NFS options" on page 151.

Displaying exported file systems


You can display the exported file systems and the NFS options that are specified when the file system was exported. To display exported file systems

To display exported file systems, enter the following:


NFS> share show

For example:
NFS> share show
/vx/fs2   * (sync)
/vx/fs3   * (secure,ro,no_root_squash)

The command output displays two columns.

Left-hand column    Displays the file system that was exported. For example: /vx/fs2

Right-hand column   Displays the system that the file system is exported to, and the NFS options with which the file system was exported. For example: * (secure,ro,no_root_squash)

Creating and maintaining NFS shares About NFS file sharing

145

Adding an NFS share


You can export an NFS share with the specified NFS options that can then be accessed by one or more client systems. The new NFS options are updated after the command is run. If you add a file system that has already been exported with a different NFS option (rw, ro, async, or secure, for example), SFS provides a warning message saying that the file system has already been exported. SFS updates (overwrites) the old NFS options with the new NFS options.

File system options appear in parentheses, exactly as they were given at the time of exporting the file system. If a client was not specified when the NFS> share add command was used, then * is displayed as the system to be exported to, indicating that all clients can access the file system. File systems that have been exported to different clients appear as different entries. File systems that are exported to <world> and other specific clients also appear as different entries.

For example, consider the following set of exported file systems, where only the client (1.1.1.1) has read-write access to file system (fs2), while all other clients have read access only:
/vx/fs2   *         (ro)
/vx/fs2   1.1.1.1   (rw)

When sharing a file system, SFS does not check whether the client exists or not. If you add a share for an unknown client, an entry appears in the NFS> share show command output. If the file system does not exist, you cannot export it to any client. SFS gives the following error:
SFS nfs ERROR V-288-0 File system file_system_name is offline or does not exist

You cannot export a non-existent file system. The NFS> show fs command displays the list of exportable file systems. Valid NFS options include the following:
rw               Grants read and write permission to the file system. Hosts mounting this file system will be able to make changes to the file system.

ro               Grants read-only permission to the file system. Hosts mounting this file system will not be able to change it.

sync             Grants synchronous write access to the file system. Forces the server to perform a disk write before the request is considered complete.

async            Grants asynchronous write access to the file system. Allows the server to write data to the disk when appropriate.

secure           Grants secure access to the file system. Requires that clients originate from a secure port. A secure port is between 1-1024.

insecure         Grants insecure access to the file system. Permits client requests to originate from unprivileged ports (those above 1024).

root_squash      Prevents the root user on an NFS client from having root privileges on an NFS mount. This effectively "squashes" the power of the remote root user to the lowest local user, preventing remote root users from acting as though they were the root user on the local system.

no_root_squash   Disables the root_squash option. Allows root users on the NFS client to have root privileges on the NFS server.

wdelay           Causes the NFS server to delay writing to the disk if another write request is imminent. This can improve performance by reducing the number of times the disk must be accessed by separate write commands, reducing write overhead.

no_wdelay        Disables the wdelay option.

The default NFS export options are: sync, ro, root_squash, and wdelay. The no_wdelay option has no effect if the async option is set. For example, you could issue the following commands:
NFS> share add rw,async fs2
NFS> share add rw,sync,secure,root_squash fs3 10.10.10.10

Note: With root_squash, the root user can access the share, but with 'nobody' permissions.
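The export entries shown by NFS> share show follow the familiar client(options) form. The helper below (illustrative only, not SFS code; the function name is hypothetical) builds such an entry, filling in the default options — sync, ro, root_squash, wdelay — when none are given.

```python
# Default NFS export options, as documented above.
DEFAULT_OPTIONS = ("sync", "ro", "root_squash", "wdelay")

def exports_entry(filesystem, options=None, client="*"):
    """Build a '/vx/<fs> client(opt,opt,...)' line as shown by share show."""
    opts = tuple(options) if options else DEFAULT_OPTIONS
    return f"/vx/{filesystem} {client}({','.join(opts)})"
```

For example, exports_entry("fs3", ["rw", "sync", "secure", "root_squash"], "10.10.10.10") reproduces the entry created by the second share add command above.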


To export a file system

To see your exportable online file systems and snapshots, enter the following:
NFS> show fs

For example:
NFS> show fs
FS/Snapshot
===========
fs2
fs3

To see your NFS share options, enter the following:


NFS> share show

For example:
NFS> share show
/vx/fs2   * (sync)
/vx/fs3   * (secure,ro,no_root_squash)

To export a file system, enter the following command:


NFS> share add nfsoptions filesystem [client] nfsoptions filesystem client Comma-separated list of export options from the set. Specifies the name of the file system you want to export. Clients may be specified in the following ways: Single host - specify a host either by an abbreviated name that is recognized by the resolver (DNS is the resolver), the fully qualified domain name, or an IP address. Netgroups - netgroups may be given as @group. Only the host part of each netgroup member is considered for checking membership.

If the client is not given, then the specified file system can be mounted or accessed by any client. To change the options on an existing share, run the command again with the new options; the export is updated after the command is run.

Example using NFS options:


NFS> share add sync fs4
Exporting *:/vx/fs4 with options sync
..Success.
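To illustrate the client forms described above, the following commands are a hypothetical sketch: client1, client1.example.com, the IP address, and the @trusted_hosts netgroup are all placeholders, and the success output (omitted here) would follow the same pattern as the example above.

```
NFS> share add ro fs4 client1
NFS> share add ro fs4 client1.example.com
NFS> share add rw,sync fs4 10.10.10.11
NFS> share add ro fs4 @trusted_hosts
```

Each command restricts the export of /vx/fs4 to the named client or netgroup instead of exporting it to all clients.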

Sharing file systems using CIFS and NFS protocols


SFS provides support for multi-protocol file sharing where the same file system can be exported to both Windows and UNIX users using the CIFS and NFS protocols. The result is an efficient use of storage by sharing a single data set across multiple application platforms. Figure 8-1 shows how file system sharing for the two protocols works.
Figure 8-1  Exporting and/or sharing CIFS and NFS file systems

[Figure: a 2-node SFS cluster exports file system FS1 from shared storage; a Windows user accesses the data through the CIFS protocol and a UNIX user through the NFS protocol.]


Note: When a share is exported over both the NFS and CIFS protocols, the applications running on the NFS and CIFS clients may attempt to concurrently read or write the same file. This may lead to unexpected results, since the locking models used by these protocols are different; for example, an application may read stale data. For this reason, SFS warns you when a share export is requested over one protocol and the same share has already been exported over the other protocol, if at least one of these exports allows write access.


To export a file system to Windows and UNIX users

To export a file system to Windows and UNIX users with read-only and read-write permission respectively, go to CIFS mode and enter the following commands:
CIFS> show
Name                     Value
----                     -----
netbios name             mycluster
ntlm auth                yes
allow trusted domains    no
homedirfs
quota                    0
idmap backend            rid:10000-20000
workgroup                SYMANTECDOMAIN
security                 ads
Domain                   SYMANTECDOMAIN.COM
Domain user              administrator
Domain Controller        SYMSERVER

CIFS> share add fs1 share1 ro
Exporting CIFS filesystem : share1...
CIFS> share show
ShareName    FileSystem    ShareOptions
share1       fs1           owner=root,group=root,ro

Enter the NFS mode and enter the following commands:

CIFS> exit
> nfs
Entering share mode...
NFS> share add rw fs1
SFS nfs WARNING V-288-0 Filesystem (fs1) is already shared over CIFS with 'ro' permission.
Do you want to proceed (y/n): y
Exporting *:/vx/fs1 with options rw ..Success.
NFS> share show
/vx/fs1    * (rw)
NFS>


Unexporting a file system or deleting NFS options


You can unexport a file system that has been exported.
Note: You will receive an error message if you try to remove a file system that does not exist.
To unexport a file system or delete NFS options

To see your existing exported file systems, enter the following command:
NFS> share show

Only the file systems that are displayed can be unexported. For example:
NFS> share show
/vx/fs2    * (sync)
/vx/fs3    * (secure,ro,no_root_squash)

To delete a file system from the export path, enter the following command:
NFS> share delete filesystem [client]

For example:
NFS> share delete fs3
Removing export path *:/vx/fs3
..Success.

filesystem
Specifies the name of the file system you want to delete. The file system name can be a string of characters, but the following characters are not allowed: / \ ( ) < >. For example:
NFS> share delete "*:/vx/example"
You cannot include single or double quotes that do not enclose characters. You cannot use an unmatched single or double quote, as in the following example:
NFS> share delete ' "filesystem


client
Clients may be specified in the following ways:
Single host - specify a host either by an abbreviated name that is recognized by the resolver (DNS is the resolver), the fully qualified domain name, or an IP address.
Netgroups - netgroups may be given as @group. Only the host part of each netgroup member is considered for checking membership.

If the client is included, the file system is removed only from the export path that was directed at that client. If a file system is being exported to a specific client, the NFS> share delete command must specify the client to remove that export path. If the client is not specified, the export that applies to any client is removed.
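As a hypothetical illustration (the address is a placeholder and the output lines are assumed to follow the pattern of the earlier examples), removing an export that was directed at a single client might look like this:

```
NFS> share delete fs3 10.10.10.10
Removing export path 10.10.10.10:/vx/fs3
..Success.
```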

Chapter 9
Using SFS as a CIFS server


This chapter includes the following topics:

About configuring SFS for CIFS
About configuring CIFS for standalone mode
About configuring CIFS for NT domain mode
About leaving an NT domain
Changing NT domain settings
Changing security settings
Changing security settings after the CIFS server is stopped
About configuring CIFS for AD domain mode
Leaving an AD domain
Changing domain settings for AD domain mode
Removing the AD interface
About setting NTLM
About setting trusted domains
About storing account information
About reconfiguring the CIFS service
About managing CIFS shares
Sharing file systems using CIFS and NFS protocols
About SFS cluster and load balancing


About managing home directories
About managing local users and groups
About configuring local groups

About configuring SFS for CIFS


The Common Internet File System (CIFS), also known as the Server Message Block (SMB), is a network file sharing protocol that is widely used on Microsoft and other operating systems. This chapter describes the initial configuration of the SFS CIFS service in three operating modes, and how to reconfigure the SFS CIFS service when some CIFS settings are changed. SFS can be integrated into a network that consists of machines running the following:

Windows 2000 Server
Windows XP
Windows Server 2003
Older Windows NT and Windows 9.x operating systems

You can control and manage the network resources by using Active Directory or NT workgroup domain controllers. Before you use SFS with CIFS, you must have administrator-level knowledge of the Microsoft operating systems, Microsoft services, and Microsoft protocols (including Active Directory and NT services and protocols). You can find more information about them at: www.microsoft.com. To access the commands, log into your administrative console (master, system-admin, or storage-admin) and enter CIFS> mode. For login instructions, go to About using the SFS command-line interface. When serving the CIFS clients, SFS can be configured to operate in one of the modes described in Table 9-1.

Using SFS as a CIFS server About configuring CIFS for standalone mode


Table 9-1  CIFS modes

Standalone
Information about the user and group accounts is stored locally on SFS. SFS also authenticates users locally using the Linux password and group files. This mode of operation is provided for SFS testing and may be appropriate in other cases, for example, when SFS is used in a small network and is not a member of a Windows security domain. In this mode of operation, you must create the local users and groups; they can access the shared resources subject to authorization control.

NT Domain
SFS becomes a member of an NT4 security domain. The domain controller (DC) stores user and group account information, and the Microsoft NTLM or NTLMv2 protocol authenticates users.

Active Directory
SFS becomes a member of an AD security domain and is configured to use the services of the AD domain controller, such as DNS, LDAP, and NTP. Kerberos, NTLMv2, or NTLM authenticates users.

When SFS operates in the NT or AD domain mode, it acts as a domain member server and not as the domain controller.

About configuring CIFS for standalone mode


If you do not have an AD server or NT domain controller, you can use SFS as a standalone server. SFS is used in standalone mode when testing SFS functionality and when it is not a member of a domain. Before you configure the CIFS service for the standalone mode, do the following:

Make sure that the CIFS server is not running.
Set security to user.
Start the CIFS server.

To make sure that the configuration has changed, do the following:

Check the server status.
Display the server settings.

Table 9-2  Configure CIFS for standalone mode commands

server status
Checks the status of the server. See To check the CIFS server status on page 156.


Table 9-2  Configure CIFS for standalone mode commands (continued)

server stop
Stops the server if it is running. See To check the CIFS server status on page 156.

show
Checks the security setting. See To check the security setting on page 157.

set security user
Sets security to user. This is the default value. In standalone mode you do not need to set the domaincontroller, domainuser, or domain. See To check the security setting on page 157.

server start
Starts the service in standalone mode. See To start the CIFS service in standalone mode on page 158.

Configuring CIFS server status for standalone mode


To check the CIFS server status

To check the status of the server, enter the following:


CIFS> server status

By default, security is set to user, which is the required setting for standalone mode. The following example shows that security was previously set to ads:
CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security                 : ads
Domain membership status : Disabled
Domain                   : SYMANTECDOMAIN.COM
Domain Controller        : symantecdomain_ad
Domain User              : administrator

If the server is running, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.


To check the security setting

To check the current settings before setting security, enter the following:
CIFS> show

For example:
Name                     Value
----                     -----
netbios name             mycluster
ntlm auth                yes
allow trusted domains    no
homedirfs
quota                    0
idmap backend            rid:10000-20000
workgroup                SYMANTECDOMAIN
security                 ads
Domain                   SYMANTECDOMAIN.COM
Domain user              administrator
Domain Controller        SYMSERVER

To set security to user, enter the following:


CIFS> set security user
Global option updated.
Note: Restart the CIFS server.


To start the CIFS service in standalone mode

To start the service in standalone mode, enter the following:


CIFS> server start
Starting CIFS Server.....Success.

To display the new settings, enter the following:


CIFS> show

For example:
Name                     Value
----                     -----
netbios name             mycluster
ntlm auth                yes
allow trusted domains    no
homedirfs
quota                    0
idmap backend            rid:10000-20000
workgroup                SYMANTECDOMAIN
security                 user
Domain                   SYMANTECDOMAIN.COM
Domain user              administrator
Domain Controller        SYMSERVER

To make sure that the server is running in standalone mode, enter the following:
CIFS> server status

For example:
CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security : user

The CIFS service is now running in standalone mode. To create local users and groups, go to About managing local users and groups. To export the shares, go to About managing CIFS shares.

Using SFS as a CIFS server About configuring CIFS for NT domain mode


About configuring CIFS for NT domain mode


Before you configure the CIFS service for the NT domain mode, do the following:

Make sure that an NT domain has already been configured.
Make sure that SFS can communicate with the domain controller (DC) over the network.
Make sure that the CIFS server is stopped.
Set the domain user, domain, and domain controller.
Set the security to domain.
Start the CIFS server.

To make sure that the configuration has changed, do the following:


Check the server status.
Display the server settings.

Table 9-3  Configure CIFS for NT domain mode commands

set domainuser
Sets the name of the domain user. The credentials of the domain user will be used at the domain controller while joining the domain. Therefore, the domain user should be an existing NT domain user who has permission to perform the join domain operation. See To set the domain user name for NT mode on page 160.

set domain
Sets the name of the NT domain that you would like SFS to join and become a member of. See To set the domain for the NT domain mode on page 160.

set domaincontroller
Sets the domain controller server name.
Note: If security is set to domain, you can use both the AD server and the Windows NT 4.0 domain controller as domain controllers. However, if you use the Windows NT 4.0 domain controller, you can only use the netbios name of the domain controller to set the domaincontroller parameter.
See To set the domain controller for the NT domain mode on page 161.

set security
Before you set the security for the domain, you must set the domaincontroller, domainuser, and domain. See To set security to domain for the NT domain mode on page 161.


Table 9-3  Configure CIFS for NT domain mode commands (continued)

server start
The server joins the NT domain only when the server is started after issuing the CIFS> set security command. See To start the CIFS server for the NT domain mode on page 162.

Configuring CIFS for the NT domain mode


To set the domain user name for NT mode

To verify that the CIFS server is stopped, enter the following:


CIFS> server status

If the server is running, stop it by entering the following:


CIFS> server stop

To set the user name, enter the following:


CIFS> set domainuser username

where username is an existing NT domain user who has permission to perform the join domain operation. For example:
CIFS> set domainuser administrator
Global option updated.
Note: Restart the CIFS server.

To set the domain for the NT domain mode

To set the domain, enter the following:


CIFS> set domain domainname

where domainname is the name of the domain that SFS will join. For example:
CIFS> set domain SYMANTECDOMAIN.COM
Global option updated.
Note: Restart the CIFS server.


To set the domain controller for the NT domain mode

To set the domain controller, enter the following:


CIFS> set domaincontroller servername

where servername is the netbios name of a Windows NT 4.0 domain controller. For example, if the domain controller is Windows NT 4.0, enter the server name SYMSERVER:
CIFS> set domaincontroller SYMSERVER
Global option updated.
Note: Restart the CIFS server.

To set security to domain for the NT domain mode

To set security to domain, enter the following:


CIFS> set security security

Enter domain for security.


CIFS> set security domain
Global option updated.
Note: Restart the CIFS server.


To start the CIFS server for the NT domain mode

To start the CIFS server, enter the following:


CIFS> server start

You are prompted for the domain user's password:


CIFS> server start
Trying to become a member in domain SYMANTECDOMAIN.COM ...
Enter password for user 'administrator':

When you enter the correct password, the following messages appear:
Joined domain SYMANTECDOMAIN.COM OK
Starting CIFS Server.....Success.

To find the current settings for the domain name, domain controller name, and domain user name, enter the following:
CIFS> show

To make sure that the service is running as a member of the NT domain, enter the following:
CIFS> server status

For example:
CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security                 : domain
Domain membership status : Enabled
Domain                   : SYMANTECDOMAIN.COM
Domain Controller        : SYMSERVER
Domain User              : administrator

The CIFS service is now running in the NT domain mode. You can export the shares, and domain users can access the shares subject to authentication and authorization control.

Using SFS as a CIFS server About leaving an NT domain


About leaving an NT domain


There is no SFS command that explicitly makes SFS leave an NT domain. The leave happens automatically when you change the security or domain settings and then start or stop the CIFS server. Thus, SFS performs the domain leave operation based on the existing security and domain settings and the new administrative commands. The leave operation requires the credentials of the old domain's user. All of the cases for the domain leave operation are documented in Table 9-4.

Table 9-4  Change NT domain settings commands

set domain
Sets the domain. When you change any of the domain settings and you restart the CIFS server, the CIFS server leaves the old domain. Thus, when a change is made to one or more of the domain, domain controller, or domain user settings, the next time the CIFS server is started, it first attempts to leave the existing join, and then joins the NT domain with the new settings. See To change domain settings on page 164.

set security user
Sets security to user. When you change the security setting, and you start or stop the CIFS server, the CIFS server leaves the existing NT domain. For example, if you change the security setting from domain to user and you stop or restart the CIFS server, it leaves the NT domain. See To change security settings on page 165.
If the CIFS server is already stopped, and you change the security to a value other than domain, SFS leaves the domain. This method of leaving the domain is provided so that if a CIFS server is already stopped, and may not be restarted soon, you have a way to leave an existing join to the NT domain. See To change security settings for a CIFS server that has been stopped on page 165.

Changing NT domain settings


Each case assumes that the SFS cluster is part of an NT domain.


Using SFS as a CIFS server Changing NT domain settings

To verify whether the cluster is part of an NT domain

To verify if your cluster is part of the NT domain, enter the following:


CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security                 : domain
Domain membership status : Enabled
Domain                   : SYMANTECDOMAIN.COM
Domain Controller        : SYMSERVER
Domain User              : administrator

To change domain settings

To stop the CIFS server, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To change the domain, enter the following:


CIFS> set domain newdomain.com
Global option updated.
Note: Restart the CIFS server.

where newdomain.com is the new domain name. When you start the CIFS server, the CIFS server tries to leave the existing domain. This requires the old domainuser to enter their password. After the password is supplied, and the domain leave operation succeeds, the CIFS server joins an NT domain with the new settings.

To start the CIFS server, enter the following:


CIFS> server start
Disabling membership in existing domain SYMANTECDOMAIN.COM
Enter password for user 'administrator' of domain SYMANTECDOMAIN.COM :
Left domain SYMANTECDOMAIN.COM
Trying to become a member in domain NEWDOMAIN.COM
Enter password for user 'administrator':

Using SFS as a CIFS server Changing security settings


Changing security settings


To change security settings

To set the security to user, enter the following:


CIFS> set security user
Global option updated.
Note: Restart the CIFS server.

To stop the CIFS server:


CIFS> server stop
Disabling membership in existing domain SYMANTECDOMAIN.COM
Enter password for user 'administrator' of domain SYMANTECDOMAIN.COM :
Stopping CIFS Server.....Success.
Left domain SYMANTECDOMAIN.COM

Changing security settings after the CIFS server is stopped


To change security settings for a CIFS server that has been stopped

To set security to a value other than domain, enter the following:


CIFS> set security user
Disabling membership in existing domain SYMANTECDOMAIN.COM
Enter password for user 'administrator' of domain SYMANTECDOMAIN.COM :
Left domain SYMANTECDOMAIN.COM
Global option updated.
Note: Restart the CIFS server.

If the server is stopped, then changing the security mode will disable the membership of the existing domain.

About configuring CIFS for AD domain mode


This section assumes that an Active Directory domain has already been configured and that SFS can communicate with the AD domain controller (DC) over the network. The AD domain controller is also referred to as the AD server. Before you configure the CIFS service for the AD domain mode, do the following:


Using SFS as a CIFS server About configuring CIFS for AD domain mode

Make sure that the SFS and AD server clocks are reasonably synchronized with each other. The most commonly allowed maximum clock difference is 5 minutes, but it depends on the AD server settings. One way to ensure this is to configure SFS to use the NTP service running on the AD server. You can change the clock settings on the AD server by modifying the Kerberos Policy, which is a part of the Domain Security Policy.
Make sure that SFS is configured to use a DNS service that has entries for the AD domain controller and SFS nodes. You can also use the DNS service running on the AD domain controller.
Make sure that the CIFS server is not running.
Set the AD domain user, AD domain, and domain controller.
Set security to ads.
Start the CIFS server.
Check the server status.
Display the server settings.

Table 9-5  Configure CIFS for AD domain mode commands

set domainuser
Sets the name of the domain user. The domain user's credentials will be used at the domain controller while joining the domain. Therefore, the domain user should be an existing AD user who has the permission to perform the join domain operation. See To set the domain user for AD domain mode on page 167.

set domain
Sets the name of the domain for the AD domain mode that SFS will join. See To set the domain for AD domain mode on page 167.

set domaincontroller
Sets the domain controller server name. See To set the domain controller for AD domain mode on page 168.

set security
Sets security for the domain. You must first set the domaincontroller, domainuser, and domain. See To set security to ads on page 168.


Table 9-5  Configure CIFS for AD domain mode commands (continued)

server start
Starts the server. The CIFS server joins the Active Directory domain only when the server is started after issuing the CIFS> set security command. See To start the CIFS server on page 169.
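Because the join fails when the clock skew exceeds what the AD server allows, it can help to check the skew before starting the CIFS server. The following sketch uses standard Linux NTP tooling from any administrative host that can reach the domain controller; dc.example.com is a placeholder, and these are ordinary shell commands, not SFS commands:

```
# Query the domain controller's NTP service and report the local clock offset
ntpdate -q dc.example.com

# Compare against the local clock; the offset should be well under 5 minutes
date
```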

Configuring CIFS for the AD domain mode


To set the domain user for AD domain mode

To verify that the CIFS server is stopped, enter the following:


CIFS> server status

If the server is running, stop it by entering the following:


CIFS> server stop

To set the domain user, enter the following:


CIFS> set domainuser username

where username is the name of an existing AD domain user who has permission to perform the join domain operation. For example:
CIFS> set domainuser administrator
Global option updated.
Note: Restart the CIFS server.

To set the domain for AD domain mode

To set the domain for AD domain mode, enter the following:


CIFS> set domain domainname

where domainname is the name of the domain. For example:


CIFS> set domain SYMANTECDOMAIN.COM
Global option updated.
Note: Restart the CIFS server.


To set the domain controller for AD domain mode

To set the domain controller, enter the following:


CIFS> set domaincontroller servername

where servername is the server's IP address or DNS name. For example, if the server SYMSERVER has an IP address of 172.16.113.118, you can specify one of the following:
CIFS> set domaincontroller 172.16.113.118
Global option updated.
Note: Restart the CIFS server.

or
CIFS> set domaincontroller SYMSERVER
Global option updated.
Note: Restart the CIFS server.

To set security to ads

To set security to ads, enter the following:


CIFS> set security security

Enter ads for security.


CIFS> set security ads
Global option updated.
Note: Restart the CIFS server.


To start the CIFS server

To start the CIFS server, enter the following:


CIFS> server start
The skew of the system clock with respect to Domain controller is: -17 seconds
Time on Domain controller : Thu Dec 4 05:21:47 2008
Time on this system : Thu Dec 4 05:22:04 PST 2008
If the above clock skew is greater than that allowed by the server, then the system won't be able to join the AD domain
Trying to become a member in AD domain SYMANTECDOMAIN.COM ...
Enter password for user 'administrator':

After you enter the correct password for the user administrator belonging to AD domain SYMANTECDOMAIN.COM, the following message appears:
Joined domain SYMANTECDOMAIN.COM OK
Starting CIFS Server.....Success.

To make sure that the service is running, enter the following:


CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security                 : ads
Domain membership status : Enabled
Domain                   : SYMANTECDOMAIN.COM
Domain Controller        : SYMSERVER
Domain User              : administrator

The CIFS server is now running in the AD domain mode. You can export the shares, and the domain users can access the shares subject to the AD authentication and authorization control.


Using SFS as a CIFS server Leaving an AD domain

Leaving an AD domain
There is no SFS command that explicitly makes SFS leave an AD domain. The leave happens automatically as part of a change in the security or domain settings, when you then start or stop the CIFS server. Thus, SFS performs the domain leave operation based on the existing security and domain settings and the new administrative commands. The leave operation requires the credentials of the old domain's user. All of the cases for a domain leave operation are documented in Table 9-6.

Table 9-6  Change AD domain mode settings commands

set domain
Sets the domain. When you change any of the domain settings and you restart the CIFS server, the CIFS server leaves the old domain. Thus, when a change is made to one or more of the domain, domain controller, or domain user settings, the next time the CIFS server is started, it first attempts to leave the existing join and then joins the AD domain with the new settings. See To change domain settings for AD domain mode on page 172.

set security user
Sets security to user. When you change the security setting from ads to user and you stop or restart the CIFS server, the CIFS server leaves the existing AD domain. See To change the security settings for the AD domain mode on page 173.
If the CIFS server is already stopped, changing the security to a value other than ads causes SFS to leave the domain. The methods mentioned earlier require stopping or starting the CIFS server; this method is provided so that if a CIFS server is already stopped, and may not be restarted in the near future, you still have a way to leave an existing join to the AD domain. See Changing security settings with stopped server on the AD domain mode on page 173.

Using SFS as a CIFS server Changing domain settings for AD domain mode


Changing domain settings for AD domain mode


Each case assumes that the SFS cluster is part of an AD domain.
To verify that the cluster is part of an AD domain

To verify that your cluster is part of an AD domain, enter the following:


CIFS> server status
CIFS Status on SFS_1 : ONLINE
CIFS Status on SFS_2 : ONLINE
Security                 : ads
Domain membership status : Enabled
Domain                   : SYMANTECDOMAIN.COM
Domain Controller        : symantecdomain_ad
Domain User              : administrator


To change domain settings for AD domain mode

To stop the CIFS server, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To change the domain, enter the following:


CIFS> set domain newdomain.com

When you start the CIFS server, it tries to leave the existing domain. This requires the old domainuser to enter their password. After the password is supplied, and the domain leave operation succeeds, the CIFS server joins an AD domain with the new settings.

To start the CIFS server, enter the following:


CIFS> server start
Disabling membership in existing AD domain SYMANTECDOMAIN.COM
Enter password for user 'administrator' of domain SYMANTECDOMAIN.COM :
Left domain SYMANTECDOMAIN.COM
The skew of the system clock with respect to Domain controller is: -18 seconds
Time on Domain controller : Thu Dec 4 05:21:47 2008
Time on this system : Thu Dec 4 05:22:04 PST 2008
If the above clock skew is greater than that allowed by the server, then the system won't be able to join the AD domain
Trying to become a member in AD domain NEWDOMAIN.COM...
Enter password for user 'administrator':

Using SFS as a CIFS server Removing the AD interface


To change the security settings for the AD domain mode

To set the security to user, enter the following:


CIFS> set security user
Global option updated.
Note: Restart the CIFS server.

To stop the CIFS server:


CIFS> server stop
Disabling membership in existing AD domain SYMANTECDOMAIN.COM
Enter password for user 'administrator' of domain SYMANTECDOMAIN.COM :
Stopping CIFS Server.....Success.
Left AD domain SYMANTECDOMAIN.COM

Changing security settings with stopped server on the AD domain mode

To set security to a value other than ads, enter the following:


CIFS> set security user
Disabling membership in existing AD domain SYMANTECDOMAIN.COM
Enter password for user 'administrator':
Left AD domain SYMANTECDOMAIN.COM
Global option updated.
Note: Restart the CIFS server.

Removing the AD interface


You can remove the SFS cluster from the AD domain by using the Active Directory interface.
To remove the SFS cluster

1. Open the Active Directory Users and Computers interface.
2. In the domain hierarchy tree, click Computers.
3. In the details pane, right-click the computer entry corresponding to SFS (it can be identified by the SFS cluster name) and click Delete.

About setting NTLM


When you use SFS in NT or AD domain mode, there is an optional configuration step: you can disable the use of the Microsoft NTLM (NT LAN Manager) protocol for authenticating users.


Using SFS as a CIFS server About setting NTLM

When the SFS CIFS service is running in standalone mode (with security set to user), some versions of the Windows clients require NTLM authentication to be enabled. You can do this by setting CIFS> set ntlm_auth to yes. When NTLM is disabled and you use SFS in NT domain mode, the only protocol available for user authentication is Microsoft NTLMv2. When NTLM is disabled and you use SFS in AD domain mode, the available authentication protocols are Kerberos and NTLMv2. The one used depends on the capabilities of both the SFS clients and the domain controller. If no special action is taken, SFS allows the NTLM protocol to be used. For any specific CIFS connection, all the participants, that is, the client machine, SFS, and the domain controller, select the protocol that they all support and that provides the highest security. In AD domain mode, Kerberos provides the highest security. In NT domain mode, NTLMv2 provides the highest security.

Table 9-7  Set NTLM commands

Command            Definition
set ntlm_auth no   Disables NTLM.
                   See To disable NTLM on page 175.
set ntlm_auth yes  Enables NTLM.
                   See To enable NTLM on page 175.
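The selection of the strongest mutually supported protocol can be pictured with a short sketch. This is an illustration only, not the actual SFS negotiation code; the function name and the protocol lists are hypothetical.

```python
# Illustrative sketch: each CIFS connection uses the most secure
# authentication protocol that the client, SFS, and the domain
# controller all support, ranked here from weakest to strongest.
RANKING = ["NTLM", "NTLMv2", "Kerberos"]

def select_protocol(client, sfs, dc):
    """Return the strongest protocol common to all participants, or None."""
    common = set(client) & set(sfs) & set(dc)
    for proto in reversed(RANKING):  # try strongest first
        if proto in common:
            return proto
    return None

# AD domain mode with NTLM disabled: Kerberos is selected when
# every participant supports it.
print(select_protocol(["NTLM", "NTLMv2", "Kerberos"],
                      ["NTLMv2", "Kerberos"],
                      ["NTLMv2", "Kerberos"]))  # Kerberos
```

In NT domain mode, where Kerberos is unavailable, the same selection falls back to NTLMv2, matching the behavior described above.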


Setting NTLM
To disable NTLM

If the server is running, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To disable NTLM, enter the following:


CIFS> set ntlm_auth no

For example:
CIFS> set ntlm_auth no
Global option updated. Note: Restart the CIFS server.

To start the CIFS service, enter the following:


CIFS> server start
Starting CIFS Server.....Success.

To enable NTLM

If the server is running, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To enable the NTLM protocol, enter the following:


CIFS> set ntlm_auth yes

For example:
CIFS> set ntlm_auth yes
Global option updated. Note: Restart the CIFS server.

To start the CIFS service, enter the following:


CIFS> server start
Starting CIFS Server.....Success.


About setting trusted domains


Microsoft Active Directory supports the concept of trusted domains. When you authenticate users, you can configure domain controllers in one domain to trust the domain controllers in another domain. This establishes a trust relationship between the two domains. When SFS is a member in an AD domain, both SFS and the DC are involved in authenticating the clients. You can configure SFS to support or not support trusted domains.

Table 9-8  Set trusted domains commands

Command                        Definition
set allow_trusted_domains yes  Enables the use of trusted domains in AD domain mode.
                               Note: Depending on the value you specify for idmap_backend, it may or may not be possible to enable AD trusted domains.
                               See To enable AD trusted domains on page 176.
set allow_trusted_domains no   Disables the use of trusted domains in AD domain mode.
                               See To disable trusted domains on page 177.

Setting AD trusted domains


To enable AD trusted domains

If the server is running, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To enable trusted domains, enter the following:


CIFS> set allow_trusted_domains yes

For example:
CIFS> set allow_trusted_domains yes
Global option updated. Note: Restart the CIFS server.

To start the CIFS server, enter the following:


CIFS> server start
Starting CIFS Server.....Success.


To disable trusted domains

If the server is running, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To disable trusted domains, enter the following:


CIFS> set allow_trusted_domains no

For example:
CIFS> set allow_trusted_domains no
Global option updated. Note: Restart the CIFS server.

To start the CIFS server, enter the following:


CIFS> server start
Starting CIFS Server.....Success.

About storing account information


SFS maps between the domain users and groups (their identifiers) and local representation of these users and groups. Information about these mappings can be stored locally on SFS or remotely using the DC directory service. SFS uses the idmap_backend configuration option to decide where this information is stored. This option can be set to one of the following:
rid    Stores the user and group information locally.
ldap   Stores the user and group information in the LDAP directory service.

The rid value can be used in any of the following modes of operation:

standalone
NT domain
AD domain

It is the default value for idmap_backend in all of these operational modes. The ldap value can be used only in AD domain mode.


Table 9-9  Store account information commands

Command                 Definition
set idmap_backend rid   Configures SFS to store information about users and groups locally.
                        Note: This command requires that the allow_trusted_domains variable be set to no, as the command is not compatible with trusted domains.
                        See To set idmap_backend to rid on page 179.
set idmap_backend ldap  Configures SFS to store information about users and groups in a remote LDAP service. You can only use this command when SFS is operating in AD domain mode. The LDAP service can run on the domain controller or it can be external to the domain controller.
                        Note: For SFS to use the LDAP service, the LDAP service must include both RFC 2307 and Samba schema extensions. When idmap_backend is set to ldap, you can enable or disable trusted domains. If idmap_backend is set to ldap, you must first configure the SFS LDAP options using the Network> ldap commands.
                        See About LDAP on page 72.
                        See To set idmap_backend to LDAP on page 179.


Storing user and group accounts


To set idmap_backend to rid

If the server is running, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To store information about user and group accounts locally, enter the following:
CIFS> set idmap_backend rid [uid_range]

where the uid_range represents the range of identifiers which are used by SFS when mapping domain users and groups to local users and groups. The default range is 10000-20000.
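As an illustration of how a rid-style backend can place domain identifiers into the configured local range, consider the following sketch. It is a simplified, hypothetical mapping (the exact algorithm SFS uses internally is not documented here); the function name and the offset rule are assumptions for illustration.

```python
# Simplified sketch of rid-style ID mapping: offset a domain RID
# into the configured local identifier range (default 10000-20000).
def rid_to_uid(rid, uid_range="10000-20000"):
    """Map a domain RID to a local uid inside uid_range (illustrative)."""
    low, high = (int(x) for x in uid_range.split("-"))
    uid = low + rid
    if uid > high:
        raise ValueError("RID %d falls outside the range %s" % (rid, uid_range))
    return uid

print(rid_to_uid(500))  # 10500
```

The key point the sketch shows is that the size of uid_range bounds how many distinct domain users and groups can be mapped, which is why you may want a larger range in big domains.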

To start the CIFS server, enter the following:


CIFS> server start
Starting CIFS Server.....Success.

To set idmap_backend to LDAP

Make sure that you have first configured LDAP by entering the following:
Network> ldap

If the server is running, enter the following:


CIFS> server stop
Stopping CIFS Server.....Success.

To use the remote LDAP store for information about the user and group accounts, enter the following:
CIFS> set idmap_backend ldap

To start the CIFS server, enter the following:


CIFS> server start
Starting CIFS Server.....Success.


About reconfiguring the CIFS service


After you have configured the CIFS service and used it for a while, you may need to change some of the settings. For example, you may want to allow the use of trusted domains, or you may need to move SFS from one security domain to another. To carry out these changes, set the new settings and then start the CIFS server. As a general rule, you should stop the CIFS service before making the changes. An example where SFS is moved to a new security domain (while the mode of operation stays unchanged as AD domain) is shown in Reconfiguring the CIFS service. This example deals only with reconfiguring CIFS, so if SFS uses any of the other AD services, such as DNS or NTP, make sure that SFS has already been configured to use these services from the AD server belonging to the new domain. Make sure that the DNS service, the NTP service and, if used as the ID mapping store, the LDAP service are configured as required for the new domain. To reconfigure the CIFS service, do the following:

Make sure that the server is not running.
Set the domain user, domain, and domain controller.
Start the CIFS server.

Table 9-10  Reconfigure the CIFS service commands

Command               Definition
set domainuser        Changes the configuration option to reflect the values appropriate for the new domain.
                      See To set the user name for the AD on page 181.
set domain            Changes the configuration option to reflect the values appropriate for the new domain.
                      See To set the AD domain on page 181.
set domaincontroller  Changes the configuration option to reflect the values appropriate for the new domain.
                      See To set the AD server on page 182.
server start          Starts the server and causes it to leave the old domain and join the new Active Directory domain. You can only issue this command after you enter the CIFS> set security command.
                      See To start the CIFS server on page 183.


Reconfiguring the CIFS service


To set the user name for the AD

To verify that the CIFS server is stopped, enter the following:


CIFS> server status

If the server is running, stop it by entering the following:


CIFS> server stop

To set the user name for the AD, enter the following:
CIFS> set domainuser username

where username is the name of an existing AD domain user who has permission to perform the join domain operation. For example:
CIFS> set domainuser administrator
Global option updated. Note: Restart the CIFS server.

To set the AD domain

To set the AD domain, enter the following:


CIFS> set domain domainname

where domainname is the name of the domain. This command also sets the system workgroup. For example:
CIFS> set domain NEWDOMAIN.COM
Global option updated. Note: Restart the CIFS server.


To set the AD server

To set the AD server, enter the following:


CIFS> set domaincontroller servername

where servername is the AD server IP address or DNS name. For example, if the AD server SYMSERVER has an IP address of 172.16.113.118, you can specify one of the following:
CIFS> set domaincontroller 172.16.113.118
Global option updated. Note: Restart the CIFS server.

or
CIFS> set domaincontroller SYMSERVER
Global option updated. Note: Restart the CIFS server.

If you use the AD server name, you must configure SFS to use a DNS server which can resolve this name.


To start the CIFS server

To start the CIFS server, enter the following:


CIFS> server start
The skew of the system clock with respect to Domain controller is: 3 seconds
Time on Domain controller : Fri May 30 06:00:03 2008
Time on this system : Fri May 30 06:00:00 PDT 2008
If the above clock skew is greater than that allowed by the server,
then the system won't be able to join the AD domain
Enter password for user 'administrator':
Trying to become a member in AD domain SYMANTECDOMAIN.COM ...
Joined domain SYMANTECDOMAIN.COM OK
Starting CIFS Server..

To make sure that the service is running, enter the following:


CIFS> server status

To find the current settings, enter the following:


CIFS> show

About managing CIFS shares


You can export the SFS file systems to the clients as CIFS shares. When a share is created, it is given a name. The name is different from the file system name. Clients use the share name when they import the share. You create and export a share with one command. The same command binds the share to a file system, and you can also use it to specify share properties. In addition to exporting file systems as CIFS shares, you can use SFS to store users' home directories. Each of these home directories is called a home directory share. Shares which are used to export ordinary file systems (that is, file systems which are not used for home directories) are called ordinary shares to distinguish them from the home directory shares.


Table 9-11  Manage the CIFS shares commands

Command       Definition
share show    Displays information on one or all exported shares. The information displayed for a specific share includes the name of the file system being exported and the values of the share options.
              See To display share properties on page 187.
share add     Exports a file system with the given share name, or re-exports an existing share with new options. The new options are updated after this command is run. This CIFS command, which creates and exports a share, takes as input the name of the file system being exported, the share name, and optional attributes. You can use the same command for a share that is already exported, if it is required to modify the attributes of the exported share. A file system used for storing users' home directories cannot be exported as a CIFS share, and a file system that is exported as a CIFS share cannot be used for storing users' home directories.
              See To export a file system on page 184.
share delete  Stops the associated file system from being exported. Any files and directories which may have been created in this file system remain intact; they are not deleted as a result of this operation.
              See To delete a CIFS share on page 187.

Setting share properties


To export a file system

To export a file system, enter the following:


CIFS> share add filesystem sharename [cifsoptions]

filesystem  An SFS file system that you want to export as a CIFS share. The given file system must not be currently used for storing the home directory shares.
sharename   The name for the newly exported share. Names of the SFS shares are case sensitive and can consist of the following characters: lowercase and uppercase letters "a" - "z" and "A" - "Z", numbers "0" - "9", and the special characters "_" and "-" ("-" cannot be used as the first character in a share name).

cifsoptions  A comma-separated list of export options. This part of the command is optional. If it is not given, SFS uses the default values. (Example: ro,rw,guest,noguest,oplocks,nooplocks,owner=ownername,group=groupname,ip=virtualip). The default values are: ro, noguest, oplocks, owner=root, group=root.

For example, an existing file system called fsA being exported as a share called ABC:
CIFS> share add fsA ABC rw,guest,owner=john,group=abcdev
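The share naming rules above (letters, digits, "_" and "-", with "-" not allowed as the first character) can be checked with a small sketch. This is illustrative only; the actual validation is performed by the CIFS> share add command itself, and the function name here is hypothetical.

```python
import re

# Illustrative check of the SFS share naming rules: lowercase and
# uppercase letters, digits, "_" and "-", where "-" may not be the
# first character of the name.
SHARE_NAME = re.compile(r'^[A-Za-z0-9_][A-Za-z0-9_-]*$')

def valid_share_name(name):
    """Return True if name satisfies the documented share naming rules."""
    return bool(SHARE_NAME.match(name))

print(valid_share_name("ABC"))   # True
print(valid_share_name("-bad"))  # False
```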

There is a share option which specifies if the files in the share will be read-only or if both read and write access will be possible, subject to the authentication and authorization checks when a specific access is attempted. This share option can be given one of these values:
ro  Grants read-only permission to the exported share. Files cannot be created or modified. This is the default value.
rw  Grants read and write permission to the exported share.

Another configuration option specifies if a user trying to establish a CIFS connection with the share must always provide the user name and password, or if they can connect without it. In this case, only restricted access to the share will be allowed. The same kind of access is allowed to anonymous or guest user accounts. This share option can have one of the following values:
guest    SFS allows restricted access to the share when no user name or password is provided.
noguest  SFS always requires the user name and password for all of the connections to this share. This is the default value.

SFS supports the CIFS opportunistic locks. You can enable or disable them for a specific share. The opportunistic locks improve performance for some workloads, and there is a share configuration option which can be given one of the following values:
oplocks  SFS supports opportunistic locks on the files in this share. This is the default value.


nooplocks  No opportunistic locks will be used for this share. Disable the oplocks when:
           1) A file system is exported over both CIFS and NFS protocols.
           2) Either CIFS or NFS protocol has read and write access.

There are more share configuration options that can be used to specify the user and group who own the share. If you do not specify these options for a share, SFS uses the default values for these options, which are the privileged or root SFS user and group. You may want to change the default values to allow a specific user or group to be the share owner.
owner  By default, the SFS root user owns the root directory of the exported share. This lets CIFS clients create folders and files in the share. However, there are some operations which require owner privileges; for example, changing the owner itself, and changing permissions of the top-level folder (that is, the root directory in UNIX terms). To enable these operations, you can set the owner option to a specific user name, and this user can perform the privileged operations.
group  By default, the SFS root group is the primary group owner of the root directory of the exported share. This lets CIFS clients create folders and files in the share. However, there are some operations which require group privileges; for example, changing the group itself, and changing permissions of the top-level folder (that is, the root directory in UNIX terms). To enable these operations, you can set the group option to a specific group name, and this group can perform the privileged operations.
ip     SFS lets you specify a virtual IP address. This address must be part of the SFS cluster, and is used by the system to serve the share internally.

After a file system is exported as a CIFS share, you can decide to change one or more share options. This is done using the same share add command, giving the name of an existing share and the name of the file system exported with this share. SFS will realize the given share has already been exported and that it is only required to change the values of the share options. For example, to export the file system fs1 with name share1, enter the following:
CIFS> share add fs1 share1 "owner=administrator,group=domain users,rw"
Exporting CIFS filesystem : share1 ...
CIFS> share show


ShareName  FileSystem  ShareOptions
share1     fs1         owner=administrator,group=domain users,rw

To display share properties

To display the information about all of the exported shares, enter the following:
CIFS> share show

For example:
CIFS> share show
ShareName  FileSystem  ShareOptions
share1     fs1         owner=root,group=root

To display the information about one specific share, enter the following:
CIFS> share show sharename

For example:
CIFS> share show share1
ShareName  VIP Address
share1     10.10.10.10

To delete a CIFS share

To delete a share, enter the following:


CIFS> share delete sharename

where sharename is the name of the share you want to delete. For example:
CIFS> share delete share1
Unexporting CIFS filesystem : share1 ..
CIFS>

To confirm the share is no longer exported, enter the following:


CIFS> share show
ShareName  FileSystem  ShareOptions
CIFS>


Sharing file systems using CIFS and NFS protocols


SFS provides support for multi-protocol file sharing, where the same file system can be exported to both Windows and UNIX users using the CIFS and NFS (Network File System) protocols. The result is an efficient use of storage by sharing a single data set across multiple application platforms. Figure 9-1 shows how file system sharing for the two protocols works.

Figure 9-1  Exporting file systems
[A two-node SFS cluster exports file system FS1 from shared storage; Windows users access the data by the CIFS protocol and UNIX users by the NFS protocol.]

It is recommended that you disable the oplocks option when the following occurs:

A file system is exported over both the CIFS and NFS protocols.
Either the CIFS or NFS protocol is set with read and write permission.


To disable oplocks, refer to Setting share properties.

Note: When a share is exported over both the NFS and CIFS protocols, the applications running on the NFS and CIFS clients may attempt to concurrently read or write the same file. This may lead to unexpected results, since the locking models used by these protocols are different; for example, an application may read stale data. For this reason, SFS warns you when a share export is requested over NFS or CIFS and the same share has already been exported over CIFS or NFS, when at least one of these exports allows write access.


To export a file system to Windows and UNIX users

Go to the NFS mode and enter the following commands:


NFS> share add ro fs1
Exporting *:/vx/fs1 with options ro ..Success.
NFS> share show
/vx/fs1 * (ro)
NFS> exit

To export a file system to Windows and UNIX users with read-only permission, go to CIFS mode and enter the following commands:

CIFS> show
Name                    Value
----                    -----
netbios name            mycluster
ntlm auth               yes
allow trusted domains   no
homedirfs
quota                   0
idmap backend           rid:10000-20000
workgroup               SYMANTECDOMAIN
security                ads
Domain                  SYMANTECDOMAIN.COM
Domain user             administrator
Domain Controller       SYMSERVER

CIFS> share add fs1 share1 rw
SFS cifs WARNING V-288-0 Filesystem (fs1) is already shared over NFS
with 'ro' permission. Do you want to proceed (y/n): y
Exporting CIFS filesystem : share1 ..
CIFS> share show
ShareName  FileSystem  ShareOptions
share1     fs1         owner=root,group=root,rw
CIFS>

When the file system in CIFS is set to homedirfs, the SFS software assumes that the file system is exported to CIFS users in read and write mode. SFS does not allow you to export the same file system as a CIFS share and a home directory file system (homedirfs). For example, if the file system fs1 is already exported as a CIFS share, then you cannot set it as homedirfs.


To export a file system set as homedirfs

To request that a file system be used for home directories, you need to export the file system. Go to the CIFS mode and enter the following:
CIFS> share show
ShareName  FileSystem  ShareOptions
share1     fs1         owner=root,group=root,rw
CIFS> set homedirfs fs1
SFS cifs ERROR V-288-615 Filesystem (fs1) is already exported by another CIFS share.
CIFS>

About SFS cluster and load balancing


CIFS users can access an exported share on any of the SFS nodes. All of the nodes can concurrently perform file operations. All of the file systems are mounted on every node, and the exported shares are exported from every node. However, there is a restriction: only one node at a time can perform file operations on a single share. The decision about which node is currently allowed to perform the file operations for a specific share is made by the SFS software and is transparent to the CIFS users. When a CIFS share is accessed by a node that is not the owner of that share, SFS transparently redirects the access to the node that is the owner of that share, so all of the processing for a CIFS share is performed by the node that is designated as the owner of that share. If the SFS workload is found to be too high on a node that owns a share, you can "split" the share by using the CIFS> split command. By splitting a share:

Each of the share's top-level directories is treated as a separate share. Each top-level directory becomes the root of a new share, and only one node at a time can perform the file operations on this new share.
The ownership of different top-level directories is assigned to different nodes in the SFS cluster, balancing the CIFS-related workload.

Caution: You cannot specify which node owns the split share. If the node getting the ownership already has a heavy load, the new load distribution may worsen your situation.


Use the CIFS> share show command to view which virtual IP is assigned to a share. Use the Network> ip addr show command to view which node is assigned a virtual IP. This shows which node is the current owner of the exported shares.

Splitting a share
You can split an exported share with the split command. This changes the way a CIFS-related workload is allocated to the SFS nodes. The purpose of the split command is to have multiple nodes serving a large share. Although the command distributes the subdirectory shares in a round-robin fashion, the split is not based on the actual load. Restrictions for the split command include the following:

You cannot split a sharename more than once.
You cannot delete the subdirectory share of a split share.
You cannot undo the effects of the split command.
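The round-robin distribution performed by the split command can be pictured with a sketch. This is hypothetical illustration code, not the actual SFS implementation; the function name, node names, and directory names are assumptions.

```python
# Hypothetical sketch: assign each top-level directory of a split
# share to a cluster node in round-robin order, so that ownership
# (and the CIFS workload) is spread across the nodes.
def assign_round_robin(directories, nodes):
    """Map each directory name to an owning node, cycling through nodes."""
    return {d: nodes[i % len(nodes)] for i, d in enumerate(directories)}

owners = assign_round_robin(["Finan", "HR", "Mark", "Prod"],
                            ["node1", "node2"])
# Finan and Mark land on node1; HR and Prod land on node2.
```

As the restrictions above note, the real split is made internally by SFS and cannot be directed at a particular node, which is why a split can still leave an already busy node with new ownership.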


To split a share

To split a share, enter the following:


CIFS> split sharename [DirName]

sharename  The name of the share you want to split. The command distributes the top-level directories of a file system across the SFS nodes. You must first split the share before you can enter a directory name. After you have split the share, enter CIFS> share show sharename for a list of directories.
DirName    The name of the new top-level share directory in the split share. This optional variable adds a top-level directory to a file system, whose corresponding share may or may not have been split.

For example:
CIFS> split share1
Splitting share splitshare : .........Success.

To display the list of all of the CIFS shares, enter the following command. In the output, an asterisk and the word split indicate that a share is split.

CIFS> share show
ShareName  FileSystem  ShareOptions
share1*    fs3         split,rw
share2     fs2         rw,guest
share3     fs3         ro,oplocks

To display the details of a share name, enter the following:


CIFS> share show share1
DirName  VIP Address
Finan    172.16.113.116
HR       172.16.113.117
Mark     172.16.113.118
Prod     172.16.113.119


To create a new top-level directory in a split share, use the split command with a directory name. For example, to create a new top-level directory called newdir in an already split share called share1, enter the following:

CIFS> split share1 newdir
Creating directory: newdir
Success: Directory 'newdir' created

About managing home directories


You can use SFS to store the home directories of CIFS users. The home directory share name is identical to the SFS user name. When SFS receives a new CIFS connection request, it checks if the requested share is one of the ordinary exported shares. If it is not, SFS checks if the requested share name is the name of an existing SFS user (either a local user or a domain user, depending on the current mode of operation). If a match is found, the received connection request is for a home directory share. You can access your home directory share the same way you access ordinary file system shares. A user can connect only to his or her own home directory.

Table 9-12  Home directory commands

Command           Definition
set homedirfs     Specifies one or more file systems to be used for home directories.
                  See To specify one or more file systems as the home directories on page 195.
homedir quota     Enables use of quotas on home directory file systems.
                  See To enable use of quotas on home directory file systems on page 197.
homedir set       Manually creates a home directory.
                  See To manually create a home directory on page 198.
homedir setall    Sets the quota for all of the users. The command also modifies the value of the global quota.
                  See To set the quota value for all of the home directories on page 199.
homedir show      Displays information about home directories.
                  See To display information about home directories on page 200.


Table 9-12  Home directory commands (continued)

Command            Definition
homedir delete     Deletes a home directory share.
                   See To delete a home directory share on page 200.
homedir deleteall  Deletes the home directories.
                   See To delete the home directories on page 201.

Setting the home directory file systems


Home directory shares are stored in one or more file systems. A single home directory can exist only in one of these file systems, but a number of home directories can exist in a single home directory file system. The file systems which are to be used for home directories are specified using the CIFS> set homedirfs command.

To specify one or more file systems as the home directories

To reserve one or more file systems for home directories, enter the following:
CIFS> set homedirfs [filesystemlist]

where filesystemlist is a comma-separated list of names of the file systems which are used for the home directories. For example:
CIFS> set homedirfs fs1,fs2,fs3
Global option updated. Note: Restart the CIFS server.

If you want to remove the file systems you previously set up, enter the command again, without any file systems:
CIFS> set homedirfs

To find which file systems (if any) are currently used for home directories, enter the following:
CIFS> show

After you select one or more of the file systems to be used in this way, you cannot export the same file systems as ordinary CIFS shares. If you want to change the current selection, for example, to add an additional file system to the list of home directory file systems or to specify that no file system should be used for home directories, you have to use the same CIFS> set homedirfs command. In each case you must enter the entire new list of home directory file systems, which may be an empty list when no home directory file systems are required. SFS treats home directories differently from ordinary shares. The differences are as follows:

An ordinary share is used to export a file system, while a number of home directories can be stored in a single file system.
The file systems used for home directories cannot be exported as ordinary shares.
The CIFS> split command can be used for an ordinary share but not for a home directory share.
Exporting a home directory share is done differently than exporting an ordinary share. Also, removing these two kinds of shares is done differently.
The configuration options you specify for an ordinary share (such as read-only or use of opportunistic locks) are different from the ones you specify for a home directory share.

Enabling quotas on home directory file systems


You can use the CIFS> homedir quota command to enable or disable the use of quotas and check if the quotas are enabled or disabled. Note: When quotas on home directory file systems are disabled, the CIFS> homedir show command does not show values for quotas.


To enable use of quotas on home directory file systems

To enable the use of quotas, enter the following:


CIFS> homedir quota quotaoption

where quotaoption is on, off, or status. To enable the use of quotas, enter the following:
CIFS> homedir quota on

To disable the use of quotas, enter the following:


CIFS> homedir quota off

To check the status of quotas, enter the following:


CIFS> homedir quota status

Setting up home directories and use of quotas


You can manually create a home directory with the CIFS> homedir set command, or SFS can create it automatically when it accesses the home directory for the first time. The homedir set command lets you specify or change a quota value for the given home directory. The automatic method does not let you specify a quota value at the time of creation. You can specify a global quota value by using the CIFS> homedir setall quota command. See To set the quota value for all of the home directories on page 199. Once the global quota value is specified, the value applies to automatically created home directories. For example, if you set the global quota value with CIFS> homedir setall 100M and you then create a new home directory in Windows, the 100M quota value is assigned to that home directory.


To manually create a home directory

To manually create a home directory, enter the following:


CIFS> homedir set username [domainname] [quota]

username    The name of the new home directory.
domainname  The domain for the new home directory.
quota       The storage space quota to be used for this home directory. The allowed values for quota are:
            0 - Enter zero if there is no quota for this home directory.
            N - Enter a number greater than zero, optionally followed by: k, K, m, M, g, G, t, or T (for kilo, mega, giga, or terabyte). If you do not enter a letter, the value is in bytes.

To find the current settings for a home directory, including the quota, enter the following:
CIFS> homedir show [username] [domainname]

username    The name of the home directory.
domainname  The Active Directory/Windows NT domain name, or specify 'local' for the SFS local user [local].

To find the current settings for all home directories, including quotas, enter the following:
CIFS> homedir show

When you connect to your home directory for the first time, and if the home directory has not already been created, SFS selects one of the available home directory file systems and creates the home directory there. The file system is selected in a way that tries to keep the number of home directories balanced across all available home directory file systems. The automatic creation of a home directory does not require any commands, and is transparent to both the users and the SFS administrators. The quota limits the amount of disk space you can allocate for the files in a home directory. You can set the same quota value for all home directories using the CIFS> homedir setall command.


To set the quota value for all of the home directories

To set the quota value which will be applied to all home directories, enter the following:
CIFS> homedir setall quota

where quota is the number you want to set.


quota  The allowed values for quota are:
       0 - Enter zero if there is no quota for this home directory.
       N - Enter a number greater than zero, optionally followed by k, K, m, M, g, G, t, or T (for kilo, mega, giga, or terabytes). If you do not enter a letter, the value is in bytes.

For example:
CIFS> homedir setall 6M
Setting quota for CIFS local user: usr1
Setting quota for CIFS local user: usr2
Setting quota for SFSQA domain user: administrator
Setting quota for SFSQA domain user: smith
Done
CIFS>

SFS CIFS currently uses soft quotas for home directories. This means that the storage space quota can be exceeded, but only for a period of time. This period is seven days and it cannot be changed. After this period has expired, if the allocated space is still over the limit, any new request to allocate space for files in the same home directory fails.
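The soft-quota behavior described above can be sketched as a simple decision function. The seven-day grace period is the fixed value stated in the text; the function and parameter names are hypothetical, not SFS internals.

```python
GRACE_PERIOD_DAYS = 7  # fixed period stated in the text; it cannot be changed

def allocation_allowed(used_bytes: int, quota_bytes: int,
                       days_over_quota: int) -> bool:
    """Return True if a new space allocation should be permitted.

    With a soft quota, usage may exceed the limit, but only for the
    grace period. Once the period has expired and usage is still over
    the limit, new allocation requests fail.
    """
    if quota_bytes == 0:           # 0 means no quota
        return True
    if used_bytes <= quota_bytes:  # under the limit: always allowed
        return True
    return days_over_quota <= GRACE_PERIOD_DAYS

print(allocation_allowed(150, 100, 3))  # True: still inside the grace period
print(allocation_allowed(150, 100, 8))  # False: grace period expired
```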

Displaying home directory usage information


You can display information about home directories using the CIFS> homedir show command.

Note: Information about home directory quotas is up to date only when you enable the use of quotas for the home directory file systems.


To display information about home directories

To display information about a specific user's home directory, enter the following:
CIFS> homedir show [username] [domainname]

username    The name of the home directory.
domainname  The domain where the home directory is located.

To display information about all home directories, enter the following:


CIFS> homedir show

Deleting home directories and disabling creation of home directories


You can delete a home directory share. This also deletes the files and sub-directories in the share. After a home directory is deleted, if you try to access the same home directory again, a new home directory will automatically be created. If you have an open file when the home directory is deleted, and you try to save the file, a warning appears:
Warning: Make sure the path or filename is correct. Save dialog?

Click the Save button, which saves the file to a new home directory.

To delete a home directory share

To delete the home directory of a specific user, enter the following:


CIFS> homedir delete username [domainname]
Do you want to delete homedir for username (y/n):

username    The name of the home directory you want to delete. Respond with y(es) or n(o) to confirm the deletion.
domainname  The domain it is located in.

You can delete all of the home directory shares with the CIFS> homedir deleteall command. This also deletes all files and subdirectories in these shares.

Using SFS as a CIFS server About managing local users and groups


After you delete the existing home directories, you can again create the home directories manually or automatically.

To delete the home directories

To delete all home directories, enter the following:


CIFS> homedir deleteall
Do you want to delete all home directories (y/n):

Respond with y(es) or n(o) to confirm the deletion. After you delete the home directories, you can stop SFS from serving home directories by using the CIFS> set homedirfs command.

To disable creation of home directories

To specify that there are no home directory file systems, enter the following:
CIFS> set homedirfs

After these steps, SFS does not serve home directories.

About managing local users and groups


When SFS is operating in the standalone mode, only the local users and groups of users can establish CIFS connections and access the home directories and ordinary shares. The SFS local files store the information about these user and group accounts. Local procedures authenticate and authorize these users and groups based on the use of names and passwords. You can manage the local users and groups as described in the rest of this topic. You can create and delete accounts for local users, and display information about them, using the CIFS> local user commands.

Table 9-13  Manage local users and groups commands

local user add  Adds a new user to CIFS. You can add the user to a local group by entering the group name in the optional grouplist variable. Before you add the user to a grouplist, you must create the grouplist. When you create a local user, SFS assigns a default password to the new account. The default password is the same as the user name. For example, if you enter usr1 for the user name, the default password is also usr1. See To create the new local CIFS user on page 202.


Table 9-13  Manage local users and groups commands (continued)

local password      The default password for a newly-created account is the same as the user name. You can change the default password using the CIFS> local password command. The maximum password length is eight characters. See To set the local user password on page 203.
local user delete   Deletes local user accounts. See To delete the local CIFS user on page 204.
local user show     Displays the user ID and lists the groups to which the user belongs. If you do not enter the optional username, the command lists all existing CIFS users. See To display the local CIFS user(s) on page 203.
local user members  Adds a user to one or more groups. For existing users, this command changes the user's group membership. See To change a user's group membership on page 204.

Creating a local CIFS user


To create the new local CIFS user

To create a local CIFS user, enter the following:


CIFS> local user add username [grouplist]

where username is the name of the user. The grouplist is a comma-separated list of group names. For example:
CIFS> local user add usr1 grp1,grp2
Adding USER : usr1
Success: User usr1 created successfully


To set the local user password

To set the local password, enter the following:


CIFS> local password username

where username is the name of the user whose password you are changing. For example, to reset the local user password for usr1, enter the following:
CIFS> local password usr1
Changing password for usr1
New password:*****
Re-enter new password:*****
Password changed for user: 'usr1'

To display the local CIFS user(s)

To display local CIFS users, enter the following:


CIFS> local user show [username]

where username is the name of the user. For example, to list all local users:
CIFS> local user show
List of Users
-------------
usr1
usr2
usr3

To display one local user, enter the following:


CIFS> local user show usr1
Username : usr1
UID      : 1000
Groups   : grp1


Using SFS as a CIFS server About configuring local groups

To delete the local CIFS user

To delete a local CIFS user, enter the following:


CIFS> local user delete username

where username is the name of the local user you want to delete. For example:
CIFS> local user delete usr1
Deleting User: usr1
Success: User usr1 deleted successfully

To change a user's group membership

To change a user's group membership, enter the following:


CIFS> local user members username grouplist

where username is the local user name being added to the grouplist. Group names in the grouplist must be separated by commas. For example:
CIFS> local user members usr3 grp1,grp2
Success: usr3's group modified successfully

About configuring local groups


A local user can be a member of one or more local groups. This group membership is used in the standalone mode to determine whether the given user can perform some file operations on an exported share. You can create, delete, and display information about local groups using the CIFS> local group commands.

Table 9-14  Configure local groups commands

local group add   Creates a local CIFS group. See To create a local group on page 205.
local group show  Displays the list of available local groups you created. See To list all local groups on page 205.


Table 9-14  Configure local groups commands (continued)

local group delete  Deletes a local CIFS group. See To delete the local CIFS groups on page 206.

Configuring a local group


To create a local group

To create a local group, enter the following:


CIFS> local group add groupname

where groupname is the name of the local group. For example:


CIFS> local group add grp1
Adding GROUP: grp1
Success: Group grp1 created successfully

To list all local groups

To list all existing local groups, enter the following:


CIFS> local group show [groupname]

where groupname is the name of a specific group. If you specify groupname, the command lists all of the users that belong to that specific group. For example:
CIFS> local group show
List of groups
--------------
grp1
grp2
grp3

For example:
CIFS> local group show grp1
GroupName  UsersList
---------  ---------
grp1       usr1, usr2, usr3, usr4


To delete the local CIFS groups

To delete the local CIFS group, enter the following:


CIFS> local group delete groupname

where groupname is the name of the local CIFS group. For example:
CIFS> local group delete grp1
Deleting Group: grp1
Success: Group grp1 deleted successfully

Chapter 10

Using FTP
This chapter includes the following topics:

About FTP
Displaying FTP server
About FTP server commands
About FTP set commands
About FTP session commands
Using the logupload command

About FTP
The File Transfer Protocol (FTP) server feature allows clients to access files on the SFS servers using the FTP protocol. The FTP service provides secure and non-secure access via FTP to files on the SFS servers. The FTP service runs on all of the nodes in the cluster and provides simultaneous read and write access to the files. The FTP service also provides configurable anonymous access to the filer.

The FTP commands are used to configure the FTP server. By default, the FTP server is not running. You can start the FTP server using the FTP> server start command. The FTP server starts on the standard FTP port 21.

FTP mode commands are listed in Table 10-1. To access the commands, log into the administrative console (master, system-admin, or storage-admin) and enter the FTP> mode. For login instructions, go to About using the SFS command-line interface.


Using FTP Displaying FTP server

Table 10-1  FTP mode commands

show       Displays the FTP server settings. See Displaying FTP server on page 208.
server     Starts, stops, and displays the status of the FTP server. See About FTP server commands on page 208.
set        Configures the FTP server. See About FTP set commands on page 210.
session    Displays and terminates the FTP sessions. See About FTP session commands on page 216.
logupload  Uploads the FTP logs to a URL. See Using the logupload command on page 219.

Displaying FTP server


To display the FTP settings

To display the FTP settings, enter the following:


FTP> show
Parameter            Current Value
---------            -------------
max_connections      2000
anonymous_logon      no
anonymous_write      no
allow_non_ssl        yes
anonymous_login_dir  /vx/
passive_port_range   30000:40000
idle_timeout         15 minutes

About FTP server commands


The FTP> server commands start, stop, and display the status of the FTP server.

Using FTP About FTP server commands


Note: All configuration changes made using the FTP> set commands come into effect only when the FTP server is restarted.

Table 10-2  FTP server commands

server status  Displays the status of the FTP server. See To display the FTP server status on page 209.
server start   Starts the FTP server on all nodes. If the FTP server is already started, the SFS software clears any faults and tries to start the FTP server. See To start the FTP server on page 209.
server stop    Stops the FTP server and terminates any existing FTP sessions. By default, the FTP server is not running. See To stop the FTP server on page 210.

Using the FTP server commands


To display the FTP server status

To display the FTP server status, enter the following:


FTP> server status
FTP Status on sfs_1 : OFFLINE
FTP Status on sfs_2 : OFFLINE

To start the FTP server

To start the FTP server, enter the following:


FTP> server start
FTP>

To check server status, enter the following:


FTP> server status
FTP Status on sfs_1 : ONLINE
FTP Status on sfs_2 : ONLINE


Using FTP About FTP set commands

To stop the FTP server

To stop the FTP server, enter the following:


FTP> server stop
FTP>

To check the server status, enter the following:


FTP> server status
FTP Status on sfs_1 : OFFLINE
FTP Status on sfs_2 : OFFLINE

About FTP set commands


The FTP> set commands let you set various configurable options for the FTP server.

Table 10-3  FTP set commands

set anonymous_logon      Tells the FTP server whether or not to allow anonymous logons. Enter yes to allow anonymous users to log on to the FTP server. Enter no (default) to disallow anonymous logons. For the changes to take effect, you need to restart the FTP server: enter FTP> server stop followed by FTP> server start. See To set anonymous logons on page 213.
set anonymous_login_dir  Specifies the login directory for anonymous users. The default value of this parameter is /vx/. Valid values of this parameter start with /vx/. Make sure that the anonymous user (UID:40 GID:49 UNAME:ftp) has the appropriate permissions to read files in login_directory. For the changes to take effect, you need to restart the FTP server: enter FTP> server stop followed by FTP> server start. See To set anonymous logins on page 214.


Table 10-3  FTP set commands (continued)

set anonymous_write  Specifies whether or not anonymous users have write access in their login_directory. Enter yes to allow anonymous users to modify the contents of their login_directory. Enter no (default) to disallow such modifications. Make sure that the anonymous user (UID:40 GID:49 UNAME:ftp) has the appropriate permissions to modify files in their login_directory. For the changes to take effect, you need to restart the FTP server: enter FTP> server stop followed by FTP> server start. See To set anonymous write access on page 214.
set allow_non_ssl    Specifies whether or not to allow non-secure (plain-text) logins to the FTP server. Enter yes (default) to allow non-secure (plain-text) logins to succeed. Enter no to cause non-secure (plain-text) logins to fail. For the changes to take effect, you need to restart the FTP server: enter FTP> server stop followed by FTP> server start. See To set non-secure logins on page 214.
set max_connections  Specifies the maximum number of simultaneous FTP clients allowed. Valid values for this parameter range from 1-9999. The default value is 2000. This setting affects the entire cluster. For the changes to take effect, you need to restart the FTP server: enter FTP> server stop followed by FTP> server start. See To set maximum connections on page 215.


Table 10-3  FTP set commands (continued)

set passive_port_range  Specifies the range of port numbers to listen on for passive FTP transfers. The port_range defines a range specified as startingport:endingport. A port_range of 30000:40000 specifies that port numbers from 30000 to 40000 can be used for passive FTP. Valid values for port numbers range from 30000 to 50000. The default value of this option is 30000:40000. For the changes to take effect, you need to restart the FTP server: enter FTP> server stop followed by FTP> server start. See To set range of port numbers on page 215.
set idle_timeout        Specifies the amount of time in minutes after which an idle connection is disconnected. Valid values for time_in_minutes range from 1 to 600 (the default value is 15 minutes). For the changes to take effect, you need to restart the FTP server: enter FTP> server stop followed by FTP> server start. See To set idle timeout on page 215.


Using the set commands


To set anonymous logons

To enable anonymous logons, enter the following:


FTP> set anonymous_logon yes|no

yes           Allows anonymous users to log on to the FTP server.
no (default)  Does not allow anonymous logons.

You need to stop and then start the server for the new setting to take effect. For example:
FTP> set anonymous_logon yes
FTP> show
Parameter            Current Value  New Value
---------            -------------  ---------
max_connections      2000
anonymous_logon      no             yes
anonymous_write      no
allow_non_ssl        yes
anonymous_login_dir  /vx/
passive_port_range   30000:40000
idle_timeout         15 minutes
FTP> server stop
FTP> server start
FTP> show
Parameter            Current Value
---------            -------------
max_connections      2000
anonymous_logon      yes
anonymous_write      no
allow_non_ssl        yes
anonymous_login_dir  /vx/
passive_port_range   30000:40000
idle_timeout         15 minutes


To set anonymous logins

To set anonymous logins, enter the following:


FTP> set anonymous_login_dir login_directory

where login_directory is the login directory of the anonymous users on the FTP server.

To set anonymous write access

To set anonymous write access, enter the following:


FTP> set anonymous_write yes|no

yes           Allows anonymous users to modify the contents of their login_directory.
no (default)  Does not allow anonymous users to modify the contents of their login_directory.

For example:
FTP> set anonymous_write yes
FTP>

To set non-secure logins

To set non-secure login access to the FTP server, enter the following:
FTP> set allow_non_ssl yes|no

yes (default)  Allows non-secure (plain-text) logins to succeed.
no             Causes non-secure (plain-text) logins to fail.

For example:
FTP> set allow_non_ssl no
FTP>


To set maximum connections

To set the maximum number of allowed simultaneous FTP clients, enter the following:
FTP> set max_connections connections_number

where connections_number is the number of concurrent FTP connections allowed on the FTP server. For example:
FTP> set max_connections 3000
FTP>

To set range of port numbers

To set the range of port numbers to listen on for passive FTP transfers, enter the following:
FTP> set passive_port_range port_range

where port_range is the range of port numbers to listen on for passive FTP transfers. For example:
FTP> set passive_port_range 35000:45000
FTP>
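The stated rules for port_range — a startingport:endingport pair with both ports between 30000 and 50000 — can be captured in a small validator. This Python sketch is illustrative only; SFS performs its own validation, and the check that the starting port is lower than the ending port is an assumption, not documented behavior.

```python
def validate_passive_port_range(port_range: str) -> tuple:
    """Validate a passive FTP port range of the form 'start:end'.

    Both ports must fall within 30000-50000, per the documented rule.
    The start < end requirement is an assumption.  Returns the
    (start, end) pair if valid, otherwise raises ValueError.
    """
    start_text, sep, end_text = port_range.partition(":")
    if not sep:
        raise ValueError("expected startingport:endingport")
    start, end = int(start_text), int(end_text)
    if not (30000 <= start <= 50000 and 30000 <= end <= 50000):
        raise ValueError("port numbers must be between 30000 and 50000")
    if start >= end:
        raise ValueError("starting port must be less than ending port")
    return start, end

print(validate_passive_port_range("35000:45000"))  # (35000, 45000)
```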

To set idle timeout

To set the amount of time a connection can stay idle before disconnecting, enter the following:
FTP> set idle_timeout time_in_minutes

where time_in_minutes is the amount of time you want the connection to stay idle before disconnecting. For example:
FTP> set idle_timeout 30
FTP>


Using FTP About FTP session commands

To implement set command changes

To view all of the FTP> set command changes, enter the following:
FTP> show
Parameter            Current Value  New Value
---------            -------------  ---------
max_connections      2000           3000
anonymous_logon      no             yes
anonymous_write      no             yes
allow_non_ssl        yes            no
anonymous_login_dir  /vx/
passive_port_range   30000:40000    35000:45000
idle_timeout         15 minutes     30 minutes

To implement the new changes, enter the following:


FTP> server stop
FTP> server start

To view the new command settings, enter the following:


FTP> show
Parameter            Current Value
---------            -------------
max_connections      3000
anonymous_logon      yes
anonymous_write      yes
allow_non_ssl        no
anonymous_login_dir  /vx/
passive_port_range   35000:45000
idle_timeout         30 minutes

About FTP session commands


The FTP> session commands allow you to view or terminate the FTP sessions that are currently active.


Table 10-4  FTP session commands

session show        Displays the number of current FTP sessions on each node. See To display the current FTP sessions on page 217.
session showdetail  Displays the details of each session that matches the filter_options criteria. If no filter_options are specified, all sessions are displayed. If multiple filter options are provided, sessions matching all of the filter options are displayed. Filter options can be combined by using ','. The details displayed include: Session ID, User, Client IP, Server IP, State (UL for uploading, DL for downloading, or IDLE), and File (the name of the file being uploaded or downloaded). If a '?' appears under User, the session is not yet authenticated. See To display the FTP session details on page 218.
session terminate   Terminates the session entered for the session_id variable. What you enter is the same session ID displayed by the FTP> session showdetail command. See To terminate an FTP session on page 218.

Using the FTP session commands


To display the current FTP sessions

To display the current FTP sessions, enter the following:


FTP> session show
Max Sessions : 2000
Nodename  Current Sessions
--------  ----------------
sfs_1     4
sfs_2     2

218

Using FTP About FTP session commands

To display the FTP session details

To display the details in the FTP sessions, enter the following:


FTP> session showdetail [filter_options]

where filter_options display the details of the sessions under specific headings. Filter options can be combined by using ','. If multiple filter options are used, sessions matching all of the filter options are displayed. For example, to display all of the session details, enter the following:
FTP> session showdetail
Session ID  User   Client IP       Server IP       State  File
----------  ----   ---------       ---------       -----  ----
sfs_1.1111  user1  10.209.105.219  10.209.105.111  IDLE
sfs_1.1112  user2  10.209.106.11   10.209.105.111  IDLE
sfs_2.1113  user3  10.209.107.21   10.209.105.112  IDLE
sfs_1.1117  user4  10.209.105.219  10.209.105.111  DL     file123
sfs_2.1118  user1  10.209.105.219  10.209.105.111  UL     file345
sfs_1.1121  user5  10.209.111.219  10.209.105.112  IDLE

For example, to display the details of the current FTP sessions to the Server IP (10.209.105.112), originating from the Client IP (10.209.107.21), enter the following:
FTP> session showdetail server_ip=10.209.105.112,client_ip=10.209.107.21
Session ID  User   Client IP      Server IP       State  File
----------  ----   ---------      ---------       -----  ----
sfs_2.1113  user3  10.209.107.21  10.209.105.112  IDLE
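The comma-combined filter syntax shown above — where a session must match all of the given options — can be sketched as follows. The data structures and function names here are hypothetical; they illustrate the matching rule only, not how SFS implements it.

```python
def parse_filter_options(filter_options: str) -> dict:
    """Parse 'server_ip=10.209.105.112,client_ip=10.209.107.21' into a dict."""
    if not filter_options:
        return {}
    return dict(item.split("=", 1) for item in filter_options.split(","))

def matching_sessions(sessions, filter_options: str):
    """Return the sessions matching ALL of the given filter options."""
    criteria = parse_filter_options(filter_options)
    return [s for s in sessions
            if all(s.get(key) == value for key, value in criteria.items())]

sessions = [
    {"session_id": "sfs_1.1111", "client_ip": "10.209.105.219",
     "server_ip": "10.209.105.111", "state": "IDLE"},
    {"session_id": "sfs_2.1113", "client_ip": "10.209.107.21",
     "server_ip": "10.209.105.112", "state": "IDLE"},
]
print(matching_sessions(sessions,
      "server_ip=10.209.105.112,client_ip=10.209.107.21"))
# prints a one-element list: only the sfs_2.1113 session matches both options
```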

To terminate an FTP session

To terminate one of the FTP sessions displayed in the FTP> session showdetail command, enter the following:
FTP> session terminate session_id

where session_id is the unique identifier for each FTP session displayed in the FTP> session showdetail output.
FTP> session terminate sfs_2.1113
Session sfs_2.1113 terminated

Using FTP Using the logupload command


Using the logupload command


The FTP> logupload command allows you to upload the FTP server logs to a specified URL.

To upload the FTP server logs

To upload the FTP server logs to a specified URL, enter the following:
FTP> logupload url [nodename]

url       The URL where the FTP logs are uploaded. The URL supports both FTP and SCP (secure copy protocol). If a nodename is specified, only the logs from that node are uploaded. The default name for the uploaded file is ftp_log.tar.gz.
nodename  The node on which the operation occurs. Enter the value all for the operation to occur on all of the nodes in the cluster.
password  Use the password you already set up on the node to which you are uploading the logs.

For example, to upload the logs from all of the nodes to an SCP-based URL:
FTP> logupload scp://user@host:/path/to/directory all
Password:
Collecting FTP logs, please wait.....
Uploading the logs to scp://root@host:/path/to/directory, please wait...done

For example, to upload the logs from sfs_1 to an FTP-based URL:


FTP> logupload ftp://user@host:/path/to/directory sfs_1
Password:
Collecting FTP logs, please wait.....
Uploading the logs to ftp://root@host:/path/to/directory, please wait...done
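The URL forms accepted by logupload (scp://user@host:/path and ftp://user@host:/path) can be decomposed with a small regular expression. This sketch is illustrative only; it is not how SFS itself parses the URL.

```python
import re

# Matches scheme://user@host:/path, the two URL forms shown in the examples.
URL_PATTERN = re.compile(
    r"^(?P<scheme>ftp|scp)://(?P<user>[^@]+)@(?P<host>[^:]+):(?P<path>/.*)$"
)

def parse_upload_url(url: str) -> dict:
    """Split an upload URL into its scheme, user, host, and path parts."""
    match = URL_PATTERN.match(url)
    if not match:
        raise ValueError("expected ftp://user@host:/path or scp://user@host:/path")
    return match.groupdict()

print(parse_upload_url("scp://user@host:/path/to/directory"))
# {'scheme': 'scp', 'user': 'user', 'host': 'host', 'path': '/path/to/directory'}
```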


Chapter 11

Configuring event notifications


This chapter includes the following topics:

About configuring event notifications
About severity levels and filters
About email groups
About syslog event logging
Displaying events
About SNMP notifications
Configuring events for event reporting
Exporting events in syslog format to a given URL

About configuring event notifications


Event notifications link applications that generate messages (the "events") to applications that monitor the associated conditions and respond when triggered by the events. This chapter discusses the SFS Report commands. The Report commands are defined in Table 11-1. To access the commands, log into the administrative console (master, system-admin, or storage-admin) and enter the Report> mode. For login instructions, go to About using the SFS command-line interface.


Configuring event notifications About severity levels and filters

Table 11-1  Report mode commands

email         Configures an email group. See Configuring an email group on page 225.
syslog        Configures a syslog server. See Configuring a syslog server on page 230.
showevents    Displays events. See Displaying events on page 231.
snmp          Configures an SNMP management server. See Configuring an SNMP management server on page 233.
event         Configures events for event reporting. See Configuring events for event reporting on page 236.
exportevents  Exports events in syslog format to a given URL. See Exporting events in syslog format to a given URL on page 237.

About severity levels and filters


Each group can have its own severity definition. You can define the lowest severity level; that level and all severities higher than it trigger notifications. The following table describes the valid SFS severity levels.

Table 11-2  Severity levels

emerg    Indicates that the system is unusable
alert    Indicates that immediate action is required
crit     Indicates a critical condition
err      Indicates an error condition
warning  Indicates a warning condition
notice   Indicates a normal but significant condition
info     Indicates an informational message

Configuring event notifications About email groups


Table 11-2  Severity levels (continued)

debug    Indicates a debugging message

Valid filters include:

network  If an alert is for a networking event, selecting the "network" filter triggers that alert. If you select only the "network" filter and an alert is for a storage-related event, the "network" alert is not sent.
storage  For storage-related events, for example, file systems, snapshots, disks, and pools.
all      The default filter; it matches events of every type.
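The rule that the configured lowest severity also triggers everything more severe can be modeled with the ordered list from Table 11-2 (emerg is the most severe, debug the least). This Python sketch is illustrative; the function name is hypothetical and not part of the SFS CLI.

```python
# Severity levels from Table 11-2, most severe first.
SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def triggered_severities(lowest_level: str) -> list:
    """Return every severity at or above the configured lowest level.

    Configuring 'err' for a group, for example, also triggers
    'crit', 'alert', and 'emerg' events for that group.
    """
    if lowest_level not in SEVERITIES:
        raise ValueError("unknown severity: " + lowest_level)
    return SEVERITIES[: SEVERITIES.index(lowest_level) + 1]

print(triggered_severities("err"))  # ['emerg', 'alert', 'crit', 'err']
```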

About email groups


The email commands configure the email notifications of events. These commands support the following:

Adding email groups.
Adding filters to a group.
Adding email addresses to an email group.
Adding event severity to a group.
Configuring an external email server for sending the event notification emails.

Table 11-3  Email group commands

email show       Displays an existing email group or details for the email group. See To display an existing email group or details for the email group on page 225.
email add group  Uses email groups to group multiple email addresses into one entity; the email group is used as the destination of SFS email notifications. Email notification properties can be configured for each email group. When an email group is first added, it has the default filter all and the default severity info. See To add an email group on page 225.


Table 11-3  Email group commands (continued)

email add email-address  Adds an email address to a group. See To add an email address to a group on page 226.
email add severity       Adds a severity level to an email group. See To add a severity level to an email group on page 226.
email add filter         Adds a filter to a group. See To add a filter to a group on page 227.
email del email-address  Deletes an email address from a specified group. See To delete an email address from a specified group on page 227.
email del filter         Deletes a filter from a specified group. See To delete a filter from a specified group on page 228.
email del group          Deletes an email group. See To delete an email group on page 228.
email del severity       Deletes a severity from a specified group. See To delete a severity from a specified group on page 228.
email get                Displays the details of the configured email server: the name of the configured email server, the email user's name, and the email user's password. See To display the details of the configured email server on page 229.
email set                Sets the details for the configured email server and the email user. See To set the details of the email server on page 229. When specified without any options, this command deletes the configured email server. See To delete the configured email server on page 229.


Configuring an email group


To display an existing email group or details for the email group

To display an existing email group or details for the email group, enter the following:
Report> email show [group]

group is optional, and it specifies the group for which to display details. If the specified group does not exist, an error message is displayed. For example:
Report> email show root
Group Name: root
Severity of the events: info,debug
Filter of the events: all,storage
Email addresses in the group: adminuser@localhost
OK Completed

To add an email group

To add an email group, enter the following:


Report> email add group group

where group specifies the name of the group to add. A group name can contain only the following characters:

Alpha characters
Numbers
Hyphens
Underscores

Entering invalid characters results in an error message. If the entered group already exists, no error message is displayed. For example:

Report> email add group alert-grp
OK Completed


To add an email address to a group

To add an email address to a group, enter the following:


Report> email add email-address group email-address

group          Specifies the group to which the email address is added. If the specified email group does not exist, an error message is displayed.
email-address  Specifies the email address to add to the group. If the email address is not a valid email address (a valid address has the form name@symantecexample.com, for example), a message is displayed. If the email address has already been added to the specified group, a message is displayed.

For example:

Report> email add email-address alert-grp symantecexample.com
OK Completed

To add a severity level to an email group

To add a severity level to an email group, enter the following:


Report> email add severity group severity

group
    Specifies the email group to which to add the severity. If the specified email group does not exist, an error message is displayed.
severity
    Indicates the severity level to add to the email group. See About severity levels and filters on page 222. Entering an invalid severity results in an error message, prompting you to enter a valid severity. Only one severity level is allowed at a time. Two different groups can have the same severity levels and filters, and each group has its own severity definition. The severity you set defines the lowest level that triggers notifications; all severities higher than it are also reported.

For example:
Report> email add severity alert-grp alert
OK Completed
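The threshold behavior described above — a group's severity setting selects events of that level and higher — can be sketched as follows. This is an illustrative Python model, not SFS code; the ordering is an assumption based on the standard syslog-style severity ranking.

```python
# Assumed severity ordering, lowest to highest (syslog-style).
SEVERITIES = ["debug", "info", "notice", "warning", "err", "crit", "alert", "emerg"]

def event_matches_group(event_severity: str, group_severity: str) -> bool:
    """An email group receives an event if the event's severity is at or
    above the lowest level configured for the group."""
    return SEVERITIES.index(event_severity) >= SEVERITIES.index(group_severity)
```

A group configured with warning would therefore receive alert events but not info events.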


To add a filter to a group

To add a filter to a group, enter the following:


Report> email add filter group filter

group
    Specifies the email group to which to apply the filter. If the specified email group does not exist, an error message is displayed.
filter
    Specifies the filter to apply to the group. See About severity levels and filters on page 222. The default filter is all. A group can have more than one filter, but duplicate filters are not allowed within a group.

For example:
Report> email add filter root storage
OK Completed

To delete an email address from a specified group

To delete an email address, enter the following:


Report> email del email-address group email-address

group
    Specifies the group from which to delete the email address. If the entered group does not exist, an error message is displayed.
email-address
    Specifies the email address to delete from the group. If the email address does not exist in the group, an error message is displayed.

For example, to delete an existing email address from the email group, enter the following:
Report> email del email-address root testuser@localhost


To delete a filter from a specified group

To delete a filter from a specified group, enter the following:


Report> email del filter group filter

group
    Specifies the group to remove the filter from. If the entered email group does not exist, an error message is displayed.
filter
    Specifies the filter to remove from the group. See About severity levels and filters on page 222. The default filter is all. If the specified filter is not set for the specified group, an error message is displayed.

To delete an email group

To delete an email group, enter the following:


Report> email del group group

group specifies the name of the email group to be deleted. If the specified email group does not exist, an error message is displayed.

To delete a severity from a specified group

To delete a severity from a specified group, enter the following:


Report> email del severity group severity

group
    Specifies the name of the email group from which the severity is to be deleted. If the specified email group does not exist, an error message is displayed.
severity
    Specifies the severity to delete from the specified group. See About severity levels and filters on page 222. A severity cannot be deleted from a group if it is not set for that group; if this occurs, an error message is displayed.


To display the details of the configured email server

To display the details of the configured email server, enter the following:
Report> email get
E-Mail Server: smtp.symantec.com
E-Mail Username: adminuser
E-mail User's Password: ********
OK Completed

To set the details of the email server

To set the details of the email server, enter the following:


Report> email set [email-server] [email-user]

email-server
    Specifies the external email server. For example:
    Report> email set smtp.symantecexample.com
email-user
    Specifies the email user on the email server.

For example:
Report> email set smtp.symantec.com adminuser
Enter password for user 'adminuser': ********

To delete the configured email server

To delete the configured email server, enter the following command without any options:
Report> email set

About syslog event logging


Writing a message to the system log file is one way for administrators to report any significant occurrence in the system or in an application. In SFS, options include specifying the syslog messages for event reporting, selecting the types of events to report, and selecting the severity of the occurrences to report.
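Outside of the SFS CLI, the same mechanism can be exercised from Python's standard library, which speaks the syslog wire protocol over UDP. This is a hedged sketch for illustration only; the server address is a placeholder, and SFS manages its own syslog forwarding internally.

```python
import logging
import logging.handlers

def send_event(message, server="127.0.0.1", port=514, severity=logging.WARNING):
    """Forward one event message to a syslog server over UDP."""
    logger = logging.getLogger("sfs-events")
    logger.setLevel(logging.DEBUG)
    handler = logging.handlers.SysLogHandler(address=(server, port))
    logger.addHandler(handler)
    try:
        logger.log(severity, message)
    finally:
        logger.removeHandler(handler)
        handler.close()
```

The severity argument maps onto the syslog priority that a receiving server uses for its own filtering.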


For the syslog messages, you can select whether to report on storage, networks, or all events. For the list of severities used for syslog messages, see Table 11-2.

Table 11-4    Syslog commands

syslog show
    Displays the list of syslog servers. See To display the list of syslog servers on page 230.
syslog add
    Adds a syslog server. See To add a syslog server on page 230.
syslog set severity
    Sets the severity for the syslog server. See To set the severity of the syslog server on page 231.
syslog set filter
    Sets the syslog server filter. See To set the filter of the syslog server on page 231.
syslog get filter
    Displays the values of the configured syslog server. See To display the values of the configured syslog server on page 231.
syslog delete
    Deletes a syslog server. See To delete a syslog server on page 231.

Configuring a syslog server


To display the list of syslog servers

To display the list of syslog servers, enter the following:


Report> syslog show

To add a syslog server

To add a syslog server, enter the following:


Report> syslog add syslog-server-ipaddr

syslog-server-ipaddr specifies the hostname or the IP address of the external syslog server.


To set the severity of the syslog server

To set the severity of the syslog server, enter the following:


Report> syslog set severity value

For example:
Report> syslog set severity warning

value for severity indicates the severity for the syslog server. See About severity levels and filters on page 222.

To set the filter of the syslog server

To set the filter of the syslog server, enter the following:


Report> syslog set filter value

value for filter indicates the filter for the syslog server. See About severity levels and filters on page 222.

To display the values of the configured syslog server

To display the values of the configured syslog server, enter the following:
Report> syslog get filter|severity

To delete a syslog server

To delete a syslog server, enter the following:


Report> syslog delete syslog-server-ipaddr

syslog-server-ipaddr specifies the hostname or the IP address of the syslog server.

Displaying events
To display events

To display events, enter the following:


Report> showevents [number_of_events]

number_of_events specifies the number of events that you want to display. If you leave number_of_events blank, or if you enter 0, SFS displays all of the events.
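The argument behavior can be modeled in a few lines of Python. This is an illustrative sketch; the assumption that the most recent events are the ones shown is mine, not stated by the CLI reference.

```python
def select_events(events, number_of_events=0):
    """Return all events when the count is 0 or omitted, otherwise the
    last `number_of_events` entries (assumed to be the most recent)."""
    events = list(events)
    return events if not number_of_events else events[-number_of_events:]
```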


About SNMP notifications


Simple Network Management Protocol (SNMP) is a network protocol that simplifies the management of remote network-attached devices such as servers and routers. SNMP is an open standard system management interface. Information from the Management Information Base (MIB) can also be exported.

SNMP messages enable the reporting of a serious condition to a management station. The management station is then responsible for initiating further interactions with the managed node to determine the nature and extent of the problem.

In SFS, options include specifying the SNMP messages for event reporting, selecting the types of events to report, and selecting the severity of the occurrences to report. The SNMP server must be specified during configuration. See About severity levels and filters on page 222.

Table 11-5    SNMP commands

snmp add
    Adds an SNMP management server. See To add an SNMP management server on page 233.
snmp show
    Displays the current list of SNMP management servers. See To display the current list of SNMP management servers on page 233.
snmp delete
    Deletes an already configured SNMP management server. See To delete an already configured SNMP server on page 234.
snmp set severity
    Sets the severity for SNMP notifications. See To set the severity for SNMP notifications on page 234.
snmp set filter
    Sets the filter for SNMP notifications. See To set the filter for SNMP notifications on page 235.
snmp get filter|severity
    Displays the values of the configured SNMP notifications. See To display the values of the configured SNMP notifications on page 235.
snmp exportmib
    Uploads the SNMP Management Information Base (MIB) file to the given URL. The URLs support FTP and SCP. If the URL specifies a remote directory, the default filename is sfsfs_mib.txt. See To export the SNMP MIB file to a given URL on page 235.

Configuring an SNMP management server


To add an SNMP management server

To add an SNMP management server, enter the following:


Report> snmp add snmp-mgmtserver-ipaddr

snmp-mgmtserver-ipaddr specifies the host name or the IP address of the SNMP management server. For example, if using the IP address, enter the following:
Report> snmp add 10.10.10.10
OK Completed

For example, if using the host name, enter the following:
Report> snmp add mgmtserv1.symantec.com
OK Completed

To display the current list of SNMP management servers

To display the current list of SNMP management servers, enter the following:
Report> snmp show
Configured SNMP management servers: 10.10.10.10,mgmtserv1.symantec.com
OK Completed


To delete an already configured SNMP server

To delete an already configured SNMP server, enter the following:


Report> snmp delete snmp-mgmtserver-ipaddr

snmp-mgmtserver-ipaddr specifies the host name or the IP address of the SNMP management server. For example:
Report> snmp delete 10.10.10.10
OK Completed

If you enter an incorrect value for snmp-mgmtserver-ipaddr, an error message is displayed. For example:
Report> snmp delete mgmtserv22.symantec.com
SFS snmp delete ERROR V-288-26 Cannot delete SNMP management server, it doesn't exist.

To set the severity for SNMP notifications

To set the severity for SNMP notifications, enter the following:


Report> snmp set severity value

where value indicates the severity level of the notification. For example:
Report> snmp set severity warning
OK Completed

See About severity levels and filters on page 222. Notifications are sent for events having the same or higher severity.


To set the filter for SNMP notifications

To set the filter for SNMP notifications, enter the following:


Report> snmp set filter value

For example:
Report> snmp set filter network
OK Completed

value for filter indicates the filter for the notification. See About severity levels and filters on page 222. Notifications are sent for events matching the given filter.

To display the values of the configured SNMP notifications

To display the values of the configured SNMP notifications, enter the following:
Report> snmp get filter|severity

For example:
Report> snmp get severity
Severity of the events: warning
OK Completed

Report> snmp get filter
Filter for the events: network
OK Completed

To export the SNMP MIB file to a given URL

To export the SNMP MIB file to a given URL, enter the following:
Report> snmp exportmib url

url specifies the location to which the SNMP MIB file is exported. For example:
Report> snmp exportmib scp://admin@server1.symantec.com:/tmp/sfsfs_mib.txt
Password: *****
OK Completed

If the url specifies a remote directory, the default filename is sfsfs_mib.txt.


Configuring events for event reporting


The event commands configure the settings for event reporting.

To set the time interval or the number of duplicate events sent for notifications

To set the time interval or the number of duplicate events sent for notifications, enter the following:
Report> event set dup-frequency number

For the event set dup-frequency command, number indicates the time interval (in seconds) during which only one of a set of duplicate events is sent for notifications. For example:
Report> event set dup-frequency 120
OK Completed

Report> event set dup-number number

For the event set dup-number command, number indicates the number of duplicate events to ignore during notifications. For example:
Report> event set dup-number 10
OK Completed
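Taken together, the two settings describe a duplicate-suppression window. The following Python sketch models one plausible interpretation — the exact interaction of the two values inside SFS is not specified here, so treat this as an assumption: a duplicate is suppressed until either dup-frequency seconds have elapsed or dup-number duplicates have been ignored.

```python
import time

class DuplicateSuppressor:
    """Decide whether a notification should be sent for a (possibly
    duplicated) event, per the dup-frequency / dup-number model above."""

    def __init__(self, dup_frequency=120, dup_number=10, clock=time.monotonic):
        self.dup_frequency = dup_frequency
        self.dup_number = dup_number
        self.clock = clock
        self._state = {}  # event key -> (last_sent_time, ignored_count)

    def should_notify(self, event_key):
        now = self.clock()
        last, ignored = self._state.get(event_key, (None, 0))
        if (last is None
                or now - last >= self.dup_frequency
                or ignored >= self.dup_number):
            self._state[event_key] = (now, 0)   # send, and reset the window
            return True
        self._state[event_key] = (last, ignored + 1)  # suppress this duplicate
        return False
```

The injectable clock keeps the sketch deterministic to exercise without real waiting.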


To display the time interval or the number of duplicate events sent for notifications

To display the time interval, enter the following:


Report> event get dup-frequency

For example:
Report> event get dup-frequency
Duplicate events frequency (in seconds): 120
OK Completed

To display the number of duplicate events sent for notifications, enter the following:
Report> event get dup-number

For example:
Report> event get dup-number
Duplicate number of events: 10
OK Completed

Exporting events in syslog format to a given URL


You can export events in syslog format to a given URL. You can also export audit events in syslog format to a given URL. Supported URLs for upload include:

FTP
SCP

To export events in syslog format to a given URL

To export events in syslog format to a given URL, enter the following:


Report> exportevents url

url specifies the location to which the events in syslog format are exported. For example: scp://root@server1.symantecexample.com:/exportevents/event.1. If the URL specifies a remote directory, the default filename is sfsfs_event.log.
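The filename rule above can be illustrated with a short Python sketch. It is an assumption-laden model: a client can only detect a directory from a trailing slash, so paths without one are treated as filenames.

```python
from urllib.parse import urlparse
import posixpath

def destination_filename(url, default="sfsfs_event.log"):
    """Return the filename the export would use: the final path component,
    or the default when the URL path ends in a directory."""
    name = posixpath.basename(urlparse(url).path)
    return name or default
```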


To export audit events in syslog format to a given URL

To export audit events in syslog format to a given URL, enter the following:
Report> exportevents url [audit]

url specifies the location to which the audit events in syslog format are exported. For example: scp://root@server1.symantecexample.com:/exportauditevents/auditevent.1. If the URL specifies a remote directory, the default filename is sfsfs_audit.log.

Chapter 12

Configuring backup

This chapter includes the following topics:

About backup
Configuring backups using NetBackup or other third-party backup applications
About NetBackup
Adding a NetBackup master server to work with SFS
Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation
Configuring the virtual name of NetBackup
About Network Data Management Protocol
About NDMP supported configurations
About the NDMP policies
Displaying all NDMP policies
About retrieving the NDMP data
Restoring the default NDMP policies
About backup configurations

About backup
The Backup commands are defined in Table 12-1. To access the commands, log in to the administrative console (as master, system-admin, or storage-admin) and enter Backup> mode. For login instructions, see About using the SFS command-line interface.


Table 12-1    Backup mode commands

netbackup
    Configures the local NetBackup installation of SFS to use an external NetBackup master server, Enterprise Media Manager (EMM) server, or media server. See About NetBackup on page 241.
virtual-ip
    Configures the NetBackup and NDMP data server installation on SFS nodes to use ipaddr as its virtual IP address. See Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation on page 244.
virtual-name
    Configures the NetBackup installation on SFS nodes to use name as its hostname. See Configuring the virtual name of NetBackup on page 245.
ndmp
    Transfers data between the data server and the tape server under the control of a client. The Network Data Management Protocol (NDMP) is used for data backup and recovery. See About Network Data Management Protocol on page 246.
show
    Displays the settings of the configured servers. See About backup configurations on page 259.
status
    Displays the status of the configured servers. See About backup configurations on page 259.
start
    Starts the configured servers. See About backup configurations on page 259.
stop
    Stops the configured servers. See About backup configurations on page 259.

Configuring backups using NetBackup or other third-party backup applications


You can back up SFS using the Veritas NetBackup 6.5 client capability, or other third-party backup applications that use a standard NFS mount to back up over the network. The SFS ISO image includes the NetBackup 6.5 SFS client code.


For information about the Veritas NetBackup 6.5 client capability, refer to the Veritas NetBackup 6.5 product documentation set.

The Backup> netbackup commands configure the local NetBackup installation of SFS to use an external NetBackup master server, Enterprise Media Manager (EMM) server, or media server. When NetBackup is installed on SFS, it acts as a NetBackup client to perform IP-based backups of SFS file systems.

Note: A new public IP address, not an IP address that is currently used, is required for configuring the NetBackup client. Use the Backup> virtual-ip and Backup> virtual-name commands to configure the NetBackup client.

About NetBackup
SFS includes built-in client software for Symantec's NetBackup data protection suite. If NetBackup is the enterprise's data protection suite of choice, file systems hosted by SFS can be backed up to a NetBackup media server.

To configure the built-in NetBackup client, you need the names and IP addresses of the NetBackup master and media servers. Backups are scheduled from those servers, using NetBackup's administrative console.

Consolidating storage reduces the administrative overhead of backing up and restoring many separate file systems. With a 256 TB maximum file system size, SFS makes it possible to collapse file storage into fewer administrative units, thus reducing the number of backup interfaces and operations necessary. All critical file data can be backed up and restored through the NetBackup client software included with SFS (separately licensed NetBackup master and media servers running on separate computers are required), or through any backup management software that supports NAS systems as data sources.


Table 12-2    Netbackup commands

netbackup master-server
    Provides a functioning external NetBackup master server to work with SFS. SFS only includes the NetBackup client code on the SFS nodes. If you want to use NetBackup to back up your SFS file systems, you must add an external NetBackup master server. For NetBackup clients to be compliant with the NetBackup End-User License Agreement (EULA), you must have purchased and entered valid license keys on the external NetBackup master server prior to configuring NetBackup to work with SFS. For more information on entering NetBackup license keys on the NetBackup master server, refer to the Veritas NetBackup Installation Guide, Release 6.5. See To add an external NetBackup master server on page 243.
netbackup emm-server
    Adds an external NetBackup Enterprise Media Manager (EMM) server (which can be the same as the NetBackup master server) to work with SFS.
    Note: If you want to use NetBackup to back up SFS file systems, you must add an external NetBackup EMM server.
    See To add a NetBackup EMM server on page 243.
netbackup media-server add
    Adds an external NetBackup media server (if the NetBackup media server is not co-located with the NetBackup master server).
    Note: Adding an external NetBackup media server is optional. If you do not add one, then SFS uses the NetBackup master server as the NetBackup media server.
    See To add a NetBackup media server on page 243.
netbackup media-server delete
    Deletes an already configured NetBackup media server. See To delete an already configured NetBackup media server on page 244.


Adding a NetBackup master server to work with SFS


To add an external NetBackup master server

To add an external NetBackup master server, enter the following:


Backup> netbackup master-server server

where server is the hostname of the NetBackup master server. Make sure that server can be resolved through DNS, and its IP address can be resolved back to server through the DNS reverse lookup. For example:
Backup> netbackup master-server nbumaster.symantecexample.com
Ok Completed

To add a NetBackup EMM server

To add the external NetBackup EMM server, enter the following:


Backup> netbackup emm-server server

where server is the hostname of the NetBackup EMM server. Make sure that server can be resolved through DNS, and its IP address can be resolved back to server through the DNS reverse lookup. For example:
Backup> netbackup emm-server nbumedia.symantecexample.com
OK Completed

To add a NetBackup media server

To add a NetBackup media server, enter the following:


Backup> netbackup media-server add server

where server is the hostname of the NetBackup media server. Make sure that server can be resolved through DNS, and its IP address can be resolved back to server through the DNS reverse lookup. For example:
Backup> netbackup media-server add nbumedia.symantecexample.com
OK Completed


To delete an already configured NetBackup media server

To delete an already configured NetBackup media server, enter the following:


Backup> netbackup media-server delete server

where server is the hostname of the NetBackup media server you want to delete. For example:
Backup> netbackup media-server delete nbumedia.symantecexample.com
OK Completed

Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation
You can configure or change the virtual IP address used by NetBackup and the NDMP data server installation on SFS nodes. This is a highly available virtual IP address in the cluster.

For information about the Veritas NetBackup 6.5 client capability, refer to the Veritas NetBackup 6.5 product documentation set.

Note: If you are using NetBackup and the NDMP data server installation on SFS nodes, configure the virtual IP address using the Backup> virtual-ip command so that it is different from all of the other virtual IP addresses, including the console server IP address and the physical IP addresses used to install SFS.


To configure or change the virtual IP address used by NetBackup and NDMP data server installation

To configure or change the virtual IP address used by NetBackup and the NDMP data server installation on SFS nodes, enter the following:
Backup> virtual-ip ipaddr

where ipaddr is the virtual IP address to be used with the NetBackup and the NDMP data server installation on the SFS nodes. Make sure that ipaddr can be resolved back to the hostname that is configured by using the Backup> virtual-name command. For example:
Backup> virtual-ip 10.10.10.10
OK Completed

See Configuring the virtual name of NetBackup on page 245.

Configuring the virtual name of NetBackup


To configure or change the NetBackup hostname

To configure the NetBackup installation on SFS nodes to use name as its hostname, enter the following:
Backup> virtual-name name

where name is the hostname to be used by the NetBackup installation on SFS nodes.
Backup> virtual-name nbuclient.symantecexample.com

Make sure that name can be resolved through DNS, and its IP address can be resolved back to name through the DNS reverse lookup. Also, make sure that name resolves to an IP address configured by using the Backup> virtual-ip command. For example:
Backup> virtual-name nbuclient.symantecexample.com
OK Completed

See Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation on page 244.


About Network Data Management Protocol


The Network Data Management Protocol (NDMP) is an open protocol for transferring data between the data server and the tape server under the control of a client. NDMP is used for data backup and recovery.

NDMP is based on a client-server architecture. The Data Management Application is the client, and the data and tape services are the servers. The Data Management Application initiates the backup session. A single control connection from the Data Management Application to each of the data and tape services, and a data connection between the tape and the data services, creates a backup session.

Note: The information in this section assumes you have the correct backup infrastructure in place to support the NDMP environment.

NDMP provides the following services:

Defines a mechanism and protocol for controlling backup, recovery, and other transfers of data between the data server and the tape server.
Separates the network-attached Data Management Application, data servers, and tape servers participating in archival, recovery, or data migration operations.
Provides low-level control of tape devices and SCSI media changers.

Table 12-3    NDMP terminology

host
    The host computer system that executes the NDMP server application. Data is backed up from the NDMP host to either a local tape drive or to a backup device on a remote NDMP host.
service
    The virtual state machine on the NDMP host that is controlled using the NDMP protocol. This term is used independently of implementation. There are three types of NDMP services: data service, tape service, and SCSI service.
server
    An instance of one or more distinct NDMP services controlled by a single NDMP control connection. Thus a Data/Tape/SCSI Server is an NDMP server providing data, tape, and SCSI services.
session
    The configuration of one client and two NDMP services to perform a data management operation such as a backup or a recovery.
client
    The application that controls the NDMP server. Backup and restore are initiated by the NDMP client. In NDMP version 4, the client is the Data Management Application.
Data Management Application
    An application that controls the NDMP session. In NDMP there is a master-slave relationship: the Data Management Application is the session master, and the NDMP services are the slaves. In NDMP versions 1, 2, and 3 the term "NDMP client" is used instead of Data Management Application.

The Backup> ndmp commands configure the default policies that are used during the NDMP backup and restore sessions. In SFS, NDMP supports the following sets of commands:

setenv commands
    The set environment commands let you configure the variables that make up the NDMP backup policies for your environment.
getenv commands
    The get environment commands display what you have set up with the setenv commands, or the default values of all of the NDMP environment variables.
showenv command
    The show environment command displays all of the NDMP policies.
restoredefaultenv command
    The restore default environment command restores the NDMP policies back to their default values.

About NDMP supported configurations


SFS currently supports the three-way NDMP backup, in which the data and tape services reside on different nodes on a network. The Data Management Application has two control connections, one to each of the data and tape services. There is also a data connection between the data and the tape services.

Data travels from the disk on an NDMP host to a tape device on another NDMP host. Backup data is sent over the local network. The tape drives must be in NDMP-type storage units.


Figure 12-1    Illustration of three-way NDMP SFS backup

[Figure: a Data Management Application (NBU, TSM, or EMC Legato with NDMP) holds control-flow connections to the SFSFS cluster NDMP server and to an NBU media server with NDMP; the data flow runs from the primary storage array through the SFSFS cluster to the media server and its tape library. NFS clients access the cluster separately.]
The NDMP commands configure the default policies that are used during the NDMP backup or restore sessions. The Data Management Application (client) initiating the connection for NDMP backup and restore operations to the NDMP data/tape server can override these default policies by setting an environment variable with the same name as the policy and any suitable value.

The SFS NDMP server supports MD5 and text authentication. The Data Management Application that initiates the connection to the server uses master as the username, and the master user's password, for the NDMP backup session authentication. The password can be changed using the Admin> passwd command. To change the password, see Creating Master, System Administrator, and Storage Administrator users.


About the NDMP policies


The Backup> ndmp commands configure the default policies which will be used during the NDMP backup/restore sessions. The DMA (NDMP client) initiating the connection for the NDMP backup/restore operation to the SFS NDMP data server can override these default policies by setting an environment variable with the same name as the policy and any suitable value.

Table 12-4    NDMP set commands

ndmp setenv overwrite_policy
    Defines how new data is recorded over old data. There are three options available to configure this command. See To configure the overwrite policy on page 251.
ndmp setenv failure_resilient
    Continues the backup and restore session even if an error condition occurs. During a backup or restore session, if a file or directory cannot be backed up or restored, setting the value to yes lets the session continue with the remaining specified files and directories in the list. A log message is sent to the Data Management Application about the error. Refer to the Data Management Application documentation for the location of the NDMP logs. Some conditions, such as an I/O error, do not let the command continue the backup and restore session. See To configure the failure resilient policy on page 251.
ndmp setenv restore_dst
    Configures the dynamic storage tiering (DST) restore policy.
    Note: During the restore session, the DST policy only applies to the file system, but it does not become effective until you run it through the storage tier policy commands.
    See To configure the restore DST policy on page 252.
ndmp setenv recursive_restore
    Configures the NDMP recursive restore policy to restore the contents of a directory each time you restore. See To configure the recursive restore policy on page 252.
ndmp setenv update_dumpdates
    Contains the file system backup information for the backup command. In the SFS NDMP environment, the dumpdates file is /etc/ndmp.dumpdates. See To configure the update dumpdates policy on page 253.
ndmp setenv send_history
    States whether or not you want the file history of the backed-up data to be sent to the Data Management Application. See To configure the send history policy on page 253.
ndmp setenv use_snapshot
    Lets you bring back previous versions of the files for review or to be used. A snapshot is a virtual copy of a set of files and directories taken at a particular point in time. The NDMP use snapshot policy enables the backup of a point-in-time image of a set of files and directories instead of a continuously changing set of files and directories. See To configure the use snapshot policy on page 253.
ndmp setenv backup_method
    Enables the configuration of the NDMP backup method policy. This policy enables an incremental backup. See To configure the backup method policy on page 254.
ndmp setenv masquerade_as_emc
    Configures the masquerade as EMC policy. See To configure the masquerade as EMC policy on page 254.
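The override rule — an environment variable set by the DMA with the same name as a policy takes precedence over the configured default — can be sketched as follows. This is illustrative Python, not SFS code; the defaults shown are the documented ones for two of the policies.

```python
# Documented defaults for two of the NDMP policies.
DEFAULT_POLICIES = {
    "overwrite_policy": "rename_old",
    "failure_resilient": "yes",
}

def effective_policy(name, session_env):
    """Return the value the NDMP server would apply for `name`: the DMA's
    environment variable if set, otherwise the configured default."""
    return session_env.get(name, DEFAULT_POLICIES[name])
```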

Configuring the NDMP policies


Caution: No checks are made when overwriting a directory with a file or vice versa. The destination path being overwritten is removed recursively.


To configure the overwrite policy

To configure the overwrite policy, enter the following:


Backup> ndmp setenv overwrite_policy value

where the variables for value are listed in the following table.
no_overwrite    Checks if the file or directory to be restored already exists. If it does, the command responds with an error message. A log message is returned to the Data Management Application. Refer to the Data Management Application documentation for the location of the NDMP log messages. The file or directory is not overwritten.

rename_old (default)    Checks if the file or directory already exists. If it does, it is renamed with the suffix .#ndmp_old and a new file or directory is created.

overwrite_always    If the file or directory already exists, it is overwritten. It is recommended that when restoring from incremental backups, the value is set to overwrite_always. No checks are made when overwriting a directory with files. The destination path being overwritten is removed recursively.

For example:
Backup> ndmp setenv overwrite_policy rename_old
Ok Completed

To configure the failure resilient policy

To configure the failure resilient policy, enter the following:


Backup> ndmp setenv failure_resilient value

where the variables for value are yes or no.


yes (default)    The backup and restore session continues even if an error condition is encountered. However, some conditions, such as an I/O error, cause the backup and restore session to stop.

no    The backup and restore session terminates immediately when it encounters any error condition.
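For example, following the same transcript convention as the other setenv commands (this example is illustrative):

Backup> ndmp setenv failure_resilient yes
OK Completed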


To configure the restore DST policy

To configure the restore DST policy, enter the following:


Backup> ndmp setenv restore_dst value

where the variables for value are yes or no.


yes (default)    During the backup session, if the specified directory set up for backup is a directory in the file system mount point, then the DST policy will be backed up. During the restore session, if the DST policy exists in the backup stream, the DST policy that was backed up will be applied to the restore destination path if that path is a mount point (full file system restore). The DST policy will not be restored if the secondary tier does not exist on the destination path. If the DST policy could not be restored, a log message is returned to the Data Management Application (refer to the Data Management Application documentation for the location of the NDMP logs). During the restore, the DST policy will only be applied to the file system, but it will not be effective until you run it through the Storage> tier policy commands.

no    The DST policy is not applied even if all of the other conditions are met.
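For example (an illustrative transcript, assuming the standard OK Completed response):

Backup> ndmp setenv restore_dst yes
OK Completed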

To configure the recursive restore policy

To configure the recursive restore policy, enter the following:


Backup> ndmp setenv recursive_restore value

where the variables for value are yes or no.


yes (default)    If the name list (the names of the files and directories to be restored from the backup) specifies a directory, the contents of that directory will be restored recursively.

no    Restores the directory, but not the contents of the directory.
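For example (illustrative; the response is assumed to follow the usual pattern):

Backup> ndmp setenv recursive_restore yes
OK Completed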


To configure the update dumpdates policy

To configure the update dumpdates policy, enter the following:


Backup> ndmp setenv update_dumpdates value

where the variables for value are yes or no.


yes (default)    The SFS NDMP data server updates the dumpdates file with the details of the current backup, including the time at which the backup was taken, the directory that was backed up, and the level of the backup. This information can be used later by the next backup session for incremental and differential backups.

no    The dumpdates file is not updated.
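For example (an illustrative transcript):

Backup> ndmp setenv update_dumpdates yes
OK Completed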

To configure the send history policy

To configure the send history policy, enter the following:


Backup> ndmp setenv send_history value

where the variables for value are yes or no.


yes (default)    Sends the history of the backed up data to the Data Management Application. The history includes information for every file and directory that was backed up, such as the name, stat, positioning data (used for DAR restore), and inode information.

no    The file history information is not sent to the Data Management Application.
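For example (illustrative; the response format is assumed to match the other setenv examples):

Backup> ndmp setenv send_history no
OK Completed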

To configure the use snapshot policy

To configure the use snapshot policy, enter the following:


Backup> ndmp setenv use_snapshot value

where the variables for value are yes or no.


yes (default)    The backup session first takes a snapshot of the file system that is being backed up. The snapshot is also taken if any directory of the file system is being backed up. The snapshot uses the same storage space as that of the main file system.

no    The backup session backs up only the live file system.
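For example (an illustrative transcript):

Backup> ndmp setenv use_snapshot yes
OK Completed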


To configure the backup method policy

To configure the backup method policy, enter the following:


Backup> ndmp setenv backup_method value

where the variables for value are fcl or mtime.


fcl (default)    File Change Log. The FCL can be used to directly obtain the list of modified files in the file system, which can then be backed up. However, because the FCL is finite in size, it is possible that not all of the changes are recorded in it. In that case, use the mtime backup method.

mtime    Time of last modification. The time of the last backup is stored reliably in the file system, and the mtimes of the files are checked against it to find all of the files modified since the last backup. The time of the last backup is stored in /etc/ndmp.dumpdates, the file described under the update_dumpdates policy.

For example:
Backup> ndmp setenv backup_method mtime
OK Completed

To configure the masquerade as EMC policy

To configure the masquerade as the EMC policy, enter the following:


Backup> ndmp setenv masquerade_as_emc value

where the variables for value are yes or no.


yes    The SFS NDMP server masquerades as an EMC-compatible device for certain NDMP backup applications.

no (default)    The SFS NDMP server does not masquerade as an EMC-compatible device.

For example:
Backup> ndmp setenv masquerade_as_emc yes
OK Completed
Backup>

Configuring backup Displaying all NDMP policies

255

Displaying all NDMP policies


To display all of the NDMP policies

To display the NDMP policies, enter the following:


Backup> ndmp showenv

For example:
Backup> ndmp showenv
Overwrite policy:     Rename old
Failure Resilient:    yes
Restore DST policies: yes
Recursive restore:    yes
Update dumpdates:     yes
Send history:         yes
Use snapshot:         yes
Backup method:        fcl
Masquerade as EMC:    yes
OK Completed

About retrieving the NDMP data


Table 12-5    NDMP get commands

ndmp getenv overwrite_policy

Defines how new data is recorded over old data. To retrieve the settings for the policy that you set up, use the ndmp getenv overwrite_policy command. See To retrieve the overwrite backup data on page 257.

ndmp getenv failure_resilient

Enables the continuation of the backup and restore session even if an error condition occurs because a file or directory cannot be backed up or restored. To retrieve the settings for the policy that you set up, use the ndmp getenv failure_resilient command. See To retrieve the failure resilient backup data on page 257.

ndmp getenv restore_dst

Configures the dynamic storage tiering (DST) restore policy. To retrieve the settings for the policy that you set up, use the ndmp getenv restore_dst command. See To retrieve the restore DST data on page 257.

256

Configuring backup About retrieving the NDMP data

Table 12-5    NDMP get commands (continued)

ndmp getenv recursive_restore

Enables the configuration of the restore session to restore the contents of a directory. To retrieve the settings for the policy that you set up, use the ndmp getenv recursive_restore command. See To retrieve the recursive restore data on page 257.

ndmp getenv update_dumpdates

Enables the configuration of the dumpdates file. To retrieve the settings for the policy that you set up, use the ndmp getenv update_dumpdates command. See To retrieve the update dumpdates data on page 258.

ndmp getenv send_history

States whether or not you want the file history of the backed up data to be sent to the Data Management Application. To retrieve the settings for the policy that you set up, use the ndmp getenv send_history command. See To retrieve the send history data on page 258.

ndmp getenv use_snapshot

States whether a snapshot of the files and directories is taken during the backup session. To retrieve the settings for the policy that you set up, use the ndmp getenv use_snapshot command. See To retrieve the NDMP use snapshot data on page 258.

ndmp getenv backup_method

Enables the configuration of the method used to back up the file system. To retrieve the settings for the policy that you set up, use the ndmp getenv backup_method command. See To retrieve the NDMP backup method on page 258.

ndmp getenv masquerade_as_emc

Retrieves whether the NDMP server masquerades as an EMC-compatible device for certain NDMP backup applications. See To retrieve the masquerade as EMC policy on page 259.


Retrieving the NDMP data


To retrieve the overwrite backup data

To retrieve the overwrite backup data, enter the following:


Backup> ndmp getenv overwrite_policy

For example:
Backup> ndmp getenv overwrite_policy
Overwrite policy: Rename old
OK Completed

To retrieve the failure resilient backup data

To retrieve the failure resilient data, enter the following:


Backup> ndmp getenv failure_resilient

For example:
Backup> ndmp getenv failure_resilient
Failure Resilient: yes
OK Completed

To retrieve the restore DST data

To retrieve the restore DST data, enter the following:


Backup> ndmp getenv restore_dst

For example:
Backup> ndmp getenv restore_dst
Restore DST policies: no
OK Completed

To retrieve the recursive restore data

To retrieve the recursive restore data, enter the following:


Backup> ndmp getenv recursive_restore

For example:
Backup> ndmp getenv recursive_restore
Recursive restore: yes
OK Completed


To retrieve the update dumpdates data

To retrieve the update dumpdates data, enter the following:


Backup> ndmp getenv update_dumpdates

For example:
Backup> ndmp getenv update_dumpdates
Update dumpdates: yes
OK Completed

To retrieve the send history data

To retrieve the send history data, enter the following:


Backup> ndmp getenv send_history

For example:
Backup> ndmp getenv send_history
Send history: no
OK Completed

To retrieve the NDMP use snapshot data

To retrieve the use snapshot data, enter the following:


Backup> ndmp getenv use_snapshot

For example:
Backup> ndmp getenv use_snapshot
Use snapshot: yes
OK Completed

To retrieve the NDMP backup method

To retrieve the configured backup method policy, enter the following:


Backup> ndmp getenv backup_method

For example:
Backup> ndmp getenv backup_method
Backup Method: fcl
OK Completed

Configuring backup Restoring the default NDMP policies

259

To retrieve the masquerade as EMC policy

To retrieve the configured masquerade as EMC policy, enter the following:


Backup> ndmp getenv masquerade_as_emc

For example:
Backup> ndmp getenv masquerade_as_emc
Masquerade as EMC: yes
OK Completed
Backup>

Restoring the default NDMP policies


To restore the NDMP policies to default values

To restore the NDMP policies to default values, enter the following:


Backup> ndmp restoredefaultenv
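For example (an illustrative transcript; the response format is assumed to match the other Backup> commands):

Backup> ndmp restoredefaultenv
OK Completed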

About backup configurations


Table 12-6 Command
show

Backup configuration commands Definition


Displays the NetBackup configured settings. If the settings were configured while backup and restore services were running, they may not yet be in use by the SFS nodes. To ensure that all of the configured settings are in use, first run the stop command, and then run the start command. See To display NetBackup configurations on page 260.

status

Displays whether the NetBackup client and the NDMP data server have started or stopped on the SFS nodes. If the NetBackup client and the NDMP data server are currently started and running, then Backup> status also displays any ongoing backup or restore jobs. See Configuring the virtual name of NetBackup on page 245. See To display the status of backup services on page 261.

260

Configuring backup About backup configurations

Table 12-6 Command


start

Backup configuration commands (continued) Definition


Starts processes that handle backup and restore. You can also change the status of a virtual IP address to online after it has been configured using the Backup> virtual-ip command. This applies to any currently active node in the cluster that handles backup and restore jobs. The Backup> start command does nothing if the backup and restore processes are already running. See To start backup services on page 262.

stop

Stops the processes that handle backup and restore. You can also change the status of a virtual IP address to offline after it has been configured using the Backup> virtual-ip command. The Backup> stop command does nothing if backup jobs are running that involve SFS file systems. See To stop backup services on page 262.

Configuring backup
To display NetBackup configurations

To display NetBackup configurations, enter the following:


Backup> show

For example:
Backup> show
Virtual name:              nbuclient.symantec.com
Virtual IP:                10.10.10.10
NetBackup Master Server:   nbumaster.symantec.com
NetBackup EMM Server:      nbumaster.symantec.com
NetBackup Media Server(s): not configured
Ok Completed


To display the status of backup services

To display the status of backup services, enter the following:


Backup> status

An example of the status command when no backup or restore jobs are running:


Backup> status
Virtual IP state       : up
NDMP server state      : running
NetBackup client state : running
No backup/restore jobs running.
OK Completed

An example of the status command when backup services are running with file systems on the SFS nodes:
Backup> status
Virtual IP state       : up
NDMP server state      : running
NetBackup client state : running

Following filesystems are currently busy in backup/restore jobs by NDMP:
myfs1
OK Completed

An example of the status command when the backup jobs that are running involve file systems using the NetBackup client:

Backup> status
Virtual IP state       : up
NDMP server state      : running
NetBackup client state : running

Some filesystems are busy in backup/restore jobs by NetBackup Client
OK Completed


To start backup services

To start backup processes, enter the following:


Backup> start

For example:
Backup> start
OK Completed

To stop backup services

To stop backup services, enter the following:


Backup> stop

For example:
Backup> stop
SFS backup ERROR V-288-0 Cannot stop, some backup jobs are running.

Chapter 13

Configuring SFS Dynamic Storage Tiering


This chapter includes the following topics:

About SFS Dynamic Storage Tiering (DST)
How SFS uses Dynamic Storage Tiering
About policies
About adding tiers to file systems
Removing a tier from a file system
About configuring a mirror on the tier of a file system
Listing all of the files on the specified tier
Displaying a list of DST file systems
Displaying the tier location of a specified file
About configuring the policy of each tiered file system
Relocating a file or directory of a tiered file system
About configuring schedules for all tiered file systems
Displaying files that will be moved by running a policy

About SFS Dynamic Storage Tiering (DST)


The SFS Dynamic Storage Tiering (DST) feature makes it possible to allocate two tiers of storage to a file system.

264

Configuring SFS Dynamic Storage Tiering About SFS Dynamic Storage Tiering (DST)

The following features are part of the SFS Dynamic Storage Tiering Solution:

Relocate files between primary and secondary tiers automatically as files age and become less business critical.
Promote files from a secondary storage tier to a primary storage tier based on I/O temperature.
Retain original file access paths to eliminate operational disruption for applications, backup procedures, and other custom scripts.
Allow you to manually move folders, files, and other data between storage tiers.
Enforce policies that automatically scan the file system and relocate files that match the appropriate tiering policy.

In SFS, there are two predefined tiers for storage:


Current active tier 1 (primary) storage.
Tier 2 (secondary) storage for aged or older data.

To configure SFS DST, add tier 2 (secondary) storage to the configuration. Specify where the archival storage will reside (storage pool) and the total size. Files can be moved from the active storage after they have aged for a specified number of days, depending on the policy selected. The number of days for files to age (not accessed) before relocation can be changed at any time.

Note: An aged file is a file that exists without being accessed.

Figure 13-1 depicts the features of SFS and how it maintains application transparency.


Figure 13-1    Dynamic Storage Tiering

(The figure shows a single file system namespace, /one-file-system, with directories such as /sales, /financial, /development, /current, and /forecast, whose storage is spread transparently across a mirrored primary tier and a RAID-5 secondary tier.)

In Veritas Volume Manager (VxVM) terms, every SFS file system is a multi-volume file system (one file system resides on two volumes). The DST tiers are predefined to simplify the interface. When an administrator wants to add storage tiering, a second volume is added to the volume set, and the existing file system is encapsulated around all of the volumes in the file system.

This chapter discusses the SFS storage commands that you use to configure tiers on your file systems. The Storage commands are defined in Table 13-1. You log in to the administrative console (as master, system-admin, or storage-admin) and enter Storage> mode to access the commands. For login instructions, go to About using the SFS command-line interface.

266

Configuring SFS Dynamic Storage Tiering How SFS uses Dynamic Storage Tiering

Table 13-1    Storage mode commands

tier add

Adds different types of storage tiers to the file system. See About adding tiers to file systems on page 268.

tier remove

Removes a tier from a file system. See Removing a tier from a file system on page 270.

tier addmirror

Adds a mirror to a tier of a file system. See About configuring a mirror on the tier of a file system on page 271.

tier rmmirror

Removes a mirror from a tier of a file system. See About configuring a mirror on the tier of a file system on page 271.

tier listfiles

Lists all of the files on the specified tier. See Listing all of the files on the specified tier on page 273.

tier mapfile

Displays the tier location of a specified file. See Displaying the tier location of a specified file on page 274.

tier policy

Configures the policy of each tiered file system. See About configuring the policy of each tiered file system on page 274.

tier relocate

Relocates a file or directory. See Relocating a file or directory of a tiered file system on page 277.

tier schedule

Creates schedules for all tiered file systems. See About configuring schedules for all tiered file systems on page 277.

tier query

Displays a list of files that will be moved by running a policy. See Displaying files that will be moved by running a policy on page 280.

How SFS uses Dynamic Storage Tiering


SFS provides two types of tier:

Primary tier

Configuring SFS Dynamic Storage Tiering About policies

267

Secondary Tier

Each newly created file system has only one primary tier initially. This tier cannot be removed. For example, the following operations are applied to the primary tier:
Storage> fs addmirror Storage> fs growto Storage> fs shrinkto

The Storage> tier commands manage file system DST tiers. All Storage> tier commands take a file system name as an argument and perform operations on the combined construct of that file system. The SFS file system default is to have a single storage tier. An additional storage tier can be added to enable storage tiering. A file system can only support a maximum of two storage tiers.
Storage> tier commands can be used to perform the following:

Adding/removing/modifying the secondary tier
Setting policies
Scheduling policies
Locating tier locations of files
Listing files that are located on the primary or secondary tier
Moving files from the secondary tier to the primary tier

About policies
Each tier can be assigned a policy. The policies include:

Specify on which tier (primary or secondary) new files are created.
Relocate files from the primary tier to the secondary tier based on the number of days of inactivity of a file.
Relocate files from the secondary tier to the primary tier based on the access temperature of the file.

268

Configuring SFS Dynamic Storage Tiering About adding tiers to file systems

About adding tiers to file systems


You can add different types of tiers to file systems.

Table 13-2    Tier add commands

tier add simple

Adds a second tier to a file system. The storage type of the second tier is independent of the protection level of the first tier. See To add a second tier to a file system on page 268.

tier add mirrored

Adds a mirrored second tier to a file system. See To add a mirrored tier to a file system on page 268.

tier add striped

Adds a striped second tier to a file system. See To add a striped tier to a file system on page 269.

tier add mirrored-stripe

Adds a mirrored-striped second tier to a file system. See To add a mirrored-striped tier to a file system on page 269.

tier add striped-mirror

Adds a striped-mirror second tier to a file system. See To add a striped-mirror tier to a file system on page 269.

Adding tiers to a file system


To add a second tier to a file system

To add a tier to a file system where the volume layout is "simple" (concatenated), enter the following:
Storage> tier add simple fs_name size pool1[,disk1,...]

For definitions of the command variables, go to Table 13-3.

To add a mirrored tier to a file system

To add a mirrored tier to a file system, enter the following:


Storage> tier add mirrored fs_name size nmirrors pool1[,disk1,...] [protection=disk|pool]

For definitions of the command variables, go to Table 13-3.

For example:

Storage> tier add mirrored fs1 100M 2 pool3,pool4
100% [#] Creating mirrored secondary tier of filesystem


To add a striped tier to a file system

To add a striped tier to a file system, enter the following:


Storage> tier add striped fs_name size ncolumns pool1[,disk1,...] [stripeunit=kilobytes]

For definitions of the command variables, go to Table 13-3.

To add a mirrored-striped tier to a file system

To add a mirrored-striped tier to a file system, enter the following:


Storage> tier add mirrored-stripe fs_name size nmirrors ncolumns pool1[,disk1,...] [protection=disk|pool] [stripeunit=kilobytes]

For definitions of the command variables, go to Table 13-3.

To add a striped-mirror tier to a file system

To add a striped-mirror tier to a file system, enter the following:


Storage> tier add striped-mirror fs_name size nmirrors ncolumns pool1[,disk1,...] [protection=disk|pool] [stripeunit=kilobytes]

For definitions of the command variables, go to Table 13-3.

Table 13-3    Definitions of tier add command variables

fs_name

Specifies the name of the file system to which the tier will be added. If the specified file system does not exist, an error message is displayed.

size

Specifies the size of the tier to be added to the file system (for example, 10m, 10M, 25g, 100G).

ncolumns

Specifies the number of columns to add to the striped tiered file system.

nmirrors

Specifies the number of mirrors to be added to the tier for the specified file system.

pool1[,disk1,...]

Specifies the pool(s) or disk(s) that will be used for the specified tiered file system. If the specified pool or disk does not exist, an error message is displayed. You can specify more than one pool or disk by separating the names with commas, but do not include a space between the comma and the name. The disk needs to be part of the pool, or an error message is displayed.

270

Configuring SFS Dynamic Storage Tiering Removing a tier from a file system

Table 13-3    Definitions of tier add command variables (continued)

protection

If no protection level is specified, disk is the default protection level. The protection level of the second tier is independent of the protection level of the first tier. Available options are:

disk - If disk is entered for the protection field, then mirrors will be created on separate disks. The disks may or may not be in the same pool.

pool - If pool is entered for the protection field, then mirrors will be created in separate pools. If not enough space is available, then the file system will not be created.

stripeunit=kilobytes

Specifies a stripe width of 128K, 256K, 512K, 1M, or 2M. The default stripe width is 512K.
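For instance, a striped second tier with two columns might be added as follows (the file system name, size, pool name, and progress output here are illustrative, following the pattern of the mirrored example):

Storage> tier add striped fs1 500M 2 pool3 stripeunit=512
100% [#] Creating striped secondary tier of filesystem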

Removing a tier from a file system


The Storage> tier remove command removes a tier from the file system and releases the storage used by the file system back to the storage pool. This command requires that the file system be online and that no data resides on the secondary tier. If the storage tier to be removed contains any data, the tier cannot be removed from the file system.

To remove a tier from a file system

To remove a tier from a file system, enter the following:


Storage> tier remove fs_name

where fs_name specifies the name of the tiered file system that you want to remove. For example:
Storage> tier remove fs1
Storage>

Configuring SFS Dynamic Storage Tiering About configuring a mirror on the tier of a file system

271

About configuring a mirror on the tier of a file system


These commands add or remove mirrors on the tier of a file system.

Table 13-4    Tier mirror commands

tier addmirror

Adds a mirror to a tier of a file system. See To add a mirror to a tier of a file system on page 271.

tier rmmirror

Removes a mirror from a tier of a file system.

Note: For a striped-mirror file system, if any of the disks are bad, this command disables the mirrors on the tiered file system for which the disks have failed. If no disks have failed, SFS chooses a mirror to remove from the tiered file system.

See To remove a mirror from a tier of a file system on page 272.

Configuring a mirror to a tier of a file system


To add a mirror to a tier of a file system

To add a mirror to a tier of a file system, enter the following:


Storage> tier addmirror fs_name pool1[,disk1,...] [protection=disk|pool]

fs_name

Specifies the file system to which the mirror will be added. If the specified file system does not exist, an error message is displayed.

pool1[,disk1,...]

Specifies the pool(s) or disk(s) that will be used as a mirror for the specified tiered file system. You can specify more than one pool or disk by separating the names with commas, but do not include a space between the comma and the name. The disk needs to be part of the pool, or an error message is displayed.


protection

If no protection level is specified, disk is the default protection level. Available options are:

disk - If disk is entered for the protection field, then mirrors will be created on separate disks. The disks may or may not be in the same pool.

pool - If pool is entered for the protection field, then mirrors will be created in separate pools. If not enough space is available, then the mirror will not be created.

For example:
Storage> tier addmirror fs1 pool5
100% [#] Adding mirror to secondary tier of filesystem

To remove a mirror from a tier of a file system

To remove a mirror from a tier of a file system, enter the following:


Storage> tier rmmirror fs_name

where fs_name specifies the name of the tiered file system from which you want to remove a mirror. For example:
Storage> tier rmmirror fs1
Storage>

This command provides another level of detail for the remove mirror operation. You can use the command to specify which mirror you want to remove by specifying the pool name or disk name.

Note: The disk must be part of a specified pool.

Configuring SFS Dynamic Storage Tiering Listing all of the files on the specified tier

273

To remove a mirror from a tier spanning a specified pool or disk

To remove a mirror from a tier that spans a specified pool or disk, enter the following:
Storage> tier rmmirror fs_name [pool_or_disk_name]

fs_name

Specifies the name of the file system from which to remove a mirror. If the specified file system does not exist, an error message is displayed.

pool_or_disk_name

Specifies the pool or disk that the mirror of the tiered file system spans.

The syntax for the Storage> tier rmmirror command is the same for both pools and disks. If you remove a mirror using Storage> tier rmmirror fs1 abc, SFS first checks for a pool named abc; if one exists, SFS removes the mirror spanning that pool. If there is no pool named abc, then SFS removes the mirror that is on the abc disk. If there is no disk named abc either, an error message is displayed.

Listing all of the files on the specified tier


You can list all of the files that reside on either the primary tier or the secondary tier. Note: If the tier contains a large number of files, it may take some time before the output of this command is displayed. To list all of the files on the specified tier

To list all of the files on the specified tier, enter the following:
Storage> tier listfiles fs_name {primary|secondary}

where fs_name indicates the name of the tiered file system from which you want to list the files. You can specify to list files from either the primary or secondary tier. For example:
Storage> tier listfiles fs1 secondary
Storage>

274

Configuring SFS Dynamic Storage Tiering Displaying a list of DST file systems

Displaying a list of DST file systems


You can display a list of DST file systems using the Storage> fs list command. See Listing all file systems and associated information on page 120.

Displaying the tier location of a specified file


To display the tier location of a specified file

To display the tier location of a specified file, enter the following:


Storage> tier mapfile fs_name file_path

fs_name

Specifies the name of the file system on which the specified file resides. If the specified file system does not exist, an error message is displayed.

file_path

Specifies the file whose tier location to display. The path of the file is relative to the file system.

For example, to show the location of a.txt, which is in the root directory of the fs1 file system, enter the following:
Storage> tier mapfile fs1 /a.txt
Tier     Extent Type  File Offset  Extent Size
====     ===========  ===========  ===========
Primary  Data         0 Bytes      1.00 KB

About configuring the policy of each tiered file system


You can configure the policy of each tiered file system.

Table 13-5    Tier policy commands

Command               Definition

tier policy list      Displays the policy for each tiered file system. You can have one policy for each tiered file system. See To display the policy of each tiered file system on page 275.

tier policy modify    Modifies the policy of a tiered file system. See To modify the policy of a tiered file system on page 276.

tier policy run       Runs the policy of a tiered file system. See To run the policy of a tiered file system on page 276.

tier policy remove    Removes the policy of a tiered file system. See To remove the policy of a tiered file system on page 277.

Configuring the policy of each tiered file system


To display the policy of each tiered file system

To display the policy of each tiered file system, enter the following:
Storage> tier policy list

For example:
Storage> tier policy list
FS   Create on  Days  MinAccess Temp  PERIOD
==   =========  ====  ==============  ======
fs1  primary    2     3               4

Each tier can be assigned a policy. A policy assigned to a file system has three parts:

file creation         Specifies on which tier the new files are created.

inactive files        Indicates when a file has to be moved from the primary tier to the secondary tier. For example, if the days option of the tier is set to 10, and if a file has not been accessed for more than 10 days, then it is moved from the primary tier of the file system to the secondary tier.

access temperature    Measures the number of I/O requests to the file during the designated period. In other words, it is the number of read or write requests made to a file over a specified number of 24-hour periods, divided by the number of periods. If the access temperature of a file exceeds minacctemp (where the access temperature is calculated over the previously specified period), the file is moved from the secondary tier to the primary tier.
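As a rough illustration of the access-temperature arithmetic described above, the following Python sketch computes the temperature and the resulting promotion decision. The I/O counts and policy values here are hypothetical; the real calculation is performed internally by the DST policy engine.

```python
# Access temperature = total read/write requests to a file over the
# last `period` 24-hour intervals, divided by `period`.
# A file on the secondary tier is promoted to the primary tier when
# its access temperature exceeds the policy's minacctemp value.

def access_temperature(io_requests_per_day, period):
    """Average I/O requests per day over the last `period` days."""
    recent = io_requests_per_day[-period:]
    return sum(recent) / period

# Hypothetical policy values matching the example listing above.
minacctemp, period = 3, 4
daily_io = [0, 1, 9, 2, 8]          # I/O counts for the last five days

temp = access_temperature(daily_io, period)   # (1 + 9 + 2 + 8) / 4 = 5.0
promote = temp > minacctemp                   # 5.0 > 3 -> promote the file
print(temp, promote)
```

With these numbers, the file's access temperature (5.0) exceeds minacctemp (3), so the policy would move the file from the secondary tier to the primary tier.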


To modify the policy of a tiered file system

To modify the policy of a tiered file system, enter the following:


Storage> tier policy modify fs_name {primary|secondary} days minacctemp period

fs_name       The name of the tiered file system whose policy you want to modify.

tier          Causes new files to be created on the primary or the secondary tier. You need to input either primary or secondary.

days          The number of days after which the inactive files move from the primary to the secondary tier.

minacctemp    The minimum access temperature value for moving files from the secondary to the primary tier.

period        The number of past days used for calculating the access temperature.

For example:
Storage> tier policy modify fs1 primary 6 5 3
SFS fs SUCCESS V-288-0 Successfully modifies tiering policy for File system fs1

To run the policy of a tiered file system

To run the policy of a tiered file system, enter the following:


Storage> tier policy run fs_name

where fs_name indicates the name of the tiered file system for which you want to run a policy. For example:
Storage> tier policy run fs1
SFS fs SUCCESS V-288-0 Successfully ran tiering policy for File system fs1


To remove the policy of a tiered file system

To remove the policy of a tiered file system, enter the following:


Storage> tier policy remove fs_name

where fs_name indicates the name of the tiered file system from which you want to remove a policy. For example:
Storage> tier policy remove fs1
SFS fs SUCCESS V-288-0 Successfully removed tiering policy for File system fs1

You can run the policy of a tiered file system manually. This is similar to scheduling a job to run your policies, except that you initiate the run yourself. The Storage> tier policy run command moves the older files from the primary tier to the secondary tier according to the policy setting.

Relocating a file or directory of a tiered file system


To relocate a file or directory

To relocate a file or directory, enter the following:


Storage> tier relocate fs_name dirPath

fs_name    The name of the tiered file system in which you want to relocate a file or directory. The relocation of the file or directory is done from the secondary tier to the primary tier.

dirPath    Enter the relative path of the directory (dirPath) you want to relocate, or enter the relative path of the file (FilePath) that you want to relocate.

About configuring schedules for all tiered file systems


The tier schedule commands display, modify, and remove the schedules of tiered file systems.


Table 13-6    Tier schedule commands

Command                 Definition

tier schedule modify    Modifies the schedule of a tiered file system. See To modify the schedule of a tiered file system on page 279.

tier schedule list      Displays the schedules for all tiered file systems. You can have one schedule for each tiered file system. You cannot create a schedule for a non-existent or a non-tiered file system. See To display schedules for all tiered file systems on page 280.

tier schedule remove    Removes the schedule of a tiered file system. See To remove the schedule of a tiered file system on page 280.


Configuring schedules for all tiered file systems


To modify the schedule of a tiered file system

To modify the schedule of a tiered file system, enter the following:


Storage> tier schedule modify fs_name minute hour day_of_the_month month day_of_the_week

For example, enter the following:

Storage> tier schedule modify fs1 1 1 1 * * *
SFS fs SUCCESS V-288-0 Command 'tier schedule modify' executed successfully for fs1

fs_name             Specifies the file system where the schedule of the tiered file system resides. If the specified file system does not exist, an error message is displayed.

minute              This parameter may contain either an asterisk (*), which implies "every minute," or a numeric value between 0-59. You can enter */(0-59), a range such as 23-43, or just the *.

hour                This parameter may contain either an asterisk (*), which implies "run every hour," or a numeric value between 0-23. You can enter */(0-23), a range such as 12-21, or just the *.

day_of_the_month    This parameter may contain either an asterisk (*), which implies "run every day of the month," or a numeric value between 1-31. You can enter */(1-31), a range such as 3-22, or just the *.

month               This parameter may contain either an asterisk (*), which implies "run every month," or a numeric value between 1-12. You can enter */(1-12), a range such as 1-5, or just the *. You can also enter the first three letters of any month (use lowercase letters).

day_of_the_week     This parameter may contain either an asterisk (*), which implies "run every day of the week," or a numeric value between 0-6. Crontab interprets 0 as Sunday. You can also enter the first three letters of the day of the week (use lowercase letters).
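Because these fields follow crontab conventions, their validation can be sketched in Python as follows. This is an illustration only, not SFS code; it covers the asterisk, */step, single-value, and range forms described above (name abbreviations are omitted for brevity).

```python
# Validate one crontab-style schedule field: "*", "*/step", a single
# number, or a range "lo-hi", against the allowed bounds for that field
# (for example, 0-59 for minute, 0-23 for hour).

def valid_field(field, lo, hi):
    if field == "*":
        return True
    if field.startswith("*/"):                 # step form, e.g. "*/15"
        step = field[2:]
        return step.isdigit() and lo <= int(step) <= hi
    if "-" in field:                           # range form, e.g. "23-43"
        parts = field.split("-")
        if len(parts) != 2 or not all(p.isdigit() for p in parts):
            return False
        a, b = map(int, parts)
        return lo <= a <= b <= hi
    return field.isdigit() and lo <= int(field) <= hi

print(valid_field("*", 0, 59))      # True  - every minute
print(valid_field("23-43", 0, 59))  # True  - range within bounds
print(valid_field("61", 0, 59))     # False - out of range for minute
```

The same check applies to each field with its own bounds: hour 0-23, day_of_the_month 1-31, month 1-12, day_of_the_week 0-6.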


To display schedules for all tiered file systems

To display schedules for all tiered file systems, enter the following:
Storage> tier schedule list [fs_name]

where fs_name indicates the name of the tiered file system whose schedule you want to display. For example:

Storage> tier schedule list
FS   Minute  Hour  Day  Month  WeekDay
===  ======  ====  ===  =====  =======
fs1  1       1     1    *      *

To remove the schedule of a tiered file system

To remove the schedule of a tiered file system, enter the following:


Storage> tier schedule remove fs_name

where fs_name is the name of the tiered file system from which you want to remove a schedule. For example:
Storage> tier schedule remove fs1
SFS fs SUCCESS V-288-0 Command tier schedule remove executed successfully for fs1

Displaying files that will be moved by running a policy


You can display the list of files that will be moved by running a policy. This is very useful as a "what if" type of analysis. The command does not physically move any file blocks.


To display a list of files that will be moved by running a policy

To display a list of files that will be moved by running a policy, enter the following:
Storage> tier query fs_name

where fs_name is the name of the tiered file system for which you want to display a list of files that will be moved by running a policy. For example:
Storage> tier query fs1
/a.txt
/b.txt
/c.txt
/d.txt


Chapter 14

Configuring system information


This chapter includes the following topics:

About system commands
About setting the clock commands
About configuring the locally saved configuration files
Using the more command
About coordinating cluster nodes to work with NTP servers
Displaying the system statistics
Using the swap command
About the option commands

About system commands


The system commands set or show the date and time of the system, and start, stop, or check the status of the NTP server. The system command class also allows you to display cluster-wide performance statistics, swap network interfaces, and enable or disable the more filter on output of the administrative console. It also contains the option commands, which display and configure tunable parameters. The system commands are listed in Table 14-1. To access the commands, log in to the administrative console (for master, system-admin, or storage-admin) and enter the System> mode. For login instructions, go to About using the SFS command-line interface.


Table 14-1    System mode commands

Command    Definition

clock      Sets or shows the date and time of the system, including setting time zones and displaying the list of regions. See About setting the clock commands on page 284.

config     Imports or exports the SFS configuration settings. See About configuring the locally saved configuration files on page 288.

more       Enables, disables, or checks the status of the more filter. See Using the more command on page 292.

ntp        Sets the Network Time Protocol (NTP) server on all of the nodes in the cluster. See About coordinating cluster nodes to work with NTP servers on page 292.

stat       Displays the system, Dynamic Multipathing (DMP), and process-related node-wide statistics. See Displaying the system statistics on page 294.

swap       Swaps two network interfaces of a node in a cluster. See Using the swap command on page 295.

option     Adjusts a variety of tunable variables that affect the global SFS settings. See Using the option commands on page 299.

About setting the clock commands


These commands set or show the date and time of the system, including setting time zones and displaying the list of regions.

Table 14-2    Clock commands

Command           Definition

clock show        Displays the current system date and time. See To display the current date and time of the system on page 285.

clock set         Sets the system date and time. See To set the system date and time on page 286.

clock timezone    Sets the time zone for the system. Note: This command only accepts the name of a city or GMT (Greenwich Mean Time). See To set the time zone and region for the system on page 287.

clock regions     Sets the region for the system. See To set the region for the system on page 287.

Setting the clock commands


To display the current date and time of the system

To display the current system date and time, enter the following:
System> clock show

For example:
System> clock show
Fri Feb 20 12:16:30 PST 2009

You can set the current date and time of the system on all of the nodes in the cluster.


To set the system date and time

To set the system date and time, enter the following:


System> clock set time day month year

time     HH:MM:SS, using a 24-hour clock. Pacific Daylight Time (PDT) is the time zone used for the system. Greenwich Mean Time (GMT) is the time zone used for the BIOS.

day      1..31

month    January, February, March, April, May, June, July, August, September, October, November, December

year     YYYY

For example:

System> clock set 12:00:00 17 July 2009
.Done. Fri Jul 17 12:00:00 PDT 2009
System>


To set the time zone and region for the system

To set the time zone for the system, enter the following:
System> clock timezone timezone

To reset the time zone on your system, enter the following:


System> clock timezone region

The system will reset to the time zone for that specific region. For example:
System> clock show
Thu Apr 3 09:40:26 PDT 2008
System> clock timezone GMT
Setting time zone to: GMT ..Done.
Thu Apr 3 16:40:37 GMT 2008
System> clock show
Thu Apr 3 16:40:47 GMT 2008
System> clock timezone Los_Angeles
Setting time zone to: Los_Angeles ..Done.
Thu Apr 3 09:41:06 PDT 2008
System> clock show
Thu Apr 3 09:41:13 PDT 2008

To set the region for the system

To set the region for the system, enter the following:


System> clock regions [region]


region    Specifies the region for the system. Valid values include:
          Africa
          America
          Asia
          Australia
          Canada
          Europe
          GMT-offset (this includes GMT, GMT +1, GMT +2)
          Pacific
          US

For example:

System> clock regions US

The software responds with the areas included in the US region:

System> clock regions US
Alaska  Aleutian  Arizona  Central  East-Indiana  Eastern
Hawaii  Indiana-Starke  Michigan  Mountain  Pacific  Samoa

About configuring the locally saved configuration files


Table 14-3    Configuration commands

Command                 Definition

config list             Views locally saved configuration files. See To list configuration settings on page 289.

config export local     Exports configuration settings locally. See To export configuration settings either locally or remotely on page 290.

config export remote    Exports configuration settings remotely. See To export configuration settings either locally or remotely on page 290.

config import local     Imports configuration settings locally. Warning: Running the System> config import command overwrites all of your existing configuration settings except the cluster name. See To import configuration settings either locally or remotely on page 290.

config import remote    Imports configuration settings remotely. Warning: Running the System> config import command overwrites all of your existing configuration settings except the cluster name. See To import configuration settings either locally or remotely on page 290.

config delete           Deletes the locally saved configuration file. See To delete the locally saved configuration file on page 291.

Configuring the locally saved configuration files


To list configuration settings

To view locally saved configuration files, enter the following:


System> config list


To export configuration settings either locally or remotely

To export configuration settings locally, enter the following:


System> config export local file_name

For example:
System> config export local 2007_July_20

To export configuration settings remotely, enter the following:


System> config export remote URL

For example:

System> config export remote ftp://admin@ftp.docserver.symantec.com/configs/config1.tar.gz
Password: *******

file_name    Specifies the saved configuration file.

URL          Specifies the URL of the export file (supported protocols are FTP and SCP).

You can import the configuration settings saved in a local file or saved to a remote machine specified by a URL.

To import configuration settings locally, enter the following:


System> config import local file_name {network|admin|all| report|system|cluster_specific|all_except_cluster_specific| nfs|cifs|ftp|backup|replication|storage_schedules}

For example:

System> config import local 2008_July_20 network
Backup of current configuration was saved as 200907150515
network configuration was imported
Configuration files are replicated to all the nodes

To import configuration settings remotely, enter the following:


System> config import remote URL {network|admin|all| report|system|cluster_specific|all_except_cluster_specific| nfs|cifs|ftp|backup|replication|storage_schedules}


For example:

System> config import remote ftp://user1@server.com/home/user1/2008_July_20.tar.gz report
Password: *******

file_name    Specifies the saved configuration file.

URL          Specifies the saved configuration at a remote machine specified by a URL.

Available import configuration options are:

network - Imports DNS, LDAP, NIS, and nsswitch settings (does not include IP).
admin - Imports the list of users and passwords.
all - Imports all configuration information.
report - Imports report settings.
system - Imports NTP settings.
cluster_specific - Imports public IP addresses, virtual IP addresses, and console IP addresses. Be careful before using this import option. The network connection to the console server will be lost after performing an import. You need to reconnect to the console server after importing this configuration option.
all_except_cluster_specific - Imports all configuration information except for cluster-specific information.
nfs - Imports NFS settings.
cifs - Imports CIFS settings.
ftp - Imports the FTP setting.
backup - Imports the NBU client and NDMP configuration, excluding the virtual-name and virtual-ip.
replication - Imports replication settings.
storage_schedules - Imports dynamic storage tiering (DST) and automated snapshot schedules.

To delete the locally saved configuration file

To delete the locally saved configuration file, enter the following:


System> config delete file_name

where file_name specifies the locally saved configuration file that you want to delete.


Using the more command


The System> more command enables, disables, or checks the status of the more filter. The default setting is enable, which lets you page through the text one screen at a time.

To use the more command, enter the following:


System> more enable|disable|status

enable     Enables the more filter on all of the nodes in the cluster.

disable    Disables the more filter on all of the nodes in the cluster.

status     Displays the status of the more filter.

For example:
System> more status
Status : Enabled
System> more disable
SFS more Success V-288-748 more deactivated on console
System> more enable
SFS more Success V-288-751 more activated on console

About coordinating cluster nodes to work with NTP servers


You can set the Network Time Protocol (NTP) server on all of the nodes in the cluster. The Storage Foundation Cluster File Server (SFCFS) configuration recommends setting the NTP server, though doing so is optional.

Note: Use 127.127.1.0 as the IP address to select the local clock as the time source for the NTP server.


Table 14-4    NTP commands

Command           Definition

ntp servername    Sets the NTP server on all of the nodes in the cluster. See To set the NTP server on all of the nodes in the cluster on page 293.

ntp show          Displays NTP status and server name. See To display the status of the NTP server on page 293.

ntp enable        Enables the NTP server on all of the nodes in the cluster. See To enable the NTP server on page 294.

ntp disable       Disables the NTP server on all of the nodes in the cluster. See To disable the NTP server on page 294.

Coordinating cluster nodes to work with NTP servers


To set the NTP server on all of the nodes in the cluster

To set the NTP server on all of the nodes in the cluster, enter the following:
System> ntp servername server-name

where server-name specifies the name of the server or IP address you want to set. For example:
System> ntp servername ntp.symantec.com
Setting NTP server = ntp.symantec.com
..Done.

To display the status of the NTP server

To display NTP status and server name, enter the following:


System> ntp show

Example output:
System> ntp show
Status: Enabled
Server Name: ntp.symantec.com


To enable the NTP server

To enable the NTP server on all of the nodes in the cluster, enter the following:
System> ntp enable

For example:
System> ntp enable
Enabling ntp server: ntp.symantec.com
..Done.

To disable the NTP server

To disable the NTP server on all of the nodes in the cluster, enter the following:
System> ntp disable

For example:
System> ntp disable
Disabling ntp server:..Done.
System> ntp show
Status : Disabled
Server Name: ntp.symantec.com

Displaying the system statistics


The System> stat command displays the system, Dynamic Multipathing (DMP), and process-related node-wide statistics. The load in the displayed output is the load from the last 1, 5, and 15 minutes.


To display the system statistics

To display cluster-wide or node-wide statistics, enter the following:

System> stat sys|dmp|all|cluster [node]

sys        Displays the system-related statistics.

dmp        Displays the DMP-related statistics.

cluster    Displays the aggregate of the I/O and network performance from each node, averaged over the number of nodes in the cluster, to show the statistics at the cluster level. The node variable does not apply to this option.

all        Displays the system and DMP-related statistics of one node at a time in the cluster, or of all of the nodes in the cluster.

node       The name of the node in the cluster.

To view the cluster-wide network and I/O throughput, enter the following:

System> stat cluster
Gathering statistics...
Cluster wide statistics::::
=======================================
IO throughput      :: 0
Network throughput :: 1.205

Using the swap command


If you set up a single-node cluster and you cannot ping the gateway through the private or the public interface, the cables may have been attached incorrectly. To correct this problem, first switch the cables back to the correct connectors, and then run the System> swap command. For example, if the public switch is connected to 'priveth0' and the private switch to 'pubeth0,' the System> swap command switches the MAC addresses for 'priveth0' and 'pubeth0.' After running the System> swap command, all Secure Shell (SSH) connections hosted on the input interfaces terminate. You can check the status of the System> swap command using the history command. The System> swap command works only on a single-node cluster. No other service should be running.


Note: Do not use this command if you have exported CIFS/NFS shares.

To use the swap command, enter the following:


System> swap interface1 interface2

For example:
System> swap pubeth0 priveth0
All ssh connection(s) need to start again after this command.
Do you want to continue [Enter "y/yes" to continue]...
Check status of this command in history.
Wait.......

About the option commands


The option commands allow you to adjust a variety of tunable variables that affect the global SFS settings. The tunable variables that can be changed or displayed are listed in Table 14-5.

Note: Only system administrators with advanced knowledge of Dynamic Multipathing (DMP) I/O policies should use the System> option commands. For assistance, contact Technical Support.

Table 14-5    Option commands

Command               Definition

option show nfsd      Displays the number of Network File System (NFS) daemons for each node in the cluster. See Displaying the NFS daemons on page 299.

option modify nfsd    Modifies the number of NFS daemons on all of the nodes in the cluster. The range for the number of daemons you can modify is 16 to 1892. Warning: The option modify nfsd command overwrites the existing configuration settings. See Changing the NFS daemons on page 299.


Table 14-5    Option commands (continued)

Command                  Definition

option show dmpio        Displays the type of Dynamic Multipathing (DMP) I/O policy and the enclosure for each node in a cluster. See To display the DMP I/O policy on page 300.

option modify dmpio      Modifies the Dynamic Multipathing (DMP) I/O policy, corresponding to the enclosure, arrayname, and arraytype. Warning: Check the sequence before modifying the I/O policy. The policies need to be applied in the following sequence: arraytype, arrayname, and enclosure. The enclosure-based modification of the I/O policy overwrites the I/O policy set using the arrayname and the arraytype for that particular enclosure. In turn, the arrayname-based modification of the I/O policy overwrites the I/O policy set using the arraytype for that particular arrayname. See To change the DMP I/O policy on page 300.

option reset dmpio       Resets the Dynamic Multipathing (DMP) I/O policy setting for the given input (enclosure, arrayname, and arraytype). Use this command when you want to change the I/O policy from the previously set enclosure to arrayname. The settings hierarchy is enclosure, arrayname, and arraytype, so to modify the I/O policy at the arraytype level, you need to reset the arrayname and enclosure settings. Note: This command does not set the default I/O policy. See To reset the DMP I/O policy on page 301.

option show ninodes      Displays the ninodes cache size in the cluster. See To display the ninodes cache size on page 302.

option modify ninodes    Changes the cache size of the global inodes. If your system is caching a large number of metadata transactions, or if there is significant virtual memory manager usage, modifying some of the variables may improve performance. The range for the inode cache size is from 10000 to 2097151. Warning: The option modify ninodes command requires a cluster-wide reboot. See To change the ninodes cache size on page 302.

option show tunefstab    Displays the global value of the write_throttle parameter. See To display the tunefstab parameter on page 302.


Table 14-5    Option commands (continued)

Command                    Definition

option modify tunefstab    Modifies the global write_throttle parameter for all of the mounted file systems. The write_throttle parameter is useful in situations where a computer system combines a large amount of memory and slow storage devices. In this configuration, sync operations (such as fsync()) may take so long to complete that a system appears to hang. This behavior occurs because the file system is creating dirty buffers (in-memory updates) faster than they can be asynchronously flushed to disk without slowing system performance. Lowering the value of write_throttle limits the number of dirty buffers per file that a file system generates before flushing the buffers to disk. After the number of dirty buffers for a file reaches the write_throttle threshold, the file system starts flushing buffers to disk even if free memory is available. The default value of write_throttle is zero, which puts no limit on the number of dirty buffers per file. See To modify the tunefstab parameter on page 303.


Using the option commands


Displaying the NFS daemons

To display the NFS daemons, enter the following:


System> option show nfsd

For example:
System> option show nfsd
NODENAME  NUMBER_DAEMONS
--------  --------------
sfs_1     96
sfs_2     96

If you want to view your current enclosure names, use the following command:
Storage> disk list detail

For example:
Storage> disk list detail
Disk  Pool  Enclosure    Size    ID                                  Serial Number
====  ====  ===========  ======  ==================================  =============
sda   p1    OTHER_DISKS  10.00G  VMware%2C:VMware%20Virtual%20S:0:0  -

Changing the NFS daemons

To change the number of NFS daemons, enter the following:


System> option modify nfsd number [nodename]

For example:
System> option modify nfsd 97
System>

300

Configuring system information About the option commands

To display the DMP I/O policy

To display the DMP I/O policy, enter the following:


System> option show dmpio

For example:
NODENAME  TYPE       ENCLR/ARRAY  IOPOLICY
--------  ---------  -----------  --------
rama_01   arrayname  disk         balanced
rama_01   enclosure  disk         minimumq

To change the DMP I/O policy

To change the DMP I/O policy, enter the following:


System> option modify dmpio {enclosure enclr_name|arrayname array_name|arraytype {A/A|A/P|...}} iopolicy={adaptive|adaptiveminq|balanced|minimumq|priority| round-robin|singleactive}

The dmpio policy variables are the following:


enclosure enclr_name    Name of the enclosure, used to distinguish between arrays having the same array name.

arrayname array_name    Name of the array. Two physical array boxes of the same make will have the same array name.

arraytype array_type    A multipathing type of array. Use one of the following: active-active, active-active-A, active-active-A-HDS, active-active-A-HP, APdisk, active-passive, active-passive-C, active-passiveF-VERITAS, active-passiveF-T3PLUS, active-passiveF-LSI, active-passiveG, active-passiveG-C, Disk, CLR-A-P, CLR-A-PF

Configuring system information About the option commands

301

The iopolicy values are the following:

adaptive        In storage area network (SAN) environments, this option determines the paths that have the least delays, and schedules the I/O on paths that are expected to carry a higher load. Priorities are assigned to the paths in proportion to the delay.

adaptiveminq    The I/O is scheduled according to the length of the I/O queue on each path. The path with the shortest queue is assigned the highest priority.

balanced        Takes into consideration the track cache when balancing the I/O across paths.

minimumq        Uses a minimum I/O queue policy. The I/O is sent on paths that have the minimum number of I/O requests in the queue. This policy is suitable for low-end disks or JBODs where a significant track cache does not exist. This is the default policy for Active/Active (A/A) arrays.

priority        Assigns the path with the highest load-carrying capacity as the priority path. This policy is useful when the paths in a SAN have unequal performance, and you want to enforce load balancing manually.

round-robin     Sets a simple round-robin policy for the I/O. This is the default policy for Active/Passive (A/P) and Asynchronous Active/Active (A/A-A) arrays.

singleactive    The I/O is channeled through the single active path.

The optional attribute use_all_paths controls whether the secondary paths in an Asymmetric Active/Active (A/A-A) array are used for scheduling I/O requests in addition to the primary paths. The default setting is no, which disallows the use of the secondary paths.

To reset the DMP I/O policy

To reset the DMP I/O policy, enter the following:


System> option reset dmpio {enclosure enclr_name|arrayname array_name|arraytype {A/A|A/P|...}}
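The precedence that the dmpio settings follow (an enclosure setting overrides an arrayname setting, which overrides an arraytype setting) — and why a reset is needed before a lower-level setting can take effect again — can be sketched as a simple lookup. This Python model is illustrative only, not DMP's actual implementation, and the names in it are hypothetical:

```python
# Resolve the effective DMP I/O policy for a LUN: the most specific
# configured setting wins (enclosure > arrayname > arraytype), falling
# back to a built-in default when nothing is configured.

def effective_iopolicy(lun, settings, default="minimumq"):
    for level in ("enclosure", "arrayname", "arraytype"):
        policy = settings.get((level, lun[level]))
        if policy:
            return policy
    return default

# Hypothetical configuration state after two modify commands.
settings = {
    ("arraytype", "A/A"): "minimumq",
    ("arrayname", "disk"): "balanced",
}
lun = {"enclosure": "disk0", "arrayname": "disk", "arraytype": "A/A"}

print(effective_iopolicy(lun, settings))   # arrayname setting wins: balanced
settings[("enclosure", "disk0")] = "round-robin"
print(effective_iopolicy(lun, settings))   # enclosure now overrides: round-robin
```

In this model, deleting the ("enclosure", "disk0") entry — the analogue of option reset dmpio enclosure — is what lets the arrayname-level policy apply again, which mirrors the reset behavior the manual describes.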


To display the ninodes cache size

To display the ninodes cache size, enter the following:


System> option show ninodes

For example:
System> option show ninodes
INODE_CACHE_SIZE
----------------
2000343

To change the ninodes cache size

To change the ninodes cache, enter the following:


System> option modify ninodes number

For example:
System> option modify ninodes 2000343
SFS option WARNING V-288-0 This will require cluster wide reboot.
Do you want to continue (y/n)?

To display the tunefstab parameter

To display the tunefstab parameter, enter the following:


System> option show tunefstab

For example:
System> option show tunefstab
NODENAME   ATTRIBUTE        VALUE
--------   ---------        -----
sfs_01     write_throttle   Default
sfs_02     write_throttle   Default

Configuring system information About the option commands

303

To modify the tunefstab parameter

To modify the tunefstab parameter, enter the following:


System> option modify tunefstab write_throttle value

where value is the number you are assigning to the write_throttle parameter. For example:
System> option modify tunefstab write_throttle 20003
System> option show tunefstab
NODENAME   ATTRIBUTE        VALUE
--------   ---------        -----
sfs_01     write_throttle   20003
sfs_02     write_throttle   20003
System>


Chapter 15
Upgrading Storage Foundation Scalable File Server


This chapter includes the following topics:

About upgrading drivers
Displaying the current version of SFS
About installing patches

About upgrading drivers


The upgrade commands install or uninstall upgrades to the SFS software. The upgrades can be patches or drivers. The software is installed or uninstalled on all of the nodes. The upgrade commands are defined in Table 15-1. To access the commands, log into the administrative console (for master, system-admin, or storage-admin) and enter the Upgrade> mode. For login instructions, see About using the SFS command-line interface.

Note: The Upgrade> patch install command can also be used for DUD upgrades when a new node that you want to add to the cluster has a different set of driver requirements than the first node.


Table 15-1    Upgrade mode commands

show
    Displays the current version of SFS, the patch level, and the DUD upgrade(s). The Upgrade> show detail command displays information about major upgrades. Error messages are displayed if any of the nodes in the cluster do not have matching software versions, operating system packages, or DUD upgrade(s) installed. See Displaying the current version of SFS on page 307.

patch install
    Downloads the patch from the specified URL and installs it on all of the nodes. See About installing patches on page 308.

patch uninstall-upto
    Uninstalls the software upgrade from all of the nodes up to the specified version. See About installing patches on page 308.

patch sync
    Synchronizes the specified node. See About installing patches on page 308.

patch duduninstall
    Removes all of the driver updates previously added to the cluster and reverts to the original driver update image. See About installing patches on page 308.


Displaying the current version of SFS


To display the current version of SFS

To display the current version of SFS and the patch level, enter the following:
Upgrade> show

For example:
Upgrade> show
5.5 (Tue Aug 11 08:40:23 2009), Installed on Tue Aug 11 17:21:18 EDT 2009

To display the current version of SFS, the DUD upgrades, the patch level, and major upgrades, enter the following:
Upgrade> show detail

For example:
Upgrade> show detail
5.5SP1RP1 (Tue Dec 15 08:40:23 2009)
5.5SP1 (Tue Aug 11 08:40:23 2009), Installed on Tue Aug 11 17:21:18 EDT 2009
5.5SP1RP1 (Tue Dec 15 08:40:23 2009), Installed on Tue Dec 15 19:19:54 EDT 2009

Major Upgrade(s)
================
Upgraded from 5.5 to 5.5SP1 (Tue Aug 11 08:40:23 2009) on Tue Aug 11 17:21:18 EDT 2009


About installing patches


Table 15-2    Patch commands

patch install
    Downloads the patch from a specified URL and installs it on all of the nodes. The Upgrade> patch install command first synchronizes the nodes that have different software versions compared to the other nodes. If the remaining nodes (nodes other than the first node added into the cluster) have a different set of driver requirements, you can also use the same patch install command to add drivers to the driver update image present in the install server. The driver update image present in the install server acts as a Driver Update Disk (DUD) image during the installation for any node using the PXE boot. To use the same patch install interface for the DUD update process, along with the URL path of the DUD patch (the DUD ISO), you must specify the list of drivers you want to add.

    Note: After you have installed, uninstalled, or synchronized a new SFS patch into your cluster, the list of available commands may have changed. Re-login to the CLI to access the updated features.

    See To install the latest patches on your system on page 310.

patch uninstall-upto
    Uninstalls the software upgrade from all of the nodes up to the specified version. You must specify the versions of software up to the version that you want to uninstall. This command first synchronizes the nodes that have different software versions compared to other nodes in the cluster. See To uninstall patches on page 311.

patch sync
    Forcefully synchronizes the specified node, bringing it up to the currently installed software version of the remaining nodes in the cluster. You only need to install the patch on one node, and then run the Upgrade> patch sync command to synchronize all of the nodes. See To forcefully synchronize software upgrades on a node on page 311.

patch duduninstall
    Removes all of the driver updates previously added to the cluster and reverts to the original driver update image. This process does not remove the drivers that were added during the installation of the first node. The DUD uninstall process is not incremental, unlike the DUD upgrade process, where you can add different drivers by using the patch install command multiple times. See To uninstall driver updates on page 312.


Installing patches
To install the latest patches on your system

To install the latest patches, enter the following:


Upgrade> patch install URL [driver_list]

For example, you can download a DUD ISO from an HTTP server with authentication and install it. The following output shows the update of the driver update image (on all of the nodes present in the cluster) with the tg3.ko driver of version 3.71b and the megaraid_sas.ko driver of version 00.00.03.16:

Upgrade> patch install http://admin@docserver.symantec.com/DRIVER_UPDATES/SFS_DUD.iso tg3.ko:3.71b,megaraid_sas.ko:00.00.03.16
Enter password for user 'admin': **********
Please wait. Upgrade is in progress...
Patch upgraded on all nodes of cluster.

URL
    The URL of the location from which to download the software patch. The URL supports the HTTP, FTP, and SCP protocols. A username and password are supported for the HTTP and FTP protocols.

driver_list
    An optional variable that you can use for DUD upgrades. Enter a comma-separated list of drivername:versionnumber pairs when you want to apply the DUD upgrade. You can exit the patch DUD upgrade process by entering no at the prompt. For example:

    Upgrade> patch install scp://support@10.209.106.101:/home/support/SFS.iso
    Enter password for user 'support':********
    No input driver given...
    List of drivers present in DUD::
    Drivername:Versionnumber
    **************************
    e1000.ko:7.6.9.1
    tg3.ko:3.71b
    megaraid_sas.ko:00.00.03.16
    Please enter driver list you want to add [Enter "No" to exit from here]:: no
    Sorry...Patch driverupgrade process is terminated by you.
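The driver_list argument follows a simple grammar: comma-separated drivername:versionnumber pairs. As a quick illustration of that format, here is a hypothetical parser, not part of the SFS software:

```python
def parse_driver_list(driver_list):
    """Parse a comma-separated list of drivername:versionnumber pairs,
    e.g. "tg3.ko:3.71b,megaraid_sas.ko:00.00.03.16", into a dict."""
    drivers = {}
    for entry in driver_list.split(","):
        entry = entry.strip()
        if not entry:
            continue  # tolerate stray commas
        name, sep, version = entry.partition(":")
        if not sep or not name or not version:
            raise ValueError(f"malformed driver entry: {entry!r}")
        drivers[name] = version
    return drivers
```

For instance, parsing the driver_list from the example above yields a mapping of the two driver names to their versions.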


To uninstall patches

To uninstall the software upgrades, enter the following:


Upgrade> patch uninstall-upto version

where version specifies the versions of software up to the version that you want to uninstall. For example:
Upgrade> patch uninstall-upto 5.5RP1
OK Completed

To forcefully synchronize software upgrades on a node

To forcefully synchronize software upgrades on a node, enter the following:


Upgrade> patch sync nodename

where nodename specifies the node that needs to be synchronized to the same software version as the one currently installed in the cluster. For example:
Upgrade> patch sync node2
...............
Syncing software upgrades on node2...
SFS patch SUCCESS V-288-122 Patch sync completed.

The Upgrade> patch duduninstall command lists all of the drivers updated on the cluster and asks you to confirm the uninstall of each one by entering y or yes. If you decide not to uninstall the drivers, press any key other than y or yes to exit the uninstall process.


To uninstall driver updates

To uninstall the driver updates, enter the following:


Upgrade> patch duduninstall

You will be asked to confirm the uninstallation of the drivers. For example:
Upgrade> patch duduninstall
DUD updated with following drivers ::
===================================
tg3.ko:3.71b
megaraid_sas.ko:00.00.03.16
Do you really want to continue with uninstallation [Enter "y/yes" to continue]:: y
Uninstalling DUD...
DUD uninstall completed successfully.

Chapter 16

Troubleshooting
This chapter includes the following topics:

About troubleshooting commands
Retrieving and sending debugging information
About the iostat command
About excluding the PCI ID prior to the SFS installation
Testing network connectivity
About the services command
Using the support login
About network traffic details
Accessing processor activity
Using the traceroute command

About troubleshooting commands


This chapter discusses the SFS troubleshooting commands. You use these commands to check the status of the nodes and the SFS cluster. The troubleshooting mode commands are listed in Table 16-1. To access a particular troubleshooting submode command, log into the administrative console (for master, system-admin, or storage-admin) and enter the appropriate mode. For login instructions, see About using the SFS command-line interface.


Table 16-1    Support mode commands

debuginfo
    Retrieves SFS debug information from an SFS node and sends the information to a server using an external FTP or SCP server. See Retrieving and sending debugging information on page 314.

iostat
    Generates CPU statistical information and the device utilization report. See About the iostat command on page 315.

pciexclusion
    Excludes Peripheral Component Interconnect (PCI) IDs from the nodes in a cluster prior to installing the SFS software. The PCI IDs must be excluded prior to the PXE boot. See About excluding the PCI ID prior to the SFS installation on page 317.

network> ping
    Tests whether a particular host or gateway is reachable across an IP network. See Testing network connectivity on page 321.

services
    Brings services that are OFFLINE or FAULTED back into the ONLINE state. See Using the services command on page 323.

support login
    Reports SFS technical support issues. See Using the support login on page 325.

tethereal
    Exports the network traffic details to a specified location, or displays captured packet data from a live network. See About network traffic details on page 325.

top
    Displays the dynamic real-time view of currently running tasks. See Accessing processor activity on page 327.

traceroute
    Displays all of the intermediate nodes on a route between two nodes. See Using the traceroute command on page 328.

Retrieving and sending debugging information


You can retrieve SFS debug information from an SFS node and send the information to a server using an external FTP or SCP server.


To upload debugging information

To upload debugging information from a specified node to an external server, enter the following:
Support> debuginfo nodename debug-url

For example:
Support> debuginfo sfsnode scp://john@abc.com:/tmp

nodename
    Specifies the node from which to collect the debugging information.

debug-url
    Specifies the URL where you want to upload the debugging information. Depending on the type of server to which you are uploading, use one of the following example URL formats:

    ftp://admin@ftp.docserver.company.com/patches/
    scp://root@server.company.com:/tmp/

    If debug-url specifies a remote directory, the default filename sfsfs_debuginfo.tar.gz is used.

About the iostat command


The iostat commands display the CPU and I/O statistics.

Table 16-2    Iostat commands

iostat cpu
    Generates CPU statistical information. When the command is used for the first time, it contains information since the system was booted. Each subsequent report shows the details since the last report. See To use the iostat command on page 316.

iostat device
    Generates the device utilization report. This information can be used to balance the load among the physical disks by modifying the system configuration. When this command is executed for the first time, it contains information since the system was booted. Each subsequent report shows the details since the last report. There are two options for this command. See To use the iostat device command on page 317.


Generating CPU and device utilization reports


To use the iostat command

To use the iostat cpu command, enter the following:


Support> iostat cpu [nodename] [interval] [count]

nodename
    The name of the node from which the report is generated. The default is console for the Management Console.

interval
    The duration between each report, in seconds. The default is 2 seconds.

count
    The number of reports generated at the entered interval. The default is one report.

For example, to generate the CPU utilization report of a node, enter the following:

Support> iostat cpu sfs_01
Linux 2.6.16.60-0.21-smp (sfs_01)    07/09/09

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           1.86   0.07     4.53     0.13    0.00  93.40


To use the iostat device command

To use the iostat device command, enter the following:


Support> iostat device [nodename] [dataunit] [interval] [count]

nodename
    The name of the node from which the report is generated. The default is console for the Management Console.

dataunit
    Lets you generate the report in block(s) or kilobyte(s). The default is block(s).

interval
    The duration between each report, in seconds. The default is two seconds.

count
    The number of reports generated at the entered interval. The default is one report.

For example, to generate a device utilization report of a node, enter the following:
Support> iostat device sfs_01 Blk
Linux 2.6.16.60-0.21-smp (sfs_01)    07/09/09

Device:   tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
hda      4.82        97.81        86.11    1410626    1241992
sda      1.95        16.83         4.05     242712      58342
hdc      0.00         0.01         0.00        136          0

About excluding the PCI ID prior to the SFS installation


During the initial SFS software installation, you excluded certain PCI IDs in your cluster to reserve them for future use. This action applied only to the first node. Use the commands in this section to exclude additional PCI IDs from the second or subsequent nodes before you install the SFS software on them. The PXE boot installation excludes the same PCI IDs that you entered during the initial SFS software installation on the second or subsequent nodes. Before the PXE boot, you can delete PCI IDs from being excluded on the second node by using the Support> pciexclusion delete command.

318

Troubleshooting About excluding the PCI ID prior to the SFS installation

Note: If you decide to include the PCI IDs that you previously excluded, you need to reinstall SFS on your cluster.

Table 16-3    PCI exclusion commands

pciexclusion show
    Displays the list of PCI IDs that have been excluded during the initial SFS installation. The status of each PCI ID is designated by a y (yes) or n (no). The yes option means it has been excluded; the no option means it has not yet been excluded. See To display the list of excluded PCI IDs on page 319.

pciexclusion add
    Allows you to add specific PCI IDs for exclusion. You must enter the values in this command before the PXE boot installation for the PCI IDs to be excluded from the second node installation. See To add a PCI ID for exclusion on page 320.

pciexclusion delete
    Deletes a specified PCI ID from being excluded. If you do not want the same PCI ID excluded on additional nodes, you must delete it here. You must perform this command before doing the PXE boot installation. See To delete a PCI ID on page 320.


Excluding the PCI IDs from the cluster


To display the list of excluded PCI IDs

To display the list of PCI IDs that you excluded during the SFS installation, enter the following:
Support> pciexclusion show
PCI ID         EXCLUDED   NODENAME/UUID
------         --------   -------------
0000:0e:00.0   y          sfs_1
0000:0e:00.0   y          a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820
0000:04:00:1   n

PCI ID
    The PCI IDs that you entered to be excluded during the initial SFS installation. The PCI ID is made up of the following: [[<domain>]:][[<bus>]:][<slot>][.[<func>]]

EXCLUDED
    (y) means the PCI ID has been excluded. (n) means the PCI ID has not been excluded.

NODENAME
    The node names corresponding to the PCI IDs.

UUID
    The ID of a node that is in the installed state but not yet added into the cluster.


To add a PCI ID for exclusion

To add a PCI ID for exclusion, enter the following:


Support> pciexclusion add pci_list

where pci_list is a comma-separated list of PCI IDs. The format of the PCI ID is in hexadecimal bits (XXXX:XX:XX.X). For example:
Support> pciexclusion add 0000:00:09.8
Support> pciexclusion show
PCI ID         EXCLUDED   NODENAME/UUID
------         --------   -------------
0000:0e:00.0   y          sfs_1
0000:0e:00.0   y          a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820
0000:04:00:1   n
0000:00:09.0   n
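The canonical PCI ID format used here is XXXX:XX:XX.X in hexadecimal (domain:bus:slot.function). A quick way to sanity-check an ID before adding it is a regular expression; the helper below is a hypothetical illustration, not part of the SFS CLI. Note that a few of the listed IDs use a colon before the function digit instead of the canonical dot, which this strict check would reject.

```python
import re

# Canonical form: four hex digits (domain), colon, two hex digits (bus),
# colon, two hex digits (slot), dot, one hex digit (function),
# e.g. 0000:00:09.8.
_PCI_ID = re.compile(
    r"^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-9a-fA-F]$"
)

def is_valid_pci_id(pci_id):
    """Return True if pci_id matches the canonical XXXX:XX:XX.X format."""
    return bool(_PCI_ID.match(pci_id))
```

For example, "0000:0e:00.0" passes the check, while a truncated ID such as "0000:00:09" does not.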

To delete a PCI ID

To delete a PCI ID that you excluded during the SFS installation so that the PCI ID is now available for use, enter the following:
Support> pciexclusion delete pci

where pci is the PCI ID in hexadecimal bits (for example, XXXX:XX:XX.X). You can only delete a PCI ID exclusion that was not already used on any of the nodes in the cluster. In the following example, you cannot delete the PCI IDs with the NODENAME/UUID sfs_1 or a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820:

Support> pciexclusion delete 0000:04:00:1
Support> pciexclusion show
PCI ID         EXCLUDED   NODENAME/UUID
------         --------   -------------
0000:0e:00.0   y          sfs_1
0000:0e:00.0   y          a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820
0000:00:09.0   n


Testing network connectivity


You can test whether a particular host or gateway is reachable across an IP network.

To use the ping command

To use the ping command, enter the following:


Network> ping destination [nodename] [devicename] [packets]

For example, you can ping host1 using node1:


Network> ping host1 node1

destination
    Specifies the host or gateway to send the information to. The destination field can contain either a DNS name or an IP address.

nodename
    Specifies the node to ping from. To ping from any node, use any in the nodename field. The nodename field is optional; if omitted, any node is chosen to ping from.

devicename
    Specifies the device through which you ping. To ping from any device in the cluster, use any in the devicename field.

packets
    Specifies the number of packets to send to the destination. If the packets field is omitted, five packets are sent to the destination by default. The packets field must contain an unsigned integer.

About the services command


The Support> services command lets you bring services that are OFFLINE or FAULTED back into the ONLINE state.

Note: If, after using the services command, a service is still offline or faulted, contact Technical Support.

These services include:

NFS server
CIFS server
Console service
Backup
NIC information
FS manager
IP addresses

Table 16-4    Services commands

services autofix
    Attempts to fix any service that is offline or faulted, running on all of the nodes in the cluster. See To fix any service fault on page 324.

services online
    Fixes a specific service. Enter the servicename, and this option attempts to bring the service back online. If the service is already online, no action is taken. If the service is a parallel service, an attempt is made to online the service on all nodes. If the service is a failover service, an attempt is made to online the service on any of the running nodes of the cluster. See To bring a service online on page 324.

services show
    Lists the state of the important services. The states of the IPs and file systems are only shown if they are not online. When the show option is used, the program attempts to online any services that are offline or faulted. There is a timeout of 15 minutes: if you run a services show command and then run the command again before 15 minutes have elapsed, the command does not attempt to online any services. See To display the state of the services on page 323.

services showall
    Lists the state of all of the services. When the showall option is used, the program attempts to online any services that are offline or faulted. There is a timeout of 15 minutes: if you run a services showall command and then run the command again before 15 minutes have elapsed, the command does not attempt to online any services. See To display the state of all of the services on page 324.


Using the services command


To display the state of the services

To display the important services running on the nodes, enter the following:
Support> services show
Verifying cluster state...........done

Service        sfs_01    sfs_02
-------        ------    ------
nfs            ONLINE    ONLINE
cifs           OFFLINE   OFFLINE
ftp            OFFLINE   OFFLINE
backup         ONLINE    OFFLINE
console        ONLINE    OFFLINE
nic_pubeth0    ONLINE    ONLINE
nic_pubeth1    ONLINE    ONLINE
fs_manager     ONLINE    ONLINE


To display the state of all of the services

To display all of the services running on the nodes, enter the following:
Support> services showall sfs 1 2 -------- -------ONLINE ONLINE OFFLINE OFFLINE OFFLINE OFFLINE ONLINE OFFLINE ONLINE OFFLINE ONLINE ONLINE ONLINE ONLINE ONLINE ONLINE OFFLINE ONLINE ONLINE OFFLINE OFFLINE ONLINE ONLINE OFFLINE ONLINE ONLINE ONLINE ONLINE ONLINE ONLINE

Service ------nfs cifs ftp backup console nic_pubeth0 nic_pubeth1 fs_manager 10.182.107.201 10.182.107.202 10.182.107.203 10.182.107.204 /vx/fs1 /vx/fs2 /vx/fs3

To fix any service fault

To fix any service fault, enter the following:


Support> services autofix
Attempting to fix service faults...........done

To bring a service online

To bring a service online on the nodes, enter the following:


Support> services online servicename

where servicename is the name of the service you want to bring online. For example:
Support> services online 10.182.107.203
Support>


Using the support login


There is a support login used for reporting SFS technical support issues.

Note: The support account is intended for Technical Support use only. It cannot be created by administrators.

To use the support login

Log in to the CLI as the support account by entering:


support

and then entering:


symantec

For example,
login as: support
Password:
Last login: Fri Dec 14 12:09:49 2007 from 172.16.113.118
sfs_1:~ #

After you have logged in as the support account, it is recommended that you change your password. See To change a user's password on page 34.

To use the supportuser commands, see About the support user on page 35.

About network traffic details


The tethereal command exports and displays network traffic data.

Table 16-5    Tethereal commands

tethereal export
    Exports the network traffic details to the specified location. See To use the tethereal command on page 326.

tethereal show
    Displays captured packet data from a live network. See To use the tethereal show command on page 327.


Exporting and displaying the network traffic details


To use the tethereal command

To use the tethereal export command, enter the following:


Support> tethereal export url [nodename] [interface] [count] [source]

url
    Provides the location to export the network traffic details. The default filename tethereal.log is used if a filename is not specified in the url.

nodename
    The name of the node from which the traffic details are generated. Unless a name is entered, the default is console for the Management Console.

interface
    Specifies the network interface for the packet capture.

count
    Specifies the maximum number of packets to read. The maximum allowed file size to capture the network traffic details is 128 MB. For a very large count value, if the file size exceeds 128 MB, the command stops capturing the network traffic details.

source
    Specifies the node to filter the packets.

For example, to export the network traffic details, enter the following:

Support> tethereal export scp://user1@172.31.168.140:/
Password: *******
Capturing on pubeth0 ...
Uploading network traffic details to scp://user1@172.31.168.140:/ is completed.


To use the tethereal show command

To use the tethereal show command, enter the following:


Support> tethereal show [nodename] [interface] [count] [source]

nodename
    The name of the node from which the traffic details are displayed. The default is console for the Management Console.

interface
    Specifies the network interface for the packet capture.

count
    Specifies the maximum number of packets to read. If you do not specify a count value, the network traffic continues to be displayed until you interrupt it.

source
    Specifies the node to filter the packets.

For example, the traffic details for five packets for the Management Console on the pubeth0 interface are:

Support> tethereal show sfs_01 pubeth0 5
0.000000 172.31.168.140 -> 10.209.105.147 ICMP Echo (ping) request
0.000276 10.209.105.147 -> 172.31.168.140 ICMP Echo (ping) reply
0.000473 10.209.105.147 -> 172.31.168.140 SSH Encrypted response packet len=112
0.000492 10.209.105.147 -> 172.31.168.140 SSH Encrypted response packet len=112

Accessing processor activity


The top command displays the dynamic real-time view of currently running tasks. It shows the resources being consumed by users and processes at a given time for a specified node.


To use the top command

To use the top command, enter the following:


Support> top [nodename] [iterations] [delay]

nodename
    Displays the resources and processes at a given time for the specified node.

iterations
    Specifies the number of iterations you want to run. The default is three.

delay
    Specifies the delay in seconds between screen updates. The default is five seconds.

For example, to show the dynamic real-time view of tasks running on the node sfs_01, enter the following:

Support> top sfs_01 1 1
top - 16:28:27 up 1 day, 3:32, 4 users, load average: 1.00, 1.00, 1.00
Tasks: 336 total, 1 running, 335 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 16405964k total, 1110288k used, 15295676k free, 183908k buffers
Swap: 1052248k total, 0k used, 1052248k free, 344468k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
 6314 root  15   0  5340 1296  792 R  3.9  0.0 0:00.02 top
    1 root  16   0   640  260  216 S  0.0  0.0 0:04.86 init

Using the traceroute command


The traceroute command displays all of the intermediate nodes on a route between two nodes.


To use the traceroute command

To use the traceroute command, enter the following:


Support> traceroute destination [source] [maxttl]

destination
    The target node. To display all of the intermediate nodes located between two nodes on a network, enter the destination node.

source
    Specifies the source node name from which you want to begin the trace.

maxttl
    Specifies the maximum number of hops. The default is seven hops.

For example, to trace the route to the network host, enter the following:
Support> traceroute www.symantec.com sfs_01 10
traceroute to www.symantec.com (8.14.104.56), 10 hops max, 40 byte packets
 1  10.209.104.2  0.337 ms  0.263 ms  0.252 ms
 2  10.209.186.14  0.370 ms  0.340 ms  0.326 ms
 3  puna-spi-core-b02-vlan105hsrp.net.symantec.com (143.127.185.130)  0.713 ms  0.525 ms  0.533 ms
 4  143.127.185.197  0.712 ms  0.550 ms  0.564 ms
 5  10.212.252.50  0.696 ms  0.600 ms  78.719 ms
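Conceptually, traceroute builds this hop list by sending probes with an increasing time-to-live (TTL): the router at which the TTL expires reports itself back, and probing stops at the destination or at maxttl. A toy Python model of that loop follows; it is illustrative only, walking a known route rather than sending real packets:

```python
def trace(route, maxttl=7):
    """Return the (hop_number, router) pairs a traceroute with the given
    maxttl would report. `route` is the ordered list of routers between
    the source and the destination (destination last). The default of 7
    mirrors the traceroute command's default maximum hop count."""
    hops = []
    for ttl in range(1, maxttl + 1):
        if ttl > len(route):
            break                           # probed past the destination
        hops.append((ttl, route[ttl - 1]))  # router at hop `ttl` replies
        if route[ttl - 1] == route[-1]:
            break                           # destination reached; stop
    return hops
```

Applied to the first hops of the example above, the model reports one router per TTL value until the destination answers.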


Glossary

CFS (cluster file system) A file system that can be simultaneously mounted on multiple nodes. CFS is used

as the underlying file system within the Scalable File Server.


CIFS (Common Internet A network protocol that provides the foundation for Windows-based file sharing File System) console IP address

and other network utilities. The Scalable File Server supports CIFS file sharing.

A virtual IP address that is configured for administrative access to the Scalable File Server cluster management console.

coordinator disks
Three or more LUNs designated to function as part of the I/O fencing mechanism of the Scalable File Server. Coordinator disks cannot be used to store user data.

DAR (Direct Access Recovery)
An optional capability of NDMP Data and Tape Services where only relevant portions of the secondary media are accessed during recovery operations.

data connection (NDMP)
The connection between the two NDMP servers that carries the data stream. The data connection in NDMP is either an NDMP interprocess communication mechanism (for local operations) or a TCP/IP connection (for 3-way operations).

data service (NDMP)
An NDMP service that transfers data between primary storage and the data connection.

data stream (NDMP)
A unidirectional byte stream of data that flows over a data connection between two peer NDMP services in an NDMP session. For example, in a backup, the data stream is generated by the data service and consumed by the tape service. The data stream can be backup data, recovered data, and so on.

data management application (NDMP)
An application that controls the NDMP session. In NDMP there is a master-slave relationship: the data management application is the session master, and the NDMP services are the slaves. In NDMP versions 1, 2, and 3 the term "NDMP client" is used instead of data management application.

DMP (Dynamic Multipathing)
A technique that provides load balancing and path failover for disks that are connected to the Scalable File Server cluster nodes.

DST (Dynamic Storage Tiering)
A feature that allows files and directories to be automatically and seamlessly transferred to different types of storage technology, which may originate from different hardware vendors.

DUD (Driver Update Disk)
An ISO image or media that contains one or more additional drivers that are needed to install the Scalable File Server on specific hardware, if the base Scalable File Server installer does not include the necessary drivers.
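The path failover behavior described under DMP above can be illustrated with a conceptual sketch. This is not the actual DMP implementation; the class, path names, and error handling are invented purely to show the idea of transparently switching to a surviving path:

```python
# Conceptual sketch of multipath failover: not the real DMP code,
# just an illustration of retrying I/O over an alternate path.
class MultipathDisk:
    def __init__(self, paths):
        self.paths = list(paths)      # e.g. one device path per HBA
        self.failed = set()           # paths currently marked as failed

    def mark_failed(self, path):
        self.failed.add(path)

    def read(self, block):
        # Use the first healthy path for this request.
        for path in self.paths:
            if path not in self.failed:
                return f"block {block} via {path}"
        raise IOError("all paths to the disk have failed")

disk = MultipathDisk(["sdb", "sdj"])
print(disk.read(0))          # served over the first path
disk.mark_failed("sdb")      # simulate a path failure
print(disk.read(0))          # transparently fails over to the second path
```

Real DMP also balances load across healthy paths; this sketch shows only the failover half of the definition.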

failover
The capability to have the service of a failed computer resource made available automatically, with little or no interruption. With the Scalable File Server configured as a cluster, the services provided by any failed node are automatically provided by the remaining functioning nodes.

hard limit
A file system quota for file and block consumption, which can be established for individual users or groups. When the hard limit is reached, no further files or blocks can be allocated.

I/O fencing
An optional Scalable File Server feature that configures a specific group of LUNs to have an additional layer of data protection. This extra protection prevents data loss from occurring in the rare case that both the redundant cluster interconnect and the public low-priority interconnect fail.

media server
A NetBackup server that provides storage within a master and a media server cluster. See also NetBackup.

mirrored file system
A file system that is constructed and managed by a technique for automatically maintaining one or more copies of the file system, using separate underlying storage for each copy. If a storage failure occurs, access is maintained through the remaining accessible mirrors.

NAS (Network Attached Storage)
A file-level computer data storage that is connected to a network and provides data access to network-capable clients.

NDMP (Network Data Management Protocol)
An open standard protocol that is used to control the data backup and recovery communications between primary and secondary storage in a heterogeneous network environment. NDMP specifies a common architecture for the backup of network file servers. It enables the creation of a common agent which a centralized program can use to back up the data on file servers that run on different platforms.

NDMP client
An application that controls the NDMP session. See also data management application.

NDMP host
The host computer system that executes the NDMP server application. Data is backed up from the NDMP host to either a local tape drive or to a backup device on a remote NDMP host.

NDMP server
An instance of one or more distinct NDMP services controlled by a single NDMP control connection. Thus a data/tape/SCSI server is an NDMP server providing data, tape, or SCSI services.

NDMP service
The virtual state machine on the NDMP host that is accessed with the Internet protocol and controlled using the NDMP protocol. This term is used independently of implementation. The three types of NDMP services are: data service, tape service, and SCSI service.

NDMP session
The configuration of one data management application and two NDMP services to perform a data management operation, such as a backup or a recovery.
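The hard limit defined above can be made concrete with a minimal sketch. The function and numbers are hypothetical, not the SFS quota implementation; the point is only that an allocation crossing the hard limit is refused outright:

```python
# Minimal sketch of hard-limit enforcement: once usage would exceed the
# hard limit, further block allocations are refused.
def allocate_blocks(used, requested, hard_limit):
    """Return the new block usage, or raise if the hard limit would be exceeded."""
    if used + requested > hard_limit:
        raise OSError("Disk quota exceeded")  # nothing allocated past the hard limit
    return used + requested

used = allocate_blocks(used=900, requested=50, hard_limit=1000)  # within the limit
try:
    allocate_blocks(used, requested=100, hard_limit=1000)        # would reach 1050
except OSError as e:
    print(e)
```

Contrast this with the soft limit defined later in this glossary, which permits the quota to be exceeded during a grace period before allocations are refused.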


NetBackup
A Veritas software product that backs up, archives, and restores files, directories, or raw partitions that reside on a client system.

NFS (Network File System)
A protocol that lets a user on a client computer access files over a network. To the client's applications, the files appear as if they resided on one of the local devices.

NFS lock management
A feature that lets a customer use the Network File System (NFS) advisory client locking feature in parallel with core Cluster File System (CFS) global lock management.

no root_squash
An NFS sharing option that does not map requests from UID 0. This option is on by default.

NTP (Network Time Protocol)
A protocol for synchronizing computer system clocks over packet-switched, variable-latency data networks.

oplocks
A file-locking mechanism that is designed to improve performance by controlling the caching of files on the client.

private interconnect
An internal IP network that is used by the Scalable File Server to facilitate communications between the Scalable File Server nodes.

PXE (Pre-boot eXecution Environment)
An environment to boot computers using a network interface, independent of available data storage devices (such as hard disks) or installed operating systems.

round robin DNS
A technique in which a DNS server, not a dedicated computer, performs the load balancing.

Samba
An open-source implementation of the SMB file sharing protocol. It provides file and print services to SMB/CIFS clients.

share
A specification of a file system, or a proper subset of a file system, which supports shared access to the file system through an NFS or CIFS server. The specification defines the folder or directory that represents the file system, along with access characteristics and limitations.

snapshot
A point-in-time image or replica of a file system that looks identical to the file system from which the snapshot was taken.

soft limit
A file system quota for file and block consumption, which can be established for individual users or groups. If a user exceeds the soft limit, there is a grace period during which the quota can be exceeded. After the grace period has expired, no more files or data blocks can be allocated.

storage pool
A logical construct that contains one or more LUNs from which file systems can be created.

stripe unit
The granularity at which data is stored on one drive of the array before subsequent data is stored on the next drive of the array.
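The stripe unit definition above can be illustrated with a worked example. The stripe unit size and drive count below are hypothetical, chosen only to show how consecutive chunks rotate across the drives:

```python
# With a 64 KiB stripe unit across 4 drives, consecutive 64 KiB chunks
# of data land on consecutive drives, wrapping back to the first drive.
STRIPE_UNIT = 64 * 1024   # bytes written to one drive before moving to the next
NUM_DRIVES = 4

def locate(offset):
    """Map a byte offset in the striped volume to (drive index, offset on that drive)."""
    chunk = offset // STRIPE_UNIT       # which stripe unit the byte falls in
    drive = chunk % NUM_DRIVES          # stripe units rotate across the drives
    stripe_row = chunk // NUM_DRIVES    # how many full stripes precede this one
    return drive, stripe_row * STRIPE_UNIT + offset % STRIPE_UNIT

print(locate(0))            # first stripe unit: drive 0
print(locate(64 * 1024))    # next stripe unit: drive 1
print(locate(256 * 1024))   # fifth stripe unit: wraps back to drive 0
```

A larger stripe unit favors large sequential I/O on a single drive; a smaller one spreads a single request across more drives.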

syslog
A standard for forwarding log messages in an IP network. The term refers to both the syslog protocol and the application sending the syslog messages.

tape service (NDMP)
An NDMP service that transfers data between secondary storage and the data connection, and allows the data management application to manipulate and access the secondary storage.

WWN (World Wide Name)
A 64-bit identifier that is used in Fibre Channel networks to uniquely identify each element in the network (that is, nodes and ports).
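To illustrate the WWN entry above: a WWN is conventionally displayed as eight colon-separated hexadecimal bytes. A small sketch of converting between that text form and the underlying 64-bit value (the example WWN itself is invented):

```python
# A WWN is a 64-bit value, conventionally written as eight colon-separated
# hex bytes. The sample value below is made up purely for illustration.
def wwn_to_int(wwn):
    """Parse a WWN such as '50:06:01:60:41:e0:30:7b' into its 64-bit integer value."""
    octets = wwn.split(":")
    assert len(octets) == 8, "a WWN is exactly 8 bytes (64 bits)"
    return int("".join(octets), 16)

def int_to_wwn(value):
    """Format a 64-bit integer back into colon-separated hex."""
    raw = f"{value:016x}"
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

wwn = "50:06:01:60:41:e0:30:7b"
assert int_to_wwn(wwn_to_int(wwn)) == wwn   # the conversion round-trips
print(hex(wwn_to_int(wwn)))
```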

Index

A

about: backup configurations, 259; changing share properties, 184; configuring CIFS for AD domain mode, 165; configuring disks, 101; configuring locally saved configuration files, 288; configuring SFS for CIFS, 154; configuring storage pools, 96; creating and maintaining file systems, 117; creating file systems, 120; disk lists, 105; DNS, 54; FTP, 207; FTP server, 208; FTP session, 216; FTP set, 210; I/O fencing, 111; installing patches, 308; iostat, 315; leaving AD domain, 170; leaving NT domain, 163; managing CIFS shares, 183; managing home directories, 194; NDMP policies, 249; NDMP supported configurations, 247; Network Data Management Protocol, 246; network services, 50; network traffic details, 325; NFS file sharing, 143; NIS, 81; option commands, 296; reconfiguring CIFS service, 180; retrieving the NDMP data, 255; services command, 321; setting NTLM, 173; setting trusted domains, 176; SFS cluster and load balancing, 191; snapshot schedules, 138; snapshots, 133; storage provisioning and management, 95; storing account information, 177; support user, 35; troubleshooting, 313
about bonding Ethernet interfaces, 52
accessing: man pages, 30; processor activity, 327
Active Directory: setting the trusted domains for, 176
AD domain mode: changing domain settings, 171; configuring CIFS, 165; security settings, 171; CIFS server stopped, 171; setting domain, 167; setting domain controller, 167; setting domain user, 167; setting security, 167; starting CIFS server, 167
AD interface: using, 173
AD trusted domains: disabling, 176
adding: a severity level to an email group, 225; a syslog server, 230; an email address to a group, 225; an email group, 225; CIFS share, 184; disks, 103; external NetBackup master server to work with SFS, 243; filter to a group, 225; IP address to a cluster, 60; mirror to a file system, 124; mirror to a tier of a file system, 271; mirrored tier to a file system, 268; mirrored-striped tier to a file system, 268; NetBackup Enterprise Media Manager (EMM) server, 243; NetBackup media server, 243; new nodes to the cluster, 43; NFS share, 145; second tier to a file system, 268; SNMP management server, 233; striped tier to a file system, 268; striped-mirror tier to a file system, 268; users, naming requirements for, 24; vlan, 86

B

backup configurations: about, 259
backup services: displaying the status of, 260; starting, 260; stopping, 260
bind distinguished name: setting for LDAP server, 75

C

change security settings, 165; after CIFS server stopped, 165
changing: an IP address to online on any running node, 60; configuration of an Ethernet interface, 65; DMP I/O policy, 299; domain settings, 163; local CIFS user password, 202; NFS daemons, 299; ninodes cache size, 299; status of a file system, 131; support user password, 36
changing domain settings: AD domain mode, 171
changing share properties: about, 184
checking: and repairing a file system, 130; I/O fencing status, 113; on the status of the NFS server, 90; support user status, 36
CIFS: standalone mode, 155
CIFS and NFS protocols: share file systems, 148, 188
CIFS server: starting, 181
CIFS server status: standalone mode, 156
CIFS server stopped: change security settings, 165
CIFS service: standalone mode, 156
CIFS share: adding, 184; deleting, 184
clearing: DNS domain names, 56; DNS name servers, 56; LDAP configured settings, 75
CLI: logging in to, 25
client configurations: displaying, 80; LDAP server, 80
cluster: adding an IP address to, 60; adding new nodes, 43; adding the new node to, 44; changing an IP address to online for any running node, 60; deleting a node from, 45; displaying a list of nodes, 40; displaying all the IP addresses for, 60; rebooting a node or all nodes, 47; shutting down a node or all nodes, 47
command history: displaying, 37
Command-Line Interface (CLI): how to use, 25
configuration of an Ethernet interface: changing, 65
configuration files: deleting the locally saved, 289; viewing locally saved, 289
configuration settings: exporting either locally or remotely, 289; importing either locally or remotely, 289
configuring: backup using NetBackup, 240; CIFS for standalone mode, 155; IP routing, 69; masquerade as EMC policy, 250; NDMP backup method policy, 250; NDMP failure resilient policy, 250; NDMP overwrite policy, 250; NDMP recursive restore policy, 250; NDMP restore DST policy, 250; NDMP send history policy, 250; NDMP update dumpdates policy, 250; NDMP use snapshot policy, 250; NetBackup virtual IP address, 244; NetBackup virtual name, 245; NSS, 84; NSS lookup order, 84; SFS for CIFS, 154; vlan, 86
configuring CIFS: NT domain mode, 159
configuring disks: about, 101
configuring locally saved configuration files: about, 288
configuring storage pools: about, 96
coordinating cluster nodes to work with NTP servers, 293
coordinator disks: replacing, 113
CPU utilization report: generating, 316
create: snapshot schedule, 140
creating: local CIFS group, 205; local CIFS user, 202; Master, System Administrator, and Storage Administrator users, 33; mirrored file systems, 121; mirrored-stripe file systems, 121; simple file systems, 121; storage pools, 99; striped file systems, 121; striped-mirror file systems, 121; users, 33
creating and maintaining file systems: about, 117
creating file systems: about, 120
creating snapshots, 134
current Ethernet interfaces and states: displaying, 65
current users: displaying list, 33

D

debugging information: retrieving and sending, 314
decreasing: size of a file system, 129
default passwords: resetting for Master, System Administrator, and Storage Administrator users, 33
delete: snapshot schedule, 140
deleting: a node from the cluster, 45; already configured SNMP management server, 233; CIFS share, 184; configured email server, 225; configured NetBackup media server, 243; email address from a specified group, 225; email group, 225; filter from a specified group, 225; home directories, 200; home directory of given user, 200; local CIFS group, 205; local CIFS user, 202; locally saved configuration file, 289; NFS options, 151; route entries from routing tables of nodes in cluster, 69; severity from a specified group, 225; syslog server, 230; users, 33; vlan, 86
destroy: I/O fencing, 113
destroying: a file system, 133; storage pools, 99
destroying snapshots, 134
device utilization report: generating, 316
disabling: AD trusted domains, 176; creation of home directories, 200; DNS settings, 56; FastResync option, 127; I/O fencing, 113; LDAP clients configurations, 80; NIS clients, 82; NTLM, 175; NTP server, 293; quota limits used by snapshots, 134; support user account, 36
disk lists: about, 105
disks: adding, 103; removing, 103
display: FTP server, 208
displaying: all the IP addresses for cluster, 60; command history, 37; current Ethernet interfaces and states, 65; current list of SNMP management servers, 233; current version, 307; DMP I/O policy, 299; DNS settings, 56; events, 231; existing email groups or details, 225; exported file systems, 144; file systems that can be exported, 93; files moved by running a policy, 280; home directory usage information, 199; information for all disk devices for nodes in a cluster, 106; LDAP client configurations, 80; LDAP configured settings, 75; list of current users, 33; list of DST file systems, 274; list of nodes in a cluster, 40; list of syslog servers, 230; local CIFS group, 205; local CIFS user, 202; NDMP backup method, 257; NDMP failure resilient data, 257; NDMP masquerade as EMC, 257; NDMP overwrite data, 257; NDMP recursive restore data, 257; NDMP restore DST data, 257; NDMP send history data, 257; NDMP update dumpdates data, 257; NDMP use snapshot data, 257; NDMP variables, 255; NetBackup configurations, 260; network configuration and statistics, 51; NFS daemons, 299; NFS statistics, 92; ninodes cache size, 299; NIS-related commands, 82; node-specific network traffic details, 326; NSS configuration, 84; option tunefstab, 299; policy of each tiered file system, 275; routing tables of the nodes in the cluster, 69; schedules for all tiered file systems, 279; share properties, 184; snapshot quotas, 134; snapshots that can be exported, 93; status of backup services, 260; status of the NTP server, 293; system date and time, 285; system statistics, 294; tier location of a specified file, 274; time interval or number of duplicate events for notifications, 236; values of the configured SNMP notifications, 233; values of the configured syslog server, 230; vlan, 86
DMP I/O policy: changing, 299; displaying, 299; resetting, 299
DNS: about, 54; domain names, clearing, 56; name servers, clearing, 56; name servers, specifying, 56; settings, disabling, 56; settings, displaying, 56; settings, enabling, 56
domain: setting, 181; setting user name, 181
domain controller: setting, 181
domain name for the DNS server: setting, 56
domain settings: changing, 163
domain user: NT domain mode, 160
DUD driver updates: uninstalling, 310

E

email address: adding to a group, 225; deleting from a specified group, 225
email group: adding, 225; deleting, 225; displaying existing and details, 225
email server: deleting the configured email server, 225; obtaining details for, 225; setting the details of external, 225
enabling: DNS settings, 56; FastResync for a file system, 126; I/O fencing, 113; LDAP client configurations, 80; NIS clients, 82; NTLM, 175; NTP server, 293; quota limits used by snapshots, 134; support user account, 36
enabling quotas: home directory file systems, 196
Ethernet interface: changing configuration of, 65
Ethernet interfaces: bonding, 52
event notifications: displaying time interval for, 236
event reporting: setting events for, 236
events: displaying, 231
excluding: PCI IDs, 319
exclusion: PCI, 317
exporting: audit events in syslog format to a given URL, 237; configuration settings, 289; events in syslog format to a given URL, 237; network traffic details, 326; SNMP MIB file to a given URL, 233

F

file systems: adding a mirror to, 124; changing the status of, 131; checking and repairing, 130; creating, 121; decreasing the size of, 129; destroying, 133; disabling FastResync option, 127; displaying exported, 144; DST, displaying, 274; enabling FastResync, 126; increasing the size of, 127; listing with associated information, 120; removing a mirror from, 124; that can be exported, displayed, 93; unexporting, 151
filter: about, 222; adding to a group, 225; deleting from a specified group, 225
FTP: about, 207; logupload, 219; server start, 209; server status, 209; server stop, 209; session show, 217; session showdetail, 217; session terminate, 217; set anonymous login, 213; set anonymous logon, 213; set anonymous write, 213; set non-secure logins, 213
FTP server: about, 208; display, 208
FTP session: about, 216
FTP set: about, 210

G

generating: CPU utilization report, 316; device utilization report, 316
group membership: managing, 202

H

history command: using, 37
home directories and use quotas: setting up, 197
home directory file systems: enabling quotas, 196; setting, 195
home directory of given user: deleting, 200
home directory usage information: displaying, 199
hostname or IP address: setting for LDAP server, 75
how to use: Command-Line Interface (CLI), 25

I

I/O fencing: about, 111; checking status, 113; destroy, 113; disabling, 113; enabling, 113
importing: configuration settings, 289
increase: LUN storage capacity, 108
increasing: size of a file system, 127
initiating: host discovery of LUNs, 110
installing patches, 310; about, 308
iostat: about, 315
IP addresses: adding to a cluster, 60; displaying for the cluster, 60; modifying, 60; removing from the cluster, 60
IP routing: configuring, 69

L

LDAP: before configuring settings, 72; configuring server settings, 73
LDAP password hash algorithm: setting password for, 75
LDAP server: clearing configured settings, 75; disabling client configurations, 80; displaying client configurations, 80; displaying configured settings, 75; enabling client configurations, 80; setting over SSL, 75; setting port number, 75; setting the base distinguished name, 75; setting the bind distinguished name, 75; setting the hostname or IP address, 75; setting the password hash algorithm, 75; setting the root bind DN, 75; setting the users, groups, and netgroups base DN, 75
leaving AD domain: about, 170
leaving NT domain: about, 163
list of DST file systems: displaying, 274
list of nodes: displaying in a cluster, 40
listing: all file systems and associated information, 120; all of the files on the specified tier, 273; free space for storage pools, 99; storage pools, 99
listing snapshots, 134
local CIFS group: creating, 205; deleting, 205; displaying, 205
local CIFS groups: managing, 204
local CIFS user: creating, 202; deleting, 202; displaying, 202
local CIFS user password: changing, 202
local user and groups: managing, 201
logging in: to CLI, 25
login: Technical Support, 325
logupload: FTP, 219
LUN storage capacity: increase, 108
LUNs: initiating host discovery, 110

M

man pages: how to access, 30
managing: group membership, 202; local CIFS groups, 204; local users and groups, 201
managing CIFS shares: about, 183
managing home directories: about, 194
masquerade as EMC policy: configuring, 250
Master, System Administrator, and Storage Administrator users: creating, 33
mirrored file systems: creating, 121
mirrored tier: adding to a file system, 268
mirrored-stripe file systems: creating, 121
mirrored-striped tier: adding to a file system, 268
modify: snapshot schedule, 140
modifying: an IP address, 60; option tunefstab, 299; policy of a tiered file system, 275; schedule of a tiered file system, 279
more command: using, 292
mounting: snapshots, 134
moving: disks from one storage pool to another, 103

N

naming requirements: for adding users, 24
NDMP backup method: displaying, 257
NDMP backup method policy: configuring, 250
NDMP failure resilient data: displaying, 257
NDMP failure resilient policy: configuring, 250
NDMP masquerade as EMC: displaying, 257
NDMP overwrite data: displaying, 257
NDMP overwrite policy: configuring, 250
NDMP policies: about, 249; restoring, 259
NDMP recursive restore data: displaying, 257
NDMP recursive restore policy: configuring, 250
NDMP restore DST data: displaying, 257
NDMP restore DST policy: configuring, 250
NDMP send history data: displaying, 257
NDMP send history policy: configuring, 250
NDMP supported configurations: about, 247
NDMP update dumpdates data: displaying, 257
NDMP update dumpdates policy: configuring, 250
NDMP use snapshot data: displaying, 257
NDMP use snapshot policy: configuring, 250
NDMP variables: displaying, 255
NetBackup: configuring NetBackup virtual IP address, 244; configuring virtual name, 245; displaying configurations, 260
NetBackup EMM server: see NetBackup Enterprise Media Manager (EMM) server
NetBackup Enterprise Media Manager (EMM) server: adding to work with SFS, 243
NetBackup master server: configuring to work with SFS, 243
NetBackup media server: adding, 243; deleting, 243
network: configuration and statistics, 51; testing connectivity, 321
Network Data Management Protocol: about, 246
network services: about, 50
network traffic details: about, 325; exporting, 326
NFS daemons: changing, 299; displaying, 299
NFS file sharing: about, 143
NFS options: deleting, 151
NFS server: checking on the status, 90; starting, 90; stopping, 90
NFS share: adding, 145
NFS statistics: displaying, 92
ninodes cache size: changing, 299; displaying, 299
NIS: about, 81; clients, disabling, 82; clients, enabling, 82; domain name, setting on all the nodes of cluster, 82; related commands, displaying, 82; server name, setting on all the nodes of cluster, 82
node: adding to the cluster, 43-44; in a cluster, displaying information for all disk devices, 106; installing SFS software onto, 43
node-specific network traffic details: displaying, 326
NSS: configuring, 84; displaying configuration, 84; lookup order, configuring, 84
NT domain mode: configuring CIFS, 159; domain user, 160; setting domain, 160; setting domain controller, 160; setting security, 160; starting CIFS server, 160
NTLM: disabling, 175; enabling, 175
NTP server: coordinating cluster nodes to work with, 293; disabling, 293; displaying the status of, 293; enabling, 293

O

obtaining: details of the configured email server, 225
option commands: about, 296
option tunefstab: displaying, 299; modifying, 299

P

password: changing a user's password, 33
patch level: displaying current versions of, 307
patches: installing, 310; synchronizing, 310; uninstalling, 310
PCI: exclusion, 317
PCI IDs: excluding, 319
policies: about, 267
policy: displaying files moved by running, 280; displaying for each tiered file system, 275; modifying for a tiered file system, 275; relocating from a tiered file system, 277; removing from a tiered file system, 275; running for a tiered file system, 275
preserve: snapshot schedule, 140
printing: WWN information, 109
privileges: about, 23
processor activity: accessing, 327

Q

quota limits: enabling or disabling snapshot, 134

R

rebooting: a node or all nodes in cluster, 47
reconfiguring CIFS service: about, 180
regions and time zones: setting, 285
relocating: policy of a tiered file system, 277
remove: snapshot schedule, 140
removing: disks, 103; IP address from the cluster, 60; mirror from a file system, 124; mirror from a tier spanning a specified disk, 271; mirror from a tier spanning a specified pool, 271; mirror from a tiered file system, 271; policy of a tiered file system, 275; schedule of a tiered file system, 279; tier from a file system, 270
renaming: storage pools, 99
replacing: coordinator disks, 113
resetting: default passwords for Master, System Administrator, and Storage Administrator users, 33; DMP I/O policy, 299
restoring: ndmp policies, 259
retrieving: debugging information, 314
retrieving the NDMP data: about, 255
roles: about, 23
route entries: deleting from routing tables, 69
routing tables of the nodes in the cluster: displaying, 69
running: policy of a tiered file system, 275

S

schedule: displaying for all tiered file systems, 279; modifying for a tiered file system, 279; removing from a tiered file system, 279
second tier: adding to a file system, 268
security: standalone mode, 156
security settings: AD domain mode, 171; CIFS server stopped, 171; change, 165
sending: debugging information, 314
server start: FTP, 209
server status: FTP, 209
server stop: FTP, 209
services command: about, 321; using, 323
session show: FTP, 217
session showdetail: FTP, 217
session terminate: FTP, 217
set anonymous login: FTP, 213
set anonymous logon: FTP, 213
set anonymous write: FTP, 213
set non-secure logins: FTP, 213
setting: base distinguished name for the LDAP server, 75; bind distinguished name for LDAP server, 75; details of the external email server, 225; domain, 181; domain controller, 181; domain name for the DNS server, 56; domain user name, 181; events for event reporting, 236; filter of the syslog server, 230; home directory file systems, 195; LDAP password hash algorithm, 75; LDAP server hostname or IP address, 75; LDAP server over SSL, 75; LDAP server port number, 75; LDAP users, groups, and netgroups base DN, 75; NIS domain name on all the nodes of cluster, 82; regions and time zones, 285; root bind DN for the LDAP server, 75; severity of the syslog server, 230; SNMP filter notifications, 233; SNMP severity notifications, 233; system date and time, 285; the NIS server name on all the nodes of cluster, 82; trusted domains for the Active Directory, 176
setting domain: AD domain mode, 167; NT domain mode, 160
setting domain controller: AD domain mode, 167; NT domain mode, 160
setting domain user: AD domain mode, 167
setting NTLM: about, 173
setting security: AD domain mode, 167; NT domain mode, 160
setting trusted domains: about, 176
setting up: home directories and use quotas, 197
severity levels: about, 222; adding to an email group, 225
severity notifications: setting, 233
SFS cluster and load balancing: about, 191
SFS Dynamic Storage Tiering (DST): about, 263
SFS software: installing onto a new node, 43
share: splitting, 192
share file systems: CIFS and NFS protocols, 148, 188
share properties: displaying, 184
show: snapshot schedule, 140
shutting down: node or all nodes in a cluster, 47
snapshot schedule: create, 140; delete, 140; modify, 140; preserve, 140; remove, 140; show, 140
snapshot schedules: about, 138
snapshots: about, 133; creating, 134; destroying, 134; displaying quotas, 134; enabling or disabling quota limits, 134; listing, 134; mounting, 134; that can be exported, displayed, 93; unmounting, 134
SNMP: filter notifications, setting, 233; management server, adding, 233; management server, deleting configured, 233; management server, displaying current list of, 233; MIB file, exporting to a given URL, 233; notifications, displaying the values of, 233; server, setting severity notifications, 233
specified group: deleting a severity from, 225
specifying: DNS name servers, 56
splitting: a share, 192
SSL: setting the LDAP server for, 75
standalone mode: CIFS server status, 156; CIFS service, 156; security, 156
starting: backup services, 260; CIFS server, 181; NFS server, 90
starting CIFS server: AD domain mode, 167; NT domain mode, 160
stopping: backup services, 260; NFS server, 90
storage pools: creating, 99; destroying, 99; listing, 99; listing free space, 99; moving disks from one to another, 103; renaming, 99
storage provisioning and management: about, 95
storing: user and group accounts in LDAP, 179; user and group accounts locally, 179
storing account information: about, 177
striped file systems: creating, 121
striped tier: adding to a file system, 268
striped-mirror file systems: creating, 121
striped-mirror tier: adding to a file system, 268
support user: about, 35
support user account: disabling, 36; enabling, 36
support user password: changing, 36
support user status: checking, 36
swap command: using, 295
synchronizing: patches, 310
syslog event logging: about, 229
syslog format: exporting audit events to a given URL, 237; exporting events to a given URL, 237
syslog server: adding, 230; deleting, 230; displaying the list of, 230; displaying the values of, 230; setting the filter of, 230; setting the severity of, 230
system date and time: displaying, 285; setting, 285
system statistics: displaying, 294

T

technical support: login, 325
testing: network connectivity, 321
tier: adding a tier to a file system, 271; displaying location of a specified file, 274; listing all of the specified files on, 273; removing a mirror from, 271; removing a mirror spanning a specified pool, 271; removing from a file system, 270; removing from a tier spanning a specified disk, 271
traceroute command: using, 328
troubleshooting: about, 313

U

unexporting: file systems, 151
uninstalling: DUD driver updates, 310; patches, 310
unmounting: snapshots, 134
user and group accounts in LDAP: storing, 179
user and group accounts locally: storing, 179
user roles and privileges: about, 23
users: adding new, 24; changing passwords, 33; creating, 33; deleting, 33
using: AD interface, 173; history command, 37; more command, 292; services command, 323; swap command, 295; traceroute command, 328

V

viewing: list of locally saved configuration files, 289
virtual IP address: configuring or changing for NetBackup, 244
virtual name: configuring for NetBackup, 245
vlan: adding, 86; configuring, 86; deleting, 86; displaying, 86

W

WWN information: printing, 109