Operations Manager Administration Guide
For Use with DataFabric® Manager Server 4.0

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: doccomments@netapp.com
Information Web: http://www.netapp.com

Part number: 210-04802_A0
February 2010

Contents

Copyright information
Trademark information
About this guide
    Audience
    Keyboard and formatting conventions
    Special messages
    How to send your comments
What is new in this release
    Overview of new and changed features
    User interface changes
    New and changed CLI commands
Introduction to Operations Manager
    What DataFabric Manager server does
    What a license key is
    Access to Operations Manager
    Information to customize in Operations Manager
    Administrator accounts on the DataFabric Manager server
    Authentication methods on the DataFabric Manager server
        Authentication with native operating system
        Authentication with LDAP
Discovery process
    Discovery by the DataFabric Manager server
    What SNMP is
        When to enable SNMP
        SNMP versions to discover and monitor storage systems
        What the Preferred SNMP Version option is
        How DataFabric Manager chooses network credentials for discovery
        Discovery process using SNMPv1 or SNMPv3
        Monitoring process using SNMPv1
        Monitoring process using SNMPv3
        Setting SNMPv1 or SNMPv3 as the preferred version
        Setting SNMPv1 as the only SNMP version
        Setting SNMPv1 or SNMPv3 to monitor a storage system
        Modifying the network credentials and SNMP settings
        Deleting the SNMP settings for the network
        Addition of a storage system from an undiscovered network
        Diagnosis of SNMP connectivity
    What host discovery is
    Ping methods in host discovery
    What host-initiated discovery is
    How DataFabric Manager server discovers vFiler units
    Discovery of storage systems
    Discovery of storage systems and networks
    Methods of adding storage systems and networks
        Guidelines for changing discovery options
    Discovery of a cluster by Operations Manager
        Adding a cluster
    Data ONTAP 8.0 cluster monitoring tasks using Operations Manager
        Limitations of cluster monitoring in Operations Manager
    Introduction to V-Series SAN-attached storage management
        Limitations of V-Series SAN-attached storage management in Operations Manager
        Tasks performed from the Storage Controller Details page for a V-Series system
        Viewing configuration details of storage arrays connected to a V-Series system
Role-based access control in DataFabric Manager
    What role-based access control is
    Configuring vFiler unit access control
    Logging in to DataFabric Manager
        What default administrator accounts are
    List of predefined roles in DataFabric Manager
    Active Directory user group accounts
    Adding administrative users
    How roles relate to administrators
        What predefined global roles are
        What inheritance roles are
        What capabilities are
        Role precedence and inheritance
        Creating roles
        Modifying roles
    What an RBAC resource is
        Granting restricted access to RBAC resources
        Access check for application administrators
    How reports are viewed for administrators and roles
    What a global and group access control is
    Management of administrator access
        Prerequisites for managing administrator access
        Limitations in managing administrator access
        Controlled user access for cluster management
        Summary of the global group
        Who local users are
        What domain users are
        What Usergroups are
        What roles are
        What jobs display
Groups and objects
    What group types are
        What homogeneous groups are
        What mixed-type groups are
    What a Global group is
    What hierarchical groups are
    Creating groups
    Creating groups from a report
    What configuration resource groups are
    Guidelines for managing groups
    Guidelines for creating configuration resource groups
    Guidelines for adding vFiler units to Appliance Resource group
    Editing group membership
    What group threshold settings are
    What group reports are
    What summary reports are
    What subgroup reports are
    Different cluster-related objects
        Creating a group of cluster objects
Storage monitoring and reporting
    What monitoring is
    Cluster monitoring with Operations Manager
        What the cluster management logical interface is
        Information available on the Cluster Details page
        Viewing the utilization of resources
    Links to FilerView
    Query intervals
        What global monitoring options are
        Considerations before changing monitoring intervals
    What SNMP trap listener is
        What SNMP trap events are
        How SNMP trap reports are viewed
        When SNMP traps cannot be received
        SNMP trap listener configuration requirements
        How SNMP trap listener is stopped
        Configuration of SNMP trap global options
        Information about the DataFabric Manager MIB
    What events are
        Viewing events
        Managing events
        Operations on local configuration change events
    Alarm configurations
        Configuration guidelines
        Creating alarms
        Testing alarms
        Comments in alarm notifications
        Example of alarm notification in e-mail format
        Example of alarm notification in script format
        Example of alarm notification in trap format
        Response to alarms
        Deleting alarms
    Working with user alerts
        What user alerts are
        Differences between alarms and user alerts
        User alerts configurations
        E-mail addresses for alerts
        Domains in user quota alerts
        What the mailmap file is
        Guidelines for editing the mailmap file
        How the contents of the user alert are viewed
        How the contents of the e-mail alert are changed
        What the mailformat file is
        Guidelines for editing the mailformat file
    Introduction to DataFabric Manager reports
        Introduction to report options
        Introduction to report catalogs
        Different reports in Operations Manager
        What performance reports are
        Configuring custom reports
        Deleting custom reports
        Putting data into spreadsheet format
        What scheduling report generation is
        Methods to schedule a report
        What Schedules reports are
        What Saved reports are
    Data export in DataFabric Manager
        How to access the DataFabric Manager data
        Where to find the database schema for the views
        Two types of data for export
        Files and formats for storing exported data
        Format for exported DataFabric Manager data
        Format for exported Performance Advisor data
        Format for last updated timestamp
Security configurations
    Types of certificates in DataFabric Manager
        Self-signed certificates in DataFabric Manager
        Trusted CA-signed certificates in DataFabric Manager
        Creating self-signed certificates in DataFabric Manager
        Obtaining a trusted CA-signed certificate
        Enabling HTTPS
    Secure communications with DataFabric Manager
        How clients communicate with DataFabric Manager
        SecureAdmin for secure connection with DataFabric Manager clients
        Requirements for security options in Operations Manager
        Guidelines to configure security options in Operations Manager
    Managed host options
        Where to find managed host options
        Guidelines for changing managed host options
        Comparison between global and storage system-specific managed host options
        Limitations in managed host options
    Changing password for storage systems in DataFabric Manager
    Changing passwords on multiple storage systems
    Issue with modification of passwords for storage systems
    Authentication control in DataFabric Manager
        Using hosts.equiv to control authentication
        Configuring HTTP and monitor services to run as different user
File Storage Resource Management
    How FSRM monitoring works
    What capacity reports are
    Difference between capacity reports and file system statistics
    Prerequisites for FSRM
    Setting up FSRM
    NetApp Host Agent software overview
        NetApp Host Agent communication
        NetApp Host Agent software passwords
        NetApp Host Agent software passwords for monitoring tasks
        NetApp Host Agent software passwords for administration tasks
    Managing host agents
        Host Agent management tasks
        Configuring host agent administration settings
        Enabling administration access for one or more host agents
        Enabling administration access globally for all host agents
        Adding CIFS credentials
    What FSRM paths are
        Path management tasks
        Adding SRM paths
        Path names for CIFS
        Conventions for specifying paths from the CLI
        Viewing file-level details for a path
        Viewing directory-level details for a path
        Editing SRM paths
        Deleting SRM paths
        Automatically mapping SRM path
        What path walks are
        SRM path-walk recommendations
        What File SRM reports are
        Access restriction to file system data
    Identification of oldest files in a storage network
        FSRM prerequisites
        Verifying administrative access for using FSRM
        Verifying host agent communication
        Creating a new group of hosts
        Adding an FSRM path
        Adding a schedule
        Grouping the FSRM paths
        Viewing a report that lists the oldest files
User quotas
    About quotas
    Why you use quotas
    Overview of the quota process
        Differences among hard, soft, and threshold quotas
    User quota management using Operations Manager
        Prerequisites for managing user quotas using Operations Manager
        Where to find user quota reports in Operations Manager
        Monitor interval for user quotas in Operations Manager
    Modification of user quotas in Operations Manager
        Prerequisites to edit user quotas in Operations Manager
        Editing user quotas using Operations Manager
    Configuring user settings using Operations Manager
    What user quota thresholds are
        What DataFabric Manager user thresholds are
        User quota thresholds in Operations Manager
        Ways to configure user quota thresholds in Operations Manager
        Precedence of user quota thresholds in DataFabric Manager
Management of LUNs, Windows and UNIX hosts, and FCP targets
    Management of SAN components
    SAN and NetApp Host Agent software
    List of tasks performed to monitor targets and initiators
        List of tasks performed using NetApp Host Agent software
    Prerequisites to manage targets and initiators
    Prerequisites to manage SAN hosts
    Reports for monitoring LUNs, FCP targets, and SAN hosts
        Information available on the LUN Details page
        Tasks performed from the LUN Details page
        Information about the FCP Target Details page
        Information about the Host Agent Details page
        List of tasks performed from the Host Agent Details page
    How storage systems, SAN hosts, and LUNs are grouped
        Granting access to storage systems, SAN hosts, and LUNs
    Introduction to deleting and undeleting SAN components
        Deleting a SAN component
        How a deleted SAN component delete is restored
    Where to configure monitoring intervals for SAN components
File system management
    Access to storage-related reports
    Storage capacity thresholds in Operations Manager
        Modification of storage capacity thresholds settings
        Changing storage capacity threshold settings for global group
        Changing storage capacity threshold settings for an individual group
        Changing storage capacity threshold settings for a specific aggregate, volume, or qtree
    Management of aggregate capacity
        Volume space guarantees and aggregate overcommitment
        Available space on an aggregate
        Considerations before modifying aggregate capacity thresholds
        Aggregate capacity thresholds and their events
    Management of volume capacity
        Volume capacity thresholds and events
        Normal events for a volume
        Modification of the thresholds
    Management of qtree capacity
        Volume Snapshot copy thresholds and events
        Qtree capacity thresholds and events
    How Operations Manager monitors volumes and qtrees on a vFiler unit
        How Operations Manager monitors qtree quotas
        Where to find vFiler storage resource details
    What clone volumes are
        Identification of clones and clone parents
    Why Snapshot copies are monitored
        Snapshot copy monitoring requirements
        Detection of Snapshot copy schedule conflicts
        Dependencies of a Snapshot copy
    Storage chargeback reports
        When is data collected for storage chargeback reports
        Determine the current month's and the last month's values for storage chargeback report
        Chargeback reports in various formats
    The chargeback report options
        Specifying storage chargeback options at the global or group level
        The storage chargeback increment
        Currency display format for storage chargeback
        Specification of the annual charge rate for storage chargeback
        Specification of the Day of the Month for Billing for storage chargeback
        The formatted charge rate for storage chargeback
    What deleting storage objects for monitoring is
        Reports of deleted storage objects
        Undeleting a storage object for monitoring
Storage system management
    Management tasks performed using Operations Manager
    Operations Manager components for managing your storage system
    Storage system groups
    Custom comment fields in Operations Manager
    Consolidated storage system and vFiler unit data and reports
        Tasks performed by using the storage systems and vFiler unit report pages
        Where to find information about a specific storage system
        Tasks performed from a Details page of Operations Manager
        Editable options for storage system or vFiler unit settings
    What Storage Controller Tools list is
    What Cluster Tools list is
    What the Diagnose Connectivity tool does
    The Refresh Monitoring Samples tool
    The Run a Command tool
    The Run Telnet tool
    Console connection through Telnet
    Managing active/active configuration with DataFabric Manager
        Requirements for using the cluster console in Operations Manager
        Accessing the cluster console
        What the Takeover tool does
        What the Giveback tool does
    DataFabric Manager CLI to configure storage systems
    Remote configuration of a storage system
        Prerequisites for running remote CLI commands from Operations Manager
        Running commands on a specific storage system
        Running commands on a group of storage systems from Operations Manager
    Remote configuration of a cluster
        Running commands on a specific cluster
        Running commands on a specific node of a cluster
    Storage system management using FilerView
        What FilerView is
        Configuring storage systems by using FilerView
    Introduction to MultiStore and vFiler units
        Why monitor vFiler units with DataFabric Manager
        Requirements for monitoring vFiler units with DataFabric Manager
        vFiler unit management tasks
Configuration of storage systems
    Management of storage system configuration files
        Prerequisites to apply configuration files to storage systems and vFiler units
        List of access roles to manage storage system configuration files
        List of tasks for configuration management
        What configuration files are
        What a configuration plug-in is
        Comparison of configurations
        Verification of a successful configuration push
    What a configuration resource group is
        List of tasks for managing configuration groups
        Considerations when creating configuration groups
        Creating configuration resource groups
        Parent configuration resource groups
    Configuring multiple storage systems or vFiler units
Backup Manager
    Backup management deployment scenario
    System requirements for backup
    What backup scripts do
    Methods of storage system discovery
        What the Backup Manager discovery process is
        What SnapVault relationship discovery is
        New directories for backup
        Viewing directories that are not backed up
    SnapVault services setup
        Configuring the SnapVault license
        Enabling NDMP backups
    Management of SnapVault relationships
        Adding secondary storage systems
        Adding secondary volumes
        Adding primary storage systems
        Selecting primary directories or qtrees for backup
    Best practices for creating backup relationships
    What backup schedules are
        Snapshot copies and retention copies
        Requirements to create a backup schedule
        Creating backup schedules
        Local data protection with Snapshot copies
        Snapshot copy schedule interaction
    Management of discovered relationships
        Enabling DataFabric Manager to manage discovered relationships
    What lag thresholds are
        Setting global thresholds
        Setting local thresholds
    Bandwidth limitation for backup transfers
        Configuring backup bandwidth
    List of CLI commands to configure SnapVault backup relationships
    Primary directory format
    Secondary volume format
Disaster Recovery Manager
    Prerequisites for using Disaster Recovery Manager
    Tasks performed by using Disaster Recovery Manager
    What a policy is
        What a replication policy does
        What a failover policy does
        Policy management tasks
    Connection management
        Connection management tasks
        What the connection describes
        What multipath connections are
    Authentication of storage systems
        Authentication of discovered and unmanaged storage systems
        Addition of a storage system
        Modification of NDMP credentials
        Deletion of a storage system
    Volume or qtree SnapMirror relationships
        Decisions to make before adding a new SnapMirror relationship
        Addition of a new SnapMirror relationship
        Modification of an existing SnapMirror relationship
        Modification of the source of a SnapMirror relationship
        Reason to manually update a SnapMirror relationship
        Termination of a SnapMirror transfer
        SnapMirror relationship quiescence
        View of quiesced SnapMirror relationships
        Resumption of a SnapMirror relationship
        Disruption of a SnapMirror relationship
        View of a broken SnapMirror relationship
        Resynchronization of a broken SnapMirror relationship
        Deletion of a broken SnapMirror relationship
    What lag thresholds for SnapMirror are
        Where to change the lag thresholds
        Lag thresholds you can change
        Reasons for changing the lag thresholds
    What the job status report is
Maintenance and management
    Accessing the CLI
    Where to find information about DataFabric Manager commands
    Audit logging
        Events audited in DataFabric Manager
        Global options for audit log files and their values
        Format of events in an audit log file
        Permissions for accessing the audit log file
    What the remote platform management interface is
        RLM card monitoring in DataFabric Manager
        Prerequisites for using the remote platform management interface
    Scripts overview
        Commands that can be used as part of the script
        Package of the script content
        What script plug-ins are
        What the script plug-in directory is
        What backup scripts do
        What the configuration difference checker script is
    What the DataFabric Manager database backup process is
        When to back up data
        Where to back up data
        Recommendations for disaster recovery
        Backup storage and sizing
        Limitation of Snapshot-based backups
        Access requirements for backup operations
        Changing the directory path for archive backups
        Starting database backup from Operations Manager
        Scheduling database backups from Operations Manager
        Specifying the backup retention count
        Disabling database backup schedules
        Listing database backups
        Deleting database backups from Operations Manager
        Displaying diagnostic information from Operations Manager
        Exportability of a backup to a new location
    What the restore process is
        Restoring the database from the archive-based backup
        Restoring the database from the Snapshot copy-based backup
        Restoration of the database on different systems
    Disaster recovery configurations
        Disaster recovery using Protection Manager
        Disaster recovery using SnapDrive
Troubleshooting in Operations Manager
    AutoSupport in DataFabric Manager
        Reasons for using AutoSupport
        Types of AutoSupport messages in DataFabric Manager
        Protection of private data by using AutoSupport
        Configuring AutoSupport
    DataFabric Manager logs
        Access to logs
        Accessing the logs through the DataFabric Manager CLI
        Access to the SAN log
        Apache and Sybase log rotation in DataFabric Manager
    Common DataFabric Manager problems
        Communication issues between DataFabric Manager and routers
        E-mail alerts not working in DataFabric Manager
    How discovery issues are resolved
        Use of the Diagnose Connectivity tool for a managed storage system
        Use of the Diagnose Connectivity tool for an unmanaged storage system
        Where to find the Diagnose Connectivity tool in Operations Manager
        Reasons why DataFabric Manager might not discover your network
        Troubleshooting network discovery issues
        Troubleshooting appliance discovery issues with Operations Manager
    How configuration push errors are resolved
    How File Storage Resource Manager (FSRM) issues are resolved
    Issues related to SAN events
        Offline FC Switch Port or Offline HBA Port
        Faulty FC Switch Port or HBA Port Error
        Offline LUNs
        Snapshot copy of LUN not possible
        High traffic in HBA Port
    Import and export of configuration files
    How inconsistent configuration states are fixed
    Data ONTAP issues impacting protection on vFiler units
Appendix
    List of events and severity types
    Report fields and performance counters
        Report fields and performance counters for filer catalogs
        Report fields and performance counters for vFiler catalogs
        Report fields and performance counters for volume catalogs
        Report fields and performance counters for qtree catalogs
        Report fields and performance counters for aggregate catalogs
        Report fields and performance counters for LUN catalogs
        Report fields and performance counters for disk catalogs
    Protocols and port numbers
        DataFabric Manager server communication
        DataFabric Manager access to storage systems
        DataFabric Manager access to host agents
        DataFabric Manager access to Open Systems SnapVault agents
    SAN management
        Discovery of SAN hosts by DataFabric Manager
        SAN management using DataFabric Manager
        Reports for monitoring SANs
        DataFabric Manager options
        DataFabric Manager options for SAN management
        How SAN components are grouped
Glossary
Index


Copyright information

Copyright © 1994–2010 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).


Trademark information

NetApp; the NetApp logo; the Network Appliance logo; ApplianceWatch; ApplianceWatch PRO; ASUP; AutoSupport; ComplianceClock; Cryptainer; Cryptoshred; Data Motion; Data ONTAP; DataFabric; DataFort; Decru; Decru DataFort; FAServer; FilerView; FlexCache; FlexClone; FlexScale; FlexShare; FlexSuite; FlexVol; FPolicy; gFiler; Go further, faster; Lifetime Key Management; LockVault; Manage ONTAP; MetroCluster; MultiStore; NearStore; NetCache; Network Appliance; NOW; NOW (NetApp on the Web); ONTAPI; OpenKey; RAID-DP; ReplicatorX; SANscreen; SecureAdmin; SecureShare; Shadow Tape; Simulate ONTAP; SnapCopy; SnapDirector; SnapDrive; SnapFilter; SnapLock; SnapManager; SnapMigrator; SnapMirror; SnapMover; SnapRestore; Snapshot; SnapSuite; SnapValidator; SnapVault; Spinnaker Networks; the Spinnaker Networks logo; SpinAccess; SpinCluster; SpinFlex; SpinFS; SpinHA; SpinMove; SpinServer; SpinStor; StoreVault; the StoreVault logo; SyncMirror; Tech OnTap; The evolution of storage; Topio; vFiler; VFM; Virtual File Manager; VPolicy; WAFL; and Web Filer are registered trademarks or trademarks of NetApp, Inc. in the U.S.A. and/or other countries, and registered trademarks in some other countries. Get Successful and Select are service marks of NetApp, Inc. in the U.S.A.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml.

Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks. NetCache is certified RealSystem compatible.


About this guide

You can use your product more effectively when you understand this document's intended audience and the conventions that this document uses to present information. This guide describes the DataFabric Manager software and how to use it to monitor, administer, and optimize storage systems that run the Data ONTAP operating system. It emphasizes the characteristics of integrated management of storage systems and describes how to use Operations Manager, the Web-based user interface (UI) of DataFabric Manager. The information in this guide applies to all supported storage system models.

Here you can learn what this document describes and who it is intended for, what special terminology is used in the document, what command, keyboard, and typographic conventions this document uses to convey information, and other details about finding and using information.

Next topics
Audience on page 23
Keyboard and formatting conventions on page 24
Special messages on page 25
How to send your comments on page 25

Audience

This document is written with certain assumptions about your technical knowledge and experience. Here you can learn if this guide is right for you, based on your job, knowledge, and experience. This document is for system administrators and others interested in managing and monitoring storage systems with DataFabric Manager. It is written with the assumption that you are familiar with the following technology:
• Data ONTAP operating system software
• The protocols that you use for file sharing or transfers, such as NFS, CIFS, iSCSI, FC, or HTTP
• The client-side operating systems (UNIX or Windows)

This guide does not cover basic system or network administration topics, such as IP addressing and network management.

Keyboard and formatting conventions

You can use your product more effectively when you understand how this document uses keyboard and formatting conventions to present information.

Keyboard conventions

The NOW site: Refers to NetApp On the Web at http://now.netapp.com/.

Enter, enter: Used to refer to the key that generates a carriage return; the key is named Return on some keyboards. Also used to mean pressing one or more keys on the keyboard and then pressing the Enter key, or clicking in a field in a graphical interface and then typing information into the field.

hyphen (-): Used to separate individual keys. For example, Ctrl-D means holding down the Ctrl key while pressing the D key.

type: Used to mean pressing one or more keys on the keyboard.

Formatting conventions

Italic font: Words or characters that require special attention; placeholders for information that you must supply (for example, if the guide says to enter the arp -d hostname command, you enter the characters "arp -d" followed by the actual name of the host); and book titles in cross-references.

Monospaced font: Command names, option names, keywords, and daemon names; information displayed on the system console or other computer monitors; contents of files; and file, path, and directory names.

Bold monospaced font: Words or characters you type. What you type is always shown in lowercase letters, unless your program is case-sensitive and uppercase letters are necessary for it to work properly.

Special messages

This document might contain the following types of messages to alert you to conditions that you need to be aware of.

Note: A note contains important information that helps you install or operate the system efficiently.

Attention: An attention notice contains instructions that you must follow to avoid a system crash, loss of data, or damage to the equipment.

How to send your comments

You can help us to improve the quality of our documentation by sending us your feedback. Your feedback is important in helping us to provide the most accurate and high-quality information. If you have suggestions for improving this document, send us your comments by e-mail to doccomments@netapp.com. To help us direct your comments to the correct division, include in the subject line the name of your product and the applicable operating system. For example, FAS6070—Data ONTAP 7.3, Host Utilities—Solaris, or Operations Manager 3.8—Windows.


What is new in this release

The "What is new in this release" section describes new features and changes in Operations Manager for DataFabric Manager server 4.0. Detailed information about the features is provided elsewhere in this guide.

Next topics
Overview of new and changed features on page 27
User interface changes on page 28
New and changed CLI commands on page 30

Overview of new and changed features

Operations Manager for DataFabric Manager server 4.0 contains the following new and changed features.

Monitoring Data ONTAP 8.0 Cluster-Mode: Starting from DataFabric Manager 4.0, Operations Manager can discover, monitor, and generate reports of systems running Data ONTAP 8.0 Cluster-Mode. However, this feature is not applicable to systems running Data ONTAP 8.0 7-Mode.

V-Series SAN-attached storage management: This feature provides functionalities in Operations Manager to monitor and report the SAN-attached storage of a V-Series system.

Larger aggregates: Data ONTAP 8.0 provides support for larger aggregates, with a maximum aggregate size of 16 TB. However, this feature is applicable only to systems running Data ONTAP 8.0 7-Mode.

My AutoSupport: Starting from DataFabric Manager 4.0, a link to My AutoSupport is provided in the Storage Controller Details page in Operations Manager. My AutoSupport is a Web-based self-service support application that provides a simple-to-use graphical interface for functionalities such as device visualization, component health status, and upgrade advisor. However, this feature is applicable only to systems running Data ONTAP 8.0 7-Mode.

Web services: DataFabric Manager 4.0 supports the Web services feature and Web services APIs. These Web services APIs, defined in the WSDL file, use SOAP 1.2 over HTTP or HTTPS. The WSDL file is packaged with the DataFabric Manager Suite and is available at $INSTALL_DIR/misc/dfm.wsdl. You can use various SOAP toolkits, such as gSOAP, Axis, Apache CXF, and Perl SOAP::Lite, to develop a Web service application from the WSDL file.
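As a minimal sketch of working with the packaged WSDL file, the following commands generate client stubs with the gSOAP toolkit named above. This assumes a UNIX shell, that gSOAP's wsdl2h and soapcpp2 tools are installed, and that $INSTALL_DIR points at the DataFabric Manager installation directory; the output file name dfm.h is arbitrary.

   # Convert the packaged WSDL file into a gSOAP header (dfm.h is an arbitrary name)
   wsdl2h -o dfm.h $INSTALL_DIR/misc/dfm.wsdl
   # Generate client-side SOAP stubs from that header (-C means client code only)
   soapcpp2 -C dfm.h

Any of the other SOAP toolkits listed above, such as Perl SOAP::Lite, can consume the same WSDL file in its own way.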

User interface changes

DataFabric Manager server 4.0 introduces terminology changes and changes to the Web-based UI pages.

Terminology changes

The following table describes the change in terminology.

Old terminology: Node/Appliance
New terminology: Storage Controller

Changes to Web-based UI pages

Following are the modifications to the names of the tabs:
• Appliance Tab is changed to Physical Systems Tab.
• vFilers Tab is changed to Virtual Systems Tab.

The following new reports for Data ONTAP 8.0 clusters are added:
• Clusters, All
• Clusters, Licenses
• Clusters, Volume Capacity
• Clusters, Aggregate Capacity
• Cluster Chargeback by Usage, This Month
• Cluster Chargeback by Usage, Last Month
• Cluster Chargeback by Allocation, This Month
• Cluster Chargeback by Allocation, Last Month

You can access these new cluster-related reports from Control Center > Home > Member Details > Physical Systems > Report drop-down list.

The following new reports for array LUNs, storage arrays, and storage array ports for a V-Series system are added:
• Storage Arrays report
• Storage Array Configuration report
• Storage Array Ports report
• Array LUN Configuration report
• Aggregate Array LUN Storage I/O Load report

You can access these SAN-attached V-Series system-related reports from Control Center > Home > Member Details > Physical Systems > Report drop-down list.

The following new reports for ports and interface groups are added:
• Ports, All
• Port Graph
• Interface Groups, All

You can access ports and interface groups related reports from Control Center > Home > Member Details > Physical Systems > Report drop-down list.

The following new aggregate reports are added:
• Aggregates, Controllers
• Striped Aggregates, All

You can access these new aggregate-related reports from Control Center > Home > Member Details > Aggregates > Report drop-down list.

The following new file system reports are added:
• Volumes, Controllers
• Striped Volumes, All
• Volumes, Junction Paths

You can access these file system related reports from Control Center > Home > Member Details > File Systems > Report drop-down list.

The following new reports for virtual servers are added:
• Virtual Servers, All
• Virtual Servers, Deleted
• Virtual Servers, Volume Capacity
• Virtual Server Chargeback by Usage, This Month
• Virtual Server Chargeback by Usage, Last Month
• Virtual Server Chargeback by Allocation, This Month
• Virtual Server Chargeback by Allocation, Last Month

You can access virtual server related reports from Control Center > Home > Member Details > Virtual Systems > Report drop-down list.

The following fields and sections are included for V-Series systems:
• "SAN Summary" section is included in the Details page for a V-Series system.
• "Arrays and Array LUNs" section is included in the Group Summary page.
• A new field, Block Type, is included in the "Volumes, All" and "Aggregates, All" reports.

New and changed CLI commands

DataFabric Manager server 4.0 includes new and changed CLI commands to support the new and changed features. For detailed information about these commands, see the DataFabric Manager man pages.

New CLI commands to support new features

Virtual server commands:
• dfm vserver list: Provides information about virtual servers.
• dfm vserver add: Adds one or more deleted virtual servers to Operations Manager.
• dfm vserver delete: Deletes one or more virtual servers from Operations Manager.

Cluster commands:
• dfm cluster list: Provides information about clusters.
• dfm cluster add: Adds one or more clusters to Operations Manager.
• dfm cluster delete: Deletes one or more clusters from Operations Manager.

Changed CLI commands

• dfm perf view2: Displays a list of the available performance views.

Other new CLI commands

• dfm perf data retrieve: Retrieves data collection parameters for performance views.
• dfm perf clientstat collect: Collects the CIFS and NFS client operation statistics for a storage system.
• dfm perf clientstat list: Displays the client operation statistics for a storage system.
• dfm perf clientstat purge: Deletes the client operation statistics for a storage system from the DataFabric Manager database.
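For example, you might use the new cluster and virtual server commands together as follows. The cluster name is a placeholder, and any optional flags are omitted; see the man pages for the full syntax.

   dfm cluster add cluster1.example.com   # add a cluster to Operations Manager
   dfm cluster list                       # confirm that the cluster was added
   dfm vserver list                       # list the virtual servers that were discovered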

Introduction to Operations Manager

Operations Manager is the Web-based UI of DataFabric Manager. You can use Operations Manager for the following day-to-day activities on storage systems:
• Discover storage systems
• Monitor the device or the object health, the capacity utilization, and the performance characteristics of a storage system
• View or export reports
• Configure alerts and thresholds for event management
• Group devices, vFiler units, host agents, volumes, qtrees, and LUNs
• Run Data ONTAP CLI commands simultaneously on multiple systems
• Configure role-based access control (RBAC)
• Manage host users, user groups, domain users, local users, and host roles

Note: DataFabric Manager 3.8 and later supports not only IPv4, but also IPv6.

Next topics
What DataFabric Manager server does on page 31
What a license key is on page 32
Access to Operations Manager on page 32
Information to customize in Operations Manager on page 32
Administrator accounts on the DataFabric Manager server on page 33
Authentication methods on the DataFabric Manager server on page 33

What DataFabric Manager server does

The DataFabric Manager server provides infrastructure services such as discovery, monitoring, role-based access control (RBAC), auditing, and logging for products in the NetApp Storage and Data suites. The software does not run on the storage systems. You can script commands using the command-line interface (CLI) of the DataFabric Manager software that runs on a separate server.

What a license key is

To use DataFabric Manager, you must enable the Operations Manager license by using the license key. The license key is a character string that is supplied by NetApp. If you are installing the software for the first time, you enter the license key during installation. You can enter the license key in the Options window under Licensed Features. You must enable additional licenses to use other features, such as disaster recovery and backup.

Access to Operations Manager

You can access Operations Manager and the CLI from the IP address or DNS name of the DataFabric Manager server. After successfully installing the DataFabric Manager software, the DataFabric Manager server starts discovering, monitoring, collecting, and saving information about objects in its database. Objects are entities such as storage systems and the vFiler units, disks, aggregates, volumes, LUNs, qtrees, and user quotas on these storage systems. In the case of the server on Windows, Operations Manager launches automatically and a welcome page appears.

Use either of the following URLs to access Operations Manager:

http://[server_ip_address]:8080
http://server_dnsname:8080

Depending on your Domain Name System (DNS) setup, you might need to use the fully qualified name in the second URL; for example, use tampa.florida.com instead of tampa.

DataFabric Manager 3.8 and later supports IPv6 along with IPv4. However, the following Operations Manager features lack IPv6 support:
• LUN management
• Snapshot-based backups (because SnapDrive for Windows and SnapDrive for UNIX do not support IPv6 addressing)
• Disaster recovery
• High Availability (HA) over Veritas Cluster Servers (VCS)
• "hosts.equiv" file based authentication, when the option httpd.admin.access is set to a value other than legacy

• APIs over HTTPS, which do not work for storage systems managed by using IPv6 addresses
• Discovery of storage systems and host agents that exist on remote networks
• Protocols such as RSH and SSH, which do not support IPv6 link-local addresses to connect to storage systems and host agents

Note: Link-local addresses work with SNMP and ICMP only.

Information to customize in Operations Manager

You can use Operations Manager to configure storage system IP addresses or names, administrator access control, and alarms, set up SNMP communities and administrator accounts, and create groups.

Administrator accounts on the DataFabric Manager server

You can use Operations Manager to set up administrator accounts on the DataFabric Manager server. You can grant capabilities such as read, write, delete, backup, restore, distribution, and full control to administrators. The DataFabric Manager software provides the following two different administrator accounts:
• Administrator—grants full access for the administrator who installed the software
• Everyone—allows users to have read-only access without logging in

Related concepts
How roles relate to administrators on page 55

Authentication methods on the DataFabric Manager server

The DataFabric Manager server uses the information available in the native operating system for authentication. The server does not maintain its own database of the administrator names and the passwords. However, you can configure the server to use Lightweight Directory Access Protocol (LDAP). If you configure LDAP, then the server uses it as the preferred method of authentication.

Next topics
Authentication with native operating system on page 33
Authentication with LDAP on page 34

Authentication with native operating system

You do not need to configure any options to enable the DataFabric Manager server to use the native operating system for authentication. Based on the native operating system, the DataFabric Manager application supports the following authentication methods:
• For Windows: local and domain authentication
• For UNIX: local password files, and NIS or NIS+

Note: Ensure that the administrator name you are adding matches the user name specified in the native operating system.

Authentication with LDAP

You can enable LDAP authentication on the DataFabric Manager server and configure it to work with your LDAP servers. The DataFabric Manager application provides predefined templates for the most common LDAP server types. These templates provide predefined LDAP settings that make the DataFabric Manager server compatible with your LDAP server.

Discovery process

Discovery is the process that the DataFabric Manager server uses to find storage systems on your organization's network. The server needs to locate and identify storage systems so that it can add them to its database, because the server can monitor and manage only systems and networks that are in the database.

Discovery is enabled by default. When you install the DataFabric Manager software, the DataFabric Manager server attempts to discover storage systems on the local subnet. Depending on your network setup, you might want to add other networks to the discovery process, to enable discovery on all networks, or to disable discovery entirely. You can disable autodiscovery and use manual discovery only, if you do not want SNMP network walking.

Next topics
Discovery by the DataFabric Manager server on page 35
What SNMP is on page 36
What host discovery is on page 42
Ping methods in host discovery on page 42
What host-initiated discovery is on page 42
How DataFabric Manager server discovers vFiler units on page 43
Discovery of storage systems on page 43
Discovery of storage systems and networks on page 44
Methods of adding storage systems and networks on page 45
Discovery of a cluster by Operations Manager on page 47
Data ONTAP 8.0 cluster monitoring tasks using Operations Manager on page 48
Introduction to V-Series SAN-attached storage management on page 49

Discovery by the DataFabric Manager server

The DataFabric Manager server depends on Simple Network Management Protocol (SNMP) to discover and periodically monitor storage systems. If your storage systems are not SNMP-enabled, you must enable SNMP before the server can discover them. You can enable SNMP on storage systems by using either FilerView or the Data ONTAP CLI. If the routers, switches, or storage systems use SNMP communities other than "public," you must specify the appropriate communities on the Edit Network Credentials page.
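For example, on a storage system running Data ONTAP 7G, you can enable SNMP and add a read-only community from the Data ONTAP CLI as shown below. The community string "public" is only a placeholder; use the community that you configure on the Edit Network Credentials page.

   options snmp.enable on        # turn on the SNMP daemon
   snmp community add ro public  # add a read-only SNMP community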

Automatic discovery is typically the primary process the server uses to discover storage systems and networks. In this process, the server and the systems (storage systems, vFiler units, and Host Agents) communicate automatically with each other. Manual addition is secondary to the discovery process; you typically need it only for the storage systems and the networks that you add after the server discovers the infrastructure.

What SNMP is

Simple Network Management Protocol (SNMP) is an application-layer protocol that facilitates the exchange of management information between network devices. SNMP is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP enables network administrators to manage network performance, find and solve network problems, and plan for network growth.

Next topics
When to enable SNMP on page 36
SNMP versions to discover and monitor storage systems on page 36
What the Preferred SNMP Version option is on page 37
How DataFabric Manager chooses network credentials for discovery on page 38
Discovery process using SNMPv1 or SNMPv3 on page 38
Monitoring process using SNMPv1 on page 39
Monitoring process using SNMPv3 on page 39
Setting SNMPv1 or SNMPv3 as the preferred version on page 39
Setting SNMPv1 as the only SNMP version on page 40
Setting SNMPv1 or SNMPv3 to monitor a storage system on page 40
Modifying the network credentials and SNMP settings on page 40
Deleting the SNMP settings for the network on page 41
Addition of a storage system from an undiscovered network on page 41
Diagnosis of SNMP connectivity on page 41

When to enable SNMP

You must enable SNMP on your storage systems before you install DataFabric Manager if you want DataFabric Manager to discover the storage systems immediately. You can also wait until after installing the software to enable SNMP on storage systems; however, this causes a delay in the server discovering the storage systems.

SNMP versions to discover and monitor storage systems

DataFabric Manager uses the SNMP protocol versions to discover and monitor the storage systems.

By default, DataFabric Manager uses SNMPv1, with public as the community string, to discover the storage systems. SNMPv1 is a widely used simple request/response protocol, whereas SNMPv3 is an interoperable standards-based protocol with security and remote configuration capabilities. SNMPv3 provides user-based security with separate authentication and authorization. You can use SNMPv3 to discover and monitor storage systems if SNMPv1 is disabled.

Note: SNMPv3 support is available only on storage systems running Data ONTAP 7.3 or later.

Note: The user on the storage system whose credentials are specified in Operations Manager should have the login-snmp capability to be able to use SNMPv3.

To use a specific configuration on a network, you must add the networks required; this is a method to specify common credentials. However, you can modify the SNMP version, if required.

Related concepts
Methods of adding storage systems and networks on page 45
Related tasks
Modifying the network credentials and SNMP settings on page 40
Related references
Guidelines for changing discovery options on page 45

What the Preferred SNMP Version option is

The Preferred SNMP Version option is a global or network-specific option that specifies the SNMP protocol version to be used first for discovery. You can use Operations Manager to configure the option with values such as SNMPv1 or SNMPv3.

Preferred SNMP version setup

The following describes the settings used corresponding to the SNMP version preferred at the storage system level or network level.

If the preferred SNMP version is specified at the storage system level, then the version specified in the Preferred SNMP Version option at the storage system level is used for monitoring the discovered storage system. The preferred version takes precedence over the network and global settings.

Note: If the monitoring fails using the specified SNMP version, then the other SNMP version is not used for storage system monitoring.

If the preferred SNMP version is not specified at the storage system level, then the network setting is used. If it is not specified at the network level either, then the global setting is used. This implies that the network or global settings are used for monitoring.

When DataFabric Manager is installed for the first time or updated, the global and network setting uses SNMPv1 as the preferred version, by default. For example, if SNMPv3 is set as the preferred version for monitoring, you can configure the global and network setting to use SNMPv3 as the default version.

When all or most of the storage systems in a network are running only a particular SNMP version, you are recommended to specify only that version as the preferred SNMP version for the network. This speeds up the discovery of storage systems running only a particular SNMP version. You can also prevent a particular version of SNMP from being used for discovery: if a particular version of SNMP is not in use in the network, then you can disable that SNMP version. This speeds up the discovery process.

Related tasks
Modifying the network credentials and SNMP settings on page 40

How DataFabric Manager chooses network credentials for discovery

DataFabric Manager chooses the network credentials for discovery as follows:
• If the discovery is running on a particular network and the network credentials are configured, then the network credentials configured for that particular network are used for discovery.
• If no network exists, then the network credentials configured as global settings are used for discovery.

Discovery process using SNMPv1 or SNMPv3

The discovery process for a storage system by using SNMPv1 or SNMPv3 works as follows:
• If the storage system is discovered using the preferred SNMP version (let us say, SNMPv1), then the discovered storage system is added with the preferred SNMP version as Global/Network Default.
• If the storage system is not discovered using SNMPv1 and the discovery succeeds using SNMPv3, then SNMPv3 is set as the preferred version for monitoring.

Monitoring process using SNMPv1

Storage systems are monitored using SNMPv1 as follows:
• If the Preferred SNMP Version option is set to SNMPv1, or the option is not set for the storage system and the global or network setting is SNMPv1, then the community string set at the network level is used for the SNMPv1 monitoring.
• If the community string is not specified at either the global or the network level, then SNMPv1 is disabled and an event is generated to indicate the SNMP communication failure with the storage system.

Monitoring process using SNMPv3

Storage systems are monitored using SNMPv3 as follows:
• If the Preferred SNMP Version option is set to SNMPv3, or the option is not set for the storage system and the global or network setting is SNMPv3, then the login and the password specified for the storage system are used for the SNMPv3 monitoring.
• If the storage system credentials are not specified, then the login and the password specified at the network level are used for the SNMPv3 monitoring.
• If no credentials are provided at the network level, then the login and the password specified at the global level are used for the SNMPv3 monitoring.
• If no credentials are provided at the global level, then an event is generated to indicate the SNMP communication failure with the storage system.

Setting SNMPv1 or SNMPv3 as the preferred version

You can set SNMPv1 or SNMPv3 as the preferred version for storage system discovery on a specific network. DataFabric Manager 4.0 and later supports SNMPv3 communication through the auth protocols MD5 and SHA; you can configure SNMPv3 settings with either of the auth protocols from the Network Credentials page or from the CLI.

Steps
1. Select the Network Credentials submenu from the Setup menu. Alternatively, select the Discovery submenu from the Setup menu and click the edit link corresponding to the Network Credentials option.

2. Provide values for each of the parameters requested.
3. Click Add.

Setting SNMPv1 as the only SNMP version

You can set SNMPv1 as the only SNMP version available to monitor all storage systems in a network.

Steps
1. Go to the Network Credentials page.
2. Click the edit link corresponding to the Edit field for the SNMPv3 enabled network.
3. In the SNMPv3 Settings section, clear the Login and Password values.
4. Click Update.
5. If the storage system in the network has the Preferred SNMP Version option set to SNMPv3, then:
   a. Go to the Edit Appliance Settings page of the corresponding storage system.
   b. Modify the value of the Preferred SNMP Version option to Global/Network Default.
6. Click Update.

Setting SNMPv1 or SNMPv3 to monitor a storage system

You can set SNMPv1 or SNMPv3 to monitor a storage system.

Steps
1. Go to the Edit Appliance Settings page.
2. Set the Preferred SNMP Version option for the corresponding storage system.
3. Click Update.

Modifying the network credentials and SNMP settings

You can modify the network credentials and SNMP settings using Operations Manager.

Steps
1. Select the Network Credentials submenu from the Setup menu. Alternatively, select the Discovery submenu from the Setup menu and click the edit link corresponding to the Network Credentials option.

2. Click the edit link corresponding to the Edit field in the Network Credentials page.
3. Modify values for the parameters required.
4. Click Update.

Deleting the SNMP settings for the network

You can delete the SNMP settings for the network using Operations Manager.

Steps
1. Go to the Network Credentials page.
2. Select the check box corresponding to the Delete field for the required network.
3. Click Delete Selected.

Addition of a storage system from an undiscovered network

You can add a single storage system to DataFabric Manager from an undiscovered network on which only SNMPv3 is enabled. You can add the storage system by running the dfm host add -N command, with the appropriate values for the following storage system credentials:
• hostLogin
• hostPassword

In this case, the discovery is not enabled on the storage system's network. However, if host credentials are unspecified, then the network or global credentials are used for SNMPv3.

Diagnosis of SNMP connectivity

You can diagnose the SNMP connectivity with a host by running the Diagnose Connectivity tool from Operations Manager. You can access the Diagnose Connectivity tool from the Storage Controller Tools list located at the lower left of Operations Manager. Alternatively, you can run the dfm host diag hostname command to diagnose DataFabric Manager's connectivity with a host by using SNMPv1 and SNMPv3. The credentials used for diagnosing connectivity by using rsh and ssh are the host credentials.

Related concepts
Use of the Diagnose Connectivity tool for a managed storage system on page 288
Use of the Diagnose Connectivity tool for an unmanaged storage system on page 288
Where to find the Diagnose Connectivity tool in Operations Manager on page 289
Reasons why DataFabric Manager might not discover your network on page 289
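As a minimal sketch of the two commands described above, the following adds a system from an undiscovered, SNMPv3-only network and then diagnoses connectivity to it. The host name and credentials are placeholders, and the name=value form in which hostLogin and hostPassword are supplied is an assumption (in some releases you may instead set them afterward with dfm host set); see the dfm host man page for the exact syntax.

   dfm host add -N hostLogin=root hostPassword=secret storage1.example.com
   dfm host diag storage1.example.com   # checks SNMPv1 and SNMPv3 connectivity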

What host discovery is

The DataFabric Manager server automatically discovers storage systems and Host Agents that are in the same subnet as the server. The discovery of networks, when enabled, is integrated with the discovery of storage systems and Host Agents; the discovery occurs at the same time. When you install the DataFabric Manager software, the Host Discovery option is enabled by default. You must enable SNMP on your storage systems and the routers for DataFabric Manager to monitor and manage systems. By default, SNMP is enabled on storage systems.

Note: DataFabric Manager 3.8 supports discovery of IPv6 networks and hosts. However, the DataFabric Manager server does not identify a host until its network (IPv6) address details are added. You can add the host IPv6 network to DataFabric Manager by using the dfm network add command.

Ping methods in host discovery

The DataFabric Manager server uses SNMP queries for host discovery. Ping methods might include ICMP echo, HTTP, NDMP, or ICMP echo and SNMP. The ICMP echo and SNMP ping method is the default for new installations. When you select ICMP echo and SNMP as the ping method, the server uses ICMP echo first, and then SNMP, to determine if the storage system is running. This ping method does not use HTTP to ping a host. Therefore, if a storage system (behind a transparent HTTP cache) is down and the HTTP cache responds, the server does not mistake the storage system to be running.

What host-initiated discovery is

Host-initiated discovery is based on the DNS SRV record, where DataFabric Manager details are maintained. Whenever a host initiates communication with the DataFabric Manager server, it sends a request to DataFabric Manager. When DataFabric Manager receives this request, the host is added to the DataFabric Manager host list. Currently, host-initiated discovery is supported by NetApp Host Agent only. For information about how you can modify the DataFabric Manager server details for host-initiated discovery, see the NetApp Host Agent Installation and Administration Guide.

Related information
NetApp Host Agent Installation and Administration Guide - http://now.netapp.com/NOW/knowledge/docs/nha/nha_index.shtml
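As an example of the dfm network add command mentioned in the note above, the following adds an IPv6 network so that hosts on it can be identified. The documentation prefix 2001:db8::/64 is a placeholder, and the exact prefix notation that the command accepts is an assumption; see the dfm network man page.

   dfm network add 2001:db8::/64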

How DataFabric Manager server discovers vFiler units

The DataFabric Manager server monitors hosting storage systems to discover vFiler units. You must set authentication credentials for the hosting storage system to ensure the discovery of vFiler units. In addition, when the server discovers a vFiler unit, it does not add the network to which the vFiler unit belongs to its list of networks on which it runs host discovery. However, when you delete a network, the server continues to monitor the vFiler units present in that network.

The server monitors the hosting storage system once every hour to discover new vFiler units that you configured on the storage system. The server deletes from the database the vFiler units that you destroyed on the storage system. You can change the default monitoring interval in Setup menu > Options > Monitoring options, or by using the following CLI command:

   dfm option set vFilerMonInterval=1hour

You can disable the vFiler discovery in Setup menu > Options > Discovery options, or by using the following CLI command:

   dfm option set discovervfilers=no

When you disable this option, the server continues to monitor the discovered vFiler units.

Related tasks
Changing password for storage systems in DataFabric Manager on page 136
Changing passwords on multiple storage systems on page 136
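For example, to turn vFiler unit discovery back on and reset the monitoring interval to the documented default, you can reverse the commands shown above. The =yes value is assumed to be the complement of the documented =no setting.

   dfm option set discovervfilers=yes
   dfm option set vFilerMonInterval=1hour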

Discovery of storage systems

The following describes the process that DataFabric Manager uses to discover storage systems if the Host Discovery option is enabled and the Network Discovery option is disabled (the default value).

Stage 1: DataFabric Manager issues an SNMP GET request to all hosts on the local network. The purpose of the request is to determine the system identity of the hosts. The local network is the network to which the DataFabric Manager server is attached.

Stage 2: If the SNMP GET request is successful, DataFabric Manager adds the discovered storage systems to its database. If the storage system is a hosting storage system on which vFiler units are configured, DataFabric Manager also discovers those vFiler units.

Note: vFiler units will be discovered only after you set the credentials for the hosting storage system.

Note: DataFabric Manager repeats Stages 1 to 2 to discover new storage systems. The minimum interval for repeating the cycle is set by the Discovery Interval (the default is every 15 minutes) and the Discovery Timeout (the default is 2 seconds).

Discovery of storage systems and networks

The following describes the process that DataFabric Manager uses to discover storage systems and networks if both the Host Discovery and Network Discovery options are enabled.

Stage 1: DataFabric Manager issues an SNMP GET request to all hosts on the local network. The purpose of the request is to determine the system identity of the hosts. The local network is the network to which the DataFabric Manager server is attached.

Stage 2: If the SNMP GET request is successful, DataFabric Manager adds the discovered storage systems to its database. If the storage system is a hosting storage system on which vFiler units are configured, DataFabric Manager also discovers those vFiler units.

Note: vFiler units will be discovered only after you set the credentials for the hosting storage system.

Stage 3: DataFabric Manager issues another SNMP GET request to routers that responded to the first SNMP request. The purpose of the request is to gather information about other networks to which these routers might be attached.

Stage 4: When DataFabric Manager receives replies, if it finds networks that are not included in its database, it adds the new networks to its database.

Stage 5: DataFabric Manager selects another network from its database and issues an SNMP GET request to all hosts on that network.

Stage 6: DataFabric Manager repeats Stages 2 through 5 until it has sent SNMP queries to all the networks in its database. By default, the minimum interval for repeating the network discovery cycle is set at every 15 minutes. The actual interval depends on the number of networks to scan and their size.

Methods of adding storage systems and networks

You can apply a combination of methods to efficiently add storage systems and networks to DataFabric Manager:
• Start the discovery process by manually adding one storage system from each network that has storage systems. Then other storage systems on the network are found automatically. When you add a storage system, its network is added, too.
• If you set up a new network of storage systems, add one storage system so that its network and all other storage systems on it are found.
• After storage systems are discovered, add hosts by using either Operations Manager or the dfm host add command on the command line.
• After verifying that all of the storage systems have been added, disable host discovery to save network resources.

Guidelines for changing discovery options

You must follow a set of guidelines for changing the default values of the discovery options. Keep the defaults for the discovery options of Host Discovery (Enabled) and Network Discovery (Disabled).

Discovery Interval (15 minutes): This option specifies the period after which DataFabric Manager scans for new storage systems and networks. This option affects the discovery interval only at the time of installation. Change the default value if you want to lengthen the minimum time interval between system discovery attempts. If you choose a longer interval, there might be a delay in discovering new storage systems, but the discovery process is less likely to affect the network load.

Discovery Timeout (2 seconds): This option specifies the time interval after which DataFabric Manager considers a discovery query to have failed. Change the default value if you want to lengthen the time before considering a discovery to have failed, to avoid discovery queries on a local area network failing because of long response times of a storage system.

Host Discovery (Enabled): This option enables the discovery of storage systems through SNMP. Change the default value if any of the following situations exist:
• All storage systems that you expected DataFabric Manager to discover have been discovered and you do not want DataFabric Manager to keep scanning for new storage systems.
• You want to manually add storage systems to the DataFabric Manager database. Manually adding storage systems is faster than discovering storage systems when you want DataFabric Manager to manage a small number of storage systems, or when you want to add a single new storage system to the DataFabric Manager database.

Network Discovery (Disabled): This option enables the discovery of networks. Change the default value if you want the DataFabric Manager server to automatically discover storage systems on your entire network. The other method for discovering these storage systems is to add them manually.

Note: When the Network Discovery option is enabled, the list of networks on the Networks to Discover page can expand considerably as DataFabric Manager discovers additional networks attached to previously discovered networks.

Network Discovery Limit (in hops) (15): This option sets the boundary of network discovery as a maximum number of hops (networks) from the DataFabric Manager server. Increase this limit if the storage systems that you want DataFabric Manager to discover are connected to networks that are more than 15 hops (networks) away from the network to which the DataFabric Manager server is attached. Decrease the discovery limit if a smaller number of hops includes all the networks with storage systems you want to discover. For example, reduce the limit to six hops if there are no storage systems that must be discovered on networks beyond six hops.

Reducing the limit prevents DataFabric Manager from using cycles to probe networks that contain no storage systems that you want to discover.

Networks to Discover: This option allows you to manually add and delete networks that DataFabric Manager scans for new storage systems. Change the default value if you want to add a network to DataFabric Manager that it cannot discover automatically, or if you want to delete a network in which you no longer want storage systems to be discovered.

Network Credentials: This option enables you to specify, change, or delete an SNMP community that DataFabric Manager uses for a specific network or host. Change the default value if storage systems and routers that you want to include in DataFabric Manager do not use the default SNMP community.

Host agent discovery: This option allows you to enable or disable the discovery of host agents. Change the default value if you want to disable the discovery of LUNs or storage area network (SAN) hosts.

Discovery of a cluster by Operations Manager

The DataFabric Manager server automatically discovers a cluster that is in the same network as the DataFabric Manager host. Operations Manager uses per-network configuration settings or the default network settings. For a specified IP address and the SNMP version specified in the Preferred SNMP Version option, the following objects are queried:
• Name of the system (sysName)
• ID of the system object (sysObjectId)
• Cluster ID (clusterIdentityUuid)

Note: For both SNMPv1 and SNMPv3, the same objects are queried.

Operations Manager identifies the cluster based on the sysObjectId; for example, if the sysObjectId is netappProducts.netappCluster, the system is identified as a cluster. When the query succeeds, the cluster is added to the network. If the cluster is in a different network, you can discover it by specifying the appropriate SNMP version in the Preferred SNMP Version option in the Network Credentials page. If the query fails, you can use another version of SNMP to send queries.

Data ONTAP 8.0 cluster monitoring tasks using Operations Manager

Using Operations Manager, you can perform various monitoring tasks on cluster nodes running Data ONTAP 8.0 Cluster-Mode. You can perform the following tasks on a cluster by using Operations Manager:
• Discover a cluster, either automatically or manually.
• Monitor the cluster.
• Configure the cluster to receive alerts.
• Generate various reports on the cluster and its components.
• Execute commands remotely.
• Perform File SRM-related tasks.

Adding a cluster

By using Operations Manager, you can add a cluster by specifying the IP address of the given cluster management logical interface.

About this task
Operations Manager uses the default SNMP version to discover a cluster. Otherwise, you can specify the appropriate SNMP version in the Preferred SNMP Version option in the Network Credentials page.

Steps
1. Click Control Center > Home > Member Details > Physical Systems.
2. In the New storage system field in the lower part of the page, enter the host name or IP address of the cluster you want to add.
3. Click Add.

Result
The cluster is added and displayed in the Clusters, All report.
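Equivalently, you can add the cluster from the CLI with the dfm cluster add command described in "New and changed CLI commands"; the cluster-management address below is a placeholder.

   dfm cluster add 10.72.1.100   # address of the cluster management logical interface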

Limitations of cluster monitoring in Operations Manager

You cannot perform certain monitoring tasks related to a cluster because of the lack of support for certain APIs, SNMP objects, and interfaces in Data ONTAP 8.0 Cluster-Mode systems. The following features are not supported in Operations Manager:
• Management of the cluster administrator user profile and password
• Configuration management of clusters, controllers, and virtual servers
• Management of disks, aggregates, schedules, and jobs
• Management of quotas
• Management of volume SnapMirror relationships
• Monitoring of SAN and LUNs
• Creation of SRM autopaths
• SnapLock reports
• Receiving SNMP traps from a cluster
• Configuration of the high-availability checker script
• Threshold monitoring of Performance Advisor

Introduction to V-Series SAN-attached storage management

Operations Manager discovers storage arrays and storage array ports for a V-Series system through certain APIs. For discovery and monitoring of storage arrays and storage array ports, you must set the host login and password in the storage system. You can perform the following tasks relating to SAN-attached storage of a V-Series system by using Operations Manager:
• Monitor storage arrays and storage array ports that are connected to a V-Series system.
• Monitor the storage load of array LUNs, or the least used and most used array LUNs.
• Generate various reports on V-Series SAN-attached storage using back-end storage.

For more information about V-Series SAN-attached storage management reports, see the Operations Manager Help.

Next topics
Limitations of V-Series SAN-attached storage management in Operations Manager on page 49
Tasks performed from the Storage Controller Details page for a V-Series system on page 50
Viewing configuration details of storage arrays connected to a V-Series system on page 50

Limitations of V-Series SAN-attached storage management in Operations Manager

You cannot use Operations Manager to perform a high-level analysis of the average usage trend of a storage system's back-end storage, controllers, or adapters at a given point in time. Operations Manager does not support the following features:
• Listing, adding, or deleting storage array and storage array port objects from the command-line interface

• Events for storage arrays and storage array ports
• RBAC security for storage arrays and storage array ports
• A Details page for storage arrays and storage array ports

Tasks performed from the Storage Controller Details page for a V-Series system

You can view the list of storage arrays and the list of storage array ports connected to a V-Series system from the Storage Controller Details page. You can access this page by clicking the name of the V-Series system in the appropriate Storage System report. You can perform the following tasks from the Storage Controller Details page for a V-Series system:
• View the list of storage arrays connected to the V-Series system. You can view this list by clicking the number in the "Arrays connected to This Storage System" field.
• View the list of storage array ports connected to the V-Series system. You can view this list by clicking the number in the "Array Ports connected to This Storage System" field.

Viewing configuration details of storage arrays connected to a V-Series system

You can view the configuration details of storage arrays connected to a V-Series system in the Storage Array Configuration report in Operations Manager.

Before you begin
Operations Manager must discover the storage arrays for which you want to view the configuration details.

Steps
1. Click Control Center > Home > Member Details > Physical Systems.
2. Select the Storage Array Configuration report from the Report drop-down menu.

Result
The Storage Array Configuration page is displayed. In this page, you can view configuration details, such as the name of the storage array, the name of the V-Series system, the name of the switch, the adapter used, the array LUN count, and so on.

Role-based access control in DataFabric Manager

DataFabric Manager uses role-based access control (RBAC) for user login and role permissions.

Next topics
What role-based access control is on page 51
Configuring vFiler unit access control on page 52
Logging in to DataFabric Manager on page 52
List of predefined roles in DataFabric Manager on page 54
Active Directory user group accounts on page 55
Adding administrative users on page 55
How roles relate to administrators on page 55
What an RBAC resource is on page 60
How reports are viewed for administrators and roles on page 61
What a global and group access control is on page 62
Management of administrator access on page 62

What role-based access control is

DataFabric Manager uses role-based access control (RBAC) for user login and role permissions. RBAC allows administrators to manage groups of users by defining roles. If you need to restrict access to the database to specific administrators, you must set up administrator accounts for them. Additionally, if you want to restrict the information that these administrators can view and the operations they can perform, you must apply roles to the administrator accounts you create.

If you have not changed DataFabric Manager's default settings for administrative user access, you do not need to log in to view information by using DataFabric Manager. However, when you initiate an operation that requires specific privileges, DataFabric Manager prompts you to log in. For example, to create administrator accounts, you need to log in with Administrator account access.

Note: By default, you will not be able to view DataFabric Manager data unless your account has been granted the required view permission.

Configuring vFiler unit access control

An administrator who does not have any roles on a global level, but has enough roles on a group that contains only vFiler units, is considered a vFiler administrator. This procedure describes how to configure access control that allows an administrator to view and monitor vFiler units.

About this task
The following restrictions are applicable to vFiler unit administrators:
• If a vFiler unit has a volume assigned to it, the vFiler administrator cannot view details or reports for the aggregate in which the volume exists.
• If a vFiler unit has a qtree assigned to it, the vFiler administrator cannot view details or reports for the volume in which the qtree exists.
• The vFiler administrator does not have access to the host storage system information.

Note: The full name of a qtree contains a volume name (for example, 10.72.184.212:/hemzvol/hagar_root_backup_test) even though the vFiler unit does not contain the volume.

Steps
1. From the Roles page, create a role for the vFiler administrator and assign it the following database operations: Delete, Read, and Write.
2. Create a group that contains vFiler objects.
3. From the Edit Group Membership page, select vFiler units to add to the group.
4. From the Edit Administrator Settings page, assign the role to the vFiler administrator.

Logging in to DataFabric Manager

You can log in to DataFabric Manager by entering the administrator name and password on the Operations Manager interface.

Steps
1. From the Control Center, select Log In.
2. Type your administrator name and password.
3. Click Log In.

What default administrator accounts are

DataFabric Manager uses administrator accounts to manage access control and maintain security. When you install the DataFabric Manager software, default administrator accounts are created: the "Administrator" and "Everyone" accounts. Administrator accounts have predefined roles assigned to them.

Administrator account
The Administrator has super user privileges and can perform any operation in the DataFabric Manager database and add other administrators. The Administrator account is given the same name as the name of the administrator who installed the software. Therefore, if you install DataFabric Manager on a Linux workstation, the administrator account is called root.

Everyone account
Prior to DataFabric Manager 3.3, the Everyone account was assigned Read and SRM View access by default. If you upgrade to DataFabric Manager 3.3 and later, these legacy privileges are retained by the Everyone account and mapped to the GlobalRead and GlobalSRM roles. After installing DataFabric Manager, you must log in as the Administrator and set up the Everyone account to grant view permission on this account. This is optional.

Note: Changes made will not be seen in the audit log.

List of predefined roles in DataFabric Manager

The following lists the roles assigned to the default administrator accounts in DataFabric Manager.

Administrator account: Administrator
Roles:
• GlobalAlarm
• GlobalBackup
• GlobalConfigManagement
• GlobalDataProtection
• GlobalDataSet
• GlobalDelete
• GlobalEvent
• GlobalExecute
• GlobalFailover
• GlobalFullControl
• GlobalMirror
• GlobalPerfManagement
• GlobalProvisioning
• GlobalQuota
• GlobalRead
• GlobalReport
• GlobalResourceControl
• GlobalRestore
• GlobalSAN
• GlobalSDConfig
• GlobalSDDataProtection
• GlobalSDDataProtectionAndRestore
• GlobalSDFullControl
• GlobalSDSnapshot
• GlobalSDStorage
• GlobalSRM
• GlobalWrite

Administrator account: Everyone
Roles: No roles

Active Directory user group accounts

DataFabric Manager recognizes two types of users, namely Administrator and User, thereby allowing domain administrators the ability to define roles based on a company's organizational hierarchy. To set up administrator accounts as a user group, use the following naming convention: <AD domain>\group_dfmadmins. In this example, all administrators who belong to group_dfmadmins can log in to DataFabric Manager and inherit the roles specified for that group.

Note: A User, when added, must be present locally.

Adding administrative users

You can create and edit administrator accounts from Operations Manager.

Steps
1. Log in to the Administrator account.
2. In the Control Center, select Administrative Users from the Setup menu.
3. Type the name for the administrator or the domain name for the group of administrators.
4. Optionally, enter the e-mail address for the administrator or administrator group.
5. Optionally, enter the pager address, as an e-mail address or pager number, for the administrator or administrator group.
6. Click Add.

How roles relate to administrators

Role management allows the administrator who logs in with super-user access to restrict the use of certain DataFabric Manager functions to other administrators. The super-user can assign roles to administrators on an individual basis, by group, or globally (and for all objects in DataFabric Manager). An operation must be specified for every role. You can assign multiple operations if you want the administrator to have more control than a specific role provides. For example, if you want an administrator to perform both the backup and restore operations, you must assign both the Back Up and Restore roles to the administrator. You can list the description of an operation by using the dfm role operation list [ -x ] [ operation-name ] command.
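For example, the following invocation lists the detailed description of one operation. The operation name shown is only illustrative; run the command without arguments to list all available operations.

   dfm role operation list -x DFM.Database.Write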

DataFabric Manager provides the set of predefined global roles, described in the following table, that can be inherited by the user creating roles. You can list the description of an operation by using the dfm role operation list [ -x ] [ <operation-name> ] command.

Next topics
What predefined global roles are on page 56
What inheritance roles are on page 58
What capabilities are on page 58
Role precedence and inheritance on page 59
Creating roles on page 59
Modifying roles on page 59

Related concepts
What a global and group access control is on page 62

Related references
Guidelines for changing discovery options on page 45

What predefined global roles are
Administrators assigned with global roles can view information or configure settings for all groups in the DataFabric Manager database, including the Global group.

Default: None
GlobalAlarm: You can manage alarms: view, modify, create, or delete them.
GlobalBackup: You can create and manage backups, schedules, and retention policies, primary and secondary storage systems, and backup relationships.
GlobalConfigManagement: You can manage storage system configurations.
GlobalDataProtection: You can perform all the operations of GlobalBackup, GlobalRead, and GlobalDataSet.
GlobalDataSet: You can perform DataSet write and DataSet delete operations.
GlobalDelete: You can delete information in the DataFabric Manager database, including groups and members of a group, and monitored objects.
GlobalEvent: You can view and acknowledge events, in addition to creating and deleting alarms.

GlobalExecute: You can execute commands on the storage system.
GlobalFailover: You can manage disaster recovery for datasets.
GlobalFullControl: You can view and perform any operation on any object in the DataFabric Manager database and configure administrator accounts. You cannot apply this role to accounts with group access control.
GlobalMirror: You can create, destroy, and update replication or failover policies.
GlobalPerfManagement: You can manage views, event thresholds, and alarms, apart from viewing performance information in Performance Advisor.
GlobalProvisioning: You can provision primary dataset nodes and can attach resource pools to secondary or tertiary dataset nodes. You also have all the capabilities of the GlobalResourceControl, GlobalRead, and GlobalDataSet roles for dataset nodes that are configured with provisioning policies.
GlobalQuota: You can view user quota reports and events.
GlobalRead: You can view the DataFabric Manager database, backup configurations, events and alerts, and replication or failover policies.
GlobalReport: You can manage custom reports and report schedules.
GlobalResourceControl: You can add members to dataset nodes that are configured with provisioning policies.
GlobalRestore: You can perform restore operations from backups on secondary volumes.
GlobalSAN: You can create, expand, and destroy LUNs.
GlobalSDConfig: You can read, create, modify, and delete SnapDrive configurations.
GlobalSDDataProtection: You can manage backups and datasets with SnapDrive.
GlobalSDDataProtectionAndRestore: You can perform backup and restore operations with SnapDrive.
GlobalSDFullControl: You can perform operations specific to the GlobalSDConfig, GlobalSDSnapshot, and GlobalSDStorage roles.

GlobalSDSnapshot: You can list the snapshots and the objects inside them. You can create, modify, and delete snapshots. You can restore volumes, LUNs, and qtrees from snapshots. You can create clones of volumes, LUNs, and qtrees.
GlobalSDStorage: You can list, create, modify, and delete storage objects and their attributes.
GlobalSRM: You can view information collected by SRM path walks.
GlobalWrite: You can view or write to the DataFabric Manager database.

What inheritance roles are
Administrators assigned with group roles can view or configure settings for the group to which they belong. When you view roles for an administrator, the settings are those explicitly set for the administrator at the group level. Several other factors also affect the group role that is granted to an administrator:
• The capabilities granted to the administrator
• The administrator's membership in Active Directory (AD) user groups that have been added to the DataFabric Manager server database
Group roles are named similarly to the global roles that are defined in the previous table. For example, if administrators have the GlobalRead role, they implicitly have the Read role on all groups. Similarly, if administrators have the Read role on a parent group, they implicitly have the Read role on all the subgroups of that parent group.
Note: Roles assigned to the account "Everyone" prior to DataFabric Manager 3.3 are carried forward.
Note: Super users are assigned the GlobalFullControl role in Operations Manager. For Windows, super users belong to the administrators group. For Linux, the super user is the root user.

What capabilities are
When creating roles, you must assign capabilities, a combination of operations and resources, to the role. Resources can be groups of monitored objects, such as storage systems and hosts. You can view capabilities, or edit them by modifying the operations that are associated with the resource.
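In CLI terms, a capability is granted by naming an operation and a resource together in a dfm role add command. The following sketch uses only commands documented in this chapter; the role name StorageViewer is a placeholder:

$ dfm role operation list -x
$ dfm role create StorageViewer
$ dfm role add StorageViewer DFM.Database.Read Global

The first command lists the available operations and their descriptions; the last command pairs the DFM.Database.Read operation with the Global resource, which is what this guide calls a capability.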

Role precedence and inheritance
If an administrative user has both global and group roles on a group, the less restrictive (that is, more powerful) of the two roles applies. For example, if a user is assigned the GlobalRead role and the GlobalWrite role on a group, that user can view all groups. However, the user can change settings or run commands only on the storage systems of the specified group. Specifying roles for a parent group implicitly grants those roles on its subgroups. You should grant roles conservatively at higher levels of the group hierarchy and specify additional roles as needed at the lower levels of the hierarchy. Role inheritance simplifies the task of assigning roles to administrators by letting you use defined roles.

Creating roles
You can create roles from the Setup menu in Operations Manager.
Steps
1. Select Roles from the Setup menu.
2. Click Add Role.
3. Optionally, to copy capabilities from an existing role, select that role from the Inherit Capabilities list and click ">>" to move the role to the list at the right.
4. Click Add Capabilities and, from the Capabilities window, select a resource from the resource tree.
5. Select the operations that you want to allow for the resource and click OK.

Modifying roles
You can edit the roles that you created, from the Setup menu in Operations Manager.
Steps
1. Select Roles from the Setup menu.
2. Find the role in the list of roles and click "edit".
3. Optionally, modify the basic settings of the role, such as the name and description.
4. Modify role inheritance by doing one of the following:
• To disinherit a role, select the role from the list at the right and click "<<" to remove it.

• To inherit a role, select the role from the "Inherit Capabilities" list and click ">>" to move the role to the Inherit Capabilities list.
Note: This step is optional.
5. Click Update.

What an RBAC resource is
An RBAC resource is an object on which an operation can be performed. In the DataFabric Manager RBAC system, these resources include Aggregates, Clusters, Controllers, DataFabric Manager Groups (except configuration groups), Hosts, LUNs, Protection policies, Provisioning policies, vFiler templates, Virtual servers, and Volumes. A user with the Policy Write capability in the global scope can create schedules and throttles. A user with the Policy Write capability on a policy can modify data protection policies. Similarly, a user with the Policy Delete capability on a policy can delete that policy.
Note: On upgrading to DataFabric Manager 3.6 or later, a user has the following capabilities:
• A user with the Database Write capability in the global scope is assigned the Policy Write capability.
• A user with the Database Delete capability is assigned the Policy Delete capability.

Next topics
Granting restricted access to RBAC resources on page 60
Access check for application administrators on page 61

Granting restricted access to RBAC resources
You can grant restricted access to objects or resource groups in the DataFabric Manager server.
Steps
1. Create a user defined role.
Example
The following example shows you how to create a user role called EventRole using the CLI:
$ dfm role create EventRole
2. Add the following capabilities to the role created in Step 1:
• Read capability on the Global resource for events
• Write capability on the Global resource for events

Example
The following example shows you how to add the capabilities:
$ dfm role add EventRole DFM.Event.Read Global
$ dfm role add EventRole DFM.Event.Write Global
3. Assign the role created in Step 1 to the user Everyone, using the following command:
$ dfm user role add Everyone EventRole
Note: You can also use the Operations Manager GUI to perform Steps 1 through 3.
4. Ensure that the user Everyone does not have the capability DFM.Event.Write.
Note: A user with the capability DFM.Event.Write can delete all events, regardless of what they are.
5. Open Operations Manager, and read and acknowledge events without logging in.

Access check for application administrators
DataFabric Manager introduces a new capability requirement for performing an access check using RBAC. Application administrators can check the access permissions of any user, but only if they have the permission to do so. For example, if A wants to know the capabilities of B, A should have the capability to check B's capabilities. However, any user is allowed to check their own capabilities. The Core AccessCheck capability allows application administrators to check the capabilities of any arbitrary user. When a user configures the client application, the Core AccessCheck capability has to be assigned to a role. A client application user configured on the DataFabric Manager server with this role allows the client application to check the access of all users.
Note: After upgrading to DataFabric Manager 3.6 or later, a user with the Database Read capability in the global scope is assigned the Core AccessCheck capability.

How reports are viewed for administrators and roles
You can view reports for administrators and roles from the CLI. The following commands are used to generate reports for administrators and roles from the CLI:
• dfm report role-admins: Lists all administrators and the roles they are assigned, sorted by administrators.
• dfm report admin-roles: Lists all administrators and the roles they are assigned, sorted by role.
For information about how to use the CLI, see the DataFabric Manager man pages for the dfm report commands. The man pages specifically describe command organization and syntax.
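For example, you can run either report with a single command:

$ dfm report role-admins
$ dfm report admin-roles

Both commands write their output to the terminal, so the results can be redirected to a file when you audit role assignments.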

What a global and group access control is
Global access control authorizes an administrator to view and perform actions on any group in the DataFabric Manager database. Group access control authorizes the administrator to view and perform actions only on the objects of the groups you specify. However, the administrator cannot add objects to or remove objects from the groups. You can apply global or group access control to administrator accounts. You cannot directly create group access administrator accounts. You must first create a global administrator account and then grant access to specific groups. If you want an administrator to have access to specific groups only, create a global administrator account with no roles assigned.

Management of administrator access
You can manage administrator access on storage systems and vFiler units, based on the role or job function of a user, to define and control access to the resources. By managing administrator access on storage systems and vFiler units, you can complete the following tasks:
• Manage and control access on storage systems and vFiler units from DataFabric Manager.
• Monitor and manage user groups, local users, domain users, and roles on storage systems and vFiler units.
• Create and modify identical local users, roles, and user groups on more than one storage system or vFiler unit.
• Edit user groups, domain users, and roles on storage systems and vFiler units.
• Push user groups, local users, domain users, and roles from a storage system or vFiler unit to another storage system or vFiler unit.
• Modify passwords of local users on a storage system or vFiler unit.

Next topics
Prerequisites for managing administrator access on page 63
Limitations in managing administrator access on page 63
Controlled user access for cluster management on page 63
Summary of the global group on page 63
Who local users are on page 64
What domain users are on page 70
What Usergroups are on page 72
What roles are on page 75
What jobs display on page 78

Prerequisites for managing administrator access
There is a set of prerequisites that you must consider for managing administrator access. The following are the prerequisites for managing administrator access on storage systems and vFiler units:
• You must be using Data ONTAP 7.0 or later.
• In the case of local users, the minimum password age, maximum password age, and status fields are available in Data ONTAP 7.1 and later.
• Resetting passwords is available only for storage systems running Data ONTAP 7.0 and later.
• To list and view the details of roles, user groups, or users on a host, you must have Database Read capability on the host.
• To create or delete roles, user groups, or users on a host, you must have Core Control capability on the host.
• To modify roles, user groups, or users on a host, you must have Core Control and Database Read capabilities on the host.
• To push roles, user groups, or users from host A to host B, you must have Database Read capability on host A and Core Control capability on host B.

Limitations in managing administrator access
Roles, user groups, and users without capabilities are not monitored, except for the user group Backup Operators and the users belonging to this user group.

Controlled user access for cluster management
By using Operations Manager, you can create roles and assign capabilities to control user access to selected cluster objects. You can identify users who have the required capabilities to access selected objects within a cluster. Only these users are provided access to manage the cluster objects. When you provide a user with role capabilities on an object, the same capabilities are applicable to all the other objects contained in the parent object. For example, if you provide a user the Write option on a cluster, the same Write option is valid for the controllers, virtual servers, aggregates, and volumes contained in that cluster.

Summary of the global group
You can view the summary that is specific to the global group containing storage systems and vFiler units by selecting Host Users from the Management menu.

Next topics
Viewing a specific summary page on page 64
Viewing users on the host on page 64

Viewing a specific summary page
You can view the summary page specific to a storage system or vFiler unit.
Steps
1. Click Control Center > Home > Member Details > Physical Systems.
2. Click the desired storage system or controller link.
3. From the left pane, under Storage Controller Tools, click Host Users Summary.

Viewing users on the host
You can view the users on a host by using the Host Users report.
Steps
1. From any page, select Management > Host Users.
2. Select Host Users, All from the Report drop-down list.
The Host Users report displays information about the existing users on the host.

Who local users are
Local users are the users created on storage systems and vFiler units.
Next topics
Viewing local users on the host on page 64
Viewing local user settings on the host on page 65
Adding local users to the host on page 66
Editing local user settings on the host on page 66
Users with Execute capability on page 67

Viewing local users on the host
You can view the local users on a host by using the Host Local Users report.
Steps
1. From any page, select Management > Host Users.
2. Select Host Local Users, All from the Report drop-down list.
The Host Local Users report displays information about the existing local users on the host.
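If your DataFabric Manager release also exposes host user management from the CLI, a similar listing should be available there. The following is a sketch only; the subcommand and the host name filer1 are assumptions that you should verify against the dfm man pages for your release:

$ dfm host user list filer1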

Viewing local user settings on the host
You can view the settings of local users on the storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Click the view link corresponding to the local user.
The following details of the selected local user appear:
Host Name: Name of the storage system or vFiler unit
User Name: Name of the local user
Description: Description of the local user
User Full-name: Full name of the local user
Usergroups: User groups that the user belongs to
Roles: Roles assigned to the local user
Capabilities: Capabilities of the roles assigned to the local user as part of a user group
Minimum Password Age: Minimum number of days that a password must be used. The number of days must be less than or equal to the maximum password age.
Maximum Password Age: Maximum number of days (0 to 2^32 - 1) that a password can be used
Status: Displays the current status of the user account:
• Enabled: The user account is enabled.
• Disabled: The user account is disabled.
• Expired: The user account is expired. The user account expires if the user fails to change the password within the maximum password age.
Note: Data ONTAP provides an option to set the maximum number of retries for the password, except for the root login. When the user fails to enter the correct password even after the maximum retries, the user account is disabled. The user account is enabled again only after the administrator resets the password for the user. For more information about maximum retries, see the Data ONTAP System Administration Guide.

Related information
Data ONTAP System Administration Guide: http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Adding local users to the host
You can add a local user to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit on which the user is to be created
User Name: Name of the local user
Password: Password of the local user
Confirm Password: Confirm the password of the local user
User Full Name (optional): Full name of the local user
Description (optional): Description of the local user
Minimum Password Age (optional): Minimum number of days that a password must be used
Maximum Password Age (optional): Maximum number of days that a password can be used
Usergroup Membership: User groups that you want the user to be a member of
3. Select one or more user groups from the list.
4. Click Add Local User.

Editing local user settings on the host
You can edit local user settings on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Click the user link in the Edit column corresponding to the local user.
3. Edit the parameters:
User Full-name: Full name of the local user
Description: Description of the local user
Minimum Password Age (in days): Minimum number of days that a password must be used

Maximum Password Age (in days): Maximum number of days that a password can be used
Usergroup Membership: User groups that you want the user to be a member of
Note: You cannot edit Host Name and User Name in the Edit Local User section.
4. Select one or more user groups from the list.
5. Click Update.

Users with Execute capability
DataFabric Manager users with the Execute capability can reset the password of a local user on a storage system or vFiler unit by using the credentials that are stored in the database. Other users, who do not have the Execute capability, use the credentials that they provide to modify the password.

Next topics
Pushing passwords to a local user on page 67
Deleting local users from the host on page 68
Pushing local users to hosts on page 69
Monitoring changes in local user configuration on page 69
Editing passwords on page 69

Pushing passwords to a local user
You can push an identical password to a local user on multiple storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Local Users.
2. From the List of Existing Local Users section, click the password link in the Push column corresponding to the local user.
If the local user is on a storage system, the Storage System Passwords page containing the section Modify Password on Storage Systems is displayed.
Note: For more information about storage system passwords, see the Operations Manager Help.

If the local user is on a vFiler unit, the vFiler Passwords page containing the section Modify Password on vFilers is displayed.
Note: For more information about vFiler passwords, see the Operations Manager Help.
3. Specify the parameters:
User Name: Name of the local user
Old Password: Password of the local user
New Password: New password of the local user
Confirm New Password: Confirm the new password of the local user
Select groups and/or Storage systems: Select, from the respective lists, the storage systems on which the local user exists and the DataFabric Manager groups in which the local user exists
Apply to subgroups: Select the check box if the password change applies to the storage systems of the selected group and the subgroups of the selected group
4. Click Update.
Note: Pushing an identical password creates a job that is displayed in the Jobs tab of the Password Management and Host User Management windows.

Deleting local users from the host
You can delete a local user from a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. From the List of Existing Local Users section, select the local user that you want to delete.
3. Click Delete Selected.

Pushing local users to hosts
You can push a local user to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Select the DataFabric Manager group or storage system on which you want to push the local user.
3. Select OK in the Resources dialog box.
4. Click Push.
Note: Pushing local users to hosts creates a job that is displayed in the Jobs tab of the Host User Management window.

Monitoring changes in local user configuration
You can monitor changes in the local user configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host User Modified.

Editing passwords
You can edit the password of a local user on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Click the password link in the Edit column corresponding to the local user.
3. Enter the old password.
4. Enter the new password.
5. Confirm the new password.
6. Click Update.
Note: You cannot edit Host Name and User Name in the Edit Password page.

What domain users are
Domain users are the non-local users who belong to a Windows domain and are authenticated by the domain.
Next topics
Viewing domain users on the host on page 70
Adding domain users to the host on page 70
Viewing domain user settings on the host on page 71
Editing domain user settings on the host on page 71
Removing domain users from all the user groups on page 72
Pushing domain users to hosts on page 72
Monitoring changes in domain user configuration on page 72

Viewing domain users on the host
You can view the domain users on a host by using the Host Domain Users report.
Steps
1. From any page, select Management > Host Users.
2. Select Host Domain Users, All from the Report drop-down list.
The Host Domain Users report displays information about the existing domain users on the host.

Adding domain users to the host
You can add a domain user to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit, from the drop-down list
User Identifier (domainname\username or SID): Any one of the following: the domain user name, or the Security Identifier (SID) of the domain user
Usergroup Membership: User groups that you want the user to be a member of
3. Select one or more user groups from the list.

4. Click Add Domain User.

Viewing domain user settings on the host
You can view the domain user settings on the storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Click the view link corresponding to the domain user.
The following details of the selected domain user appear:
Host Name: Name of the storage system or vFiler unit
User Name: Name of the domain user
SID: Security Identifier of the domain user
Usergroups: User groups that the user belongs to
Roles: Roles assigned to the domain user
Capabilities: Capabilities of the roles assigned to the domain user as part of the user group

Editing domain user settings on the host
You can edit a domain user on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Click the edit link corresponding to the domain user.
3. Edit the Usergroup Membership parameter: the user groups that you want the user to be a member of.
Note: You cannot edit Host Name and User Name in the Edit Domain User section.
4. Click Update.

Removing domain users from all the user groups
You can remove a domain user from all the user groups.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Select the domain user that you want to remove.
3. Click Remove From All Usergroups.

Pushing domain users to hosts
You can push a domain user to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Click the push link corresponding to the domain user.
3. Select the DataFabric Manager group, storage system, or vFiler unit on which you want to push the domain user.
4. Select OK.
5. Click Push.

Monitoring changes in domain user configuration
You can monitor the changes in domain user configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host Domain User Modified.

What Usergroups are
Usergroups are groups to which the users belong.
Next topics
Viewing user groups on the host on page 73
Adding Usergroups to the host on page 73
Viewing Usergroup settings on the host on page 73

Editing Usergroup settings on the host on page 74
Deleting Usergroups from the host on page 74
Pushing Usergroups to hosts on page 75
Monitoring changes in Usergroup configuration on page 75

Viewing user groups on the host
You can view the user groups on a host by using the Host Usergroups report.
Steps
1. From any page, select Management > Host Users.
2. Select Host Usergroups, All from the Report drop-down list.
The Host Usergroups report displays information about the existing user groups on the host.

Adding Usergroups to the host
You can add a user group to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit, from the drop-down list
Usergroup Name: Name of the user group
Description: Description of the user group
Select Roles: Capabilities of roles
3. Select one or more roles.
4. Click Add Usergroup.

Viewing Usergroup settings on the host
You can view the user group settings on the storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Click the view link corresponding to the user group.
The following details of the selected user group appear:

Host Name: Name of the storage system or vFiler unit
Usergroup Name: Name of the user group
Description: Description of the user group
Roles: Roles assigned to the user group
Capabilities: Capabilities of the user group

Editing Usergroup settings on the host
You can edit user group settings on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Click the edit link corresponding to the user group that you want to edit.
3. Edit the parameters:
Usergroup Name: Name of the user group
Description: Description of the user group
Select Roles: Capabilities of roles
Note: You cannot edit Host Name in the Edit Usergroup section.
4. Select one or more roles.
5. Click Update.

Deleting Usergroups from the host
You can delete a user group from a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Select the user group that you want to delete.
3. Click Delete Selected.

Pushing Usergroups to hosts
You can push identical user groups to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Click the push link of the user group that you want to push to other storage systems or vFiler units.
3. Select the DataFabric Manager group, storage system, or vFiler unit on which you want to push the user group.
4. Select OK.
5. Click Push.

Monitoring changes in Usergroup configuration
You can monitor the changes in user group configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host Usergroup Modified.

What roles are
A role is a set of capabilities that can be assigned to a group. You can use a predefined role, or you can create or modify a role.
Next topics
Viewing roles on the host on page 76
Adding roles to the host on page 76
Viewing role settings on the host on page 76
Editing role settings on the host on page 77
Deleting roles from the host on page 77
Pushing roles to the hosts on page 78
Monitoring changes in role configuration on page 78

Viewing roles on the host
You can view the role settings on the storage systems or vFiler units by using the Host Roles report.
Steps
1. From any page, click Management > Host Users.
2. Select Host Roles, All from the Report drop-down list.

Adding roles to the host
You can add a role to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Roles.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit, from the drop-down list
Role Name: Name of the role
Description: Description of the role
Capabilities: Capabilities of the role
3. Click the Add Capabilities link.
4. Select one or more capabilities that you want to add, and click OK.
5. Click Add Role.

Viewing role settings on the host
You can view the roles on the storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Roles.
2. Click the view link corresponding to the host role.
The following details of the selected host role appear:
Host Name: Name of the storage system or vFiler unit

Role Name: Name of the role
Description: Description of the role
Capabilities: Capabilities of the role

Editing role settings on the host
You can edit a role on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Roles.
2. Click the edit link corresponding to the host role that you want to edit.
3. Edit the parameters:
Description: Description of the role
Capabilities: Capabilities of the role
Note: You cannot edit Host Name and Role Name in the Edit Role section.
4. Click the Edit link, and select one or more capabilities that you want to add.
5. Click OK.
6. Click Update.

Deleting roles from the host
You can delete a role from a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Roles.
2. Select the host role that you want to delete.
3. Click Delete Selected.

Pushing roles to the hosts
You can push identical roles to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Roles.
2. Click the push link of the host role that you want to push to other storage systems or vFiler units.
3. Select the DataFabric Manager group or storage system on which you want to push the role.
4. Select OK.
5. Click Push.

Monitoring changes in role configuration
You can monitor the changes in role configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host Role Modified.

What jobs display
Jobs display the status of the push jobs.
Next topics
Pushing jobs on page 78
Deleting push jobs on page 78

Pushing jobs
You can view the status of the push jobs.
Steps
1. From any page, select Management > Host Users > Jobs.

Deleting push jobs
You can delete a push job.
Steps
1. From any page, select Management > Host Users > Jobs.
2. Select the push job that you want to delete.
3. Click Delete.

Groups and objects
A group is a collection of DataFabric Manager objects. Storage system elements monitored by DataFabric Manager, such as storage systems, aggregates, file systems (volumes and qtrees), and logical unit numbers (LUNs), are referred to as objects. You can group objects based on characteristics such as the operating system of a storage system (Data ONTAP version), the storage systems at a location, or all file systems that belong to a specific project or group in your organization.

Following is a list of DataFabric Manager objects that can be added to a resource group:
• Host (can include storage systems, Host Agents, virtual servers, and vFiler units)
• Volume
• Qtree
• Configuration
• LUN Path
• Aggregate
• SRM Path
• Dataset
• Resource Pool
• Disk

Following is a set of considerations for creating groups:
• You can group similar or different objects in a group.
• An object can be a member of any number of groups.
• You can group a subset of group members to create a new group.
• You can create any number of groups.
• You cannot create a group of groups.
• You can copy a group or move a group in a group hierarchy.

Next topics
What group types are on page 80
What a Global group is on page 81
What hierarchical groups are on page 81
Creating groups on page 82
Creating groups from a report on page 82
What configuration resource groups are on page 83
Guidelines for managing groups on page 84
Guidelines for creating configuration resource groups on page 84

Guidelines for adding vFiler units to Appliance Resource group on page 84
Editing group membership on page 85
What group threshold settings are on page 85
What group reports are on page 86
What summary reports are on page 86
What subgroup reports are on page 86
Different cluster-related objects on page 87

What group types are
DataFabric Manager automatically determines the type of a group based on the objects it contains. If you place your cursor over an icon to the left of a group name, on the left side of the Operations Manager main window, you can quickly find out the type of objects that the group contains.
Next topics
What homogeneous groups are on page 80
What mixed-type groups are on page 81

What homogeneous groups are
You can group objects into sets of objects with common characteristics. They might, for example, have the same operating system or belong to a specific project or group in your organization. You can create the following types of groups:
• Appliance Resource group: contains storage systems, vFiler units, and host agents
• Aggregate Resource group: contains aggregates only
• File System Resource group: contains volumes and qtrees
• LUN Resource group: contains LUNs only
• Configuration Resource group: contains storage systems associated with one or more configuration files
• SRM path group: contains SRM paths only
• Dataset: the data that is stored in a collection of primary storage containers, including all the copies of the data in those containers
• Resource pool: a collection of storage objects from which other storage containers are allocated
Note: For more information about datasets and resource pools, see the Provisioning Manager and Protection Manager Guide to Common Workflows for Administrators.

Related information
Provisioning Manager and Protection Manager Guide to Common Workflows for Administrators: http://now.netapp.com/NOW/knowledge/docs/DFM_win/dfm_index.shtml

What mixed-type groups are
You can add objects of different types to the same group. For example, a mixed-type group can have a group of vFiler units and volumes. Grouping objects from different homogeneous groups also constitutes a mixed-type group. Configuration resource groups can contain only storage systems, vFiler units, and configurations. Once created, you cannot add any other object types to the configuration resource group in DataFabric Manager. If a group already contains objects other than hosts, you cannot add a configuration file to the group.

What a Global group is
By default, a group called Global exists in the DataFabric Manager database. All objects created in any subgroup belong to the Global group. You cannot delete or rename the Global group. When you delete an object from a Global group, DataFabric Manager stops monitoring and reporting data for that object. Data collection and reporting is not resumed until the object is added back ("undeleted") to the database.
Note: You can perform group management tasks only on groups that you create in DataFabric Manager; you cannot perform management tasks on the Global group.

What hierarchical groups are
In addition to creating groups of objects, you can create subgroups within groups to establish a hierarchy of groups. Hierarchical groups help you manage administrative privileges, because privileges granted to a parent group are implicitly granted to all its subgroups. Besides, the following are the benefits of having hierarchical groups:
• You can determine the capacity of the group and the chargeback options.
• You can keep a record of trending, that is, the data growth rate of the group.
• You can select arguments for reports to be generated.

Creating groups
You can create a new group from the Edit Groups page.
Before you begin
To create a group, you must be logged in as an administrator with a role having Database Write capability on the parent group. To create a group directly under the Global group, the administrator must have a role with Database Write capability on the Global group.
Steps
1. From the Control Center, click Edit Groups.
2. In the Group Name field, type the name of the group that you want to create. See "Naming conventions" for groups.
3. From the list of groups, select the parent group for the group you are creating. You might need to expand the list to display the parent group you want.
4. Click Add.
Result
The new group is created. The Current Groups list in the left-pane area is updated with the new group. You might need to expand the Current Groups list to display the new group.

Creating groups from a report
You can create a new group from a report in Operations Manager.
Steps
1. From the Control Center, click the Member Details tab.
2. Click the Aggregate, File Systems, or LUN tab.
3. To the left of the list of objects in the main window, select the check boxes for the objects that you want to add to the group.
4. At the bottom left of the main window, click Add to New Group.
5. In the Group Name field, type the name of the group that you want to create. See "Naming conventions" for groups.
6. From the list of groups, select the parent group for the group you are creating. You might need to expand the list to display the parent group you want.

7. Click Add.
Result
The new group is created. The Current Groups list in the left-pane area is updated with the new group. You might need to expand the Current Groups list to display the new group.

What configuration resource groups are
A configuration resource group is a group of storage systems that share a set of common configuration settings. A configuration resource group allows you to designate groups of managed storage systems that can be remotely configured to share the same configuration settings. With Operations Manager, you can create and manage configuration files that contain the configuration settings you want to apply to a storage system and vFiler unit, or to groups of storage systems and vFiler units. By using the storage system configuration management feature, you can pull configuration settings from one storage system or vFiler unit and push the same or a partial set of settings to other storage systems or groups of storage systems and vFiler units, ensuring that the storage system and vFiler unit configuration conforms to the configuration pushed to it from Operations Manager. For configuration management, the appropriate plug-ins must be associated; otherwise, storage systems cannot be configured remotely.

A configuration resource group must contain some number of storage systems and have one or more files containing the desired configuration settings. These configuration settings are listed in files called configuration files. Configuration files exist independently of groups and can be shared between groups. Use Operations Manager to create configuration files and to specify the configuration settings that you want to include in them.

Besides specifying configuration settings by associating individual configuration files with a group of storage systems, you can also specify another configuration resource group from which to acquire configuration settings. Such a group is known as a parent group. For example, a previously created configuration resource group might already have most, or all, of the settings you require.

When you create configuration resource groups, consider the following:
• Only storage systems running Data ONTAP 6.5.1 or later can be included in configuration resource groups.
• A storage system can belong to only one configuration resource group.
• A configuration resource group must have one or more configuration files associated with it.
• Storage systems running different operating system versions can be grouped in the same configuration resource group.
• You cannot run any reports for a configuration resource group.

Guidelines for managing groups
You should follow a set of guidelines when you create groups. Use the following guidelines:
• You can group similar or mixed-type objects in a group.
• An object can be a member of any number of groups.
• You can group a subset of group members to create a new group.
• You can create any number of groups.
• You cannot create a group of groups.
• You can copy a group or move a group in a group hierarchy.

Guidelines for creating configuration resource groups
You must use a set of guidelines when you create Configuration Resource groups:
• You can include storage systems and vFiler units with different model types and software versions.
• A storage system or vFiler unit can be a member of only one configuration resource group, but can still be a member of multiple groups for monitoring purposes.
• You cannot create a group of Configuration Resource groups.
• To apply settings to a Configuration Resource group, you must associate one or more configuration files with the group.
Note: Configuration resource groups are supported only for Data ONTAP 6.5.1 or later.

Guidelines for adding vFiler units to Appliance Resource group
You must consider a set of guidelines before adding vFiler units to a resource group:
• You can add vFiler units as members to an Appliance Resource group.
• If you add a hosting storage system that is configured with vFiler units to a group, the vFiler units are also added as indirect members. The hosting storage system and the storage resources (qtrees, volumes, and LUNs) assigned to the vFiler unit are also added as indirect members.
• When you remove a hosting storage system from a group, its vFiler units are also removed.
• When you remove a vFiler unit from a group, its related hosting storage system and storage resources are also removed.

• If you add a storage resource assigned to a vFiler unit to a group, the vFiler unit is also added as an indirect member. If you remove the storage resource from the group, the vFiler unit is also removed.
Note: Indirect members are considered for determining the group status.

Editing group membership
In Operations Manager, you can add members to a group.
Steps
1. Go to the Groups area on the left side of Operations Manager and expand the list as needed to display the group to which you want to add members.
2. Click the name of the group to which you want to add members.
3. From the Current Group menu at the lower left of Operations Manager, click Edit Membership.
4. Select the object from the Choose from All Available list and click ">>" to move the object to the list at the right.
Operations Manager adds the selection to the group and updates the membership list displayed on the right side of the Edit Group Membership area.

What group threshold settings are
Group thresholds determine at what point you want DataFabric Manager to generate events regarding capacity problems with object groups. You can create an alarm for a group to send notification to designated recipients whenever a storage event occurs.
The thresholds you can change depend on the type of objects in a group. For example, you can change the Volume Full Threshold and Volume Nearly Full Threshold only for a File System Resource group, and the Appliance CPU Too Busy Threshold only for an Appliance Resource group. For a list of thresholds you can change for an object type, see the chapter where that type of object is the main topic of discussion in this guide.
Note: When you apply threshold changes to a group, the new threshold values are associated with the objects in the group, not with the group itself. That is, if you add another object to the group after applying a threshold change, the threshold value of the new object is not changed, even if it differs from the values applied to the rest of the group. Additionally, if you apply threshold changes to an object that belongs to multiple groups, the threshold value is changed for this object across all groups.

For information about how to change the thresholds for a group of objects, see the Operations Manager Help.

What group reports are
Grouping objects enables you to view consolidated data reports. For example, by creating a group of storage systems and using the Summary tab of Operations Manager, you can view the total storage capacity used, or the events generated by all manufacturing storage systems.

What summary reports are
Summary reports are available for all groups, including the Global group:
• Status
• Group members
• Storage capacity used and available
• Events
• Storage chargeback information
• Monitored devices
• Physical space
• Storage system operating systems
• Storage system disks
• Capacity graphs
Note: You can view additional reports that focus on the objects in a group by clicking the name of the group and then clicking the appropriate Operations Manager tab.

What subgroup reports are
If you run a report on a group with subgroups, the data displayed includes data on the applicable objects in the subgroups. Operations Manager runs the report on group members of the applicable type (for example, qtrees for the Qtree Growth report), combines the results, and then eliminates the duplicates, if any. If you run a report on a mixed-type object group, you do not see data about other object types in the parent group or the subgroups. For example, if you display the Aggregate Capacity Graph report on a parent group containing aggregates, you see data about the aggregates in the parent group. You can also see data about the aggregates in its subgroups.

Different cluster-related objects
By using Operations Manager, you can include various cluster objects, such as controllers and virtual servers, in a group.

Virtual server: A virtual server represents a single file-system namespace. A virtual server has separate network access and provides the same flexibility and control as a dedicated node. Each virtual server has its own user domain and security domain. It can span multiple physical nodes. A virtual server is associated with one or more logical interfaces through which clients access the data on the server. Clients can access the virtual server from any node in the cluster, but only through the logical interfaces that are associated with the virtual server.

Namespace: Every virtual server has a namespace associated with it. All the volumes associated with a virtual server are accessed under the virtual server's namespace. A namespace provides a context for the interpretation of the junctions that link together a collection of volumes. A virtual server has a root volume that constitutes the top level of the namespace hierarchy; additional volumes are mounted to the root volume to extend the namespace.

Junction: A junction points from a directory in one volume to the root directory of another volume. Volume junctions are transparent to NFS and CIFS clients.

Logical interface: A logical interface (LIF) is essentially an IP address with associated characteristics, such as a home port, a list of ports to fail over to, a firewall policy, a routing group, and so on. Each logical interface is associated with a maximum of one virtual server to provide client access to it.

Cluster: A cluster refers to a group of connected nodes (storage systems) that share a global namespace and that you can manage as a single virtual server or multiple virtual servers, providing performance, reliability, and scalability benefits.

Storage controller: A storage controller refers to the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called controllers, storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.

Ports: In a Data ONTAP cluster, ports are classified into the following three types:
• Data ports: Data ports provide data access to NFS and CIFS clients. A port represents a physical Ethernet connection.
• Cluster ports: Cluster ports provide communication paths for cluster nodes.

• Management ports: Management ports provide access to the Data ONTAP management utility.

Interface group: An interface group refers to a single virtual network interface that is created by grouping together multiple physical interfaces.

Data LIF: A data LIF is a logical network interface mainly used for data transfers and operations. A data LIF is associated with a node or virtual server in a Data ONTAP cluster.

Node management LIF: A node management LIF is a logical network interface mainly used for node management and maintenance operations. A node management LIF is associated with a node and does not fail over to a different node.

Cluster management LIF: A cluster management LIF is a logical network interface used for cluster management operations. A cluster management LIF is associated with a cluster and can fail over to a different node.

Creating a group of cluster objects
By using Operations Manager, you can create a group of cluster objects for easier administration and access control. You can add objects such as clusters, controllers, aggregates, volumes, and virtual servers to a group.
Steps
1. Click Control Center > Home > Member Details > Physical Systems > Report.
2. Depending on the cluster objects you want to group, select the appropriate report from the Report drop-down list.
Example
The navigational path to create a group of virtual servers is Control Center > Home > Member Details > Virtual Systems > Report > Virtual Servers, All.
3. In the resulting report, select the cluster objects that you want to include in a group.
4. From the buttons at the bottom of the page, click Add To New Group.
5. In the Group Name field, enter a name for the group.
6. Select an appropriate parent for your new group.
7. Click Add.
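Group creation can also be scripted, assuming your DataFabric Manager release provides the dfm group create and dfm group add subcommands (verify against the dfm man pages for your release). The group and object names below are placeholders:

$ dfm group create ClusterGroup
$ dfm group add ClusterGroup cluster1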

Storage monitoring and reporting
Monitoring and reporting functions in DataFabric Manager depend on event generation. You must configure the settings in Operations Manager to customize monitoring and to specify how and when you want to receive event notifications.

Next topics
What monitoring is on page 89
Cluster monitoring with Operations Manager on page 91
Links to FilerView on page 93
Query intervals on page 94
What SNMP trap listener is on page 94
What events are on page 97
Alarm configurations on page 99
Working with user alerts on page 103
Introduction to DataFabric Manager reports on page 108
Data export in DataFabric Manager on page 124

What monitoring is
Monitoring involves several processes. First, DataFabric Manager discovers the storage systems supported on your network. DataFabric Manager then periodically monitors the data that it collects from the discovered storage systems, such as CPU usage, interface statistics, free disk space, qtree usage, and chassis environmental information. DataFabric Manager generates events when it discovers a storage system, when the status is abnormal, or when a predefined threshold is breached. If configured to do so, DataFabric Manager sends a notification to a recipient when an event triggers an alarm. In addition, Operations Manager allows you to generate summary and detailed reports. Depending on which tab you select, Operations Manager returns the appropriate graph or selection of reports (for example, reports about storage systems, volumes, qtree usage, and disks).
The following flow chart illustrates the DataFabric Manager monitoring process.

[Flow chart: the DataFabric Manager monitoring process]
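Because much of this data collection occurs over SNMP, a quick way to confirm that a storage system answers the kind of queries DataFabric Manager issues is to poll it with a standard SNMP tool. This check is illustrative and not part of the product; the host name and community string are placeholders:

$ snmpwalk -v 2c -c public filer1.example.com system

If the system group of the MIB is returned, basic SNMP monitoring should work; if the query times out, verify that SNMP is enabled on the storage system and that the community string matches.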

Cluster monitoring with Operations Manager
Starting from DataFabric Manager 4.0, Operations Manager monitors clusters running Data ONTAP 8.0 Cluster-Mode. Operations Manager can discover, monitor, and generate reports for systems running Data ONTAP 8.0 Cluster-Mode by using the appropriate SNMP version or XML APIs. You can gather information about the cluster resources, controller resources, and virtual server resources from the cluster management logical interface.

Next topics
What the cluster management logical interface is on page 91
Information available on the Cluster Details page on page 91
Viewing the utilization of resources on page 92

What the cluster management logical interface is
The cluster management logical interface is a virtual network interface that enables you to perform cluster management operations. This logical interface is associated with a cluster and can be failed over to a different node. The cluster management logical interface is associated with a cluster management server to provide a detailed view of the cluster.

Information available on the Cluster Details page
The Cluster Details page for a cluster provides information such as the cluster hierarchy, the status of the cluster, the number of logical interfaces and ports, and so on. You can access the Cluster Details page by clicking the cluster name in any of the cluster reports.
The Cluster Details page displays the following cluster-related information:
• Status of the cluster
• Serial number
• Uptime
• Primary IP address
• Number of controllers
• Number of virtual servers
• Number of ports
• Number of logical interfaces
• Contact and location
• Current events that have the cluster as the source
• Storage capacity
• Groups to which the cluster belongs

• List of the most recent polling sample date and the polling interval of all the events monitored for the cluster
• Graphs that display the following information:
  • Volume capacity used
  • Volume capacity used versus total capacity
  • Aggregate capacity used
  • Aggregate capacity used versus total capacity
  • Aggregate space used versus committed space
  • CPU usage (in percentage)
  • NFS Operations/sec
  • CIFS Operations/sec
  • NFS and CIFS Operations/sec
  • Network Traffic/sec
  • Logical Interface Traffic/sec

Tasks performed from the Cluster Details page

You can perform various cluster management tasks from the Cluster Details page:
• Browsing the cluster and its components. You can browse through the cluster and its components from the "Cluster and its components Hierarchy" section in the Cluster Details page. For example, you can expand the cluster, view the list of virtual servers, and then click a desired virtual server to view its details on the corresponding Virtual Server Details page.
• Exploring the physical view of the system. You can gather information about cluster components in the form of reports. You can also browse to the corresponding report of a particular component. For example, by clicking the number corresponding to the Controllers link, you can view the details of those controllers from the Controllers, All report.
• Viewing the total utilization of physical resources, including CPU usage, network traffic at the cluster level, and volume capacity used. You can also view the graphical representation of the corresponding resources.

Viewing the utilization of resources

You can view the graphical representation of the utilization of various physical and logical resources from the Details pages in Operations Manager.

Next topics
Viewing the utilization of logical resources on page 93

Viewing the utilization of physical resources on page 93

Viewing the utilization of logical resources

By using Operations Manager, you can view the graphical representation of the utilization of your logical resources, such as virtual servers, volumes, and logical interface traffic to the virtual server. You can also configure alarms to send notification whenever the utilization exceeds preset thresholds.

Steps
1. Click Control Center > Home > Member Details > Virtual Systems > Report > Virtual Servers, All.
2. Click the name of the virtual server for which you want to view the utilization of logical resources.
3. In the Virtual Server Details page, select the appropriate graph from the drop-down menu.
You can view the utilization graph on a daily, weekly, monthly, quarterly, or yearly basis. For example, to view the virtual server's volume capacity used for a period of one year, you can select Volume Capacity Used and click 1y.

Viewing the utilization of physical resources

By using Operations Manager, you can view the graphical representation of the utilization of your physical resources, such as CPU usage and network traffic to the controller.

Steps
1. Click Control Center > Home > Member Details > Physical Systems > Report > Controllers, All.
2. Click the name of the controller for which you want to view the utilization of physical resources.
3. In the Storage Controller Details page, select the appropriate graph from the drop-down menu.
You can view the utilization graph on a daily, weekly, monthly, quarterly, or yearly basis. For example, to view the controller's CPU usage for a period of one year, you can select CPU Usage (%) and click 1y.

Links to FilerView

In DataFabric Manager 2.3 and later, UI pages displaying information about some DataFabric Manager objects contain links, indicated by the FilerView icon, to FilerView, the Web-based UI for storage systems. When you click the icon, you are connected to the FilerView location where you can view information about, and make changes to, the object whose icon you clicked.

Depending on your setup, you might need to authenticate the storage system whose FilerView you are connecting to, using one of the administrator user accounts on the storage system.

Note: Links to FilerView are not available for systems running Data ONTAP 8.0 Cluster-Mode.

Query intervals

DataFabric Manager uses periodic SNMP queries to collect data from the storage systems it discovers. DataFabric Manager reports the data in the form of tabular and graphical reports and event generation.

The time interval at which an SNMP query is sent depends on the data being collected. For example, although DataFabric Manager pings each storage system every minute to ensure that the storage system is reachable, the amount of free space on the disks of a storage system is collected every 30 minutes.

Next topics
What global monitoring options are on page 94
Considerations before changing monitoring intervals on page 94

What global monitoring options are

The SNMP query time intervals are specified by the global monitoring options that are located in the Monitoring Options section of the Options page. All the monitoring option values apply to all storage systems in all groups. Although you should generally keep the default values, you might need to change some of the options to suit your environment.

Considerations before changing monitoring intervals

There are advantages and disadvantages to changing the monitoring intervals. If you decrease the monitoring intervals, you receive more real-time data; however, DataFabric Manager queries the storage systems more frequently, thereby increasing the network traffic and the load on the server on which DataFabric Manager is installed and on the storage systems responding to the queries. Similarly, if you increase the monitoring interval, the network traffic and the storage system load are reduced; however, the data reported might not reflect the current status or condition of a storage system.
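Although these intervals are normally changed from the Options page, they are also exposed through the dfm options command that this guide uses elsewhere. The following is a minimal sketch only: dfm options set is quoted from this guide, but the list subcommand and the option name diskFreeMonInterval are assumptions, so verify the exact option names that your installation reports.

# List the global options and their current values (assumed subcommand).
dfm options list
# Sketch only: "diskFreeMonInterval" is a hypothetical option name.
dfm options set diskFreeMonInterval=1hour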

Related concepts
What SNMP trap listener is on page 94

What SNMP trap listener is

In addition to periodically sending out SNMP queries, DataFabric Manager 3.1 and later includes an SNMP trap listener as part of the server service. Event generation and alerting is faster than with SNMP queries because the proper monitoring mechanism is started immediately after the SNMP trap is received, instead of waiting for the monitoring interval. In addition, monitoring is performed asynchronously.

The SNMP trap listener listens for SNMP traps from monitored storage systems, if they have been manually configured to send traps to the DataFabric Manager server (over UDP port 162).

Note: The SNMP trap listener can receive SNMP traps only from storage systems that are supported on DataFabric Manager. Traps from other sources are dropped.

Next topics
What SNMP trap events are on page 95
How SNMP trap reports are viewed on page 96
When SNMP traps cannot be received on page 96
SNMP trap listener configuration requirements on page 96
How SNMP trap listener is stopped on page 96
Configuration of SNMP trap global options on page 97
Information about the DataFabric Manager MIB on page 97

What SNMP trap events are

When the SNMP trap listener receives an SNMP trap, DataFabric Manager issues an Information event, but does not change the status of the host. Instead, the corresponding monitor associated with the trap generates the proper event and continues to monitor the host to report status changes.

The SNMP traps received by the SNMP trap listener are specified in the custom MIB. For a complete list of traps and associated trap IDs, see the Data ONTAP Network Management Guide.

The name associated with the SNMP trap Information event indicates the severity of the trap: for example, Error Trap. The trap severities are deduced from the last digit of the trap ID, as specified in the custom MIB. The following list describes the SNMP trap Information event types:
• Emergency Trap Received
• Alert Trap Received
• Critical Trap Received
• Error Trap Received
• Warning Trap Received
• Notification Trap Received
• Information Trap Received

If the severity of a trap is unknown, DataFabric Manager drops the trap.

Related concepts
Information about the DataFabric Manager MIB on page 97

Related information
Data ONTAP Network Management Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

How SNMP trap reports are viewed

You can use the Events tab to view reports about the SNMP traps that are received by DataFabric Manager. The Events tab enables you to view a listing of all current SNMP traps or to sort them by severity. Each view provides information about each SNMP trap, for example, the name of the trap, the severity, and the condition that led to the error.

When SNMP traps cannot be received

DataFabric Manager cannot receive SNMP traps if any of the following conditions exist:
• A system has not been configured to send traps to the DataFabric Manager server.
• The host is not a supported storage system.
• The DataFabric Manager version is earlier than 3.1.
Additionally, DataFabric Manager cannot receive Debug traps.

SNMP trap listener configuration requirements

A set of configuration requirements must be met to enable reception of SNMP traps from managed storage systems:
• On DataFabric Manager: No configuration is needed to start the SNMP trap listener on DataFabric Manager (the trap listener is automatically started after installation). The SNMP trap global options are also configured with default settings, although you might want to modify these settings.
• On managed storage systems: You must manually add the DataFabric Manager server as a trap destination on all supported systems to be monitored. The traps must be sent to the DataFabric Manager server over UDP port 162.

Note: If another trap listener is listening on port 162, the startup of the built-in trap listener fails with an error, and a Warning event is displayed on the Events page.

How SNMP trap listener is stopped

The SNMP trap listener is enabled by default. If you want to start or stop the SNMP trap listener, use the snmpTrapListenerEnabled CLI option.
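As a sketch, the two halves of this configuration look as follows on the command line. The snmp traphost command is the usual way to add a trap destination on a storage system running Data ONTAP 7-Mode, and snmpTrapListenerEnabled is the CLI option named above; the host name dfm-server and the yes/no values are assumptions to adapt to your environment.

# On each managed storage system: send traps to the DataFabric Manager
# server (traps go to UDP port 162).
snmp traphost add dfm-server
# On the DataFabric Manager server: stop or restart the built-in
# trap listener through its CLI option (values assumed).
dfm options set snmpTrapListenerEnabled=no
dfm options set snmpTrapListenerEnabled=yes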

Configuration of SNMP trap global options

You can configure the SNMP trap global options by accessing the SNMP Trap Listener options and the Event and Alert options on the Options page. Configuration of the SNMP trap global options is not necessary at start-up; however, you might want to modify the global default settings.

The following global default settings can be modified:
• Enable SNMP trap listener: Use this option to enable or disable the SNMP trap listener.
• SNMP Trap Listener Port: Use this option to specify the UDP port on which the SNMP Manager Trap Listener receives traps. Supported storage systems can send SNMP traps only over UDP port 162.
• SNMP Maximum Traps Received per Window and SNMP Trap Window Size: Use these two options to limit the number of SNMP traps that can be received by the trap listener within a specified period.

Information about the DataFabric Manager MIB

The SNMP traps generated for the DataFabric Manager events are specified in the DataFabric Manager MIB. The MIB at the following locations provides a complete list of DataFabric Manager SNMP traps and associated trap IDs:
• For Windows: installation_directory\dfm\misc
• For UNIX: installation_directory/misc

DataFabric Manager can send only the traps that are available in the MIB.

Note: DataFabric Manager can send information to an SNMP trap host only when an alarm for which the trap host is specified is generated. DataFabric Manager cannot serve as an SNMP agent; that is, you cannot query DataFabric Manager for information from an SNMP trap host.

What events are

Events are generated automatically when a predefined condition occurs or when an object crosses a threshold. All events are assigned a severity type and are automatically logged in the Events window. Event messages inform you when specific events occur. You can configure alarms to send notification automatically when specific events or severity types occur. If an application is not configured to trigger an alarm when an event is generated, you can find out about the event by checking the Events window.

It is important that you take immediate corrective action for events with severity level Error or worse. Ignoring such events can lead to poor performance and system unavailability.

Note: Event types are predetermined. Although you cannot add or delete event types, you can manage notification of events.

Next topics
Viewing events on page 98
Managing events on page 98
Operations on local configuration change events on page 99

Viewing events

You can view a list of all events that occurred and view detailed information about any event.

Note: User quota threshold events can be viewed only with the User Quota Events report, available through the Report drop-down list on the Events tab.

Step
1. View the events logged by Operations Manager in any of the following ways:
• Click the Events: Emergency, Critical, Error, Warning link located at the top of the Operations Manager main window.
• From the Control Center tab, click the Events tab located in the Group Summary page.
• From the Backup Manager tab or the Disaster Recovery Manager tab, click the Events tab.
• Select the Details pages for storage systems, SAN hosts, FC switches, HBA ports, and FCP targets. The Details pages provide lists of events related to the specific component.

Managing events

If DataFabric Manager is not configured to trigger an alarm when an event is generated, you are not notified about the event. However, you can check the events log on the server on which DataFabric Manager is installed.

Steps
1. From an Events view, select the check box for the event that you want to acknowledge, to identify the event. You can select multiple events.
2. Click Acknowledge Selected to acknowledge the event that caused the alarm.
3. Find out the cause of the event and take corrective action.
4. Delete the event.

fixing. You must configure alarms for the events. Nevertheless. the configuration settings listed are not modified during subsequent configuration pushes. or the Global group. . or a script that you write. • Alarms must be created by group. to avoid multiple responses to the same event. From this window. DataFabric Manager undoes all the local changes. create an alarm for the newly created group. you must first create a group with that object as the only member. you should configure DataFabric Manager to repeat notification until an event is acknowledged. or deleting the event. reject the local configuration changes made on the storage system. If you click Fix. DataFabric Manager does not automatically send alarms for the events. Alarm configurations DataFabric Manager uses alarms to tell you when events occur. you can accept. If you reject the configuration changes. a pager number. If you want to set an alarm for a specific object. Then. either an individual group. Not all events are severe enough to require alarms. you specify. If you accept the configuration changes.Storage monitoring and reporting | 99 Operations on local configuration change events After receiving the event. or. and how many recipients an alarm has. you have the choice of acknowledging. a new window that shows the differences between the local configuration and the group settings is displayed. an SNMP traphost. Next topics Configuration guidelines on page 99 Creating alarms on page 100 Testing alarms on page 101 Comments in alarm notifications on page 101 Example of alarm notification in e-mail format on page 101 Example of alarm notification in script format on page 102 Example of alarm notification in trap format on page 102 Response to alarms on page 102 Deleting alarms on page 102 Configuration guidelines When configuring alarms you must follow a set of guidelines. You are responsible for which events cause alarms. and not all alarms are important enough to require acknowledgment. whether the alarm repeats until it is acknowledged. DataFabric Manager sends the alarm notification to one or more specified recipients: an e-mail address.

Related concepts
What events are on page 97

Creating alarms

You can create alarms from the Alarms page in Operations Manager.

Steps
1. Click Control Center > Setup > Alarms.
2. From the Alarms page, select the group that you want Operations Manager to monitor. You might need to expand the list to display the one you want to select.
3. Specify what triggers the alarm: an event or the severity of an event.
4. Specify the recipient of the alarm notification. Formats include administrator names, e-mail addresses, pager addresses, or an IP address of the system that receives SNMP traps (or the port number to send the SNMP trap to).
5. Click Add to set the alarm.
Note: If you want to specify more than one recipient, or to configure repeat notification or other additional options, continue with Step 6 instead.
6. Click Advanced Version.
7. Optionally, if you want to specify a class of events that should trigger this alarm, specify the event class. You can use normal expressions.
8. Optionally, specify the recipients of the alarm notification.
9. Optionally, specify the period during which Operations Manager sends alarm notifications.
10. Optionally, select Yes to resend the alarm notification until the event is acknowledged, or No to notify the recipients only once.
11. Optionally, set the interval (in minutes) that Operations Manager waits before it tries to resend a notification.
12. Activate the alarm by selecting No in the Disable field.
13. Click Add.

Testing alarms

You can test the alarms from the Alarms page in Operations Manager.

Steps
1. From the Alarms page, click Test (to the right of the alarm that you want to test).
2. Click Test.

Result
DataFabric Manager generates an alarm and sends a notification to the recipient.

Comments in alarm notifications

By using DataFabric Manager, you can add details to alarm notifications, such as asset number, department code, location name, and support contact. When you define a custom comment field, DataFabric Manager sends this information in the alarm notification to help you respond to the alarm.

Custom alarm notifications are sent by e-mail message or SNMP traps; you can also access them by executing scripts. Custom alarm notifications cannot be sent to pagers.

Example of alarm notification in e-mail format

This example shows custom comments entered for the comment fields Asset Number, Department Code, Location Name, and Support Contact.

From: DataFabric.Manager.testserver@netapp.com [mailto:DataFabric.Manager.testserver@netapp.com]
Sent: Wednesday, July 20, 2005 11:51 AM
To: root
Subject: dfm: Normal event on administrator-lxp (Host Up)

A Normal event at 20 Jul 11:51 IST on Host hiyer-lxp:
The Host is up.

*** Event details follow.***

Comment Fields:
---------------
Asset Number: Ashes00112
Department code: CS

Location Name: Lords
Support Contact: Glenn McGrath

Example of alarm notification in script format

A new environment variable, prefixed DFM_FIELD_, is added for each defined comment field. Characters other than [a-z][A-Z][0-9] in the field name are replaced with "_" (underscore). The same example as the e-mail format is shown here in script format:

DFM_FIELD_Asset_Number="Ashes00112"
DFM_FIELD_Department_Code="CS"
DFM_FIELD_Location_Name="Lords"
DFM_FIELD_Support_Contact="Glenn McGrath"

Example of alarm notification in trap format

A new SNMP variable is added to all existing traps. The value of this variable is a string that contains the names and values of all the defined comment fields. If this string is not empty, it is appended to the trap. The format of this string is as follows:

'name1=value1','name2=value2',...,'name(n)=value(n)'

For example:

'Asset Number=Ashes00112','Department Code=CS','Location Name=Lords','Support Contact=Glenn McGrath'

Response to alarms

When you receive an alarm, you should acknowledge the event and resolve the condition that triggered the alarm. In addition, if the repeat notification feature is enabled and the alarm condition persists, you continue to receive notifications.

Deleting alarms

You can delete alarms from the Alarms page in Operations Manager.

Steps
1. From the Alarms page, select the alarm for deletion.
2. Click Delete Selected.
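Tying the script-format example above together: an alarm script receives the comment fields through DFM_FIELD_ environment variables, so it can act on them directly. The following minimal POSIX shell sketch assumes only the documented DFM_FIELD_ naming convention; the log file path is arbitrary.

#!/bin/sh
# Minimal alarm-script sketch: record every DFM_FIELD_ comment
# variable that DataFabric Manager set for this notification.
env | grep '^DFM_FIELD_' >> /var/log/dfm-alarms.log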

Working with user alerts

The DataFabric Manager server can send an alert to you whenever it detects a condition or problem in your systems that requires attention. You can configure the mail server so that DataFabric Manager can send alerts to specified recipients when an event occurs.

Next topics
What user alerts are on page 103
Differences between alarms and user alerts on page 103
User alerts configurations on page 104
E-mail addresses for alerts on page 104
Domains in user quota alerts on page 105
What the mailmap file is on page 105
Guidelines for editing the mailmap file on page 106
How the contents of the user alert are viewed on page 106
How the contents of the e-mail alert are changed on page 106
What the mailformat file is on page 106
Guidelines for editing the mailformat file on page 107

What user alerts are

By default, DataFabric Manager sends out user alerts (e-mail messages) to all users who exceed their quota limits. Whenever a user event related to disk or file quotas occurs, DataFabric Manager sends an alert to the user who caused the event. The alert is in the form of an e-mail message that includes information about the file system (volume or qtree) on which the user exceeded the threshold for a quota. You can disable the alerts for all users or for the users who have quotas.

Differences between alarms and user alerts

DataFabric Manager uses alarms to tell you when events occur; by default, it sends out user alerts (e-mail messages) to all users who exceed their quota limits. The two mechanisms differ as follows:

Alarms
• Alarms have to be configured for events before DataFabric Manager can send out notification to the specified recipients.

• Alarms can be sent to one or more of the following recipients: an e-mail address, a pager address, an SNMP traphost, or a script that you write.
• Alarms can be configured for any events with severity of Information or higher.
• Alarms can be sent only to users listed as administrators on the Administrators page of Operations Manager.

User alerts
• User alerts are generated by DataFabric Manager by default.
• User alerts can be sent only to the user who exceeds the user quota thresholds.
• User alerts can be sent only when the following user quota events occur:
  • User Disk Space Quota Almost Full
  • User Disk Space Quota Full
  • User Files Quota Almost Full
  • User Files Quota Full
• User alerts are sent to any user with user quota information in the DataFabric Manager database.
• User alerts can be only in the form of an e-mail message.

User alerts configurations

To receive user alerts, you must enable the User Quota Alerts option, configure e-mail addresses, and configure the e-mail domain. Optionally, you can configure the mailmap file.

You can enable or disable alerts to only the users who have quotas configured on a specific file system, or a group of file systems, by using the Enable User Quota Alerts option on the Edit Volume Settings or Edit Qtree Settings page of that volume or qtree. If you want to enable or disable alerts to all users in the DataFabric Manager database, use the Enable User Quota Alerts option in the Users section on the Options page.

E-mail addresses for alerts

The e-mail address that DataFabric Manager uses to send alerts depends on the DataFabric Manager configuration. There are three ways to specify an e-mail address:
• Use the Edit User Settings page (Quotas or File SRM > user name > Edit Settings)
• Use the dfm quota user command

• Use the mailmap file

Note: For more information about the dfm quota user command, see the DataFabric Manager man pages.

The following list specifies the checks that DataFabric Manager makes before selecting an e-mail address for the user:
• If you specify an e-mail address by one of the three ways listed above, DataFabric Manager uses it to send the alert.
• If you do not specify an e-mail address and a default e-mail domain is configured, DataFabric Manager appends the domain to the user name; the resulting e-mail address is used to send the alert.
• If a default e-mail domain is not configured, Operations Manager again uses the part of the user name that is unique to the user (without the domain information).

Note: DataFabric Manager uses only the part of the user name that is unique to the user. For example, if the user name is company/joe, Operations Manager uses joe as the user name.

Domains in user quota alerts

You can use the Default Email Domain for Quota Alerts option to specify the domain that Operations Manager appends to the user name when sending out a user quota alert. The Quota Alerts option is in the User Options section on the Options page (Setup menu > Options).

If you specify a value for this option, it applies to all users except the ones who are listed with the e-mail domain information in the mailmap file.

Note: If your SMTP server processes only e-mail addresses that contain the domain information, you must configure the domain in DataFabric Manager to ensure that e-mail messages are delivered to their intended recipients.

What the mailmap file is

The mailmap file is a simple text file that contains a mapping between the user names and the e-mail addresses of these users. If you need to specify the e-mail addresses of many users, using the Edit User Settings page for each user might not be convenient. Therefore, DataFabric Manager provides a mailmap file that enables you to specify many e-mail addresses in one operation. The file is imported into the DataFabric Manager database by using the dfm mailmap import command. After the file has been imported, the information in the database is used to find the e-mail addresses of the users.
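For example, after assembling the file (such as the sample shown next), you load it in one operation; the file name mailmap.txt is a placeholder, and the exact argument form of the import command is an assumption to verify against the DataFabric Manager man pages.

# Import user-name-to-e-mail mappings into the DataFabric Manager database.
dfm mailmap import mailmap.txt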

Example of mailmap file

# Start mail map
USER windows_domain\joe joe@company.com
USER jane@nis1.company.com jane@company.com
USER chris@nisdomain1 chris
# End mail map

Each entry has the following format:

USER user_name e-mail_address

USER: A case-sensitive keyword that must appear at the beginning of each entry.
user_name: The Windows or UNIX user name.
e-mail_address: The e-mail address to which the quota alert is sent when the user crosses a user quota threshold.

Guidelines for editing the mailmap file

You should follow a set of guidelines for editing the mailmap file:
• When specifying a user name for a UNIX user, you must specify the full NIS domain after the user name. For example, for UNIX user joe in NIS domain nisdomain1, specify joe@nisdomain1 as the user name. The specified NIS domain must match the one configured on the storage system. If no domain information is configured on the storage system, you must also leave the domain in the mailmap file empty.
• If the name contains spaces, enclose the name in either double or single quotes.
• Use one or more spaces or tabs to separate the fields in the file.
• If blank lines exist in the file, they are ignored.
• Use the "#" character at the beginning of a line that is a comment.

How the contents of the user alert are viewed

You can obtain the content of the current e-mail alert that DataFabric Manager sends by entering the following command at the DataFabric Manager console: dfm quota mailformat export { path | . }. Path is the location of the file containing the e-mail alerts.

How the contents of the e-mail alert are changed

You can change the contents of the current e-mail alert by modifying the alerts in the mailformat file. After you have changed the contents of the e-mail alert, you can import the modified file into the DataFabric Manager database by using the dfm quota mailformat import command. For more information about the dfm quota mailformat command, see the DataFabric Manager man pages.
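Putting the export and import commands together, a typical edit cycle looks like the following sketch. The export syntax is quoted from the text above; mymailformat.txt is a placeholder path, and the argument form of the import command is an assumption to verify against the man pages.

# Export the current alert text to a file (or to the console with ".").
dfm quota mailformat export mymailformat.txt
# Edit the file, then load the customized alert back into the database.
dfm quota mailformat import mymailformat.txt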

What the mailformat file is

The mailformat file is a simple text file that enables you to customize the contents of the e-mail alert that is sent to the users. This file must contain entries in the following format:

mail-headers
<empty line>
body

mail-headers: The SMTP headers to be sent in the DATA section of the SMTP message.
body: The body of the e-mail message.

Any words that begin with DFM_ are treated as DataFabric Manager variables and are replaced by their values. The following list describes the valid variables:

DFM_EVENT_NAME: Name of the quota event.
DFM_LINK_EVENT: Hyperlink to the event.
DFM_QUOTA_USER_NAME: Name of the user exceeding the quota threshold.
DFM_QUOTA_FILE_SYSTEM_NAME: Name of the file system (volume or qtree) that caused the quota event.
DFM_QUOTA_FILE_SYSTEM_TYPE: Type of file system (volume or qtree).
DFM_QUOTA_PERCENT_USED: Percentage of quota used.
DFM_QUOTA_USED: Amount of disk space or number of files used.
DFM_QUOTA_LIMIT: Total disk space or files quota.
DFM_QUOTA_TYPE: Type of quota (disk space or files), depending on whether the disk space or files quota threshold was exceeded.

Example of the mailformat file

From: IT Administrator
Subject: URGENT: Your DFM_QUOTA_TYPE quota on DFM_QUOTA_FILE_SYSTEM_NAME

You (as user "DFM_QUOTA_USER_NAME") have used up DFM_QUOTA_PERCENT_USED (DFM_QUOTA_USED out of DFM_QUOTA_LIMIT) of available DFM_QUOTA_TYPE quota on DFM_QUOTA_FILE_SYSTEM_NAME. Please delete files that you no longer need.

Event (For IT Use only): DFM_LINK_EVENT

-- IT Administrator
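For illustration, after variable substitution the example above would produce a message roughly like the following, assuming a hypothetical user jsmith who has used 95% (9.5 GB of a 10 GB limit) of a disk space quota on /vol/home; all names and numbers here are invented.

From: IT Administrator
Subject: URGENT: Your disk space quota on /vol/home

You (as user "jsmith") have used up 95% (9.5 GB out of 10 GB) of available disk space quota on /vol/home. Please delete files that you no longer need.

Event (For IT Use only): <hyperlink to the event>

-- IT Administrator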

Guidelines for editing the mailformat file

You should follow a set of guidelines for editing the mailformat file:
• Ensure that the mailformat file conforms to the SMTP protocol.
• Specify an empty line between the header and the body of the e-mail message.
• Use any of the headers recognized by the SMTP servers, such as "Content-Type: text/html" to send an e-mail message with an HTML-formatted body.

Introduction to DataFabric Manager reports

DataFabric Manager provides standard reports that you can view from the CLI or Operations Manager. You can run reports and create custom reports from the CLI. However, DataFabric Manager also provides reports in an easy-to-use Operations Manager interface, in which you can do the following:
• View a report.
• Create a report.
• Delete a custom report. You cannot delete a standard report.
• Use a custom report as a template to create another custom report.
• Save a report in Excel format.
• Print a report.

By using DataFabric Manager 3.6 or later, you can search all the reports from Reports menu > All. All the reports are divided under the following categories:
• Recently Viewed
• Favorites
• Custom Reports
• Logical Objects
• Physical Objects
• Monitoring
• Performance
• Backup
• Disaster Recovery
• Data Protection Transfer
• Miscellaneous

For more information about these categories, see the Operations Manager Help.

Note: The report category Performance contains the performance characteristics of objects. You can view the complete reports under their respective report categories.

Next topics
Introduction to report options on page 109
Introduction to report catalogs on page 109
Different reports in Operations Manager on page 110
What performance reports are on page 114

Configuring custom reports on page 114
Deleting custom reports on page 115
Putting data into spreadsheet format on page 116
What scheduling report generation is on page 116
Methods to schedule a report on page 118
What Schedules reports are on page 121
What Saved reports are on page 122

Introduction to report options

DataFabric Manager provides standard reports that you can view from the CLI or Operations Manager. You can set basic report properties from the CLI or Operations Manager. You can configure the report options in Operations Manager with respect to Name, Display tab, Catalogs, and Fields.

Following are the basic report properties:
• A short report name (for CLI output)
• A long report name (for Operations Manager output)
• Field description
• The fields to display
• The report catalog it was created from

Introduction to report catalogs

DataFabric Manager provides report catalogs that you use to customize reports. Every report that is generated by DataFabric Manager, including those you customize, is based on the catalogs.

The dfm report command specifies the report catalog object that you can modify to create a custom report. The command specifically describes how to list a report catalog and its fields, and the command organization and syntax. The custom report object has the following attributes that you can set:
• A short name (for CLI output)
• A long name (for GUI output)
• Field description
• The fields to display
• The report catalog it was created from

The custom report also has methods that let you create, delete, and view your custom reports. For more information about how to use the CLI to configure and run reports, use the dfm report help command.
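For example, the built-in help is the reliable way to discover the catalogs and fields that your release supports. The dfm report help command is quoted from the text above, while the list and view subcommands shown for orientation are assumptions to confirm against that help output.

# Show the dfm report command organization, syntax, and catalogs.
dfm report help
# Assumed companions: enumerate reports, then render one by its short name.
dfm report list
dfm report view <report-short-name>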

Different reports in Operations Manager

There are different kinds of reports in Operations Manager:

Aggregates
The aggregate report shows you space utilization, capacity, and availability by the volumes on your aggregates, along with performance characteristics. By default, you can view aggregate reports from Control Center > Home > Member Details > Aggregates > Report.

Aggregate Array LUNs
The aggregate array LUNs report shows you information about array LUNs contained on the aggregates of a V-Series system. Information such as model, vendor, serial number of the LUN, and size is available in these reports. By default, you can view aggregate array LUNs reports from Control Center > Home > Member Details > Aggregates > Report.

Array LUNs
The array LUNs report shows you information about the LUNs residing on the third-party storage arrays that are attached to a V-Series system. Information such as model, vendor, serial number of the LUN, and size is available in these reports. By default, you can view array LUNs reports from Control Center > Home > Member Details > Physical Systems > Report.

Backup
The backup report shows you information about the data transfer during a backup.

Clusters
The Clusters report shows information about the clusters, such as the status, serial number, and system ID of the cluster, and the number of virtual servers and controllers associated with the cluster. You can view the Clusters report from Control Center > Home > Member Details > Physical Systems > Report.

Controllers
The Controllers report shows information about the cluster to which the controller belongs, and the model and serial number of the controller. You can view the Controllers report from Control Center > Home > Member Details > Physical Systems > Report.

Dataset
The dataset report shows you information about the resource, protection, and conformance status of the dataset, and about the policy with which the dataset is associated. The dataset report also shows you information about the data transfer from individual mirror and backup relationships within a dataset.

Disks
The disks report shows you information about the disks in your storage systems, such as model, vendor, and size. You can view the performance characteristics and sort these reports by broken or spare disks. By default, you can view disks reports along with the controller reports in the Member Details tab.

Events
The events report shows you information about event severity. The information about all events, including deleted, unacknowledged, and acknowledged events, in the DataFabric Manager database is available in these reports. By default, you can view event reports in the Group Status tab.

FC Link
The FC link report shows you information about the logical and physical links of your FC switches and fabric interfaces. By default, you can view FC link reports along with the SAN reports in the Member Details tab.

FC Switch
The FC switch report shows you FC switches that have been deleted, have user comments associated with them, or are not operating. By default, you can view FC switch reports along with the SAN reports in the Member Details tab.

FCP Target
The FCP target report shows you information about the status, port state, and topology of the target. The FCP target report also shows the name of the FC switch, the port to which the target connects, and the HBA ports that the target can access. By default, you can view FCP target reports in the Control Center > Home > Member Details > LUNs tab.

File System
The file system report shows you information about all file systems, and you can filter them into reports by volumes, qtrees, space reservations, Snapshot copies, and chargeback information. The file system reports include storage chargeback reports that are grouped by usage and allocation. By default, you can view file system reports in the Control Center > Home > Member Details > File Systems tab.

Group Summary
The group summary report shows the status, the active state of the group, the storage space used, and the storage space available for your groups. By default, you can view group reports in the Group Status tab.

Interface Groups
The Interface Groups report shows information about all the cluster interface groups defined. You can view the Interface Groups report from Control Center > Home > Member Details > Physical Systems > Report.

Host Users
The host users report shows you information about the existing users on the host. By default, you can view host users reports from Management > Host Users > Report.

Host Local Users
The host local users report shows you information about the existing local users on the host. By default, you can view the host local users reports from Management > Host Local Users > Report.

Host Domain Users
The host domain users report shows you information about the existing domain users on the host. By default, you can view the host domain users reports from Management > Host Domain Users > Report.

Host Usergroups
The host usergroups report shows you information about the existing user groups on the host. By default, you can view the host usergroups reports from Management > Host Usergroups > Report.

Host Roles
The host roles report shows you information about the existing roles on the host and the type of each role. By default, you can view the host roles reports from Management > Host Roles > Report.

Logical Interfaces
The Logical Interfaces report shows information about the server, the status of the logical interface, the network address and mask, the current port that the logical interface uses, and whether the interface is at its home port. You can view the Logical Interfaces report from Control Center > Home > Member Details > Physical Systems > Report.

LUN
The LUN report shows you information and statistics about the LUNs and LUN initiator groups on the storage systems. By default, you can view LUN reports in the Member Details tab.

History, Performance Events
The History, Performance Events report displays all the Performance Advisor events. By default, you can view the History, Performance Events reports from Group Status > Events > Report.

Mirror
The Mirror report displays information about data transfer in a mirrored relationship, along with performance characteristics.

Performance Events
The Performance Events report displays all the current Performance Advisor events. By default, you can view the Performance Events reports from Control Center > Home > Group Status > Events > Report.

Ports
The Ports report shows information about the controllers that are connected, the status of the port, the type of role that the port portrays, and the sizes of data moving in and out of the port. You can view the Ports report from Control Center > Home > Member Details > Physical Systems > Report.

Quotas
The quota report shows you information about user quotas that you can use for chargeback reports. By default, you can view quota reports along with the group summary reports in the Group Status tab.

Report Outputs
The Report Outputs report shows you information about the report outputs that are generated by the report schedules. By default, you can view Report Outputs reports in Reports > Schedule > Saved Reports.

Report Schedules
The Report Schedules report shows you information about the existing report schedules. A report schedule is an association between a schedule and a report for the report generation to happen at that particular time. By default, you can view Report Schedules reports in Reports > Schedule > Report Schedules.

Resource Pools
The Resource Pools report shows you information about the storage capacity that is available and the capacity that is used by all the aggregates in the resource pool. This report also displays the time zone and the status of the resource pool.

SAN Host
The SAN host report shows you information about SAN hosts, including FCP traffic and LUN information, and the type and status of the SAN host. By default, you can view SAN host reports in the Member Details tab.

Schedules
The Schedules report shows you information about the existing schedules and the names of the report schedules that are using a particular schedule. The Schedules tab displays all the schedules. Schedules are separate entities that can be associated with reports. The Manage Schedules link in the Reports-Add a Schedule and Reports-Edit a Schedule pages points to this page. By default, you can view Schedules reports in Reports > Schedule > Schedules.

Scripts
The Scripts report shows you information about the script jobs and script schedules. By default, you can view the Scripts reports in the Member Details tab.

Spare Array LUNs
The Spare Array LUNs report shows you information about spare array LUNs of a V-Series system. Information such as model, vendor, serial number of the LUN, and size is available in these reports. By default, you can view the Spare Array LUNs reports from Control Center > Home > Member Details > Physical Systems > Report.

SRM
The SRM report shows you information about the SRM files, paths, directories, and host agents. The file space statistics reported by an SRM path walk differ from the "volume space used" statistics provided by the file system reports. By default, you can view SRM reports in the Group Status tab.

Storage Systems
The storage systems report shows you information about the capacity and operations of your storage systems, and the releases and protocols running on them. By default, you can view storage systems reports in the Control Center > Home > Member Details > Physical Systems tab.

User Quotas
The User Quotas report shows you information about the disk space usage and user quota thresholds collected from the monitored storage systems. By default, you can view User Quotas reports along with the SRM reports in the Group Status tab.

vFiler
The vFiler report shows you the status, available protocols, storage usage, and performance characteristics of vFiler units that you are monitoring with DataFabric Manager. By default, you can view the vFiler reports in the Member Details tab.

Virtual Servers
The Virtual Servers report shows information about the associated cluster, the root volume on which the virtual server resides, the name of the service switch, the NIS domain, and the status of the virtual server. You can view the Virtual Servers report from Control Center > Home > Member Details > Virtual Systems > Report.

Volume
The Volume report shows you all volumes with the following details, for the current month or for the past month:
• Name

• Capacity
• Available space
• Snapshot capacity
• Growth rates
• Expandability
• Chargeback by usage or allocation

The reports also show the performance characteristics of volumes. By default, you can view the volume reports along with the file system reports in the Member Details tab.

Note: The FC link and FC switch reports are available only when the SAN license for DataFabric Manager is installed. NetApp has announced the end of availability for the SAN license; however, to facilitate this transition, existing customers can continue to license the SAN option.

What performance reports are

Performance reports in Operations Manager provide information about an object's performance characteristics. The following points describe the important features of performance reports:
• You can view the object's performance characteristics for the periods last one day, one week, one month, three months, and one year.
• You can view the performance counters related to various fields in catalogs.
• You can view the average, minimum, maximum, or median value for the performance metrics over a period. Data Consolidation is a statistical technique that helps you to analyze data; data consolidation is available only if you select the Performance option.

Related concepts
Report fields and performance counters on page 320

Configuring custom reports

You can configure custom reports in Operations Manager.

Steps
1. Select Custom from the Reports menu.
2. Enter a (short) name for the report, as you want it to display in the CLI.
3. Optionally, enter a (long) name for the report, as you want it to display in Operations Manager.
4. Optionally, add comments to the report description.
5. Select the catalog from which the available report fields are based. You might need to expand the list to display the catalog you want to select.

6. Select the related catalog from which you want to choose fields. You can view two types of information fields in the "Choose From Available Fields" section:
• To view fields related to the usage and configuration metrics of the object, click Usage.
• To view fields related to performance metrics of the object, click Performance.
7. Select a field from "Choose From Available Fields."
8. Optionally, enter the name for the field, as you want it displayed on the report. Make your field name abbreviated and as clear as possible. You must be able to view a field name in the reports and determine which field the information relates to.
9. Optionally, specify the format of the field. If you choose not to format the field, the default format displayed is used.
10. If you clicked Performance, optionally select the required data consolidation method from the list; this field is set to Average by default.
11. Click Add to move the field to the Reported Fields list.
12. Optionally, click Move Up or Move Down to reorder the fields.
13. Repeat Steps 7 to 12 for each field you want to include in the report.
14. Select where you want DataFabric Manager to display this report in Operations Manager.
15. Click Create.
16. To view this report, find the report in the Report drop-down list, or locate the report in the list at the lower part of the page and click the display tab name.

Deleting custom reports

You can delete a custom report that you no longer need.

Steps
1. Select Custom from the Reports menu.
2. Find the report from the list of configured reports and select the report you want to delete.
3. Click Delete.

Putting data into spreadsheet format

You can put data from any of the LUNs, SAN hosts, and FCP targets reports into spreadsheet format.

Before you begin
Reports about LUNs, SAN hosts, and FCP targets must be available on the LUNs page of the Member Details tab.

Steps
1. Click Member Details on the LUNs page to view the reports.
2. View the data in the report in a spreadsheet format by clicking the spreadsheet icon on the right side of the Report drop-down list.

Result
You can use the data in the spreadsheet to create your own charts and graphs or to analyze the data statistically.

What scheduling report generation is

Operations Manager allows you to schedule the generation of reports. The report can include the following statistics:
• Volume capacity used
• CPU usage
• Storage system capacity
• Storage system up time

Next topics
What Report Archival Directory is on page 116
Additional capabilities for categories of reports on page 117
What Report Schedules reports are on page 117
Scheduling a report using the All submenu on page 117
Scheduling a report using the Schedule submenu on page 118

What Report Archival Directory is

Report Archival Directory is a repository where all the reports are archived. You can modify the location of the destination directory by using the following CLI command: dfm options set reportsArchiveDir=<destination dir>.
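For example (both paths below are placeholders; the UNC form reflects the Windows requirement described next):

# UNIX: archive generated reports under a local directory.
dfm options set reportsArchiveDir=/opt/dfm/report-archive
# Windows, when the directory is on the network: a UNC path is required.
dfm options set reportsArchiveDir=\\fileserver\dfm\reports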

When you modify the Report Archival Directory location, DataFabric Manager checks whether the directory is writable, so that it can archive the reports. In the case of a Windows operating system, if the directory exists on the network, the destination directory must be a UNC path. Besides, the scheduler service must run with an account that has write permissions on the directory, to save the reports. The server service must run with an account that has read and delete permissions on the directory, to view and delete report output. The permissions for a service can be configured by using the Windows Service Configuration Manager.

Note: You require the Database Write capability on the Global group to modify the Report Archival Directory option.

Additional capabilities for categories of reports

You require report-specific Read capability on the object, in addition to the Database Read capability, for categories of reports such as SRM Reports, Event Reports, and so on. The capabilities that you require for the categories of reports are as follows:

SRM Reports: SRM Read capability
Event Reports: Event Read capability
Mirror Reports: Mirror Read capability
Policy Reports: Policy Read capability
BackupManager Reports: BackupManager Read capability

What Report Schedules reports are

The Report Schedules report shows you information about the existing report schedules. A report schedule is an association between a report and a schedule for the report to be generated at a particular time. By default, Report Schedules reports display in Reports > Schedule > Report Schedules.

Scheduling a report using the All submenu

You can schedule a report using the All submenu from the Reports menu in Operations Manager.

Steps
1. From any page, click Reports > All to display the Report Categories page. By default, the Recently Viewed category appears.

2. Select a report of your choice.
3. Click Show to display the selected report.
4. Click the Schedule This Report icon, located in the upper right corner of the page.
5. In the Reports - Add a Schedule page, specify the report schedule parameters. For details about the report schedule parameters, see the Operations Manager Help.
6. Click Add.

Scheduling a report using the Schedule submenu

You can schedule a report using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Add New Report Schedule.
3. In the Reports - Add a Schedule page, specify the report schedule parameters. For details about the report schedule parameters, see the Operations Manager Help.
4. Click Add.

Methods to schedule a report

You can schedule a report in two possible ways from the Reports menu:
• Using the Schedule submenu from the Reports menu
• Using the All submenu from the Reports menu

Next topics
Editing a report schedule on page 119
Deleting a report schedule on page 119
Enabling a report schedule on page 119
Disabling a report schedule on page 119
Running a report schedule on page 120
Retrieving the list of enabled report schedules on page 120
Retrieving the list of disabled report schedules on page 120
Listing all the run results of a report schedule on page 120

Editing a report schedule

You can edit a report schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules. Alternatively, click Saved Reports to list all the report outputs, and then click the Report Schedules entry.
2. Click the report schedule that you want to edit.
3. In the Reports - Edit a Schedule page, edit the report schedule parameters.
4. Click Update.

Deleting a report schedule

You can delete a report schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to delete.
3. Click Delete Selected.

Enabling a report schedule

You can enable a report schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to enable.
3. Click Enable Selected.

Disabling a report schedule

You can disable a report schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.

2. Select the report schedule that you want to disable.
3. Click Disable Selected.

Running a report schedule

You can run a report schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to run.
3. Click Run Selected.

Retrieving the list of enabled report schedules

You can retrieve the list of enabled report schedules using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the Report Schedules, Enabled entry from the Report drop-down list.

Retrieving the list of disabled report schedules

You can retrieve the list of disabled report schedules using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the Report Schedules, Disabled entry from the Report drop-down list.

Listing all the run results of a report schedule

You can list all the run results of a report schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.

2. Click the Last Result Value of a report schedule to display the run result for that particular report schedule.

What Schedules reports are

The Schedules report shows you information about the existing schedules and the names of the report schedules that are using a particular schedule. Schedules are separate entities that can be associated with reports. By default, Schedules reports display in Reports > Schedule > Schedules.

Next topics
Listing all the schedules on page 121
Adding a new schedule on page 121
Editing a schedule on page 122
Deleting a schedule on page 122

Listing all the schedules

You can list all the schedules using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click the Schedules tab. The Schedules tab displays all the schedules.

Adding a new schedule

You can add a new schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Schedules.
3. Click Add New Schedule.
4. In the Schedules - Add a Schedule page, specify the schedule parameters. For details about the schedule parameters, see the Operations Manager Help.
5. Click Add.

Editing a schedule
You can edit a schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Schedules.
3. Click the schedule you want to edit.
4. In the Schedules - Edit a Schedule page, edit the schedule parameters.
5. Click Update.

Deleting a schedule
You can delete a schedule using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Schedules.
3. Select the schedule that you want to delete.
4. Click Delete Selected.

Note: If the schedule is used by a report schedule, then the schedule cannot be selected for deletion.

What Saved reports are
The Saved reports display information about report outputs such as Status, Run Time, and the corresponding report schedule, which generated the report output. By default, Saved reports display in Reports > Schedule > Saved Reports. The Saved Reports tab displays the list of all the report outputs that are generated by the report schedules.

Next topics
Listing the report outputs on page 123
Listing the successful report outputs on page 123
Listing the failed report outputs on page 123
Viewing the output of report outputs from the status column on page 123
Viewing the output of report outputs from the Output ID column on page 124
Viewing the output details of a particular report output on page 124

Listing the report outputs
You can list the report outputs that are generated by all the report schedules using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports to display the list of report outputs.

Listing the successful report outputs
You can list the successful report outputs generated by all the report schedules using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports.
3. Select the Report Outputs, Successful entry from the Report drop-down list.

Listing the failed report outputs
You can list the failed report outputs that are generated by all the report schedules using the Schedule submenu from the Reports menu.

Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports.
3. Select the Report Outputs, Failed entry from the Report drop-down list.

Viewing the output of report outputs from the status column
There are two possible methods to view the output of a particular Report Output that is generated by a report schedule. You can also view the output of a Report Output from the Output ID column in Operations Manager. Following are the steps to view the output of a Report Output:

Steps
1. From any page, select Schedule from the Reports menu.
2. Click Saved Reports.
3. Click the link under the Status column corresponding to the report output.

Viewing the output of report outputs from the Output ID column
There are two possible methods to view the output of a particular Report Output, which is generated by a report schedule in Operations Manager.

Steps
1. From any page, select Schedule from the Reports menu.
2. Click Saved Reports.
3. Click the Output ID column entry of the Report Output.
4. Click the Output link to view the output.

Viewing the output details of a particular report output
You can view the output details of a particular report output, which is generated by a report schedule.

Steps
1. From any page, select Schedule from the Reports menu.
2. Click Saved Reports.
3. Click the Output ID entry of the report output.

Data export in DataFabric Manager
By using third-party tools, you can create customized reports from the data you export from DataFabric Manager and Performance Advisor. Operations Manager reports are detailed reports of storage system configuration and utilization. Many administrators create customized reports to accomplish the following tasks:
• Forecasting future capacity and bandwidth utilization requirements
• Presenting capacity and bandwidth utilization statistics
• Generating performance graphs
• Presenting monthly Service Level Agreement (SLA) reports

Data export provides the following benefits:
• Saves effort in collecting up-to-date report data from different sources
• Provides database access to the historical data collected by the DataFabric Manager server
• Provides database access to the information provided by the custom report catalogs in the DataFabric Manager server
• Provides and validates the following interfaces to the exposed DataFabric Manager views:
  • Open Database Connectivity (ODBC)
  • Java Database Connectivity (JDBC)
• Enables you to export the Performance Advisor and DataFabric Manager data to text files, easing the loading of data to a user-specific database
• Allows you to schedule the export
• Allows you to customize the rate at which the performance counter data is exported
• Allows you to specify the list of the counters to be exported
• Allows you to consolidate the sample values of the data export

Next topics
How to access the DataFabric Manager data on page 125
Where to find the database schema for the views on page 126
Two types of data for export on page 126
Files and formats for storing exported data on page 127
Format for exported DataFabric Manager data on page 127
Format for exported Performance Advisor data on page 127
Format for last updated timestamp on page 128

How to access the DataFabric Manager data
You can access the DataFabric Manager data through views, which are dynamic virtual tables collated from data in the database. These views are defined and exposed within the embedded database of DataFabric Manager. By default, access to the DataFabric Manager views is not provided. To gain access to the views that are defined within the embedded database of DataFabric Manager, you need to first create a database user and then enable database access to this user. Note that a database user is a user created and authenticated by the database server. Database users are not related to DataFabric Manager users. Before you can create and give access to a database user, you must have the CoreControl capability. The CoreControl capability allows you to perform the following operations:
• Creating a database user
• Deleting a database user
• Enabling database access to a database user
• Disabling database access to a database user
• Changing the password for the database user

All these operations can be performed only through the CLI. For more information about the CLI commands, see the DataFabric Manager manual (man) pages.
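As a rough illustration of that CLI workflow — the dfm database user subcommand names and flags shown here are assumptions rather than confirmed syntax, so verify them against the DataFabric Manager man pages — creating and enabling a database user might look like this:

# Create a database user for read-only access to the views
# (subcommand names, flags, and the user name are assumptions)
dfm database user create -u dfmreport -p MyD8password
dfm database user enable dfmreport

# Revoke view access later, if needed
dfm database user disable dfmreport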

Disabling the database access denies the read permission on the DataFabric Manager views for the user account. By using a third-party reporting tool, you can connect to the DataFabric Manager database for accessing views. Following are the connection parameters:
• Database name: monitordb
• User name: <database user name>
• Password: <database user password>
• Port: 2638
• dobroad: none
• Links: tcpip

Note: The .jar files required for iAnywhere and jConnect JDBC drivers are copied as part of the DataFabric Manager installation. A new folder dbconn, under misc in the install directory, is created to hold the new jar files.

Where to find the database schema for the views
You can find the schema for the database views in the Operations Manager Help. Based on the database schema presented, you can choose the objects and the metrics you want to present.

Two types of data for export
Two types of data that you can export are DataFabric Manager data and Performance Advisor data.
• DataFabric Manager data: DataFabric Manager data export is controlled at the global level through the dfmDataExportEnabled option. By default, the value of the dfmDataExportEnabled global option is No.
• Performance Advisor data: Performance Advisor data export is controlled at the global level and at the host level through the perfDataExportEnabled option. By default, the value of the perfDataExportEnabled option is No. The sampling rate for the counter data export is customizable at the global level. By default, one sample is exported at every 15 minutes. You can consolidate the sample values of the data export only if the sampling interval is greater than the interval with which the counter data is collected. By default, the average method is used to consolidate the sample values. By default, in the first Performance Advisor data export run, the counter data for the last seven days is exported. For more information about the samples and counters, see the Performance Advisor Administration Guide.
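For example, enabling both exports and pointing a JDBC client at the embedded database might look like the following sketch. The option names are the ones documented above; the server name and the jConnect-style URL format are assumptions to adapt to your environment and driver:

# Enable DataFabric Manager and Performance Advisor data export
dfm option set dfmDataExportEnabled=Yes
dfm option set perfDataExportEnabled=Yes

# A jConnect-style JDBC URL built from the connection parameters above
# (URL format is an assumption; adjust for your JDBC driver)
# jdbc:sybase:Tds:dfm-server.example.com:2638/monitordb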

You can export the data, either on-demand or on-schedule, to text files using the CLI. To schedule the data export, you must have the CoreControl capability. For more information about the CLI commands, see the DataFabric Manager manual (man) pages.

Note: Database views are created within the DataFabric Manager embedded database. This might increase the load on the database server if there are many accesses from the third-party tools to the exposed views.

Related information
Performance Advisor Administration Guide - http://now.netapp.com/NOW/knowledge/docs/DFM_win/dfm_index.shtml

Files and formats for storing exported data
The exported DataFabric Manager and Performance Advisor data is stored in the export_<timestamp> directory located under the top-level directory specified by the dataExportDir global option. By default, the value of the dataExportDir global option is <DFM-install-dir>/dataExport.

Format for exported DataFabric Manager data
The exported DataFabric Manager data is stored in files that are named after the views. For example, you can store all the iGroup information in the iGroupView file, in the following format:

File name: iGroupView
Contents: <iGroupId> <hostId> <type> <OSType> ...

The fields in each line of the file correspond to the columns in iGroupView.

Format for exported Performance Advisor data
The exported Performance Advisor data is stored in different files such as perfHosts, perfCounters, and perfObjInstances.
• perfHosts: This file contains information about the storage systems from which the counter data is collected.
• perfCounters: This file contains information about the various Performance Advisor counters.
• perfObjInstances: This file contains information about the performance object instances on storage systems for which the counter data is collected.
• samples_<objType>_<hostId>: This file contains the sample values that are collected at various timestamps for different counters and object instances.
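As an illustration, after an export run the export directory might contain files like these; the timestamp, host ID, and object type shown are hypothetical:

$ ls <DFM-install-dir>/dataExport/export_20100215_1200
iGroupView  perfHosts  perfCounters  perfObjInstances  samples_volume_101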

The format of these files is as follows:

File name: perfHosts
Contents: host-id host-name ...

File name: perfCounters
Contents: counter-id counter-name description obj-type counter-unit ...

File name: perfObjInstances
Contents: instance-id instance-name host-id obj-type obj-id ...

File name: samples_<objType>_<hostId>
Contents: instance-id counter-id sample-time sample-value ...

Format for last updated timestamp
The last updated timestamp for both DataFabric Manager and Performance Advisor data export is stored in a configuration file named export.conf under the dataExport directory. The entries in the export.conf file are in the following format:

Database Format Version: 1.0
Export Type: [Scheduled | On-demand]
Export Status: [Success | Failed | Canceled | Running]
Delimiter: [tab | comma]
Sampling Interval: <secs>
Consolidation Method: [average | min | max | last]
History: <secs>
DataFabric Manager Data Export Completion Timestamp: <timestamp>
Last PA data Export for following hosts at time <timestamp>
----- <host-name> -----
----- <host-name> -----

Security configurations
You can configure Secure Sockets Layer (SSL) in DataFabric Manager to monitor and manage storage systems over a secure connection by using Operations Manager.

Next topics
Types of certificates in DataFabric Manager on page 129
Secure communications with DataFabric Manager on page 132
Managed host options on page 133
Changing password for storage systems in DataFabric Manager on page 136
Changing passwords on multiple storage systems on page 136
Issue with modification of passwords for storage systems on page 137
Authentication control in DataFabric Manager on page 137

Types of certificates in DataFabric Manager
DataFabric Manager uses the signed certificates for secure communication. Signed certificates provide your browser with a way to verify the identity of the storage system. DataFabric Manager uses the following types of signed certificates:
• Self-signed certificates
• Trusted CA-signed certificates

You can create self-signed certificates and add trusted CA-signed certificates with DataFabric Manager. You can set up DataFabric Manager as a Certificate Authority (CA) and generate self-signed certificates.

Next topics
Self-signed certificates in DataFabric Manager on page 129
Trusted CA-signed certificates in DataFabric Manager on page 130
Creating self-signed certificates in DataFabric Manager on page 130
Obtaining a trusted CA-signed certificate on page 131
Enabling HTTPS on page 131

Self-signed certificates in DataFabric Manager
You can generate self-signed certificates by using DataFabric Manager.

By issuing self-signed certificates, you avoid the expense and delay of obtaining a certificate from an external trusted CA. Self-signed certificates are not signed by a mutually trusted authority for secure Web services. When the DataFabric Manager server sends a self-signed certificate to a client browser, the browser has no way of verifying the identity of DataFabric Manager. As a result, the browser displays a warning asking you to create an exception. After the browser accepts the certificate, the browser allows the user to permanently import the certificate into the browser. If you decide to issue self-signed certificates, you must safeguard access to, and communications with, DataFabric Manager and the file system that contains its SSL-related private files.

Trusted CA-signed certificates in DataFabric Manager
You can generate trusted CA-signed certificates using DataFabric Manager. You obtain a trusted CA-signed certificate by generating a Certificate Signing Request (CSR) in DataFabric Manager, and then submitting that request to a trusted authority for secure Web services. DataFabric Manager accepts certificates from Thawte, Verisign, and RSA. When DataFabric Manager sends a trusted CA-signed certificate to the client browser, the browser verifies the identity of the server.

Creating self-signed certificates in DataFabric Manager
You can generate a self-signed certificate from the command-line interface (CLI) of DataFabric Manager.

Steps
1. Log in to the DataFabric Manager server as a DataFabric Manager administrator.
2. In the CLI, enter the following command: dfm ssl server setup
3. Enter the following information when prompted:
• Key Size
• Certificate Duration
• Country Name
• State or Province
• Locality Name
• Organization Name
• Organizational Unit Name
• Common Name
• Mail Address
DataFabric Manager is initialized with a self-signed certificate and puts the private key in the /conf/server.key file under the DataFabric Manager directory.
4. Install the certificate in the browser.

Obtaining a trusted CA-signed certificate
You can obtain a certificate from a trusted CA by running commands at the DataFabric Manager command-line interface (CLI).

Steps
1. Enter the following command: dfm ssl server req -o filename
DataFabric Manager creates a CSR file.
2. Submit the CSR to a CA for signing.
3. Import the signed certificate by entering the following command: dfm ssl import cert_filename

Enabling HTTPS
You can use the httpsEnabled option from the DataFabric Manager CLI for the DataFabric Manager server to provide HTTPS services.

Before you begin
You must set up the SSL server using the dfm ssl server setup command.

Steps
1. Enter the following command: dfm option set httpsEnabled=Yes
2. Optionally, change the HTTPS port by entering the following command: dfm option set httpsPort=port_number
The default HTTPS port is 8443.
3. Stop the Web server by using the following command: dfm service stop http
4. Start the Web server by using the following command: dfm service start http
This restarts the service using the certificate.
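Taken together, a complete CLI session that sets up SSL, imports a CA-signed certificate, and enables HTTPS might look like this sketch; the CSR and certificate file names are placeholders:

# Initialize SSL and generate a self-signed certificate
dfm ssl server setup

# Generate a CSR, have it signed by a trusted CA, and import the result
dfm ssl server req -o dfm.csr
dfm ssl import signed_cert.pem

# Enable HTTPS on the default port and restart the Web server
dfm option set httpsEnabled=Yes
dfm option set httpsPort=8443
dfm service stop http
dfm service start http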

Secure communications with DataFabric Manager
Secure communications require a secure connection at both ends of each communications link. In DataFabric Manager, the two ends of a communication link consist of a secure server and a secure managed host.

Next topics
How clients communicate with DataFabric Manager on page 132
SecureAdmin for secure connection with DataFabric Manager clients on page 132
Requirements for security options in Operations Manager on page 132
Guidelines to configure security options in Operations Manager on page 133

How clients communicate with DataFabric Manager
DataFabric Manager and the clients use a set of protocols to communicate with each other. The system on which DataFabric Manager is installed and clients use the following combination of protocols running over SSL:
• Browsers use HTTPS to connect to a secure DataFabric Manager server.
• DataFabric Manager connects to managed hosts using Secure Shell (SSH) for operations and to managed storage systems using HTTPS for monitoring purposes.

SecureAdmin for secure connection with DataFabric Manager clients
To enable a secure connection, you must have SecureAdmin installed on your storage systems. SecureAdmin is an add-on software module that enables authenticated, command-based administrative sessions between an administrative user and storage systems over an intranet or the Internet. For more information about SecureAdmin, see the SecureAdmin Administrator's Guide at http://now.netapp.com/.

Requirements for security options in Operations Manager
The security options in DataFabric Manager have the following requirements:
• If you disable HTTP and enable HTTPS, all browsers must connect to DataFabric Manager through HTTPS.
• If you want to enable secure connections from any browser, you must enable HTTPS transport on the DataFabric Manager server. Clients, including browsers and managed storage systems, must use a secure connection to connect to DataFabric Manager. DataFabric Manager, in turn, uses a secure connection to connect to a storage system. This combination of SSL and SecureAdmin allows you to securely monitor and manage your storage systems in DataFabric Manager.

• You cannot disable both the HTTP and HTTPS transports; DataFabric Manager does not allow that configuration. To completely disable access to Operations Manager, stop the HTTP service at the CLI by using the following command: dfm service stop http
• You must select a port for each transport type that you have enabled. The ports must be different from each other.
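For example, to require HTTPS-only browser access, you might disable the HTTP transport. The httpEnabled and httpPort option names in this sketch are assumptions, so confirm them with dfm option list before relying on them:

# Disable HTTP so that only HTTPS is served (option names are assumptions)
dfm option set httpEnabled=No
dfm option set httpsEnabled=Yes

# The HTTP and HTTPS ports must differ; review them before restarting
dfm option list httpPort
dfm option list httpsPort
dfm service stop http
dfm service start http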

Guidelines to configure security options in Operations Manager
You should configure the security options in Operations Manager by using secure protocols. When configuring the DataFabric Manager server for SSL, you are responsible for safeguarding access to and communications with the DataFabric Manager server and the file system that contains the SSL-related private files. The DataFabric Manager server should not be accessed by nonsecure protocols, such as Telnet and RSH. Instead, use a secure, private network, or a secure protocol, such as SSH, to connect to the DataFabric Manager server.

Managed host options
Managed host options control how DataFabric Manager connects to storage systems. You can select a conventional (HTTP) or secure (HTTPS) administration transport and a conventional (RSH) or secure (SSH) login protocol.
Next topics

Where to find managed host options on page 133
Guidelines for changing managed host options on page 134
Comparison between global and storage system-specific managed host options on page 135
Limitations in managed host options on page 135

Where to find managed host options
You can set managed host options by using both the GUI and the command-line interface. The locations of the managed host options are described in the following table.

Option type: Global
GUI: Options page (Setup > Options)
Command-line interface: dfm option list (to view); dfm option set (to set)

Option type: Appliance-specific
GUI: Edit Storage Controller Settings page (Controllers > controller name > Storage Controller Tools > Edit Settings)
Command-line interface: dfm host get (to view); dfm host set (to set)
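As a sketch of both scopes, global defaults can be set with dfm option set and per-appliance overrides with dfm host set. The option names used below (hostLoginProtocol, hostAdminTransport, hostAdminPort) and the host name filer1 are assumptions based on this section's terminology, so confirm them with dfm host get on your installation:

# Global defaults for all storage systems (option names are assumptions)
dfm option set hostLoginProtocol=ssh
dfm option set hostAdminTransport=https

# Per-appliance overrides for one storage system ("filer1" is a placeholder)
dfm host set filer1 hostAdminTransport=https
dfm host set filer1 hostAdminPort=443
dfm host get filer1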

Guidelines for changing managed host options
You can change managed host options such as the login protocol, transport protocol, port, and hosts.equiv option.

Login protocol
This option allows you to select a conventional (RSH) or secure (SSH) connection for the following operations:
• Active/active configuration operations
• The dfm run command for running commands on the storage system

Change the default value if you want a secure connection for active/active configuration operations and for running commands on the storage system.

Administration transport
This option allows you to select a conventional (HTTP) or secure (HTTPS) connection to monitor and manage storage systems. Change the default value if you want a secure connection for monitoring and displaying the storage system UI (FilerView).

Administration port
This option allows you to configure the administration port to monitor and manage storage systems. If you do not configure the port option at the appliance level, the default value for the corresponding protocol is used.

hosts.equiv option
This option allows users to authenticate storage systems when the user name and password are not provided. Change the default value if you have selected the global default option and you want to set authentication for a specific storage system.
Note: If you do not set the transport and port options for a storage system, then DataFabric Manager uses SNMP to get appliance-specific transport and port options for communication. If SNMP fails, then DataFabric Manager uses the options set at the global level.


Comparison between global and storage system-specific managed host options
You can set managed host options globally, for all storage systems, or individually, for specific storage systems. If you set storage system-specific options, DataFabric Manager retains information about the security settings for each managed storage system. It references this information when deciding which of the following options to use to connect to the storage system:
• HTTP or HTTPS
• RSH or SSH
• Login password
• hosts.equiv authentication

If a global setting conflicts with a storage system-specific setting, the storage system-specific setting takes precedence.
Note: You must use storage system-specific managed host options if you plan to use SecureAdmin on some storage systems and not on others.

Limitations in managed host options
You can enable managed host options, but you must accept the following known limitations:
• DataFabric Manager cannot connect to storage systems without SecureAdmin installed or to older storage systems that do not support SecureAdmin.
• On storage systems, SecureAdmin is not mutually exclusive with HTTP access. Transport behavior is configurable on the storage system with the httpd.admin.access option. The httpd.admin.ssl.enable option enables HTTPS access. For more information, see the documentation for your storage system.
• If you have storage systems running SecureAdmin 2.1.2R1 or earlier, HTTPS options do not work with self-signed certificates. You can work around this problem by using a trusted CA-signed certificate.
• If the hosts.equiv option and login are set, then the hosts.equiv option takes precedence.
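On the storage system itself, the two Data ONTAP options named above can be inspected and set from the system console, roughly as follows; the access value shown is illustrative, so check your Data ONTAP documentation for the values your release supports:

# On the storage system console: review and set the admin access options
options httpd.admin.access
options httpd.admin.ssl.enable on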


Changing password for storage systems in DataFabric Manager
You can change the password for an individual storage system on the Edit Storage Controller Settings page using Operations Manager.
Steps

1. Go to the Storage Controller Details page for the storage system or hosting storage system (of the vFiler unit) and choose Edit Settings from the Storage Controller Tools (at the lower left of Operations Manager).
The Edit Storage Controller Settings page is displayed.
2. In the Login field, enter a user name that DataFabric Manager uses to authenticate the storage system or the vFiler unit.
3. In the Password field, enter a password that DataFabric Manager uses to authenticate the storage system or the vFiler unit.
4. Click Update.

Changing passwords on multiple storage systems
DataFabric Manager can set passwords on all storage systems when you use the same authentication credentials on each. Select the Global group to set the same passwords on all storage systems at once.
Steps

1. Log in to Operations Manager. 2. Depending on the type of storage system for which you want to manage passwords, select either of the following:
Storage system: Management > Storage System > Passwords
vFiler unit: Management > vFiler > Passwords

3. Enter the user name.
4. Enter the old password of the local user on the host.
Note: This field is mandatory for storage systems running Data ONTAP versions earlier than 7.0 and for all the vFiler units.

5. Enter a new password for the storage system or groups of storage systems.
6. Reenter the new password exactly the same way in the Confirm New Password field.
7. Select the target storage system or target groups.
8. Click Update.

Issue with modification of passwords for storage systems
When modifying passwords for a large number of storage systems, you might get an error message if the length of your command input exceeds the specified limit. This error occurs only when you are using the Operations Manager graphical user interface and not the CLI. If this error occurs, you can take either of the following corrective actions: • • Select fewer storage systems. Create a resource group and assign the selected storage systems to the group as members, then modify the password for the group.

Authentication control in DataFabric Manager
DataFabric Manager 3.5 and later allows you to use the hosts.equiv file to authenticate storage systems. When the hosts.equiv option is set, you can authenticate storage systems, vFiler units, and active/active configuration controllers without specifying the user name and password.
Note: If the storage system or vFiler unit is configured with IPv6 addressing, then you cannot use the hosts.equiv file to authenticate the storage system or vFiler unit.

Next topics

Using hosts.equiv to control authentication on page 137
Configuring HTTP and monitor services to run as a different user on page 139

Using hosts.equiv to control authentication
You can control authentication of storage systems, vFiler units, and active/active configurations by using the hosts.equiv file.

Steps

1. Edit the /etc/hosts.equiv file on the storage system and provide either the host name or the IP address of the system running DataFabric Manager, as an entry in the following format: <host-name-or-ip-address>.

2. Alternatively, provide the host name or the IP address of the system running DataFabric Manager, and the user name of the user running the DataFabric Manager CLI, in the following format: <host-name-or-ip-address> <username>. You can edit the option on the Edit Appliance Settings page in Operations Manager.
3. Provide an entry for the user that runs the DataFabric Manager services:
• If the operating system is Linux, provide the host name or the IP address of the system running DataFabric Manager, and the user name of the user running the HTTP service, in the following format: <host-name-or-ip-address> <username>.
• If the operating system is Windows, provide the host name or the IP address of the system running DataFabric Manager, and the user name of the user running the HTTP, server, scheduler, and monitor services, in the following format: <host-name-or-ip-address> <username>.

Note: By default, the HTTP service runs as the nobody user on Linux. By default, the HTTP, server, scheduler, and monitor services run as the LocalSystem user on Windows.

If DataFabric Manager is running on a host named DFM_HOST, and USER1 is running the dfm commands, then by default, on a Linux operating system, you need to provide the following entries:
DFM_HOST
DFM_HOST USER1
DFM_HOST nobody

On a Windows operating system, you need to provide the following entries:
DFM_HOST
DFM_HOST USER1
DFM_HOST SYSTEM

For more information about configuring the /etc/hosts.equiv file, see the Data ONTAP Storage Management Guide.

Related information

Data ONTAP Storage Management Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml


Configuring HTTP and monitor services to run as a different user
You can configure the HTTP and monitor services to run as a different user by using the DataFabric Manager CLI.
Step

1. Enter the command for your operating system:
Linux: dfm service runas -u <user-name> http
Windows: dfm service runas -u <user-name> -p <password> [http] [monitor]
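For example, on a Windows host the command might be run as follows; the service account name and password are placeholders:

dfm service runas -u dfmsvc -p S3cretPass http monitor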

Note: For security reasons, the <user-name> cannot be "root" on Linux. On Windows hosts, the <user-name> should belong to the administrator group.


File Storage Resource Management
You can use File Storage Resource Manager (FSRM) to gather file-system metadata and generate reports on different characteristics of that metadata. DataFabric Manager interacts with the NetApp Host Agent residing on remote Windows, Solaris, or Linux workstations or servers (called hosts) to recursively examine the directory structures (paths) you have specified. The paths might be mounted on top of a NetApp LUN, volume, or qtree. These reports contain the following details:
• Files that are consuming the most space
• Files that are old or have been accessed recently
• Types of files (.doc, .gif, .mp3, and so on) on the file system

You can then make an intelligent choice about how to efficiently use your existing space. FSRM can be configured to generate reports periodically. For example, if you suspect that certain file types are consuming excessive storage space on your storage systems, you would perform the following tasks:
1. Deploy one or more host agents.
2. Configure FSRM to walk a path.

Note: The File SRM tab in Operations Manager includes other storage monitoring utilities: for example, chargeback and quota reporting.

Next topics
How FSRM monitoring works on page 142
What capacity reports are on page 142
Difference between capacity reports and file system statistics on page 142
Prerequisites for FSRM on page 143
Setting up FSRM on page 144
NetApp Host Agent software overview on page 144
Managing host agents on page 146
Configuring host agent administration settings on page 148
What FSRM paths are on page 149

How FSRM monitoring works
DataFabric Manager monitors directory paths that are visible to the host agent. Host agents can also gather FSRM data about other file system paths that are not on a NetApp storage system: for example, local disk or third-party storage systems.

Note: DataFabric Manager cannot obtain FSRM data for files that are located in NetApp volumes that are not exported by CIFS or NFS. Therefore, if you want to enable FSRM monitoring of NetApp storage systems, the remote host must mount a NetApp share using NFS or CIFS, or the host must use a LUN on the storage system.

What capacity reports are
Capacity reports provide you with information about the file space statistics. You can determine the following using the capacity reports:
• Total capacity of the storage system
• Amount of used space in the storage system
• Amount or percentage of free space available in the storage system

For example, you can determine the capacity of a volume by viewing the corresponding volume capacity report (Reports > All > Volumes > Volume Capacity).

Difference between capacity reports and file system statistics
The file space statistics that are reported by a path walk differ from the "volume space used" statistics reported by Operations Manager capacity reports. This is due to the difficulty of determining how much space a file actually consumes on a volume. For example, most files consume slightly more space than the length of the file, depending on the block size. In addition, hard and soft links can cause a file to appear in more than one place and be counted twice. Therefore, do not use the results of a path walk to determine the amount of space used in a volume. Instead, use the DataFabric Manager capacity reports.

Prerequisites for FSRM
The prerequisites for FSRM include a File SRM license, NetApp Host Agent software, connection to a TCP/IP network, and visible directory paths.

Prerequisite: File SRM license
Description: You must have a valid File SRM license installed on your DataFabric Manager server, to enable FSRM monitoring. After you install the File SRM license, the Quotas tab in Operations Manager is renamed "File SRM" and all the FSRM features become available. If you do not have a File SRM license, contact your sales representative.

Prerequisite: NetApp Host Agent software
Description: Each workstation from which FSRM data is collected must have NetApp Host Agent 2.0 (or later) installed. The 2.5 and later versions are recommended. For more information about the NetApp Host Agent, see the NetApp Host Agent Installation and Administration Guide.

Prerequisite: Connection to TCP/IP network
Description: All FSRM hosts must be connected to a TCP/IP network that is either known to or discoverable by DataFabric Manager. The hosts must be connected to the network through an Ethernet port and must have a valid IP address.

Prerequisite: Visible directory paths
Description: All directory paths to be monitored must be visible to the host agent. For example, to enable FSRM monitoring, the host agent must mount a NetApp system share (volume or qtree) using NFS or CIFS, or the host agent must use a LUN on the system.

Related concepts
How FSRM monitoring works on page 142

Related information
NetApp Host Agent Installation and Administration Guide - http://now.netapp.com/NOW/knowledge/docs/nha/nha_index.shtml
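As a quick check of the license prerequisite, you can inspect licenses from the DataFabric Manager CLI; the exact subcommand name below is an assumption, so consult the dfm man pages on your installation:

# Confirm that the File SRM license is installed (subcommand is an assumption)
dfm license list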

Setting up FSRM
To set up and configure FSRM, you should perform a set of tasks such as identifying FSRM host agents, adding host agents, adding paths, and setting up path-walk schedules.

Steps
1. Identify FSRM host agents.
2. Add new host agents manually if they have not been discovered.
3. Set up host agent administration access on the hosts to be monitored. You can verify the host administration access by checking the SRM Summary page.
4. Add paths.
5. Set up path-walk schedules.

Note: To use host agents for FSRM, you must set your "login" to admin.

Related concepts
How FSRM monitoring works on page 142

Related tasks
Configuring host agent administration settings on page 148
Enabling administration access for one or more host agents on page 149
Enabling administration access globally for all host agents on page 149
Adding SRM paths on page 152
Adding a schedule on page 158

Related references
Host Agent management tasks on page 146

NetApp Host Agent software overview
NetApp Host Agent is software that runs on Windows, Solaris, or Linux systems. It enables DataFabric Manager to collect SAN host bus adapter (HBA) information and remote file system metadata. You can perform the following tasks by deploying NetApp Host Agent software on one or more hosts and licensing File SRM:
• Collect OS version and host name information.
• Collect storage usage data at the file and directory levels.

• Identify and categorize a variety of file-related information: for example, largest files, oldest files, files by owner, and files by type.
• Collect SAN host and SRM host data.

Note: A workstation or server running the NetApp Host Agent software is called a host.

Next topics
NetApp Host Agent communication on page 145
NetApp Host Agent software passwords on page 145
NetApp Host Agent software passwords for monitoring tasks on page 145
NetApp Host Agent software passwords for administration tasks on page 145

NetApp Host Agent communication
The DataFabric Manager server communicates with the NetApp Host Agent using HTTP or HTTPS. By default, both the NetApp Host Agent software and the DataFabric Manager server are configured to communicate with each other using HTTP. You can specify the protocol to be used for communication in Operations Manager and when you install the NetApp Host Agent on your SAN host or SRM host agent.

NetApp Host Agent software passwords
Host agents have two user name and password pairs: one pair for monitoring tasks and one pair for administration tasks.

NetApp Host Agent software passwords for monitoring tasks
The default host agent user name and password allows monitoring only. You cannot perform FSRM functions. Following are the default values for the user name and password of the Host Agent software for monitoring tasks:
• User name=guest
• Password=public

Any sessions initiated by DataFabric Manager by using this user name and password are limited to basic monitoring operations. If you later decide to change the guest password on the host agent, you must also set the same user name and password in Operations Manager, using the Host Agent Monitoring Password option on the Options page.

NetApp Host Agent software passwords for administration tasks
The administration user name and password allows read/write permission and is required for FSRM functions. Following are the values for the user name and password of the NetApp Host Agent for administration tasks:
• User name=admin
• Password=user-specified

You specify the password on the host agent's configuration UI (http://name-of-agent:4092/). This user name and password allows full access to the host agent. After setting the administration user name and password in the host agent, you must also set the same user name and password in Operations Manager on the Options page (Setup menu > Options > Host Agent link).

Note: This process of password change applies globally. To change passwords for one or more host agents, use the Edit Agent Logins page.

Managing host agents
DataFabric Manager can discover host agents automatically; however, it does not use SNMP to poll for new host agents. Instead, a special-purpose agent called NetApp Host Agent is required for discovering, monitoring, and managing SAN and SRM hosts. NetApp Host Agent must be installed on each host agent that you want to monitor and manage with DataFabric Manager.

Host Agent management tasks
These are common host agent management tasks and the location of the Operations Manager user-interface page that enables you to complete them.

If you want to... Add a new host agent
Go here: Add Host Agents or Edit Host Agents page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Hosts)

If you want to... Edit host agent settings (including passwords)
Go here: Edit Host Agent Settings page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name > Edit Settings in Tools list) OR Edit Host Agents page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Hosts > appliance host name edit)

If you want to... Edit host agent properties (for example, the HTTP port, HTTPS settings)
Go here: Edit Agent Settings page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name > Manage Host Agent in Host Tools list)

If you want to... Configure host agent administration access
Go here: For one or more host agents: Edit Agent Logins page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Edit Agent Logins) OR Create a global default using the Host Agent Options page (Setup menu > Options > Host Agent)

If you want to... List the available host agents
Go here: SRM view (Group Status tab > File SRM > Report drop-down list: SRM Summary > Host Agents, SRM)
If you have enabled a File SRM license on the workstation, DataFabric Manager automatically discovers all hosts that it can communicate with. Communication between the host agent and DataFabric Manager takes place over HTTP or HTTPS (port 4092 or port 4093, respectively).

If you want to... Edit host agents
Go here: Add Host Agents or Edit Host Agents page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Hosts link)

If you want to... Review host agent global settings
Go here: Host Agent Options page (Setup menu > Options > Host Agent)

If you want to... Obtain SRM host agent information
Go here: Host Agent Details page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name link)

If you want to... Change the SRM host agent monitoring interval
Go here: Monitoring Options page (Setup menu > Options link > Monitoring Options link > Agent monitoring interval option)

If you want to... Modify NetApp Host Agent software settings
Go here: Edit Agent Settings page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name link > Manage Host Agent in Host Tools list)

If you want to... Disable host agent discovery
Go here: Host Agent Discovery option on the Options page (Setup menu > Options link > Discovery > Host Agent Discovery field)

If you want to... Delete a host agent
Go here: Add Host Agents or Edit Host Agents page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Hosts link)

Configuring host agent administration settings
You can configure administration access, such as monitoring-only and management access, to the host agents.

Before you begin
You must enable administration access to your host agents before you can use the FSRM feature to gather statistical data. To enable administration access, the passwords set in Operations Manager must match those set for the NetApp Host Agent software.

About this task
Global options apply to all affected devices that do not have individual settings specified for them. Default values are initially supplied for these options. However, you should review and change the default values as necessary. The host agent access and communication options are globally set for all storage systems using the values specified in the Host Agent Options section on the Options page. For example, the Host Agent Login option applies to all host agents.

Steps
1. Specify the DataFabric Manager options:
Monitoring only: Host Agent Login=guest; Host Agent Monitoring Password
Management (required for FSRM): Host Agent Login=admin; Host Agent Management Password=your-administration-password
2. Specify the NetApp Host Agent options:
Monitoring only: Monitoring API Password
Management (required for FSRM): Management API Password

Next topics
Enabling administration access for one or more host agents on page 149
Enabling administration access globally for all host agents on page 149

Related concepts
NetApp Host Agent software passwords on page 145

Enabling administration access for one or more host agents
You can enable administration access for one or more host agents from the SRM Summary page.

Steps
1. From the SRM Summary page, click Edit Agent Logins in the Host Agents Total section.
2. Select the host agents for which you want to enable administration access.
3. Enter (or modify) the required information, and then click Update.

Enabling administration access globally for all host agents
You can enable administration access globally for all host agents, from any Summary page. This option changes all host agent login names and passwords, unless the host agent has a different login name or password already specified for it. For example, if an administrator has specified a password other than Global Default in the Password field of the Edit Host Agent Settings page, changing the global password option does not change the storage system password.

Steps
1. From any Summary page, select Options from the Setup drop-down menu.
2. Select Host Agent from the Edit Options list (in the left pane).
3. Modify the fields, as needed, and then click Update.

What FSRM paths are
File SRM paths define the location in the file system that is to be indexed for data. Following are the properties of SRM paths:
• They must be defined for a specific host.
• They can be walked by managed host agents.
• They can be grouped like any other storage object.
• They can be mapped (linked) to volumes, qtrees, and LUNs.

Note: The FSRM path-walk feature can cause performance degradation. However, you can schedule your path walks to occur during low-use or non-business hours.

Next topics
Adding CIFS credentials on page 150
Path management tasks on page 151
Adding SRM paths on page 152
Path names for CIFS on page 152
Conventions for specifying paths from the CLI on page 153
Viewing file-level details for a path on page 153
Viewing directory-level details for a path on page 153
Editing SRM paths on page 154
Deleting SRM paths on page 154
Automatically mapping SRM path on page 154
What path walks are on page 155
SRM path-walk recommendations on page 155
What File SRM reports are on page 155
Access restriction to file system data on page 156
Identification of oldest files in a storage network on page 156
FSRM prerequisites on page 157
Verifying administrative access for using FSRM on page 157
Verifying host agent communication on page 157
Creating a new group of hosts on page 158
Adding an FSRM path on page 158
Adding a schedule on page 158
Grouping the FSRM paths on page 159
Viewing a report that lists the oldest files on page 159

Adding CIFS credentials
To provide path names for CIFS, you must create a CIFS account.

Steps
1. Click Setup > Options > Host Agent.
2. In the Host Agent Options page, specify the CIFS account name in the Host Agent CIFS Account field.
3. In the Host Agent CIFS Password field, type the password for the CIFS account.
4. Click Update.

Path management tasks
There are common path management tasks and the location of the Operations Manager user-interface page that enables you to complete them.

If you want to... Add paths
Go here: To use the automapping feature: Create new SRM Path for this object link (Volume Details, Qtree Details, or LUN Details page > Create an SRM path). Note: The volume must be mounted on the managed host agent. To manually add an SRM path: Add SRM Paths or Edit SRM Paths page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Paths link)

If you want to... Create path-walk schedules
Go here: Edit SRM Path Walk Schedules page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Schedules link)

If you want to... Specify path-walk times
Go here: SRM Path Walk Schedule Times page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Schedules link > schedule name > Add Walk Times link)

If you want to... Manually start or stop an SRM path walk
Go here: SRM Path Details page (SRM path name > Start). If an SRM path walk is in progress, the Start button changes to a Stop button.

If you want to... Review SRM path details
Go here: SRM Path Details page (SRM path name)

If you want to... Edit SRM paths
Go here: Add SRM Paths or Edit SRM Paths page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Paths link) OR Edit SRM Path Settings page (SRM path name > Edit Settings in Host Tools list)

If you want to... Review SRM path-walk schedule details
Go here: SRM Path Walk Schedule Details page (schedule name)

If you want to... Edit SRM path-walk schedules
Go here: SRM Path Walk Schedule Times page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Schedules link > schedule name > Add Walk Times link)

If you want to... Delete SRM paths
Go here: Add SRM Paths or Edit SRM Paths page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Paths link)

If you want to... Delete SRM path-walk schedules
Go here: Edit SRM Path Walk Schedules page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Schedules link > schedule name)

If you want to... Delete SRM path-walk schedule times
Go here: SRM Path Walk Schedule Details page (schedule name)

Adding SRM paths
You can add SRM paths from the SRM Summary page.

Steps
1. From the SRM Summary page, click the Add/Edit Paths link in the SRM Paths Total section.
2. From the SRM Host drop-down list in the Add a New SRM Path section, select the name of the host agent that you want to monitor.
3. Type a path name, and then click Add SRM Path.

Path names for CIFS
For CIFS systems, always use Universal Naming Convention (UNC) path names. In Windows operating systems, the UNC format is as follows:
\\servername\sharename\path\filename

The following path entries are valid:
host:/u/earth/work
host:/usr/local/bin
host:/engineering/toi
host:C:\Program Files

For CIFS, you must specify the path as a UNC path, as follows:
host:\\storage system\share\dir

The SRM feature does not convert mapped drives to UNC path names. For example, suppose that drive H: on the system host5 is mapped to the following path name:
\\abc\users\jones

The path entry host5:H:\ fails because the FSRM feature cannot determine what drive H: is mapped to. The following path entry is correct:

host5:\\abc\users\jones

Conventions for specifying paths from the CLI
Unique conventions have to be followed for specifying paths from the CLI in Windows and UNIX. Windows requires that you use double quotation marks to enclose all strings that contain spaces. Following are example paths:
C:\> dfm srm path add "inchon:C:\Program Files"
C:\> dfm srm path add "oscar:/usr/home"

UNIX requires that you double all backslashes, unless the argument is enclosed in double quotation marks. This convention is also true for spaces in file names. For example:
$ dfm srm path add inchon:C:\\Program\ Files
$ dfm srm path add "inchon:C:\Program Files"
$ dfm srm path add oscar:/usr/local

Viewing file-level details for a path
You can view file-level details about an SRM path from the SRM Summary page.

Steps
1. From the SRM Summary page, click a path name in the SRM Paths Total section.
2. To view an expanded view of directory information that includes a listing of files by type and by user, click the Extended Details link (at the upper right corner of the File SRM tab window).

Viewing directory-level details for a path
You can view directory-level details about an SRM path from the SRM Summary page.

Steps
1. From the SRM Summary page, click a path name in the SRM Paths Total section.
2. Click the Browse Directories link in the SRM Path Tools list (at the lower left of Operations Manager).

Editing SRM paths
You can edit SRM paths from the SRM Summary page.

Steps
1. From the SRM Summary page, click Add/Edit Paths in the SRM Paths Total section.
2. Select the SRM path you want to modify.
3. Modify the fields, as needed, and then click Update.

Deleting SRM paths
You can delete SRM paths from the SRM Summary page.

Steps
1. From the SRM Summary page, click Add/Edit Paths in the SRM Paths Total section.
2. Select the SRM paths you want to delete and then click Delete.

Automatically mapping SRM path
When you automatically map the SRM path, the initial path mapping will be correct. Subsequent changes on the host (running the NetApp Host Agent) or storage system can cause the path mapping to become invalid. If the SRM path is not on a storage device monitored by DataFabric Manager, you might not be able to do the following operations:
• Access SRM data.
• Associate an SRM path with a storage object.

Note: Sometimes, you cannot automatically map storage objects with an SRM path.

Requirements for automatically mapping SRM path
You can automatically create a new path for an object by using the "Create new SRM Path for this object" link on the details page for the object. You must ensure the following, to create a new path for an object:
• The host agent is set up and properly configured.
• The host agent passwords match those set in DataFabric Manager.
• The host agent has access to the volume, qtree, or LUN:
  • If the host agent is a Windows host, you must ensure that the CIFS passwords match.

  • If the object is a LUN on a Windows host, SnapDrive must be installed and the LUN must be managed by SnapDrive.
  • If the host agent is a UNIX host, then the volume or qtree must be NFS mounted.
  • If the object is a LUN on a UNIX host, the LUN must be formatted and mounted directly into the file system (volume managers are not supported).
• The Host Agent Login and Management Password are set correctly.

You can also manually map SRM paths to volumes, qtrees, and LUNs.

What path walks are
A path walk is the process of recursively examining a directory path for file-level statistics. Path walks are scheduled using DataFabric Manager and executed by NetApp Host Agent software. The NetApp Host Agent scans all subdirectories of the specified directory path and gathers per-file and per-directory data.

SRM path-walk recommendations
SRM path walks can consume considerable resources on the SRM host agent and on DataFabric Manager. Therefore, schedule your SRM path walks to occur during off-peak hours. Also, do not schedule multiple, simultaneous SRM path walks on the same SRM host agent.

Note: If the host agent is installed on UNIX, the File SRM feature tracks only the files having a file name extension that exactly matches the file type specification in Operations Manager. For example, files that end in .JPG will not match the .jpg file type if the host agent is on UNIX, even though they would match if the agent were on Windows. Running the host agent on Windows avoids this problem.

What File SRM reports are
Operations Manager provides three levels of file system statistics:
• Consolidated data gathered from all paths
• SRM path-specific data: a summary of the data for all directories in the specified path
• Directory-level data: the data for the specified directory only

Viewing file system statistics
You can view the File SRM report for a group by clicking the File SRM tab.

Steps
1. Click the group for which you want a File SRM report.

2. Click File SRM and select a report from the Report drop-down list. You can list reports by using the CLI dfm report list command (without arguments) to display all available reports.

The following reports are available:
• SRM Paths
• Largest SRM Files
• Least Recently Accessed SRM Files
• Least Recently Modified SRM Files
• Recently Modified SRM Files
• SRM File Types
• SRM File Owners
• All SRM Directories

Each FSRM report page displays statistics for the users who have storage space allocated on the objects (storage systems, aggregates, volumes, or qtrees) in your selected group.

Access restriction to file system data
To restrict access to private or sensitive file system information, remove the GlobalSRM role from the access privileges in the Administrators page (Setup menu > Administrative Users link).

Related concepts
How roles relate to administrators on page 55

Identification of oldest files in a storage network
You can find the oldest files residing in a storage network by using File SRM (FSRM) and archive the files to a NearStore system. Following is the list of high-level tasks to be performed to use FSRM for identifying the oldest files in a storage network:
• Check FSRM prerequisites
• Verify administrative access
• Verify host agent communication
• Create a new group; group the host agents in a logical way. For example, group the engineering host agents together if you want to search for them separately.
• Add an FSRM path
• Add a schedule
• Group the FSRM paths
• View a report listing the oldest files
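For example, the dfm report list command mentioned above can be combined with a view of a specific report; the view subcommand and the report name in this sketch are assumptions, so check the CLI help on your installation:

# List every available report, including the File SRM reports
dfm report list

# View one report from the CLI (subcommand and report name are assumptions)
dfm report view srmfiles-least-accessed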

FSRM prerequisites
Before using the FSRM feature for the first time, you must verify that all prerequisites are met, by referring to the difference between capacity reports and file system statistics.

Verifying administrative access for using FSRM
You must verify administrative access in Operations Manager for using FSRM. To do so, the ABC administrator must complete a certain list of tasks.

Steps
1. Select Options from the Setup drop-down menu.
2. Click Host Agent in the Edit Options section.
3. Verify that the Host Agent Management Password is set.
4. Change the Host Agent Login to Admin.
5. Click Update to apply the changes.
6. Click Home to return to the Control Center.

Verifying host agent communication
To verify that DataFabric Manager can communicate with the host agents, complete the following tasks.

Steps
1. Click File SRM, then select SRM Summary from the Report drop-down list.
2. Check the list of host agents to view the status. If the status of one or more of the storage systems is Unknown, the host agent login settings might not be properly configured.
3. If the status is Unknown, click Edit Agent Logins.
4. Select the host agents for engineering that the administrator wants to communicate with.
5. Edit the login or password information.
6. Click Update.
7. Click File SRM to return to the SRM Summary page.
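The same host agent credentials can be inspected and set from the DataFabric Manager CLI. This is a sketch only: dfm option list and dfm option set are standard commands, but the option name hostAgentLogin is an assumption; confirm the exact name in the dfm option list output for your release.

   dfm option list
   dfm option set hostAgentLogin=Admin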

Creating a new group of hosts
The ABC Company wants to find the oldest files in its engineering department first. To find these files, the ABC Company administrator groups together the host agents in the engineering domain. To create a new group of hosts, the administrator must complete a certain list of tasks.

Steps
1. Select Host Agents, SRM from the Report drop-down list.
2. Select the host agent in the engineering domain.
3. From the buttons at the bottom of the page, click Add To New Group.
4. When prompted, enter the name Eng and click Add.
DataFabric Manager refreshes.

Adding an FSRM path
The administrator can add an FSRM path from the SRM Summary page.

Steps
1. From the SRM Summary page, click Add/Edit Paths.
2. Click Add SRM Path.
3. In the Add a New SRM Path section, select a host from the SRM Host drop-down list.
4. Enter the path to be searched in the Path field.
5. Click Add.

Adding a schedule
To create the path-walk schedule, the administrator must complete a certain set of tasks.

Steps
1. Click the Add/Edit Schedules link in the SRM Summary page (File SRM tab).
2. Click Add A Schedule.
3. In the Add a New Schedule section, enter a meaningful name for the schedule.
4. In the Schedule Template list, select a schedule or select None.
5. In the SRM Path Walk Schedule Times section, select the days and times to start the SRM path walks.
6. Click Update.

7. When prompted, select a host agent to associate the new schedule with.
8. In the Schedule field, select the schedule name the administrator just created.
9. Click Home.

Grouping the FSRM paths
To view consolidated data, the administrator must group the FSRM paths. To do so, the ABC Company administrator must complete a set of tasks.

Steps
1. Click File SRM.
2. Select SRM Paths, All from the Report drop-down list.
3. Select the SRM path that the administrator wants to group.
4. From the buttons at the bottom of the page, click New Group.
5. When prompted, enter a name for the group.
DataFabric Manager adds the new group and refreshes.

Viewing a report that lists the oldest files
To view a report listing the oldest files in the SRM path, the administrator completes a set of tasks.

Steps
1. Click File SRM.
2. Select the engineering group in the Groups section at the left side of the tab window.
3. Select the report SRM Files, Least Recently Accessed from the Report drop-down list.
4. Review the data.
5. Click Home to navigate back to the main window.


User quotas
You can use user quotas to limit the amount of disk space or the number of files that a user can use.

Next topics
About quotas on page 161
Why you use quotas on page 161
Overview of the quota process on page 161
Differences among hard, soft, and threshold quotas on page 162
User quota management using Operations Manager on page 162
Where to find user quota reports in Operations Manager on page 163
Modification of user quotas in Operations Manager on page 164
Configuring user settings using Operations Manager on page 165
What user quota thresholds are on page 165

About quotas
Quotas provide a way to restrict or track the disk space and number of files used by a user, group, or qtree. Quotas are applied to a specific volume or qtree.

Why you use quotas
You can use quotas to limit resource usage, to provide notification when resource usage reaches specific levels, or simply to track resource usage. You specify a quota for the following reasons:
• To limit the amount of disk space or the number of files that can be used by a user or group, or that can be contained by a qtree
• To track the amount of disk space or the number of files used by a user, group, or qtree, without imposing a limit
• To warn users when their disk usage or file usage is high

Overview of the quota process
Quotas can cause Data ONTAP to send a notification (soft quota) or to prevent a write operation from succeeding (hard quota) when quotas are exceeded. When Data ONTAP receives a request to write to a volume, it checks to see whether quotas are activated for that volume. If so, Data ONTAP determines whether any quota for that volume (and, if the write is to a qtree, for that qtree) would be exceeded by performing the write operation. If any hard quota would be exceeded, the write operation fails, and a quota notification is sent. If any soft quota would be exceeded, the write operation succeeds, and a quota notification is sent.
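For reference, quota definitions live in the /etc/quotas file on the storage system. The following excerpt is a minimal sketch using the Data ONTAP 7-mode column layout (disk and files hard limits, threshold, then soft limits); the user, volume, and values are illustrative only:

   #Quota target   type            disk  files  thold  sdisk  sfile
   jdoe            user@/vol/vol1  500M  10K    450M   400M   8K
   /vol/vol1/eng   tree            10G   100K   -      -      -

A hard limit (disk or files) blocks the write that would exceed it; the threshold (thold) and soft limits (sdisk, sfile) only trigger notifications.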

Differences among hard, soft, and threshold quotas
Hard quotas impose a hard limit on system resources; any operation that would result in exceeding the limit fails. Soft quotas send a warning message when resource usage reaches a certain level, but do not affect data access operations, so you can take appropriate action before the quota is exceeded. Threshold quotas (quotas specified using the threshold field of the /etc/quotas file) are equivalent to quotas specified using the disk space soft limit.

User quota management using Operations Manager
You can view user quota summary reports, chargeback reports, user details, quota events, and so on, about files and disk space that is used by users. You can perform the following user quota management tasks by using Operations Manager:
• View summary reports (across all storage systems) and per-quota reports with data about files and disk space used by users, hard and soft quota limits, and the projected time when users exceed their quota limit
• View graphs of total growth and per-quota growth of each user
• View details about a user
• Obtain chargeback reports for users
• Edit user quotas; when you edit user quotas through Operations Manager, the /etc/quotas file is updated on the storage system on which the quota is located
• Configure and edit user quota thresholds for individual users
• Notify users when they exceed the user quota thresholds configured in DataFabric Manager
• Configure the monitoring interval for user quota monitoring
• View and respond to user quota events
• Configure alarms to notify administrators of the user quota events

Related concepts
Where to find user quota reports in Operations Manager on page 163
Monitor interval for user quotas in Operations Manager on page 163
What user quota thresholds are on page 165

Prerequisites for managing user quotas using Operations Manager
To monitor and manage user quotas by using Operations Manager, ensure that your storage system meets the following prerequisites:
• The storage systems on which you want to monitor the user quotas must have Data ONTAP 6.3 or later installed.
• The storage systems on which you want to manage user quotas must have Data ONTAP 6.4 or later installed.
• You must use DataFabric Manager to configure the root login name and root password of a storage system on which you want to monitor and manage user quotas.
• You must configure and enable quotas for each volume for which you want to view the user quotas.
• You must log in to Operations Manager as an administrator with the quota privilege to view user quota reports and events so that you can configure user quotas for volumes and qtrees.

Additional requirements for editing quotas:
• Directives, such as QUOTA_TARGET_DOMAIN and QUOTA_PERFORM_USER_MAPPING, must not be present in the /etc/quotas file on the storage system (an example of such directive lines follows at the end of this section).
• You should enable RSH or SSH access to the storage system and configure login and password credentials that are used to authenticate DataFabric Manager.
• The /etc/quotas file on the storage system must not contain any errors.

Following are the prerequisites to monitor and edit user quotas assigned to vFiler units:
• The hosting storage system must be running Data ONTAP 6.5.1 or later.

Where to find user quota reports in Operations Manager
You can view user quota reports in Operations Manager at Control Center > Home > File SRM (or Quotas) > Report. If you have not installed a File SRM license, you cannot view the File SRM tab; however, you can access the reports from the Quotas tab. After you install the File SRM license, the Quotas tab is renamed as "File SRM."

Monitor interval for user quotas in Operations Manager
You can use Operations Manager to view the monitoring interval at which DataFabric Manager is monitoring a user quota on a storage system. The User Quota Monitoring Interval option on the Options page (Setup > Options > Monitoring) determines how frequently DataFabric Manager collects the user quota information from the monitored storage systems. By default, the user quota information is collected once every day; however, you can change this monitoring interval.
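The directives mentioned above look similar to the following lines in /etc/quotas. This is a recognition aid only; the domain name is illustrative, and the syntax shown follows Data ONTAP 7-mode conventions:

   QUOTA_TARGET_DOMAIN engineering
   QUOTA_PERFORM_USER_MAPPING on

If lines like these are present, remove them before editing user quotas through Operations Manager.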

Note: The process of collecting the user quota information from storage systems is resource intensive. When you decrease the User Quota Monitoring Interval option to a low value, DataFabric Manager collects the information more frequently. However, decreasing the User Quota Monitoring Interval might negatively affect the performance of the storage systems and DataFabric Manager.

Modification of user quotas in Operations Manager
You can edit the disk space threshold, disk space hard limit, disk space soft limit, and so on, for a user quota in Operations Manager. When you edit the options for a user quota, the /etc/quotas file on the storage system where the quota exists is appropriately updated. You can edit the following fields:
• Disk space threshold
• Disk space hard limit
• Disk space soft limit
• Files hard limit
• Files soft limit
For more information about these fields, see the Operations Manager Help.

Operations Manager conducts vFiler quota editing by using jobs. In addition, before starting a job, DataFabric Manager creates a backup file named DFM (timestamp).bak, to protect the quota file against damage or loss. If the job fails, you can recover data by renaming the backup quota file. If a vFiler quota editing job fails, verify the quota file on the hosting storage system.

Next topics
Prerequisites to edit user quotas in Operations Manager on page 164
Editing user quotas using Operations Manager on page 165

Prerequisites to edit user quotas in Operations Manager
If you want to edit user quotas in Operations Manager, ensure that your storage system meets the following prerequisites:
• You must configure the root login name and root password in DataFabric Manager for the storage system on which you want to monitor and manage user quotas.
• You must configure and enable quotas for each volume for which you want to view the user quotas.
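To confirm the quota prerequisites on the storage system itself, you can use the Data ONTAP quota commands from the system console. A minimal sketch, assuming a volume named vol1 (7-mode syntax):

   quota on vol1
   quota report

The quota on command activates quotas for the volume; quota report lists quota targets, limits, and current usage, which can also help you spot /etc/quotas errors before DataFabric Manager starts a quota editing job.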

Editing user quotas using Operations Manager
You can edit user quotas by using the Edit Quota Settings page in Operations Manager.

Before you begin
Ensure that the storage system meets the prerequisites before you edit user quotas in Operations Manager.

Steps
1. Click Control Center > Home > Group Status > File SRM (or Quotas) > Report > User Quotas, All.
2. Click any quota-related field for the required quota.
3. Edit the quota settings as needed.
4. Click Update.

Configuring user settings using Operations Manager
You can configure user settings, such as the e-mail address of users and quota alerts, and set user quota thresholds, by using Operations Manager.

Steps
1. Click Control Center > Home > Group Status > File SRM (or Quotas) > Report > User Quotas, All.
2. Click the Edit Settings link in the lower left corner.
3. Edit the fields as needed. You can edit the E-mail Address of the user, Owner Name, Owner E-mail, Send Quota Alerts Now, User Quota Full Threshold (%), User Quota Nearly Full Threshold (%), and Resource Tag. You can leave the e-mail address field blank if you want DataFabric Manager to use the default e-mail address of the user.
4. Click Update.

What user quota thresholds are
User quota thresholds are the values that DataFabric Manager uses to evaluate whether the space consumption by a user is nearing, or has reached, the limit that is set by the user's quota. If these thresholds are crossed, DataFabric Manager generates user quota events.
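A worked example of how these thresholds behave (the percentages are the documented defaults; the quota size is illustrative): if a user's disk space hard limit in /etc/quotas is 10 GB, a User Quota Full Threshold of 90 generates a User Quota Full event at 9 GB of usage, and a User Quota Nearly Full Threshold of 80 generates a User Quota Almost Full event at 8 GB, both before the user actually reaches the 10 GB hard limit.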

DataFabric Manager sends user alerts in the form of e-mail messages to the users who cause user quota events. Additionally, you can configure alarms that notify the specified recipients (DataFabric Manager administrators, a pager address, or an SNMP trap host) of user quota events. For more information about user thresholds, see the Operations Manager Administration Guide.

Next topics
What DataFabric Manager user thresholds are on page 166
User quota thresholds in Operations Manager on page 166
Ways to configure user quota thresholds in Operations Manager on page 166
Precedence of user quota thresholds in DataFabric Manager on page 167

What DataFabric Manager user thresholds are
The DataFabric Manager user quota thresholds are a percentage of the Data ONTAP hard limits (files and disk space) configured in the /etc/quotas file of a storage system. The user quota threshold makes the user stay within the hard limit for the user quota. Therefore, the user quota thresholds are crossed even before users exceed their hard limits for user quotas.

By default, no thresholds are defined in DataFabric Manager for the soft quotas. DataFabric Manager uses the soft quota limits set in the /etc/quotas file of a storage system to determine whether a user has crossed the soft quota. DataFabric Manager can also send a user alert when users exceed their soft quota limit.

User quota thresholds in Operations Manager
You can set a user quota threshold for all the user quotas present in a volume or a qtree. When you configure a user quota threshold for a volume or qtree, the settings apply to all user quotas on that volume or qtree. Similarly, DataFabric Manager uses the user quota thresholds to monitor the hard and soft quota limits configured in the /etc/quotas file of each storage system.

Ways to configure user quota thresholds in Operations Manager
You can configure user quota thresholds in Operations Manager by applying the thresholds to all quotas of a specific user, on a specific file system, or on a group of file systems:
• Apply user quota thresholds to all quotas of a specific user.
• Apply user quota thresholds to all quotas on a specific file system (volume or qtree) or a group of file systems.
You can apply these thresholds by using the Edit Quota Settings links on the lower left pane of the Details page for a specific volume or qtree. You can access the Volume Details page by clicking a volume name at Control Center > Home > Member Details > File Systems > Report > Volumes, All. Similarly, for the Qtree Details page, click the qtree name at Control Center > Home > Member Details > File Systems > Report > Qtrees, All. To apply settings to a group of file systems, select the group name from the Apply Settings To list on the quota settings page.

• Apply user quota thresholds to all quotas on all users on all file systems: that is, all user quotas in the DataFabric Manager database.
You can apply these thresholds at Setup > Options > Edit Options: Default Thresholds.

Precedence of user quota thresholds in DataFabric Manager
DataFabric Manager prioritizes user quota thresholds based on whether they are set for a specific user, for a specific volume or qtree, or for all users in DataFabric Manager. The following list specifies the order in which user quota thresholds are applied:
1. User quota thresholds specified for a specific user
2. File system (volumes and qtrees) user quota thresholds specified for a specific volume or qtree
3. Global user quota thresholds specified for all users in the DataFabric Manager database
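An illustrative example of this precedence (the names and values are hypothetical): if the global User Quota Full Threshold is 90, the threshold for volume vol1 is set to 85, and the threshold for user jdoe is set to 95, then jdoe's quotas on vol1 are evaluated against 95 (rule 1), other users' quotas on vol1 are evaluated against 85 (rule 2), and user quotas on volumes with no specific setting are evaluated against the global value of 90 (rule 3).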


Management of LUNs, Windows and UNIX hosts, and FCP targets
Operations Manager is used to monitor and manage LUNs, Windows and UNIX hosts, and FCP targets in your SANs. SANs on the DataFabric Manager server are storage networks that have been installed in compliance with the specified SAN setup guidelines. For more information about setting up a SAN, see the Data ONTAP Block Access Management Guide for iSCSI and FC.

Note: NetApp has announced the end of availability for the SAN license for DataFabric Manager. Existing customers can continue to license the SAN option with DataFabric Manager. DataFabric Manager customers should check with their NetApp sales representative regarding other NetApp SAN management solutions.

Next topics
Management of SAN components on page 169
SAN and NetApp Host Agent software on page 170
List of tasks performed using NetApp Host Agent software on page 170
List of tasks performed to monitor targets and initiators on page 171
Reports for monitoring LUNs, FCP targets, and SAN hosts on page 172
Information available on the LUN Details page on page 173
Information about the FCP Target Details page on page 174
Information about the Host Agent Details page on page 175
How storage systems, SAN hosts, and LUNs are grouped on page 176
Introduction to deleting and undeleting SAN components on page 177
Where to configure monitoring intervals for SAN components on page 178

Related information
Data ONTAP Block Access Management Guide for iSCSI and FCP - http://now.netapp.com/NOW/knowledge/docs/san/#ontap_san

Management of SAN components
To monitor and manage LUNs, FCP targets, and SAN hosts, the DataFabric Manager server must first discover them. The DataFabric Manager server uses SNMP to discover storage systems, but SAN hosts must already have the NetApp Host Agent software installed and configured on them before the DataFabric Manager server can discover them. After SAN components have been discovered, the DataFabric Manager server starts collecting pertinent data, for example, which LUNs exist on which storage systems. Data is collected periodically and reported through various Operations Manager reports. (The frequency of data collection depends on the values that are assigned to the DataFabric Manager server monitoring intervals.)

The DataFabric Manager server monitors LUNs, FCP targets, and SAN hosts for a number of predefined conditions and thresholds, for example, when the state of an HBA port changes to online or offline, or when the traffic on an HBA port exceeds a specified threshold. If a predefined condition is met or a threshold is exceeded, the DataFabric Manager server generates and logs an event in its database. These events can be viewed through the details page of the affected object. You can also configure the DataFabric Manager server to send notification about such events (also known as alarms) to an e-mail address, a pager, an SNMP trap host, or a script you write.

SAN and NetApp Host Agent software
The DataFabric Manager server can automatically discover SAN hosts; however, it does not use SNMP to poll for new hosts. NetApp Host Agent software discovers, monitors, and manages SANs on SAN hosts. You must install the NetApp Host Agent software on each SAN host that you want to monitor and manage with the DataFabric Manager server.

Note: NetApp Host Agent is also used, with a File SRM license, for File Storage Resource Management functions.

Note: To modify the global host agent monitoring interval for SAN hosts, you must change the SAN Host Monitoring Interval (Setup > Options > Monitoring).

For more information about the NetApp Host Agent software, see the NetApp Host Agent Installation and Administration Guide.

List of tasks performed using NetApp Host Agent software
Once NetApp Host Agent software is installed on a client host along with DataFabric Manager server, you can perform a variety of tasks:
• Monitor basic system information for SAN hosts and related devices.
• View detailed HBA and LUN information.
• Perform management functions, such as creating, modifying, or expanding a LUN.

In addition to monitoring LUNs, FCP targets, and SAN hosts, you can use the DataFabric Manager server to manage these components. For example, you can create, delete, or expand a LUN.

List of tasks performed to monitor targets and initiators
You can use Operations Manager to perform management tasks such as view reports, and manage, monitor, and respond to LUN and SAN host events. Following is a list of tasks you can perform to monitor targets and initiators:
• View reports that provide information about LUNs, FCP targets, storage systems in a SAN, and SAN hosts.
• View details about a specific LUN, FCP target on a storage system, or SAN host.
• View and respond to LUN and SAN host events.
• Configure DataFabric Manager server to generate alarms to notify recipients of LUN and SAN host events.
• Group LUNs, storage systems in a SAN, or SAN hosts for efficient monitoring and management.
• Change the monitoring intervals for LUNs, FCP targets, and SAN hosts.

Next topics
Prerequisites to manage targets and initiators on page 171
Prerequisites to manage SAN hosts on page 172

Related concepts
Reports for monitoring LUNs, FCP targets, and SAN hosts on page 172
Information available on the LUN Details page on page 173
Tasks performed from the LUN Details page on page 174
Information about the FCP Target Details page on page 174
Information about the Host Agent Details page on page 175
List of tasks performed from the Host Agent Details page on page 175

Prerequisites to manage targets and initiators
Operations Manager does not report any data for your targets and initiators if you do not have the SAN set up according to the specified hardware and software requirements. SAN deployments are supported on specific hardware platforms running Data ONTAP 6.3 or later. For more information about the supported hardware platforms, see the Compatibility and Configuration Guide for FCP and iSCSI Products. For more information about specific software requirements, see the DataFabric Manager Installation and Upgrade Guide.

Related information
DataFabric Manager Installation and Upgrade Guide - http://now.netapp.com/NOW/knowledge/docs/DFM_win/dfm_index.shtml
Compatibility and Configuration Guide for FCP and iSCSI Products - http://now.netapp.com/NOW/products/interoperability/

Prerequisites to manage SAN hosts
You must ensure a proper network connection and software installed on SAN hosts before you manage SAN hosts with DataFabric Manager server.
• All SAN hosts to be managed by the DataFabric Manager server must be connected to a TCP/IP network either known to or discoverable by the DataFabric Manager server. The SAN hosts must be connected to the network through an Ethernet port and must each have a valid IP address.
• Each SAN host must have the NetApp Host Agent software installed on it. The NetApp Host Agent software is required for discovering, monitoring, and managing SAN hosts. For more information about the NetApp Host Agent software, see the NetApp Host Agent Installation and Administration Guide.
• Windows SAN hosts must have the proper version of SnapDrive software installed. To find out which SnapDrive version you received, see the DataFabric Manager server software download pages at the NOW site.

For LUN management using the DataFabric Manager server, LUNs inherit access control settings from the storage system, volume, and qtree they are contained in. Therefore, to perform LUN operations on storage systems, you must have appropriate privileges set up on those storage systems.

Note: LUN management on UNIX SAN hosts using the DataFabric Manager server is not available.

Related information
NetApp Host Agent Installation and Administration Guide - http://now.netapp.com/NOW/knowledge/docs/nha/nha_index.shtml
The NOW site - http://now.netapp.com/

Reports for monitoring LUNs, FCP targets, and SAN hosts
Reports about LUNs, SAN hosts, and FCP targets that the DataFabric Manager server monitors are available on the LUNs page of the Member Details tab. You can view reports by selecting from the Report drop-down list. If you want to view a report about a specific group, click the group name in the left pane of Operations Manager.

You can view the following reports from the LUNs page:
• FCP Targets
• SAN Hosts, All
• SAN Hosts, Comments
• SAN Hosts, FCP
• SAN Hosts, iSCSI

• SAN Hosts, Deleted
• SAN Hosts Traffic, FCP
• SAN Host Cluster Groups
• SAN Host LUNs, All
• SAN Host LUNs, FCP
• SAN Host LUNs, iSCSI
• LUNs, All
• LUNs, Comments
• LUNs, Deleted
• LUNs, Unmapped
• LUN Statistics
• LUN Initiator Groups
• Initiator Groups

For more information about descriptions of each report field, see the Operations Manager Help.

Information available on the LUN Details page
The LUN Details page for a LUN consists of information such as the status of the LUN, the LUN's storage system, and so on. You can access the Details page for a LUN by clicking the LUN path displayed in any of the reports.

Following is the information available on the LUN Details page:
• Status of the LUN
• Storage system on which the LUN exists
• Volume or qtree on which the LUN exists
• Initiator groups to which the LUN is mapped. You can access all LUN paths that are mapped to an initiator group by clicking the name of the initiator group.
  Note: If a LUN is mapped to more than one initiator group, when you click an initiator group, the displayed page lists all the LUN paths that are mapped to the initiator group. Additionally, the report contains all other LUN mappings (LUN paths to initiator groups) that exist for those LUN paths.
• Size of the LUN
• Serial number of the LUN
• Description of the LUN
• Events associated with the LUN
• Groups to which the LUN belongs
• Number of LUNs configured on the storage system on which the LUN exists and a link to a report displaying those LUNs

• Number of SAN hosts mapped to the LUN and a link to the report displaying those hosts
• Number of HBA ports that can access this LUN and a link to the report displaying those ports
• Time of the last sample collected and the configured polling interval for the LUN
• Graphs that display the following information:
  • LUN bytes per second: displays the rate of bytes (bytes per second) read from and written to the LUN over time
  • LUN operations per second: displays the rate of total protocol operations (operations per second) performed on the LUN over time

Related concepts
Reports for monitoring LUNs, FCP targets, and SAN hosts on page 172

Tasks performed from the LUN Details page
By using the LUN Path Tools links on the LUN Details page, you can perform various management tasks. Following are the tasks you can perform on the LUN Details page:
Expand this LUN: Launches a wizard that helps you expand the LUN.
Destroy this LUN: Launches a wizard that helps you destroy the LUN.
Refresh Monitoring Samples: Obtains current monitoring samples from the storage system on which this LUN exists.
Run a Command: Runs a Data ONTAP command on the storage system on which this LUN exists. You must have appropriate authentication to run commands on the storage system from the DataFabric Manager server.

Note: To manage a shared LUN on MSCS, perform the operation on the active controller. Otherwise, the operation fails.
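The Run a Command tool has a command-line counterpart in the DataFabric Manager CLI. A minimal sketch, assuming a storage system named filer1 (the host name and the Data ONTAP commands shown are illustrative):

   dfm run cmd filer1 lun show
   dfm run cmd filer1 df -h

As in the GUI, the command runs with the credentials DataFabric Manager has configured for that storage system, so the same authentication requirement applies.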

Information about the FCP Target Details page
The FCP Target Details page contains information such as the operational status of the target, the name of the storage system, adapter speed, and so on. You can access the FCP Target Details page for a target by clicking its port number in the FCP Targets report (LUNs > View, FCP Targets).

The FCP Target Details page contains the following information:
• Name of the storage system on which the target is installed
• Operational status of the target
• Adapter hardware and firmware versions
• Adapter speed
• FC topology of the target: Fabric, Point-To-Point, Loop, or Unknown
• Node name (WWNN) and port name (WWPN) of the target
• Name of the FC port to which the target connects
• Number of other FCP targets on the storage system on which the target is installed (link to report)
• Number of HBA ports (SAN host ports) that the target can access (link to report)
• Time of the last sample collected and the configured polling interval for the FCP target

Information about the Host Agent Details page
The Host Agent Details page contains information such as the status of the SAN host, devices related to the SAN host, events that occurred on the SAN host, port information for the SAN host, and so on. You can access the Details page for a NetApp Host Agent by clicking its name in any of the SAN Host reports.

The Details page for a Host Agent on a SAN host contains the following information:
• Status of the SAN host and the time since the host has been up
• The operating system and the NetApp Host Agent version, in addition to protocols and features running on the SAN host
• The MSCS configuration information about the SAN host, if any, such as the cluster name, cluster partner, and cluster groups to which the SAN host belongs
• The events that occurred on this SAN host
• The number of HBAs and HBA ports on the SAN host (links to report)
• The devices related to the SAN host, such as the FC switch ports connected to it and the storage systems accessible from the SAN host
• Graphs of information, such as the HBA port traffic per second or the HBA port frames for different time intervals

For more information about the SAN Host reports, see the Operations Manager Help.

List of tasks performed from the Host Agent Details page
The Host Tools list on the Host Agent Details page enables you to perform various tasks, such as edit settings, refresh monitoring samples, create a LUN, and so on.

Edit Settings: Displays the Edit Host Agent Settings page, where you configure the login, password, administration transport, and the user name and password for CIFS access in Operations Manager.

The login and password information is used to authenticate the DataFabric Manager server to the NetApp Host Agent software running on the SAN host. Specify a value on this page only if you want to change the global setting.

Create a LUN: Takes you to a LUN creation page that helps you create a LUN.
Diagnose Connectivity: Automates connectivity troubleshooting.
Refresh Monitoring Samples: Obtains current monitoring samples from the SAN host.
Manage Host Agent: Allows you to edit settings for the Host Agent, such as the HTTP and HTTPS ports, and the monitoring and management of API passwords. This enables remote upgrading and specifies a filewalk log path.

How storage systems, SAN hosts, and LUNs are grouped
You can group LUNs, storage systems, or SAN hosts for easier management and to apply access control. When you create a group of storage systems or SAN hosts, the type of the created group is "Appliance resource group." When you create a group of LUNs, the created group is "LUN resource group."

Note: You cannot group HBA ports or FCP targets.

Related concepts
Where to configure monitoring intervals for SAN components on page 178

Related tasks
Creating groups on page 82

Granting access to storage systems, SAN hosts, and LUNs
You can allow an administrator access for managing all your SAN hosts and LUNs.

Before you begin
The GlobalSAN role must be enabled for LUN management. The GlobalSAN role allows an administrator to create, expand, and destroy LUNs.

Step
1. To allow administrator access, go to the Administrator page (select Setup menu > Administrative Users), and then do one of the following:

Option: To create a new administrator
Description: In the Administrators page, complete the Add a New Administrator option, and then select GlobalSAN from the Roles list.

Option: To grant access to an existing administrator
Description: In the Administrators page, from the list of administrators, click the Edit column of the administrator to be granted access, and then select GlobalSAN from the Roles list.

Introduction to deleting and undeleting SAN components
You can stop monitoring a SAN component (a LUN, a storage system, or a SAN host) with the DataFabric Manager server by deleting it from the Global group. When you delete a SAN component from the Global group, the DataFabric Manager server stops collecting and reporting data about it. Data collection and reporting is not resumed until the component is added again by performing the undelete operation.

You cannot stop monitoring a specific FCP target or an HBA port, unless you first stop monitoring the storage system (for the FCP target) or the SAN host (for the HBA port) on which the target or the port exists.

Next topics
Deleting a SAN component on page 177
How a deleted SAN component is restored on page 178

Deleting a SAN component
You can delete a SAN component from any of the reports related to that component.

Steps
1. Select the component you want to delete by clicking the check boxes in the left-most column of a report.
2. Click the Delete Selected button at the bottom of each report to delete the selected component.

Note: When you delete a SAN component from any group except Global, the component is deleted only from that group. The DataFabric Manager server does not stop collecting and reporting data about it. You must delete the SAN component from the Global group for the DataFabric Manager server to stop monitoring it altogether.

How a deleted SAN component is restored
You can restore a deleted object by selecting it and then clicking the Undelete button from the Deleted report. All deleted objects are listed in their respective Deleted reports. For example, all deleted LUNs are listed in the LUNs, Deleted report.

Where to configure monitoring intervals for SAN components
You can configure the global options on the Options page (Setup menu > Options) in Operations Manager. To configure local options (for a specific object), you must be on the Edit Settings page of that specific object (Details page > Tools list: Edit Settings). For example, to access the Edit Settings page for a LUN, click Member Details > LUNs > Report drop-down list: LUNs, All > LUN Path > LUN Path Tools: Edit Settings.

Related concepts
Where to find information about a specific storage system on page 208

File system management
You can manage storage on storage systems by using the data displayed by Operations Manager storage reports and the options you use to generate storage-related events and alarms. You can configure alarms to send notification whenever a storage event occurs.

Next topics
Access to storage-related reports on page 179
Storage capacity thresholds in Operations Manager on page 179
Management of aggregate capacity on page 181
Management of volume capacity on page 186
Management of qtree capacity on page 192
How Operations Manager monitors volumes and qtrees on a vFiler unit on page 195
What clone volumes are on page 196
Why Snapshot copies are monitored on page 197
Storage chargeback reports on page 198
The chargeback report options on page 200
What deleting storage objects for monitoring is on page 203

Access to storage-related reports
You can view storage-related reports about storage objects that DataFabric Manager monitors. You can find storage-related reports on the tabs accessible from the Member Details tab: the Physical Systems, Virtual Systems, File Systems, Aggregates, SANs, and LUNs tabs. Each tab has a Report drop-down list from which you can select the report you want to display. The storage reports present information about a selected aspect of the storage object, such as chargeback, space availability, and status.

Note: The status specifies the current status of the selected storage object.

For information about specific storage-related reports, see the Operations Manager Help.

Storage capacity thresholds in Operations Manager
A storage capacity threshold is the point at which DataFabric Manager generates events to report a capacity problem. Storage capacity thresholds determine at what point you want DataFabric Manager to generate events about capacity problems.
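The threshold-setting procedures that follow use the Operations Manager GUI; the global defaults can also be read and changed from the DataFabric Manager CLI. This is a sketch only: dfm option list and dfm option set are standard commands, but the option names aggrFullThreshold and volFullThreshold are assumptions; confirm the exact names in the dfm option list output for your release.

   dfm option list
   dfm option set aggrFullThreshold=85
   dfm option set volFullThreshold=85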

When DataFabric Manager is installed, the storage capacity thresholds for all aggregates, volumes, and qtrees are set to default values. You can change the settings as needed for an object, a group of objects, or the Global group.

Next topics
Modification of storage capacity thresholds settings on page 180
Changing storage capacity threshold settings for global group on page 180
Changing storage capacity threshold settings for an individual group on page 180
Changing storage capacity threshold settings for a specific aggregate, volume, or qtree on page 181

Modification of storage capacity thresholds settings
You can change the storage capacity threshold settings globally, for a specific group, or for a specific aggregate, volume, or qtree. If you edit capacity threshold settings, the edited thresholds override the global thresholds.

Note: Changing storage capacity threshold settings for the Global group changes the default storage capacity settings for all groups and individual storage objects.

Changing storage capacity threshold settings for global group
Perform the following steps to change the storage capacity threshold settings for the global group.

Steps
1. Select Setup > Options.
2. Click Default Thresholds in the left pane.
3. Edit the Default settings as needed.
4. Click Update.

Changing storage capacity threshold settings for an individual group
Perform the following steps to change the storage capacity threshold settings for an individual group.

Steps
1. Select Control Center > Home > Member Details.
2. Click Aggregates, to change aggregate options, or File Systems, to change volume or qtree options.
3. Click the name of an aggregate, volume, or qtree.
4. Click Edit Settings under the Tools section in the left pane.

5. Select the name of the group from the Apply Settings To drop-down list.
6. Edit the desired settings.
7. Click Update and then click OK.
8. Approve the change by clicking OK on the verification page.

Changing storage capacity threshold settings for a specific aggregate, volume, or qtree
Perform the following steps to change the storage capacity threshold settings for a specific aggregate, volume, or qtree.

Steps
1. Click Aggregates to change aggregate options or File Systems to change volume or qtree options.
2. Click the name of an aggregate, volume, or qtree.
3. Click Edit Settings under the Tools section in the left pane.
4. Edit the settings as needed.
5. Click Update.
6. Approve the change by clicking OK on the verification page.

Note: To revert to the default settings, leave the fields empty.

Management of aggregate capacity
You can manage aggregate capacity by gathering aggregate information and aggregate overcommitment information, by tracking aggregate space utilization, and by determining aggregate capacity thresholds. When managing storage resources, it is important to understand the role that aggregate overcommitment plays in space availability.

Next topics
Volume space guarantees and aggregate overcommitment on page 181
Available space on an aggregate on page 182
Considerations before modifying aggregate capacity thresholds on page 182
Aggregate capacity thresholds and their events on page 183

Volume space guarantees and aggregate overcommitment
You can use aggregate overcommitment to advertise more available space than is actually available. To use aggregate overcommitment, you must create flexible volumes with a space guarantee of none or file so that the aggregate size does not limit the volume size.

Each volume can be larger than its containing aggregate. If you have several volumes that sometimes need to grow temporarily, the volumes can dynamically share the available space with each other. By using aggregate overcommitment, the storage system can advertise more available storage than actually exists in the aggregate. Alternatively, you could provide greater amounts of storage that you know would be used immediately.

Note: If you have overcommitted your aggregate, you must monitor its available space carefully and add storage as needed to avoid write errors due to insufficient space.

For details about volume space reservations, see the Data ONTAP Storage Management Guide.

Related information
Data ONTAP Storage Management Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Available space on an aggregate
With Operations Manager, you can determine the available space on an aggregate. You can use the storage space that the aggregate provides, as needed, by creating LUNs or adding data to volumes. To help you determine the space availability on an aggregate, Operations Manager displays three values on the Aggregate Details page for each aggregate:
Aggregate size: The total size of the aggregate.
Capacity used: The total amount of disk space in use by volumes present in the aggregate.
Total committed capacity: The total amount of disk space committed to volumes present in the aggregate. The total committed capacity can be larger than the total capacity by using aggregate overcommitment.

Considerations before modifying aggregate capacity thresholds
You must note the aggregate overcommitment point before changing aggregate capacity thresholds. Ensure that you note the difference between your storage commitments and actual storage usage. Take care of the following points when deciding whether to modify the aggregate capacity thresholds:

Use of aggregate overcommitment strategy: If you use the aggregate overcommitment strategy, you would want to increase the Aggregate Overcommitted threshold to more than 100. To determine how far beyond 100 to set the threshold, review the capacity graphs of historical data to get a sense of how the amount of storage used changes over time. Because the aggregate is overcommitted, you might also want to set the Aggregate Full and Aggregate Nearly Full thresholds to values lower than the default. Lowering the thresholds generates an event well before the storage is completely filled. Early notification gives you more time to take corrective action, such as installing more storage.
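A worked example of the overcommitment arithmetic (the numbers are illustrative): an aggregate with 1,000 GB of capacity that contains flexible volumes whose combined size is 1,400 GB has a total committed capacity of 140 percent. With the Aggregate Overcommitted threshold at its default of 100, the event is generated as soon as commitments reach the aggregate's capacity; raising the threshold to 150 would defer the event until commitments exceed 1,500 GB.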

Non-usage of aggregate overcommitment: If you do not use aggregate overcommitment as a storage-management strategy, you must leave the Aggregate Overcommitted and Aggregate Nearly Overcommitted threshold values unchanged from their default.

Setting the Aggregate Full and Aggregate Nearly Full thresholds: Set the Aggregate Full and Aggregate Nearly Full thresholds so that you have time to take corrective action, such as installing more storage, before the storage space is full and write errors occur.

Setting the Aggregate Nearly Full threshold: If an aggregate is routinely more than 80 percent full, set the Aggregate Nearly Full threshold to a value higher than the default.

Aggregate capacity thresholds and their events
You can configure aggregate capacity thresholds and their events from DataFabric Manager. DataFabric Manager features aggregate capacity thresholds and their events to help you monitor both the capacity and the commitments of an aggregate. You can configure alarms to send notification whenever an aggregate capacity event occurs. By default, if you have configured an alarm to alert you to an event, DataFabric Manager issues the alarm only once per event. You can configure the alarm to repeat until you receive an acknowledgment. For the Aggregate Full threshold, you can also configure an alarm to send notification only when the condition persists over a specified time.

Note: If you want to set an alarm on a specific aggregate, you must create a group with that aggregate as the only member.

You can edit thresholds for a particular aggregate from the Aggregate Details page. If you edit capacity thresholds for a particular aggregate, the edited thresholds override the global thresholds.

Note: To reduce the number of Aggregate Full Threshold events generated, you can set an Aggregate Full Threshold Interval (default: 0 seconds). This causes DataFabric Manager to generate an Aggregate Full event only if the condition persists for the specified time.

You can set the following aggregate capacity thresholds:

Aggregate Full (%)
Description: Specifies the percentage at which an aggregate is full.
Default value: 90 percent
Event generated: Aggregate Full

Event severity: Error
Corrective action: Take one or more of the following actions:
• To free disk space, ask your users to delete files that are no longer needed from volumes contained in the aggregate that generated the event.
• Add one or more disks to the aggregate that generated the event.
  Note: Once you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs.
• Temporarily reduce the Snapshot reserve. By default, the reserve is 20 percent of disk space. If the reserve is not in use, reducing the reserve can free disk space, giving you more time to add a disk. There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. It is, therefore, important to maintain a large enough reserve for Snapshot copies so that the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Aggregate Nearly Full (%)
Description: Specifies the percentage at which an aggregate is nearly full.
Default value: 80 percent. The value for this threshold must be lower than the value for the Aggregate Full threshold for DataFabric Manager to generate meaningful events.
Event generated: Aggregate Almost Full
Event severity: Warning
Corrective action: Same as Aggregate Full.

Aggregate Overcommitted (%)
Description: Specifies the percentage at which an aggregate is overcommitted.
Default value: 100 percent
Event generated: Aggregate Overcommitted
Event severity: Error
Corrective action: Take one or more of the following actions:
• Create new free blocks in the aggregate by adding one or more disks to the aggregate that generated the event.
  Note: Add disks with caution. Once you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs.
• Destroy the aggregate itself once all the flexible volumes are removed from the aggregate.

• Temporarily free some already occupied blocks in the aggregate by taking unused flexible volumes offline.
  Note: When you take a flexible volume offline, it returns any space it uses to the aggregate. However, when you bring the flexible volume online again, it requires the space again.
• Permanently free some already occupied blocks in the aggregate by deleting unnecessary files.
• Destroy the aggregate itself once all the flexible volumes are destroyed.

Aggregate Nearly Overcommitted (%)
Description: Specifies the percentage at which an aggregate is nearly overcommitted.
Default value: 95 percent. The value for this threshold must be lower than the value for the Aggregate Overcommitted threshold for DataFabric Manager to generate meaningful events.
Event generated: Aggregate Almost Overcommitted
Event severity: Warning
Corrective action: Same as Aggregate Overcommitted.

Aggregate Snapshot Reserve Nearly Full Threshold (%)
Description: Specifies the percentage of the Snapshot copy reserve on an aggregate that you can use before the system raises the Aggregate Snapshots Nearly Full event.
Default value: 80 percent
Event generated: Aggregate Snapshot Reserve Almost Full
Event severity: Warning
Corrective action: There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the aggregate Snapshot autodelete option, it is important to maintain a large enough reserve. Disabling would ensure that there is always space available to create new files or modify present ones. See the Operations Manager Help for instructions on how to identify Snapshot copies you can delete. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Aggregate Snapshot Reserve Full Threshold (%)
Description: Specifies the percentage of the Snapshot copy reserve on an aggregate that you can use before the system raises the Aggregate Snapshots Full event.
Default value: 90 percent
Event generated: Aggregate Snapshot Reserve Full
Event severity: Warning
Corrective action: There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them.

Related tasks
Creating alarms on page 100

Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Management of volume capacity
You can manage volume capacity by gathering volume information, by determining volume capacity thresholds and events, by modifying volume capacity thresholds, and by setting volume Snapshot copy thresholds and events.

Next topics
Volume capacity thresholds and events on page 186
Normal events for a volume on page 191
Modification of the thresholds on page 191

Volume capacity thresholds and events
DataFabric Manager features thresholds to help you monitor the capacity of flexible and traditional volumes. You can configure alarms to send notification whenever a volume capacity event occurs. By default, if you have configured an alarm to alert you to an event, DataFabric Manager issues the alarm only once per event. You can configure the alarm to repeat until it is acknowledged. For the Volume Full threshold, you can also configure an alarm to send notification only when the condition persists over a specified period.

Note: A newly created traditional volume tightly couples with its containing aggregate, so that the capacity of the aggregate determines the capacity of the new traditional volume. For this reason, synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.

Note: If you want to set an alarm on a specific volume, you must create a group with that volume as the only member.

You can set the following volume capacity thresholds:

Volume Full Threshold (%)
Description: Specifies the percentage at which a volume is considered full.
Default value: 90
Event generated: Volume Full
Event severity: Error
Corrective action: Take one or more of the following actions:
• Ask your users to delete files that are no longer needed, to free disk space.
• For flexible volumes containing enough aggregate space, you can increase the volume size.
• For traditional volumes containing an aggregate with limited space, you can increase the size of the volume by adding one or more disks to the containing aggregate.
  Note: Add disks with caution. Once you add a disk to an aggregate, you cannot remove it without destroying the volume and its containing aggregate.
• Temporarily reduce the Snapshot copy reserve.

Note: To reduce the number of Volume Full Threshold events generated, you can set a Volume Full Threshold Interval to a non-zero value. By default, the Volume Full threshold interval is set to zero. The Volume Full Threshold Interval specifies the period of time during which the condition must persist before the event is generated. If the condition persists for the specified amount of time, the DataFabric Manager server generates a Volume Full event.
• If the threshold interval is 0 seconds or a value less than the volume monitoring interval, the DataFabric Manager server generates Volume Full events as they occur.
• If the threshold interval is greater than the volume monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Volume Full event only if the condition persisted throughout the threshold interval.
For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring intervals.
Threshold intervals apply only to error and informational events.

By default, the Snapshot copy reserve is 20 percent of disk space. If the reserve is not in use, reducing the reserve frees disk space. Therefore, it is important to maintain a large enough reserve for Snapshot copies. By maintaining the reserve for Snapshot copies, the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot copy reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Volume Nearly Full Threshold (%)
Description: Specifies the percentage at which a volume is considered nearly full. The value for this threshold must be lower than the value for the Volume Full Threshold in order for DataFabric Manager to generate meaningful events.
Default value: 80
Event generated: Volume Almost Full
Event severity: Warning
Corrective action: Same as Volume Full.

Volume Space Reserve Nearly Depleted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed most of its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that crosses this threshold is getting close to having write failures.
Default value: 80
Event generated: Volume Space Reservation Nearly Depleted
Event severity: Warning

Volume Space Reserve Depleted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed all its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that has crossed this threshold is getting dangerously close to having write failures.
Default value: 90
Event generated: Volume Space Reservation Depleted
Event severity: Error

When the status of a volume returns to normal after one of the preceding events, events with a severity of Normal are generated. Normal events do not generate alarms or appear in default event lists, which display events of Warning or worse severity.

Volume Quota Overcommitted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed the whole of the overcommitted space for that volume.
Default value: 100
Event generated: Volume Quota Overcommitted
Event severity: Error
Corrective action: Take one or more of the following actions:
• Create new free blocks by increasing the size of the volume that generated the event.
• Permanently free some of the already occupied blocks in the volume by deleting unnecessary files.

Volume Quota Nearly Overcommitted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed most of the overcommitted space for that volume.
Default value: 95
Event generated: Volume Quota Almost Overcommitted
Event severity: Warning
Corrective action: Same as that of Volume Quota Overcommitted.

Volume Growth Event Minimum Change (%)
Description: Specifies the minimum change in volume size (as a percentage of total volume size) that is acceptable. If the change in volume size is more than the specified value, and the growth is abnormal with respect to the volume-growth history, the DataFabric Manager server generates a Volume Growth Abnormal event.
Default value: 1
Event generated: Volume Growth Abnormal

Volume Snap Reserve Full Threshold (%)
Description: Specifies the value (percentage) at which the space that is reserved for taking volume Snapshot copies is considered full.
Default value: 90
Event generated: Volume Snap Reserve Full
Event severity: Error
Corrective action: There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the volume Snapshot autodelete option, it is important to maintain a large enough reserve; disabling it would ensure that there is always space available to create new files or modify present ones. For instructions on how to identify Snapshot copies you can delete, see the Operations Manager Help.

User Quota Full Threshold (%)
Description: Specifies the value (percentage) at which a user is considered to have consumed all the allocated space (disk space or files used) as specified by the user's quota. The user's quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, the DataFabric Manager server generates a User Disk Space Quota Full event or a User Files Quota Full event.
Default value: 90
Event generated: User Quota Full

User Quota Nearly Full Threshold (%)
Description: Specifies the value (percentage) at which a user is considered to have consumed most of the allocated space (disk space or files used) as specified by the user's quota. The user's quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, the DataFabric Manager server generates a User Disk Space Quota Almost Full event or a User Files Quota Almost Full event.
Default value: 80
Event generated: User Quota Almost Full

Volume No First Snapshot Threshold (%)
Description: Specifies the value (percentage) at which a volume is considered to have consumed all the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.
Default value: 90
Event generated: Volume No First Snapshot

Volume Nearly No First Snapshot Threshold (%)
Description: Specifies the value (percentage) at which a volume is considered to have consumed most of the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.
Default value: 80
Event generated: Volume Almost No First Snapshot

Note: When a traditional volume is created, it is tightly coupled with its containing aggregate so that its capacity is determined by the capacity of the aggregate. For this reason, you should synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.

Modification of the thresholds
You can set the thresholds to a value higher or lower than the default, to suit your needs. You might want to set the thresholds to a value higher than the default if storage space is routinely more than 80 percent full; leaving the Nearly Full Threshold at the default value might generate events that notify you that storage space is nearly full more often than you want. You might want to set the thresholds to a value lower than the default. Lowering the threshold ensures that DataFabric Manager generates the event well before completely filling the storage. An early notification gives you more time to take corrective action before the storage space is full.

Normal events for a volume
Normal events do not generate alarms or appear in default event lists, which display events of Warning or worse severity. To view normal events for a volume, do either of the following:
• Display the Volume Details page for the volume.
• Click the Events tab, then go to the Report drop-down list and select the History report.

Related concepts
Volume Snapshot copy thresholds and events on page 192

Related tasks
Creating alarms on page 100

Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Management of qtree capacity
You can manage qtree capacity by gathering qtree information, tracking qtree capacity utilization, and determining qtree capacity thresholds and events.

Next topics
Volume Snapshot copy thresholds and events on page 192
Qtree capacity thresholds and events on page 194

Volume Snapshot copy thresholds and events
You can set alarms whenever a Snapshot copy is taken on a flexible or a traditional volume. DataFabric Manager features thresholds to help you monitor Snapshot copy usage for flexible and traditional volumes. You can configure alarms to send notification whenever a volume Snapshot copy event occurs. By default, if you have configured an alarm to alert you to an event, DataFabric Manager issues the alarm only once per event. You can configure the alarm to repeat until it is acknowledged.

Note: If you want to set an alarm on a specific volume, you must create a group with that volume as the only member.

You can set the following volume Snapshot thresholds:

Volume Snap Reserve Full Threshold (%)
Description: Specifies the percentage at which the space that is reserved for taking volume Snapshot copies is considered full.
Default value: 90
Event generated: Snapshot Reserve Full
Event severity: Warning
Corrective action:
1. Access the Volume Snapshot copy details report.
2. Select the Snapshot copies.
3. Click Compute Reclaimable.

Volume Nearly No First Snapshot Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed most of the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.

Default value: 80 percent
Event generated: Nearly No Space for First Snapshot
Event severity: Warning

Volume No First Snapshot Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed all the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.
Default value: 90 percent
Event generated: No Space for First Snapshot
Event severity: Warning

Volume Snapshot Count Threshold
Description: Specifies the number of Snapshot copies, which, if exceeded, is considered too many for the volume. A volume is allowed up to 255 Snapshot copies.
Default value: 250
Event generated: Too Many Snapshots
Event severity: Error

Volume Too Old Snapshot Threshold
Description: Specifies the age of a Snapshot copy, which, if exceeded, is considered too old for the volume. The Snapshot copy age can be specified in seconds, minutes, hours, days, or weeks.
Default value: 52 weeks
Event generated: Too Old Snapshot copies
Event severity: Warning

Related concepts
Volume capacity thresholds and events on page 186

Related references
Configuration guidelines on page 99

Qtree capacity thresholds and events
Operations Manager enables you to monitor qtree capacity and set alarms. DataFabric Manager features thresholds to help you monitor the capacity of qtrees. Quotas must be enabled on the storage system. You can configure alarms to send notification whenever a qtree capacity event occurs. By default, if you have configured an alarm to alert you to an event, DataFabric Manager issues the alarm only once per event. You can configure the alarm to repeat until it is acknowledged. For the Qtree Full threshold, you can also configure an alarm to send notification only when the condition persists over a specified period.

Note: If you want to set an alarm on a specific qtree, you must create a group with that qtree as the only member.

You can set the following qtree capacity thresholds:

Qtree Full Threshold (%)
Description: Specifies the percentage at which a qtree is considered full.
Note: To reduce the number of Qtree Full Threshold events generated, you can set a Qtree Full Threshold Interval to a non-zero value. By default, the Qtree Full threshold interval is set to zero. Qtree Full Threshold Interval specifies the period of time during which the condition must persist before the event is generated. If the condition persists for the specified amount of time, the DataFabric Manager server generates a Qtree Full event. Threshold intervals apply only to error and informational events.
• If the threshold interval is 0 seconds or a value less than the volume monitoring interval, the DataFabric Manager server generates Qtree Full events as they occur.
• If the threshold interval is greater than the volume monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Qtree Full event only if the condition persisted throughout the threshold interval. For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring intervals.
Default value: 90 percent
Event generated: Qtree Full
Event severity: Error
Corrective action: Take one or more of the following actions:
• Ask users to delete files that are no longer needed, to free disk space.
• Increase the hard disk space quota for the qtree.

Qtree Nearly Full Threshold (%)
Description: Specifies the percentage at which a qtree is considered nearly full.

Default value: 80 percent. The value for this threshold must be lower than the value for Qtree Full Threshold for DataFabric Manager to generate meaningful events.
Event generated: Qtree Almost Full
Event severity: Warning
Corrective action: Take one or more of the following actions:
• Ask users to delete files that are no longer needed, to free disk space.
• Increase the hard disk space quota for the qtree.

Related tasks
Creating alarms on page 100

How Operations Manager monitors volumes and qtrees on a vFiler unit
By using Operations Manager, you can monitor volumes and qtrees on a vFiler unit. DataFabric Manager monitors storage resources (volumes and qtrees) that are configured on a vFiler unit. After it discovers a configured vFiler unit on the hosting storage system, DataFabric Manager uses SNMP to discover the volumes and qtrees as a hosting storage system's objects. During initial discovery, DataFabric Manager assigns the resource objects to the vFiler unit. DataFabric Manager maintains information in its database about volumes and qtrees that are removed or destroyed from a vFiler unit. As the volumes and qtrees are reassigned to other vFiler units, DataFabric Manager uses the stored information to update resource ownership.

Next topics
How Operations Manager monitors qtree quotas on page 195
Where to find vFiler storage resource details on page 196

How Operations Manager monitors qtree quotas
You can monitor qtree quotas by using Operations Manager. As DataFabric Manager monitors hosting storage systems for vFiler unit storage resources, it also provides information about qtree quotas.

Where to find vFiler storage resource details
With Operations Manager, you can view the volumes and qtrees on a vFiler unit in the vFiler Details page. The vFiler Details page (Member Details > vFilers > vFiler_name) provides you with a link to the volumes and qtrees assigned to a vFiler unit. The Volume Details and Qtree Details pages provide you with details about the volumes and qtrees that are assigned to a vFiler unit.

Identification of clones and clone parents
By using Operations Manager, you can view clone volume and parent volume information. Data ONTAP enables you to create a writable copy of a volume, known as a volume clone. DataFabric Manager helps you manage clone hierarchies by making it easier to view clone relationships between volumes. By using Operations Manager, you can view details of clones and their parents. You can display the Volumes, Clone List report by selecting Member Details > File Systems > Report drop-down list. Cloned volumes have an entry in the Clone Parent column of the report, indicating the name of the parent. Volumes that have clone children have the names of those children, which are included in the Clones column. Each clone name links to the Volume Details page for the clone child. Alternatively, the Volume Details page for a clone child includes the name of the clone parent, which is a link to the Volume Details page for the parent volume.

What clone volumes are
Clone volumes are fully functional volumes that always exist in the same aggregate as their parent volumes. A clone is a point-in-time, writable copy of the parent volume. Changes made to the parent volume after the clone is created are not reflected in the clone. Clone volumes and their parent volumes share the same disk space for any data common to the clone and parent. This means that creating a clone is instantaneous and requires no additional disk space (until changes are made to the clone or parent). Clone volumes can themselves be cloned. If you later decide you want to sever the connection between the parent and the clone, you can split the clone. This removes all restrictions on the parent volume and enables the space guarantee on the clone. For general information about clone volumes and clone parents, see the Data ONTAP Storage Management Guide.

Related information
Data ONTAP Storage Management Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

If a volume is a clone parent, the Related Storage section of its Volume Details page includes a link to a list of its direct clone children.

Why Snapshot copies are monitored
The Snapshot copy monitoring and space management features help you monitor, alert, and report on Snapshot copies and how they influence your space management strategy. Use DataFabric Manager to answer the following questions about Snapshot copies:
• How much aggregate and volume space is used for Snapshot copies?
• Is there adequate space for the first Snapshot copy?
• Which Snapshot copies can be deleted?
• Which volumes have high Snapshot copy growth rates?
• Which volumes have Snapshot copy reserves that are nearing capacity?
See the Operations Manager Help for instructions.

Next topics
Snapshot copy monitoring requirements on page 197
Detection of Snapshot copy schedule conflicts on page 197
Dependencies of a Snapshot copy on page 198

Snapshot copy monitoring requirements
To use the Snapshot copy monitoring features, DataFabric Manager requires a valid login name and password for each system being monitored. The list of Snapshot copies on the Volume Details page is available for systems running Data ONTAP 6.4 or later. The features that help you track space-usage issues (space reservations, overwrite rates, and so on) are available for systems running Data ONTAP 7.0 or later.

Detection of Snapshot copy schedule conflicts
By using Operations Manager, you can monitor conflicts between the Snapshot copy schedule and SnapMirror and SnapVault schedules. When Snapshot copies are scheduled for a volume, DataFabric Manager monitors for conflicts between the Snapshot copy schedule and SnapMirror and SnapVault schedules on the same volume. Conflicts can cause scheduled Snapshot copies to fail. DataFabric Manager generates a schedule conflict event if a volume is configured with both Snapshot copy and SnapVault copy schedules. An event is also generated if a Snapshot copy is scheduled at the same time as a SnapMirror transfer. DataFabric Manager generates these events only if the SnapVault or SnapMirror relationships are monitored on the volume. The Aggregate Details and Volume Details pages both feature a Protection area that indicates whether scheduled Snapshot copies and SnapMirror are enabled.

Dependencies of a Snapshot copy
You can view the dependencies of a Snapshot copy, whether you can delete a Snapshot copy, and the steps to delete it using Operations Manager. The Snapshots area of the Volume Details page displays information about up to 10 of the most recent Snapshot copies for the volume. This includes the last time the Snapshot copies were accessed and their dependencies. This information helps you determine whether you can delete a Snapshot copy or, if the Snapshot copy has dependencies, what steps you would need to take to delete the copy. To generate a page that lists dependent storage components for a Snapshot copy, click the hyperlinked text in the Dependency column for that Snapshot copy. The link is not available when the dependency is due to SnapMirror or SnapVault or to a FlexClone volume that is offline.

Thresholds on Snapshot copies
You can set a threshold for the maximum number of Snapshot copies to determine when to delete Snapshot copies. DataFabric Manager generates events when it exceeds the thresholds. The thresholds that address the number and age of volume Snapshot copies let you know when you must delete Snapshot copies. Set the Volume Snapshot Count threshold to define the maximum number of Snapshot copies for a volume. Set the Volume Snapshot Too Old threshold to indicate the maximum allowable age for copies of the volume. You can avoid the problem of Snapshot failures due to inadequate space.

Storage chargeback reports
You can create storage chargeback reports using Operations Manager to collect the amount of space used or allocated in specific storage objects (storage system, volume, qtree) or a group of storage objects. The storage chargeback feature of DataFabric Manager provides billing reports for the amount of space used or allocated in specific storage objects. The storage objects include storage system, volume, qtree, or user. The billing reports contain information such as average use, length of billing cycle, rate per GB of space used, and charges based on the rate and use. You can specify the day when the billing cycle starts, the rate, and the format for currency in the report. These reports can be generated in various formats, such as Perl, Comma-separated values (.csv), Excel (.xls), .txt, and .xml.

Storage chargeback reports provide an efficient way to track space used and space that is allocated for generating bills based on your specifications. Chargeback reports are useful if your organization bills other organizations, groups, or users in your company for the storage services they use. DataFabric Manager does not integrate with any specific billing program. However, the report does provide data in a spreadsheet form that you can use for other billing applications. All chargeback reports contain Period Begin and Period End information that indicates when the billing cycle begins and ends for the displayed report.

Next topics
When is data collected for storage chargeback reports on page 199
Determine the current month's and the last month's values for storage chargeback report on page 199
Chargeback reports in various formats on page 199

When is data collected for storage chargeback reports
You can collect data for chargeback reports for a specific period. The data reported by the chargeback feature on a specific day is based on the last data sample that is collected before midnight GMT of the previous night. For example, if the last data sample before midnight on April 17 was collected at 11:45 p.m. GMT, the chargeback reports viewed on April 18 display details about average use, charges, and other data based on the sample collected on April 17 at 11:45 p.m.

Determine the current month's and the last month's values for storage chargeback report
You can calculate the current and previous month's chargeback reports. When you select a Chargeback, This Month or Last Month view, the data displayed pertains to the current or the last billing cycle, respectively. The Day of the Month for Billing option determines when the current month begins and the last month ends, as described in the following example.

Company A's DataFabric Manager system is configured for the billing cycle to start on the fifth day of every month. If Chris (an administrator at Company A) views the Chargeback, This Month report on April 3, the report displays data for the period of March 5 through midnight (GMT) of April 2. If Chris views the Chargeback, Last Month report on April 3, the report displays data for the period of February 5 through March 4.

Chargeback reports in various formats
You can generate chargeback reports in various formats by running the dfm report view -F format report_name command. The data is accessible through the spreadsheet icon on the right side of the Report drop-down list. You can also generate chargeback data in other formats, such as Perl, Comma-separated values (.csv), Excel (.xls), .txt, and .xml, by using the dfm report command.
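For instance, to save this month's per-volume chargeback data for a downstream billing script, you might run something like the following on the DataFabric Manager server (the report name is one of those listed in the next section; the output file name is illustrative):

    dfm report view -F perl volumes-chargeback-this-month > volumes_chargeback.pl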

To generate the chargeback reports in Perl format so that other billing applications can use the chargeback data, you should issue the following command: dfm report view -F perl report_name. In this command, report_name is one of the following:
• groups-chargeback-this-month
• groups-chargeback-last-month
• groups-chargeback-allocation-this-month
• groups-chargeback-allocation-last-month
• filers-chargeback-this-month
• filers-chargeback-last-month
• filers-chargeback-allocation-this-month
• filers-chargeback-allocation-last-month
• volumes-chargeback-this-month
• volumes-chargeback-last-month
• volumes-chargeback-allocation-this-month
• volumes-chargeback-allocation-last-month
• qtrees-chargeback-this-month
• qtrees-chargeback-last-month
• qtrees-chargeback-allocation-this-month
• qtrees-chargeback-allocation-last-month
• clusters-chargeback-this-month
• clusters-chargeback-last-month
• clusters-chargeback-allocation-this-month
• clusters-chargeback-allocation-last-month
• vservers-chargeback-this-month
• vservers-chargeback-last-month
• vservers-chargeback-allocation-this-month
• vservers-chargeback-allocation-last-month
For more information about using the dfm report command, see the DataFabric Manager man pages.

The chargeback report options
The chargeback report options enable you to specify the chargeback increment, the currency format, the chargeback rate, and the day when the billing cycle starts. You can specify storage chargeback options at a global or a group level. The global-level options can be the chargeback increment, the currency format, the chargeback rate, and the day when the billing cycle starts; for a specific group, you can specify an annual charge rate for objects in that group.

In addition to these global settings, you can specify an annual charge rate (per GB) for objects in a specific group. The annual charge rate specified for a group overrides the setting specified at the global level.

Next topics
Specifying storage chargeback options at the global or group level on page 201
The storage chargeback increment on page 201
Currency display format for storage chargeback on page 202
Specification of the annual charge rate for storage chargeback on page 202
Specification of the Day of the Month for Billing for storage chargeback on page 203
The formatted charge rate for storage chargeback on page 203

Specifying storage chargeback options at the global or group level
By using Operations Manager, you can set chargeback options at a global or group level.

Step
1. Depending on the objects to which you want the changes to apply, go to the appropriate page:
• To apply changes to all objects that DataFabric Manager manages, go to the Options page (Setup > Options link), then select Chargeback in the Edit Options section.
• To apply changes to objects in a specific group, go to the Edit Group Settings page (click Edit Groups in the left pane), then click the Edit column for the group for which you want to specify an annual charge rate.

The storage chargeback increment
The storage chargeback increment indicates how the charge rate is calculated. You can specify the storage chargeback increment by using Operations Manager. You can specify this setting only at the global level. The following values can be specified for this option:

Daily
Charges are variable; they are adjusted based on the number of days in the billing period. DataFabric Manager calculates the charges as follows: Annual Rate / 365 * number of days in the billing period.

Monthly
Charges are fixed; there is a flat rate for each billing period regardless of the number of days in the period. DataFabric Manager calculates the charges as follows: Annual Rate / 12.

By default, the chargeback increment is Daily.
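As a worked example (the figures are illustrative): with an annual charge rate of 120.00 per GB, the Daily increment charges 120 / 365 * 30, or approximately 9.86 per GB, for a 30-day billing period and approximately 10.19 per GB for a 31-day period, whereas the Monthly increment charges a flat 120 / 12 = 10.00 per GB for every billing period.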

Currency display format for storage chargeback
You can specify the currency format to display in Operations Manager. The Currency Format setting indicates the format to use for displaying currency amounts in Operations Manager. For example, the format for US dollars is $ #,###.##, where # indicates a digit. You can specify only one currency format per DataFabric Manager installation. For example, if you specify $ #,###.## as your currency format for a specific installation, this format is used for all chargeback reports generated by that installation. You can use any currency symbol, such as EUR or ¥. If the currency symbol you want to use is not part of the standard ASCII character set, use the code specified by the HTML Coded Character Set. For example, use the code &#165; for the Yen (¥) symbol.

If you need to specify any other format, use the following guidelines:
• You must specify four # characters before the decimal point.
• A decimal point separates the integer part of a number from its fractional part. The symbol used as a decimal point depends on the type of currency. For example, a period (.) is used for US dollars and a comma (,) is used for Danish Kroner. For example, in the number 5,123.67, the period (.) is the decimal point.
• Although a decimal separator is optional in the currency format, if you use it, you must specify at least one # character after the decimal separator.
• You can optionally specify a thousands-separator. A thousands-separator separates digits in numbers into groups of three. The symbol used as a thousands-separator depends on the type of currency. For example, a comma (,) is used for US dollars and a period (.) is used for Danish Kroner. For example, the comma (,) is the thousands-separator in the number 567,123,890.55.

Specification of the annual charge rate for storage chargeback
You can set the annual charge rate for storage chargeback at a global level or for a specific group. The Annual Charge Rate (per GB) setting indicates the amount to charge for storage space used per GB per year. You can specify this setting at the global level, in addition to specific groups. By default, no rate is specified. You must specify a value for this option for DataFabric Manager to generate meaningful chargeback reports. Specify this value in the x.y format, where x is the integer part of the number and y is the fractional part. For example, to specify an annual charge rate of $150.55, enter 150.55.

Note: You must use a period (.) to indicate the fractional part of the number in the Annual Charge Rate box, even if you are specifying a currency format that uses a comma (,) as the decimal separator. For example, to specify 150,55 Danish Kroner, enter 150.55.
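Putting these guidelines together: a format such as EUR #.###,## (an illustrative format, not a default) uses the period as the thousands-separator and the comma as the decimal separator, and it satisfies the requirements of four # characters before the decimal separator and at least one # character after it.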

Specification of the Day of the Month for Billing for storage chargeback
You can specify the day of the month from which the billing cycle begins. The Day of the Month for Billing setting indicates the day of the month on which the billing cycle begins. By default, this value is set to 1. The following values can be specified for this option:

1 through 28
These values specify the day of the month. For example, if you specify 15, it indicates the fifteenth day of the month.

-27 through 0
These values specify the number of days before the last day of the month. 0 specifies the last day of the month. For example, if you want to bill on the fifth day before the month ends every month, specify -4.

The formatted charge rate for storage chargeback
Operations Manager displays the annual charge rate for storage chargeback in the specified format. The Formatted Charge Rate setting displays the annual charge rate value in the currency format. You cannot set or change this option. The value is automatically generated and displayed based on the currency format and the annual charge rate you specify. For example, if the currency format is $ #,###.## and the annual charge rate is 150.55, the Formatted Charge Rate option displays $150.55.

What deleting storage objects for monitoring is
With Operations Manager, you can stop monitoring a storage object (aggregate, volume, or qtree) with DataFabric Manager by deleting it from the Global group. When you delete an object from the Global group, DataFabric Manager stops collecting and reporting data about it. Data collection and reporting is not resumed until the object is added back to the database.

Note: When you delete a storage object from any group other than Global, the object is deleted only from that group; DataFabric Manager does not stop collecting and reporting data about it. You must delete the object from the Global group for DataFabric Manager to stop monitoring it.

Next topics
Reports of deleted storage objects on page 204
Undeleting a storage object for monitoring on page 204

Reports of deleted storage objects
All storage objects deleted from the DataFabric Manager database are listed in various reports. Following are the reports of deleted storage objects:
• Storage Systems, Deleted
• vFilers, Deleted
• File Systems, Deleted
• Volumes, Deleted
• Qtrees, Deleted
• Aggregates, Deleted
• LUNs, Deleted
• SAN Hosts, Deleted
• Fibre Channel Switches, Deleted

Note: These reports are accessible from the Report drop-down list on the Member Details tab for each storage object (Storage Systems, vFiler units, File Systems, Aggregates, SANs, and LUNs).

Undeleting a storage object for monitoring
You can undelete a storage object using Operations Manager.

Steps
1. Select the check box next to each object you want to return to the database.
2. Click Undelete.

Storage system management
You can use Operations Manager to view the status of and reports for groups, view and respond to events, configure alarms, view configuration status, and so on. As soon as DataFabric Manager is installed, it begins the process of discovering, monitoring, and gathering data about your supported storage systems. However, before you can use the data to simplify your network administration tasks, you need to understand the different ways you can use Operations Manager to manage your storage systems.

Next topics
Management tasks performed using Operations Manager on page 205
Operations Manager components for managing your storage system on page 206
Storage system groups on page 206
Custom comment fields in Operations Manager on page 207
Consolidated storage system and vFiler unit data and reports on page 207
Where to find information about a specific storage system on page 208
Managing active/active configuration with DataFabric Manager on page 212
Remote configuration of a storage system on page 215
Remote configuration of a cluster on page 216
Storage system management using FilerView on page 218
Introduction to MultiStore and vFiler units on page 219

Management tasks performed using Operations Manager
You can use Operations Manager to manage your storage systems. By using Operations Manager, you can do the following:
• Create groups.
• View the status of and obtain reports and information for a group of systems.
• View information for individual systems.
• View and respond to events.
• Configure alarms that send you notification if DataFabric Manager logs a specific type of event or severity of event.
• Edit the configuration settings of a storage system.
• Access the console of a storage system.
• Link to FilerView for a selected storage system or vFiler unit.
• View the active/active configuration status and perform takeover and giveback operations if the storage system is an active/active controller.

• Insert values into custom comment fields.
• Edit user quotas.
• Modify passwords for one or multiple storage systems.

Operations Manager components for managing your storage system
By using Operations Manager, you can create or modify groups. You can perform the following tasks by using Operations Manager to manage your storage systems:
• Create new groups or modify, move, copy, or delete existing groups—Control Center > Home > Edit Groups.
• View information about all or a group of storage systems and access details of specific storage systems—Control Center > Member Details > Physical Systems.
• View information about vFiler units that are configured on hosting storage systems and access details of specific vFiler units—Control Center > Member Details > Virtual Systems.
• View and respond to system events—Control Center > Group Status > Events.
• Configure alarms for generated events, and configure custom reports—Control Center > Setup > Alarms.
• View user, qtree, and group quotas.
• Compare storage system configurations and configuration files against a template and modify global options—Control Center > Management.
• Manage DataFabric Manager administrators, establish roles to manage DataFabric Manager access, manage storage system configurations, and manage all scripts installed on DataFabric Manager—Control Center > Management.
• Configure host users and roles—Control Center > Management > Host Users.

Storage system groups
Using Operations Manager, you can create and manage groups of storage systems. Operations Manager is designed around the concept of groups. When a group is selected in the left pane of the Operations Manager main window, the pages change to display information relating to that group. To display information about all your storage systems, select the Global group in the Groups pane on the left side of Operations Manager. The Global group is the default group containing the superset of all storage systems. To display information about a specific group of systems, select the desired group name in the Groups pane on the left side of Operations Manager. To manage your storage systems effectively, you should organize them into smaller groups so that you can view information only about objects in which you are interested. You can group your storage systems to meet your business needs, for example, by geographic location, operating system version, and storage system platform.

Custom comment fields in Operations Manager
You can create custom comment fields and associate them with specific storage systems, SAN hosts, FC switches, aggregates, volumes, qtrees, LUNs, groups, and quota users. You can use the custom comment fields for any purpose. One example would be to associate a department code with a quota user, for use in chargeback for business accounting. You can use the Search function in Operations Manager to display every object that contains specific data in a custom comment field. Using custom comment fields in Operations Manager has three aspects: creating the field, inserting data to create specific comments, and viewing comments.

Creating the comment field: You create the custom comment field using Setup menu > Options > Custom Comment Fields. For detailed instructions on creating custom comment fields, see the Operations Manager Help for the Options page.

Inserting data: You insert data into the custom comment field in the Edit Settings page for the object you want to associate with the comment. For example, to associate a comment with a qtree, use the Edit Qtree Settings page for that qtree.

Viewing comments: You view custom comment data for multiple objects in the Comments report for the type of object, for example, the Qtrees Comments report. You can also view the comments for a single object in its Details page—for example, the Qtree Details page for a specific qtree.

Consolidated storage system and vFiler unit data and reports
By using Operations Manager, you can view storage system and vFiler unit reports. You can view global and group information and select individual system data in detail using the Member Details report pages. You can view storage system related information from Control Center > Member Details > Physical Systems > Report list. You can view vFiler-related information from Control Center > Member Details > Virtual Systems > Report list.

Tasks performed by using the storage systems and vFiler unit report pages
You can view system data for all groups, get information about storage systems, generate spreadsheet reports, view and check active/active configurations, and launch FilerView. Similar to the other Operations Manager Control Center tab pages, the Appliances and vFiler reports enable you to view a wide variety of details in one place. You can perform the following tasks:
• View system data for all or a group of monitored systems
• Generate spreadsheet reports
• Obtain detailed information about a specific storage system
• Launch FilerView

Where to find information about a specific storage system
By using Operations Manager, you can view information about a specific storage system or a vFiler unit. You can view the details by clicking the storage system or vFiler unit name on the Operations Manager reports. DataFabric Manager regularly refreshes monitoring data for the entire group within which a storage system or vFiler unit resides, or you can click Refresh Group Monitors to manually refresh the data.

Next topics
Tasks performed from a Details page of Operations Manager on page 208
Editable options for storage system or vFiler unit settings on page 209
What Storage Controller Tools list is on page 210
What Cluster Tools list is on page 210
What the Diagnose Connectivity tool does on page 210
The Refresh Monitoring Samples tool on page 211
The Run a Command tool on page 211
The Run Telnet tool on page 212
Console connection through Telnet on page 212

Tasks performed from a Details page of Operations Manager
You can view and modify storage system or vFiler unit configurations, view events related to the storage system or vFiler unit, and launch FilerView. You can perform the following storage system management tasks from the Details page:
• View specific storage system or vFiler unit details.
• Edit the storage system or vFiler unit configuration using FilerView.

• View the active/active configuration status and perform takeover and giveback operations by using the cluster console (on active/active controllers only).
• Access the vFiler units that are hosted on a storage system.
• Check active/active controller configurations.
• Edit the storage system configuration using FilerView.
• Edit Remote LAN Module (RLM) port settings for the storage system.
• View events related to the storage system or vFiler unit.
• View graphing information specific to each type of storage system.

Editable options for storage system or vFiler unit settings
You can specify or change the storage system or vFiler unit settings by using Operations Manager. You can use the Edit Storage Controller Settings page to specify or change storage system or vFiler unit settings. Note, however, that you can set global values for many settings using the Options page. You do not need to modify storage system or vFiler unit-level settings unless they differ from your global values. You can use the Edit Storage Controller Settings page to modify the following information:

IP address
This is the IP address of the storage system that DataFabric Manager monitors. You might want to change the storage system IP address if you want to use a different interface for administrative traffic.

Login and password
You should configure a login and password if you want to use Operations Manager to run a command on a system. Operations Manager uses this information to authenticate itself to the storage system on which the command is run. Configuration of login and password is mandatory.

Authentication
You can also set up authentication by using the /etc/hosts.equiv file on the storage system. For information about configuring the /etc/hosts.equiv file, see the Data ONTAP Storage Management Guide.

Threshold values
The threshold values indicate the level of activity that must be reached on the storage system before an event is triggered. By using these options, you can set specific storage system or group thresholds. For example, the Appliance CPU Too Busy threshold indicates the highest level of activity the CPU can reach before a CPU Too Busy event is triggered. Threshold values specified on this page supersede any global values specified on the Options page.

Threshold intervals
The threshold interval is the period of time during which a specific threshold condition must persist before an event is triggered. For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the event is generated only if the condition persists for two monitoring intervals. You can configure threshold intervals only for specific thresholds, as listed on the Options page.

Related information

Data ONTAP Storage Management Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

What Storage Controller Tools list is
You can view storage system or controller details using the Storage Controller Tools list in Operations Manager. You can use Storage Controller Tools to set up the parameters needed to communicate with a storage system or a controller. You can perform the following tasks using Storage Controller Tools:
• Modify settings such as the primary IP address, remote platform management IP address, login and password, and so on.
• Diagnose the network connectivity of the storage system or the controller.
• Refresh the monitoring samples collected by Operations Manager.
• Run a command on the storage system or the controller.
• Connect to the device console.
• Gather information about the host (storage system or controller) users.

You can access Storage Controller Tools from the Details page for the storage system or hosting storage system (of a vFiler unit), or controller. The Tools list is located in the lower left of the Operations Manager display.
Note: Storage Controller Tools is applicable to storage systems running Data ONTAP 8.0 7-Mode.

What Cluster Tools list is
You can view and modify cluster or cluster controller details using the Cluster Tools list in Operations Manager. You can use Cluster Tools to set up the parameters needed to communicate with a cluster. You can perform the following tasks using Cluster Tools:
• Modify settings such as the primary IP address, monitoring options, and management options such as login and password.
• Diagnose the network connectivity of the cluster.
• Refresh the monitoring samples collected by Operations Manager.
• Run a command on the cluster.

You can access Cluster Tools from the Cluster Details page for the cluster. The Tools list is located in the lower left of the Operations Manager display.
Note: Cluster Tools is applicable to storage systems running Data ONTAP 8.0 Cluster-Mode.

What the Diagnose Connectivity tool does
By using the Diagnose Connectivity tool, you can perform connectivity tests and review test outcomes. The Diagnose Connectivity tool queries the DataFabric Manager database about a selected storage system, runs connectivity tests, and displays information and test outcomes. The sequence of steps depends on whether the storage system is managed or unmanaged. A managed storage system is one that is in the DataFabric Manager database. An unmanaged storage system is one that is not in the DataFabric Manager database.

The Refresh Monitoring Samples tool
You can view updated storage system details using Refresh Monitoring Samples in Operations Manager. You can specify the frequency at which Operations Manager collects information by using the system information-monitoring interval.

The Run a Command tool
By using the Run a Command tool in Operations Manager, you can run commands on storage systems. The Run a Command tool provides you with an interface to do the following:
• Run Data ONTAP commands on storage systems.
• Run any Remote LAN Module (RLM) command on the RLM card that is installed on a storage system.

Prerequisite
DataFabric Manager uses the following connection protocols for communication:
• Remote Shell (RSH) connection for running a command on a storage system. To establish an RSH connection and run a command on a storage system, DataFabric Manager must authenticate itself to the storage system. Therefore, you must enable RSH access to the storage system and configure login and password credentials that are used to authenticate Data ONTAP.
• Secure Shell (SSH) connection for running a command on an RLM card, if the installed card provides a CLI.

Restrictions
The following restrictions exist:
• There are several Data ONTAP run commands that are available on storage systems, but are restricted in DataFabric Manager. For a list of restricted commands, see the Operations Manager Help.
• You cannot run a command on the Global group.

Related concepts

Remote configuration of a storage system on page 215
DataFabric Manager CLI to configure storage systems on page 214
Prerequisites for running remote CLI commands from Operations Manager on page 215


What the remote platform management interface is on page 265
Related tasks

Running commands on a specific storage system on page 215
Running commands on a group of storage systems from Operations Manager on page 216

The Run Telnet tool
You can connect to the storage system using the Run Telnet tool in Operations Manager.

Console connection through Telnet
By using Operations Manager, you can connect to the storage system console. Use the Connect to Device Console tool to connect to the storage system console. The storage system must be connected to a terminal server for DataFabric Manager to connect to the storage system console.
Note: Before initiating the console connection, you must set the Console Terminal Server Address in the Edit Settings page for the storage system.

Managing active/active configuration with DataFabric Manager
You can monitor and manage active/active configuration with the cluster console of Operations Manager. The cluster console enables you to view the status of an active/active configuration (controller and its partner) and perform takeover and giveback operations between the controllers. For detailed information about active/active configurations, see the Data ONTAP Storage Management Guide.
Next topics

Requirements for using the cluster console in Operations Manager on page 213
Accessing the cluster console on page 213
What the Takeover tool does on page 213
What the Giveback tool does on page 214
DataFabric Manager CLI to configure storage systems on page 214
Related information

Data ONTAP Storage Management Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml


Requirements for using the cluster console in Operations Manager
You can use the cluster console in Operations Manager to view the status of an active/active configuration. An authentication method must be set up for DataFabric Manager to authenticate to the controller on which takeover and giveback operations are to be performed. Login and password must be set for the storage system.

Accessing the cluster console
You can access the cluster console from Operations Manager to view the status of an active/active configuration.
Steps

1. Click Control Center > Home > Member Details > Physical Systems.
2. From the Report drop-down list, select Active/Active Controllers, All.
3. Click the controller for which you want to view the status of the active/active configuration.
4. Click View Cluster Console under Storage Controller Tools.

What the Takeover tool does
You can use the Takeover tool from the Tools list to initiate a manual takeover of the controller's partner. The Takeover tool is available in the Tools list only when the controller whose Tools list you are viewing can take over its partner. Once you select Takeover, the Takeover page is displayed. The Takeover page enables you to select the type of takeover you want the controller to perform. You can select from one of the following options:

Take Over Normally
This option is the equivalent of running the cf takeover command, in which the controller takes over its partner in a normal manner. The controller allows its partner to shut down its services before taking over. This option is used by default.

Take Over Immediately
This option is the equivalent of running the cf takeover -f command, in which the controller takes over its partner without allowing the partner to gracefully shut down its services.

Force a Takeover
This option is the equivalent of running the cf forcetakeover -f command, in which the controller takes over its partner even in cases when takeover of the partner is normally not allowed. Such a takeover might cause data loss.


Takeover After a Disaster
This option is for MetroClusters only and is the equivalent of running the cf forcetakeover -f -d command. Use this option if the partner is unrecoverable.

Note: The Force a Takeover and Takeover After a Disaster options are also available in circumstances when the interconnect between the controller and its partner is down. These options enable you to manually take over the partner.

Once you have made a selection, the Status option on the Cluster Console page displays the status of the takeover operation. Once the takeover operation is complete, the Cluster Console page displays the updated controller-icon colors. The Cluster Console page also displays the status of each controller. The Tools list of each controller is adjusted appropriately to indicate the active/active configuration operation each controller can now perform.

What the Giveback tool does
You can use the Giveback tool to initiate a giveback operation from a controller that has taken over its partner. The Giveback tool is available in the Tools list only when the controller whose Tools list you are viewing can give back to its partner. Once you select Giveback for the controller, the Giveback page is displayed. You can select from one of the following giveback options:

Give Back Normally
This option is the equivalent of the cf giveback command, in which the controller performs a graceful shutdown of services and aborts CIFS operations. The controller also shuts down long-running jobs that are running on the controller on behalf of the taken-over controller.

Give Back Immediately
This option is the equivalent of the cf giveback -f command, in which the controller does not gracefully shut down the services of the taken-over controller.

Once you have selected an option, the Status option on the Cluster Console page displays the status of the giveback operation. Once the giveback operation is complete, the Cluster Console page displays the updated controller-icon colors. The Cluster Console page also displays the status of each controller. The Tools list of each controller is adjusted appropriately to indicate the active/active configuration operation each controller can now perform.

DataFabric Manager CLI to configure storage systems
DataFabric Manager enables you to run storage system commands such as sysconfig, version, and install, on a specific storage system or a group of storage systems. You can run all commands, except for a few administrator commands. For a list of unsupported commands, see the Operations Manager Help.
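A sketch of what this looks like from the DataFabric Manager CLI, assuming the dfm run cmd form of the command and a storage system named filer01 (both are illustrative; verify the exact syntax with dfm help run on your server):

    dfm run cmd filer01 version        # run the Data ONTAP version command on filer01
    dfm run cmd filer01 sysconfig -a   # run sysconfig -a on filer01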


Remote configuration of a storage system
You can remotely configure a storage system using DataFabric Manager. As you monitor your storage systems, you might find that you need to alter the configuration settings on one or more storage systems. DataFabric Manager provides three methods by which you can remotely configure your storage systems:
• Accessing the storage system CLI
• Accessing FilerView
• Using the DataFabric Manager multiple-storage system remote configuration feature

You can remotely configure the following DataFabric Manager features:
• Host users management
• User quota management
• Password management
• Roles management

Next topics

Prerequisites for running remote CLI commands from Operations Manager on page 215
Running commands on a specific storage system on page 215
Running commands on a group of storage systems from Operations Manager on page 216

Prerequisites for running remote CLI commands from Operations Manager
Your storage systems must meet certain prerequisites to run remote CLI commands from Operations Manager. The Command operation uses rsh to run a command on storage systems. Therefore, you must have rsh access to your storage system enabled to run CLI commands from DataFabric Manager. By default, rsh access to a storage system is enabled. For more information about enabling rsh on your storage system, see the Data ONTAP System Administration Guide.
Note: The command operation uses only ssh to run remote CLI commands on clusters running

Data ONTAP 8.0 Cluster-Mode.
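For reference, rsh access on the storage system is controlled by a Data ONTAP option; a minimal sketch, to be verified against your Data ONTAP release:

    options rsh.enable on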

Running commands on a specific storage system
You can run a command for a specific storage system using Operations Manager.
Steps

1. Click Control Center > Home > Member Details > Physical Systems.
2. Select the Storage Systems, All report.

3. Click the storage system to go to the Storage Controller Details page for the storage system or hosting storage system (of a vFiler unit) that you want to run a command on.
4. Click Run a Command under Storage Controller Tools.
   The Run Command page is displayed.
5. Enter the command in the Appliance Command box.
6. Click Run.

Running commands on a group of storage systems from Operations Manager
By using Operations Manager, you can run a command on a group of storage systems.

Steps
1. In the left pane of the Operations Manager window, select the group that you want to run a command on.
   The Group Summary page is displayed.
2. Select Run a Command from the Storage Controller Tools menu.
   The Run Command page is displayed.
3. Enter the command in the Appliance Command box.
4. Click Run.

Remote configuration of a cluster
You can remotely configure a cluster using DataFabric Manager. As you monitor your clusters, you might find that you need to execute commands to alter the configuration settings on one or more nodes in the cluster. In Operations Manager, you must configure the credentials for the cluster by using either of the following methods:
• Accessing the CLI
• Accessing the Edit Storage Controller Settings or Edit Cluster Settings page (Details > Tools List > Edit Settings) from Operations Manager

Next topics
Running commands on a specific cluster on page 217
Running commands on a specific node of a cluster on page 217

Running commands on a specific cluster
You can run commands on a specific cluster (Data ONTAP 8.0 Cluster-Mode) using Operations Manager.

Before you begin
You must set the user name and password at the cluster level from the Edit Cluster Settings page or using the corresponding CLI.

Steps
1. Click Control Center > Home > Member Details > Physical Systems > Report > Clusters, All.
2. Click the cluster to go to the Cluster Details page.
3. Click the Run a Command link under Cluster Tools.
   The Run Command page is displayed.
4. Enter the command in the Appliance Command box.
5. Click Run.

Running commands on a specific node of a cluster
You can run only RLM commands for a specific node of a cluster by using Operations Manager. DataFabric Manager uses the Remote Platform Management IP Address (the address of the RLM card) for running these RLM commands. However, you cannot execute RLM commands on clusters running Data ONTAP 8.0 Cluster-Mode.

Before you begin
You must set the credentials for a cluster and its node.
Note: You must set the user name to naroot and set a password at the node level of the cluster from the Edit Storage Controller Settings page or using the corresponding CLI.

Steps
1. Click Control Center > Home > Member Details > Physical Systems > Report > Clusters, All.
2. Click the cluster to go to the Cluster Details page.
3. In the Cluster Details page, click the number corresponding to "Controllers, All."
4. In the "Controllers, All" report, click the name of the controller.
   The Storage Controller Details page is displayed.

5. Click the Run a Command link under Storage Controller Tools.
   The Run Command page is displayed.
6. Enter the command in the Remote Platform Management Command box.
7. Click Run.

Storage system management using FilerView
With Operations Manager, you can connect to a storage system using FilerView. In addition to providing access to the storage system, Operations Manager enables you to log in to the FilerView management UI of the storage system.

Next topics
What FilerView is on page 218
Configuring storage systems by using FilerView on page 218

What FilerView is
Operations Manager enables you to view information about storage systems and vFiler units from a Web-based UI called FilerView. In DataFabric Manager 2.3 and later, pages displaying information about storage systems and vFiler units provide access to the Web-based UI, FilerView. You can access FilerView by clicking the icon next to the storage system or vFiler unit name in the details pages for events, storage systems, vFiler units, aggregates, volumes, qtrees, and LUNs. To access FilerView for a selected storage system or vFiler unit, click the storage system icon next to the storage system or vFiler unit name in the respective details page (the Storage Controller Details or vFiler Details page). When you invoke FilerView, Operations Manager spawns a new window.

Configuring storage systems by using FilerView
You can configure a storage system by using FilerView. By using FilerView, you can edit the configuration settings of a storage system.
Note: You cannot remotely configure more than one storage system using this method.

Steps
1. Do one of the following:
   If you are running DataFabric Manager 3.3 or later: On the Storage Systems, All page, click the FilerView icon next to the name of the storage system that you want to configure. Go to Step 3.
   If you are running DataFabric Manager 2.2 or earlier: On the Storage Systems, All page, select the name of the storage system that you want to configure.
2. On the Storage Controller Details page, click the FilerView icon.
3. When prompted, provide your user name and the password.
4. Edit the settings.

Introduction to MultiStore and vFiler units
MultiStore is a software product that enables you to partition the storage and network resources of a single storage system so that it appears as multiple storage units on the network. Each "virtual filer" created as a result of the logical partitioning of the hosting storage system's network and storage resources is called a vFiler unit. A vFiler unit, using the resources assigned, delivers storage system services to its clients as a storage system does. You can create multiple vFiler units by using MultiStore.

The storage resource assigned to a vFiler unit can be one or more qtrees or volumes. The network resource assigned can be one or more base IP addresses or IP aliases associated with network interfaces.

A vFiler unit can participate in a distinct IP address space called the IPspace. IP addresses defined for an IPspace are meaningful only within that space. A distinct routing system is maintained for each IPspace; no cross-IPspace traffic is routed.

For information about configuring and using vFiler units in your storage network, see the Data ONTAP MultiStore Management Guide.

Next topics
Why monitor vFiler units with DataFabric Manager on page 220
Requirements for monitoring vFiler units with DataFabric Manager on page 220
vFiler unit management tasks on page 221

Related information
Data ONTAP MultiStore Management Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Why monitor vFiler units with DataFabric Manager
You can monitor vFiler units using Operations Manager. DataFabric Manager provides Storage Service Providers (SSPs) the same management interface for monitoring vFiler units and hosting storage systems. Hosting storage systems are physical storage systems on which a vFiler unit is configured. If you do not have access to the hosting storage system (by acquiring a vFiler unit through an SSP), you are unable to monitor your vFiler unit using DataFabric Manager.

Requirements for monitoring vFiler units with DataFabric Manager
Requirements like Data ONTAP release support, network connectivity, NDMP discovery, and so on, must be met for monitoring vFiler units. You must meet the following requirements prior to monitoring vFiler units with DataFabric Manager.

• Data ONTAP release support: The DataFabric Manager MultiStore monitoring feature supports hosting storage systems running Data ONTAP 6.5 or later, which includes MultiStore.
  Note: To run a command on a vFiler unit using a Secure Socket Shell (SSH) connection, the vFiler unit must be running Data ONTAP 7.2 or later.
• Network connectivity: To monitor a vFiler unit, DataFabric Manager and the hosting storage system must be part of the same routable network that is not separated by firewalls.
• Hosting storage system discovery and monitoring: You must first discover and monitor the hosting storage system before discovering and monitoring vFiler units.
• Monitoring the default vFiler unit: When you license MultiStore, Data ONTAP automatically creates a default vFiler unit on the hosting storage system, called vFiler0. Operations Manager does not provide vFiler0 details.
• NDMP discovery: DataFabric Manager uses NDMP as the discovery method to manage SnapVault and SnapMirror relationships between vFiler units. To use NDMP discovery, you must first enable SNMP and HTTPS discovery.
• Monitoring backup relationships: DataFabric Manager collects details of vFiler unit backup relationships from the hosting storage system and displays them if the secondary storage system is assigned to the vFiler group, even though the primary system is not assigned to the same group.
• Monitoring SnapMirror relationships: DataFabric Manager collects details of vFiler SnapMirror relationships from the hosting storage system and displays them if the destination vFiler unit is assigned to the vFiler group, even though the source vFiler unit is not assigned to the same group.
• Editing user quotas: To edit user quotas that are configured on vFiler units, you must have Data ONTAP 6.5.1 or later installed on the hosting storage systems of the vFiler units.

vFiler unit management tasks
You can perform the following management tasks on a vFiler unit using DataFabric Manager:
• Discover vFiler units of a hosting storage system.
• Monitor vFiler health and general status.
• Obtain vFiler network and storage resource details.
• Obtain vFiler performance and usage reports.
• Control vFiler administrative access.
• Group vFiler units for consolidated reporting.
• Run commands on vFiler units.
• Monitor and manage SnapMirror relationships.
• Monitor and manage SnapVault relationships.
• Manage user quota.
• Manage your configuration.
• Manage host admin.


Configuration of storage systems
You can remotely configure multiple storage systems using Operations Manager. By creating configuration resource groups and applying configuration settings to them, administrators can remotely configure multiple storage systems from the server on which DataFabric Manager is installed.

Next topics
Management of storage system configuration files on page 223
What a configuration resource group is on page 226
Configuring multiple storage systems or vFiler units on page 229

Management of storage system configuration files
Many administrators prefer to centrally manage their storage system and vFiler configuration /etc files and registry options. With Operations Manager, you can pull configuration settings from a storage system or vFiler unit and push all or some of those configuration settings to other storage systems or vFiler units, or to groups of storage systems and vFiler units. By using storage system configuration management, you can create and manage configuration files that contain the configuration settings you want to apply to a storage system or vFiler unit, or to groups of storage systems and vFiler units. You can also ensure that the storage system and vFiler configuration conforms with the configuration that is pushed to it from Operations Manager.

Next topics
Prerequisites to apply configuration files to storage systems and vFiler units on page 224
List of access roles to manage storage system configuration files on page 224
List of tasks for configuration management on page 224
What configuration files are on page 225
What a configuration plug-in is on page 225
Comparison of configurations on page 225
Verification of a successful configuration push on page 226

http://now. Note: The DataFabric Manager storage system configuration management feature supports storage systems running Data ONTAP 6. You can download the plug-ins from the NOW site. .5.netapp.224 | Operations Manager Administration Guide For Use with DataFabric Manager Server 4. You must have write privileges for a group to push configurations to it. • • Pull a configuration file from a storage system or a vFiler unit View the contents of each configuration file.com/ List of access roles to manage storage system configuration files You need specific access roles to perform management tasks with the storage system configuration files.0 Prerequisites to apply configuration files to storage systems and vFiler units You must meet a set of requirements before applying configuration files to a group of storage systems and vFiler units. Task Create configuration files Delete configuration files Edit configuration files Export configuration files Import configuration files Upgrade or revert configuration file versions Access role Global Write Global Delete Global Write Global Read Global Write Global Write List of tasks for configuration management You can complete a set of configuration management tasks by using the storage system configuration management feature. Related information The Now Site -.1 or later when you install the appropriate Data ONTAP plug-in with DataFabric Manager. Obtain a Data ONTAP plug-in for each version of the configuration file that you use in DataFabric Manager. • • • Ensure that you are assigned the Global Write and Global Delete access roles to add or delete a configuration file from a group. Set the login and password for the storage system or vFiler unit before you set up configuration groups.

• Edit the configuration file settings (registry options and /etc files).
• Edit a configuration file to create a partial configuration file.
• Copy or rename configuration files.
• Compare configuration files against a standard template.
• View the list of existing configuration files.
• Delete a configuration file.
• Import and export configuration files.
• Upgrade or revert file versions.
• Remove an existing configuration file from a group's configuration list.
• Change the order of files in the configuration list.
• Push configuration files to a storage system or a group of storage systems, or to vFiler units or a group of vFiler units.
• Exclude configuration settings from being pushed to a storage system or a vFiler unit.
• Specify configuration overrides for a storage system or a vFiler unit assigned to a group.
• View Groups configuration summary for a version of Data ONTAP.
• View the status of push configuration jobs.
• Delete push configuration jobs.

Related concepts
What a configuration resource group is on page 226

What configuration files are
A configuration file is a set of configuration settings that you want the storage systems in one or more groups to share. Configuration files exist independently of groups and can be shared between groups. Use Operations Manager to pull a configuration file from storage systems and save it.

What a configuration plug-in is
A configuration plug-in is an add-on library in a zip file that is required by DataFabric Manager to manage Data ONTAP. For each Data ONTAP version, a Data ONTAP plug-in is provided. Configuration plug-ins provide the capability to upgrade or revert a configuration file that is stored in the DataFabric Manager database to a different version.

Comparison of configurations
Operations Manager enables you to compare your configuration file settings against those of a template configuration. You can also compare storage systems and vFiler units against a configuration file, and create jobs to obtain comparison results. Use Operations Manager to access the comparison job results.

You can view the configuration comparison results in a report format. Use this report to identify the configuration settings that do not conform to those of the standard template.

Verification of a successful configuration push
After you have initiated a configuration push, you can review the status of the push operation for each storage system or vFiler unit to which you pushed a configuration. When you push a configuration to a storage system or vFiler unit, DataFabric Manager logs the push operation on the storage system or vFiler unit. DataFabric Manager logs this operation as a message that contains information about the DataFabric Manager server station and the administrator who started the push job.

Note: DataFabric Manager attempts to contact the storage system or vFiler unit five times (default) to complete the configuration push job. You cannot reconfigure the number of retries from Operations Manager, but you can use the following CLI command to specify a new retry limit: dfm config push -R.

What a configuration resource group is
A configuration resource group is a group of storage systems that share a set of common configuration settings. These configuration settings are listed in files called configuration files. You can designate groups of managed storage systems that can be remotely configured to share the same configuration settings. A configuration resource group must contain some number of storage systems and have one or more files containing the desired configuration settings.

Next topics
List of tasks for managing configuration groups on page 226
Considerations when creating configuration groups on page 227
Creating configuration resource groups on page 227
Parent configuration resource groups on page 228

List of tasks for managing configuration groups
After you have added configuration files to a group, you can manage your configuration groups by completing a set of tasks:
• Remove an existing configuration file from a group's configuration list.
• Change the order of files in the configuration list.
• Exclude configuration settings from being pushed to a storage system or a vFiler unit.
• Specify configuration overrides for a storage system or a vFiler unit assigned to a group.

• Push configuration files to a storage system or a group of storage systems, or to vFiler units or a group of vFiler units.
• View the status of push configuration jobs.
• Delete push configuration jobs.

Considerations when creating configuration groups
Before you create configuration groups, you must consider versions of operating systems and Data ONTAP, and storage systems belonging to configuration resource groups.
• Storage systems running on Data ONTAP 6.5.1 or later can be included in configuration resource groups.
• Storage systems running different operating system versions can be grouped in the same configuration resource group.
• A storage system can belong to only one configuration resource group, but can also belong to other non-configuration resource groups.
• A storage system that is a member of a configuration resource group can also belong to one or more groups.

Creating configuration resource groups
You can create configuration resource groups by creating an empty group and populating the group with storage systems.

Steps
1. Create an empty group.
2. From the Groups pane, select the group you want to edit.
3. From the Current Group pane, select Edit Membership.
4. Populate the group from the available members.
5. From the Current Group pane, select Edit Storage System Configuration to add one or more configuration files to the group.

Result
After configuration files have been associated with the group, an icon is attached to the group name so that you can identify the group as a configuration resource group.

Parent configuration resource groups
You can specify a parent configuration resource group from which a configuration resource group can acquire configuration settings.

Next topics
Parent group considerations on page 228
When to assign parent groups on page 228
Properties of configuration files acquired from a parent on page 229

Parent group considerations
You should consider some points about the inheritance from the parent files and the parent hierarchy before you assign a parent group.
• When you assign a parent, you inherit only the parent group's configuration files, not the storage systems in the parent group. You do not inherit the storage systems in the member group.
• Parent groups can have parents of their own. The configuration settings of all parents are added to the beginning of the child's configuration settings. There is no limit to the potential length of these parent "chains."

Note: Ensure that you review the settings in a parent group so that they do not have unintended consequences on your storage systems.

When to assign parent groups
You should assign a parent group if you want to control all or most of the configuration settings of a storage system from Operations Manager. Assigning a parent group enables you to quickly set the majority of the configuration settings of a storage system. You can then add any other configuration files that you might need to meet your deployment requirements.

You would probably not want to assign a parent if you want to use only a few of a parent group's settings. For example, if an existing group contains most of the access control list (ACL) rules you require, you cannot assign the group as a parent and inherit only those rules; you also cannot add more ACL rules in another configuration file. Remember that when you assign a parent group, you inherit all configuration settings in the parent group. Therefore, you should carefully scan a parent's configuration for any undesirable settings before assigning a parent.

Properties of configuration files acquired from a parent
One of the properties of configuration files acquired from parent groups is that they are initially read-only. When you include configuration files from another group, consider the following points:
• A configuration resource group can include configuration files from only one parent group.
• Configuration files acquired from parent groups are always read first. You cannot change the order in which the acquired files are read unless you re-order the configuration files from within the parent group.

Configuring multiple storage systems or vFiler units
To configure storage systems or vFiler units, you must have enabled SNMP on the storage systems, and DataFabric Manager must have already discovered them.

Steps
1. Pull a configuration file from a storage system or a vFiler unit.
2. Click (Management > Storage System or vFiler > Configuration Files > Edit Configuration File).
3. Edit the file settings.
4. Create a configuration resource group by adding a configuration file.
5. If necessary, click (Edit Storage System Configuration or Edit vFiler Configuration > Edit Configuration Pushed for Appliance) to specify configuration overrides for a specific storage system or vFiler unit, or to exclude configuration settings from being pushed to the storage system or vFiler units.
6. Click Compare Configuration Files to compare your storage system or vFiler configuration file against a standard template configuration.
7. Click Edit Storage System Configuration or Edit vFiler Configuration and push the configuration file or files out to the storage systems or to the group.
8. Verify that the configuration changes have taken effect by reviewing the status of the push jobs.
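For example, a push that retries eight times instead of the default five might look like the following sketch; the group name config_group is hypothetical, and the exact argument order of dfm config push should be verified with the dfm help output:

    dfm config push -R 8 config_group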


Backup Manager
You can manage disk-based backups for your storage systems by using Backup Manager. Backup Manager provides tools for selecting data for backup, scheduling backup jobs, backing up data, and restoring data. You can access it from the Backup tab in Operations Manager.

Note: Backup Manager does not support IPv6.

Next topics
Backup management deployment scenario on page 231
System requirements for backup on page 232
What backup scripts do on page 233
What the Backup Manager discovery process is on page 233
SnapVault services setup on page 235
Management of SnapVault relationships on page 236
What backup schedules are on page 238
Management of discovered relationships on page 241
What lag thresholds are on page 242
List of CLI commands to configure SnapVault backup relationships on page 244
Primary directory format on page 246
Secondary volume format on page 246

Backup management deployment scenario
The DataFabric Manager server uses the SnapVault technology of Data ONTAP to manage the backup and restore operations. The following figure shows a sample configuration for backup management by using a DataFabric Manager server.

System requirements for backup You must meet a set of requirements for DataFabric Manager. and secondary storage system. System The DataFabric Manager server station Primary storage system Requirements Protection Manager license • • • • Data ONTAP 6.232 | Operations Manager Administration Guide For Use with DataFabric Manager Server 4.5 or later for vFiler units SnapVault primary license SnapVault and NDMP enabled (configured using Data ONTAP commands or FilerView) Open Systems SnapVault module for open systems platforms. such as UNIX. primary storage system.0 The configuration provides data protection between two storage systems and from a storage system to a UNIX or a Windows storage system. Linux.4 or later for storage systems and Data ONTAP 6. and Windows . for data backup. The following table lists the requirements for each system.

System: Secondary storage system
Requirements:
• Data ONTAP 6.4 or later for storage systems and Data ONTAP 6.5 or later for vFiler units
• SnapVault secondary license
• SnapVault and NDMP enabled (configured using Data ONTAP commands or FilerView)
• Licenses for open systems platforms that are backing up data to secondary volumes on the secondary storage system

What backup scripts do
The prebackup and postbackup scripts help in bringing the databases into the hot backup mode before a backup is performed. DataFabric Manager provides the ability to run prebackup and postbackup scripts on specific primary directories, before and after data has been backed up from those directories. For more information about the process of setting up such scripts to run on primary directories, see the DataFabric Manager Backup man pages.

What the Backup Manager discovery process is
Backup Manager performs three kinds of discovery process: storage system discovery, SnapVault relationship discovery, and new directory and qtree discovery.

Next topics
Methods of storage system discovery on page 233
What SnapVault relationship discovery is on page 234
New directories for backup on page 234
Viewing directories that are not backed up on page 234

Methods of storage system discovery
Backup Manager provides two methods for discovering storage systems: SNMP and NDMP. DataFabric Manager uses Simple Network Management Protocol (SNMP) to discover and monitor storage systems. DataFabric Manager uses Network Data Management Protocol (NDMP) primarily to communicate with primary and secondary storage systems. NDMP credentials are used to identify whether the discovered storage system is a primary or a secondary storage system.

When DataFabric Manager discovers a storage system, it adds the storage system to its database with the NDMP authentication credentials used for the connection. If DataFabric Manager attempts to connect to an NDMP server, and the NDMP server rejects the authentication credentials, DataFabric Manager does not add the storage system to its database. When DataFabric Manager cannot authenticate a storage system with NDMP, it uses SNMP to discover primary and secondary storage systems. DataFabric Manager then adds the storage systems to its database without NDMP authentication credentials. By doing this, DataFabric Manager avoids spamming NDMP servers with "Login failed" errors. An Open Systems SnapVault monitor checks whether all Open Systems SnapVault hosts for which DataFabric Manager has valid NDMP authentication credentials are running.

What SnapVault relationship discovery is
DataFabric Manager discovers and imports existing SnapVault relationships by using NDMP. Backup Manager communicates with the primary and secondary storage systems to perform backup and restore operations. SnapVault relationship discovery is possible only if you have NDMP credentials for the primary and secondary storage systems.

New directories for backup
Backup administrators should know when new directories appear on primary storage systems so that they can schedule them for backup. If DataFabric Manager discovers directories that are not backed up on Open Systems SnapVault hosts, it generates an Unprotected Item Discovered event. To disable the discovery of directories that are not backed up on Open Systems SnapVault hosts, execute the dfbm primary dir ignore all command at the CLI.

Viewing directories that are not backed up
You can view the directories that are not backed up on Open Systems SnapVault hosts either by using the CLI or by using the GUI.

Steps
1. Do one of the following:
   • By using the command-line interface (CLI): execute the command dfbm report primary-dirs-discovered.
   • By using the graphical user interface (GUI): go to the Directories Not Scheduled For Backup view.

SnapVault services setup
Before you can use Backup Manager to back up your data using SnapVault relationships, you must prepare the primary storage system and the secondary storage system to use SnapVault. The initial setup includes installing licenses, enabling SnapVault and NDMP services, and setting up DataFabric Manager access permissions.

Note: The setup procedure requires that you use the CLI of the storage system. If you want to use the Run a Command tool, you must first enable RSH on the storage system.

Next topics
Configuring the SnapVault license on page 235
Enabling NDMP backups on page 235

Configuring the SnapVault license
You must configure the SnapVault license on your storage systems before you begin the backup operation. To use SnapVault, you must have separate SnapVault licenses for both the primary and secondary storage systems: a SnapVault primary license for the primary storage system and a SnapVault secondary license for the secondary storage system. The Open Systems SnapVault agent does not require a license on the agent itself, but the secondary storage system requires Linux, UNIX, or Windows licenses.

Steps
1. Enter license add sv_primary_license.
2. Enter license add sv_secondary_license.
3. Enter options snapvault.enable on.
4. Enter options snapvault.access host=snapvault_secondary to name the secondary storage systems that you want to designate for backups.
5. Enter options snapvault.access host=snapvault_primary to name the primary storage systems that you want to back up.
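Putting these steps together, the storage-system-side setup might look like the following sketch, where snapvault_primary and snapvault_secondary stand for your own system names and SnapVault is enabled on each system:

    pri> license add sv_primary_license
    pri> options snapvault.enable on
    pri> options snapvault.access host=snapvault_secondary

    sec> license add sv_secondary_license
    sec> options snapvault.enable on
    sec> options snapvault.access host=snapvault_primary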

Enabling NDMP backups
You must enable the NDMP service and specify the DataFabric Manager server to enable NDMP backups on your storage systems.

Steps
1. Enter ndmpd on to enable the NDMP service on each primary and secondary storage system.
2. Enter options ndmpd.access host=dfm_server_host to let DataFabric Manager perform backup and restore operations.

Management of SnapVault relationships
You can configure DataFabric Manager to manage SnapVault relationships. The configuration involves tasks such as adding primary and secondary storage systems, adding secondary volumes, and selecting primary directories or qtrees for backup. When DataFabric Manager discovers a primary or secondary storage system, Backup Manager lists the storage system in its backup reports.

Note: DataFabric Manager cannot discover open systems platforms. Although you can add open systems platforms to the DataFabric Manager database for backup management, you cannot manage these platforms with DataFabric Manager.

Next topics
Adding secondary storage systems on page 236
Adding secondary volumes on page 237
Adding primary storage systems on page 237
Selecting primary directories or qtrees for backup on page 238

Adding secondary storage systems
You can add a secondary storage system to Backup Manager from the Backup Summary page. You can use either Operations Manager or the dfbm command to add a storage system.

Before you begin
The SnapVault server feature must be licensed on the secondary storage system.

Steps
1. Click Backup > Storage Systems.
2. Click All Secondary Storage System from the View drop-down list.
3. In the Secondary Storage Systems page, enter the name (or IP address) of the secondary storage system.
4. Enter the NDMP user name used to authenticate the secondary storage system.

5. Obtain the NDMP password by entering the following command on the storage system: ndmpd password username
6. Enter the NDMP password.
7. Click Add.

Adding secondary volumes
You can add secondary volumes by selecting from a list of discovered volumes.

Before you begin
Ensure that you have added a secondary storage system to Backup Manager, so that the volumes of the secondary storage system are automatically discovered by DataFabric Manager and are added to its database.

Steps
1. Click Backup > Backup and then the icon next to Secondary Volume.
2. From the Secondary Volumes page, select the secondary storage system.
3. Select a volume on the secondary storage system.
   Note: If DataFabric Manager has not yet discovered a volume, you might need to click Refresh.
4. Click Add.

Adding primary storage systems
You can add a primary storage system to Backup Manager from the Primary Storage Systems page.

Steps
1. From the Primary Storage Systems page, enter the name (or IP address) of the primary storage system.

2. Enter the NDMP user name used to authenticate the primary storage system in the NDMP User field.
3. Type the NDMP password.
4. If the primary storage system you are adding is an Open Platform system, you can configure a non-default value for the NDMP port on which DataFabric Manager will communicate with the system.
5. Click Add.

Result
After you have added a primary storage system to Backup Manager, DataFabric Manager automatically discovers the primary directories and adds them to its database. DataFabric Manager also discovers primary qtrees, in the case of primary storage systems that support qtrees.

Selecting primary directories or qtrees for backup
You can schedule a primary directory or qtree for backup using the Storage Systems tab.

Before you begin
Before baseline transfers can start, you must add the primary directory to Backup Manager and configure its volumes or qtrees for backups to the secondary volume.

Steps
1. Click Backup > Storage Systems.
2. Select Qtrees Not Scheduled For Backup from the View drop-down list.
3. Select the qtree that you want to back up.
4. Click Back Up.

What backup schedules are
A backup schedule specifies how frequently data transfers are made from a primary directory or qtree to a secondary volume and how many Snapshot copies are retained on the secondary volume. You must associate a backup schedule with a secondary volume before automatic backups can occur.

Note: Only one backup schedule can be associated with a secondary volume. Therefore, all backup relationships associated with a secondary volume must use the same backup schedule.

Next topics
Best practices for creating backup relationships on page 239
Snapshot copies and retention copies on page 239
Requirements to create a backup schedule on page 239
Creating backup schedules on page 240
Local data protection with Snapshot copies on page 240
Snapshot copy schedule interaction on page 241

Best practices for creating backup relationships
Backups typically involve large amounts of data; therefore, you might want to follow certain recommendations before creating backup relationships:
• Create backup relationships during off-peak hours so that any performance impact does not affect users.
• Avoid creating multiple backup relationships at the same time to avoid initiating multiple baseline transfers. Alternatively, you can specify a limit on the amount of bandwidth a backup transfer can use.

Note: If a baseline backup transfer starts on a storage system when it is busy providing file services (NFS and CIFS), the performance of file services is not impacted, because these services are given higher priority than the backup. Instead, the backup takes longer to complete, because the storage system's resources are being consumed by services of higher priority.

Snapshot copies and retention copies
If a scheduled or manual backup occurs, DataFabric Manager directs each primary storage system to create a Snapshot copy of its current data. Then, DataFabric Manager directs the secondary storage system to initiate a backup to its secondary volume, based on the Snapshot copy made on the primary storage system. After a transfer to the secondary volume occurs, DataFabric Manager directs the secondary storage system to create a Snapshot copy of the entire secondary volume. This Snapshot copy is retained if a retention count other than zero is specified in the backup schedule. Otherwise, the copy is overwritten the next time a Snapshot copy of the secondary volume is created as a result of a new backup. If the number of Snapshot copies of the secondary volume being retained exceeds the number specified in the retention count, the oldest copy is purged.

Note: Unlike on supported NetApp primary storage systems, Snapshot copies of current data are not created on open systems platforms. Instead, entire changed files are transferred.

Requirements to create a backup schedule
Before you create a backup schedule, you must ensure that you have a name for the schedule, and have information about the weekly, nightly, and hourly schedules.

Creating backup schedules
To create a backup schedule, either use a template or customize an existing schedule.

Before you begin
When you create a backup schedule, you can identify it as the default backup schedule. Any secondary volumes subsequently added to Backup Manager are then automatically associated with this default backup schedule.

Steps
1. From the Backup page, click to open the Backup Schedules page.
2. Type a name for the schedule.
3. Create a schedule using one of the following methods:
   • Select a template. You can modify the schedule later and create a custom template.
   • Select None to create a schedule without using a template.
   The Schedule Details page is displayed.
4. Optionally, click Add a schedule to open the Edit Schedule page and configure backup times for each hourly, weekly, and nightly schedule.
5. Click Update.
6. Optionally, enter the retention count for the hourly, weekly, and nightly schedule.
7. Optionally, check the Use as Default for New Secondary Volumes check box to apply the schedule to all secondary volumes subsequently added to Backup Manager.
8. Click Add to add the schedule to the DataFabric Manager database.

Local data protection with Snapshot copies
If you want to keep several Snapshot copies of your data on supported primary storage systems for local data protection, you must not rely on the Snapshot copies created for backup transfers with DataFabric Manager. Use the Data ONTAP snap sched command or FilerView to provide local data protection on primary storage systems. However, to save resources, you should turn off Snapshot copy scheduling configured with the snapvault snap sched command. You can also generate events to avoid conflict between Snapshot and SnapVault.

Snapshot copy schedule interaction
The backup schedules defined in DataFabric Manager do not affect the Snapshot copy schedules defined on the secondary storage systems, because the backup schedules created in DataFabric Manager are independent of any Snapshot copy schedules that are defined on the secondary storage systems using the Data ONTAP snapvault snap sched and snap sched commands or FilerView. Although all types of schedules can exist simultaneously, turn off all Snapshot copy creation and retention schedules configured with Data ONTAP commands on the secondary storage systems.

If you do not turn off the backup schedules that are defined for a relationship on a secondary storage system, backup transfers and retention of backups as defined by those commands continue to occur on the secondary storage system. Although such a situation does not lead to any data loss, it causes the primary and secondary storage systems to make unnecessary transfers, thus consuming resources on those storage systems. Hence, it uses network bandwidth required for the backup transfers.

Management of discovered relationships
DataFabric Manager uses the following storage system information to manage discovered relationships: system type, OS version, and NDMP credentials. DataFabric Manager contacts the storage system using NDMP to get this information. However, DataFabric Manager does not always have the basic information it needs to authenticate itself with discovered primary storage systems.

Enabling DataFabric Manager to manage discovered relationships
You can enable DataFabric Manager to manage a discovered relationship by enabling NDMP, entering NDMP credentials, and associating a backup schedule.

Steps
1. Enable NDMP on the primary and secondary storage systems.
2. Enter the NDMP credentials for the primary and the secondary storage systems.
   This also enables DataFabric Manager to verify the NDMP credentials whenever you update the NDMP credentials of a storage system.
3. Associate a backup schedule with the secondary volume.
   Note: Turn off all Snapshot copy schedules and policies defined for the imported backup relationship that were created using the Data ONTAP snapvault snap sched command.
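If you prefer the CLI, the dfbm ndmp commands described later in this chapter manage the same credentials. The following is a sketch only, because the argument form shown here is an assumption; check the dfbm man pages for the exact syntax:

    dfbm ndmp list                    # view the stored NDMP credentials
    dfbm ndmp add <user> <password>   # placeholder arguments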

Steps 1. hours. In the SnapVault Replica Nearly Out of Date Threshold field. Click Update. After you add a secondary volume to Backup Manager.0 What lag thresholds are Lag thresholds are limits set on the time elapsed since the last successful backup. either for all volumes or for specific secondary volumes.25 hours or 20:15:00 as the value for the threshold. enter 20. enter the limit at which the backups on a secondary volume are considered obsolete. You can specify time in weeks. DataFabric Manager generates events of specific severity that indicates the acuteness of the event. From any Backup Manager report. Select Setup > Options and choose Backup Default Thresholds from the Edit Options menu at the left side of Operations Manager. days. minutes. To set a global option. Next topics Setting global thresholds on page 242 Setting local thresholds on page 242 Bandwidth limitation for backup transfers on page 243 Configuring backup bandwidth on page 243 Setting global thresholds The lag thresholds are applied by default to all secondary volumes in Backup Manager. you can change these lag thresholds. click the secondary volume name to access the Secondary Volume Details page. 2. you must complete a list of tasks. Steps 1. 4. 3. For example. or seconds. When those limits are exceeded.242 | Operations Manager Administration Guide For Use with DataFabric Manager Server 4. Setting local thresholds You must complete a list of tasks to change the lag thresholds for a specific secondary volume. However. In the SnapVault Replica Out of Date Threshold field. enter the limit at which the backups on a secondary volume are considered nearly obsolete. to specify 20 hours and 15 minutes. . the default values for lag thresholds are applied.

2. In the Lag Warning Threshold field, specify the lag warning threshold limit in weeks, days, hours, minutes, or seconds.
   The Lag Warning threshold specifies the lag time after which the DataFabric Manager server generates the SnapVault Replica Nearly Out of Date event.
3. In the Lag Error Threshold field, specify the lag error threshold limit in weeks, days, hours, minutes, or seconds.
   The Lag Error threshold specifies the lag time after which the DataFabric Manager server generates the SnapVault Replica Out of Date event.
4. Click Update.

Bandwidth limitation for backup transfers
You can specify a limit on the amount of bandwidth used when a backup transfer occurs. You specify this limit when you create a backup relationship, or later when a need arises. When you specify a limit for a backup relationship, the limit applies to all backup transfers—baseline and incremental—that occur for the relationship. If you do not specify a limit for a backup relationship, the maximum available bandwidth for a transfer is used.

Note: The bandwidth limit applies only to the backup operations and not to the restore operations. For the restore operations, the maximum available bandwidth is always used.

You must specify a limit for each relationship individually. However, if you do not want to apply a bandwidth limit permanently to a backup relationship, but still want to limit the amount of bandwidth that is used for the baseline transfer, you can apply the limit when you create the backup relationship. By doing this, the baseline transfer does not use more bandwidth than you specify. After the baseline transfer has occurred, you can remove the limit.

Configuring backup bandwidth
You cannot specify a global bandwidth limit for backup relationships.

Steps
1. Select the directory for which you want to configure a bandwidth limit by doing one of the following:
   • For a new backup relationship, select the Backup tab to open the Backup page.
   • For an existing backup relationship, select a primary directory name from any view to open the Primary Directory Details page.
2. Enter the bandwidth limit that you want to impose on the backup transfers for this relationship.
3. Click Update.

List of CLI commands to configure SnapVault backup relationships
To configure SnapVault backup relationships, you must execute a set of CLI commands.

CLI command                         Description
dfbm backup list
dfbm backup ls
dfbm backup start
    Initiates and browses backups for a secondary.

dfbm event list
    Lists the backup events.

dfbm job abort
dfbm job detail
dfbm job list
dfbm job purge
    Manages backup jobs.

dfbm ndmp add
dfbm ndmp delete
dfbm ndmp modify
dfbm ndmp list
    Manages the list of user names and passwords used for Network Data Management Protocol (NDMP) discovery.

dfbm option list
dfbm option set
    Manages the global backup options that control the operation of Backup Manager.

dfbm primary dir add
dfbm primary dir delete
dfbm primary dir discovered
dfbm primary dir ignore
dfbm primary dir list
dfbm primary dir modify
dfbm primary dir relinquish
dfbm primary dir unignore
    Manages primary directories that DataFabric Manager discovers.

dfbm primary host add
dfbm primary host delete
dfbm primary host list
dfbm primary host modify
    Manages primary storage systems.

dfbm reports events
dfbm reports events-error
dfbm reports events-unack
dfbm reports events-warning
    Runs reports on backup events.

dfbm reports jobs
dfbm reports jobs-1d
dfbm reports jobs-7d
dfbm reports jobs-30d
dfbm reports jobs-aborted
dfbm reports jobs-aborting
dfbm reports jobs-completed
dfbm reports jobs-failed
dfbm reports jobs-running
    Runs reports on backup jobs.

dfbm reports backups-by-primary
dfbm reports backups-by-secondary
    Runs backup reports based on primary or secondary relationships.

dfbm schedule add
dfbm schedule create
dfbm schedule delete
dfbm schedule destroy
dfbm schedule diag
dfbm schedule modify
    Manages backup schedules.

dfbm secondary host add
dfbm secondary host delete
dfbm secondary host list
dfbm secondary host modify
dfbm secondary volume add
dfbm secondary volume delete
dfbm secondary volume list
dfbm secondary volume modify
    Manages hosts and volumes used as backup destinations.
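To illustrate how these commands combine, the following sketch builds one backup relationship end to end. All system, volume, and schedule names are hypothetical, and the exact arguments of each command should be taken from its man page, because only the command names above are documented here:

    dfbm secondary host add vault1              # register the secondary storage system
    dfbm secondary volume add vault1:/vol1      # register the destination volume
    dfbm primary host add jupiter               # register the primary storage system
    dfbm schedule add nightly_sched             # create a backup schedule
    dfbm primary dir add jupiter:/vol0/home     # select a primary directory for backup
    dfbm backup start jupiter:/vol0/home        # start a manual backup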

Primary directory format
If you want to add a primary directory to Backup Manager without browsing through Operations Manager, use the following format: system_name:{drive_letter | volume_name}{path_name}

Note: The parameters are case-sensitive for UNIX systems, but not for Windows.

Supported storage systems: For a primary directory called engineering/projects in volume vol1 of a storage system named jupiter, enter the following text: jupiter:/vol1/engineering/projects

UNIX system: For a primary directory /usr/local/share on a UNIX system named mercury, enter the following text: mercury:/usr/local/share

Windows system: For a primary directory called engineering\projects on the D drive of a Windows system named mars, enter the following text: mars:D:\engineering\projects (your capitalization could be different).

Secondary volume format
If you need to add a secondary volume to Backup Manager, use the following format: system_name:volume_name
For example, pluto:/vol1.
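Combining these formats with the dfbm commands listed earlier, CLI additions might look like the following sketch; the argument syntax is assumed, and the system and path names come from the examples above:

    dfbm primary dir add jupiter:/vol1/engineering/projects
    dfbm primary dir add mars:D:\engineering\projects
    dfbm secondary volume add pluto:/vol1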

Disaster Recovery Manager
Disaster Recovery Manager is an application within DataFabric Manager that enables you to manage and monitor multiple SnapMirror relationships from a single interface. Disaster Recovery Manager provides a simple, Web-based method of monitoring and managing SnapMirror relationships between volumes and qtrees on your supported storage systems and vFiler units. A SnapMirror relationship is the replication relationship between a source storage system or a vFiler unit and a destination storage system or a vFiler unit by using the SnapMirror feature.

Note: Disaster Recovery Manager does not support IPv6.

You can view and manage all SnapMirror relationships through the Disaster Recovery tab of Operations Manager. You can also configure SnapMirror thresholds, so that Disaster Recovery Manager generates an event and notifies the designated recipients of the event. For more information about SnapMirror, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Note: The SnapMirror monitoring and management features are available only with the Business Continuance Management license. If you do not have this license, contact your sales representative.

Next topics
Prerequisites for using Disaster Recovery Manager on page 247
Tasks performed by using Disaster Recovery Manager on page 248
What a policy is on page 248
Connection management on page 251
Authentication of storage systems on page 253
Volume or qtree SnapMirror relationships on page 254
What lag thresholds for SnapMirror are on page 258

Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Prerequisites for using Disaster Recovery Manager
Before you use Disaster Recovery Manager to monitor SnapMirror relationships, ensure that certain prerequisites are met.
• The Business Continuance Management license key must be installed.
• The SnapMirror destination storage systems must be running Data ONTAP 6.2 or later.

• The source and destination storage systems must have Data ONTAP 6.5 installed to perform any of the SnapMirror management tasks.
• The source and destination storage systems configured with vFiler units must be running Data ONTAP 6.5 or later to perform any of the SnapMirror relationship management and monitoring tasks.

Note: Disaster Recovery Manager can discover and monitor only volume and qtree SnapMirror relationships in which the SnapMirror destinations have Data ONTAP 6.2 or later installed.

Tasks performed by using Disaster Recovery Manager
You can manage policies, connections, and SnapMirror relationships, and authenticate storage systems, using Disaster Recovery Manager. The Disaster Recovery Manager Home Page is the default page that appears when you click the Disaster Recovery tab. The page lists all the existing SnapMirror relationships.

What a policy is
A policy is a collection of configuration settings that you can apply to one or more SnapMirror relationships. The ability to apply a policy to more than one SnapMirror relationship makes a policy useful when managing many SnapMirror relationships. There are two types of policies that you can create and apply to SnapMirror relationships:
• Replication
• Failover

Next topics
What a replication policy does on page 248
What a failover policy does on page 250
Policy management tasks on page 250

What a replication policy does
A replication policy affects the way in which a source storage system replicates data to a destination storage system or a vFiler unit. SnapMirror replication can occur asynchronously or synchronously; therefore, based on the type of replication, there are two policies:
• Asynchronous replication policy
• Synchronous replication policy

List of parameters for an asynchronous replication policy
You must set a list of parameters for an asynchronous replication policy.

Schedule
Specifies when an automatic update occurs. You can specify the schedule using Operations Manager, or you can enter the schedule in the cron format.
Note: For more information about scheduling using the cron format, see the na_snapmirror.conf(5) man page for Data ONTAP.

Maximum Transfer Speed
Specifies the maximum transfer speed, in kilobytes per second.

Restart
Specifies the restart mode that SnapMirror uses to continue an incremental transfer from a checkpoint if it is interrupted.

Lag Warning Threshold
Specifies the limit at which the SnapMirror destination contents are considered nearly obsolete. If this limit is exceeded, Disaster Recovery Manager generates a SnapMirror Nearly Out of Date event.

Lag Error Threshold
Specifies the limit at which the SnapMirror destination contents are considered obsolete. If this limit is exceeded, Disaster Recovery Manager generates a SnapMirror Out of Date event.

TCP Window Size
Specifies the amount of data, in bytes, that a source can send on a connection before it requires acknowledgment from the destination that the data was received.

Checksum
Specifies the use of a Cyclic Redundancy Check (CRC) checksum algorithm. Use the checksum option if the error rate of your network is high enough to cause an undetected error.
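For instance, the na_snapmirror.conf(5) schedule format uses four cron-like fields (minute, hour, day-of-month, day-of-week). A hedged example of the kind of value you might enter:

    # minute  hour         day-of-month  day-of-week
    15        8,12,16,20   *             *
    # updates at 8:15, 12:15, 16:15, and 20:15 every day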

.. What a failover policy does A failover policy affects the process used by a SnapMirror relationship to recover from a disaster.. If you want to. otherwise. Create a new replication policy Click.250 | Operations Manager Administration Guide For Use with DataFabric Manager Server 4.. Disaster Recovery Manager sends an error message. . Use the checksum option if the error rate of your network is high enough to cause an undetected error. Failover policies consist of a path to a user-installed script that would be called when a disaster occurs before and after the following events: • • SnapMirror break SnapMirror resynchronization Policy management tasks This table describes policy management tasks and provides the location of the user-interface page that enables you to complete the task.0 Checksum Specifies the use of a Cyclic Redundancy Check (CRC) checksum algorithm. Disaster Recovery > Add a Mirror > Manage Replication Policies icon) OR Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Manage Replication Policies icon Edit an existing replication policy Disaster Recovery > Add a Mirror > Edit Replication Policy icon OR Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Policy name > Manage Replication Policies icon Delete an existing replication policy Disaster Recovery > Add a Mirror > Manage Replication Policies icon Note: OR Remove the policy from Disaster Recovery > Mirrors-Mirrored > SnapMirror SnapMirror relationships before Relationship Source > Manage Replication Policies icon deleting it.

If you want to create a new failover policy, click:
Disaster Recovery > Add a Mirror > Manage Failover Policies icon
OR
Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Manage Failover Policies icon

If you want to edit an existing failover policy, click:
Disaster Recovery > Add a Mirror > Edit Selected Failover Policy icon
OR
Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Policy name > Manage Failover Policies icon

If you want to delete an existing failover policy, click:
Disaster Recovery tab > Add a Mirror link > Manage Failover Policies icon
OR
Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Manage Failover Policies icon
Note: Remove the policy from SnapMirror relationships before deleting it; otherwise, Disaster Recovery Manager sends an error message.

Note: While upgrading DataFabric Manager from versions earlier than 3.2 to 3.2 or later, all the replication policies in the earlier versions that are not assigned to any mirror relationships are deleted.

Connection management
You can use connection management to specify one or two specific network paths between a source storage system or a vFiler unit and a destination storage system or a vFiler unit. The advantages of multiple paths between source and destination storage systems or vFiler units are as follows:
• Increased transfer bandwidth
• Networking failover capability
Note: Asynchronous SnapMirror does not support multiple paths in Data ONTAP 6.5. For more information, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Next topics
Connection management tasks on page 252

What the connection describes on page 252
What multipath connections are on page 253

Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Connection management tasks
The table describes connection management tasks and provides the location of the user-interface page that enables you to complete each task.

If you want to create a connection, go to the Connections page:
(Disaster Recovery tab > Add a Mirror link > Manage Connections icon)
OR
(Disaster Recovery tab > Mirrors-Mirrored link > SnapMirror Relationship Source > Manage Connections icon)

If you want to edit a connection, go to the Edit Connections page:
(Disaster Recovery tab > Add a Mirror link > Manage Connections icon > View drop-down list, connection name)
OR
(Disaster Recovery tab > Mirrors-Mirrored link > View drop-down list, connection name)

If you want to delete a connection, go to the Connections page:
(Disaster Recovery tab > Mirrors-Mirrored link > SnapMirror Relationship Source > Manage Connections icon)

What the connection describes
A connection specifies the parameters for one or two network paths between the source and the destination storage system or vFiler unit. The parameters that are specified by the connection are as follows:
Connection name: Name of the connection.
Connection mode: Defines the mode that the paths use.
IP address pairs: IP addresses of the source and destination storage systems and vFiler units that define the paths used by the SnapMirror relationship.

What multipath connections are
Synchronous SnapMirror supports up to two paths for a particular SnapMirror relationship. The paths can be Ethernet, Fibre Channel, or a combination of Ethernet and Fibre Channel. You must define one path, and you can define up to two paths. You can set the two paths to use one of the following two modes:
• Multiplexing mode: SnapMirror uses both paths at the same time, essentially load balancing the transfers. If one path fails, the transfers occur on the other path. After the failed path is repaired, the transfers resume using both paths.
• Failover mode: SnapMirror uses the first specified path as the desired path and uses the second specified path only if the first path fails.
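On the storage system side, a connection of this kind is expressed in the snapmirror.conf file. The following is a minimal sketch of a multiplexing-mode connection; the connection name and the source/destination IP address pairs are illustrative, and the authoritative syntax is documented in the Data ONTAP Data Protection Online Backup and Recovery Guide:

    dfm_conn = multi(10.10.1.5,10.10.2.5)(192.168.1.5,192.168.2.5)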

Authentication of storage systems
DataFabric Manager uses the NDMP protocol to manage SnapMirror relationships, and therefore authentication of the storage systems is necessary. Before you can perform any SnapMirror management task, DataFabric Manager must know the NDMP credentials for the source and destination storage systems.
Note: Disaster Recovery Manager can manage only storage systems running Data ONTAP 6.5 or later.
Disaster Recovery Manager discovers storage systems enabled with SnapMirror on your network. However, Disaster Recovery Manager might not identify the NDMP credentials for storage systems on your network. Storage systems for which Disaster Recovery Manager fails to recognize the NDMP credentials are listed on the Authenticate Storage Systems page (Disaster Recovery tab > Authenticate Storage systems link).

Next topics
Authentication of discovered and unmanaged storage systems on page 253
Addition of a storage system on page 254
Modification of NDMP credentials on page 254
Deletion of a storage system on page 254

Authentication of discovered and unmanaged storage systems
If you try to perform management tasks on a SnapMirror relationship in which a storage system is unauthenticated, Disaster Recovery Manager redirects you to the authentication page, where you need to enter the NDMP credentials (user name and password).

Addition of a storage system
Before you manage a SnapMirror relationship using Disaster Recovery Manager, you must add the NDMP credentials. You can add the NDMP credentials on the Storage systems page when you create the new SnapMirror relationship (Disaster Recovery tab > Add a Mirror link > Manage SnapMirror Hosts icon).
Note: The NDMP credentials are shared with the Backup Manager.

Modification of NDMP credentials
You can edit the NDMP credentials of a storage system. You might have to edit the NDMP credentials for selected SnapMirror relationships to change administrators and passwords as a security precaution. You can edit the NDMP credentials for a selected SnapMirror relationship from the Edit a storage system page (Disaster Recovery tab > Mirrors-Mirrored link > SnapMirror Relationship source > Storage system).

Deletion of a storage system
You can delete one or more storage systems from the managed storage system list if you do not use them in a SnapMirror relationship. You must not delete a storage system if you are changing its NDMP password. Instead, you must change the NDMP password by editing the storage system.

Volume or qtree SnapMirror relationships
To create a new SnapMirror relationship that is either a volume replication or a qtree replication, you must prepare the destination storage system. If the new SnapMirror relationship is a volume replication, you must create the volume (restricted or not restricted) on the destination storage system. The FilerView link provides a shortcut to do this.
Note: Monitor must be running for you to be able to access the FilerView interface for a storage system.
If the new SnapMirror relationship is a qtree replication, ensure that the volume on the destination storage system where you want to replicate a qtree with SnapMirror is online and not restricted. Do not manually create a destination qtree.

On upgrading to DataFabric Manager 3.3, you can select a qtree directly belonging to the vFiler unit by selecting the volume belonging to the storage system in a qtree SnapMirror relationship.

Next topics
Decisions to make before adding a new SnapMirror relationship on page 255
Addition of a new SnapMirror relationship on page 256
Modification of an existing SnapMirror relationship on page 256
Modification of the source of a SnapMirror relationship on page 256
Reason to manually update a SnapMirror relationship on page 256
Termination of a SnapMirror transfer on page 257
SnapMirror relationship quiescence on page 257
View of quiesced SnapMirror relationships on page 257
Resumption of a SnapMirror relationship on page 257
Disruption of a SnapMirror relationship on page 257
View of a broken SnapMirror relationship on page 257
Resynchronization of a broken SnapMirror relationship on page 258
Deletion of a broken SnapMirror relationship on page 258

Decisions to make before adding a new SnapMirror relationship
You can add a new SnapMirror relationship by specifying the Relationship type, Source, Destination, Connection, and Policies.

Relationship type: If the new SnapMirror relationship is a volume replication, you must create the volume on the destination storage system and mark the volume as restricted before you can create the SnapMirror relationship. If the new SnapMirror relationship is a qtree replication, ensure that the volume on the destination storage system where you want to replicate a qtree with SnapMirror is online and not restricted.
Connection: Specifying a defined connection is optional. If you do not specify a defined connection, the default network route is used.
Policies: Replication and failover policies are optional. If you do not specify a replication policy, only a baseline transfer is performed; you can add a replication policy later to schedule when incremental transfers occur. If you do not specify a failover policy, no user scripts are called during a SnapMirror break or resynchronization.

Related concepts
Connection management on page 251
What a policy is on page 248

Addition of a new SnapMirror relationship
You can use the SnapMirror Relationships page to create a new SnapMirror relationship. Adding a new SnapMirror relationship involves the following:
• Selecting the type of SnapMirror relationship, volume or qtree
• Selecting the source storage system and source volume or qtree
Note: If the SnapMirror relationship is a qtree replication, you select the volume and the qtree.
• Selecting the destination storage system and destination volume or qtree
• Selecting the connection
• Selecting the type of replication and failover policy

Modification of an existing SnapMirror relationship
You can edit an existing SnapMirror relationship from the Edit SnapMirror Relationship page. Editing an existing SnapMirror relationship involves the following tasks:
• Creating or editing replication policies and failover policies
• Assigning a connection for a relationship

Related concepts
Connection management on page 251
What a policy is on page 248

Modification of the source of a SnapMirror relationship
You can change the source of a SnapMirror relationship from the Edit SnapMirror Relationship page. To change the source of a SnapMirror relationship, use the Edit SnapMirror Relationship page (Disaster Recovery tab > Home > SnapMirror relationship source).

Reason to manually update a SnapMirror relationship
You can update an existing SnapMirror relationship from the Edit SnapMirror Relationship page. Updating a SnapMirror relationship manually is useful because you might need to run an unscheduled update to prevent data loss. Data loss can occur due to a scheduled or threatened power outage, or from a destination volume being taken offline for maintenance, repair, upgrade, or data migration. Use the Edit SnapMirror Relationship page (Disaster Recovery tab > Home > SnapMirror relationship source) to update a SnapMirror relationship in between scheduled incremental updates.

Termination of a SnapMirror transfer
You can abort a SnapMirror transfer from the Edit SnapMirror Relationship page. The Abort button on the Edit SnapMirror Relationship page is available only when SnapMirror transfers are in progress.

SnapMirror relationship quiescence
You can quiesce a SnapMirror relationship from the Edit SnapMirror Relationship page. You can quiesce a SnapMirror relationship to block updates to the destination storage system after existing volume or qtree updates are complete. Use the Edit SnapMirror Relationship page (Disaster Recovery tab > Home > SnapMirror relationship source) to quiesce a SnapMirror relationship. You can quiesce only volumes and qtrees that are online and that are SnapMirror destinations. You cannot quiesce a restricted or offline volume, or a qtree in a restricted or offline volume. If a qtree is not in a stable state (is in transition), quiescing the SnapMirror relationship forces it into a stable state.

View of quiesced SnapMirror relationships
You can use the Disaster Recovery Home page or the Quiesced SnapMirror Relationships page to view all quiesced SnapMirror relationships.

Resumption of a SnapMirror relationship
You can resume a SnapMirror relationship from the Edit SnapMirror Relationship page or the Quiesced SnapMirror Relationships page.

Disruption of a SnapMirror relationship
You can break a quiesced SnapMirror relationship from the Edit SnapMirror Relationship page or the Quiesced SnapMirror Relationships page. You can break SnapMirror relationships if you want to temporarily end a SnapMirror relationship between a source and a destination volume or qtree. When you break a relationship, the source is released from the destination volume or qtree, allowing the source to delete its base Snapshot copy for the SnapMirror relationship.

View of a broken SnapMirror relationship
You can view all the broken SnapMirror relationships from the Disaster Recovery Home page or the Broken SnapMirror Relationships page.

Resynchronization of a broken SnapMirror relationship
You can use the Edit SnapMirror Relationship page or the Broken SnapMirror Relationships page to perform resynchronization tasks related to SnapMirror relationships. You can resynchronize a source and a destination volume or qtree in one of the following ways:
• You can resynchronize the SnapMirror relationship that you broke.
• You can reverse the functions of the source volume and the destination volume when you resynchronize.

Deletion of a broken SnapMirror relationship
You can delete a broken SnapMirror relationship from the Edit SnapMirror Relationship page or the Broken SnapMirror Relationships page.
Note: The only way to restore a deleted SnapMirror relationship is to initialize the relationship.

What lag thresholds for SnapMirror are
After Operations Manager is installed, the lag thresholds of all the SnapMirror source volumes and destination volumes are set to default values; however, you might want to change these thresholds. Disaster Recovery Manager automatically generates SnapMirror events based on these thresholds. If you want to receive notifications in the form of e-mail messages, pager alerts, or SNMP traps when a SnapMirror event occurs, you can set up alarms.
Note: Disaster Recovery Manager does not generate events when lag thresholds of SnapMirror sources are crossed. Only lag thresholds of SnapMirror destinations are used for generating events.
You can change the threshold values of the following:
• All SnapMirror sources and destinations in the Disaster Recovery Manager database
• All SnapMirror sources and destinations of a specific group
• All mirrors of a SnapMirror source
• A specific SnapMirror source or destination
For more information on the thresholds and the default values, see Operations Manager Help.

Next topics
Where to change the lag thresholds on page 259
Lag thresholds you can change on page 259
Reasons for changing the lag thresholds on page 259
What the job status report is on page 259

Where to change the lag thresholds
You can change the lag thresholds from the Edit Policy page (Disaster Recovery tab > Home link > SnapMirror relationship Source > Manage Replication Policies icon > Policy name).

Lag thresholds you can change
You can change the following lag thresholds:
• SnapMirror Lag Warning Threshold: This option specifies the limit at which the SnapMirror destination contents are considered nearly obsolete. If this limit is exceeded, Disaster Recovery Manager generates a SnapMirror Nearly Out of Date event. You can specify the lag warning threshold limit in weeks, days, hours, minutes, or seconds. For example, to specify 20 hours and 15 minutes, enter 20.25 hours (0.25 hour x 60 = 15 minutes) or 20:15:00 as the value for this option.
• SnapMirror Lag Error Threshold: This option specifies the limit at which the SnapMirror destination contents are considered obsolete. If this limit is exceeded, Disaster Recovery Manager generates a SnapMirror Out of Date event. You can specify the lag error threshold limit in weeks, days, hours, minutes, or seconds. For example, to specify 20 hours and 15 minutes, enter 20.25 hours or 20:15:00 as the value for this option.

Reasons for changing the lag thresholds
You can change the default values of the lag thresholds to a lower value so that you are notified earlier than you would be with the default settings. If you use your SnapMirror destination to distribute data to remote sites, you must keep the latest data at the destination. In such a case, you must set the SnapMirror schedule to transfer data from the source to the destination frequently. After you have done this, Disaster Recovery Manager can generate an event sooner than the default lag time would allow, so that you can take corrective action sooner.

What the job status report is
The Job Status report displays the status of SnapMirror jobs along with information such as the time at which the jobs were started and their job IDs. The Job Status report is identical in appearance and function to the Job Status report for Backup Manager.


Maintenance and management
You can configure and maintain DataFabric Manager through the CLI.

Next topics
Accessing the CLI on page 261
Where to find information about DataFabric Manager commands on page 262
Audit logging on page 262
What the remote platform management interface is on page 265
Scripts overview on page 266
What the DataFabric Manager database backup process is on page 269
What the restore process is on page 274
Disaster recovery configurations on page 276

Accessing the CLI
You can access the CLI using telnet or the console of the system on which DataFabric Manager is installed. Access the CLI on local or remote systems as follows:

If you want to access the CLI on a local Windows system, use the command prompt window of the Windows workstation through Start > Run.

If you want to access the CLI on a local Linux system, use any shell prompt on your workstation.

If you want to access the CLI on a remote Windows or Linux system, make a Telnet connection from the remote host to the DataFabric Manager system you want to access:
1. Enter the following command:
telnet hostname
hostname is the host name or IP address of the system running DataFabric Manager. When connected to a terminal server, use the host name or IP address, and the port number, of the terminal server to access the console of the workstation:
telnet {term_server}:port_number
2. Initiate authentication with your user name and password.
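For example, using the two command forms above with a DataFabric Manager host named dfm01.example.com and a terminal server named ts01.example.com listening on port 2023 (the host names and port are illustrative):

    telnet dfm01.example.com
    telnet ts01.example.com:2023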

Where to find information about DataFabric Manager commands
There are two ways to find information about DataFabric Manager commands: by accessing the Help and by using the dfm help command.
• Use the dfm help command, which has the following syntax: dfm help command.
• Access the man pages through the table of contents of the Operations Manager Help or the following URL: http://server_ipaddress:8080/man/. On a Linux system, you can access the man pages by running the command source /opt/NTAPdfm/bin/vars.sh, and then man dfm.
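For example, on a Linux system you can combine the two approaches described above (the subcommand passed to dfm help is illustrative):

    dfm help report
    source /opt/NTAPdfm/bin/vars.sh
    man dfm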

Audit logging
Audit logging is the process of logging every activity performed by DataFabric Manager for a later review. DataFabric Manager logs all the activities in the audit log file. The audit log file resides in the default log directory of DataFabric Manager.
System administrators view the audit log file for the following reasons:
• Determine the recently changed configurations to understand why a problem is occurring.
• Determine when a specified change in the configuration of the system was made.
• Determine who made the specified change in the configuration of the system and ask them why the change was made.
• Identify attempts to subvert the security of the system.

Next topics
Events audited in DataFabric Manager on page 262
Global options for audit log files and their values on page 263
Format of events in an audit log file on page 263
Permissions for accessing the audit log file on page 265

Events audited in DataFabric Manager
DataFabric Manager logs events, and each of these events is recorded. The events that can be audited in DataFabric Manager, and the information that is recorded about each event, are described here:
Authentication events: DataFabric Manager logs each authentication action that succeeded or failed. The user name associated with the authentication attempt is also recorded.
Authorization events: DataFabric Manager logs each authorization failure and the user name associated with it.
Command execution: DataFabric Manager logs the execution of each command. The complete command line (including options and arguments) is recorded in the audit log file. DataFabric Manager also logs the name of the user who executed the command and, in the case of CLI requests, the failure status of the command, if any.
API calls: DataFabric Manager logs the invocation of any API by using the DataFabric Manager service. The complete details of the API call, and the name of the authenticated user on whose behalf the API was invoked, are recorded in the audit log file.
Scheduled actions: When the scheduler starts a job by invoking a CLI, DataFabric Manager logs the scheduled action and the user affiliated with it in the audit log file.
In addition, a timestamp is recorded for each event, along with the type of request: Web or CLI. In the case of APIs, the IP address of the appliance from which the requests are received is logged. In the case of CLI requests, the IP address is always that of DataFabric Manager.

Global options for audit log files and their values
A global option, auditLogForever, is used to keep the audit log files forever. The valid values for this option are yes and no, and the default value is no. You must have the Core Control capability to modify the auditLogEnabled and auditLogForever global options.
Note: When you set the auditLogForever global option to yes, the number of audit log files (each 3 MB in size) can grow excessively. You have to ensure that you have enough space on DataFabric Manager to keep the audit log files forever.
License features required: The dfm option set command requires an Operations Manager license. The dfm option list command requires the global read capability, and the dfm option set command requires the global write capability.
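For example, the auditLogForever option named above could be enabled at the CLI with the dfm option set command; the name=value form shown here is a sketch, and the UI offers the same setting:

    dfm option set auditLogForever=yes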

Format of events in an audit log file
The format of events in the audit log file is as follows:
<timestamp> [<application-name>:<priority>]:<username>:<protocol><label>:[ip-address]:<intent>:<message>

Example of events in the audit log file:
Apr 11 00:04:19 [dfm:NOTIC]:root:LOG:action:::Added new administrator "testu1":
Apr 20 11:57:27 [dfm:NOTIC]:root:API:in:[10.72.1.2]:Listing database
Apr 23 14:56:40 [dfm:NOTIC]:root:WEB:in:[ABCD:EF01:2345:6789:ABCD:EF01:2345:6789]:dfm report view backups: <ss><dfm-backup-directory>opt<dfm-backup-directory><dfm-backup-verify-settings>

The <application-name> value denotes the application invoking the audit log facility. For example, it can be dfm/dfbm/dfdrm if a CLI or Operations Manager request is being audit logged from the dfm/dfbm/dfdrm executable. In the case of APIs, <application-name> is the actual name of the application that called the API (for example, dfm and sdu). If the API was called by an external application other than DataFabric Manager, and it does not pass the name of the application in the API input parameters, the field is not printed in the log message.

The message priority field <priority> can have one of the following values:
• EMERG: unusable system
• ALERT: action required
• CRIT: critical conditions
• ERROR: error conditions
• WARN: warning conditions
• NOTIC: normal, but significant condition
• INFO: informational messages
• DEBUG: debug-level messages

The <username> field logs the names of the users who invoked the CLIs and APIs.

The <protocol> field describes the source of the event being logged. The protocol label can have one of the following values:
• API: In the case of an API invocation
• CMD: In the case of CLI commands
• LOG: When an event is explicitly logged by the system
• WEB: In the case of an Operations Manager request

The <label> field describes the type of the event being logged. The message label can have one of the following values:
• IN: for input
• OUT: for output (for example, in the case of API calls)
• ERR: for error (for example, in the case of log in failures)
• ACTION: for actions initiated by the user

The <ip-address> value for APIs is the IP address of the system from which the API is invoked. In the case of the CLI, it is the IP address of the DataFabric Manager server. For requests coming through Operations Manager, it is the IP address of the workstation on which Operations Manager is installed.

The <intent> field describes the following information:
• If the protocol is API, it conveys the intention of the API.
• If the protocol is CMD, it is the actual command used.
• If the protocol is WEB, it is the URL of the Web page.
• If the audit-log API is called, this field remains blank.

The <message> field content depends on the value of <protocol>, as follows:
• If <protocol> is API, the XML input to the API is logged, excluding the <netapp> element.
• If <protocol> is CMD, it contains output or an error message.
• If <protocol> is WEB, it is empty.

Following is an example:
July 04 22:11:59 [dfm:NOTIC]:NETAPP\tom:CMD:in:127.0.0.1:dfm user login username = tom password=******:Logged in as<B>NETAPP\tom<\B><BR>
July 06 13:27:15 [dfm:NOTIC]:NETAPP\tom:WEB:in:127.0.0.1:dfm host password set -p ****** jameel:Started job:2
July 06 14:42:55 [dfm:NOTIC]:NETAPP\tom:API:in:127.0.0.1:Add a role to a user:<rbac-admin-role-add><role-name-or-id>4</role-name-or-id><admin-name-or-id>TOM-XP\dfmuser</admin-name-or-id></rbac-admin-role-add>

Permissions for accessing the audit log file
For accessing the audit log file, Windows and UNIX users have separate permissions:
• Linux: "root" users on Linux have both read and write permissions; the file is owned by root.
• Windows: users in the Administrators group have both read and write permissions.
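Putting the format together, the first example entry above breaks down field by field as follows:
• <timestamp>: July 04 22:11:59
• <application-name>: dfm
• <priority>: NOTIC
• <username>: NETAPP\tom
• <protocol>: CMD
• <label>: in
• [ip-address]: 127.0.0.1
• <intent>: dfm user login username = tom password=****** (the actual command used)
• <message>: the command output, here the login confirmation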

What the remote platform management interface is
The remote platform management interface enables you to remotely perform routine system maintenance tasks, such as resetting a system with backup firmware. The following maintenance tasks can be performed using the interface on DataFabric Manager:
• Control system power, such as powering on or off the storage system
• Reset system firmware
• View system and event logs
• Obtain system sensor status
You can use the Run a Command tool in Operations Manager, or use the dfm run cmd -r command on the CLI of DataFabric Manager, to execute the remote platform management commands. By using the interface, you can access the Remote LAN Module (RLM) cards on the 30xx, 31xx, and 60xx storage systems.

Next topics
RLM card monitoring in DataFabric Manager on page 266
Prerequisites for using the remote platform management interface on page 266

RLM card monitoring in DataFabric Manager
DataFabric Manager monitors the RLM card installed on your storage system and obtains its status by issuing an ICMP ECHO request. By default, DataFabric Manager pings the card every 15 minutes. You can change this monitoring interval using the Operations Manager Tools menu. When the card responds to the request, Operations Manager displays the card status as Online in the Appliance Details page.

Prerequisites for using the remote platform management interface
Before performing remote maintenance by using the remote platform management interface, you must configure the RLM card IP address on the storage system. For procedures to configure an RLM card IP address, see the Operations Manager Help. After you have configured the card's IP address, you must set the following parameters in Edit Appliance Settings for the destination storage system:
Remote Platform Management IP address: This is the IP address of the RLM card on the storage system.
Appliance login: This is the login user name.
Appliance password: This is the login password.
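For example, a remote platform management command can be issued through the dfm run cmd -r form described above; the storage system name and the remote command in this sketch are illustrative, and the exact argument order is documented in the dfm run man page:

    dfm run cmd -r filer01 version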

Scripts overview
The installation, management, and execution of your scripts are supported by DataFabric Manager. You begin creating a script by writing the script. You can use any scripting language, but keep in mind that your choice impacts your network administrators: the network administrator needs to install the interpreter you require on the system where DataFabric Manager is installed. It is recommended that you use a scripting language such as Perl, because Perl is typically installed with the operating system on Linux and Windows workstations.
See the Operations Manager Help for task procedures and option descriptions.

Next topics
Commands that can be used as part of the script on page 267
Package of the script content on page 267
What script plug-ins are on page 267
What the script plug-in directory is on page 268
What the configuration difference checker script is on page 268
What backup scripts do on page 233

Commands that can be used as part of the script
Because scripts are executed by DataFabric Manager, your script is able to execute any of the commands available as part of the DataFabric Manager CLI. Your script can complete the following tasks:
• Use the dfm run command to execute commands on storage systems for which credentials have been specified on your DataFabric Manager server.
• Use the dfm report command to import information from Operations Manager reports into a Perl script. In addition, the dfm report -F perl command in DataFabric Manager generates reports for direct inclusion in Perl scripts. For more information, see the dfm report command man page.
• Use the dfm event generate command to enable your scripts to generate events.

Package of the script content
After you have written your script, you need to package it as a ZIP file. The .zip file must contain the script, any data files that are needed by the script, and a file named package.xml. For more information about package.xml and the XML schema information, see the Operations Manager Help.
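For example, a script package might be laid out as follows; the file names other than package.xml are illustrative:

    myscript.zip
        package.xml      - information about the script and, optionally, new event type definitions
        myscript.pl      - the script itself
        thresholds.dat   - a data file needed by the script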

What script plug-ins are
By using the script plug-in framework, you can perform several tasks in Operations Manager:
• Manage the scripts you add to DataFabric Manager
• Create and manage script jobs
• Create and manage schedules for the script jobs you have created
• Define new event classes during script installation, or generate these events during script execution
For more information about creating scripts and the contents of the script .zip file, see the Operations Manager Help. The package.xml file contains information about the script and might optionally contain definitions of new event types that your script generates. After you have your .zip file ready, you can install it on the DataFabric Manager server to verify that your packaging functions correctly. After you verify the functionality, you can use, or distribute, the script.

What the script plug-in directory is
By default, scripts are installed in the script-plugins subdirectory of the installation directory. You can change the location of this directory by using the scriptDir option of the dfm option command. The option value must conform to the following requirements:
• The value must be an absolute Linux or Windows path.
• The value must include an existing directory on the DataFabric Manager server.

What the configuration difference checker script is
You can use the Configuration Difference Checker script to detect when the configuration of a storage system changes from a previously known configuration. If a configuration change is detected, DataFabric Manager generates an event to notify you of the configuration changes. When the Configuration Difference Checker script runs, it stores a history of the configuration changes, and stores one configuration obtained from the last time the script ran. By using job reports, you can view the configuration changes.
You can manage the Configuration Difference Checker script (config_diff.zip) with the functionality that is provided to you as part of the script plug-in framework. You can obtain the Checker script from ToolChest on the NOW site. ToolChest contains many frequently used tools, including downloadable tools and utilities, third-party tools, Web applications, and miscellaneous useful links.

Related information
ToolChest - http://now.netapp.com/eservice/toolchest/

What backup scripts do
The prebackup and postbackup scripts help in bringing the databases into the hot backup mode before a backup is performed. DataFabric Manager provides the ability to run prebackup and postbackup scripts on specific primary directories, before and after data has been backed up from those directories. For more information about the process of setting up such scripts to run on primary directories, see the DataFabric Manager Backup man pages.

What the DataFabric Manager database backup process is
DataFabric Manager database backup is simple and powerful. You can back up the DataFabric Manager database, script plug-ins, and performance data without stopping any DataFabric Manager services. However, data collection and view modifications of Performance Advisor are suspended during the backup process.
There are two types of backups:
• Archive: The backup process backs up your critical data in compressed form, using the ZIP format. The DataFabric Manager server automatically converts the DataFabric Manager data to an archive format and stores the backup in a local or remote directory. You can easily move an archive-based backup to a different system and restore it. But the backup process is time-consuming.
• Snapshot-based: In the Snapshot-based approach, the backup process uses the Snapshot technology to back up the database. You can quicken the backup process through this approach. But you cannot easily move a Snapshot-based backup to a different system and restore it.

Next topics
When to back up data on page 270
Where to back up data on page 270
Recommendations for disaster recovery on page 270
Backup storage and sizing on page 270
Limitation of Snapshot-based backups on page 271
Access requirements for backup operations on page 271
Changing the directory path for archive backups on page 271
Starting database backup from Operations Manager on page 272
Scheduling database backups from Operations Manager on page 272
Specifying the backup retention count on page 273
Disabling database backup schedules on page 273
Listing database backups on page 273
Deleting database backups from Operations Manager on page 273
Displaying diagnostic information from Operations Manager on page 274
Exportability of a backup to a new location on page 274

When to back up data
You must back up your data regularly. The following are other situations in which you should back up your data:
• Before upgrading (or during the upgrade, as one of the installation wizard steps)
• Before any maintenance on the DataFabric Manager server host or an operating system upgrade

Where to back up data
By default, the DataFabric Manager server creates the archive backups in two directories:
• /opt/NTAPdfm/data on a UNIX system
• C:\Program Files\NetApp\DataFabric Manager\DFM\data on a Windows system
You can also specify a different target directory. The backup file has an added file name extension of .db, .zip, .gz, or .ndb, depending on the version of the DataFabric Manager server that you are running.
Note: The current version of DataFabric Manager uses the .ndb format.
The Snapshot-based backups are volume Snapshot copies. Therefore, unlike in the archive backups, you do not have to specify a target directory in the Snapshot-based backups.

Recommendations for disaster recovery
If you are using the archive-based backups, set the database backup location to a remote location. You can then access the backup even if the DataFabric Manager server is not accessible.

Backup storage and sizing
When you perform an archived backup, the DataFabric Manager server calculates the amount of disk space required to complete the backup successfully. In the case of Snapshot-based backups, the DataFabric Manager server does not calculate the amount of disk space required. Therefore, it is a good practice to provide enough disk space to hold the backups. You can view the backup information in the Database Backup Schedule Confirmation page. Similar to archive-based backups, integration support of the DataFabric Manager server with SnapVault or SnapMirror is available for Snapshot-based backups.

Limitation of Snapshot-based backups
To configure the DataFabric Manager database for Snapshot-based backups on Windows and Linux, you must install the appropriate versions of SnapDrive software.

SnapDrive restrictions on Windows
To configure the DataFabric Manager database for Snapshot-based backups on Windows, you must install SnapDrive 4.2.1 for Windows or later.

SnapDrive restrictions on Linux
To configure the DataFabric Manager database for Snapshot-based backups on Linux, you must install SnapDrive 2.1 for UNIX or later. For information about supported versions, see the SnapDrive for UNIX Compatibility Matrix page on the NOW site.

Related information
The NOW site - http://now.netapp.com/

Access requirements for backup operations
You must log in to the DataFabric Manager server with the CoreControl capability in the Global scope to perform the following backup operations:
• Create
• Delete
• Start
• Abort
• Set schedule
• Enable schedule
• Export
• Diagnose
To list the backups on a directory, and to get the status and schedules of backups, log in to Operations Manager with the GlobalRead role.

Changing the directory path for archive backups
You can update the directory path for archive backups from Operations Manager.
Steps
1. Select Setup > Options.
2. Click Database Backup under Edit Options.

3. In the Archive Backup Destination Directory field, change the directory path if you want to back up to a different location.
4. Click Update.

Starting database backup from Operations Manager
You can manually start a database backup from Operations Manager.
Steps
1. Select Setup > Database Backup.
2. Select Backup Type. You can choose between the archive-based and Snapshot-based backups.
3. Click Run Backup. The Database Backup Confirmation page is displayed if the Backup Type is Archive.
4. Click Back Up Now.
After you finish
Note: To stop a backup that is in progress, click Abort Backup.

Scheduling database backups from Operations Manager
You can schedule a database backup to occur at a specific time on a recurring basis.
About this task
While scheduling database backup in the archive format, hourly backups and multiple backups in a day are not feasible, because backups in the archive format take time. Entries are based on hourly (less frequent than hourly), daily, and weekly schedules.
Steps
1. Select Setup > Database Backup.
2. Select Backup Type. You can choose between the archive-based and Snapshot-based backups.
3. Select Enable Schedule to activate the schedule.
4. Select the frequency to run the backup and enter the time to run it.
5. Verify that the settings are correct, and then click Update Schedule.

Specifying the backup retention count
You can specify the number of backups the DataFabric Manager server needs to retain.
Steps
1. Select Setup > Options.
2. Click Database Backup.
3. In the Database Backup Retention Count field, enter the count.
4. Click Update.

Disabling database backup schedules
You can temporarily disable a DataFabric Manager database backup schedule.
Steps
1. Select Setup > Database Backup.
2. To disable the schedule, ensure that the Enable Schedule check box is not selected.
3. Click Update Schedule.

Listing database backups
You can view information about DataFabric Manager database backups.
Steps
1. Select Setup > Database Backup.
2. Click List Database Backups.

Deleting database backups from Operations Manager
You can delete a DataFabric Manager database backup using Operations Manager.
Steps
1. Select Setup > Database Backup.
2. Click List Database Backups.
3. Select Delete Selected.

Displaying diagnostic information from Operations Manager
The DataFabric Manager server might fail to detect the DataFabric Manager setup on a LUN. You can investigate such backup-related issues using diagnostic information.
Steps
1. Click Setup > Database Backup.
2. Click Update Schedule after you schedule a database backup.
3. Click Diagnose.

Exportability of a backup to a new location
You can export a Snapshot-based backup to a different location using the dfm backup export <backup_name> command. If target_filepath is not specified, the archive-based backup is created by default in the directory specified in the Archive Backup Destination Directory field using Operations Manager. To export the backup file to a new path, use the dfm backup export <backup_name> [target_filepath] command. You can overwrite an existing backup at a new location by using the dfm backup export -f <backup_name> command.
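For example, using the three command forms described above (the backup name and target path are illustrative):

    dfm backup export backup1
    dfm backup export backup1 /tmp/dfm-backups/backup1.ndb
    dfm backup export -f backup1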

What the restore process is
You can restore the DataFabric Manager database backups only through the CLI. You can restore all data by using the dfm backup restore command.
Note: Do not run any dfm command during the DataFabric Manager restore or upgrade operation. If some commands are run, they can interfere with the restore or upgrade operation by locking database tables and causing the operation to fail.
If, for some reason, the restore job fails, DataFabric Manager attempts to restore the database to its earlier state, and all temporary files that were created are deleted. If this attempt also fails, the CLI prompts you with a set of instructions to restore the original database.

Next topics
Restoring the database from the archive-based backup on page 275
Restoring the database from the Snapshot copy-based backup on page 275
Restoration of the database on different systems on page 275

Restoring the database from the archive-based backup
You can restore the DataFabric Manager database on the same system.
Steps
1. Type the following command at the command line to display the names of backup copies of the database:
dfm backup list
2. Type the following command at the command line:
dfm backup restore <backup_name>
backup_name is the name of the file to which you saved your DataFabric Manager database. You can specify the absolute path for the backup file using the dfm backup restore command.
A "Completed restore" message is displayed when the restore process finishes successfully.

Restoring the database from the Snapshot copy-based backup
You can restore the DataFabric Manager database on the same system.
Steps
1. Type the following command at the command line:
dfm backup restore <backup_name>
backup_name is the name of the backup copy in the database.
A "Completed restore" message is displayed when the restore process finishes successfully.
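For example, an archive-based restore session using the commands above might look like the following (the backup name is illustrative):

    dfm backup list
    dfm backup restore backup1
    Completed restore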

Restoration of the database on different systems
To restore the DataFabric Manager database and other configuration from the archive-based backup on to another server, create a backup file (dfm backup create), copy that backup file onto the new system, and then restore the backup on that system (dfm backup restore). Copy the backup file into the databaseBackupDir directory.
You can restore the database from a Snapshot-based backup on to another server in one of two ways:
• One way is to export the Snapshot-based backup in the archive format (dfm backup export), copy that backup file onto the new system, and then restore the backup on that system (dfm backup restore).
• Another way to restore the database from a Snapshot-based backup is to connect to the Snapshot copy using the SnapDrive commands, and run the dfm datastore setup -n <target_dir_LUN> command.
However, after you restore a database from one DataFabric Manager server to another, the local administrator account might be different on the new server. This restore operation would result in the local administrator account losing access to the restored DataFabric Manager database. If this happens, you need to perform the following tasks:
• Log in to the new system as a user with GlobalFullControl.
• Add the local account administrator of the new system back into DataFabric Manager with GlobalFullControl capabilities.
This can usually be avoided by ensuring that a domain user (who has permission to log in to both systems) exists in DataFabric Manager with the GlobalFullControl role before migrating the database.
Note: If there are no GlobalFullControl users that can access the new system, contact technical support for assistance.

Disaster recovery configurations
You can configure DataFabric Manager for disaster recovery by using Protection Manager and SnapDrive. Disaster recovery support enables you to recover the DataFabric Manager services quickly on another site, and prevents any data loss due to disasters that might result in a total site failure. A disaster recovery plan typically involves deploying remote failover architecture. This remote failover architecture allows a secondary data center to take over critical operations when there is a disaster in the primary data center.
Note: Disaster recovery does not support IPv6 addressing.

Next topics
Disaster recovery using Protection Manager on page 276
Disaster recovery using SnapDrive on page 282

Disaster recovery using Protection Manager
If you have a Protection Manager license, then you can use Protection Manager to configure DataFabric Manager for disaster recovery. By using Protection Manager and SnapMirror technology, Snapshot-based backups are made at the primary site according to the backup schedule, and the Snapshot-based backups are registered with Protection Manager. These Snapshot-based backups are then mirrored to the secondary site according to the Protection Manager Mirror policy. If a catastrophic failure of the primary site occurs, DataFabric Manager services should be started on the secondary site, using mirrored data, by running the dfm datastore mirror connect command.

Next topics
Limitation of disaster recovery support on page 277

Prerequisites for disaster recovery support on page 277
Setting up DataFabric Manager on page 278
Recovering DataFabric Manager services on page 279
Recovering DataFabric Manager services using the dfm datastore mirror connect command on page 280
Failing back DataFabric Manager services on page 280

Limitation of disaster recovery support
You must use SnapDrive to configure DataFabric Manager on Linux for disaster recovery. DataFabric Manager is dependent on SnapDrive for Windows to provide disaster recovery support. For more information, you can view the technical report on the NetApp Web site.
Related information
Disaster Recovery Support for DataFabric Manager Data Using SnapDrive

Prerequisites for disaster recovery support
Ensure that the following prerequisites are met for disaster recovery support for DataFabric Manager data:
• You must be a Windows domain administrator.
• You must have SnapDrive for Windows 6.0 or later.
• You must have a Protection Manager license.
• You must have a SnapMirror license for both the source and destination storage systems.
• You must configure Snapshot-based backup for DataFabric Manager.
• DataFabric Manager data must be stored in a LUN.
• You must be using the same version of Data ONTAP on both the source and destination storage systems, because Volume SnapMirror is used for mirroring the data. To ensure that you have the required Data ONTAP version for SnapDrive, see the SnapDrive/Data ONTAP Compatibility Matrix page at the NOW site.
• The source and destination storage systems must be managed by Protection Manager so that you can configure the storage systems.
• It is best to have a dedicated flexible volume for the DataFabric Manager data.
• The Windows domain account that is used by the SnapDrive service must be a member of the local built-in group or local administrators group on both the source and destination storage systems.
• The Windows domain account used to administer SnapDrive must have full access to the Windows domain to which both the source and destination storage systems belong.
• To grant root access to the Windows domain account that is used by the SnapDrive service, you must configure the source and destination storage systems by setting the wafl.map_nt_admin_priv_to_root option to "On" in the CLI.

Related information
The NOW site - http://now.netapp.com/

Setting up DataFabric Manager
You must perform a set of tasks to set up DataFabric Manager for disaster recovery. To configure DataFabric Manager for disaster recovery using the Protection Manager features, complete the following steps:
Steps
1. Install or upgrade DataFabric Manager on the primary site by completing the following steps:
a. Install DataFabric Manager.
b. Install SnapDrive for Windows and configure it with Protection Manager.
c. Create an FC-based or iSCSI-based LUN on the storage system using SnapDrive.
d. Run the dfm datastore setup command to migrate the data to a directory in the FC-based or iSCSI-based LUN.
2. Install DataFabric Manager on the secondary site.
3. Run the dfm service disable command on the secondary site to disable all DataFabric Manager services. DataFabric Manager services must be enabled only during disaster recovery.
4. Configure a schedule for Snapshot-based backup.
5. Run the dfm datastore mirror setup command to create the application dataset.
6. If you have either the Protection Manager Disaster Recovery license or the Provisioning Manager license, secondary volume provisioning can take advantage of the policies provided by that license. If you do not have either of these licenses, provision the secondary volume manually:
a. Create a volume on the destination storage system having the same size and space configuration as the primary volume.
b. Assign the provisioned secondary volume to the application dataset. For more information about how to use Protection Manager to assign resources to a dataset, see the NetApp Management Console Help.
c. Ensure that there are no conformance issues.
7. Assign a schedule to the application dataset. For more information about how to use Protection Manager to assign schedules to a dataset, see the NetApp Management Console Help.
8. Run the dfm backup diag command and note down the SnapMirror location information from the command output. You would need this information while using the dfm datastore mirror connect command during the process of recovering DataFabric Manager.
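Condensing the CLI portions of this procedure into a sketch, the primary-site sequence resembles the following; the data directory path is illustrative, and the exact arguments are described in the respective man pages:

    dfm datastore setup E:\dfmdata
    dfm datastore mirror setup
    dfm backup diag

and, on the secondary site:

    dfm service disable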

Related tasks
Scheduling database backups from Operations Manager on page 272

Recovering DataFabric Manager services
You can recover DataFabric Manager services on the secondary site if a disaster occurs at the primary site. Alternatively, you can recover the services by using the dfm datastore mirror connect command.
Steps
1. Run the dfm service disable command to disable the services on the primary site. If the primary site is clustered using MSCS, offline the services before disabling them.
2. Using the Protection Manager UI, change the dataset created for DataFabric Manager data from the current mode to suspended mode.
3. Connect to the LUN using the Microsoft Management Console (MMC) Connect Disk wizard on the secondary storage system, by completing the following steps:
a. Expand the Storage option in the left pane of the MMC.
b. Double-click SnapDrive.
c. Double-click the name of the SnapDrive host you want to manage, if you are managing multiple instances of SnapDrive.
d. Right-click Disks.
e. Select the Connect Disk option and follow the instructions on the Connect Disk wizard.
For more information about connecting virtual disks, see the SnapDrive for Windows Installation and Administration Guide.
4. Run the dfm datastore setup -n command to configure DataFabric Manager to use the mirrored data.
5. Run the dfm service enable command to enable the services.
6. Run the dfm options set command to reset the localHostName global option on the secondary site.
7. Run the dfm service start command to start DataFabric Manager services.

Related information
SnapDrive for Windows Installation and Administration Guide - http://now.netapp.com/NOW/knowledge/docs/client_filer_index.shtml

Recovering DataFabric Manager services using the dfm datastore mirror connect command
You can use the dfm datastore mirror connect command to recover the DataFabric Manager services on the secondary site if a disaster occurs at the primary site.
The dfm datastore mirror connect command performs the following operations:
• Breaks the mirror relationship between the source and destination DataFabric Manager.
• Connects to the mirrored volume or LUN using SnapDrive for Windows.
• Puts the dataset created for the DataFabric Manager data in suspended mode.
• Configures DataFabric Manager to use the data from the mirrored location.
• Enables the services using the dfm service enable command.
• Starts the DataFabric Manager services.
Steps
1. Run the dfm service disable command to disable the services on the primary site. If the primary site is clustered using MSCS, offline the services before disabling them.
2. Run the dfm datastore mirror connect command on the DataFabric Manager server at the secondary site to start DataFabric Manager services using mirrored data.
3. Run the dfm options set command to reset the localHostName global option on the secondary site.

Failing back DataFabric Manager services
You must complete a list of tasks to fail back DataFabric Manager services to the primary site.
Steps
1. Ensure that the DataFabric Manager data at the source storage system is in synchronization with the data at the destination storage system, by completing the following steps:
a. Run the dfdrm mirror list command to find relationships between the source and destination storage systems.
b. Run the dfdrm mirror resync -r command to resynchronize the mirror relationships. This command reverses the mirror direction and starts updates.
c. If the SnapMirror relationship is removed during the process of recovering DataFabric Manager services, run the snapmirror resync command to resynchronize the data at the storage system level.

d. Run the dfm host discover command to discover the reversed relationships on the primary site, if they are not discovered already.
e. Run the dfdrm mirror list command to ensure that these relationships are discovered.
2. Run the dfm service disable command to stop and disable the services at the secondary site.
3. To start DataFabric Manager services using the mirrored data on the primary site, run the dfm datastore mirror connect command at the CLI. Alternatively, you can perform the following procedure:
a. Connect to the LUN using the MMC Connect Disk wizard on the primary storage system.
b. Run the dfm datastore setup -n command to configure DataFabric Manager to use the mirrored data on the LUN.
c. Run the dfm service enable command to enable the services.
d. Run the dfm options set command to reset the localHostName global option.
e. Run the dfm service start command to start DataFabric Manager services.
Note: The dfm datastore mirror connect command does not support shared storage. Therefore, the command should not be used if the primary system is set up for cluster using MSCS.
4. Run the dfm datastore mirror destroy command to destroy the application dataset created for DataFabric Manager data.
5. Run the dfm datastore mirror setup command to create a new application dataset for DataFabric Manager data.
6. Run the dfdrm mirror resync -r command to resynchronize the mirror relationships so that they are no longer reversed.
7. If the SnapMirror relationship is removed during the failback process, run the snapmirror resync command to resynchronize the data at the storage system level. Alternatively, if the primary storage system is destroyed during the disaster, run the dfdrm mirror initialize command to create a new relationship from the secondary storage system to the new primary storage system.
8. Using the Protection Manager UI, import the SnapMirror relationship already established for DataFabric Manager data to the new application dataset. For more information about how to use Protection Manager to import SnapMirror relationships, see the NetApp Management Console Help.
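As a sketch, the CLI portion of the failback sequence (omitting the UI steps, and assuming the dfm datastore mirror connect path) resembles:

    dfdrm mirror list
    dfdrm mirror resync -r
    dfm service disable              (at the secondary site)
    dfm datastore mirror connect     (at the primary site)
    dfm datastore mirror destroy
    dfm datastore mirror setup
    dfdrm mirror resync -r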

Disaster recovery using SnapDrive

If you do not have a Protection Manager license, then you can use SnapDrive to configure DataFabric Manager for disaster recovery. For more information, you can view the technical report.

Related information
Disaster Recovery Support for DataFabric Manager Data Using SnapDrive

Troubleshooting in Operations Manager

Learn about the common issues with DataFabric Manager, how to troubleshoot those problems, and how to get technical assistance from your service provider, if you cannot troubleshoot or resolve those problems.

Next topics
AutoSupport in DataFabric Manager on page 283
DataFabric Manager logs on page 286
Common DataFabric Manager problems on page 287
How discovery issues are resolved on page 288
Troubleshooting network discovery issues on page 289
Troubleshooting appliance discovery issues with Operations Manager on page 290
How configuration push errors are resolved on page 291
How File Storage Resource Manager (FSRM) issues are resolved on page 291
Issues related to SAN events on page 292
Import and export of configuration files on page 294
How inconsistent configuration states are fixed on page 294
Data ONTAP issues impacting protection on vFiler units on page 294

AutoSupport in DataFabric Manager

You can use the AutoSupport feature of DataFabric Manager to send messages to technical support. When you install or upgrade to DataFabric Manager 3.3 or later, the scheduler service automatically enables AutoSupport after the first 24 hours of operation, if you have not disabled the feature. DataFabric Manager then starts to monitor the system's operations and logs a message that AutoSupport was enabled. AutoSupport sends messages to technical support over secure HTTPS (by default), HTTP, or SMTP.

Note: AutoSupport in DataFabric Manager does not support IPv6 addressing.

Next topics
Reasons for using AutoSupport on page 284
Types of AutoSupport messages in DataFabric Manager on page 284
Protection of private data by using AutoSupport on page 284
Configuring AutoSupport on page 285

Reasons for using AutoSupport

With the help of the AutoSupport feature in DataFabric Manager, you can detect potential problems and get quick help.

The AutoSupport feature sends messages to technical support for the following reasons:
• Scripts have been programmed to automatically look for particular data in AutoSupport weekly reports that might indicate a potential problem.
• Technical support helps you to solve problems that AutoSupport detects.

A range of support might be provided, including automated parts replacement, e-mail contact, and contact by a technical support engineer, depending on the type of problem. For example, if a message is received that a disk failure occurred on your system, a replacement disk is automatically sent to you.

Types of AutoSupport messages in DataFabric Manager

AutoSupport tracks events and sends messages. AutoSupport generates the following types of messages:

Event message: A message that AutoSupport sends to recipients when an event tracked by AutoSupport occurs. The message contains information to help you diagnose and troubleshoot the problem.

Weekly report: A weekly report is a general status message that AutoSupport sends automatically each week to recipients you have identified.

Note: If you are using a DataFabric Manager demonstration license, DataFabric Manager does not send AutoSupport messages.

Protection of private data by using AutoSupport

You can ensure that private data, such as IP addresses, host names, and user names, are not included in the AutoSupport message.

Minimal AutoSupport messages omit sections and values that might be considered sensitive information, but greatly affect the level of support you can receive. Complete AutoSupport messages are required for normal technical support. If you do not want to include private data, such as IP addresses and user names, set the Autosupport Content global option to minimal.

Configuring AutoSupport

You can configure AutoSupport using Operations Manager.

Steps
1. Click Setup > Options > AutoSupport.
2. From the AutoSupport Settings page, identify the administrator to be designated as the sender of the notification.
3. Select Yes to enable AutoSupport notification to NetApp.
4. Specify the type of delivery (HTTP, HTTPS, or SMTP) for AutoSupport notification to NetApp technical support.
Note: By default, AutoSupport uses port numbers 80 for HTTP, 443 for HTTPS, and 25 for SMTP.
5. Enter the number of times the system should try to resend the AutoSupport notification before giving up, if previous attempts have failed.
6. Enter the time to wait before trying to resend a failed AutoSupport notification.
7. Enter the comma-delimited list of recipients for the AutoSupport email notification. Up to five email addresses are allowed, or the list can be left empty.
8. Specify the type of AutoSupport content that messages should contain.
Note: If this setting is changed from "complete" to "minimal," any complete AutoSupport message not sent is cleared from the outgoing message spool, and notification of this is displayed on the console.
9. Select Include to include the Performance Advisor AutoSupport data along with the DataFabric Manager AutoSupport data.
10. Select Include to include the Provisioning Manager AutoSupport data along with the DataFabric Manager AutoSupport data.
11. Click Update.
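The same settings can usually be reviewed and changed from the CLI, which is convenient when the Web UI is unavailable. The option name below is a hypothetical example, not a confirmed name; list the global options on your server to confirm the exact spelling for your release:

   dfm options list                             # review the global options, including the AutoSupport group
   dfm options set autosupportContent=minimal   # hypothetical option name; excludes private data from messages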

DataFabric Manager logs

DataFabric Manager creates logs, which you can use to troubleshoot issues such as storage system discovery not working, CLI commands failing for unexpected reasons, and events not generating as expected.

Next topics
Access to logs on page 286
Accessing the logs through the DataFabric Manager CLI on page 286
Access to the SAN log on page 286
Apache and Sybase log rotation in DataFabric Manager on page 287

Access to logs

You can access DataFabric Manager logs using the Operations Manager GUI or by using the CLI.

You can access the DataFabric Manager logs through the Diagnostics page. To access this page, use the following URL: http://mgmt_station:8080/dfm/diag

mgmt_station is the name or IP address of the workstation on which DataFabric Manager is installed. You must scroll down to the Logs section to find the available DataFabric Manager logs.

You can also access DataFabric Manager logs through the CLI, going to different directories depending on whether you are using a Windows or a UNIX workstation:
• On a Windows workstation, you can find the DataFabric Manager logs in the following directory: installation_directory\dfm\log.
• On a UNIX workstation, you can find the DataFabric Manager logs in the following directory: installation_directory/log.

Accessing the logs through the DataFabric Manager CLI

You can access DataFabric Manager logs using the CLI:
• On a Windows workstation, enter installation_directory\dfm\log.
• On a UNIX workstation, enter installation_directory/log.

Access to the SAN log

You can access the logs containing LUN-related information by using Operations Manager or the CLI. DataFabric Manager logs information related to LUN management in the dfmsan.log file. You can access this file using the CLI or through Operations Manager.
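For example, on a UNIX workstation you can watch LUN-management messages as they are written. The path below uses /opt/NTAPdfm as a typical default installation directory; substitute your own installation_directory:

   cd /opt/NTAPdfm/log     # your installation_directory may differ
   tail -f dfmsan.log      # follow the SAN log as LUN operations are recorded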

Apache and Sybase log rotation in DataFabric Manager

DataFabric Manager automatically rotates Apache and Sybase logs. Apache log files are rotated when they are 3,000 KB or larger. Sybase logs are rotated when they are 10,000 KB or larger.

Common DataFabric Manager problems

You can resolve some of the common problems that you might encounter when using DataFabric Manager.

Next topics
Communication issues between DataFabric Manager and routers on page 287
E-mail alerts not working in DataFabric Manager on page 287

Communication issues between DataFabric Manager and routers

Communication between DataFabric Manager and routers fails because of problems such as a mismatch of SNMP community strings, SNMP disabled on the router, and so on.

DataFabric Manager relies on routers to discover networks other than the network to which the DataFabric Manager server is attached. If DataFabric Manager fails to communicate with a router, it cannot discover other networks attached to that router. These are some typical reasons DataFabric Manager fails to communicate with routers:
• The SNMP community strings do not match.
• SNMP is disabled on the router.
• The router is beyond the maximum number of hops set in the Network Discovery Limit option.

E-mail alerts not working in DataFabric Manager

If e-mail alerts do not work, use the log files generated by DataFabric Manager to help troubleshoot the problem. If an alarm does not send an e-mail to the expected e-mail address, you can verify the e-mail address in the log file:
• Look in alert.log to see if DataFabric Manager attempted to send an e-mail to that address.
• Look in dfmeventd.log to see if errors were reported.

Related concepts
DataFabric Manager logs on page 286
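For example, to confirm whether DataFabric Manager attempted an alert to a particular recipient, you can search the logs directly from the log directory; the address below is a placeholder:

   grep -i "admin@example.com" alert.log    # did DataFabric Manager try this recipient?
   grep -i error dfmeventd.log              # were any errors reported by the event daemon?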

How discovery issues are resolved

You can use the Diagnose Connectivity tool to troubleshoot discovery problems. DataFabric Manager provides the Diagnose Connectivity tool to automate frequently used steps of the troubleshooting process for connectivity issues. This tool queries the DataFabric Manager database about a selected storage system, runs connectivity tests, and displays information and test outcomes. The sequence of steps depends on whether the selected storage system is managed or unmanaged. A managed storage system is one that is in the DataFabric Manager database. An unmanaged storage system is one that is not in the DataFabric Manager database.

Next topics
Use of the Diagnose Connectivity tool for a managed storage system on page 288
Use of the Diagnose Connectivity tool for an unmanaged storage system on page 288
Where to find the Diagnose Connectivity tool in Operations Manager on page 289
Reasons why DataFabric Manager might not discover your network on page 289

Use of the Diagnose Connectivity tool for a managed storage system

You can collect information about your managed storage system by using the Diagnose Connectivity tool. Use this tool when you want to troubleshoot discovery problems.

The Diagnose Connectivity tool queries the database and displays a summary of information about the storage system:
• The name, DataFabric Manager object ID, model, system ID, and OS version
• Whether the storage system is up, according to DataFabric Manager
• Results of the SNMP, ping, SSH, RLM, XML, RSH, and HTTP port tests

Use of the Diagnose Connectivity tool for an unmanaged storage system

The Diagnose Connectivity tool runs the following tests:
• Determines if the storage system IP address falls within the range of networks discovered by DataFabric Manager
• Sends an SNMP GET request to determine if DataFabric Manager can use SNMP to communicate with the storage system
• If it is a supported storage system, shows sysName, sysObjectID, and productID
• Uses the ping utility to contact the IP address of the storage system and tests for connectivity
• If it is a supported storage system, it tests the following:

  • The RSH connection that uses the user name and password stored in DataFabric Manager for the storage system
  • The HTTP port

Note: For unmanaged storage systems, you must run the Diagnose Connectivity tool from the command line.

Where to find the Diagnose Connectivity tool in Operations Manager

You can use the Diagnose Connectivity tool for both managed storage systems and clusters. You can find the Diagnose Connectivity tool under the Storage Controller Tools or Cluster Tools list in the left pane on a Details page.

The Diagnose Connectivity tool is available only for managed storage systems. To diagnose connectivity on unmanaged storage systems, use one of the following methods:
• From the CLI, you should run the dfm host diag command.
• From Operations Manager, try to add the storage system or switch. If the operation fails, an error message is displayed with a link labeled Click here to troubleshoot. You must click this link to run the Diagnose Connectivity tool.
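For example, diagnosing an unmanaged system from the CLI might look like the following. The address and host name are placeholders, and the exact argument form can vary by release:

   dfm host diag 10.20.30.40            # run the connectivity tests against an IP address
   dfm host diag filer1.example.com     # or against a storage system name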

d. Click the edit link beside Network Credentials. . click the Options link (in the Banner area). and then click the edit link beside Network Credentials. From the command line. If you changed the IP address for the router. 3. On the Network Credentials page. Determine whether an SNMP community string other than the default (public) is required for the network device to which the undiscovered network is attached. find Discovery Options.0 3. add the network manually by using the Networks To Discover option on the Options page under Discovery Options. Find discovery Options. you must change the primary IP address stored in DataFabric Manager on the Edit Settings page. Determine whether an SNMP community string other than the default (public) is required for the network device to which the undiscovered network is attached. Click the edit link of the Networks to Discover option to check whether the network to which this appliance is attached has been discovered. b. you can troubleshoot using Operations Manager. Steps 1. click the edit link at the right of the SNMP Community whose string you want to set. 2. If Steps 1 through 3 are not successful. run the Diagnose Connectivity tool against the IP address of the router of the network to determine if DataFabric Manager can communicate with the router through SNMP. follow the troubleshooting guidelines. 5. Troubleshooting appliance discovery issues with Operations Manager If DataFabric Manager does not discover a storage system.290 | Operations Manager Administration Guide For Use with DataFabric Manager Server 4. click the edit link at the right of the SNMP Community whose string you want to set. Ensure that the Host Discovery Enabled option on the Options page is set to Enabled. If the network to which this storage system is attached has not been discovered. You can also modify the primary IP address by entering the following CLI command: dfm host set host-id hostPrimaryAddress=ip-address 4. c. perform the following steps: a. Click the Options link (in the Banner area). To set an SNMP community string in DataFabric Manager. On the Network Credentials page. To set an SNMP community string in DataFabric Manager.

How configuration push errors are resolved

You can troubleshoot configuration push errors by analyzing logs created by DataFabric Manager. If one or more push jobs fail, DataFabric Manager logs the reason for the failure in the job's report. You can access the job report by clicking the job ID in the Job ID column of the Status of Configuration Jobs option.

The following failure conditions are documented by DataFabric Manager:
• An attempt to push a configuration for a particular group was disallowed because the group already had a pending configuration push job. To fix the error, you must cancel the pending job and then re-push the new job.
• DataFabric Manager could not authenticate to the system. You can correct this problem by setting the correct login and password.
• The storage system was offline. When a storage system is offline, DataFabric Manager continues to try to contact the storage system until you manually cancel the push.
• The storage system downloaded the configuration, but did not update its configuration because it found the configuration to be invalid.

How File Storage Resource Manager (FSRM) issues are resolved

You can resolve FSRM issues by reviewing host agent credentials and network issues.

If you cannot collect FSRM data, do the following:
• Verify that the FSRM host agent is running.
• Review the settings of host agent administration and host agent password, and then check the login and password settings configured on the FSRM host agent and on DataFabric Manager.

If a path walk fails: Network problems between the FSRM host agent and the storage device can cause the path walk to fail or return incomplete results. Check for network problems between the FSRM host agent and the storage device.

If a path walk takes a long time to complete: A path walk can take many hours to complete. You can monitor the status of the path walk that is in progress from the SRM Path Details page (Control Center > Home > Group Status > File SRM > SRM path name).

Issues related to SAN events

You can use Operations Manager to troubleshoot SAN events by resolving issues related to FC switch ports, HBA ports, and LUNs.

Next topics
Offline FC Switch Port or Offline HBA Port on page 292
Faulty FC Switch Port or HBA Port Error on page 292
Offline LUNs on page 293
Snapshot copy of LUN not possible on page 293
High traffic in HBA Port on page 293

Offline FC Switch Port or Offline HBA Port

An FC Switch Port or HBA Port goes offline if it is taken over by an administrator.

Causes: A port goes offline, typically in one of two situations:
• An administrator might have changed the port state, either because the port is no longer required to be connected to a storage system or because maintenance needs to be performed on the port.
• The connectivity to the port failed because of a bad cable or loose connection.

Corrective action:
1. If the port state was not changed by an administrator, ensure that the cable is connected securely to the port.
2. Replace the cables.

Faulty FC Switch Port or HBA Port Error

A faulty FC Switch Port or HBA Port Error occurs due to malfunctioning of the port hardware.

Cause: The port hardware might be malfunctioning.

Corrective action: If a SAN device is connected to a port that is reported faulty, perform one of the following corrective actions:
• For the FC switch, see the administration guide for your switch to diagnose the problem.
• If the port hardware is malfunctioning, replace the port or connect the device to another port.

Offline LUNs

A LUN can be offline if it is taken over by an administrator or there is a conflict between the serial numbers of two LUNs.

Causes: A LUN goes offline typically in one of two situations:
• An administrator might have changed the status, to perform maintenance or to apply changes to the LUN, such as to modify its size.
• The LUN has a serial number that conflicts with that of another LUN.

Corrective action:
• If an administrator did not change the LUN status, bring the LUN online from the storage system console.
• Check for a conflicting serial number and resolve the issue.

Snapshot copy of LUN not possible

If you cannot take a Snapshot copy of a LUN, you can expand the size of the volume.

Cause: A Snapshot copy for the volume that contains the LUN cannot be taken because the amount of free space on this volume is less than the used and reserved space.

Corrective action: Expand the size of the volume.

High traffic in HBA Port

High traffic in an HBA Port occurs if the DataFabric Manager threshold HBA Port Too Busy exceeds the permitted value.

Cause: The DataFabric Manager threshold HBA Port Too Busy has been exceeded.

Corrective action: Determine the cause of high traffic. If traffic stays above the threshold for a long period, consider upgrading the infrastructure.
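On the storage system console, expanding the containing volume is typically a one-line operation in Data ONTAP 7-mode; the volume name and size increment below are placeholders:

   vol size lunvol +10g     # grow the volume so free space again exceeds the used and reserved space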

Import and export of configuration files

You can export your configuration files for archiving to DataFabric Manager or to your local computer. If you want to restore the configuration files, you can import the archived configuration files.

DataFabric Manager enables you to save all your configuration files as a text file. After creating the file, you should archive it in a different location for safekeeping. You can use the Export Configuration Files option on the Configurations page to save your configuration files to DataFabric Manager, or to your local computer. This helps you to revert to a working set of configuration files, if you need to undo future configuration changes. If you need to restore the configuration files, you can import the archived file that you previously exported. You can also edit the configuration file with a text editor, to make any required changes, and then import it.

How inconsistent configuration states are fixed

To fix an inconsistent configuration state, re-push the configuration files to the required storage systems.

DataFabric Manager routinely monitors the storage systems in a configuration resource group to determine if their settings are inconsistent with the group's configuration files. To fix the inconsistent configuration state, re-push the configuration files to any storage systems that require updating. Configuration modification events can also help you maintain configuration consistency by alerting you to configuration changes.

Related concepts
Management of storage system configuration files on page 223

Data ONTAP issues impacting protection on vFiler units

You might encounter errors while trying to create SnapMirror and SnapVault relationships by using vFiler units in DataFabric Manager.

Data ONTAP versions earlier than 7.2 do not support SnapMirror and SnapVault commands on vFiler units. As a result, DataFabric Manager uses the hosting storage system to create SnapMirror and SnapVault relationships. For Data ONTAP 7.2 and later, SnapMirror and SnapVault relationships can be created using vFiler units. However, DataFabric Manager continues using the hosting storage system to create, monitor, and manage these relationships. Therefore, you might encounter the following issues:

• Issue: If the snapmirror.access and snapvault.access options on the source storage system allow access only to the destination vFiler unit, relationship creation, scheduled backups, on-demand backups, SnapMirror updates, and SnapMirror resync from DataFabric Manager fail. DataFabric Manager displays the following error message: "Request denied by the source storage system. Check access permissions on the source."
Workaround: To allow access to the destination hosting storage system, set the snapmirror.access and snapvault.access options on the source system.
• Issue: The backups and SnapMirror updates from DataFabric Manager fail with the error message "Source unknown." This issue occurs when both of the following conditions occur:
  • A relationship between two vFiler units is imported into DataFabric Manager by autodiscovery or added manually.
  • The destination hosting storage system is not able to contact the IP address of the source vFiler unit.
Workaround: Ensure that the host name or IP address of the source system that is used to create relationships can be reached from the destination hosting storage system.
• Issue: If the ndmpd.preferred_interfaces option is not set on the source hosting storage system, the backups from DataFabric Manager might not use the correct network interface.
Workaround: Set the ndmpd.preferred_interfaces option on the source hosting storage system.
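For example, the workarounds above map to console options on the source hosting storage system. This is a sketch in Data ONTAP 7-mode syntax; the host and interface names are placeholders, and existing values of the access options should be preserved when you append to them:

   options snapmirror.access host=dst-system     # allow the destination hosting storage system
   options snapvault.access host=dst-system
   options ndmpd.preferred_interfaces e0a        # route NDMP-based backups over this interface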


Appendix

List of events and severity types

These tables list all of the events generated by Protection Manager, Provisioning Manager, and Operations Manager, and the associated event severity types. Events are listed in alphabetical order by object type. Use the links in the following table to jump directly to the events for that object.

Note: Performance Advisor uses only the Normal and Error events.

Event categories
Active/Active Configuration Controller on page 298
Active/Active Configuration Interconnect on page 299
Active/Active Configuration Partner on page 299
Agent on page 299
Aggregate on page 300
Alarm on page 300
CFO Interconnect on page 301
CFO Partner on page 301
CFO Settings on page 301
CFO This Storage System on page 301
Comment Field on page 302
Configuration Changed on page 302
CPU on page 302
Data Protection on page 302
Database on page 302
Dataset on page 303
Dataset Conformance on page 304
Disks on page 304
Enclosures on page 305
Fans on page 305
FC (Fibre Channel) Switch Port on page 305
Filer Configuration on page 305
Global Status on page 306
HBA Port on page 306
Host on page 306
Host Agent on page 307
Inodes on page 307
Interface Status on page 307
LUN on page 307
Management Station on page 308
Migration on page 308
NDMP on page 309
Network on page 309
Network Services on page 309
No Schedule Conflict on page 310
NVRAM Battery on page 310
OSSV (Open Systems SnapVault) on page 310
Performance Advisor on page 311
Power Supplies on page 311
Primary on page 311
Protection Policy on page 311
Protection Schedule on page 311
Provisioning Policy on page 311
Qtree on page 312
Remote Platform Management (RPM) on page 312
Resource Group on page 312
Resource Pool on page 312
SAN Host LUN Mapping on page 313
Script on page 313
SnapMirror on page 313
Snapshot(s) on page 314
SnapVault on page 315
SNMP Trap Listener on page 316
Space Management on page 316
Storage Services on page 316
Sync on page 317
Temperature on page 317
Unprotected Item on page 317
User on page 317
vFiler Unit on page 318
vFiler Unit Template on page 318
Volume on page 318

Active/Active Configuration Controller
Can Take Over (Normal)
Cannot Takeover (Error)
Dead (Critical)

Takeover (Warning)

Active/Active Configuration Interconnect
Down (Error)
Not Present (Warning)
Partial Failure (Error)
Up (Normal)

Active/Active Configuration Partner
Dead (Warning)
May Be Down (Warning)
OK (Normal)

Active/Active Configuration Settings
Disabled (Normal)
Enabled (Normal)
Not Configured (Normal)
Takeover Disabled (Normal)
This Controller Dead (Warning)

Agent
Down (Error)
Login Failed (Warning)
Login OK (Normal)
Up (Normal)

Aggregate
Almost Full (Warning)
Almost Overcommitted (Warning)
Deleted (Information)
Discovered (Information)
Failed (Error)
Full (Error)
Nearly Over Deduplicated (Warning)
Not Over Deduplicated (Normal)
Not Overcommitted (Normal)
Offline (Error)
Online (Normal)
Overcommitted (Error)
Over Deduplicated (Error)
Restricted (Normal)
Snapshot Reserve Almost Full (Warning)
Snapshot Reserve Full (Warning)
Snapshot Reserve OK (Normal)
Space Normal (Normal)

Alarm
Created (Information)
Deleted (Information)
Modified (Information)

CFO Interconnect
Down (Error)
Not Present (Warning)
Partial Failure (Error)
Up (Normal)

CFO Partner
Dead (Warning)
May Be Down (Warning)
OK (Normal)

CFO Settings
Disabled (Normal)
Enabled (Normal)
Not Configured (Normal)
Takeover Disabled (Normal)
This Node Dead (Warning)

CFO This Storage System
Can Take Over (Normal)
Cannot Take Over (Error)
Dead (Critical)
Takeover (Warning)

Comment Field
Created (Information)
Modified (Information)
Destroyed (Information)

Configuration Changed
Config Group (Information)

CPU
Load Normal (Normal)
Too Busy (Warning)

Data Protection
Job Started (Information)
Policy Created (Information)
Policy Modified (Information)
Schedule Created (Information)
Schedule Modified (Information)

Database
Backup Failed (Error)
Backup Succeeded (Information)
Restore Failed (Error)
Restore Succeeded (Information)

Dataset
Backup Aborted (Warning)
Backup Completed (Normal)
Backup Failed (Error)
Created (Information)
Deleted (Information)
DR State Ready (Information)
DR State Failover Over (Warning)
DR State Failed Over (Information)
DR State Failover Error (Error)
DR Status Normal (Information)
DR Status Warning (Warning)
DR Status Error (Error)
Initializing (Information)
Job Failure (Warning)
Member Clone Snapshot Discovered (Information)
Member Clone Snapshot Status OK (Information)
Member Dedupe Operation Failed (Error)
Member Dedupe Operation Succeeded (Normal)
Member Destroyed (Information)
Member Destroy Operation Failed (Information)
Member Resized (Information)
Member Resize Operation Failed (Information)
Modified (Information)
Protected (Normal)
Protection Failed (Error)
Protection Lag Error (Error)
Protection Lag Warning (Warning)

Protection Suspended (Warning)
Protection Uninitialized (Normal)
Provisioning Failed (Error)
Provisioning OK (Normal)
Space Status: Normal (Normal)
Space Status: Warning (Warning)
Space Status: Error (Error)
Write Guarantee Check - Member Resize Required (Warning)
Write Guarantee Check - Member Size OK (Normal)

Dataset Conformance
Conformant (Normal)
Conforming (Information)
Initializing (Information)
Nonconformant (Warning)

Disks
No Spares (Warning)
None Failed (Normal)
None Reconstructing (Normal)
Some Failed (Error)
Some Reconstructing (Warning)
Spares Available (Normal)

Enclosures
Active (Information)
Disappeared (Warning)
Failed (Error)
Found (Normal)
Inactive (Warning)
OK (Normal)

Fans
Many Failed (Error)
Normal (Normal)
One Failed (Error)

FC (Fibre Channel) Switch Port
Faulty (Error)
Offline (Warning)
Online (Normal)

Filer Configuration
Changed (Warning)
OK (Normal)
Push Error (Warning)
Push OK (Normal)

Global Status
Critical (Critical)
Non Critical (Error)
Non Recoverable (Emergency)
OK (Normal)
Other (Warning)
Unknown (Warning)

HBA Port
Offline (Warning)
Online (Normal)
Port Error (Error)
Traffic High (Warning)
Traffic OK (Normal)

Host
Cluster Configuration Error (Error)
Cluster Configuration OK (Normal)
Cold Start (Information)
Deleted (Information)
Discovered (Information)
Down (Critical)
Identity Conflict (Warning)
Identity OK (Normal)
Login Failed (Warning)
Login OK (Normal)

Modified (Information)
Name Changed (Information)
SNMP Not Responding (Warning)
SNMP OK (Normal)
System ID Changed (Information)
Up (Normal)

Host Agent
Down (Error)
Up (Normal)
Host Agent: Login Failed (Warning)

Inodes
Almost Full (Warning)
Full (Error)
Utilization Normal (Normal)

Interface Status
Down (Error)
Testing (Normal)
Unknown (Normal)
Up (Normal)

LUN
Offline (Warning)

Online (Normal)
Snapshot Not Possible (Warning)
Snapshot Possible (Normal)

Management Station
Enough Free Space (Normal)
File System File Size Limit Reached (Error)
License Expired (Error)
License Nearly Expired (Warning)
License Not Expired (Normal)
Load OK (Normal)
Load Too High (Warning)
Node Limit Nearly Reached (Warning)
Node Limit OK (Normal)
Node Limit Reached (Error)
Not Enough Free Space (Error)
Provisioning Manager Node Limit Nearly Reached (Warning)
Provisioning Manager Node Limit Ok (Normal)
Provisioning Manager Node Limit Reached (Error)
Protection Manager Node Limit Nearly Reached (Warning)
Protection Manager Node Limit Ok (Normal)
Protection Manager Node Limit Reached (Error)

Migration
Dataset Not Migrating (Normal)
Dataset Migrating (Normal)

Dataset Migrated With Errors (Warning)
Dataset Migrated (Normal)
Dataset Migrate Failed (Error)
vFiler Unit Not Migrating (Normal)
vFiler Unit Migrating (Normal)
vFiler Unit Migrated With Errors (Warning)
vFiler Unit Migrated (Normal)
vFiler Unit Migrate Failed (Error)

NDMP
Credentials Authentication Failed (Warning)
Credentials Authentication Succeeded (Normal)
Communication Initialization Failed (Warning)
Communication Initialization Succeeded (Normal)
Down (Warning)
Up (Normal)

Network
OK (Normal)
Too Large (Warning)

Network Services
CIFS Service - Up (Normal)
CIFS Service - Down (Warning)
NFS Service - Up (Normal)

NFS Service - Down (Warning)
FCP Service - Up (Normal)
FCP Service - Down (Warning)
iSCSI Service - Up (Normal)
iSCSI Service - Down (Warning)

No Schedule Conflict
Between Snapshot and SnapMirror Schedules (Normal)
Between Snapshot and SnapVault Schedules (Normal)

NVRAM Battery
Discharged (Error)
Fully Charged (Normal)
Low (Warning)
Missing (Error)
Normal (Normal)
Old (Warning)
Overcharged (Warning)
Replace (Error)
Unknown Status (Warning)

OSSV (Open Systems SnapVault)
Host Discovered (Information)

Performance Advisor
Enough Free Space (Normal)
Not Enough Free Space (Error)

Power Supplies
Many Failed (Error)
Normal (Normal)
One Failed (Error)

Primary
Host Discovered (Information)

Protection Policy
Created (Information)
Deleted (Information)
Modified (Information)

Protection Schedule
Created (Information)
Deleted (Information)
Modified (Information)

Provisioning Policy
Created (Information)

Deleted (Information)
Modified (Information)

Qtree
Almost Full (Warning)
Files Almost Full (Warning)
Files Full (Error)
Files Utilization Normal (Normal)
Full (Error)
Growth Rate Abnormal (Warning)
Growth Rate OK (Information)
Space Normal (Normal)

Remote Platform Management (RPM)
Online (Normal)
Unavailable (Critical)

Resource Group
Created (Information)
Deleted (Information)
Modified (Information)

Resource Pool
Created (Information)
Deleted (Information)

Modified (Information)
Space Full (Error)
Space Nearly Full (Warning)
Space OK (Normal)

SAN Host LUN Mapping
Changed (Warning)

Script
Critical Event (Critical)
Emergency Event (Emergency)
Error Event (Error)
Information Event (Information)
Normal Event (Normal)
Warning Event (Warning)

SnapMirror
Abort Completed (Normal)
Abort Failed (Error)
Break Completed (Normal)
Break Failed (Error)
Date OK (Normal)
Delete Aborted (Warning)
Delete Completed (Information)
Delete Failed (Error)

Initialize Aborted (Warning)
Initialize Completed (Normal)
Initialize Failed (Error)
Nearly Out of Date (Warning)
Not Scheduled (Normal)
Not Working (Error)
Off (Normal)
Out of Date (Error)
Possible Problem (Warning)
Quiesce Aborted (Warning)
Quiesce Completed (Normal)
Quiesce Failed (Error)
Resume Completed (Normal)
Resume Failed (Error)
Resync Aborted (Warning)
Resync Completed (Normal)
Resync Failed (Error)
Unknown State (Warning)
Update Aborted (Warning)
Update Completed (Normal)
Update Failed (Error)
Working (Normal)

Snapshot(s)
Age Normal (Normal)
Age Too Old (Warning)
Count Normal (Normal)

Count OK (Normal)
Count Too Many (Error)
Created (Normal)
Failed (Error)
Full (Warning)
Schedule Conflicts with the SnapMirror Schedule (Warning)
Schedule Conflicts with the SnapVault Schedule (Warning)
Schedule Modified (Information)
Scheduled Snapshots Disabled (Information)
Scheduled Snapshots Enabled (Normal)

SnapVault
Backup Aborted (Warning)
Backup Completed (Information)
Backup Failed (Error)
Host Discovered (Information)
Relationship Create Aborted (Warning)
Relationship Create Completed (Information)
Relationship Create Failed (Error)
Relationship Delete Aborted (Warning)
Relationship Delete Completed (Information)
Relationship Delete Failed (Error)
Relationship Discovered (Information)
Relationship Modified (Information)
Replica Date OK (Normal)
Replica Nearly Out of Date (Warning)
Replica Out of Date (Error)

Restore Aborted (Warning)
Restore Completed (Normal)
Restore Failed (Error)

SNMP Trap Listener
Alert Trap Received (Information)
Critical Trap Received (Information)
Emergency Trap Received (Information)
Error Trap Received (Information)
Information Trap Received (Information)
Notification Trap Received (Information)
Warning Trap Received (Information)
Start Failed (Warning)
Start OK (Information)

Space Management
Space Management Job Started (Information)
Space Management Job Succeeded (Information)
Space Management Job Failed (Information)

Storage Services
Storage Service Created (Information)
Storage Service Modified (Information)
Storage Service Destroyed (Information)
Storage Service Dataset Provisioned (Information)

Storage Service Dataset Attached (Information)
Storage Service Dataset Detached (Information)

Sync
SnapMirror In Sync (Information)
SnapMirror Out of Sync (Warning)

Temperature
Hot (Critical)
Normal (Normal)

Unprotected Item
Discovered (Information)

User
Disk Space Quota Almost Full (Warning)
Disk Space Quota Full (Error)
Disk Space Quota OK (Normal)
Disk Space Soft Limit Exceeded (Warning)
Disk Space Soft Limit Not Exceeded (Normal)
E-mail Address OK (Normal)
E-mail Address Rejected (Warning)
Files Quota Almost Full (Warning)
Files Quota Full (Error)
Files Quota Utilization Normal (Normal)

Files Soft Limit Exceeded (Warning)
Files Soft Limit Not Exceeded (Normal)

vFiler Unit
Deleted (Information)
Discovered (Information)
Hosting Storage System Login Failed (Warning)
IP Address Added (Information)
IP Address Removed (Information)
Renamed (Information)
Storage Unit Added (Information)
Storage Unit Removed (Information)

vFiler Unit Template
Created (Information)
Deleted (Information)
Modified (Information)

Volume
Almost Full (Warning)
Automatically Deleted (Information)
Autosized (Information)
Clone Deleted (Information)
Clone Discovered (Information)
Destroyed (Information)

First Snapshot OK (Normal)
Full (Error)
Growth Rate Abnormal (Warning)
Growth Rate OK (Normal)
Maxdirsize Limit Nearly Reached (Information)
Maxdirsize Limit Reached (Information)
Nearly No Space for First Snapshot (Warning)
Nearly Over Deduplicated (Warning)
New Snapshot (Normal)
Next Snapshot Not Possible (Warning)
Next Snapshot Possible (Normal)
No Space for First Snapshot (Warning)
Not Over Deduplicated (Normal)
Offline (Warning)
Offline or Destroyed (Warning)
Online (Normal)
Over Deduplicated (Error)
Quota Overcommitted (Error)
Quota Almost Overcommitted (Warning)
Restricted (Warning)
Snapshot Automatically Deleted (Information)
Snapshot Deleted (Normal)
Space Normal (Normal)
Space Reserve Depleted (Error)
Space Reservation Nearly Depleted (Error)
Space Reservation OK (Normal)


Report fields and performance counters
Next topics

Report fields and performance counters for filer catalogs on page 320
Report fields and performance counters for vFiler catalogs on page 322
Report fields and performance counters for volume catalogs on page 323
Report fields and performance counters for qtree catalogs on page 325
Report fields and performance counters for LUN catalogs on page 325
Report fields and performance counters for aggregate catalogs on page 326
Report fields and performance counters for disk catalogs on page 326

Report fields and performance counters for filer catalogs
DataFabric Manager creates reports for Filer Catalogs. The table below lists and describes the various fields in Filer Catalogs and provides the corresponding performance counters.
Field                          Name/description                                           Performance counter
Filer.TotalOpsperSec           Storage System Total Ops/Sec                               system:total_ops
Filer.CIFSOps                  Storage System CIFS Ops/Sec                                system:cifs_ops
Filer.NFSOps                   Storage System NFS Ops/Sec                                 system:nfs_ops
Filer.HTTPOps                  Storage System HTTP Ops/Sec                                system:http_ops
Filer.iSCSIOps                 Storage System iSCSI Ops/Sec                               system:iscsi_ops
Filer.FCPOps                   Storage System FCP Ops/Sec                                 system:fcp_ops
Filer.NFSv3ReadOps             Storage System NFSv3 Read Ops/Sec                          nfsv3:nfsv3_read_ops
Filer.NFSv3WriteOps            Storage System NFSv3 Write Ops/Sec                         nfsv3:nfsv3_write_ops
Filer.NFSv4ReadOps             Storage System NFSv4 Read Ops/Sec                          nfsv4:nfsv4_read_ops
Filer.NFSv4WriteOps            Storage System NFSv4 Write Ops/Sec                         nfsv4:nfsv4_write_ops
Filer.NFSv3Avglatency          Storage System NFSv3 Avg Latency (millisec)                nfsv3:nfsv3_avg_op_latency
Filer.NFS4Avglatency           Storage System NFSv4 Avg Latency (millisec)                nfsv4:nfsv4_avg_op_latency
Filer.CPUBusy                  Storage System CPU Busy (%)                                system:cpu_busy
Filer.iSCSIReadOps             Storage System iSCSI Read Ops/Sec                          iscsi:iscsi_read_ops
Filer.iSCSIWriteOps            Storage System iSCSI Write Operations                      iscsi:iscsi_write_ops
Filer.CIFSLatency              Storage System CIFS Latency (millisec)                     cifs:cifs_latency
Filer.NFSReadLatency           Storage System NFS Read Latency (millisec)                 nfsv3:nfsv3_read_latency
Filer.NFSWriteLatency          Storage System NFS Write Latency (millisec)                nfsv3:nfsv3_write_latency
Filer.iSCSIReadLatency         Storage System iSCSI Read Latency (millisec)               iscsi:iscsi_read_latency
Filer.iSCSIWriteLatency        Storage System iSCSI Write Latency (millisec)              iscsi:iscsi_write_latency
Filer.FCPReadLatency           Storage System FCP Read Latency (millisec)                 fcp:fcp_read_latency
Filer.FCPWriteLatency          Storage System FCP Write Latency (millisec)                fcp:fcp_write_latency
Filer.NASThroughput            Storage System NAS Throughput (KB/Sec)                     system:nas_throughput
Filer.SANThroughput            Storage System SAN Throughput (KB/Sec)                     system:san_throughput
Filer.DiskThroughput           Storage System Disk Throughput (KB/Sec)                    system:disk_throughput
Filer.NetThroughput            Storage System Network Throughput (MB/Sec)                 system:load_total_mbps
Filer.LoadInboundMbps          Storage System Total Data Received (MB/Sec)                system:load_inbound_mbps
Filer.LoadOutboundMbps         Storage System Total Data Sent (MB/Sec)                    system:load_outbound_mbps
Filer.NetDataSent              Storage System Network Data Sent (KB/Sec)                  system:net_data_sent
Filer.NetDataRecv              Storage System Network Data Receive (KB/Sec)               system:net_data_recv
Filer.LoadReadBytesRatio       Storage System Ratio of disk data read and load outbound   system:load_read_bytes_ratio
Filer.LoadWriteBytesRatio      Storage System Ratio of disk data write and load inbound   system:load_write_bytes_ratio
Filer.DiskDataRead             Storage System Disk Data Read (KB/Sec)                     system:disk_data_read
Filer.DiskDataWritten          Storage System Disk Data Written (KB/Sec)                  system:disk_data_written
Filer.FCPWriteData             Storage System FCP Write Data (B/Sec)                      fcp:fcp_write_data
Filer.FCPReadData              Storage System FCP Read Data (B/Sec)                       fcp:fcp_read_data
Filer.iSCSIWriteData           Storage System iSCSI Write Data (B/Sec)                    iscsi:iscsi_write_data
Filer.iSCSIReadData            Storage System iSCSI Read Data (B/Sec)                     iscsi:iscsi_read_data
Filer.ProcessorBusy            Storage System Processor Busy (%)                          system:avg_processor_busy
Filer.NFSLatency               Storage System NFS Latency (millisec)                      nfsv3:nfsv3_avg_op_latency
Filer.PerfViolationCount       Storage System Perf Threshold Violation Count              Not Applicable
Filer.PerfViolationPeriod      Storage System Perf Threshold Violation Period (Sec)       Not Applicable

Report fields and performance counters for vFiler catalogs
DataFabric Manager creates reports for vFiler Catalogs. The following table lists and describes the fields of the vFiler Catalog.

Field                          Name/description                                           Performance counter
vFiler.TotalOps                vFiler Total Ops/Sec                                       vfiler:vfiler_total_ops
vFiler.ReadOps                 vFiler Read Ops/Sec                                        vfiler:vfiler_read_ops
vFiler.WriteOps                vFiler Write Ops/Sec                                       vfiler:vfiler_write_ops
vFiler.MiscOps                 vFiler Miscellaneous Ops/Sec                               vfiler:vfiler_misc_ops
vFiler.NetThroughput           vFiler Network Throughput (KB/Sec)                         vfiler:vfiler_nw_throughput
vFiler.ReadBytes               vFiler Number of Bytes Read (KB/Sec)                       vfiler:vfiler_read_bytes
vFiler.WriteBytes              vFiler Number of Bytes Write (KB/Sec)                      vfiler:vfiler_write_bytes
vFiler.NetDataRecv             vFiler Network Data Received (KB/Sec)                      vfiler:vfiler_net_data_recv
vFiler.NetDataSent             vFiler Network Data Sent (KB/Sec)                          vfiler:vfiler_net_data_sent
vFiler.DataTransferred         vFiler Total Data Transferred (KB/Sec)                     vfiler:vfiler_data_transferred
vFiler.PerfViolationCount      vFiler Perf Threshold Violation Count                      Not Applicable
vFiler.PerfViolationPeriod     vFiler Perf Threshold Violation Period (Sec)               Not Applicable

Report fields and performance counters for volume catalogs
DataFabric Manager creates reports for Volume Catalogs. The following table lists and describes the fields of the Volume Catalog.
Field                          Name/description                                           Performance counter
Volume.TotalOps                Volume Total Ops/Sec                                       volume:total_ops
Volume.CIFSOps                 Volume CIFS Ops/Sec                                        volume:cifs_ops
Volume.NFSOps                  Volume NFS Ops/Sec                                         volume:nfs_ops
Volume.SANOps                  Volume SAN Ops/Sec                                         volume:total_san_ops
Volume.SANReadOps              Volume SAN Read Ops/Sec                                    volume:san_read_ops
Volume.SANWriteOps             Volume SAN Write Ops/Sec                                   volume:san_write_ops
Volume.SANOtherOps             Volume SAN Other Ops/Sec                                   volume:san_other_ops
Volume.ReadOps                 Volume Read Ops/Sec                                        volume:read_ops
Volume.WriteOps                Volume Write Ops/Sec                                       volume:write_ops
Volume.OtherOps                Volume Other Ops/Sec                                       volume:other_ops
Volume.NFSReadOps              Volume NFS Read Ops/Sec                                    volume:nfs_read_ops
Volume.NFSWriteOps             Volume NFS Write Ops/Sec                                   volume:nfs_write_ops
Volume.NFSOtherOps             Volume NFS Other Ops/Sec                                   volume:nfs_other_ops
Volume.CIFSReadOps             Volume CIFS Read Ops/Sec                                   volume:cifs_read_ops
Volume.CIFSWriteOps            Volume CIFS Write Ops/Sec                                  volume:cifs_write_ops
Volume.CIFSOtherOps            Volume CIFS Other Ops/Sec                                  volume:cifs_other_ops
Volume.FlexCacheReadOps        Volume FlexCache Read Ops/Sec                              volume:flexcache_read_ops
Volume.FlexCacheWriteOps       Volume FlexCache Write Ops/Sec                             volume:flexcache_write_ops
Volume.FlexCacheOtherOps       Volume FlexCache Other Ops/Sec                             volume:flexcache_other_ops
Volume.Latency                 Volume Latency (millisec)                                  volume:avg_latency
Volume.CIFSLatency             Volume CIFS Latency (millisec)                             volume:cifs_latency
Volume.NFSLatency              Volume NFS Latency (millisec)                              volume:nfs_latency
Volume.SANLatency              Volume SAN Latency (millisec)                              volume:san_latency
Volume.ReadLatency             Volume Read Latency (millisec)                             volume:read_latency
Volume.WriteLatency            Volume Write Latency (millisec)                            volume:write_latency
Volume.OtherLatency            Volume Other Latency (millisec)                            volume:other_latency
Volume.CIFSReadLatency         Volume CIFS Read Latency (millisec)                        volume:cifs_read_latency
Volume.CIFSWriteLatency        Volume CIFS Write Latency (millisec)                       volume:cifs_write_latency
Volume.CIFSOtherLatency        Volume CIFS Other Latency                                  volume:cifs_other_latency
Volume.SANReadLatency          Volume SAN Read Latency (millisec)                         volume:san_read_latency
Volume.SANWriteLatency         Volume SAN Write Latency (millisec)                        volume:san_write_latency
Volume.SANOtherLatency         Volume SAN Other Latency (millisec)                        volume:san_other_latency
Volume.NFSReadLatency          Volume NFS Read Latency (millisec)                         volume:nfs_read_latency
Volume.NFSWriteLatency         Volume NFS Write Latency                                   volume:nfs_write_latency
Volume.NFSOtherLatency         Volume NFS Other Latency (millisec)                        volume:nfs_other_latency
Volume.DataThroughput          Volume Throughput (KB/Sec)                                 volume:throughput
Volume.PerfViolationCount      Volume Perf Threshold Violation Count                      Not Applicable
Volume.PerfViolationPeriod     Volume Perf Threshold Violation Period (Sec)               Not Applicable

Report fields and performance counters for qtree catalogs
DataFabric Manager creates reports for Qtree Catalogs. The following table lists and describes the fields of the Qtree Catalog.
Field                          Name/description                                           Performance counter
Qtree.CIFSOps                  Qtree CIFS Ops/Sec                                         qtree:cifs_ops
Qtree.NFSOps                   Qtree NFS Ops/Sec                                          qtree:nfs_ops
Qtree.InternalOps              Qtree Internal Ops/Sec                                     qtree:internal_ops
Qtree.PerfViolationCount       Qtree Perf Threshold Violation Count                       Not Applicable
Qtree.PerfViolationPeriod      Qtree Perf Threshold Violation Period (Sec)                Not Applicable

Report fields and performance counters for LUN catalogs
DataFabric Manager creates reports for LUN Catalogs. The following table lists and describes the fields of the LUN Catalog.
Field                          Name/description                                           Performance counter
LUN.TotalOps                   LUN Total Ops/Sec                                          lun:total_ops
LUN.ReadOps                    LUN Read Ops/Sec                                           lun:read_ops
LUN.WriteOps                   LUN Write Ops/Sec                                          lun:write_ops
LUN.OtherOps                   LUN Other Ops/Sec                                          lun:other_ops
LUN.Latency                    LUN Latency (millisec)                                     lun:avg_latency
LUN.Throughput                 LUN Throughput (KB/Sec)                                    lun:throughput
LUN.ReadData                   LUN Read Data (KB/Sec)                                     lun:read_data
LUN.WriteData                  LUN Write Data (KB/Sec)                                    lun:write_data
LUN.PerfViolationCount         LUN Perf Threshold Violation Count                         Not Applicable
LUN.PerfViolationPeriod        LUN Perf Threshold Violation Period (Sec)                  Not Applicable

Report fields and performance counters for aggregate catalogs

DataFabric Manager creates reports for Aggregate Catalogs. The following table lists and describes the fields of the Aggregate Catalog.

Field                          Name/description                                           Performance counter
Aggregate.TotalOps             Aggregate Total Ops/Sec                                    aggregate:total_transfers
Aggregate.UserReads            Aggregate User Reads (per_sec)                             aggregate:user_reads
Aggregate.UserWrites           Aggregate User Writes (per_sec)                            aggregate:user_writes
Aggregate.CPReads              Aggregate Reads done during CP (per_sec)                   aggregate:cp_reads
Aggregate.PerfViolationCount   Aggregate Perf Threshold Violation Count                   Not Applicable
Aggregate.PerfViolationPeriod  Aggregate Perf Threshold Violation Period (Sec)            Not Applicable

Report fields and performance counters for disk catalogs

DataFabric Manager creates reports for Disk Catalogs. The following table lists and describes the fields of the Disk Catalog.

Field                          Name/description                                           Performance counter
Disk.ReadOps                   Disk User Read Ops/Sec                                     disk:user_reads
Disk.WriteOps                  Disk User Write Ops/Sec                                    disk:user_writes
Disk.CPReads                   Disk Reads initiated for CP processing (per_sec)           disk:cp_reads
Disk.ReadLatency               Disk Read Latency (millisec)                               disk:user_read_latency
Disk.WriteLatency              Disk Write Latency (millisec)                              disk:user_write_latency
Disk.CPReadLatency             Disk CP Read Latency (millisec)                            disk:cp_read_latency
Disk.Throughput                Disk Throughput (blocks/Sec)                               disk:throughput
Disk.Utilization               Disk Utilization (%)                                       disk:disk_busy
Disk.PerfThresholdViolations   Disk Perf Threshold Violations Count                       Not Applicable
Disk.PerfViolationPeriod       Disk Perf Threshold Violation Period (Sec)                 Not Applicable

Protocols and port numbers

DataFabric Manager uses various networking protocols and port numbers to communicate with storage systems, host agents (for SRM and SAN management), and Open Systems SnapVault agents.

Next topics
DataFabric Manager server communication on page 327
DataFabric Manager access to storage systems on page 328
DataFabric Manager access to host agents on page 328
DataFabric Manager access to Open Systems SnapVault agents on page 328

DataFabric Manager server communication

You might need to enable both HTTP and HTTPS transports if multiple administrators are accessing the workstation from different locations. The presence of firewalls, non-trusted environments, or other circumstances might require a combination of HTTP and HTTPS transport.

Note: Reconfiguring these options from Operations Manager results in a message instructing you to restart the HTTP service from the CLI. Use the following command to restart the HTTP service: dfm service start http

DataFabric Manager access to storage systems

There are a set of protocols and port numbers used by DataFabric Manager to access your storage systems.

Protocol       UDP port       TCP port
HTTP                          80
HTTPS                         443
RSH                           514
SSH                           22
Telnet                        23
SNMP           161

DataFabric Manager access to host agents

There are a set of protocols and port numbers used by DataFabric Manager to access your host agents.

Protocol       UDP port       TCP port
HTTP                          4092
HTTPS                         4093

DataFabric Manager access to Open Systems SnapVault agents

There are a set of protocols and port numbers used by DataFabric Manager to access your Open Systems SnapVault agents.

Protocol       UDP port       TCP port
HTTP                          10000
SNMP           161
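When a firewall separates the DataFabric Manager server from these components, a generic TCP probe can confirm that the required ports are reachable. The host names below are placeholders:

   nc -zv storage1.example.com 443      # HTTPS to a storage system
   nc -zv sanhost1.example.com 4092     # HTTP to a NetApp Host Agent
   nc -zv ossv1.example.com 10000       # HTTP to an Open Systems SnapVault agent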

SAN management

You can use DataFabric Manager to monitor and manage components of your NetApp storage area networks (SANs), such as logical unit numbers (LUNs), Fibre Channel (FC) switches, and Windows and UNIX SAN hosts.

The NetApp SANs are storage networks that have been installed in compliance with the "SAN setup guidelines" by NetApp. For information about setting up a NetApp SAN, see the Data ONTAP Block Access Management Guide for iSCSI and FC.

Note: NetApp has announced the end of availability for the SAN license for DataFabric Manager. Existing customers can continue to license the SAN option with DataFabric Manager; however, DataFabric Manager customers should check with their sales representative regarding other SAN management solutions.

Next topics
Discovery of SAN hosts by DataFabric Manager on page 329
SAN management using DataFabric Manager on page 330
Reports for monitoring SANs on page 333
DataFabric Manager options on page 340
DataFabric Manager options for SAN management on page 341
How SAN components are grouped on page 343

Related information
Data ONTAP Block Access Management Guide for iSCSI and FC - http://now.netapp.com/NOW/knowledge/docs/san/#ontap_san

Discovery of SAN hosts by DataFabric Manager

DataFabric Manager discovers SAN hosts with the NetApp Host Agent software installed on each SAN host. DataFabric Manager can automatically discover SAN hosts; however, it does not use SNMP to poll for new SAN hosts. Instead, the special-purpose NetApp Host Agent software discovers, monitors, and manages SANs on SAN hosts. You must install the NetApp Host Agent software on each SAN host that you want to monitor and manage with DataFabric Manager.

DataFabric Manager communicates with the NetApp Host Agent software using HTTP or HTTPS. You can specify the protocol to use for communication in DataFabric Manager and when you install the NetApp Host Agent software on your SAN host. By default, both the Host Agent and DataFabric Manager are configured to use HTTP. If both DataFabric Manager and the NetApp Host Agent software are configured to use HTTP, port 4092 is used for communication; if HTTPS is configured on both, port 4093 is used.

Note: If you choose to use HTTPS for communication between DataFabric Manager and a SAN host, ensure that both DataFabric Manager and the NetApp Host Agent software on the SAN host are configured to use HTTPS.

If the Host Agent is configured to use HTTP and DataFabric Manager is configured to use HTTPS, communication between the SAN host and DataFabric Manager does not occur. Conversely, if the NetApp Host Agent software is configured to use HTTPS and DataFabric Manager is configured to use HTTP, communication between the two occurs, but HTTP is used for communication. For more information about the NetApp Host Agent software, see the NetApp Host Agent Installation and Administration Guide.

Related information
NetApp Host Agent Installation and Administration Guide: http://now.netapp.com/NOW/knowledge/docs/nha/nha_index.shtml

SAN management using DataFabric Manager

By using DataFabric Manager, you can perform tasks such as viewing reports about LUNs, FC switches, storage systems in a SAN, SAN hosts, HBA ports, targets, and so on. You can also group LUNs, storage systems in a SAN, or SAN hosts.

Next topics
Prerequisites for SAN management with DataFabric Manager on page 330
List of tasks performed for SAN management on page 331
List of user interface locations to perform SAN management tasks on page 332

Prerequisites for SAN management with DataFabric Manager

Different prerequisites apply for SAN management with DataFabric Manager.

For DataFabric Manager:
You must have the SAN Management license key installed on your DataFabric Manager server (DataFabric Manager 2.3 or later). All SAN monitoring and management features, including LUN and FC switch monitoring, are available only if you have the SAN Management license key installed. If you do not have this license, contact your sales representative to find out how you can purchase one.

For NetApp storage systems (targets):
DataFabric Manager does not report any data for your SAN if you do not have it set up according to the guidelines specified by NetApp. SAN deployments are supported on specific hardware platforms running Data ONTAP 6.3 or later. For information about the supported hardware platforms, see the SAN Configuration Guide.

For FC switches:
• To enable discovery of FC switches, the following settings must be enabled:
  • discoverEnabled (available from the CLI only; a CLI sketch follows at the end of this section)
  • Host Discovery (Setup > Options > Edit Options: Discovery)
  • SAN Device Discovery (Setup > Options > Edit Options: Discovery)
• DataFabric Manager can discover and monitor only FC switches, specifically Brocade Silkworm switches, configured in a SAN setup as specified in the SAN Setup Overview for FCP guide.
  Note: For a list of supported Brocade switches, see the DataFabric Manager software download pages at http://now.netapp.com/.
• All FC switches to be managed by DataFabric Manager must be connected to a TCP/IP network either known to or discoverable by DataFabric Manager. The FC switches must be connected to the network through an Ethernet port and must have a valid IP address.
• Certain FC switch monitoring reports in DataFabric Manager require that the storage systems connected to an FC switch run Data ONTAP 6.4 or later. For example, a report displaying storage systems that are connected to an FC switch displays only storage systems that are running Data ONTAP 6.4 or later.

For SAN hosts (initiators):
• All SAN hosts to be managed by DataFabric Manager must be connected to a TCP/IP network either known to or discoverable by DataFabric Manager. The SAN hosts must be connected to the network through an Ethernet port and must each have a valid IP address.
• Each SAN host must have the NetApp Host Agent software installed on it. The NetApp Host Agent software is required for discovering, monitoring, and managing SAN hosts. For more information about the Host Agent software, see the NetApp Host Agent Installation and Administration Guide.
• The Windows SAN hosts must have the proper version of the SnapDrive software installed, for LUN management by using DataFabric Manager. To find out which SnapDrive version you must have installed, see the SAN Configuration Guide at http://now.netapp.com/.
  Note: LUN management on UNIX SAN hosts by using DataFabric Manager is not currently available.

Related information
NetApp Host Agent Installation and Administration Guide: http://now.netapp.com/NOW/knowledge/docs/nha/nha_index.shtml
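Because discoverEnabled is available only from the CLI, enabling it might look like the following on the DataFabric Manager workstation. The yes value syntax is an assumption; check dfm option list for the form your release expects.

    dfm option list discoverEnabled      # show the current discovery setting
    dfm option set discoverEnabled=yes   # assumed value syntax; turns discovery on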

List of tasks performed for SAN management

You can perform different tasks for SAN management by using DataFabric Manager:
• View reports that provide information about all LUNs, FC switches, SAN hosts, and HBA ports in a SAN.
• View details about a specific LUN, an FC switch, a target on a storage system, a SAN host, and an HBA port.
• Group LUNs, FC switches, storage systems in a SAN, or SAN hosts for efficient monitoring and management.
• Change the monitoring intervals for LUNs, FC switches, and SAN hosts.
• View SAN events and logs containing information related to LUN management, and respond to SAN events.
• Configure DataFabric Manager to generate alarms to notify recipients of the SAN events.
• Configure settings for SAN hosts, such as the administration port on which DataFabric Manager should poll the SAN host. You can also configure the type of protocol (HTTP or HTTPS) DataFabric Manager should use to communicate with SAN hosts.
• Perform management functions such as configuring an FC switch, and creating, modifying, or expanding a LUN.

List of user interface locations to perform SAN management tasks

You can perform SAN management tasks at the following locations in the Operations Manager Control Center UI:

SANs tab (Control Center > Home > Member Details): To view reports about all or a group of SAN components (LUNs, FC targets on a storage system, FC switches, and SAN hosts). You also use this tab to perform the following tasks:
• Access the details about a specific SAN component.
• Create groups of LUNs, FC switches, or SAN hosts.
• Perform LUN and FC switch management functions.

Options link (Setup > Options): To enable and disable the discovery of SAN components and to change the monitoring intervals for FC switches, LUNs, and SAN hosts in the DataFabric Manager database.

Events tab (Control Center > Home > Group Status > Events): To view and respond to SAN events.

Alarms link (Control Center > Home > Group Status > Alarms): To configure alarms for SAN events.

Reports for monitoring SANs

You can view various reports about the SAN components that DataFabric Manager manages.

Next topics
Location of SAN reports on page 333
DataFabric Manager managed SAN data in spreadsheet format on page 334
Where to find information for specific SAN components on page 334
Where to view LUN details of SAN components on page 335
Tasks performed from the LUN Details page for a SAN host on page 335
Information about FC target on a SAN host on page 335
Information about the FC switch on a SAN host on page 336
Access to the FC Switch Details page on page 336
Information about FC Switch on a SAN host on page 336
Tasks performed from the FC Switch Details page for a SAN host on page 337
Information about NetApp Host Agent software on a SAN host on page 337
Accessing the HBA Port Details page for a SAN host on page 338
Details about the HBA Port Details page on page 338
List of SAN management tasks on page 339
LUN management on page 339
Initiator group management on page 340
FC switch management on page 340

Location of SAN reports

Reports about the SAN components that DataFabric Manager manages are available on the SANs tab. You can view reports by selecting them in the Report drop-down list. If you want to view a report about a specific group, you specify the group by clicking the group name in the left pane of the DataFabric Manager window.

You can view the following reports from the SANs tab:
• LUNs, All
• Fibre Channel Switches, All
• Fibre Channel Switches, Comments
• Fibre Channel Switches, Compact
• Fibre Channel Switches, Deleted
• Fibre Channel Switches, Down
• Fibre Channel Switch Environmentals
• Fibre Channel Switch Locations
• Fibre Channel Switch Firmware

• Fibre Channel Switches, Up
• Fibre Channel Switches, Uptime
• Fibre Channel Switch Ports
• Fibre Channel Links, Physical
• Fibre Channel Links, Logical
• FCP Targets
• HBA Ports, All
• HBA Ports, FCP
• HBA Ports, iSCSI
• SAN Hosts, All
• SAN Hosts, Comments
• SAN Hosts, Deleted
• SAN Hosts, FCP
• SAN Hosts, iSCSI
• SAN Hosts Traffic, FCP
• SAN Host Cluster Groups
• SAN Host LUNs, All
• SAN Host LUNs, iSCSI
• LUNs, FCP
• LUNs, Comments
• LUNs, Deleted
• LUNs, Unmapped
• LUN Statistics
• LUN Initiator Groups
• Initiator Groups

DataFabric Manager managed SAN data in spreadsheet format

You can put data from reports for SAN components that DataFabric Manager manages into a spreadsheet format. When you view a report, you can bring up the data in spreadsheet format by clicking the spreadsheet icon on the right side of the Report drop-down list. You can use the data in the spreadsheet to create your own charts and graphs or to analyze the data statistically.

Where to find information for specific SAN components

You can view information about SAN components from the Details page of a SAN component. The Details page of a SAN component provides information specific to that component. For example, the FC Switch Details page provides the following details:

• Firmware version
• Uptime
• Status of each port on the switch

The LUN Details page provides the following details:
• The status and size of a LUN
• Events associated with the LUN
• All groups to which the LUN belongs

In addition, you can obtain graphs of information about the SAN components, access the management tools for these components, edit settings, and view events associated with these components.

Where to view LUN details of SAN components

You can view LUN details of SAN components by using DataFabric Manager. You can access the LUN Details page for a LUN by clicking the path of the LUN in any of the reports.

Tasks performed from the LUN Details page for a SAN host

From the LUN Details page, you can diagnose connectivity, expand a LUN, destroy a LUN, edit settings, and so on. The Tools list on the LUN Details page enables you to select the following tasks:
• Diagnose Connectivity
• Expand this LUN: Launches a wizard that helps you to expand the LUN.
• Destroy this LUN: Launches a wizard that helps you to destroy the LUN.
• Edit Settings
• Refresh Monitoring Samples: Obtains current monitoring samples from the storage system on which this LUN exists.
• FilerView: Launches FilerView, the Web-based UI of the storage system on which the LUN exists.
• Manage LUNs with FilerView: Launches FilerView and displays a page where you can manage the LUN.
• Run a Command: Runs a Data ONTAP command on the storage system on which this LUN exists (a CLI sketch follows this list).
  Note: You must set up the appropriate authentication to run commands on the storage system.
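The Run a Command task has a CLI counterpart on the DataFabric Manager workstation. A minimal sketch, assuming a storage system named filer1 and that your release supports dfm run cmd (verify with dfm help run):

    dfm run cmd filer1 lun show   # run a Data ONTAP command on the system hosting the LUN
    dfm run cmd filer1 version    # any Data ONTAP command can be passed through this way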

Information about FC target on a SAN host

You can view FCP target details of a SAN host from the FCP Target Details page. The FCP Target Details page contains the following information:
• Name of the FC switch and the port to which the target connects
• Status of the target
• Name of the storage system on which the target is installed
• Port state of the target. Port state can be one of the following:
  • Startup
  • Uninitialized
  • Initializing Firmware
  • Link Not Connected
  • Waiting For Link Up
  • Online
  • Link Disconnected
  • Resetting
  • Offline
  • Offline By User/System
  • Unknown
• Specifics about the target, such as hardware version, firmware version, and speed of the target
• FC topology of the target. Topology can be one of the following:
  • Fabric
  • Point-To-Point
  • Loop
  • Unknown
• WWNN and WWPN of the target
• Other FCP targets on the storage system on which the target is installed
• Time of the last sample collected and the configured polling interval for the FCP target

Information about the FC switch on a SAN host

You can view FCP switch details of a SAN host and the status of the switch from the FC Switch Details page. You can also view events associated with the switch and the number of devices connected to the switch.

Access to the FC Switch Details page

You can access the FC Switch Details page from SAN monitoring reports. You can access the FC Switch Details page for a switch by clicking the name of the switch in any of the reports for SAN monitoring.

Information about FC Switch on a SAN host

You can view FC Switch details of a SAN host from the FC Switch Details page. The FC Switch Details page contains the following information:
• Status of the switch
• Firmware version installed on the switch
• The length of time that the switch has been up since the last reboot

• Contact information for the switch, such as the administrator name and location of the switch
• Events associated with the FC switch
• FC switch port status: a graphical layout of the switch ports, with the color of each switch port indicating the status of the port:
  • Green: the port is online and working normally.
  • Yellow: the port is connected to a GBIC, but is not synchronized (No Sync).
  • Red: the port is offline or not working normally.
  • Black: there is no GBIC connected.
• Number of devices connected to the switch and a link to a report that lists those devices
• The DataFabric Manager groups to which the FC switch belongs
• Time when the last data sample was collected and the configured polling interval for the switch
• Graph of FC traffic per second on the switch. You can view the traffic over a period of one day, one week, one month, one quarter (three months), or one year.

Tasks performed from the FC Switch Details page for a SAN host

You can edit the FC Switch settings, refresh monitoring samples, invoke FabricWatch, and run Telnet using the FC Switch Details page. You can use the Tools list on the FC Switch Details page to select the tasks to perform for the switch whose Details page you are on. The tasks are as follows:
• Edit Settings: Displays the Edit FC Switch Settings page, where you configure the login and password information in DataFabric Manager for the switch. DataFabric Manager requires the login and password information to connect to a switch using the Telnet program.
• Refresh Monitoring Samples: Obtains current monitoring samples from the FC switch.
• Invoke FabricWatch: Connects you to FabricWatch, the Web-based UI of the Brocade switch. You might want to connect to FabricWatch to manage and configure the switch.
• Run Telnet: Connects to the CLI of the switch using the Telnet program. DataFabric Manager requires the login and password information to connect to a switch using the Telnet program.

Information about NetApp Host Agent software on a SAN host

You can view the SAN host status, version details of the operating system and Host Agent, number of HBAs, and so on, in the Host Agent Details page. The Details page for a SAN host contains the following information:
• Status of the SAN host and the time since the host has been up
• The operating system and NetApp Host Agent software version running on the SAN host
• The SAN protocols available on the host

• The MSCS configuration information about the SAN host, if any, such as the cluster name, cluster partner, and cluster groups to which the SAN host belongs
• The events that occurred on this SAN host
• The number of HBAs and HBA ports on the SAN host
• The devices related to the SAN host, such as the FC switches connected to it and the storage systems accessible from it
• The number of LUNs mapped to the SAN host and a link to the list of those LUNs
• The number of initiator groups that contain the SAN host and a link to the list of those initiator groups
• Time of the last sample collected and the configured polling interval for the SAN host
• Graphs of information about the SAN host over a period of one day, one week, one month, one quarter (three months), or one year

Accessing the HBA Port Details page for a SAN host

You can access the HBA Port Details page for a SAN host from the SAN host reports.

Steps
1. Click Member Details > SANs > Report drop-down > HBA Ports, All.
2. Click the name of the HBA port. The HBA Port Details page is displayed.

Details about the HBA Port Details page

You can use the HBA Port Details page to view the status of the HBA port, view the HBA protocol, and view events that occurred on the HBA port. The Details page for an HBA port contains the following information:
• Status and state of the HBA port
• Name of the SAN host on which the HBA is installed
• The protocol available on the HBA
• Specifics about the HBA, such as the name, model, serial number, hardware version, driver version, and firmware version
• WWNN of the HBA and WWPN of the HBA port
• FC switch port to which the HBA port connects
• The events that occurred on this HBA port
• The number of HBA ports on the HBA and a link to the list of those ports
• The number of HBA ports on the SAN host on which the HBA port exists and a link to the list of those ports
• The number of storage systems accessible to the HBA port and a link to the list of those storage systems

• The number of LUNs mapped to the HBA port and a link to the list of those LUNs
• The number of initiator groups that contain the HBA port and a link to the list of those initiator groups
• Time of the last sample collected and the configured polling interval for the HBA port
• Graphs of information such as the HBA port traffic per second or the HBA port frames per second over a period of one day, one week, one month, one quarter (three months), or one year

List of SAN management tasks

You can perform various SAN management tasks by using DataFabric Manager:
• For LUNs: Create, expand, and destroy LUNs; map a LUN to or unmap a LUN from initiator groups.
• For initiator groups: Create or delete initiator groups.
• For FC switches: Configure and view the current configuration of a switch.

LUN management

You can manage a LUN in two ways by using DataFabric Manager:
• Use the LUN management options available in DataFabric Manager. By using a wizard available in the Tools list on the Host Agent Details page and LUN Details page, you can create, expand, and destroy a LUN:
  • The Host Agent Details page provides a Create a LUN option in the Tools list to create LUNs. When you select the Create a LUN option, a wizard is launched that takes you through the process of creating a LUN.
  • The LUN Details page provides two LUN management options in the Tools list: Expand this LUN and Destroy this LUN. These LUN management options launch wizards specific to their function. The wizards take you through the process of expanding or destroying a LUN.
  Before you run a wizard, ensure the following:
  • The SAN host management options are appropriately set on the Options page or the Edit Host Agent Settings page.
  • To manage a shared LUN on an MSCS cluster, perform the operation on the active node of the cluster. Otherwise, the operation fails.
• Connect to FilerView. The LUN Details page provides a Manage LUNs with FilerView option in the Tools list. This option enables you to access FilerView, the Web-based UI of your storage system. You can perform the following LUN management functions from FilerView:
  • Add or delete a LUN

  • Modify configuration settings such as the size or status of a LUN
  • Map a LUN to or unmap a LUN from initiator groups
  • Create or delete initiator groups

The Tools list on the LUN Details page displays two options for FilerView: the Invoke FilerView option connects you to the main window of the UI of your storage system, and the Manage LUNs with FilerView option connects you directly to the Manage LUNs window.

Initiator group management

You can manage initiator groups by connecting to FilerView, the Web-based management interface of your storage system. LUNs inherit access control settings from the storage system, volume, and qtree they are contained in. Therefore, to perform LUN operations on storage systems, you must have appropriate privileges set up on those storage systems.

FC switch management

The FC Switch Details page provides the Invoke FabricWatch option in the Tools list. You can use this option to connect to FabricWatch, the Web-based management tool for the Brocade SilkWorm switches.

DataFabric Manager options

DataFabric Manager uses the values of options in its database to determine whether to automatically discover new objects, how frequently to monitor objects, and what threshold value to use to generate an event. When DataFabric Manager is installed, these options are assigned default values; however, you can change these values. The options can be changed globally, to apply to all objects in the DataFabric Manager database, or locally, to apply to a specific object or a group of objects in the database. Some options can be set globally but not locally. When both global and local options are specified for an object, the local options override the global options.

Note: A DataFabric Manager object is an entity that is monitored or managed by DataFabric Manager. Examples of DataFabric Manager objects are storage systems, FC switches, LUNs, and user quotas.
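From the workstation CLI, global options are read and written with the dfm option commands. A minimal sketch follows (the option name and value shown are placeholders, not documented names):

    dfm option list                       # show every global option and its current value
    dfm option set someOption=someValue   # placeholder name/value; changes the global default

Local, per-object overrides are made from the object's Edit Settings page in Operations Manager, and they take precedence over these global defaults.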

DataFabric Manager options for SAN management

There are two types of DataFabric Manager options for SAN management: Global-only options, and Global and local options.

Global-only options:
• SAN Device Discovery: This option specifies whether to enable or disable the automatic discovery of SAN components (LUNs, FC switches, SAN hosts). By default, this option is enabled.
• LUN monitoring interval, Fibre Channel monitoring interval, and SAN Host monitoring interval: The monitoring intervals determine how frequently DataFabric Manager collects information about an object. Following are the default monitoring intervals:
  • For LUNs, 30 minutes
  • For Fibre Channel, 5 minutes
  • For SAN Host, 5 minutes

Global and local options:
• Host Agent Login: This option specifies the user name that is used to authenticate to the NetApp Host Agent software. By default, the user name guest is used, for SAN monitoring. If you want to enable SAN management in addition to monitoring, you must select the user name admin.
• Host Agent Monitoring Password: This option specifies the password that is used for the user name guest to authenticate to the Host Agent software for SAN monitoring. By default, public is used as the password; however, you can change it. The password you specify for this option must match the password specified in the Host Agent software running on the SAN hosts. Otherwise, DataFabric Manager cannot communicate with the SAN host.
• Host Agent Management Password: This option specifies the password that is used for the user name admin to authenticate to the NetApp Host Agent software for SAN monitoring and management. There is no default value for the management password; therefore, you must specify a value for this option before you can use the LUN management features through DataFabric Manager. If you change the password in DataFabric Manager, ensure that you change the password to the same value in the Host Agent software running on the SAN hosts. Otherwise, DataFabric Manager cannot communicate with the SAN host.
• Host Agent Administration Transport: This option specifies the protocol, HTTP or HTTPS, used to connect to the NetApp Host Agent software. By default, this option is set to HTTP.
• Host Agent Administration Port: This option specifies the port that is used to connect to the NetApp Host Agent software. By default, 4092 is used for HTTP and 4093 for HTTPS.
• HBA Port Too Busy Threshold: This threshold specifies the value, as a percentage, at which an HBA port has so much incoming and outgoing traffic that its optimal performance is hindered. If this threshold is crossed, DataFabric Manager generates an HBA Port Traffic High event. By default, this threshold is set to 90 for all HBA ports.
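If you prefer the CLI, the same Host Agent settings can usually be inspected there before changing them in the UI. The internal option names vary by release, so the name below is a hypothetical placeholder (confirm the exact names with dfm option list first):

    dfm option list | grep -i hostAgent   # find the internal Host Agent option names (Linux workstation)
    dfm option set hostAgentLogin=admin   # hypothetical option name; selects admin for SAN management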

Next topics
Where to configure monitoring intervals for SAN components on page 342
Deleting and undeleting SAN components on page 342
Reasons for deleting and undeleting SAN components on page 343
Process of deleting SAN components on page 343
Process of undeleting SAN components on page 343

Where to configure monitoring intervals for SAN components

You can configure the global options on the Options page. To configure options locally (for a specific object), you must be on the Edit Settings page of that specific object (Details page > Tools list > Edit Settings).

Deleting and undeleting SAN components

You can stop monitoring a SAN component (a LUN, an FC switch, a storage system, or a SAN host) with DataFabric Manager by deleting it from the Global group. When you delete a SAN component from the Global group, DataFabric Manager stops collecting and reporting data about it. Data collection and reporting is not resumed until the component is added back (using the Undelete button) to the database.

Note: When you delete a SAN component from any group except Global, the component is deleted only from that group; DataFabric Manager does not stop collecting and reporting data about it. You must delete the SAN component from the Global group for DataFabric Manager to stop monitoring it.

You cannot stop monitoring a specific FC target or an HBA port unless you stop monitoring the storage system or the SAN host on which the target or the port exists.

Reasons for deleting and undeleting SAN components

You might want to delete a SAN component if you want to stop monitoring it, either temporarily or permanently.

Temporarily stop monitoring a SAN component: You might want to delete a SAN component if you want to perform maintenance on the component and do not want DataFabric Manager to generate events and alarms (if configured) during the maintenance process.

Permanently stop monitoring a SAN component: You might want to delete a component if it exists on a non-mission-critical network and does not need to be monitored, but it has been discovered by DataFabric Manager. A non-mission-critical network could be a laboratory network.

You might want to undelete a SAN component if you want to resume the monitoring of a component that you previously deleted.

Process of deleting SAN components

You can delete a SAN component from any of the reports related to that component. First, you select the components you want to delete in the left-most column of a report. Then, you click Delete at the bottom of the report to delete the selected components.

Process of undeleting SAN components

All deleted objects are listed in their respective Deleted reports. All deleted LUNs are listed in the LUNs, Deleted report. All deleted FC switches are listed in the FC Switches, Deleted report. All deleted SAN hosts are listed in the SAN Hosts, Deleted report. You can undelete an object by selecting it from its Deleted report and clicking Undelete at the bottom of the report.

How SAN components are grouped

You can group SAN components (LUNs, storage systems, SAN hosts, or FC switches) to manage them easily and to apply access control. Storage systems, SAN hosts, and FC switches are considered storage systems for the purpose of creating groups. Therefore, when you create a group of storage systems, SAN hosts, or FC switches, the type of the created group is "Appliance resource group." In addition, you can add storage systems, SAN hosts, and FC switches to an Appliance resource group.

However, when you create a group of LUNs, the created group is a "LUN resource group."

Note: You cannot group HBA ports or FCP targets. Therefore, a group containing LUNs cannot contain any other objects: storage systems, SAN hosts, or FC switches.

Next topics
Restriction of SAN management access on page 344
Access control on groups of SAN components on page 344

Restriction of SAN management access

You can allow an administrator to manage your SAN hosts and devices by selecting the GlobalSAN role on the Administrators page (Setup menu > Administrative Users). The GlobalSAN role allows an administrator to create, expand, and destroy LUNs. This role can be global or per-group.

Note: By default, DataFabric Manager is configured to provide global Read access across all groups, including Global, to the user Everyone.

Access control on groups of SAN components

Just as with other DataFabric Manager groups, you can apply access control on groups of SAN components to restrict the scope of tasks an administrator can perform. You can apply access control to a group (group-based access control) by assigning a role to an administrator account for that group. The role defines the administrator's authority to view and perform operations on the components in that group.
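From the CLI, creating an Appliance resource group and adding a discovered member might look like the following sketch (dfm group create and dfm group add are assumed here; the group and switch names are placeholders):

    dfm group create sanlab-fabric            # new group; its type follows from its members
    dfm group add sanlab-fabric brocade-sw1   # add a discovered FC switch by name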

Glossary

associated view
A performance view available for an object. An associated view can be a canned view or a custom view. Canned views are associated to an object based on the object type. Custom views can be associated to an object by the user.

chart
A graphical display of the data provided by one or more counters.

cluster
A cluster refers to a group of connected nodes (storage systems) that share a global namespace and that you can manage as a single virtual server or multiple virtual servers, providing performance, reliability, and scalability benefits.

counter
A statistical measurement of activity on a storage system or storage subsystem that is provided by Data ONTAP. Some counters that Performance Advisor tracks apply to both storage systems and vFiler units. Other counters (for example, CPU usage) apply only to storage systems and the associated host of a vFiler unit.

controller
Controller, also known as storage controller, refers to the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.

custom views
Custom views are user-defined views that include metrics from specific instances. Custom views can be associated to an object by the user.

dataset
A collection of storage sets along with configuration information associated with data. The storage sets associated with a dataset include a primary storage set used to export data to clients, and the set of replicas and archives that exist on other storage sets. Datasets represent exportable user data.

hierarchical groups
The customizable grouping of storage systems and vFiler units whose performance can be monitored and displayed through Performance Advisor. All systems that are monitored by the DataFabric Manager server and whose performance can be monitored by Performance Advisor are displayed in the Groups browser panel of Performance Advisor.

historical data
Data that is archived by the performance-monitoring server on the DataFabric Manager server. All the data that is included in the Performance Advisor default views is also archived as historical data. Historical data is collected on an ongoing basis, independent of whether a client has the associated performance view open or not. Historical data can be used for diagnosing past performance problems or for short-term trend analysis. The historical data is accessible to any Performance Advisor that can connect to the workstation.

hosting storage system
The physical storage system on which one or more vFiler units are configured. The hosting storage system is also referred to as the "vFiler host."

lower threshold
The type of threshold set for an event generation when the counter value falls and remains below the lower threshold value for longer than the Threshold Interval specified.

logical hierarchy
The hierarchy that displays only the Logical Objects and their instances when selected.

logical objects
Object types that represent storage containers, such as volumes, qtrees, vFiler units, and datasets, are known as Logical Objects.

managed object
A managed object is an object that is contained within a DataFabric Manager group. A managed object represents any object that has an identity and a name in the DataFabric Manager object table. Volumes, aggregates, and LUNs are examples of managed objects.

NetApp Management Console
NetApp Management Console is the client platform for the Java-based NetApp Manageability Software applications. NetApp Management Console runs on a Windows or Linux workstation, separate from the system on which the DataFabric Manager server is installed.

object
Typically there is an object associated with each hardware or software subsystem within Data ONTAP. Examples of hardware objects are processor, disk, NVRAM, and networking card objects. WAFL, RAID, and target are examples of internal objects specific to Data ONTAP. CIFS, FCP, iSCSI, and NFS are examples of software protocol objects. Examples of system objects are avg_processor_busy, cifs_ops, nfs_ops, and net_data_recv. Virtual objects like the system object capture key statistics across all the other objects in one single place.

Operations Manager
Operations Manager is the Web-based user interface of DataFabric Manager, from which you can monitor and manage multiple storage systems. Operations Manager is used for day-to-day monitoring, alerting, and reporting on storage infrastructure.

Performance Advisor
The Performance Advisor component installed on the NetApp Management Console platform enables you to monitor the performance of storage systems and vFiler units. The user interface of Performance Advisor contains only performance-monitoring information. This Performance Advisor interface is distinct from Operations Manager, which contains other DataFabric Manager information. Performance Advisor gathers the sets of data that the performance-monitoring server has collected to generate its graphical charts and views.

performance-monitoring server
The Performance Advisor component that is enabled on the DataFabric Manager server to collect and archive performance data at regular intervals from the monitored storage systems, vFiler units, and clusters on systems running Data ONTAP 8.0 Cluster-Mode.

performance view
A collection of one or more counters accessible through Performance Advisor for a group of storage systems or subsystems that displays the counter data as one or more charts.

physical hierarchy
The hierarchy that displays only the Physical Objects and their instances when selected.

physical objects
Object types that represent the physical resources in a storage system, such as disks, network interfaces, and RAID groups, are known as Physical Objects.

real-time data
Data that is passed through Performance Advisor for display but is not stored. Real-time data is collected only as long as the performance view window displaying it is open on Performance Advisor. Real-time data is accessible only to the client by which it is retrieved. Real-time data is suitable for diagnosing immediate performance issues.

resource pool
Containers that are used for delegation, sub storage provisioning, replication, and in some cases, containing storage provisioning resources like storage systems, aggregates, and spare disks.

retention period
The amount of previously collected data for a counter that is accessible to the user. You can set the retention period based on units of a minute, hour, day, week, month, or year.

sample rate
The rate at which data is collected for a counter. You can set the sample rate based on units of a minute, hour, day, week, month, or year.

storage set
A storage set contains a group of volumes; a volume should be in at most one storage set. The only container of merit in a storage set is a volume (flexible or traditional).

storage system
An appliance that is attached to a computer network and is used for data storage; a managed object in DataFabric Manager. FAS appliances and NearStore systems are examples of storage systems.

templates and canned templates
A template is a view definition that applies to object instances known to Performance Advisor. A set of pre-configured templates in Performance Advisor is known as Canned Templates.

threshold interval
The amount of time in seconds for which an event generation is suppressed before Performance Advisor decides that a counter has crossed a specified threshold and an event needs to be generated. The same interval is also used to generate a normal event.

unmanaged object
Objects apart from the managed objects belong to the class of unmanaged objects. An unmanaged object does not have a unique identity in the DataFabric Manager object table.

upper threshold
The type of threshold set for an event generation when the counter value exceeds and remains above the upper threshold value for longer than the Threshold Interval specified.

vFiler unit
One or more virtual storage systems that can be configured on a single physical storage system licensed for the MultiStore® feature. DataFabric Manager 3.4 and later enables monitoring and management of vFiler units.

view
A collection of related panels represented together and displayed by the Performance Advisor client.
