NetApp OnCommand Console Administration
NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: doccomments@netapp.com
Information Web: http://www.netapp.com
Part number: 215-05997_A0
July 2011
Table of Contents | 3
Contents
About this document .................................................................................. 13
Welcome to OnCommand console Help ................................................... 15
How to use OnCommand console Help .................................................................... 15
Bookmarking your favorite topics ................................................................. 15
Understanding how the OnCommand console works ............................................... 15
About the OnCommand console ................................................................... 15
Window layout and navigation ..................................................................... 15
Window layout customization ....................................................................... 16
How the OnCommand console works with the Operations Manager
console and NetApp Management Console ............................................ 17
Launching the Operations Manager console ................................................. 17
Installing NetApp Management Console ...................................................... 18
How the OnCommand console works with AutoSupport ............................. 19
Dashboard ................................................................................................... 21
Understanding the dashboard .................................................................................... 21
OnCommand console dashboard panels ....................................................... 21
Monitoring the dashboard ......................................................................................... 22
Monitoring dashboard panels ........................................................................ 22
Page descriptions ....................................................................................................... 22
Availability dashboard panel ......................................................................... 22
Events dashboard panel ................................................................................. 23
Full Soon Storage dashboard panel ............................................................... 24
Fastest Growing Storage dashboard panel .................................................... 24
Dataset Overall Status dashboard panel ........................................................ 25
Resource Pools dashboard panel ................................................................... 25
External Relationship Lags dashboard panel ................................................ 26
Unprotected Data dashboard panel ............................................................... 26
Jobs .............................................................................................................. 45
Understanding jobs .................................................................................................... 45
Understanding jobs ........................................................................................ 45
Managing jobs ........................................................................................................... 45
Canceling jobs ............................................................................................... 45
Monitoring jobs ......................................................................................................... 46
Monitoring jobs ............................................................................................. 46
Page descriptions ....................................................................................................... 46
Jobs tab .......................................................................................................... 46
Servers ......................................................................................................... 53
Understanding virtual inventory ................................................................................ 53
How virtual objects are discovered ............................................................... 53
Monitoring virtual inventory ..................................................................................... 53
Monitoring VMware inventory ..................................................................... 53
Monitoring Hyper-V inventory ..................................................................... 57
Managing virtual inventory ....................................................................................... 58
Adding virtual objects to a group .................................................................. 59
Adding a virtual machine to inventory .......................................................... 59
Preparing a virtual object managed by the OnCommand console for
deletion from inventory ........................................................................... 60
Performing an on-demand backup of virtual objects .................................... 61
Restoring backups from the Server tab ......................................................... 64
Mounting and unmounting backups in a VMware environment ................... 67
Page descriptions ....................................................................................................... 71
VMware ......................................................................................................... 71
Hyper-V ......................................................................................................... 80
Storage ......................................................................................................... 85
Physical storage ......................................................................................................... 85
Understanding physical storage .................................................................... 85
Configuring physical storage ........................................................................ 87
Managing physical storage ............................................................................ 89
Monitoring physical storage .......................................................................... 94
Page descriptions ......................................................................................... 100
Virtual storage ......................................................................................................... 118
Understanding virtual storage ..................................................................... 118
Managing virtual storage ............................................................................. 120
Monitoring virtual storage ........................................................................... 123
Page descriptions ......................................................................................... 125
Logical storage ........................................................................................................ 134
Understanding logical storage ..................................................................... 134
Managing logical storage ............................................................................ 136
Monitoring logical storage .......................................................................... 141
Page descriptions ......................................................................................... 150
Evaluating and resolving issues displayed in the Conformance Details
dialog box .............................................................................................. 232
Monitoring datasets ................................................................................................. 236
Overview of dataset status types ................................................................. 236
How to evaluate dataset conformance to policy .......................................... 239
Monitoring dataset status ............................................................................ 244
Monitoring backup and mirror relationships ............................................... 245
Listing nonconformant datasets and viewing details .................................. 246
Evaluating and resolving issues displayed in the Conformance Details
dialog box .............................................................................................. 246
Page descriptions ..................................................................................................... 251
Datasets tab ................................................................................................. 251
Create Dataset dialog box or Edit Dataset dialog box ................................ 258
Page descriptions ......................................................................................... 338
Database schema ..................................................................................................... 359
How to access DataFabric Manager server data ......................................... 359
Supported database views ........................................................................... 360
alarmView ................................................................................................... 361
cpuView ...................................................................................................... 362
designerReportView .................................................................................... 363
Database view datasetIOMetricView .......................................................... 363
Database view datasetSpaceMetricView .................................................... 364
Database view datasetUsageMetricCommentView .................................... 366
hbaInitiatorView .......................................................................................... 367
hbaView ...................................................................................................... 367
initiatorView ................................................................................................ 367
reportOutputView ........................................................................................ 367
sanhostlunview ............................................................................................ 368
usersView .................................................................................................... 368
volumeDedupeDetailsView ........................................................................ 369
Error: Vss Requestor - Backup Components failed with partial writer
error. ...................................................................................................... 511
Error: Failed to start VM. Job returned error 32768 ................................... 512
Error: Failed to start VM. You might need to start the VM using Hyper-V Manager ............................................................................................. 512
Error: Vss Requestor - Backup Components failed. An expected disk did
not arrive in the system .......................................................................... 512
Error: Vss Requestor - Backup Components failed. Writer Microsoft
Hyper-V VSS Writer involved in backup or restore encountered a
retryable error ........................................................................................ 513
Hyper-V virtual objects taking too long to appear in OnCommand
console ................................................................................................... 514
Increasing SnapDrive operations timeout value in the Windows registry . . 514
MBR unsupported in the Hyper-V plug-in ................................................. 514
Some types of backup failures do not result in partial backup failure ........ 515
Space consumption when taking two snapshot copies for each backup ..... 515
Virtual machine snapshot file location change can cause the Hyper-V
plug-in backup to fail ............................................................................. 516
Virtual machine backups taking too long to complete ................................ 516
Virtual machine backups made while a restore operation is in progress
might be invalid ..................................................................................... 516
Volume Shadow Copy Service error: An internal inconsistency was
detected in trying to contact shadow copy service writers. ................... 517
Hyper-V VHDs do not appear in the OnCommand console ....................... 518
[Figure: OnCommand console window layout. Callouts identify the menu bar (File, View, Administration, and Help menus), the Groups pane, the Dashboard panel tabs, the breadcrumb trail, the command buttons, the list of views, the list of objects, the list of related objects, the details area for the selected object, and the tabs.]
Sorting
You can click the column headings to sort the column entries in ascending or descending order; the sort arrows in the column heading indicate the order in which entries appear.
Filtering
You can use the filter icon to display only those entries that match the
conditions provided. You can use the character filter (?) or string filter (*) to
narrow your search. You can apply filters to one or more columns. The column
heading is highlighted if a filter is applied. For example, you can search for
alarms configured for a particular event type: Aggregate Overcommitted. In the
Alarms tab, you can use the filter in the Event column. You can use the string
filter to search for alarms configured for the event "Aggregate Overcommitted."
In the string filter, when you type *aggr, all events whose names start with "aggr"
are listed.
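The wildcard behavior described above can be illustrated with Python's fnmatch module, which supports the same ? (single character) and * (string) wildcards. This is a sketch only: the event names are hypothetical, and the assumptions that matching is case-insensitive and that a trailing wildcard is implied are this sketch's, not necessarily the product's.

```python
from fnmatch import fnmatch

# Hypothetical event names used only for illustration.
events = [
    "Aggregate Overcommitted",
    "Aggregate Almost Overcommitted",
    "Volume Full",
    "Qtree Full",
]

def string_filter(pattern, names):
    # * matches any run of characters and ? matches any single
    # character; a trailing * is assumed here so that "*aggr" lists
    # every name containing "aggr", and matching is case-insensitive.
    return [n for n in names if fnmatch(n.lower(), pattern.lower() + "*")]

print(string_filter("*aggr", events))
print(string_filter("?tree*", events))
```

Typing *aggr therefore lists both aggregate events, while ?tree* matches only the qtree event.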
Note: If an entry in the column contains "?" or "*", to use the character filter or
You can drag the bottom of the "list of objects" area up or down to resize the
main areas of the window. You can also choose to display or hide the "list of
related objects" and "list of views" panels. You can drag vertical dividers to
resize the width of columns or other areas of the window.
How the OnCommand console works with the Operations Manager console
and NetApp Management Console
The OnCommand console provides centralized access to a variety of storage capabilities. While you
can perform most virtualization tasks directly in the OnCommand console graphical user interface,
many physical storage tasks require the Operations Manager console or NetApp Management
Console.
The OnCommand console automatically launches these other consoles when they are required to
complete a task. You must install NetApp Management Console separately. You can also access the
Operations Manager console from the OnCommand console File menu at any time.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
After installation, you can access NetApp Management Console from the following locations:
• The total number of host services registered with DataFabric Manager server.
• Of the total number of host services, the number of VMware host services and the number of Hyper-V host services.
• The total number of host services with pending authorization.
• The total number of storage systems that have host services connected to them. This is the total number of unique storage systems known to all host services registered with DataFabric Manager server.
• Of the total number of storage systems that have host services connected to them, the number of each FAS system model.
• The total number of vFiler units that are connected to host services.
• The total number of VMware virtual centers.
• The total number of VMware datacenters.
• The total number of virtual machines.
• Of the total number of virtual machines, the number of VMware virtual machines and the number of Hyper-V virtual machines.
• The total number of hypervisors.
• Of the total number of hypervisors, the number of VMware hypervisors.
• The total number of Hyper-V parents.
• The total number of datastores.
• Of the total number of datastores, the number of SAN datastores and the number of NAS datastores.
• The maximum number of virtual machines in a datastore.
• The minimum number of virtual machines in a datastore.
• The average number of virtual machines per datastore.
• The maximum number of virtual machines on an ESX server.
• The minimum number of virtual machines on an ESX server.
• The average number of virtual machines per ESX server.
• The maximum number of virtual machines on a Hyper-V server.
Dashboard
Understanding the dashboard
OnCommand console dashboard panels
The OnCommand console dashboard contains multiple panels that provide cumulative at-a-glance
information about your storage and virtualization environment. The dashboard provides various
aspects of your storage management environment, such as the availability of storage objects, events
generated for storage objects, resource pools, and dataset overall status.
The following panels are available in the OnCommand console dashboard:
Availability: Provides information about the availability of storage controllers (standalone controllers and HA pairs) and vFiler units that are discovered and monitored. You can also view the number of controllers and vFiler units that are either online or offline.
Events: Provides information about the status of the objects by listing the top five events, based on their severity.
Resource Pools: Displays the resource pools that have existing or potential space shortages.
Full Soon Storage: Displays the top five aggregates and volumes that are likely to reach a configured threshold, based on the number of days before this threshold is reached. You can also view the trend and space utilization of a particular aggregate or volume.
Fastest Growing Storage: Displays the top five aggregates and volumes whose space usage is rapidly increasing. You can also view the growth rate, trend, and space utilization of a particular aggregate or volume.
Dataset Overall Status: Displays the number of datasets with an overall status of Error, Warning, or Normal.
External Relationship Lags: Displays the relative percentages of external SnapVault, Qtree SnapMirror, and volume SnapMirror relationships with lag times in Error, Warning, and Normal status.
Unprotected Data: Displays the number of unprotected storage and virtual objects that are being monitored.
Get Started: Enables you to navigate to the Getting Started with NetApp Software page in the NetApp University Web site.
Page descriptions
Availability dashboard panel
This panel provides information about the availability of storage controllers (stand-alone controllers
and HA pairs) and vFiler units that are discovered and monitored by the OnCommand console.
Panel display
The panel icon links you to the Storage tab, where you can access details about your storage controllers and vFiler units.
Controllers: Displays the percentage of storage controllers that are either online or offline. For example, when ten storage controllers are being monitored and all the controllers are online, the Controllers area displays 100% up. If five controllers are offline, the Controllers area displays 50% down. You can also view the number of storage controllers that are online.
You can view more information about the storage controllers by clicking in the Controllers area.
vFiler Units: Displays the percentage of vFiler units that are either online or offline. For example, when ten vFiler units are being monitored and all the vFiler units are online, the vFiler Units area displays 100% up. If five vFiler units are offline, the vFiler Units area displays 50% down. You can also view the number of vFiler units that are online.
You can view more information about the vFiler units by clicking in the vFiler Units area.
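The percentages shown in the Controllers and vFiler Units areas follow directly from the online and total counts, as the examples above describe; a minimal sketch:

```python
def availability_percent(online, total):
    # Percentage of monitored controllers (or vFiler units) that are up.
    return online / total * 100

# Ten controllers monitored, all online: the area displays 100% up.
print(f"{availability_percent(10, 10):.0f}% up")
# Five of the ten offline: the area displays 50% down.
print(f"{100 - availability_percent(5, 10):.0f}% down")
```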
Events dashboard panel
The panel icon links you to the Events tab, where you can view a list of events and their properties.
Events: The events are displayed in the order of their severity: Emergency, Critical, Error, Warning, and Information.
Warning: The event source experienced an occurrence that you should be aware of. Events of this severity do not cause service disruption, and corrective action might not be required.
Information: The event may be of interest to the administrator. This severity does not represent an abnormal operation.
By clicking the specific event, you can view more information about the event from the Events tab.
Full Soon Storage dashboard panel
Lists the volumes or aggregates that will soon reach the specified threshold. By clicking the name of a resource, you can view more information about it in the Volumes view or the Aggregates view, depending on the type of resource you select.
Days to Full
Trend: Displays, as a trend line, information about the space used in the volume or the aggregate for the past 30 days.
Space Utilization
Fastest Growing Storage dashboard panel
Displays the five fastest-growing aggregates and volumes. By clicking the name of a resource, you can view more information about it in the Volumes view or the Aggregates view, depending on the type of resource you select.
Growth Rate (%): Displays the percent growth rate of the space used by the fastest-growing storage systems. The growth rate is determined by dividing the daily growth rate by the total amount of space in the storage system.
Trend: Displays, as a trend line, information about the space used in the aggregate or volume for the past 30 days.
Space Utilization
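The Growth Rate (%) calculation described above divides the daily growth by the total space in the storage system; a minimal sketch with hypothetical values:

```python
def growth_rate_percent(daily_growth_gb, total_space_gb):
    # Growth rate (%) = daily growth divided by total space,
    # expressed as a percentage.
    return daily_growth_gb / total_space_gb * 100

# Hypothetical example: a 10,000-GB aggregate growing by 200 GB per day
# has a 2% daily growth rate.
print(growth_rate_percent(200, 10_000))
```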
Error
The number of datasets with an overall status of Error. A dataset is designated with overall error status based upon the following status values:
• DR status condition: Error
• Protection status condition: Lag error or Baseline failed
• Conformance status condition: Nonconformant
• Space status condition: Error
• Resource status condition: Emergency, Critical, or Error
Warning
The number of datasets with an overall status of Warning. A dataset is designated with overall warning status based upon the following status values:
• DR status condition: Warning
• Protection status condition: Job failure, Lag warning, Uninitialized, or No protection policy for a non-empty dataset
• Conformance status condition: NA
• Space status condition: Warning
• Resource status condition: Warning
Normal
Resource Pools dashboard panel
Total Size
Space Utilization: The percentage of the resource pool's capacity that is being utilized.
Items are sorted in decreasing order of available space.
The panel icon links you to the External Relationships window in the NetApp Management Console.
This panel uses colored bars to indicate the relative percentages of external SnapVault, Qtree
SnapMirror, and volume SnapMirror relationships with lag times in Error, Warning, and Normal
status.
The Warning and Error status percentages indicate the portion of external SnapVault, Qtree
SnapMirror, and Volume SnapMirror relationships whose current lag times have exceeded the time
specified in the global Warning and Error threshold settings for those relationships. Normal status
percentages indicate the portion of external relationships whose current lag times are still within
normal range.
External relationships are protection relationships that are monitored but not managed by the
OnCommand console. The lag is the time since the last successful data update associated with an
external protection relationship was completed.
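The lag-status classification described above compares each relationship's current lag against the global Warning and Error threshold settings; a sketch under hypothetical threshold values:

```python
def lag_status(lag_hours, warning_threshold, error_threshold):
    # A relationship's current lag time is compared against the global
    # Warning and Error lag thresholds configured for its relationship
    # type; the threshold values below are hypothetical.
    if lag_hours > error_threshold:
        return "Error"
    if lag_hours > warning_threshold:
        return "Warning"
    return "Normal"

# Hypothetical settings: Warning at 12 hours, Error at 24 hours.
print(lag_status(6, 12, 24))   # within normal range
print(lag_status(18, 12, 24))  # exceeded the Warning threshold
print(lag_status(30, 12, 24))  # exceeded the Error threshold
```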
Volumes: Displays the number of unprotected volumes in your domain. The hypertext links you to the Unprotected Data window in the NetApp Management Console.
Qtrees: Displays the number of unprotected qtrees in your domain. The hypertext links you to the Unprotected Data window in the NetApp Management Console.
Hyper-V VMs: Displays the number of unprotected Hyper-V virtual machines in your domain. The hypertext links you to the Hyper-V VMs view of the Server tab.
VMware VMs: Displays the number of unprotected VMware virtual machines in your domain. The hypertext links you to the VMware VMs view of the Server tab.
Datastores
Datacenters
Storage objects are unprotected if they do not belong to a dataset or if they belong to an unprotected
dataset. Datasets are unprotected if they do not have an assigned protection policy or if they have an
assigned protection policy but do not have an initial relationship created (the dataset has never
conformed to the protection policy).
Virtual objects are unprotected if they do not belong to a dataset that has been assigned a local
policy.
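The rules above for when a storage object counts as unprotected can be summarized in a small sketch; the dictionary field names here are hypothetical, chosen only to mirror the prose:

```python
def dataset_is_protected(dataset):
    # A dataset is protected only if it has an assigned protection
    # policy and its initial relationships have been created (that is,
    # it has conformed to the policy at least once).
    return (
        dataset is not None
        and dataset["policy"] is not None
        and dataset["has_initial_relationships"]
    )

def object_is_unprotected(dataset):
    # A storage object is unprotected if it belongs to no dataset, or
    # if the dataset it belongs to is itself unprotected.
    return not dataset_is_protected(dataset)

print(object_is_unprotected(None))  # no dataset
print(object_is_unprotected({"policy": "Back up", "has_initial_relationships": True}))
print(object_is_unprotected({"policy": None, "has_initial_relationships": False}))
```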
You can use the Events tab to acknowledge and resolve events, and also create alarms for specific
events.
You can use the Alarms tab to create, edit, delete, test, and enable or disable alarms.
Group
You can create alarms only at the group level. You must decide the group for which the alarm is
added. If you want to set an alarm for a specific object, you must first create a group with that
object as the only member. For example, if you want to closely monitor a single aggregate by
configuring an alarm, you must create a group, and add the aggregate into the group. You can
then configure an alarm for the newly created group.
Note: By default, there exists a global group, and all objects and groups belong to the global
group.
Event
If you add an alarm based on the type of event generated, you should decide which events require
an alarm.
Event severity
You should decide if any event of a specified severity type should trigger the alarm and, if so,
which severity type.
Event class
You can configure a single alarm for multiple events using event class. If you add an alarm based
on the event class, you should decide if an event in an event class should trigger the alarm and, if
so, which event class. For example, the expression userquota.*|qtree.* matches all user
quota or qtree events.
Note: You can view the list of event classes from the CLI by using the following command: dfm eventType list. You can view the list of events specific to an event class by using the following command: dfm eventType list -C event-class
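The event-class expression shown above is a regular expression over event names. A quick sketch of how userquota.*|qtree.* selects events, using Python's re module; the event names here are illustrative, not a definitive list of product event types:

```python
import re

# The event-class expression from the example above.
event_class = re.compile(r"userquota.*|qtree.*")

# Hypothetical event names; only those in the class match.
events = ["userquota.full", "userquota.almost.full", "qtree.full", "volume.full"]

# re.match anchors at the start of the name, so the alternation selects
# every user quota or qtree event and skips the volume event.
matched = [name for name in events if event_class.match(name)]
print(matched)
```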
E-mails
You must provide the administrator user names or e-mail addresses of users other than the
administrator.
Pagers
You must provide the user names of the administrators or pager numbers of the
nonadministrator users.
Note: You must ensure that proper e-mail addresses and pager numbers of administrators
and nonadministrator users are configured.
SNMP traphost
You must provide the SNMP traphost. Optionally, you should provide the SNMP community name.
Script
You must provide the complete path of a script that is executed when an alarm occurs and the
user name that runs the script.
Effective time for repeat notification
You can configure an alarm to repeatedly send notification to the recipients for a specified time.
You should determine the time from which the event notification is active for the alarm. If you
want the event notification repeated until the event is acknowledged, you should determine how
often you want the notification to be repeated.
Critical
A problem occurred that might lead to service disruption if corrective action is not taken immediately.
Error
The event source is still performing; however, corrective action is required to avoid service disruption.
Warning
The event source experienced an occurrence that you should be aware of. Events of this severity do not cause service disruption, and corrective action might not be required.
Information
The event occurs when a new object is discovered or when a user action is performed. For example, when a group is created, an alarm is configured, or a storage system is added, an event with severity type Information is generated. No action is required.
Normal
A previous abnormal condition for the event source returned to a normal state and the event source is operating within the desired thresholds.
Alarm configuration
DataFabric Manager server uses alarms to notify you when events occur. DataFabric Manager server
sends the alarm notification to one or more specified recipients in different formats, such as e-mail
notification, pager alert, an SNMP traphost, or a script you wrote (you should attach the script to the
alarm).
You should determine the events that cause alarms, whether the alarm repeats until it is
acknowledged, and how many recipients an alarm has. Not all events are severe enough to require
alarms, and not all alarms are important enough to require acknowledgment. Nevertheless, to avoid
multiple responses to the same event, you should configure DataFabric Manager server to repeat
notification until an event is acknowledged.
Note: DataFabric Manager server does not automatically send alarms for the events.
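The guidelines above say an alarm can be keyed to a specific event, to an event severity type, or to an event class (a regular expression over event names). The decision of whether an alarm fires for a given event can be sketched as follows; the dictionary field names are hypothetical, used only to mirror the prose:

```python
import re

def alarm_triggers(alarm, event):
    # An alarm created for a specific event fires only for that event;
    # one created for a severity type fires for every event of that
    # severity; one created for an event class fires for every event
    # whose name matches the class expression.
    if alarm.get("event_name") is not None:
        return event["name"] == alarm["event_name"]
    if alarm.get("severity") is not None:
        return event["severity"] == alarm["severity"]
    if alarm.get("event_class") is not None:
        return re.match(alarm["event_class"], event["name"]) is not None
    return False

event = {"name": "userquota.full", "severity": "Error"}
print(alarm_triggers({"severity": "Error"}, event))
print(alarm_triggers({"event_name": "qtree.full"}, event))
print(alarm_triggers({"event_class": r"userquota.*|qtree.*"}, event))
```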
Configuring alarms
Creating alarms for events
The OnCommand console enables you to configure alarms for immediate notification of events. You
can also configure alarms even before a particular event occurs. You can add an alarm based on the
event, event severity type, or event class from the Create Alarm dialog box.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must have your mail server configured so that the DataFabric Manager server can send e-mails
to specified recipients when an event occurs.
You must have the following information available to add an alarm.
You must have the following capabilities to perform this task:
• DFM.Event.Write
• DFM.Alarm.Write
Alarms you configure based on the event severity type are triggered when that event severity level
occurs.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must have your mail server configured so that the DataFabric Manager server can send e-mails
to specified recipients when an event occurs.
You must have the following information available to add an alarm.
You must have the following capabilities to perform this task:
• DFM.Event.Write
• DFM.Alarm.Write
Alarms you configure for a specific event are triggered when that event occurs.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must have the following capabilities to perform this task:
• DFM.Event.Write
• DFM.Alarm.Write
Steps
The new configuration is immediately activated and displayed in the alarms list.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Page descriptions
Events tab
The Events tab provides a single location from which you can view a list of events and their
properties. You can perform various actions on these events such as navigating to the Alarms tab,
configuring alarms (by clicking the Manage alarms link), acknowledging, and resolving events.
Create Alarm Launches the Create Alarm dialog box in which you can create alarms for the
selected event.
Refresh
Events list
The Events list displays a list of all the events that occurred. By default, the most recent events are
listed. The list of events is updated dynamically, as events occur. You can select an event to see the
details for that event.
ID
Source ID
Displays the ID of the object with which the event is associated. By default,
this column is hidden.
Triggered
Source
Displays the full name of the object with which the event is associated.
Event
Displays the event names. You can select an event to display the event details.
State
Severity
Displays the severity type of the event. You can filter this column to show all
severity types. The event severity types are Normal, Information, Warning,
Error, Critical, and Emergency.
Acknowledged By Displays the name of the person who acknowledged the event. The field is
blank if the event is not acknowledged. By default, this column is hidden.
Acknowledged
Displays the date and time when the event was acknowledged. The field is
blank if the event is not acknowledged. By default, this column is hidden.
Resolved By
Displays the name of the person who resolved the event. This field is blank if
the event is not resolved. By default, this column is hidden.
Resolved
Displays the date and time at which the event was resolved. This field is blank
if the event is not resolved. By default, this column is hidden.
Current
Displays a "Yes" if the event is a current event, and displays a "No" if the
event is a history event.
Details area
Apart from the event details displayed in the events list, you can view additional details of the
events in the area below the events list.
Event
Displays the event names. You can select an event to display the event details.
About
Triggered
State
Severity
Displays the severity type of the event. You can filter this column to show all
severity types. The event severity types are Normal, Information, Warning, Error,
Critical, and Emergency.
Source
Displays the full name of the object with which the event is associated. By
clicking the source, you can view the details of the object from the corresponding
inventory view.
Type
Condition
Notified
Acknowledged Displays the date and time when the event was acknowledged. The field is blank if
the event is not acknowledged.
Resolved
Displays the date and time at which the event was resolved. This field is blank if
the event is not resolved.
Related references
Alarms tab
The Alarms tab provides a single location from which you can view a list of alarms configured based
on event, event severity type, and event class. You can also perform various actions from this
window, such as edit, delete, test, and enable or disable alarms.
Command buttons
The command buttons enable you to perform the following management tasks for a selected alarm:
Create
Launches the Create Alarm dialog box in which you can create an alarm based on event,
event severity type, and event class.
Edit
Launches the Edit Alarm dialog box in which you can modify alarm properties.
Delete
Test
Tests the selected alarm to check its configuration, after creating or editing the alarm.
Event
Event Severity
Group
Enabled
Start
Displays the time at which the selected alarm becomes active. By default, this
column is hidden.
End
Displays the time at which the selected alarm becomes inactive. By default, this
column is hidden.
Event Class
Displays the class of event that is configured to trigger an alarm. By default, this
column is hidden.
You can configure a single alarm for multiple events using the event class. The
event class is a regular expression that contains rules, or pattern descriptions, that
typically use the word "matches" in the expression. For example, the
userquota.*|qtree.* expression matches all user quota or qtree events.
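The behavior of such an expression can be sketched with standard regular-expression matching. The following Python snippet is illustrative only; the event names are hypothetical examples, and the actual matching is performed by the DataFabric Manager server:

```python
import re

# The event-class expression from the example above: matches any
# event whose name begins with "userquota" or "qtree".
event_class = re.compile(r"userquota.*|qtree.*")

# Hypothetical event names, for illustration only.
events = [
    "userquota-full",            # matches: a user quota event
    "qtree-files-almost-full",   # matches: a qtree event
    "volume-full",               # no match: a volume event
]

matching = [name for name in events if event_class.match(name)]
print(matching)  # ['userquota-full', 'qtree-files-almost-full']
```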
Details area
Apart from the alarm details displayed in the alarms list, you can view other additional properties of
the alarms in the area below the alarms list.
Effective Time Range
Administrators (Email Address)
Administrators (Pager Number)
The SNMP traphost system that receives the alarm notification in the
form of SNMP traps.
Script Path
The name and path of the script that is run when an alarm is triggered.
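As a sketch of what an alarm script might do, the following hypothetical Python script simply appends its invocation arguments to a log file. The log path and argument format are illustrative assumptions; consult the DataFabric Manager server documentation for the actual arguments passed to alarm scripts:

```python
#!/usr/bin/env python
"""Hypothetical alarm script: logs the arguments it was invoked with."""
import sys
import time

LOG_FILE = "/var/log/dfm-alarm-script.log"  # hypothetical location


def log_alarm(args, log_path=LOG_FILE):
    """Append a timestamped line recording the alarm invocation."""
    line = "%s %s\n" % (time.strftime("%Y-%m-%d %H:%M:%S"), " ".join(args))
    with open(log_path, "a") as f:
        f.write(line)
    return line


if __name__ == "__main__":
    # The server passes event details to the script as command-line arguments.
    log_alarm(sys.argv[1:])
```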
Related references
Event Options
You can create an alarm based on event name, event severity type, or event class:
Group
Displays the group that receives an alert when an event or event type triggers an
alarm.
Event
Event
Severity
Displays the severity types of the event that triggers an alarm. The event severity
types are Normal, Information, Warning, Error, Critical, and Emergency.
Event Class
Notification Options
You can specify alarm notification properties by selecting one of the following check boxes:
SNMP Trap Host
E-mail Administrator (Admin Name)
Page Administrator (Admin Name)
E-mail Addresses (Others)
Script Path
Specifies the name of the script that is run when the alarm is triggered.
Repeat Interval (Minutes)
Command buttons
You can use command buttons to perform the following management tasks for the alarm:
Cancel
Does not save the alarm configuration and closes the Create Alarm dialog box.
Event Options
You can edit alarm properties such as group with which the alarm is associated, event type, event
severity, or event class.
Group
Displays the group (and its subgroups) that receives an alert when an event or
event type triggers an alarm.
Event
Event
Severity
Displays the severity type of the event that triggers the alarm.
Event Class
Notification Options
You can edit alarm notification properties by selecting one of the following check boxes:
SNMP Trap Host
E-mail Administrator (Admin Name)
Page Administrator (Admin Name)
E-mail Addresses (Others)
Specifies the name of the script that is run when the alarm is triggered.
Repeat Interval (Minutes)
Command buttons
You can use command buttons to perform the following management tasks for the alarm:
Edit
Cancel Does not save the modification of alarm configuration, and closes the Edit Alarm dialog
box.
Jobs
Understanding jobs
A job is typically a long-running operation. The OnCommand console enables you to create, manage,
and monitor jobs. From the Jobs tab, you can view all jobs that are currently running as well as jobs
that have completed.
Following are three examples of a job:
Managing jobs
Canceling jobs
You can use the Jobs tab to cancel a job if it is taking too long to complete, encountering too many
errors, or is no longer needed. You can cancel a job only if its status and type allow it. You can
cancel any job that has the status Running or Running with failures.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
If the Cancel button is not enabled, that job type cannot be canceled.
4. At the confirmation prompt, click Yes to cancel the selected job.
Monitoring jobs
Monitoring jobs
You can monitor for job status and other job details using the Jobs tab. For example, you can view
the progress of an on-demand backup job and see whether there are any errors.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Page descriptions
Jobs tab
The Jobs tab enables you to view the current status and other information about all jobs that are
currently running as well as jobs that have completed. You can use this information to see which jobs
are still running and which jobs have succeeded. This tab displays up to 25,000 jobs in all states.
Command buttons
Cancel Stops the selected jobs. You can select multiple jobs and cancel them simultaneously.
This button is enabled only for certain job types and when the selected jobs are running.
Refresh Updates the jobs list.
View Jobs drop-down list
Selecting one of these options displays all jobs that were started during the specified time range;
the display does not include jobs that were started earlier but are still in process. All ranges are
based on a 24-hour day in which 00:00 represents midnight.
1 Day
Displays all jobs that were started between midnight of the previous day and now. This
period can cover up to 47 hours and 59 minutes.
For example, if you click 1 Day at 15:00 on February 14 (on a 24-hour clock), the list
includes all jobs that were started from 00:00 (midnight) on February 13 to the current
time on February 14. This list covers the full day of February 13 plus the partial current
day of February 14.
1 Week
Displays all jobs that were started between midnight of the same day in the previous
week (seven days ago) and now. This period can cover up to seven days, 23 hours, and
59 minutes.
For example, if you click 1 Week at 15:00 on Thursday, February 14 (on a 24-hour
clock), the list includes all jobs that were started from 00:00 (midnight) the previous
Thursday (February 7) to the current time on February 14. This list covers seven full
days plus the partial current day.
1 Month Displays all jobs that were started between midnight of the same day in the previous
month and now. This period can cover from 28 through 32 days, depending on the
month.
For example, if you click 1 Month at 15:00 on Thursday, February 14 (on a 24-hour
clock), the list includes all jobs that were started from 00:00 (midnight) on January 14 to
the current time on February 14.
All
Note: On very large or very busy systems, the Jobs tab might be unresponsive for long periods
while loading 1 Month or All data. If the application appears unresponsive for these large lists,
select a shorter time period (such as 1 Day).
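The range logic described above can be sketched as follows. This is an illustrative reconstruction of the documented behavior, not OnCommand code; each range starts at 00:00 (midnight) of the anchoring day:

```python
from datetime import datetime, timedelta


def range_start(now, view):
    """Return the documented start time for a View Jobs selection."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if view == "1 Day":
        return midnight - timedelta(days=1)   # midnight of the previous day
    if view == "1 Week":
        return midnight - timedelta(days=7)   # midnight, same weekday last week
    if view == "1 Month":
        # Midnight of the same day in the previous month (28 through 32 days
        # back). This sketch ignores day-of-month clamping (e.g. March 31).
        prev_month = midnight.month - 1 or 12
        prev_year = midnight.year - (1 if midnight.month == 1 else 0)
        return midnight.replace(year=prev_year, month=prev_month)
    return None  # "All": no lower bound


now = datetime(2011, 2, 14, 15, 0)        # 15:00 on February 14
print(range_start(now, "1 Day"))          # 2011-02-13 00:00:00
print(range_start(now, "1 Week"))         # 2011-02-07 00:00:00
print(range_start(now, "1 Month"))        # 2011-01-14 00:00:00
```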
Jobs list
Displays a list of the jobs that are in progress. You can customize the display by using the following
filtering and sorting options in the columns of the jobs list.
Note: You can display no more than 25,000 records simultaneously.
ID
The identification number of the job. The default jobs list includes this column.
The job identification number is unique and is assigned by the server when it starts
the job. You can search for a particular job by entering the job identification
number in the text box provided by the column filter.
Job Type
The type of job, which is determined by the policy assigned to the dataset or by the
direct request initiated by a user. The default jobs list includes this column. The
job types are as follows:
Backup Deletion
Backup Mount
Backup Unmount
Failover
Host Service
Resource Discovery
LUN Destruction
LUN Resizing
Local Backup
Local Backup Confirmation
Member Dedupe
Migration (One-Step)
Migration Cancellation
Migration Cleanup
Migration Completion
Migration Relinquishment
Migration Repair
Migration Rollback
Migration Start
Migration Update
Mirror
On-Demand Protection
Provisioning
Relationship Creation
Relationship Destruction
Remote Backup
Restore
Server Configuration
Snapshot Copies Deletion
Snapshot Copy Deletion
Storage Deletion
Volume Dedupe
Volume Migration
Volume Resizing
Volume Undedupe
Object
The name of the object on which the job was started. The default jobs list includes
this column.
Object Type
The type of object on which the job was started. The default jobs list includes this
column. Examples of object types are Aggregate, Dataset, and vFiler unit.
Start
The date and time the job was started. The default jobs list includes this column.
Bytes Transferred
The amount of data (in megabytes, gigabytes, or kilobytes) that was transferred
during the job. This column is not displayed in the jobs list by default.
Note: This number is an approximation and does not reflect an exact count; it is
always less than the actual number of bytes transferred. For jobs that take a
short time to complete, no data transfer size is reported.
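The unit behavior of this column can be illustrated with a small formatter. This is a sketch under assumed rounding rules; the console's exact formatting is not documented here:

```python
def format_bytes(n):
    """Format a byte count the way the Bytes Transferred column does:
    kilobytes, megabytes, or gigabytes, whichever fits best.
    (Illustrative only; the console's exact rounding is an assumption.)
    """
    for unit, size in (("GB", 1024 ** 3), ("MB", 1024 ** 2), ("KB", 1024)):
        if n >= size:
            return "%.1f %s" % (n / float(size), unit)
    return "%d bytes" % n


print(format_bytes(3 * 1024 ** 3))   # 3.0 GB
print(format_bytes(512 * 1024))      # 512.0 KB
```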
Job Status
The running status of the job. The default jobs list includes this column. The
progress options are as follows:
Failed
Partially Failed
One or more of the tasks in the job failed and one or more of
the tasks completed successfully.
Succeeded
Running with Failures
The job is currently running but one or more tasks in the job
failed.
Running
Queued
Canceled
Canceling
The Cancel button was clicked and the job is in the process
of being canceled.
End
The date and time the job ended. The default jobs list includes this column.
Policy
The name of the data protection policy associated with the job. This column is not
displayed in the jobs list by default.
Source Node
The name of the storage resource that contains the data being protected. This
column is not displayed in the jobs list by default.
Destination Node
The name of the storage resource to which the data is transferred during the job.
This column is not displayed in the jobs list by default.
Submitted By The policy that automatically started the job or the user name of the person who
started the job. This column is not displayed in the jobs list by default.
Description
A description of the job taken from the policy configuration or the job description
entered when the job was manually started. This column is not displayed in the
jobs list by default.
Result
Warning
Normal
Job details
Details for the currently highlighted job appear in the area at the lower right of the window.
Dataset
Job Description
Event
Description
Policy
The name of the data protection policy associated with the job. This column is
not displayed in the jobs list by default.
Job Type
The type of job, which is determined by the policy assigned to the dataset or by
the direct request initiated by a user.
Source
The name of the storage resource that contains the data being protected. This
column is not displayed in the jobs list by default.
Destination
The name of the storage resource to which the data is transferred during the job.
This column is not displayed in the jobs list by default.
Submitted By
The policy that automatically started the job or the user name of the person who
started the job. This column is not displayed in the jobs list by default.
Bytes Transferred
The amount of data (in megabytes, gigabytes, or kilobytes) that was transferred
during the job. This column is not displayed in the jobs list by default.
Note: This number is an approximation and does not reflect an exact count; it
is always less than the actual number of bytes transferred. For jobs that take a
short time to complete, no data transfer size is reported.
Related references
Servers
Understanding virtual inventory
How virtual objects are discovered
After you successfully install and register a host service with DataFabric Manager server, DataFabric
Manager server automatically begins a job to discover the virtual server inventory.
The storage credentials that you set when configuring the host service (and vCenter properties for
VMware) are pushed to the storage inventory, and DataFabric Manager server begins to map each
server to storage.
If this automatic discovery job is not successful, you can fix the errors noted in the event log and then
manually start a discovery job from the Host Services tab.
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Related concepts
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Note: If you move an ESX server from one vCenter to another, DataFabric Manager server still
shows the ESX server and its objects in the inventory for the original vCenter host service. In this
case, you must explicitly remove the ESX server from the original vCenter.
Steps
For the OnCommand console to show guest virtual machine properties such as DNS name and IP
address, VMware Tools must be installed and running on the guest virtual machine.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
When you make changes to the virtual infrastructure, the results do not immediately appear in the
OnCommand console. To see the updated inventory, manually refresh the host service information
from the Host Services tab.
Note: The Related Objects pane does not show LUNs that were created on the virtual machine
using the Microsoft iSCSI software initiator.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Then...
Type the name of the new group in the New Group field.
6. Click OK.
Result
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Failure to remove a virtual object from a dataset before deleting it from inventory causes backup
failures in the deleted object's former dataset.
Steps
1. After you decide to delete a specific virtual object from inventory, but before you actually delete
it, click the OnCommand console Server tab.
2. Find and select the listing for the virtual object that you want to delete, and note whether any
datasets are listed in that object's Dataset(s) column.
Datasets that are listed in the selected object's Dataset(s) column indicate that the virtual object is
a member of those datasets.
3. If no datasets are listed for the selected virtual object, use your usual tool to delete that object
from inventory.
4. If the selected object belongs to one or more datasets, click Datasets in the Related Objects pane
to display the dataset hyperlinks, then complete the following actions for each hyperlink:
a. Click the dataset hyperlink.
The OnCommand console displays the Datasets tab with the dataset in question selected.
b. Click Edit to display the Edit Dataset dialog box for the selected dataset.
c. Click Data to display the Data area.
d. Remove the virtual object that you want to delete from its dataset.
e. Click OK to finalize the removal.
5. After you have removed the virtual object from all datasets, use your usual tool to delete the
virtual object from inventory.
Related references
You must have reviewed the Guidelines for performing an on-demand backup on page 277
You must have reviewed the Requirements and restrictions when performing an on-demand
backup on page 279
You must have added the virtual objects to an existing dataset or have created a dataset and added
the virtual objects that you want to back up.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
You must have the following information available:
Dataset name
Retention duration
Backup settings
Backup script location
Backup description
If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently
restoring those virtual machines, the backup might fail.
Steps
You can monitor the status of your backup from the Jobs tab.
Related references
You must select the dataset that you want to back up.
Local protection settings
You can define the retention duration and the backup settings for your on-demand
backup, as needed.
Retention
You can choose to keep a backup until you manually delete it, or
you can assign a retention duration. By specifying a length of time
to keep the on-demand local backup, you can override the retention
duration in the local policy you assigned to the dataset for this
backup. By default, the retention duration of a local backup matches the
retention type assigned to the remote backup.
Backup settings
Backup script path
Remote retention type
Hourly
Daily
Weekly
Monthly
You can choose your on-demand backup settings based on the type
of virtual objects you want to back up.
Allow saved state backup (Hyper-V only)
Create VMware snapshot (VMware only)
Include independent disks (VMware only)
You can specify a script that is invoked before and after the local backup. The script
is invoked on the host service and the path is local to the host service. If you use a
PowerShell script, you should use the drive letter convention. For other types of
You can provide a description for the on-demand backup so you can easily find it
when you need it.
Virtual machines or datastores must belong to a dataset before you can back
them up. You can add virtual objects to an existing dataset or create a new
dataset and add virtual objects to it.
Hyper-V specific requirements
Each virtual machine contained in the dataset that you want to back up must
contain at least 300 MB of free disk space. Each Windows volume in the
virtual machine (guest OS) must have at least 300 MB of free disk space. This
includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and
pass-through disks attached to the virtual machine.
Hyper-V virtual machine configuration files, snapshot copy files, and VHDs
must reside on Data ONTAP LUNs, otherwise backup operations fail.
VMware specific requirements
Backup operations of datasets containing empty VMware datacenters or
datastores will fail. All datacenters must contain datastores or virtual machines
to successfully perform a backup.
Virtual disks must be contained within folders in the datastore. If virtual disks
exist outside of folders on the datastore, and that data is backed up, restoring
the backup could fail.
NFS backups might take more time than VMFS backups because it takes
more time for VMware to commit snapshots in an NFS environment.
Hyper-V specific restrictions
Partial backups are not supported. If the Hyper-V VSS writer fails to back up
one of the virtual machines in the backup and the failure occurs at the Hyper-V
parent host, the backup fails for all of the virtual machines in the backup.
Once you start the restoration, you cannot stop the process.
Steps
Description
Restores the contents of your virtual machine from a Snapshot copy and restarts
the virtual machine after the operation completes.
Pre/Post Restore Script Runs a script that is stored on the host service server before or after the restore
operation.
The Restore Wizard displays the location of the virtual hard disk (.vhd) file.
7. From this wizard, click Restore to begin the restoration.
Restoring a VMware virtual machine using the Restore wizard
You can use OnCommand console to recover a VMware virtual machine from a local or remote
backup. By doing so, you overwrite the existing content with the backup you select.
About this task
The process for restoring a VMware virtual machine differs from restoring a Hyper-V virtual
machine in that you can restore an entire virtual machine or its disk files. Once you start the
restoration, you cannot stop the process, and you cannot restore from a backup of a virtual machine
after you delete the dataset the virtual machine belonged to.
Description
Restores the contents of your virtual machine from a Snapshot copy to its original
location. The Restart VM checkbox is enabled if you select this option and the
virtual machine is registered.
Particular virtual disks
6. In the ESX host name field, select the name of the ESX host. The ESX host is used to mount the
virtual machine components.
This option is available if you want to restore virtual disk files or the virtual machine is on a
VMFS datastore.
7. In the Pre/Post Restore Script field, type the name of the script that you want to run before or
after the restore operation.
8. Click Next.
9. From this wizard, review the summary of restore operations and click Restore to begin the
restoration.
Related tasks
If you start a restore operation of a Hyper-V virtual machine, and another backup or restoration of the
same virtual machine is in process, it fails. Once you start the restoration, you cannot stop the
process.
Steps
Description
Restores the contents of your virtual machine from a Snapshot copy and restarts
the virtual machine after the operation completes.
Pre/Post Restore Script Runs a script that is stored on the host service server before or after the restore
operation.
The Restore Wizard displays the location of the virtual hard disk (.vhd) file.
7. From this wizard, click Restore to begin the restoration.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from
mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy
to be mountable, its associated mirror source backup copy must still exist on the source node.
Steps
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If there are virtual objects in use from the previously mounted datastores of a backup, the unmount
operation fails. You must manually clean up the backup prior to mounting the backup again because
its state reverts to not mounted.
If all the datastores of the backup are in use, the unmount operation fails but this backup's state
changes to mounted. You can unmount the backup after determining the datastores are not in use.
Steps
If the ESX server becomes inactive or reboots during an unmount operation, the job is terminated,
the mount state remains mounted, and the backup stays mounted on the ESX server.
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from
mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy
to be mountable, its associated mirror source backup copy must still exist on the source node.
Steps
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If there are virtual objects in use from the previously mounted datastores of a backup, the unmount
operation fails. You must manually clean up the backup prior to mounting the backup again because
its state reverts to not mounted.
If all the datastores of the backup are in use, the unmount operation fails but this backup's state
changes to mounted. You can unmount the backup after determining the datastores are not in use.
Steps
2. In the Backups tab, select a mounted backup to unmount.
3. Click Unmount.
4. At the confirmation prompt, click Yes.
A dialog box opens with a link to the unmount job; when you click the link, the Jobs tab appears.
After you finish
If the ESX server becomes inactive or restarts during an unmount operation, the job is terminated,
the mount state remains mounted, and the backup stays mounted on the ESX server.
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
Page descriptions
VMware
VMware Virtual Centers view
The VMware Virtual Centers view lists the discovered virtual centers. You can access this view by
clicking View > Server > VMware > Virtual Centers.
From the VMware Virtual Centers view, you can add a virtual center to a group, and view objects
that are related to each virtual center.
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
Back Up
Using New Dataset
Using Existing Dataset
Add to Group Opens the Add to Group dialog box that enables you to add the selected datacenter
to the destination group.
Refresh
Datacenters list
Displays information about the datacenters that have been discovered by DataFabric Manager server.
You can double-click a datacenter to display the objects in that datacenter.
Datacenter
Protected
Indicates whether the datacenter is protected. Valid values are "Yes" and "No."
A datacenter is protected if it is a member of a dataset that has a local policy
assigned to it.
Virtual Center Name of the virtual center with which the datacenter is associated.
Dataset
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
Back Up Using Existing Dataset
Back Up Now
Restore
Mount
Enables you to mount a selected backup to an ESX server if you want to verify its contents before restoring it.
Unmount
Enables you to unmount a backup after you mount it on an ESX server and verify its contents.
Add to Group
Opens the Add to Group dialog box that enables you to add the selected virtual machine to the destination group.
Refresh
Protected
Indicates whether the data in the virtual machine is protected. Valid values are
"Yes" and "No."
A virtual machine is protected if any of the following conditions are true:
The virtual machine is a member of a dataset that has a local policy assigned
to it.
ESX Server
Datacenter
Virtual Center
State
Powered On
Suspended
DNS Name
IP Address
The IP address of the virtual machine. One or more IP addresses might be listed
for a virtual machine.
Dataset(s)
VDisks tab
Displays detailed information about the VDisks for the selected virtual machine.
VDisk
Disk Type The disk type of the VDisk. Possible values are "Raw Device Mapping" or "Regular."
Datastore The datastore to which the VDisk is mapped.
Datasets tab
Displays detailed information about the dataset of which the selected virtual machine is a member.
Dataset
Storage Service
Local Policy
The local policy that is assigned to the dataset. This policy might be the default
policy associated with the dataset or it might be a local policy assigned by an
administrator as part of a dataset modification.
Related Objects pane
Displays the datastores, ESX servers, storage controllers, vFiler units, volumes, LUNs, datasets, and
backups that are related to the selected VMware virtual machine.
Related references
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
Back Up Using Existing Dataset
Back Up Now
Restore
Mount
Enables you to mount a selected backup to an ESX server if you want to verify its contents before restoring it.
Unmount
Enables you to unmount a backup after you mount it on an ESX server and verify its contents.
Add to Group
Opens the Add to Group dialog box that enables you to add the selected datastore to the destination group.
Refresh
Datastores list
Displays information about the datastores that have been discovered by DataFabric Manager server.
You can double-click a datastore to display the objects in that datastore.
Datastore
Protected
Indicates whether the data in the datastore is protected. Valid values are "Yes" and "No."
A datastore is protected if any of the following conditions are true:
Type
Datacenter
Virtual Center
The name of the virtual center with which the datastore is associated.
Capacity (GB)
Used Capacity (GB)
Dataset
Hosted on Data ONTAP
Indicates whether the datastore is hosted on Data ONTAP. Valid values are "Yes" and "No."
Capacity (GB)
Volume Thin Provisioning Enabled
Dedupe
Autosize
Datastore Usage
Volume Usage
Space Savings
Overview
IGroup
Capacity (GB)
LUN Space Reservation
Volume Thin Provisioning Enabled
Dedupe
Autosize
Datastore Usage
LUN Usage
Volume Usage
Space Savings
Hyper-V
Hyper-V Servers view
The Hyper-V Servers view lists the discovered Hyper-V servers. You can access this window by
clicking View > Server > Hyper-V > Hyper-V Servers.
From the Hyper-V Servers view, you can add a server to a group, and view objects that are related to
each server.
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
Add to Group Opens the Add to Group dialog box that enables you to add the selected server to
the destination group.
Refresh
Domain Name
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
Back Up Using New Dataset
Back Up Using Existing Dataset
Back Up Now
Restore
Add to Group
Opens the Add to Group dialog box that enables you to add the selected virtual machine to the destination group.
Refresh
Protected
Indicates whether the data in the virtual machine is protected. Valid values are "Yes" and "No."
A virtual machine is protected if it is a member of a dataset that has a local
policy assigned to it.
Hypervisor
State
DNS Name
Dataset(s)
VDisks tab
Displays detailed information about the VDisks for the selected virtual machine.
VDisk
VHD Type The virtual hard disk type for the selected virtual machine. Possible values are "Boot
Disk," "Cluster Shared Volume," "Passthrough," or "Regular."
Datasets tab
Displays detailed information about the dataset of which the selected virtual machine is a member.
Dataset
The name of the dataset of which the Hyper-V virtual machine is a member.
Storage Service
Local Policy
The local policy that is assigned to the dataset. This policy might be the default
policy associated with the storage service or it might be a local policy assigned by
an administrator as part of a storage service modification.
Storage
Physical storage
Understanding physical storage
What physical storage objects are
You can monitor and manage physical storage objects such as clusters, storage systems, aggregates,
and disks by using the OnCommand console.
You can view detailed information about the physical storage objects that are discovered and
monitored by clicking the appropriate view option.
Cluster
A group of connected storage systems that share a global namespace that you can
manage as a single virtual server or multiple virtual servers, providing performance,
reliability, and scalability benefits. The Clusters view displays all the clusters that are
monitored by OnCommand console and all of the controllers that are part of the
cluster.
Storage System
Also known as a storage controller, this is a hardware device running Data ONTAP that receives data from and sends data to native disk shelves, third-party storage, or both. The Storage Controllers view displays all the storage systems that are discovered and monitored by the OnCommand console.
Aggregate
Disk
The basic unit of physical storage for a Data ONTAP system. Multiple disks are contained in a disk shelf. A Data ONTAP node can accommodate multiple disk shelves; the number and capacity vary according to the node's specifications. Disk shelves provide the physical storage on which logical objects such as aggregates and volumes are located. The Disks view displays all the disks that are monitored by the OnCommand console.
Namespace
Every virtual server has a namespace associated with it. All the volumes
associated with a virtual server are accessed under the virtual server's
namespace. A namespace provides a context for the interpretation of the
junctions that link together a collection of volumes.
Junction
A junction points from a directory in one volume to the root directory of another
volume. Junctions are transparent to NFS and CIFS clients.
Logical interface
Cluster
A group of connected storage systems that share a global namespace and that
you can manage as a single virtual server or multiple virtual servers, providing
performance, reliability, and scalability benefits.
Storage
controller
The component of a storage system that runs the Data ONTAP operating system
and controls its disk subsystem. Storage controllers are also sometimes called
controllers, storage appliances, appliances, storage engines, heads, CPU
modules, or controller modules.
Ports
Data ports
Provide data access to NFS and CIFS clients.
Cluster ports
Provide communication paths for cluster nodes.
Management ports
Data LIF
A logical network interface mainly used for data transfers and operations. A data LIF is associated with a node or virtual server in a Data ONTAP cluster.
Node management LIF
A logical network interface mainly used for node management and maintenance operations. A node management LIF is associated with a node and does not fail over to a different node.
Cluster management LIF
Interface group
A single virtual network interface that is created by grouping together multiple physical interfaces.
What deleted objects are
Deleted objects are the storage objects you have deleted from the OnCommand console. When you delete a storage object, it is not removed from the OnCommand console database; it is only removed from the OnCommand console display and is no longer monitored by the OnCommand console.
If you delete an object from the database, DataFabric Manager server also deletes all the child objects
it contains. For example, if you delete a storage system, all volumes and qtrees in the storage system
are deleted. Similarly, if a volume is deleted, all the qtrees in the volume are deleted. However, if you
delete a SnapMirror object, only the SnapMirror destination object (volume or qtree) is deleted from
the database.
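The cascading behavior described above can be sketched as a simple parent-child delete. This is an illustrative model only; the class and function names are hypothetical and not part of the DataFabric Manager server API:

```python
# Illustrative sketch of the cascading delete described above; the class and
# method names are hypothetical, not part of the DataFabric Manager API.
class StorageObject:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind          # e.g. "storage_system", "volume", "qtree"
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def delete_with_children(db, obj):
    """Deleting an object also deletes every child object it contains."""
    for child in obj.children:
        delete_with_children(db, child)
    db.discard(obj.name)

# Example: deleting a storage system removes its volumes and qtrees.
system = StorageObject("filer1", "storage_system")
vol = system.add(StorageObject("vol1", "volume"))
vol.add(StorageObject("qtree1", "qtree"))

db = {"filer1", "vol1", "qtree1"}
delete_with_children(db, system)
print(sorted(db))  # -> []: the volume and qtree were deleted with the system
```

The recursion mirrors the documented rule: children go first, then the parent, so a storage system never outlives its volumes and qtrees in the database.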
What happens when storage objects are deleted
With the OnCommand console, you can stop monitoring a storage object (aggregate, volume, or
qtree) by deleting it from the Global group. When you delete an object, the DataFabric Manager
server stops collecting and reporting data about it. Data collection and reporting are resumed only when the object is added back to the OnCommand console database.
Note: When you delete a storage object from any group other than Global, the object is deleted
only from that group; DataFabric Manager server continues to collect and report data about it. You
must delete the object from the Global group if you want the DataFabric Manager server to stop
monitoring it.
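The Global-versus-other-group behavior above amounts to this: deleting from a non-Global group only edits that group's membership, while deleting from Global stops monitoring. A minimal sketch, using a hypothetical in-memory model rather than the DataFabric Manager server's actual API:

```python
# Sketch of the group-deletion semantics described above; the model is
# hypothetical and not the DataFabric Manager server's actual API.
groups = {
    "Global": {"aggr1", "vol1"},
    "Finance": {"vol1"},
}

def delete_from_group(group, obj):
    # Deleting from a group removes the object only from that group.
    groups[group].discard(obj)

def is_monitored(obj):
    # Only objects still in the Global group are monitored.
    return obj in groups["Global"]

delete_from_group("Finance", "vol1")
print(is_monitored("vol1"))   # True: still in Global, still monitored

delete_from_group("Global", "vol1")
print(is_monitored("vol1"))   # False: monitoring stops
```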
Adding clusters
You can add a new cluster and monitor it by using the Storage tab.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
DataFabric Manager server displays an Unknown status until it determines the identity and
status of the cluster.
Related references
Steps
The Edit Cluster Settings page is displayed in the Operations Manager console.
5. Edit the cluster settings.
6. Click Update.
Changes to the settings are updated in the DataFabric Manager server.
7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand
console.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
If you want to add the object to a new group, then type the name of the new group in the New Group field.
5. Click OK.
Result
To free disk space, ask your users to delete files that are no longer needed
from volumes contained in the aggregate that generated the event.
You must add one or more disks to the aggregate that generated the event.
Note: After you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs. You can destroy the aggregate only after all the flexible volumes are removed from it.
Aggregate Nearly Full (%)
Aggregate Overcommitted (%)
You must create new free blocks in the aggregate by adding one or more
disks to the aggregate that generated the event.
Note: You must add disks with caution. After you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs. You can destroy the aggregate only after all the flexible volumes are destroyed.
You must temporarily free some already occupied blocks in the aggregate
by taking unused flexible volumes offline.
Note: When you take a flexible volume offline, it returns any space it
uses to the aggregate. However, when you bring the flexible volume
online again, it requires the space again.
Aggregate Nearly Overcommitted (%)
Aggregate Snapshot Reserve Nearly Full Threshold (%)
For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.
Aggregate Snapshot Reserve Full Threshold (%)
Note: A newly created traditional volume tightly couples with its containing aggregate so that the
capacity of the aggregate determines the capacity of the new traditional volume. Therefore, you
should synchronize the capacity thresholds of traditional volumes with the thresholds of their
containing aggregates.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
Viewing the cluster inventory
You can use the Clusters view to monitor your inventory of clusters and view information about
related storage objects, capacity graphs, and cluster hierarchy details.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Related references
Page descriptions
Clusters view
The Clusters view displays detailed information about the clusters you are monitoring, as well as
their related objects, and also enables you to perform tasks such as editing the cluster settings,
grouping the clusters, and refreshing the monitoring samples.
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
The command buttons enable you to perform the following tasks for a selected cluster:
Add
Launches the Storage Systems, All page in the Operations Manager console.
You can add clusters from this page.
Edit
Launches the Edit Cluster Settings page in the Operations Manager console. You can modify the cluster settings from this page.
Delete
Deletes the selected cluster. Deleting a cluster does not remove it from the OnCommand console database, but the deleted cluster is no longer monitored.
Add to Group
Displays the Add to Group dialog box, which enables you to add the selected
cluster to the destination group.
Refresh Monitoring Samples
Refreshes the database sample of the selected cluster and enables you to view the updated details.
More Actions
Refresh
View Events
Displays the events associated with the cluster in the Events tab. You can
sort the information based on the event severity, source ID, date of event
trigger, state, and so on.
Note: You can modify cluster settings, delete a cluster, add clusters to a group, refresh monitoring
samples, and view events for a cluster by right-clicking the selected cluster.
List view
The List view displays, in tabular format, the properties of all the discovered clusters. You can
customize your view of the data by clicking the filters for the columns.
You can double-click a cluster to display its child objects. The breadcrumb trail is modified to
display the selected cluster.
ID
Name
Serial Number
Controller Count
Vserver Count
Location
Aggregate Used Capacity (GB)
Aggregate Total Capacity (GB)
Primary IP Address
Status
Displays the current status of the cluster, based on the events generated
for the cluster. The status can be Normal, Warning, Error, Critical,
Emergency, or Unknown.
Overview tab
The Overview tab displays information about the selected cluster, such as the list of LIFs and ports.
Contact Email
LIFs
Ports
Graph tab
The Graph tab visually represents the various statistics about the clusters, such as performance and
capacity. You can select the graph you want to view from the drop-down list.
You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click the export icon to export graph details, such as the space savings trend, used capacity, total capacity, and space savings achieved through deduplication.
Cluster Hierarchy tab
The Cluster Hierarchy tab displays details about the cluster objects in the selected cluster, such as
LIFs, storage controllers, Vservers, and aggregates.
Related Objects pane
The Related Objects section enables you to view and navigate to the groups, storage controllers,
aggregates, volumes, and Vservers related to the cluster.
Groups
Storage Controllers
Aggregates
Volumes
Vservers
Related references
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
The command buttons enable you to perform the following tasks for a selected storage controller:
Add
Launches the Storage Systems, All page in the Operations Manager console.
You can add storage controllers from this page.
Edit
Delete
Add to Group
Displays the Add to Group dialog box, which enables you to add the selected
storage controller to the destination group.
Refresh Monitoring Samples
Refreshes the database sample of the selected storage controller and enables you to view the updated details.
More Actions
Refresh
Grid
View Events
Displays the events associated with the storage controller in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, state, and so on.
TreeMap
List view
The List view displays, in tabular format, the properties of all the discovered storage controllers. You
can customize your view of the data by clicking the column filters.
You can double-click a storage controller to display its child objects. The breadcrumb trail is
modified to display the selected storage controller.
ID
Name
Type
Status
Displays the current status of the storage controller based on the events
generated. The status can be Normal, Warning, Error, Critical, Emergency, or
Unknown.
Cluster
Displays the name of the cluster to which the storage controller belongs.
Model
Serial Number
Displays the serial number of the storage controller (this number is also provided on the chassis).
System ID
Used Capacity (GB)
Total Capacity (GB)
IP Address
State
Displays the current state of the storage controller. The state can be Up, Down,
Error, or Unknown. By default, this column is hidden.
Map view
The Map view enables you to view the properties of the storage controllers, which are displayed as rectangles of different sizes and colors. The size and color of the rectangles are based on the options you select for the Size and Color fields in the properties area.
Storage Controller Filter
Enables you to display capacity, CPU utilization, and status information about the
storage controllers in varying rectangle sizes and colors:
Size
Specifies the size of the rectangle based on the option you select from the
drop-down list. You can choose one of the following options:
Used Capacity (default): The amount of physical space (in GB) used
by application or user data in the storage controller. The size of the
rectangle increases when the value for used capacity increases.
Available Capacity: The amount of physical space (in GB) that is
available in the storage controller. The size of the rectangle increases
when the value for available capacity increases.
Committed Capacity: The amount of physical space (in GB) allocated
to user and application data. The size of the rectangle increases when
the value for committed capacity increases.
Saved Capacity: The amount of space (in GB) saved in the storage
controller. The size of the rectangle increases when the value for saved
capacity increases.
Status: The current status of the storage controller based on the events
generated. The size of the rectangle varies from large to small in the
following order: Emergency, Critical, Error, Warning, Normal, and
Unknown. For example, a controller with an Emergency status is
displayed as a larger rectangle than a controller with a Critical status.
CPU Utilization: The CPU usage (in percentage) of the storage
controller. The size of the rectangle increases when the value for CPU
utilization increases.
Color
Specifies the color of the rectangle based on the option you select from the drop-down list. You can choose one of the following options:
Status: The current status of the storage controller based on the events generated. Each status displays a specific color: Emergency, Critical, Error, Warning, Normal, and Unknown.
Available %: The percentage of space available in the storage controller. The color varies based on the specified threshold values and the space available in the controller. For example, in a storage controller with a size of 100 GB, if the Volume Nearly Full Threshold and Volume Full Threshold are set to default values of 80% and 90%, respectively, and the available space falls below these thresholds, the color displayed is red. When the available space reduces, the red color changes to a darker shade.
Used %: The percentage of space used in the storage controller. The color displayed varies based on the following conditions:
If the used space in the controller is less than the Volume Nearly Full Threshold value of the controller, the color displayed is green. When the used space reduces, the green color changes to a darker shade.
If the used space in the controller exceeds the Volume Nearly Full Threshold value but is less than the Volume Full Threshold value of the controller, the color displayed is orange. When the used space reduces, the orange color changes to a lighter shade.
If the used space in the controller exceeds the Volume Full Threshold value of the controller, the color displayed is red.
Committed Capacity: The amount of physical space (in GB) allocated to application or user data. The color displayed is blue. When the committed capacity reduces, the blue color changes to a lighter shade.
Saved Capacity: The amount of space (in GB) saved in the storage controller.
CPU Utilization: The CPU usage (in percentage) of the storage controller. The color varies based on whether the CPU usage of the controller is less than the Host CPU Too Busy Threshold value.
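The Used % coloring described above is simple threshold bucketing. A minimal sketch, assuming the documented default thresholds of 80% (Volume Nearly Full) and 90% (Volume Full); the function name is illustrative, not part of the OnCommand console:

```python
# Illustrative sketch of the Used % color bucketing described above.
# The defaults mirror the documented thresholds (80% nearly full, 90% full);
# the function name is hypothetical, not part of the OnCommand console.
def used_capacity_color(used_pct, nearly_full=80.0, full=90.0):
    if used_pct < nearly_full:
        return "green"       # below the Volume Nearly Full Threshold
    if used_pct < full:
        return "orange"      # between the Nearly Full and Full thresholds
    return "red"             # at or above the Volume Full Threshold

print(used_capacity_color(50))   # green
print(used_capacity_color(85))   # orange
print(used_capacity_color(95))   # red
```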
General
Enables you to filter storage controllers based on the name, status, or both.
Note: You can filter by entering regular expressions instead of the full name of
the controller. For example, xyz* lists all the controllers that begin with the name
xyz.
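The wildcard filter in the note above can be sketched with shell-style pattern matching. This is an illustration of the `xyz*` example, not the console's actual filter implementation, and the controller names are made up:

```python
# Sketch of wildcard name filtering like the "xyz*" example above;
# an illustration only, not the console's actual filter implementation.
from fnmatch import fnmatch

controllers = ["xyz-filer1", "xyz-filer2", "abc-filer1"]

# "xyz*" matches every controller whose name begins with "xyz".
matches = [name for name in controllers if fnmatch(name, "xyz*")]
print(matches)  # ['xyz-filer1', 'xyz-filer2']
```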
Capacity
Enables you to filter storage controllers based on the used capacity, available
capacity, committed capacity, and saved capacity. You can specify the capacity
range by dragging the sliders.
Performance Enables you to filter storage objects based on the CPU utilization of the storage
controller. You can specify the CPU utilization range by dragging the sliders.
Overview tab
The Overview tab displays information about the selected storage controller, such as the IP address,
network interface connection, status, and AutoSupport details.
Name
Operating System
Displays the version of the operating system the storage controller is running.
IP Address
Network Interface Count
Network Interfaces
Status
Displays the current status of the storage controller based on the events
generated. The status can be Normal, Warning, Error, Critical, Emergency, or
Unknown.
Up Time
Remote Platform Management
Displays the status of the Remote LAN Module (RLM) card that is installed on the controller. The status can be one of the following:
Online
You can perform remote maintenance operations for the storage controller by clicking the remote platform management link.
Contact
Displays the contact information of the administrator for the storage controller.
Location
AutoSupport
Capacity tab
The Capacity tab displays information about the capacity of storage objects and disks within the
storage controller.
Storage Capacity
Displays the number of aggregates, volumes, qtrees, or LUNs, if any, that the
storage system contains, including the capacity that is currently in use. You can click
the number corresponding to the storage capacity for more information.
Physical Space
Displays the number of data, spare, and parity disks and their data capacities on the
storage controller. The Total disks field under Physical space provides information
about the disks on the storage controller. You can click the number corresponding to
Total Disks to view more information from the Disks view.
Failed Disk Info Displays the location of the failed disk on the storage controller.
Initiators
Displays the number of LUN initiators available in the storage controller. You
can double-click the number corresponding to the initiator for more information.
Protocols
Displays the list of protocols that are supported by the storage controller, such as
NFS, CIFS, FCP, and iSCSI.
Graph tab
The Graph tab visually represents the various statistics about the storage controller, such as
performance and capacity. You can select the graph you want to view from the drop-down list.
You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click the export icon to export graph details, such as the space savings trend, used capacity, total capacity, and space savings achieved through deduplication.
Related Objects pane
The Related Objects section enables you to view and navigate to the groups, volumes, aggregates,
and vFiler units related to the storage controller.
Groups
Volumes
Aggregates
vFiler Units
Related references
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Delete
Add to Group
Displays the Add to Group dialog box, which enables you to add the
selected aggregate to the destination group.
Refresh Monitoring Samples
Refreshes the database sample of the selected aggregate and enables you to view the updated details.
More Actions
Refresh
Grid
View Events
Displays the events associated with the aggregate in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.
TreeMap
List view
The list view displays, in tabular format, the properties of all the discovered aggregates. You can
customize your view of the data by clicking the column filters.
You can double-click the name of an aggregate to display its child objects. The breadcrumb trail is
modified to display the selected aggregate.
ID
Name
Storage System
Displays the name of the storage system that contains the aggregate.
Type
Block Type
RAID Type
Displays the RAID type of the aggregate: RAID 0, RAID 4, RAID DP, Mirrored RAID 0, Mirrored RAID 4, or Mirrored RAID DP.
State
Displays the current state of an aggregate. The state can be Online, Offline, or
Unknown.
Status
Used Capacity (GB)
Available Capacity (GB)
Committed Capacity (GB)
Displays the total space reserved for all flexible volumes on an aggregate.
Total Capacity (GB)
Host ID
Displays the host ID to which the aggregate is related. By default, this column
is hidden.
Map view
The Map view enables you to view the properties of the aggregates which are displayed as rectangles
with different sizes and colors. The size and color of the rectangles are based on the options you
select for the Size and Color fields in the properties area.
Aggregate Filter
Enables you to display capacity and status information about the aggregates in
varying rectangle sizes and colors:
Size
Specifies the size of the rectangle based on the option you select from the
drop-down list. You can select one of the following options:
Color
Specifies the color of the rectangle based on the option you select from the drop-down list. You can select one of the following options:
Status (default): The current status of the aggregate based on the events generated. Each status displays a specific color: Emergency, Critical, Error, Warning, Normal, and Unknown.
Used %: The percentage of space used in the aggregate. The color displayed varies based on the following conditions:
If the used space in the aggregate is less than the Nearly Full Threshold value of the aggregate, the color displayed is green. When the used space reduces, the green color changes to a darker shade.
If the used space in the aggregate exceeds the Nearly Full Threshold value but is less than the Full Threshold value of the aggregate, the color displayed is orange. When the used space reduces, the orange color changes to a lighter shade.
If the used space in the aggregate exceeds the Full Threshold value, the color displayed is red.
Unused Snapshot Reserve: The amount of unused Snapshot reserve space (in GB) in the aggregate. The color varies based on the specified threshold values and the unused Snapshot reserve space in the controller. For example, in an aggregate with a size of 100 GB, if the Aggregate Snapshot Reserve Nearly Full and Aggregate Snapshot Reserve Full Threshold are set to default values of 80% and 90%, respectively, the color of the rectangle depends on the following conditions:
RAID Type: The RAID type of the aggregate. Each RAID type displays a specific color: raid0, raid4, raid_dp, and mixed_raid_type.
Size: The total size (in GB) of the aggregate. The color displayed is blue. When the size of the aggregate reduces, the blue color changes to a lighter shade.
Available %: The percentage of space available in the aggregate. The color varies based on the specified threshold values and the space available in the aggregate. For example, in an aggregate with a size of 100 GB, if the Aggregate Nearly Full Threshold and Aggregate Full Threshold are set to default values of 80% and 90%, respectively, the color of the rectangle depends on the following conditions:
If the available space in the aggregate is less than the Aggregate Nearly Full Threshold value, the color displayed is red. When the available space reduces, the red color changes to a darker shade.
Committed Capacity: The amount of physical space (in GB) allocated to application or user data. The color displayed is blue. When the committed capacity reduces, the blue color changes to a lighter shade.
Saved Capacity: The amount of space (in GB) saved in the aggregate. The color displayed is blue. When the saved capacity reduces, the blue color changes to a lighter shade.
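The Used % color rule described above can be sketched as a small function. This is an illustrative sketch only, using the default Nearly Full (80%) and Full (90%) threshold values; the actual map view also varies the shade within each color, which is omitted here.

```python
# Sketch of the Used % color rule for the map view, assuming the default
# Aggregate Nearly Full (80%) and Aggregate Full (90%) thresholds.
# Illustrative only; not OnCommand console code.

def used_percent_color(used_pct, nearly_full=80, full=90):
    """Return the rectangle color for a given used-space percentage."""
    if used_pct < nearly_full:
        return "green"
    if used_pct < full:
        return "orange"
    return "red"
```

For example, an aggregate at 85% used falls between the two default thresholds and is shown in orange.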
General
Enables you to filter aggregates based on the name, status, or both.
Note: You can filter by entering regular expressions instead of the full name of the aggregate. For example, xyz* lists all the aggregates that begin with the name xyz.
Capacity
Enables you to filter aggregates based on the used %, growth rate, used capacity,
available capacity, saved capacity, and so on. You can specify the capacity range by
dragging the sliders.
Overview tab
The Overview tab displays details about the selected aggregate, such as the storage object name and
options to enable Snapshot copies.
Full Name
Storage System
Displays the name of the storage system that contains the aggregate. You
can view more information about the storage system by clicking the link.
Snapshot Copies Enabled
Snapshot Auto Delete
Specifies whether a Snapshot copy will be deleted to free space when a write to a volume fails due to lack of space in the aggregate.
Capacity tab
The Capacity tab displays information about the capacity of storage objects and disks within the
storage system.
Storage Capacity
Displays the number of volumes and qtrees, if any, that the aggregate contains, including the capacity used by each object. You can click the number to display the volumes or qtrees contained in the aggregate.
Physical Space
Displays the number and capacity of data disks and parity disks assigned to the
aggregate. You can click the number corresponding to Total Disks to view more
information from the Disks view.
Storage Controllers
Volumes
Disks
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command button
The command button enables you to perform the following task for a selected disk:
Refresh
List view
The List view displays, in tabular format, the properties of all the discovered disks. You can
customize your view of the data by clicking the column filters.
ID
Disk Name
Controller
Displays the name of the storage controller that contains the disk.
Aggregate
Aggregate ID
Displays the ID of the aggregate to which the disk belongs. By default, this column is hidden.
Type
Size (GB)
Shelf ID
Bay ID
Displays the ID of the bay within the shelf on which the disk is located.
Plex ID
Status
Displays the current status of the disk, such as Active, Reconstruction in Progress,
Scrubbing in Progress, Failed, Spare, or Offline.
Host ID
Displays the ID of the host to which the disk is related. By default, this column is
hidden.
Overview tab
The Overview tab displays the following information about the selected disk:
Firmware Revision Number
Vendor
Disk Model
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Name
Type
Storage Server
Displays the parent of the storage object, such as volume, vFiler unit, or Vserver.
Deleted
Displays the date and time that the storage object was deleted.
Deleted By
Displays the name of the user who deleted the storage object.
Parent Deleted
Displays "Yes" if the parent object is deleted, or "No" if it is not.
Parent ID
Parent Name
Displays the name of the parent object. By default, this column is hidden.
Virtual storage
Understanding virtual storage
If you change the credentials for the hosting storage system, you must set the credentials again.
The server monitors the hosting storage system once every hour to discover new vFiler units that you
configured on the storage system. The server deletes from the database the vFiler units that you
destroyed on the storage system.
You can change the default monitoring interval from the Monitoring setup options, or by using the
following CLI command:
dfm option set vFilerMonInterval=1hour
You can disable the vFiler discovery from the Discovery setup options, or by using the dfm option
set discovervfilers=no CLI command.
When the OnCommand console discovers a vFiler unit, it does not add the network to which the
vFiler unit belongs to its list of networks on which it runs host discovery. In addition, when you
delete a network, the server continues to monitor the vFiler units in that network.
All the volumes associated with a Vserver are accessed from the Vserver's namespace.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
6. Click Update.
7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand
console.
Result
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Then...
Type the name of the new group in the New Group field.
5. Click OK.
Result
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Then...
Type the name of the new group in the New Group field.
5. Click OK.
Result
Network connectivity
To monitor a vFiler unit, DataFabric Manager server and the hosting storage system must be part
of the same routable network that is not separated by firewalls.
Hosting storage system discovery and monitoring
You must first discover and monitor the hosting storage system before discovering and
monitoring the vFiler units.
NDMP discovery
DataFabric Manager server uses NDMP as the discovery method to manage SnapVault and
SnapMirror relationships between vFiler units. To use NDMP discovery, you must first enable
SNMP and HTTPS discovery.
Monitoring the default vFiler unit
When you enable your core license, which includes MultiStore, Data ONTAP automatically creates a default vFiler unit, called vfiler0, on the hosting storage system. The OnCommand console does not provide vfiler0 details.
Editing user quotas
To edit user quotas that are configured on vFiler units, ensure that the hosting storage systems are
running Data ONTAP 6.5.1 or later.
Monitoring backup relationships
For hosting storage systems that are backing up data to a secondary system, you must ensure that
the secondary system is added to the vFiler group. DataFabric Manager server collects details
about vFiler unit backup relationships from the hosting storage system. You can then view the
backup relationships if the secondary storage system is assigned to the vFiler group, even though
the primary system is not assigned to the same group.
Monitoring SnapMirror relationships
For hosting storage systems that are mirroring data to a secondary system, you must ensure that
the secondary system is added to the vFiler group. DataFabric Manager server collects details
about vFiler unit SnapMirror relationships from the hosting storage system. DataFabric Manager
server displays the relationships if the destination vFiler unit is assigned to the vFiler group, even
though the source vFiler unit is not assigned to the same group.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Page descriptions
vFiler Units view
The vFiler Units view displays detailed information about the vFiler units that are monitored, as well
as their related objects, and also enables you to perform tasks such as editing the vFiler unit settings,
grouping the vFiler units, and refreshing the monitoring samples.
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
The command buttons enable you to perform the following tasks for a selected vFiler unit:
Edit
Launches the Edit vFiler Settings page in the Operations Manager console.
You can modify the vFiler unit settings from this page.
Delete
Add To Group
Displays the Add to Group dialog box, which enables you to add the selected
vFiler unit to a destination group.
Refresh
Monitoring
Samples
Refreshes the database sample of the selected vFiler unit and enables you to
view the updated details.
More Actions
View Events: Displays the events associated with the vFiler unit in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.
Note: You can add a vFiler unit to a group, refresh monitoring samples, modify the settings for a vFiler unit, view events for a vFiler unit, and delete a vFiler unit by right-clicking the selected vFiler unit.
Refresh
Grid
Displays the vFiler units in the list view.
TreeMap
Displays the vFiler units in the map view.
List view
The list view displays, in tabular format, the properties of all the discovered vFiler units. You can
customize your view of the data by clicking the column filters.
ID
Name
Hosting Storage System
Displays the full name of the hosting storage system of the vFiler unit.
IP Space
Displays the IP space in which the vFiler unit is created and can subsequently
participate.
Primary IP Address
Status
Displays the current status of a vFiler unit. The status can be Normal,
Warning, Error, Critical, Emergency, or Unknown.
State
Displays "Up" if the vFiler unit is online, or "Down" if it is not. By default, this column is hidden.
Map view
The Map view enables you to view the properties of the vFiler units which are displayed as
rectangles with different sizes and colors. The size and color of the rectangles are based on the
options you select for the Size and Color fields in the properties area.
vFiler Filter
Enables you to display capacity and status information about the vFiler units in
varying rectangle sizes and colors:
Size
Specifies the size of the rectangle based on the option you select from the
drop-down list. You can select one of the following options:
Used % (default): The percentage of space used in the vFiler unit. The
size of the rectangle increases when the value for used space increases.
Available %: The percentage of space available in the vFiler unit. The
size of the rectangle increases when the value for available space
increases.
Used Capacity: The amount of physical space (in GB) used by
application or user data in the vFiler unit. The size of the rectangle
increases when the value for used capacity increases.
Available Capacity: The amount of physical space (in GB) that is
available in the vFiler unit. The size of the rectangle increases when the
value for available capacity increases.
Status: The current status of the vFiler unit based on the events generated. The size of the rectangle varies from large to small in the following order: Emergency, Critical, Error, Warning, Normal, and Unknown.
Color
Specifies the color of the rectangle based on the option you select from the drop-down list. You can select one of the following options:
Status (default): The current status of the vFiler unit based on the events generated. Each status displays a specific color: Emergency, Critical, Error, Warning, Normal, and Unknown.
Used %: The percentage of space used in the vFiler unit. The color displayed varies based on the following conditions:
If the used space in the vFiler unit is less than the Volume Nearly Full Threshold value, the color displayed is green. When the used space reduces, the green color changes to a darker shade.
If the used space in the vFiler unit exceeds the Volume Nearly Full Threshold value but is less than the Volume Full Threshold value, the color displayed is orange. When the used space reduces, the orange color changes to a lighter shade.
If the used space in the vFiler unit exceeds the Volume Full Threshold value, the color displayed is red.
Available %: The percentage of space available in the vFiler unit. The color displayed varies based on the following conditions:
If the available space in the vFiler unit exceeds the Volume Full Threshold value, the color displayed is green. When the available space reduces, the green color changes to a lighter shade.
If the available space in the vFiler unit is less than the Volume Full Threshold value but exceeds the Volume Nearly Full Threshold value, the color displayed is orange. When the available space reduces, the orange color changes to a darker shade.
If the available space in the vFiler unit is less than the Volume Nearly Full Threshold value, the color displayed is red. When the available space reduces, the red color changes to a darker shade.
General
Enables you to filter vFiler units based on the name, status, or both.
Note: You can filter by entering regular expressions instead of the full name of the
vFiler unit. For example, xyz* lists all the vFiler units that begin with the name xyz.
Capacity
Enables you to filter storage objects based on used capacity, available capacity, used
%, and available %. You can specify the capacity range by dragging the sliders.
Overview tab
The Overview tab displays details about the selected vFiler unit, such as the system ID, information
about protocols, ping status, and domains.
Name
Hosting Storage System
Displays the full name of the hosting storage system of the vFiler unit.
System ID
Ping timestamp
Displays the date and time that this vFiler unit was last queried.
Ping status
Displays the status of the ping request sent to the vFiler unit.
Protocols enabled
NFS service
CIFS service
iSCSI service
Capacity tab
The Capacity tab displays information about the capacity of the volumes and qtrees that were added at the time of creating the vFiler unit.
Volume
Displays the number of volumes the vFiler unit contains, including the used and total capacity of the volumes. By clicking the number corresponding to the volumes, you can view more information about these volumes from the Volumes view.
Qtree
Displays the number of qtrees contained within the volumes in the vFiler unit, including the used and total capacity of the qtrees. By clicking the number corresponding to the qtrees, you can view more information about these qtrees from the Qtrees view.
Graph tab
The Graph tab visually represents the performance of a vFiler unit. The graphs display the volume
capacity used versus the total capacity in the vFiler unit, vFiler capacity used, volume capacity used,
and CPU usage (%). You can select the graphs from the drop-down list.
Volumes
Displays the volumes in the selected vFiler unit. The volumes that were added after the vFiler creation are displayed along with the volumes added during the vFiler creation.
Qtrees
Displays the qtrees in the selected vFiler unit. The qtrees that were added after the vFiler
creation are displayed along with the qtrees added during the vFiler creation.
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
Edit
Launches the Edit Vserver Settings page in the Operations Manager console.
You can modify the Vserver settings from this page.
Delete
Add To Group
Displays the Add to Group dialog box, which enables you to add the selected
Vserver to a destination group.
Refresh Monitoring Samples
Refreshes the database sample of the selected Vserver and enables you to view the updated details.
More Actions
View Events: Displays the events associated with the Vserver in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.
Refresh
Grid
Displays the Vservers in the list view.
TreeMap
Displays the Vservers in the map view.
List view
The list view displays, in tabular format, the properties of all the discovered Vservers. You can
customize your view of the data by clicking the column filters.
You can double-click a Vserver to display its child objects. The breadcrumb trail is modified to
display the selected Vserver.
ID
Name
Cluster
Root Volume
Name Service Switch
NIS Domain
Status
Displays the current status of a Vserver. The status can be Critical, Error,
Warning, Normal, or Unknown.
Vserver Filter
Enables you to display capacity and status information about the Vservers in varying rectangle sizes and colors:
Size
Specifies the size of the rectangle based on the option you select from the
drop-down list. You can select one of the following options:
Used % (default): The percentage of space used in the Vserver. The size
of the rectangle increases when the value for used space increases.
Available %: The percentage of space available in the Vserver. The size
of the rectangle increases when the value for available space increases.
Saved %: The percentage of space saved in the Vserver. The size of the
rectangle increases when the value for saved space increases.
Used Capacity: The amount of physical space (in GB) used by
application or user data in the Vserver. The size of the rectangle
increases when the value for used capacity increases.
Available Capacity: The amount of physical space (in GB) that is
available in the Vserver. The size of the rectangle increases when the
value for available capacity increases.
Saved Capacity: The amount of physical space (in GB) that is saved in
the Vserver. The size of the rectangle increases when the value for saved
capacity increases.
Status: The current status of the Vserver based on the events generated.
The size of the rectangle varies from large to small in the following
order: Emergency, Critical, Error, Warning, Normal, and Unknown. For
example, a Vserver with an Emergency status is displayed as a larger
rectangle than a Vserver with a Critical status.
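The large-to-small ordering described above amounts to ranking statuses by severity. The sketch below illustrates the idea; only the order (Emergency largest through Unknown smallest) comes from the text, and the ranking function is hypothetical, not OnCommand console code.

```python
# Sketch of the Status size ordering for the map view: more severe statuses
# are drawn as larger rectangles. The rank values are hypothetical; only the
# order itself is taken from the documentation.

SEVERITY_ORDER = ["Emergency", "Critical", "Error", "Warning", "Normal", "Unknown"]

def size_rank(status):
    """Lower rank = more severe = larger rectangle in the map view."""
    return SEVERITY_ORDER.index(status)
```

Under this ranking, a Vserver with an Emergency status ranks ahead of (and is drawn larger than) one with a Critical status.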
Color Specifies the color of the rectangle based on the option you select from the
Color drop-down list. The options can be one of the following:
Status (default): The current status of the Vserver based on the events generated. Each status displays a specific color: Emergency, Critical, Error, Warning, Normal, and Unknown.
Used %: The percentage of space used in the Vserver. The color displayed varies based on the following conditions:
If the used space in the Vserver is less than the Volume Nearly Full Threshold value, the color displayed is green. When the used space reduces, the green color changes to a darker shade.
If the available space in the Vserver is less than the Volume Nearly Full Threshold value, the color displayed is red. When the available space reduces, the red color changes to a darker shade.
Saved Capacity: The amount of space (in GB) saved in the Vserver. The color displayed is blue. When the saved capacity reduces, the blue color changes to a lighter shade.
General
Enables you to filter Vservers based on the name, status, or both.
Capacity
Enables you to filter storage objects based on used %, available %, saved %, used
capacity, and available capacity. You specify the capacity range by dragging the
sliders.
Overview tab
The Overview tab displays details about the selected Vserver, such as the primary IP address, LIF
information, and the capacity of the volume that the Vserver contains.
LIFs
Displays the number of LIFs that are associated with the Vserver.
Volume Capacity
Displays the number of volumes, if any, that the Vserver contains and the capacity of the volumes that are currently in use.
Graph tab
The Graph tab visually represents the performance of a Vserver. The graphs display the volume
capacity used versus the total capacity in the Vserver, logical interface traffic per second, and volume
capacity used. You can select the graphs from the drop-down list.
You can view the graphs representing a specified time period, such as one day, one week, one month, three months, or one year. You can also click the export icon to export graph details, such as the space savings trend, used capacity, and total capacity.
Clusters
Volumes
Logical storage
Understanding logical storage
Logically defined file system that can exist as a special subdirectory of the root directory within either a traditional volume or a flexible volume. There is no maximum limit on the number of qtrees you can create in storage systems. The Qtrees view displays all the qtrees monitored by the OnCommand console.
LUN
Logical unit of storage identified by a number. The LUNs view displays all the LUNs
monitored by the OnCommand console.
About quotas
Quotas provide a way to restrict or track the disk space and number of files used by a user, group, or
qtree. Quotas are applied to a specific volume or qtree.
Why you use quotas
You can use quotas to limit resource usage, to provide notification when resource usage reaches
specific levels, or simply to track resource usage.
You specify a quota for the following reasons:
To limit the amount of disk space or the number of files that can be used by a user or group, or
that can be contained by a qtree
To track the amount of disk space or the number of files used by a user, group, or qtree, without
imposing a limit
To warn users when their disk usage or file usage is high
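The limit-versus-track distinction above can be sketched as a simple check. This is a minimal illustration of the semantics only; the function and parameter names are hypothetical and this is not the Data ONTAP /etc/quotas implementation.

```python
# Minimal sketch of quota semantics: a hard limit blocks further usage, a
# warning (soft) threshold merely reports, and a quota with no limits set
# only tracks usage. Hypothetical names; not Data ONTAP code.

def check_quota(used_mb, soft_limit_mb=None, hard_limit_mb=None):
    """Return (allowed, warnings) for an observed usage level."""
    if hard_limit_mb is not None and used_mb > hard_limit_mb:
        return False, [f"hard limit exceeded ({used_mb} MB > {hard_limit_mb} MB)"]
    warnings = []
    if soft_limit_mb is not None and used_mb > soft_limit_mb:
        warnings.append(f"soft limit exceeded ({used_mb} MB > {soft_limit_mb} MB)")
    # With no limits set, the quota only tracks usage.
    return True, warnings
```

For example, a user at 900 MB against an 800 MB soft limit and a 1000 MB hard limit is still allowed to write but receives a warning.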
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Then...
Type the name of the new group in the New Group field.
5. Click OK.
Result
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Then...
Type the name of the new group in the New Group field.
5. Click OK.
Result
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Then...
Type the name of the new group in the New Group field.
5. Click OK.
Result
Ask your users to delete files that are no longer needed, to free disk
space.
For flexible volumes whose containing aggregate has enough free space, you can increase the volume size.
For traditional volumes containing aggregates with limited space, you
can increase the size of the volume by adding one or more disks to the
aggregate.
Note: Add disks with caution. After you add a disk to an aggregate, you cannot remove it from the aggregate without destroying the aggregate.
Volume Nearly Full Threshold (%)
Description: Specifies the percentage at which a volume is considered nearly full.
Default value: 80. The value for this threshold must be lower than the value for the Volume Full Threshold in order for DataFabric Manager server to generate meaningful events.
Event generated: Volume Almost Full
Event severity: Warning
Corrective action
Perform one or more of the actions mentioned in Volume Full.
Volume Space Reserve Nearly Depleted Threshold (%)
Volume Space Reserve Depleted Threshold (%)
Volume Quota Nearly Overcommitted Threshold (%)
Create new free blocks by increasing the size of the volume that
generated the event.
Permanently free some of the occupied blocks in the volume by
deleting unnecessary files.
Volume Growth Event Minimum Change (%)
Volume Snap Reserve Full Threshold (%)
User Quota Nearly Full Threshold (%)
Description: Specifies the value (percentage) at which a user is considered to have consumed most of the allocated space (disk space or files used) as specified by the user quota. The user quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, DataFabric Manager server generates a User Disk Space Quota Almost Full event or a User Files Quota Almost Full event.
Default value: 80
Event generated: User Quota Almost Full
Volume No First Snapshot Threshold (%)
Note: When a traditional volume is created, it is tightly coupled with its containing aggregate so
that its capacity is determined by the capacity of the aggregate. For this reason, you should
synchronize the capacity thresholds of traditional volumes with the thresholds of their containing
aggregates.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
Qtree capacity thresholds and events
The OnCommand console enables you to monitor qtree capacity and set alarms. You can also take
corrective actions based on the event generated.
The DataFabric Manager server provides thresholds to help you monitor the capacity of qtrees. Quotas
must be enabled on the storage systems. You can configure alarms to send notification whenever an
event related to the capacity of a qtree occurs.
By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server
issues the alarm only once per event. You can configure the alarm to continue to alert you with
events until it is acknowledged. For the Qtree Full threshold, you can also configure an alarm to send
notification only when the condition persists over a specified period.
Note: If you want to set an alarm for a specific qtree, you must create a group with that qtree as the
only member.
threshold Interval is set to zero. The Qtree Full Threshold Interval specifies the
time during which the condition must persist before the event is generated. If the
condition persists for the specified amount of time, DataFabric Manager server
generates a Qtree Full event.
For example, if the monitoring cycle time is 60 seconds and the threshold
interval is 90 seconds, the threshold event is generated only if the condition
persists for two monitoring intervals.
Default value: 90 percent
Event generated: Qtree Full
Event severity: Error
Corrective action
Perform one or more of the following actions:
Ask users to delete files that are no longer needed, to free disk space.
Increase the hard disk space quota for the qtree.
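The interaction between the monitoring cycle and the Qtree Full Threshold Interval described above can be sketched as follows. This is an illustrative model only; the function and its parameter names are not part of the DataFabric Manager server.

```python
import math

def cycles_before_event(monitoring_cycle_s: int, threshold_interval_s: int) -> int:
    """Return how many consecutive monitoring cycles must observe the
    over-threshold condition before a Qtree Full event is generated.

    With a threshold interval of zero, the event fires on the first
    cycle that sees the condition.
    """
    if threshold_interval_s <= 0:
        return 1
    # The condition must persist for the whole interval, so it must be
    # seen on every monitoring cycle that falls within it.
    return math.ceil(threshold_interval_s / monitoring_cycle_s)
```

Using the example from the text, a 60-second monitoring cycle with a 90-second threshold interval requires the condition to persist for two monitoring intervals.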
Qtree Nearly Full Threshold (%)
Description: Specifies the percentage at which a qtree is considered nearly full.
Default value: 80 percent
Event severity: Warning
Corrective action
Perform one or more of the following actions:
Ask users to delete files that are no longer needed, to free disk space.
Increase the hard disk space quota for the qtree.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
How much aggregate and volume space is used for Snapshot copies?
Is there adequate space for the first Snapshot copy?
Which Snapshot copies can be deleted?
Which volumes have high Snapshot copy growth rates?
Which volumes have Snapshot copy reserves that are nearing capacity?
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Page descriptions
Volumes view
The Volumes view displays detailed information about the volumes in the storage systems that are
monitored, as well as their related objects, and also enables you to perform tasks such as editing the
volume settings, grouping the volumes, and refreshing monitoring samples.
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
The command buttons enable you to perform the following tasks for a selected volume:
Edit
Delete
Add to Group
Displays the Add to Group dialog box, which enables you to add the selected
volume to the destination group.
Refresh Monitoring Samples
Refreshes the database sample of the selected volume and enables you to view the updated details.
More Actions
Refresh
Grid
View Events
Displays the events associated with the storage system in the Events tab.
You can sort the information based on the event severity, source ID, date
of event trigger, and state.
TreeMap
Name
Aggregate
Storage Server
Type
Block Type
RAID
Displays the RAID protection scheme. The RAID protection scheme can be one
of the following:
RAID 0
RAID 4
RAID DP
Mirrored RAID 0
Mirrored RAID 4
Mirrored RAID DP: All the RAID groups in the mirrored volume are of type raid_dp.
State
Displays the current state of a volume. The state can be Online, Offline,
Initializing, Failed, Restricted, Partial, or Unknown.
Status
Displays the current status of a volume. The status can be Normal, Warning,
Error, Critical, Emergency, or Unknown.
Used Capacity (GB)
Total Capacity (GB)
Aggregate ID
Host ID
Displays the ID of the host to which the volume is related. By default, this
column is hidden.
Map view
The Map view enables you to view the properties of the volumes, which are displayed as rectangles
of different sizes and colors. The size and color of the rectangles are based on the options you
select for the Size and Color fields in the properties area.
Volume Filter
Enables you to display capacity and status information about the volumes in varying
rectangle sizes and colors:
Size
Specifies the size of the rectangle based on the option you select from the
drop-down list. You can select one of the following options:
Used % (default): The percentage of space used in the volume. The size
of the rectangle increases when the value for used space increases.
Available %: The percentage of space available in the volume. The size
of the rectangle increases when the value for available space increases.
Growth Rate: The rate at which data in the volume is growing. The size
of the rectangle increases when the value for growth rate increases.
Near to Max Size: The threshold value specified to generate an alert
before the volume reaches maximum size. The size of the rectangle
increases when the value for near to maximum size increases.
Inode Used %: The percentage of inode space used in the volume. The
size of the rectangle increases when the value for inode % increases.
Days to Max Size: The number of days needed for the volume to reach
maximum size. The size of the rectangle increases as the number of days
to maximum size increases.
Snapshot Used %: The percentage of space used in the Snapshot copy.
The size of the rectangle increases when the value for Snapshot used %
increases.
Saved %: The percentage of space saved in the volume. The size of the
rectangle increases when the value for saved % increases.
Used Capacity: The amount of physical space (in GB) used by
application or user data in the volume. The size of the rectangle
increases when the value for used capacity increases.
Available Capacity: The amount of physical space (in GB) that is
available in the volume. The size of the rectangle increases when the
value for available capacity increases.
Saved Capacity: The amount of space (in GB) saved in the volume. The
size of the rectangle increases when the value for saved capacity
increases.
Available Snapshot Reserve: The amount of Snapshot reserve space (in
GB) available in the volume. The size of the rectangle increases when
the value for available snapshot reserve increases.
Status: The current status of the volume based on the events generated.
The size of the rectangle varies from large to small in the order of event severity.
Color
Status (default): The current status of the volume based on the events
generated. Each status displays a specific color: Emergency, Critical,
Error, Warning, Normal, and Unknown.
Used %: The percentage of space used in the volume. The color
displayed varies based on the following conditions:
If the used space in the volume is less than the Nearly Full Threshold
value of the volume, the color displayed is green. When the used space
reduces, the green color changes to a darker shade.
If the used space in the volume exceeds the Nearly Full Threshold
value but is less than the Full Threshold value of the volume, the
color displayed is orange. When the used space reduces, the orange
color changes to a lighter shade.
If the used space in the volume exceeds the Full Threshold value of
the volume, the color displayed is red.
Available %: The percentage of space available in the volume. The color
displayed varies based on the following conditions:
If the available space in the volume exceeds the Full Threshold value
of the volume, the color displayed is green. When the available space
reduces, the green color changes to a lighter shade.
If the available space in the volume is less than the Full Threshold
value but exceeds the Nearly Full Threshold value of the volume, the
color displayed is orange. When the available space reduces, the
orange color changes to a darker shade.
If the available space in the volume is less than the Nearly Full
Threshold value of the volume, the color displayed is red. When the
available space reduces, the red color changes to a darker shade.
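The Used % color conditions described above can be sketched as a small function. This is an illustrative sketch only; the function name is hypothetical, and the 80/90 defaults stand in for the volume's configured Nearly Full and Full threshold values.

```python
def used_pct_color(used_pct: float,
                   nearly_full_pct: float = 80.0,
                   full_pct: float = 90.0) -> str:
    """Pick the base color the Map view's Used % option would show:
    green below nearly-full, orange between the thresholds, red above full."""
    if used_pct >= full_pct:
        return "red"
    if used_pct >= nearly_full_pct:
        return "orange"
    return "green"
```

Shading (darker or lighter within a color band) then varies with the magnitude of the used space, as the conditions describe.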
Saved Capacity: The amount of space (in GB) saved in the volume.
Inode Used %: The percentage of inode space used in the volume. The color
displayed varies based on the following conditions:
If the inode used space in the volume is less than the nearly full
threshold value, the color displayed is green. When the inode used
space reduces, the green color changes to a darker shade.
If the inode used space in the volume exceeds the nearly full
threshold value but is less than the full threshold value, the color
displayed is orange. When the inode used space reduces, the orange
color changes to a lighter shade.
If the inode used space in the volume exceeds the full threshold
value, the color displayed is red. When the inode used space reduces,
the red color changes to a lighter shade.
Note: The threshold values for inode used % are defined by the
DataFabric Manager server.
Auto Size: If the containing aggregate has sufficient space, the volume
can automatically increase to a maximum size. An icon indicates whether
auto size is enabled or disabled.
Snapshot Overflow: The amount of additional space (in GB) used by the
Snapshot copies apart from the allocated Snapshot reserve space. The
color displayed is blue. When the Snapshot overflow reduces, the blue
color changes to a lighter shade.
General
Capacity
Enables you to filter storage objects based on the growth rate, used capacity, available
capacity, saved capacity, and so on. You can specify the capacity range by dragging
the sliders.
Storage Server
Displays the name of the storage server that contains the volume. You
can view more information about the storage server by clicking the link.
Total Capacity
Displays the total amount of space available in the volume to store data.
Quota Committed
Space
SRM Path
Graph tab
The Graph tab visually represents the performance characteristics of a volume. You can select the
graph you want to view from the drop-down list.
You can view the graphs representing a selected time period, such as one day, one week, one month,
three months, or one year. You can also click the export icon to export graph details, such as used
capacity trend, used capacity, total capacity, and space savings achieved through deduplication.
Related Objects pane
The Related Objects section enables you to view and navigate to the groups, storage controllers,
aggregates, qtrees, LUNs, Snapshot copies, datasets, and datastores related to the volume.
Groups
Storage Controllers
Aggregates
Qtrees
LUNs
Snapshot Copies
Datasets
Datastores
Related references
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
The command buttons enable you to perform the following tasks for a selected LUN:
Edit
Launches the Edit LUN Path Settings page in the Operations Manager console.
You can modify the attributes for a selected LUN from this page.
Delete
Add to Group
Displays the Add to Group dialog box, which enables you to add the selected
LUN to the destination group.
Refresh Monitoring Samples
Refreshes the database sample for the selected LUN and enables you to view the updated details.
More Actions
Refresh
View Events
Displays the events associated with the LUN in the Events tab. You can
sort the information based on the event severity, source ID, date of event
trigger, and state.
Note: You can add a LUN to a group, refresh monitoring samples, modify LUN path settings,
view events for a LUN, and delete a LUN by right-clicking the selected LUN.
List view
The List view displays, in tabular format, the properties of all the discovered LUNs. You can
customize your view of the data by clicking the column filters.
ID
LUN Path
Displays the path to the LUN including the volume and qtree name.
Initiator Group Specifies the initiator group (igroup) to which the LUN is mapped.
Description
Displays the description you provide when creating the LUN on your storage
system.
Size (GB)
Storage Server Displays the name of the storage controller or vFiler unit on which the LUN
resides.
Status
Displays the current status of a LUN. The status can be Normal, Warning, Error,
Critical, Emergency, or Unknown.
File System ID
Displays the ID of the file system that contains the LUN. By default, this
column is hidden.
Overview tab
The Overview tab displays details about the selected LUN such as SRM path, size, connected ports,
and host details.
Full Path
Size
Contained File
System
Displays the name of the file system (volume or qtree) on which this LUN
resides.
Mapped To
SRM Path
Displays the SRM path to which the LUN is mapped. You can modify the
SRM path by clicking the link.
SAN Host
Displays the monitored host in a SAN that initiates requests to the storage
systems to perform tasks.
Space Reservation
Enabled
HBA Port
Specifies the HBA ports that SAN hosts use to connect to each other in a
SAN environment.
Graph tab
The Graph tab visually represents the performance characteristics of a LUN. You can select the
graph you want to view from the drop-down list.
You can view the graphs representing a selected time period, such as one day, one week, one month,
three months, or one year. You can also click the export icon to export graph details, such as LUN
bytes read per second and LUN bytes written per second.
Volumes
Qtrees
Datastores
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
The command buttons enable you to perform the following tasks for a selected qtree:
Edit
Launches the Edit Qtree Settings page in the Operations Manager console. You
can modify the capacity threshold settings for a selected qtree from this page.
Delete
Add to Group
Displays the Add to Group dialog box, which enables you to add the selected
qtree to the destination group.
Refresh Monitoring Samples
Refreshes the database samples of the qtree and enables you to view the updated details.
More Actions
Refresh
View Events
Displays the events associated with the qtree in the Events tab. You can
sort the information based on the event severity, source ID, date of event
trigger, and state.
Note: You can add a qtree to a group, refresh monitoring samples, modify qtree settings, view
events for a qtree, and delete a qtree by right-clicking the selected qtree.
List view
The List view displays, in tabular format, the properties of all the discovered qtrees. You can
customize your view of the data by clicking the column filters.
You can double-click a qtree to display its child objects. The breadcrumb trail is modified to display
the selected qtree.
ID
Qtree Name
Storage Server
Displays the name of the storage controller or vFiler unit containing the qtree.
Volume
Status
Displays the current status of a qtree. The status can be Normal, Warning,
Error, Critical, Emergency, or Unknown.
Volume ID
Displays the ID of the volume that contains the qtree. By default, this
column is hidden.
Aggregate ID
Displays the ID of the aggregate that contains the volume which in turn
contains the qtree. By default, this column is hidden.
Storage Path Type Displays the direct or indirect storage path type of the qtree. By default, this
column is hidden.
Overview tab
The Overview tab displays details about the selected qtree, such as storage capacity, SnapMirror
relationships, and the SRM path.
Full Path
SnapMirror
Days to Full
Displays the estimated amount of time left before the storage space is full.
Scheduled Snapshot Copies
SnapVault
Displays whether or not the qtree is backed up. If the qtree is backed up,
the SnapVault destination is displayed.
Displays the date and time when the qtree was last backed up using SnapVault.
SRM Path
Quota Limit
Used Capacity
Displays the change in the disk space (number of bytes) used in the qtree
if the amount of change between the last two samples continues for 24
hours.
Displays the change in the amount of used space in the qtree reserve.
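The 24-hour projection described above is a simple extrapolation from the last two monitoring samples. The following sketch illustrates the arithmetic; the function name and parameters are hypothetical.

```python
def projected_daily_growth(used_prev_bytes: int, used_now_bytes: int,
                           sample_interval_hours: float) -> float:
    """Extrapolate the change between the last two monitoring samples
    over 24 hours, as the qtree Overview tab does for its growth figure."""
    change_per_hour = (used_now_bytes - used_prev_bytes) / sample_interval_hours
    return change_per_hour * 24
```

For example, a growth of 100 bytes over a one-hour sample interval projects to 2400 bytes per day.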
Graph tab
The Graph tab visually represents the performance characteristics of a qtree. You can select the
graphs you want to view from the drop-down list.
You can view the graphs representing a selected time period, such as one day, one week, one month,
three months, or one year. You can also click the export icon to export graph details, such as used capacity.
Volumes
LUNs
Datasets
Datastores
Related references
Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you
navigate, each time you double-click certain items in these lists, another breadcrumb is added to
the trail, providing a string of hyperlinks that captures your navigation history. If you want to
revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the
icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked
object.
Command buttons
The command buttons enable you to perform the following tasks for a selected quota:
Edit
Launches the Edit Quota Settings page in the Operations Manager console. You can edit
the selected quota from this page.
List view
The List view displays, in tabular format, the properties of all the discovered user and user group
quotas. You can customize your view of the data by clicking the column filters.
ID
User Name
File System
Displays the name and path of the volume or qtree on which the user
quota resides.
Type
Displays the type of quota. The quota can be either a user quota or a
group quota.
Status
Displays the current status of the quotas. The status can be Normal,
Warning, Error, Critical, Emergency, or Unknown.
Files Used
Related references
Command button
The command button enables you to perform the following task for a selected Snapshot copy:
Refresh
List view
The List view displays, in tabular format, the properties of all the discovered Snapshot copies. You
can customize your view of the data by clicking the column filters.
ID
Name
Volume
Displays the name of the volume that contains the Snapshot copy.
Aggregate
Storage Server Displays the name of the storage controller, vFiler unit, or Vserver that contains
the Snapshot copy.
Access Time
Displays the time when the Snapshot copy was last accessed.
Dependency
Displays the names of the applications that are accessing the Snapshot copy (for
example, SnapMirror), if any.
Related objects pane
The Related objects section enables you to view and navigate to the aggregates and volumes related
to the Snapshot copies.
Aggregates
Volumes
Related references
Policies
Local policies
Understanding local policies
Local policy and backup of a dataset's virtual objects
A dataset's local policy in the OnCommand console enables you to specify the start times, stop times,
frequency, retention time, and warning and error event thresholds for local backups of its VMware or
Hyper-V virtual objects.
What local protection of virtual objects is
Local protection of a dataset's virtual objects consists of the OnCommand console making Snapshot
copies of the VMware virtual objects or Hyper-V virtual objects that reside as images on your
storage systems and saving those Snapshot copies as backup copies locally on the same storage
systems.
In case of data loss or corruption due to user or software error, you can restore the lost or damaged
virtual object data from saved local Snapshot copies as long as the primary storage systems on which
the virtual objects reside remain intact and operating.
What a local policy is
A local policy is a configured combination of Snapshot copy schedules, retention times, warning
threshold, and error threshold levels that you can assign to a dataset. After you assign a local policy
to a dataset, that policy applies to all the virtual objects that are included in that dataset.
The OnCommand console allows you to configure multiple local policies with different settings from
which you can select one to assign to a dataset.
You can also use policies supplied by the OnCommand console.
Local protection and remote protection of virtual objects
After Snapshot copies of a dataset's virtual objects are generated as local backup, remote protection
operations that are specified in an assigned storage service configuration can save these backup
copies to secondary storage.
Secondary or tertiary protection of virtual objects cannot be accomplished unless Snapshot copies
have been generated on the primary node by backup jobs carried out on demand or through local
policy.
Policies | 169
Your company might have a naming convention for policies. When specifying a name
for a new policy, make sure you follow that convention.
Description Use a description that helps someone unfamiliar with the policy to understand its
purpose.
Schedule and Retention
Schedule
You can set up multiple schedules of multiple types (Hourly, Daily, Weekly) for your
local backups. Each schedule has a start time, a stop time, and a frequency with which
its backups are executed.
If you intend to implement local backups on multiple datasets of Hyper-V objects that
are associated with the same Hyper-V server, you must configure separate local
policies with non-overlapping schedules to assign separately to each separate dataset.
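The non-overlapping schedule requirement above can be checked with a simple interval test. This is an illustrative sketch; the function and its minutes-since-midnight representation are assumptions, not part of the OnCommand console.

```python
def windows_overlap(start_a: int, stop_a: int,
                    start_b: int, stop_b: int) -> bool:
    """True if two daily backup windows, given in minutes since midnight,
    overlap. Useful for verifying that local policies assigned to datasets
    on the same Hyper-V server do not run at the same time."""
    return start_a < stop_b and start_b < stop_a
```

For example, a policy that backs up from 01:00 to 03:00 collides with one that backs up from 02:30 to 04:00, but not with one that starts at 03:00.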
Retention
You can specify a retention period to be associated with each type of backup schedule
(Hourly, Daily, or Weekly). A retention period specifies the minimum length of time
that a backup copy is maintained before it is eligible to be purged. A retention period
assigned to one type of backup schedule applies to all backup copies of that type.
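The retention rule above can be sketched as a purge-eligibility check. The function name and the per-type retention values below are illustrative assumptions, not product defaults.

```python
from datetime import datetime, timedelta

# Illustrative retention periods per schedule type.
RETENTION = {"Hourly": timedelta(days=2),
             "Daily": timedelta(weeks=2),
             "Weekly": timedelta(weeks=8)}

def purge_eligible(created: datetime, schedule_type: str,
                   now: datetime) -> bool:
    """A backup copy becomes eligible for purging once it is older than
    the retention period assigned to its schedule type."""
    return now - created > RETENTION[schedule_type]
```

Because the retention setting is shared by all schedules of one type, changing one entry in such a table would affect every backup copy of that type.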
Backup Options
Depending on the type of virtual objects the dataset contains, you can enable
additional operations to be performed on those objects during backup.
Allow saved state backup: Performing a saved state or offline backup can cause downtime (displayed
for datasets of Hyper-V objects).
If this option is not selected, encountering a virtual machine that is in a saved state
or that is shut down causes the dataset backup to fail.
Start remote backup after local backup
Starts a remote backup of data to secondary storage after the local backup is
finished if a storage service that specifies a remote backup is also assigned to the
dataset.
Backup Settings
Issue a warning if there is no backup for:
Decide how long the OnCommand console waits before issuing a warning
event if no local backup has successfully finished during that time.
Issue an error if there is no backup for:
Decide how long the OnCommand console waits before issuing an error
event if no local backup has successfully finished during that time.
Backup script path: You can specify a path to a backup script (located on the system on which
the host service runs) to specify additional operations to be executed with
the local backup. If you use a PowerShell script, you should use the drive
letter convention. For other types of scripts, you can use either the drive
letter convention or the Universal Naming Convention.
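The path-convention rule above can be expressed as a small validation sketch. This is illustrative only; the function name and regular expressions are assumptions, and the example paths are hypothetical.

```python
import re

_DRIVE_LETTER = re.compile(r"^[A-Za-z]:\\")   # e.g. C:\scripts\...
_UNC = re.compile(r"^\\\\[^\\]+\\")           # e.g. \\server\share\...

def script_path_ok(path: str) -> bool:
    """PowerShell scripts must use a drive-letter path; other script
    types may use either a drive-letter path or a UNC path."""
    is_powershell = path.lower().endswith(".ps1")
    if is_powershell:
        return bool(_DRIVE_LETTER.match(path))
    return bool(_DRIVE_LETTER.match(path) or _UNC.match(path))
```

Under this rule, a .ps1 script given as a UNC path would be rejected, while the same UNC path to a .cmd script would be accepted.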
You must have reviewed the Guidelines for configuring a local policy on page 169.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
The Create Local Policy dialog box enables you to create a new local policy, or add a preconfigured
local policy supplied by the OnCommand console.
Steps
The OnCommand console creates your new policy and lists it in the Policies tab.
Related concepts
You must have reviewed the Guidelines for configuring a local policy on page 169.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
The Edit Local Policy dialog box enables you to modify an existing local policy.
Name
Schedule and Retention
Backup Settings
The OnCommand console updates your policy and lists it in the Policies tab.
Related concepts
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
2. In the Datasets tab, select the dataset on which you want to schedule and configure local backups
and click Edit.
3. In the Edit Dataset dialog box, click the Local Policy drop-down list and complete
one of the following actions:
If you want to assign an existing local policy, select that policy from the Local Policy drop-down list.
If you want to assign an existing local policy with some modifications, select that policy, make
your modifications in the content area, and click Save.
If you want to configure a new local policy to apply to this dataset, select the Create New
option, configure the policy in the content area, and click Create.
4. After you finish assigning a new or existing local policy to this dataset, if you want to test
whether your dataset's new configuration conforms to OnCommand console requirements
before you apply it, click Test Conformance to display the Dataset Conformance Report.
If the Dataset Conformance Report displays no warning or error information, click Close and
continue.
If the Dataset Conformance Report displays warning or error information, read the Action and
Suggestion information, resolve the conformance issue, and then click Close and continue.
5. Click OK.
Any local policy assignment, modification, or creation that you completed will be applied to the
local protection of the virtual objects in the selected dataset.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Making multiple copies of a local policy, then configuring the copies with non-overlapping
schedules, and then assigning each copy to a different dataset is a good way to implement the local
protection of multiple datasets.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Related references
Page descriptions
Policies tab
The Policies tab enables you to add, edit, copy, delete, and list local protection policies. You can
assign the listed local policies to your virtual object datasets to configure local protection of their
virtual object members.
Command buttons
The command buttons enable you to perform the following tasks related to local policies:
Create
If you select the VMware Policy sub-option, starts the Create Local Policy dialog box
for adding a local protection policy specific to VMware objects.
If you select the Hyper-V Policy sub-option, starts the Create Local Policy dialog box
for adding a local protection policy specific to Hyper-V objects.
Edit
Copy
Delete
Enables you to delete the selected local policy if that policy is not currently attached to a
dataset.
Refresh Updates the information that is displayed for all local policies listed on the Policies tab.
Local policies list
Lists information about existing local application policies. Select a row in the list to display
information in the Details area.
Name
Type
Description
Displays the period of time after which the OnCommand console issues
an error event if no local backup has successfully finished during that
time.
Dependencies tab
Displays information about datasets that are assigned this local policy.
Name
Protection Status
Related references
Schedule and Retention
Enables you to create, edit, and delete backup and retention schedules
associated with this local policy.
Backup Settings
Enables you to specify no-backup warning and error thresholds, and a path
to an optional backup script.
Save
Saves the latest changes that you have made to the data in the Create Local Policy dialog
box or Edit Local Policy dialog box as the latest configuration for this policy.
Cancel Cancels any changes you have made to the settings in the Create Local Policy dialog box
or Edit Local Policy dialog box since the last time you opened it.
Name and Description area
The Name and Description area of the Create Local Policy dialog box or Edit Local Policy dialog
box enables you to specify a name and description for a local policy.
Name
Description Enables you to enter or modify a short description of the current policy.
Schedule and Retention area
The Schedule and Retention area of the Create Local Policy dialog box or Edit Local Policy dialog
box enables you to configure a schedule of local backup jobs that you can apply to members of a
virtual object dataset and also specify how long to retain the resulting Snapshot copies before their
deletion.
Schedule and Retention
Either specifies the local backup schedule and retention settings assigned to the
current local policy, or enables you to create a new backup and retention schedule
for the current local policy.
Add
Enables you to add a schedule to be applied to the current local policy backup.
Delete
Schedule Type
Displays the type of backup schedule (Hourly, Daily, Weekly, or Monthly).
Start Time
Enables you to select the time of day the local backup starts.
Stop Time
Enables you to select the time of day an hourly local backup ends (applies to
Hourly type schedules only).
Recurrence
Enables you to select the frequency with which local policy backups occur for the
associated schedule. Recurrence settings vary by schedule type:
Hourly: You can specify recurrence by hours and minutes.
Daily: Recurrence is fixed at once a day.
Weekly: You can specify recurrence by days of the week.
Monthly: You can specify recurrence by days of the month.
Retention
Enables you to select the period of time that local backup copies generated by a
schedule remain on the storage system before becoming subject to purging.
You can use any valid number and either Minutes, Hours, Days, or Weeks to set
the backup retention time.
All schedules of one type use the same retention setting. For example, changing
the retention setting for one Hourly schedule configured for this policy changes
the retention setting for all the Hourly schedules configured for this policy.
Backup Options
Enables you to view and select additional options to be implemented with your
local backups.
Backup Settings area
The Backup Settings area either specifies the current lag warning threshold, lag error
threshold, and optional backup script path, or enables you to set the lag warning
threshold, lag error threshold, and optional backup script path for the current local
policy backup.
Issue a warning if there are no backups for
Issue an error if there are no backups for
Backup Script Path Displays the existing backup script path, or, optionally, enables you to enter
a path to a backup script (located on the system upon which the host service
is installed) to specify additional operations to be executed with the backup.
Datasets
Understanding datasets
What a dataset is
A dataset is a set of physical or virtual data containers that you can configure as a unit for the
purpose of group protection or group provisioning operations.
You can use the OnCommand console to configure datasets that contain virtual VMware objects
or virtual Hyper-V objects.
You can also use the OnCommand console and the associated NetApp Management Console to configure datasets that contain physical storage systems with aggregates, volumes, qtrees, and LUNs.
During dataset configuration you can additionally configure or assign local protection or remote protection arrangements and schedules that apply to all objects in that dataset. You can also start on-demand protection operations for all objects in that dataset with one command.
Dataset concepts
Associating data protection, disaster recovery, a provisioning policy, or a storage service with a dataset enables storage administrators to automate tasks, such as assigning consistent policies to primary data, propagating policy changes, and provisioning new volumes, qtrees, or LUNs on primary and secondary dataset nodes.
Configuring a dataset combines the following objects:
Dataset of physical storage objects
Dataset of virtual objects
Resource pool
Provisioning policy
Defines how to provision storage for the primary or secondary dataset nodes, and provides rules for monitoring and managing storage space and for allocating storage space from available resource pools. Provisioning policies can be assigned directly to the primary, secondary, or tertiary nodes of datasets of physical storage objects. They can be assigned indirectly to both datasets of virtual objects and datasets of physical storage objects through a storage service.
Storage service A single dataset configuration package that consists of a protection policy,
provisioning policies, resource pools, and an optional vFiler template (for vFiler
unit creation). You can assign a single uniform storage service to datasets with
common configuration requirements as an alternative to separately assigning the
same protection policy, provisioning policies, resource pools, and setting up
similar vFiler unit attachments to each of them.
The only way to configure a dataset of virtual objects with secondary or tertiary
backup and mirror protection and provisioning is by assignment of a storage
service. You cannot configure secondary storage vFiler attachments for datasets
of virtual objects.
Local policy
A policy that schedules local backup jobs and designates retention periods for the
local backup copies for datasets of virtual objects.
Related objects
Snapshot copies, primary volumes, secondary volumes, or secondary qtrees that are generated as a result of local policy or storage service protection jobs or provisioning jobs. The OnCommand console lists related objects for each dataset on the Datasets tab.
Naming settings
Character strings and naming formats that are applied when naming related objects that are generated as a result of local policy or storage service protection jobs or provisioning jobs.
You can assign provisioning policies directly to each node in a dataset that is configured to
include and manage physical storage objects as members.
You can also assign provisioning policies to storage services, which are preconfigured
combinations of protection policies, provisioning policies, and resource pools.
You can then assign storage services directly both to datasets configured for physical storage
objects and datasets configured for virtual objects.
You can assign protection policies directly to datasets that are configured to include and manage
physical storage objects as members.
You can also assign protection policies to storage services, which are preconfigured combinations
of protection policies, provisioning policies, and resource pools.
You can then assign storage services directly both to datasets configured for physical storage
objects and datasets configured for virtual objects.
qtrees
volumes
aggregates
hosts
vFiler units
You can assign a local policy to configure local backup job scheduling and local backup copy
retention of your virtual object data.
You can assign a storage service (a preconfigured combination of a protection policy,
provisioning policies, and resource pools) to configure secondary storage and tertiary storage
backup and mirroring of your virtual object data.
A dataset designed for VMware virtual objects can include datacenter, virtual machine, and
datastore objects.
VMware datacenter objects that you include in a dataset cannot be empty.
They must contain virtual machine objects or datastore objects that also contain virtual machine
objects for successful backup.
VMware and Hyper-V objects cannot coexist in one dataset.
VMware objects and storage system container objects (such as aggregates, volumes, and qtrees)
cannot coexist in one dataset.
If you add a datastore object to a dataset, all the virtual machine objects that are contained in that
datastore are protected by the dataset's assigned local backup policy or storage service.
If a virtual machine resides on more than one datastore, you can exclude one or more of those
datastores from the dataset.
No local or remote protection is configured for the excluded datastores.
You might want to exclude datastores that contain swap files that you want to exclude from
backup.
If a virtual machine is added to a dataset, all of its VMDKs are protected by default unless one of
the VMDKs is on a datastore that is in the "exclusion list" of that dataset.
VMDKs on a datastore object in a dataset must be contained within folders in that datastore. If
VMDKs exist outside of folders on the datastore, and that data is backed up, restoring the backup
could fail.
To avoid conformance and local backup issues caused by primary volumes reaching their
Snapshot copy maximum of 255, best practice is to limit the number of virtual objects included in
a primary volume, and limit the number of datasets to which each primary volume is directly or
indirectly included as a member.
A primary volume that hosts virtual objects that are included in multiple datasets is subject to
retaining an additional Snapshot copy of itself for every local backup on any dataset that any of
its virtual object children are members of.
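The arithmetic behind this best practice can be sketched with a small example. The model below is an assumption for illustration only: it treats each retained local backup of each dataset as consuming one Snapshot copy on the hosting primary volume.

```python
# Illustrative arithmetic (assumed model, not product code): a primary volume
# whose virtual objects belong to several datasets retains one Snapshot copy
# per retained local backup of each of those datasets.
MAX_SNAPSHOTS = 255  # per-volume Snapshot copy maximum cited above

def snapshots_consumed(retained_backups_per_dataset):
    """Sum the retained backup counts across all datasets that include
    virtual objects hosted on this volume."""
    return sum(retained_backups_per_dataset)

# Three datasets, each retaining 24 hourly + 7 daily + 2 weekly backups:
per_dataset = 24 + 7 + 2           # 33 retained copies per dataset
total = snapshots_consumed([per_dataset] * 3)
print(total, total <= MAX_SNAPSHOTS)  # 99 True: still under the limit
```

Adding more datasets or longer retention multiplies the count, which is why limiting dataset membership per primary volume keeps you away from the 255 ceiling.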
To avoid backup schedule inconsistencies, best practice is to include only virtual objects that are
located in the same time zone in one dataset.
The schedules for the local protection jobs and remote protection jobs specified in the local
policies and storage services that are assigned a dataset of virtual objects are carried out
according to the time in effect on the host systems that are associated with the dataset's virtual
objects.
To ensure faster dataset backup of virtual machines in a Hyper-V cluster, best practice is to run
all the virtual machines on one node of the Hyper-V cluster.
When virtual machines run on different Hyper-V cluster nodes, separate backup operations are
required for each node in the cluster. If all virtual machines run on the same node, only one
backup operation is required, resulting in a faster backup.
If a virtual machine resides on more than one datastore, you can exclude one or more of those
datastores from the dataset. No local or remote protection is configured for the excluded
datastores.
You might want to exclude datastores that contain swap files that you want to exclude from
backup.
To avoid an excessive amount of secondary space provisioned for backup, best practice when creating volumes to host the VMware datastores whose virtual machines will be protected by the OnCommand console backup is to size those volumes to be not much larger than the datastores they host.
The reason for this practice is that when provisioning secondary storage space to back up virtual machines that are members of datastores, the OnCommand console allocates secondary space based on the size of the volumes that host those datastores rather than on the size of the datastores themselves.
Storage services enabled for disaster recovery support cannot be assigned to datasets of virtual
objects.
Preconfigured storage services supplied by the OnCommand console
To simplify the task of providing provisioning storage for virtual objects and remote protection to
virtual objects in a dataset, the OnCommand console provides a set of storage services, preconfigured
combinations of protection policies, provisioning policies, and resource pools specifically designed
to facilitate remote protection of virtual objects.
The listed storage services are optimal for use in storage facilities with five or fewer storage systems
in single resource pools. The preconfigured storage services are assigned with a Mirror protection
policy or none. You can copy, clone, or modify these storage services with other protection policies
using the NetApp Management Console. The preconfigured storage services include the following
provisioning and protection policies:
Thin Provisioned Space for VMFS Datastores with Mirror
Thin Provisioned Space for NFS Datastores with Mirror
Reserved Data Space for VMFS Datastores with Mirror
Reserved Data Space for NFS Datastores with Mirror
Reserved Data Space for Hyper-V Delegated Storage with Mirror
Thin Provisioned Space for VMFS Datastores
Thin Provisioned Space for NFS Datastores
Thin Provisioned Space for Hyper-V
Thin Provisioned Space for Hyper-V Delegated Storage
Reserved Data Space for VMFS Datastores
Reserved Data Space for NFS Datastores
Reserved Data Space for Hyper-V Storage
Reserved Data Space for Hyper-V Delegated Storage
Local protection and remote protection of virtual objects
After Snapshot copies of a dataset's virtual objects are generated as local backup, remote protection
operations that are specified in an assigned storage service configuration can save these backup
copies to secondary storage.
Secondary or tertiary protection of virtual objects cannot be accomplished unless Snapshot copies
have been generated on the primary node by backup jobs carried out on demand or through local
policy.
Local policies supplied by the OnCommand console
To simplify the task of providing local protection to virtual objects in a dataset, the OnCommand console provides a set of local policies (preconfigured combinations of local backup schedules, local backup retention settings, lag warning and lag error thresholds, and optional backup scripts) specifically designed to support local backup of certain types of data.
The preconfigured local policies include the following set:
VMware local backup policy template
This default policy enforces the following VMware environment-optimized settings related to local backup scheduling and retention. This policy can also be renamed and modified to accommodate different circumstances.
Hourly backups without VMware snapshot (crash-consistent) every hour between 7 a.m. and 7 p.m., every day including weekends
Daily backups with VMware snapshot at 10 p.m. every night, including weekends
Weekly backups with VMware snapshot every Sunday at midnight
Retention settings: Hourly backups for 2 days, Daily backups for 1 week, Weekly backups for 2 weeks
Issue a warning if there are no backups for: 1.5 days
Issue an error if there are no backups for: 2 days
Backup script path: empty
Hyper-V local backup policy template
This default policy enforces the following Hyper-V environment-optimized settings related to local backup scheduling and retention. This policy can also be renamed and modified to accommodate different circumstances.
Hourly backups every hour between 7 a.m. and 7 p.m., every day including weekends
Daily backups at 10 p.m. every night, including weekends
Weekly backups every Sunday at midnight
Retention settings: Hourly backups for 2 days, Daily backups for 1 week,
Weekly backups for 2 weeks
Issue a warning if there are no backups for: 1.5 days
Issue an error if there are no backups for: 2 days
Skip backups that will cause virtual machines to go offline
A remote protection job, as specified in a dataset's storage service settings, executes for every virtual object member at a single common time that is determined by the host systems in the most lagging time zone.
For example, if virtual machines associated with a host service in California and virtual machines
associated with a host service in New York are added to the same dataset, and the protection policy
schedule in that dataset's storage service specifies a remote backup at 9 a.m., both the virtual
machines in California and the virtual machines in New York are backed up at 9 a.m. Pacific time (or
12 noon Eastern time).
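The conversion in this example can be verified with a short sketch (illustrative only; the time zones are those of the hypothetical California and New York host services):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# The schedule fires once, at 9 a.m. in the most lagging zone (Pacific here);
# all member virtual machines are backed up at that same instant.
run = datetime(2011, 7, 15, 9, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
eastern = run.astimezone(ZoneInfo("America/New_York"))
print(eastern.hour)  # 12: the New York virtual machines see a noon backup
```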
If you plan to use Open Systems SnapVault to back up data on a host that is not running Data ONTAP, you select the secondary storage system to license the necessary Data ONTAP services.
The following licenses are available for use with DataFabric Manager server:
SnapMirror license You install a SnapMirror license on each of the source and destination
storage systems for the mirrored data. If the source and destination volumes
are on the same system, only one license is required.
SnapMirror replicates data to one or more networked storage systems.
SnapMirror updates the mirrored data to keep it current and available for
disaster recovery, offloading tape backup, read-only data distribution, testing
on nonproduction systems, online data migration, and so on. You can also
enable the SnapMirror license to use Qtree SnapMirror for backup.
To use SnapMirror software, you must update the
snapmirror.access option in Data ONTAP to specify the destination
systems that are allowed to access the primary data source system. For more
information about the snapmirror.access option, see the Data ONTAP
Data Protection Online Backup and Recovery Guide.
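For example, the option can be set from the Data ONTAP command line on the source system. The hostnames below are placeholders; see the guide referenced above for the full option syntax:

```shell
# On the primary (source) storage system, list the destination systems
# that are allowed to request SnapMirror transfers.
# "dest1" and "dest2" are placeholder hostnames, not real systems.
options snapmirror.access host=dest1,dest2
```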
SnapVault Data ONTAP secondary license
You install the SnapVault Data ONTAP Secondary license on storage systems that host the backups of protected data. SnapVault creates backups of data stored on multiple primary storage systems and copies the backups to a secondary storage system. If data loss or corruption occurs, backed-up data can be restored to a primary or open storage system with little of the downtime and uncertainty associated with conventional tape backup and restore operations. For versions of Data ONTAP 7.3 or later, a single storage system can contain a SnapVault Data ONTAP Primary license and a SnapVault Data ONTAP Secondary license.
SnapVault Data ONTAP primary license
You install the SnapVault Data ONTAP Primary license on storage systems running Data ONTAP that contain host data to be backed up. For versions of Data ONTAP 7.3 or later, a single storage system can contain a SnapVault Data ONTAP Primary license and a SnapVault Data ONTAP Secondary license.
SnapVault Windows Primary license
You install the SnapVault Windows Primary license on the secondary storage system to enable backup of data on Windows primary storage systems running the Open Systems SnapVault agent.
SnapVault Windows Open File Manager license
You install the SnapVault Open File Manager license on a secondary storage system to enable the backup of open files on Windows primary storage systems running the Open Systems SnapVault agent. You must install the SnapVault Windows Primary license and the SnapVault Data ONTAP Secondary license on the secondary storage system before installing the SnapVault Open File Manager license.
SnapVault UNIX primary license
SnapVault Linux primary license
NearStore Option license
The NearStore license enables your storage system to use transfer resources
as conservatively as if it were optimized as a backup system. This approach
is useful when the storage system on which you want to store backed-up data
is not a system optimized for storing backups, and you want to minimize the
number of transfer resources the storage system requires.
Storage systems using the NearStore license must meet certain criteria.
Deduplication
license
SnapMirror Sync
license
The SnapMirror Sync license enables you to replicate data to the destination
as soon as it is written to the source volume. SnapMirror Sync is a feature of
SnapMirror.
MultiStore Option
license
The MultiStore Option license enables you to partition the storage and
network resources of a single storage system so that it appears as multiple
storage systems on the network. Each virtual "storage system" created as a
result of the partitioning is called a vFiler unit. A vFiler unit, using the
resources assigned, delivers file services to its clients as a storage system
does.
The storage resource assigned to a vFiler unit can be one or more qtrees or
volumes. The storage system on which you create vFiler units is called the
hosting storage system. The storage and network resources used by the
vFiler units exist on the hosting storage system.
Be sure the host on which you intend to install the MultiStore Option license
is running Data ONTAP 6.5 or later.
FlexClone license
The FlexClone license is necessary on storage systems that you intend to use
as resources for secondary nodes for datasets of virtual objects.
Single file restore license
The Single file restore license is necessary on storage systems that you intend to use as primary storage for datasets of virtual objects.
Related information
Data ONTAP Data Protection Tape Backup and Recovery Guide - http://now.netapp.com/NOW/
knowledge/docs/ontap/ontap_index.shtml
If a secondary storage system runs out of storage space necessary to meet the retention duration
required by the protection policy
If the lag thresholds specified by the policy are exceeded
The following list describes protection status values and their descriptions:
Baseline Failed
Initializing
The dataset is conforming to the protection policy and the initial baseline data
transfer is in process.
Job Failure
Lag Error
The dataset has reached or exceeded the lag error threshold specified in the assigned protection policy. This value indicates that there has been no successful backup or mirror copy of a node's data within a specified period of time.
This status might result for any of the following reasons:
The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.
The most recent backup (SnapVault or Qtree SnapMirror) is older than the lag threshold setting, or no backup jobs have completed since the dataset was created.
The most recent mirror (SnapMirror) copy is older than the lag threshold setting, or no mirror jobs have completed since the dataset was created.
Lag Warning
The dataset has reached or exceeded the lag warning threshold specified in the assigned protection policy. This value indicates that there has been no successful backup or mirror copy of a node's data within a specified period of time.
This status might result for any of the following reasons:
The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.
The most recent backup (SnapVault or Qtree SnapMirror) is older than the lag threshold setting, or no backup jobs have completed since the dataset was created.
The most recent mirror (SnapMirror) copy is older than the lag threshold setting, or no mirror jobs have completed since the dataset was created.
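The relationship between the two lag statuses can be sketched as a simple threshold check. This is a hypothetical helper, not product logic; the thresholds shown mirror the preconfigured local policies (warning after 1.5 days, error after 2 days without a backup):

```python
from datetime import datetime, timedelta

# Assumed threshold values, matching the preconfigured local policy defaults.
WARNING = timedelta(days=1.5)
ERROR = timedelta(days=2)

def lag_status(last_backup, now):
    """Classify a node by the age of its most recent successful backup."""
    lag = now - last_backup
    if lag >= ERROR:
        return "Lag Error"
    if lag >= WARNING:
        return "Lag Warning"
    return "OK"

now = datetime(2011, 7, 15, 12, 0)
print(lag_status(now - timedelta(hours=40), now))  # 40h > 36h: "Lag Warning"
```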
No Protection
Policy
The dataset is managed by the OnCommand console but no protection policy has
been assigned to the dataset.
Protected
The dataset has an assigned policy and it has conformed to that policy at least
once.
Protection
Suspended
Uninitialized
The dataset has a protection policy that does not have any protection
operations scheduled.
The dataset does not contain any data to be protected.
The dataset does not contain storage for one or more destination nodes.
The single node dataset does not have any backup versions.
An application dataset requires at least one backup version associated with it.
The dataset does not contain any backup or mirror relationships.
protection status to Uninitialized. When the next scheduled backup or mirror backup job runs, or when you run an on-demand backup, the protection status changes to reflect the results of the protection job.
Configuring datasets
Adding a dataset of physical storage objects
You can use the Add Dataset wizard to add a dataset to manage protection for physical storage
objects sharing the same protection requirements, or to manage provisioning for the dataset members.
Before you begin
You must already be familiar with the Decisions to make before adding datasets of physical
storage objects (for protection) on page 200.
You must have NetApp Management Console installed.
You must have gathered the protection information that you need to complete this task:
Dataset properties
Dataset naming properties
Group membership
You must have gathered the provisioning information that you need to complete this task:
During this task, the OnCommand console launches NetApp Management Console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the NetApp Management Console open, or you can close it to conserve bandwidth.
Steps
Is there a dataset naming convention that you can use to help administrators
easily locate and identify datasets?
Dataset names can include the following characters but cannot be only
numeric:
a to z
A to Z
0 to 9
. (period)
_ (underscore)
- (hyphen)
space
If you use any other characters when naming the dataset, they do not appear in the name.
What is a good description of the dataset membership?
Use a description that helps someone unfamiliar with the dataset to understand its purpose.
Who is the owner of the dataset?
If an event on the dataset triggers an alarm, who should be contacted?
You can specify one or more individual e-mail addresses or a distribution list of people to be contacted.
Do you want operations on the dataset to be scheduled according to the local time zone for the data?
You can specify a time zone in the wizard or use the default time zone, which is the system time zone used by the DataFabric Manager server.
Dataset naming properties
Do you want to use the actual dataset name or a custom label in your dataset-level Snapshot copy, primary volume, secondary volume, or secondary qtree naming?
For customizing the naming settings of object types, do you want the current default naming format to apply to one or more object types that are generated in this dataset?
If you want to customize the dataset-level naming formats for one or more object types, in what order do you want to enter the naming attributes for Snapshot copy, primary volume, secondary volume, or secondary qtree?
Group membership
Do you need to create a collection of datasets and resource pools based on common characteristics, such as location, project, or owning organization?
Is there an existing group to which you want to add this dataset?
Resources for primary storage
For the primary node in the dataset, which resource pool meets its provisioning requirements?
If no resource pool meets the requirements of the primary node, you can create a new resource pool for each node at the Resource Pools window.
Verify that you have the appropriate software licenses on the storage you intend to use.
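The dataset-name character rules above (letters, digits, period, underscore, hyphen, and space allowed; the name cannot be only numeric) can be sketched as a quick validation check. This is a hypothetical helper for illustration, not part of the product:

```python
import re

# Allowed characters per the naming rules above; anything else would be
# dropped from the name by the wizard.
VALID_CHARS = re.compile(r"^[A-Za-z0-9._\- ]+$")

def is_valid_dataset_name(name):
    """True if the name uses only allowed characters and is not all digits."""
    return bool(VALID_CHARS.match(name)) and not name.isdigit()

print(is_valid_dataset_name("finance_nyc-01"))  # True
print(is_valid_dataset_name("12345"))           # False: only numeric
```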
Protection
policy
After you create a new dataset of physical objects, you protect it by running the
NetApp Management Console Dataset Policy Change wizard to assign a
protection policy.
Resources for secondary or tertiary storage
If you prefer not to use resource pools for automatic provisioning, you can
select individual physical resources as members of your dataset.
Verify that you have the appropriate software licenses on the storage you
intend to use.
When you assign a protection policy, will you assign a resource pool or individual
physical resources as destinations for your backups and mirror copies?
You do not have to assign a resource pool or physical resources to a node to create
a new dataset. However, the dataset will be nonconformant with its policy until
resources are assigned to each node, because the NetApp Management Console
data protection capability cannot carry out the protection specified by the policy.
If using a resource pool:
For the secondary or tertiary nodes in the dataset, which resource pool meets their provisioning requirements?
For example, the resource pool you assign to a mirror node should contain
physical resources that would all be acceptable destinations for mirror copies
created of the dataset members.
If no resource pool meets the requirements of a node, you can create a new
resource pool for each node at the Resource Pools window.
Verify that you have the appropriate software licenses on the storage you
intend to use.
If you prefer not to use resource pools for automatic provisioning, you can
select individual physical resources as destinations for backups and mirror
copies of your dataset.
Verify that you have the appropriate software licenses on the storage you
intend to use.
Review the Guidelines for adding a dataset of virtual objects on page 204.
Review the Requirements and restrictions when adding a dataset of virtual objects on page 207.
Review the Best practices when adding or editing a dataset of virtual objects on page 187
Have the protection information available that you need to complete this task:
The type of virtual objects, either VMware objects or Hyper-V objects, that you want to
include
The name you want to give this dataset
The user group to whom you want this dataset visible
Whether you want to specify dataset-level custom naming formats for the Snapshot copy,
volume, and qtree objects that are generated by local policy or storage service protection jobs
on the virtual objects in this dataset
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
The Create Dataset dialog box allows you to create an empty dataset, a populated but unprotected
dataset, a populated, remotely protected dataset, a populated locally protected dataset, or a populated
partially-configured or fully-configured dataset. A minimally configured dataset is an empty dataset.
Datasets of virtual objects must have any secondary protection and provisioning configured through a
storage service that you assign to them using the OnCommand console.
Steps
To create a dataset to manage VMware objects, select the Dataset with VMware objects
option.
To create a dataset to manage Hyper-V objects, select the Dataset with Hyper-V objects
option.
If the Dataset Conformance Report displays no warning or error information, click Close and
continue.
If the Dataset Conformance Report displays warning or error information, read the Action and Suggestion information to resolve the conformance issue, and then click Close and continue.
8. Click OK.
The OnCommand console creates your new dataset and lists it in the Datasets tab.
Related references
When you create a dataset of virtual objects, you must also decide on the set of virtual objects to include, the local policy that you want to assign, and, if appropriate, the storage service that you want to assign.
The virtual object types you want to include
The dataset that you create can include either VMware virtual objects or Hyper-V virtual objects. A
dataset configured for VMware virtual objects can include Datacenter, Virtual Machine, and
datastore objects. A dataset configured for Hyper-V virtual objects can include virtual machines.
VMware objects and Hyper-V objects cannot coexist in one dataset.
The scope of your initial dataset configuration
If you are ready to do so, the Edit Dataset dialog box and its four content areas enable you to create a
fully populated, fully protected dataset in one session.
However, if you are not ready to create a fully configured dataset in the initial session, the Edit
Dataset dialog box also allows you to create an empty dataset, a dataset of unprotected virtual
objects, a dataset of just locally protected virtual objects, or a dataset of virtual objects that are both
locally protected and remotely protected.
You can later edit a partially configured dataset to fully configure its membership or data protection.
General properties information
When you create a dataset for virtual objects, the minimum information you need to provide is the
general property information. If you complete a dataset configuration specifying only this
information, your result is a named but empty dataset.
Name
Your company might have a naming convention for datasets. If so, then the best
practice is for the dataset name to follow those requirements.
Description A useful description is one that helps someone unfamiliar with the dataset to
understand its purpose.
Owner
The name of the person or organization that is responsible for maintaining the virtual
objects that are included in this dataset.
Contact
You can specify one or more individual e-mail addresses or a distribution list of
people to be contacted when an event on the dataset triggers an alarm.
Group
If appropriate, you can specify the group to which you assign this dataset.
You can accept the default naming process of including the dataset name as part
of the entire name of related objects created for this dataset or you can specify a
custom character string to use instead.
Snapshot
Copy
You can accept the global default Snapshot copy naming format to be applied to
all local policy or storage service generated Snapshot copies for this dataset, or
you can specify an alternative custom dataset-level naming format.
Secondary
Volume
You can accept the global default secondary volume naming format to be applied
to all local policy or storage service generated secondary volumes for this dataset,
or you can specify an alternative custom dataset-level naming format.
Secondary
Qtrees
You can accept the global default secondary qtree naming format to be applied to
all local policy or storage service generated secondary qtrees for this dataset, or
you can specify an alternative custom dataset-level naming format.
Data information
If you want to populate your dataset with virtual objects in the same session that you create it, you
must be ready to specify the following information:
Group
The resource group from which you want to select virtual objects to
include in the dataset.
Resource type
(applies to datasets
configured for
VMware objects)
If you are configuring a dataset for VMware virtual objects, what class of
supported VMware object (Datacenter, Virtual Machine, or datastore) you
want to include.
Resources in the
dataset
What virtual objects that meet your group and type selection criteria you
want to include in the dataset.
Any VMware datacenter objects that you include in a dataset cannot be
empty. They must contain virtual machine objects or a datastore that
contains virtual machine objects for successful backup.
Spanned entities
(applies to datasets
configured for
VMware objects)
If one of the VMware virtual machine objects that you want to include in a
dataset spans two or more datastores, the option of whether to include or
exclude any of those datastores from that dataset.
If you want to set up remote protection, you must still generate local backup copies in primary
storage that an assigned storage service can copy and store in secondary storage. The usual method of
generating backups for this purpose is by scheduling local backup copies in the local policy.
If you intend to implement local backups on multiple datasets of Hyper-V objects that are associated
with the same Hyper-V server, you must configure separate local policies with non-overlapping
schedules to assign separately to each dataset.
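The non-overlapping requirement above can be sketched as a simple window check. This is a hypothetical helper for illustration only; it models each local policy's hourly backup window as a (start_hour, end_hour) pair in 24-hour time:

```python
# Check that two local-policy backup windows do not overlap, as required
# when multiple Hyper-V datasets share one Hyper-V server (assumed model).

def overlaps(a, b):
    """Return True if half-open hourly windows a and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

policy_one = (7, 13)   # hourly backups from 7 a.m. up to 1 p.m.
policy_two = (13, 19)  # hourly backups from 1 p.m. up to 7 p.m.
print(overlaps(policy_one, policy_two))  # False: safe to assign separately
```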
Storage services information
If you want to set up remote protection (backup or mirroring to secondary and possibly tertiary
storage locations), you must be ready to select a storage service that is configured with a protection
policy that supports this possibility and to specify the path to any additional backup script that you
require.
Requirements and restrictions when adding a dataset of virtual objects
You must be aware of the requirements and restrictions when creating or editing a dataset of virtual
objects. Some requirements and restrictions apply to datasets of all types of virtual objects and some
are specific to datasets of Hyper-V or datasets of VMware virtual objects.
General requirements and restrictions
VMDKs on a datastore object in a dataset must be contained within folders in that datastore. If
VMDKs exist outside of folders on the datastore, and that data is backed up, restoring the backup
could fail.
To avoid conformance and local backup issues caused by primary volumes reaching their
Snapshot copy maximum of 255, best practice is to limit the number of virtual objects included in
a primary volume and to limit the number of datasets in which each primary volume is directly or
indirectly a member.
A primary volume that hosts virtual objects included in multiple datasets retains an additional
Snapshot copy of itself for every local backup run on any dataset that includes one of its virtual
objects.
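The guideline above reduces to simple arithmetic. The following sketch is illustrative only: the 255-copy ceiling comes from the text, but the helper name and the sample schedule figures are assumptions, not product values.

```python
SNAPSHOT_COPY_MAX = 255  # per-volume Snapshot copy ceiling cited above

def snapshots_retained(backups_per_day, retention_days, dataset_memberships):
    """Estimate the Snapshot copies a primary volume accumulates when its
    virtual objects belong to several datasets with local backup schedules.

    Each dataset's local backups add their own copies of the volume, so
    membership in many datasets multiplies the retained-copy count.
    """
    return backups_per_day * retention_days * dataset_memberships

# Four backups a day kept for 14 days is safe for a volume whose objects
# belong to one dataset, but exceeds the ceiling at five dataset memberships.
assert snapshots_retained(4, 14, 1) <= SNAPSHOT_COPY_MAX
assert snapshots_retained(4, 14, 5) > SNAPSHOT_COPY_MAX
```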
To avoid backup schedule inconsistencies, best practice is to include only virtual objects that are
located in the same time zone in one dataset.
The schedules for the local protection jobs and remote protection jobs specified in the local
policies and storage services that are assigned a dataset of virtual objects are carried out
according to the time in effect on the host systems that are associated with the dataset's virtual
objects.
To ensure faster dataset backup of virtual machines in a Hyper-V cluster, best practice is to run
all the virtual machines on one node of the Hyper-V cluster.
When virtual machines run on different Hyper-V cluster nodes, separate backup operations are
required for each node in the cluster. If all virtual machines run on the same node, only one
backup operation is required, resulting in a faster backup.
Best practices specific to datasets of VMware objects
The following configuration practices apply specifically to datasets containing VMware objects:
If a virtual machine resides on more than one datastore, you can exclude one or more of those
datastores from the dataset. No local or remote protection is configured for the excluded
datastores.
You might want to exclude datastores that contain swap files that you want to exclude from
backup.
To avoid provisioning an excessive amount of secondary space for backup, best practice when
creating volumes to host the VMware datastores whose virtual machines will be protected by
OnCommand console backup is to size those volumes only slightly larger than the datastores
they host.
The reason for this practice is that when provisioning secondary storage space to back up virtual
machines that are members of datastores, the OnCommand console allocates secondary space that
is equal to the total space of the volume or volumes in which those datastores are located. If the
host volumes are much larger than the datastores they hold, an excessive amount of provisioned
secondary space can result.
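The allocation rule described above can be illustrated with a minimal sketch. The function name and the sample volume sizes are hypothetical; the rule itself, that secondary space equals the total size of the host volumes rather than of the datastores, is the one stated in the text.

```python
def provisioned_secondary_space_gb(host_volume_sizes_gb):
    """Secondary space allocated for backing up datastore members equals
    the total size of the volumes hosting those datastores."""
    return sum(host_volume_sizes_gb)

# A 100 GB datastore on a tightly sized 110 GB volume provisions 110 GB of
# secondary space; the same datastore on a 500 GB volume provisions 500 GB,
# wasting roughly 400 GB of secondary storage.
assert provisioned_secondary_space_gb([110]) == 110
assert provisioned_secondary_space_gb([500]) == 500
```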
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Steps
Local backup copies of the dataset's primary data, generated either by on-demand backups or by
scheduled backups specified in the local policy, must exist on the primary node for transfer by the
assigned storage service to a secondary node.
You must have a storage service available for assignment that is configured to support the
dataset's protection and provisioning requirements.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
About this task
Datasets of virtual objects must have any secondary protection and provisioning configured
through a storage service that you assign to them using the OnCommand console.
To provide an application dataset with application consistent backup protection, the OnCommand
console operator must assign to that application dataset a storage service that is configured with a
protection policy that uses a "Mirror then backup" type protection topology.
Steps
If the Dataset Conformance Report displays no warning or error information, click Close and
continue.
If the Dataset Conformance Report displays warning or error information, read the Action and
Suggestion information to resolve the conformance issue, and then click Close and continue.
5. When you are ready to confirm the storage service assignment, click OK.
The OnCommand console saves your dataset with its newly added members and lists your dataset
and its members in the Datasets tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
If you want to assign an existing local policy, select that policy from the Local Policy drop-down list.
If you want to assign an existing local policy with some modifications, select that policy, make
your modifications in the content area, and click Save.
If you want to configure a new local policy to apply to this dataset, select the Create New
option, configure the policy in the content area, and click Create.
4. After you finish assigning a new or existing local policy to this dataset, if you want to test
whether your dataset's new configuration is in conformance with OnCommand console
requirements before you apply it, click Test Conformance to display the Dataset
Conformance Report.
If the Dataset Conformance Report displays no warning or error information, click Close and
continue.
If the Dataset Conformance Report displays warning or error information, read the Action and
Suggestion information to resolve the conformance issue, and then click Close and continue.
5. Click OK.
Any local policy assignment, modification, or creation that you completed will be applied to the
local protection of the virtual objects in the selected dataset.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If you need to reschedule or modify the local backup jobs associated with the local policy of a dataset
of virtual objects, you can edit the Local Policy settings of that dataset.
Steps
Any local policy modification that you completed will be applied to the local protection of the virtual
objects in all datasets that use that local policy.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
The virtual objects continue to exist as objects, but not as members of the dataset from which they
are removed. Protection and provisioning jobs that are executed on the remaining objects in the
dataset are no longer executed on the removed objects.
Related references
During this task, the OnCommand console launches NetApp Management Console. Depending
on your browser configuration, you can return to the OnCommand console by using the Alt-Tab
key combination or clicking the OnCommand console browser tab. After the completion of this
task, you can leave the NetApp Management Console open, or you can close it to conserve
bandwidth.
Dataset-level naming properties, if customized for the related object types in a dataset, override
the global naming settings for those object types in that dataset.
Steps
If you want the OnCommand console to include the name of the dataset in the names of its
generated Snapshot copy, primary volume, secondary volume, or secondary qtree objects,
select Use dataset name.
If you want the OnCommand console to use a custom character string in place of the dataset
name in the names of its generated related objects, select Use custom label and enter the
character string that you want to use.
If you want the current global naming format to apply to one or more object types that are
generated in this dataset, select the Use global naming format option for those object types.
If you want to customize the dataset-level naming formats for one or more object types that
are generated in this dataset, select the Use custom format option for those object types, and
type the naming attributes in the order that you want those attributes to appear.
5. When you complete your naming configuration, click Next and complete the dataset creation.
6. To return to the Datasets tab, press Alt-Tab.
Result
After dataset creation is complete, the OnCommand console applies the custom dataset-level naming
formats to all objects created by protection and provisioning jobs for that dataset.
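As a rough illustration of how ordered naming attributes combine into a generated object name, consider the following sketch. The attribute names, the sample values, and the underscore separator are assumptions for illustration; the console's actual attribute tokens and separators may differ.

```python
def build_name(attributes, values, separator="_"):
    """Join the selected naming attributes, in the order chosen on the
    Naming Properties tab, into a generated object name."""
    return separator.join(values[attr] for attr in attributes)

# Hypothetical attribute values for a Snapshot copy generated by a backup job.
values = {"dataset_name": "finance_ds", "type": "backup",
          "timestamp": "2011-07-15_1200"}

# The order of the attributes determines the generated name.
assert build_name(["dataset_name", "type", "timestamp"], values) == \
    "finance_ds_backup_2011-07-15_1200"
assert build_name(["timestamp", "dataset_name"], values) == \
    "2011-07-15_1200_finance_ds"
```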
Related references
The dataset-level custom naming formats that you want to specify for the related object types
that are generated by local policy or storage service protection jobs on the virtual objects in
this dataset.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
Dataset-level naming properties customized for a related object type in a dataset override any
conflicting global naming settings that might be configured for that object type.
Steps
To create a dataset to manage VMware objects, select the Dataset with VMware entities
option.
To create a dataset to manage Hyper-V objects, select the Dataset with Hyper-V entities
option.
3. In the Create Dataset dialog box, select the Name option and enter the requested information in
the sub-tabs of the associated content area.
a. In the General Properties tab, enter the dataset name and administrative contact information.
b. In the Naming Properties tab, specify dataset-level naming formats to apply to the object
types that are generated by protection jobs run on this dataset.
4. If you want to specify, at this time, the virtual objects to be included in this dataset, select the
Data option and make your selections in the associated content area.
You can also add or change this information for this dataset at a later time.
5. If you want to specify, at this time, a storage service that executes remote protection for the
objects in this dataset, select the Storage service option and make your selection.
You can also add or change this information for this dataset at a later time.
6. If you want to specify or create and configure, at this time, a local policy that executes local
protection for the objects in this dataset, select the Local Policy option and make your local
policy selection or configuration.
You can also add or change this information for this dataset at a later time.
7. After you specify your desired amount of information about this dataset, click OK.
The OnCommand console creates your new dataset and lists it in the Datasets tab.
Related references
The name of the dataset for which you want to configure custom naming
The related object types whose naming you want to customize
If you want to include a custom label for your dataset in your custom naming format, the
character string that you want to use
If you want to customize the naming settings by entering attributes on the Naming Properties
tab, the naming attributes that you want to include in the naming format
If you want to customize the naming settings by specifying a pre-authored naming script, the
name and location of that script.
If you plan to assign a policy, you must be assigned a role that enables you to view policies.
If you plan to assign a provisioning policy, you must be assigned a role that enables you to attach
the resource pools configured for the policy.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
Dataset-level naming properties customized for the related object types in a dataset override any
conflicting global naming settings that might be configured for those related object types.
Steps
The name of the dataset for which you want to configure custom naming
The related object whose naming you want to customize
If you want to include a custom name for your dataset in your custom naming format, the
character string that you want to use
If you want to customize naming settings by selecting and ordering attributes from the
Naming Properties page, the naming attributes that you want to include in the naming format
If you plan to assign a policy, you must be assigned a role that enables you to view policies.
If you plan to assign a provisioning policy, you also need a role that enables you to attach the
resource pools configured for the policy.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
During this task, the OnCommand console launches NetApp Management Console. Depending
on your browser configuration, you can return to the OnCommand console by using the Alt-Tab
key combination or clicking the OnCommand console browser tab. After the completion of this
task, you can leave the NetApp Management Console open, or you can close it to conserve
bandwidth.
Dataset-level naming properties customized for the protection-related objects override any
conflicting global naming settings that might be configured.
Steps
5. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand
console.
Result
After the dataset edit is complete, the OnCommand console applies the custom dataset-level naming
format that you specified to all future objects of that type that are generated by that dataset's
protection and provisioning jobs.
Related references
Have the protection information available that you need to complete this task:
The objects that you select must all be of the same virtual object type.
The supported object types include VMware datacenter, virtual machine, or datastore objects or
Hyper-V virtual machine objects.
Any VMware datacenter objects that you include in a dataset cannot be empty.
They must contain datastore or virtual machine objects for successful backup.
Steps
Click Close to close the confirmation box and view the Server tab.
Click the linked dataset name to view the listing of the new dataset in the Datasets tab.
Related references
Have the protection information available that you need to complete this task:
The type of virtual objects that you want to add to an existing dataset.
The names of the virtual objects that you want to add.
The name of the dataset to which you want to add your selected objects.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
Although a dataset might contain more than one object type of the same family, the objects that
you select for this operation must all be of the same virtual object type.
The supported object types include VMware datacenter, virtual machine, or datastore objects or
Hyper-V virtual machine objects.
Any VMware datacenter objects that you include in a dataset cannot be empty.
They must contain datastore or virtual machine objects for successful backup.
Steps
Click Close to close the confirmation box and view the Server tab.
Click the linked dataset name to view the listing of the updated dataset in the Datasets tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
A Hyper-V parent host does not allow simultaneous or overlapping local backups on multiple virtual
machines that are associated with it; therefore, each associated dataset of Hyper-V objects that you
want to provide with local protection requires a separate local policy with a schedule that does not
overlap the schedule of any other local policy.
1. In the Policies tab, select Hyper-V Local Policy Template and click Copy to create an
alternative local policy for each dataset that is associated with the Hyper-V parent host.
2. Still in the Policies tab, edit the Schedule and Retention area of each alternative local policy that
you just created so that none of those policies has a schedule that overlaps with the schedule of
any other.
3. In the Datasets tab, edit the Local Policy area of each separate dataset that is associated with the
Hyper-V parent host to assign it a separate one of the local policies that you have just edited.
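The non-overlap requirement in the steps above can be checked with a simple interval test. This is an illustrative sketch, not OnCommand console logic; it assumes each local policy's backup window is a same-day start and end time.

```python
from datetime import time

def windows_overlap(start_a, end_a, start_b, end_b):
    """Two same-day backup windows overlap unless one ends no later than
    the other starts."""
    return start_a < end_b and start_b < end_a

# Staggering the copied Hyper-V local policies avoids a collision:
assert not windows_overlap(time(1, 0), time(2, 0), time(2, 0), time(3, 0))
# Identical or intersecting schedules on two datasets of the same
# Hyper-V parent host would collide:
assert windows_overlap(time(1, 0), time(2, 0), time(1, 30), time(2, 30))
```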
Related references
Managing datasets
Performing an on-demand backup of a dataset
You can perform a local on-demand dataset backup to protect your virtual objects.
Before you begin
You must have reviewed the Guidelines for performing an on-demand backup on page 277
You must have reviewed the Requirements and restrictions when performing an on-demand
backup on page 279
You must have added the virtual objects to an existing dataset or have created a dataset and added
the virtual objects that you want to back up.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
You must have the following information available:
Dataset name
Retention duration
Backup settings
Backup script location
Backup description
If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently
restoring those virtual machines, the backup might fail.
Steps
You can monitor the status of your backup from the Jobs tab.
Guidelines for performing an on-demand backup
Before performing an on-demand backup of a dataset, you must decide how you want to assign
resources and assign protection settings.
General properties information
When performing an on-demand backup, you need to provide information about what objects you
want to back up, to assign protection and retention settings, and to specify script information that
runs before or after the backup operation.
Dataset name: You must select the dataset that you want to back up.
Local protection settings: You can define the retention duration and the backup settings for your on-demand backup, as needed.
Retention: You can choose to keep a backup until you manually delete it, or you can assign a retention duration. By specifying a length of time to keep the on-demand local backup, you can override the retention duration in the local policy you assigned to the dataset for this backup. The retention duration of a local backup defaults to a retention type for the remote backup. A combination of both the remote backup retention type and storage service is used to determine the remote backup retention duration.
Remote retention type: Hourly, Daily, Weekly, or Monthly.
Backup settings: You can choose your on-demand backup settings based on the type of virtual objects you want to back up: Allow saved state backup (Hyper-V only), Create VMware snapshot (VMware only), or Include independent disks (VMware only).
Backup script path: You can specify a script that is invoked before and after the local backup. The script is invoked on the host service and the path is local to the host service. If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.
Backup description: You can provide a description for the on-demand backup so you can easily find it when you need it.
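The path conventions described for backup scripts can be sketched as a small validation helper. The function names are hypothetical; the check merely encodes the guideline that PowerShell scripts should use the drive letter convention, while other script types may use either the drive letter convention or UNC.

```python
import re

def path_style(path):
    """Classify a backup script path as drive-letter style, UNC style,
    or unknown."""
    if re.match(r"^[A-Za-z]:\\", path):
        return "drive-letter"
    if path.startswith("\\\\"):
        return "unc"
    return "unknown"

def valid_for_script(path, is_powershell):
    """Apply the guideline above: PowerShell scripts should use a drive
    letter path; other script types may use either convention."""
    style = path_style(path)
    if is_powershell:
        return style == "drive-letter"
    return style in ("drive-letter", "unc")

assert valid_for_script(r"C:\scripts\pre_backup.ps1", is_powershell=True)
assert not valid_for_script(r"\\fileserver\scripts\pre_backup.ps1",
                            is_powershell=True)
assert valid_for_script(r"\\fileserver\scripts\pre_backup.cmd",
                        is_powershell=False)
```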
General requirements: Virtual machines or datastores must first belong to a dataset before you back them up. You can add virtual objects to an existing dataset or create a new dataset and add virtual objects to them.
Hyper-V specific requirements: Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB of free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine. Hyper-V virtual machine configuration files, Snapshot copy files, and VHDs must reside on Data ONTAP LUNs; otherwise, backup operations fail.
VMware specific requirements: Backup operations of datasets containing empty VMware datacenters or datastores fail. All datacenters must contain datastores or virtual machines for a backup to succeed. Virtual disks must be contained within folders in the datastore. If virtual disks exist outside of folders on the datastore, and that data is backed up, restoring the backup could fail. NFS backups might take more time than VMFS backups because VMware takes more time to commit snapshots in an NFS environment.
Hyper-V specific restrictions: Partial backups are not supported. If the Hyper-V VSS writer fails to back up one of the virtual machines in the backup and the failure occurs at the Hyper-V parent host, the backup fails for all of the virtual machines in the backup.
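The Hyper-V free-space requirement can be expressed as a simple pre-backup check. The helper name and the sample volume figures are illustrative assumptions; the 300 MB threshold per Windows volume is the one stated above.

```python
MIN_FREE_BYTES = 300 * 1024 * 1024  # 300 MB per Windows volume in the guest

def volumes_below_minimum(free_bytes_by_volume):
    """Return the guest Windows volumes (VHD-, iSCSI LUN-, or
    pass-through-backed) that fall below the 300 MB free-space minimum
    and would therefore block a Hyper-V dataset backup."""
    return [vol for vol, free in free_bytes_by_volume.items()
            if free < MIN_FREE_BYTES]

# The D: volume in this hypothetical guest would block the backup.
assert volumes_below_minimum(
    {"C:": 500 * 1024 * 1024, "D:": 100 * 1024 * 1024}) == ["D:"]
```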
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
When you delete a dataset, the physical resources that compose the dataset are not deleted.
Steps
Before performing maintenance on volumes used as destinations for backups or mirror copies,
you might want to stop protection and conformance checking of the dataset to which the volume
belongs to ensure that the protection application does not initiate a new backup or mirror
relationship for the primary data.
Note: If you suspend protection for a dataset and the lag time exceeds the threshold defined for
the dataset, no lag threshold event is generated until protection is resumed. After you resume
protection for the dataset, the protection application generates the backlog of lag threshold
events that would have been generated had protection been in effect and triggers any applicable
alarms.
Note: When you suspend services on application datasets, the external application continues to
This task suspends all policies that are assigned to the dataset. You cannot choose to suspend only
protection when both protection and provisioning policies are assigned to a dataset.
Steps
All scheduled backups and provisioning are cancelled until service is resumed.
After you finish
After you bring the storage system volume online again, you must wait for the DataFabric Manager
server to recognize that the volume is back online. You can check the backup volume status using
Operations Manager.
You can resume data protection from the Datasets tab.
Note: If you suspend protection for a dataset and the lag time exceeds the threshold defined for the
dataset, no lag threshold event is generated until protection is resumed. After you resume
protection for the dataset, the protection application generates the backlog of lag threshold events
that would have been generated had protection been in effect and triggers any applicable alarms.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
The following conditions must exist:
The new storage service that you want to attach is available in the group in which you want to
locate that dataset.
All datasets to which you want to attach the new storage service are currently attached to the
same current storage service.
The Change Storage Service wizard allows you to select an alternative storage service, presents you
with possible node remapping alternatives along with rebaselining requirements for each alternative,
carries out a dry run of your request, and then implements your request upon your approval.
During this task, the OnCommand console launches NetApp Management Console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the NetApp Management Console open, or you can close it to conserve bandwidth.
Steps
4. Confirm the details of the storage service and click Finish.
5. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand
console.
6. Refresh your browser to update the OnCommand console with the changes you made.
Result
The selected datasets are listed in the datasets table with their newly attached storage service named
in the storage service column.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Confirm that all datasets to which you want to attach the storage service currently use the same
protection policy or no protection policy.
About this task
The Attach Storage Service wizard allows you to select from a list of possible storage services,
presents you with possible node remappings and associated rebaselining requirements for the storage
service that you select, carries out a dry run of your request, and then implements your request upon
your approval.
After you attach a storage service to an existing dataset, you cannot directly edit that dataset to
change its individual protection policy selection, provisioning policy selections, or resource pool
selections as long as that storage service is attached. You can only edit the attached storage service to
change the protection policy, provisioning policy, or resource pool selections for all datasets attached
to that storage service.
During this task, the OnCommand console launches NetApp Management Console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the NetApp Management Console open, or you can close it to conserve bandwidth.
Steps
The selected datasets are listed in the datasets table with their newly attached storage service named
in their storage service column.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
You can restore files contained in volumes, qtrees, and vFiler units that were backed up as members
of a dataset.
During this task, the OnCommand console launches NetApp Management Console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the NetApp Management Console open, or you can close it to conserve bandwidth.
Steps
2. In the Datasets tab, select the dataset whose data you want to restore, click More then select
Restore to start the Restore wizard.
The wizard displays the Backup Files window.
3. In the Backup Files window, select the backup copy containing the data that you want to restore.
4. Select the volumes, qtrees, directories and files contained in the backup copies that you want to
restore and continue running the Restore wizard.
5. Click Finish to end the wizard and begin the restore operation.
6. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand
console.
After you finish
You can use the Jobs window to track the progress of the restore job and monitor the job for possible
errors.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If a virtual object that is a member of a dataset is deleted from inventory by third-party
management tools before it is removed as a member of its dataset, any subsequent backup jobs
attempted on that dataset are only partially successful until you complete the following actions to
remove its references from the dataset.
Steps
After the update of the dataset is complete, the partial backup failures caused by the deleted
virtual objects in the dataset stop, and fully successful backup jobs resume.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
This procedure assumes you are viewing the Conformance Details dialog box that you displayed by
selecting a nonconformant dataset in the Datasets tab and clicking its status display.
The dialog box displays Information, Error, Action, Reason, and Suggestion text about the
nonconformance condition.
Steps
Error: Indicates the configuration operations that the OnCommand console cannot perform on this dataset due to conformance issues.
Action: Indicates what the OnCommand console conformance engine did to discover the conformance issue.
Reason
Suggestion
2. Based on the dialog box text, decide the best way to resolve the conformance issue.
If the dialog box text indicates that the OnCommand console conformance monitor cannot
automatically resolve the conformance issue, resolve this issue manually.
If Suggestion text indicates that automatically resolving the conformance issues requires a
baseline transfer of data, first attempt to resolve this issue manually. If unsuccessful, consider
resolving the issue automatically even if doing so requires a baseline transfer of data.
If the Suggestion text indicates that the OnCommand console conformance monitor can
resolve the conformance issue automatically without reinitiating a baseline transfer of data,
first consider resolving the issue manually. If unsuccessful, resolve the issue automatically.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Because of the probable time and bandwidth required for a baseline transfer completion, a
resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.
This procedure assumes you are viewing the Conformance Details dialog box that you displayed
by selecting a nonconformant dataset in the Datasets tab and clicking its status display.
1. In the Conformance Details dialog box, confirm that the messages indicate that the conformance
issues cannot be resolved automatically.
2. Using the conformance messages, determine what is causing the nonconformance problem and
attempt to correct the condition manually.
You might need to log in to another GUI or CLI console to resolve the issues.
3. After you have attempted to correct the condition, wait at least one hour for the conformance
monitor to update the dataset's conformance status.
4. Return to the Conformance Details dialog box and click Test Conformance to determine if the
conformance issue is resolved.
If the conformance issue is resolved, the Conformance Details dialog box does not display the
"Conform" button.
5. If the conformance issue is resolved, click Cancel.
6. If the conformance issue is not resolved, repeat Steps 2, 3, and 4.
After you finish
After you achieve dataset conformant status, continue with the operation that required the dataset to
be conformant.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Because of the probable time and bandwidth required for a baseline transfer completion, a
resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.
This procedure assumes you are viewing the Conformance Details dialog box that you displayed
by selecting a nonconformant dataset in the Datasets tab and clicking its status display.
Steps
1. In the Conformance Details dialog box, read the text to determine the ability of the conformance
engine to automatically resolve the nonconformant condition without reinitializing a baseline
transfer of data.
2. If the text suggests that a simple automatic resolution is possible, click Conform.
The OnCommand console conformance engine closes the Conformance Details dialog box and
attempts to reconfigure storage resources to resolve storage service protection and provisioning
policy conformance issues automatically.
3. Monitor the conformance status on the Datasets tab for the resulting value (Conformant,
Conforming, or Nonconformant).
After you finish
After you achieve dataset conformant status, continue with the operation that required the dataset to
be conformant.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Because of the time and bandwidth that a baseline transfer can require, a resolution that avoids a
baseline transfer of data is preferable to a resolution that triggers one.
This procedure assumes you are viewing the Conformance Details dialog box that you displayed
by selecting a nonconformant dataset in the Datasets tab and clicking the button next to the
Conformance status display.
1. In the Conformance Details dialog box, confirm that warning text is displayed that indicates that
a reinitialized baseline transfer of data might be required.
You should try to resolve the conformance issues manually before initializing a time-consuming
baseline transfer of your data.
2. Using the conformance messages, determine what is causing the conformance problem and
attempt to correct the condition manually.
You might need to log in to another GUI or CLI console to resolve the issues.
3. After you have attempted to correct the condition, wait at least one hour for the conformance
monitor to update the dataset's conformance status.
4. Return to the Conformance Details dialog box and click Test Conformance to determine if the
conformance issue is resolved.
If the conformance issue is resolved, the Conformance Details dialog box does not display the
"Conform" button.
5. If the conformance issue is not resolved, click Conform to attempt automated resolution and
initiate a rebaseline of your data.
After you finish
After you achieve dataset conformant status, continue with the operation that required the dataset to
be conformant.
Related references
Monitoring datasets
Overview of dataset status types
The OnCommand console reports on each dataset's protection status, conformance status, and
resource status.
Although protection policies are not assigned directly to datasets of virtual objects, the OnCommand
console still displays the statuses related to protection policies that are assigned indirectly to datasets
of virtual objects, as components of assigned storage services.
A dataset's protection is at risk in either of the following circumstances:
If a secondary storage system runs out of the storage space necessary to meet the retention
duration required by the protection policy
If the lag thresholds specified by the policy are exceeded
The following list describes protection status values:
Baseline Failed
The initial baseline data transfer for the dataset did not complete successfully.
Initializing
The dataset is conforming to the protection policy and the initial baseline data
transfer is in process.
Job Failure
The most recent protection job on the dataset did not complete successfully.
Lag Error
The dataset has reached or exceeded the lag error threshold specified in the
assigned protection policy. This value indicates that there has been no successful
backup or mirror copy of a node's data within a specified period of time.
This status might result for any of the following reasons:
- The most recent local backup (Snapshot copy) on the primary node is older
than the threshold setting permits.
- The most recent backup (SnapVault or Qtree SnapMirror) is older than the
lag threshold setting, or no backup jobs have completed since the dataset was
created.
- The most recent mirror (SnapMirror) copy is older than the lag threshold
setting, or no mirror jobs have completed since the dataset was created.
Lag Warning
The dataset has reached or exceeded the lag warning threshold specified in the
assigned protection policy. This value indicates that there has been no successful
backup or mirror copy of a node's data within a specified period of time.
This status might result for any of the following reasons:
- The most recent local backup (Snapshot copy) on the primary node is older
than the threshold setting permits.
- The most recent backup (SnapVault or Qtree SnapMirror) is older than the
lag threshold setting, or no backup jobs have completed since the dataset was
created.
- The most recent mirror (SnapMirror) copy is older than the lag threshold
setting, or no mirror jobs have completed since the dataset was created.
No Protection Policy
The dataset is managed by the OnCommand console but no protection policy has
been assigned to the dataset.
Protected
The dataset has an assigned policy and has conformed to that policy at least
once.
Protection Suspended
Scheduled protection jobs on the dataset have been suspended.
Uninitialized
The dataset's protection has not been initialized. This status might result for any
of the following reasons:
- The dataset has a protection policy that does not have any protection
operations scheduled.
- The dataset does not contain any data to be protected.
- The dataset does not contain storage for one or more destination nodes.
- The single-node dataset does not have any backup versions. An application
dataset requires at least one backup version associated with it.
- The dataset does not contain any backup or mirror relationships.
Any of these conditions sets the protection status to Uninitialized. When the next
scheduled backup or mirror job runs, or when you run an on-demand backup, the
protection status changes to reflect the results of the protection job.
Descriptions of dataset conformance status
The dataset conformance status indicates whether a dataset is configured according to its local policy
or storage service's protection policy. To be in conformance, all secondary and tertiary storage that is
part of the backup relationship must be successfully provisioned and the provisioned objects must
match the requirements of the primary data. You can monitor dataset status using the Datasets tab.
The OnCommand console regularly checks a dataset for conformance. If it detects changes in the
dataset's membership or policy definition, the console determines the corrective actions to take.
You can view these actions and approve them in the Conformance Details dialog box.
A dataset might be nonconformant because there are no available resources from which to provision
the storage or because the NetApp Management Console data protection capability does not have the
necessary credentials to provision the storage resources.
The following list describes dataset conformance values:
Conformant
The dataset is in conformance with all associated policies.
Conforming
The dataset is not in conformance with all associated policies. The OnCommand
console is performing actions to bring the dataset into conformance.
Nonconformant
The OnCommand console cannot bring the dataset into conformance with all
associated policies and might require your approval or intervention to complete
this task.
Descriptions of dataset resource status
The dataset resource status indicates the event status for all resource objects that are assigned to the
dataset. The resources include those that are members of the secondary and tertiary storage systems.
If, for example, a tertiary member's status is critical, the dataset's resource status also is displayed as
critical.
You can monitor dataset status by using the Datasets tab. You can troubleshoot the resource objects
by reviewing the events that are listed for them.
The following list describes resource status values:
Normal
A previous abnormal condition for the resource returned to a normal state and the
resource is operating within the desired thresholds. No action is required.
Warning
The resource experienced an occurrence that you should be aware of. This event
severity does not cause service disruption, and corrective action might not be
required.
Error
The resource is still performing, but corrective action is required to avoid service
disruption.
Critical
A problem occurred that might lead to service disruption if you do not take
immediate corrective action.
Emergency
The resource unexpectedly stopped working and experienced unrecoverable data
loss. You must take corrective action immediately to avoid extended downtime.
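The "most severe event wins" rule that the dataset resource status uses can be sketched in a few lines of shell. This is an illustrative sketch of the aggregation logic described above, not a NetApp tool.

```shell
# Sketch (not a NetApp command): derive an overall resource status by taking
# the most severe event level across dataset members, using the severity
# order described above (Normal < Warning < Error < Critical < Emergency).
worst_severity() {
  awk 'BEGIN {
         rank["Normal"]=0; rank["Warning"]=1; rank["Error"]=2
         rank["Critical"]=3; rank["Emergency"]=4
         worst="Normal"
       }
       rank[$1] > rank[worst] { worst=$1 }
       END { print worst }'
}

# Example: one member reports Error, so the overall status is Error.
printf 'Normal\nError\nWarning\n' | worst_severity   # prints "Error"
```

This mirrors how, for example, a single critical tertiary member makes the whole dataset's resource status display as critical.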
Datasets can fail to conform to their protection policy for several common reasons, which are
described below. Resolving some conformance issues automatically can initiate a rebaseline of your
data, which might require significant time and bandwidth. As a result, if you do not want a
rebaseline to occur, you should try manual corrections to your system to resolve conformance
issues before you choose to use the Conform option.
After making manual corrections to your system, you can return to the Conformance Results window
and click the Test Conformance button to see if any changes made to the system have brought the
dataset into conformance with the policy assigned to it. Test Conformance initiates a new check on
the dataset but does not execute a conformance run. The results of the check reflect the latest system
updates that have been identified by the monitors and captured in the DataFabric Manager server
database. Therefore, the information displayed in the Conformance Results window might not reflect
recent changes made to a storage system or configuration and could be outdated by a few minutes or
a few hours, depending on the changes made and the scanning interval for each monitor.
You can view a list of monitor intervals by using the command dfm option list | grep
Interval. Common monitoring actions, such as discovering new hosts, each have a default update
interval and an associated monitor in standard DataFabric Manager server configurations.
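On a system with the DataFabric Manager server CLI available, the interval listing above can be scripted. Because no live server can be assumed here, the sketch below simulates the command's output; the option names and values shown are illustrative assumptions, not confirmed defaults.

```shell
# Sketch: list monitor intervals as the text describes, via
# `dfm option list | grep Interval`. fake_dfm_option_list simulates that
# command's output; the option names/values are illustrative only.
fake_dfm_option_list() {
  cat <<'EOF'
discoverHostsInterval    15 minutes
monitorInterval          30 minutes
snapshotMonInterval      30 minutes
licenseMonInterval       4 hours
EOF
}

# On a real DataFabric Manager server you would run:
#   dfm option list | grep Interval
fake_dfm_option_list | grep -i interval | sort
```

Checking these intervals tells you how stale the information in the Conformance Results window might be before the relevant monitor runs again.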
When the conformance monitor detects a change in the dataset's membership or policy definition, it
determines the corrective actions to take, as in the following examples:
The OnCommand console provisions a destination volume but the aggregate in which the volume
is contained is no longer a member of the assigned resource pool.
Corrective action: The OnCommand console creates a new volume and moves the
relationship to it.
Does the action require your approval? Yes
Corrective action: The OnCommand console moves the relationship to an existing volume.
Does the action require your approval? Yes
The destination volume does not have enough backup space or it is over its "nearly full"
threshold.
Corrective action: The OnCommand console provisions the volume and migrates the physical
relationship to a new destination volume.
Does the action require your approval? Yes
Corrective action: The OnCommand console deletes the backup versions. The console also
deletes the copies of the data if those copies do not contain other backup versions.
Does the action require your approval? No
Policy calls for the source data to be mirrored but the source volume is not protected in a mirror
relationship.
Corrective action: The OnCommand console creates the required mirror relationship.
Policy calls for the source data to be backed up but the source qtree is not protected in a backup
relationship.
Corrective action: The OnCommand console creates the required backup relationship.
An imported relationship has been detected in which the secondary volume exceeds the
volFullThreshold.
Corrective action: You must manually increase the secondary volume size. The conformance
monitor cannot resolve this condition.
The application does not have the appropriate credentials to access the assigned resources.
Corrective action: You must provide the credentials for access to the hosts or storage systems.
Test Conformance
Allows you to test whether manual changes that you have made to your dataset
configuration have brought it into conformance with its protection and
provisioning policies before you execute a conformance run. The results of the
test reflect the latest system updates that have been identified by the monitors and
changes you have specified but not yet executed in the current OnCommand
console session.
Conform Now
Starts a conformance run that attempts to bring the dataset into conformance with
its policies.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
3. Click the button next to the resource status, if it is displayed.
If the button is displayed for resources, clicking it displays a dialog box that lists events
related to warning-level or critical-level resource issues. You can use the Acknowledge button to
mark an event as acknowledged. If you take actions outside of the dialog box that resolve an
event issue, you can use the Resolve button to mark that event as resolved.
4. To view details about secondary or tertiary nodes, click the corresponding tabs for these nodes.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Although protection policies are not assigned directly to datasets of virtual objects, the OnCommand
console still displays the backup and mirror relationships of a protection policy that is assigned
indirectly to datasets of virtual objects, as a component of an assigned storage service.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the View menu and click the Datasets option to display the Datasets tab.
2. In the Datasets tab, click the column header labeled Conformance Status and select
Nonconformant.
3. If the Datasets tab lists a dataset with nonconformant status, select that dataset to display its
Details area.
4. Click the button next to the dataset's Conformance status.
The Conformance Details dialog box displays the results of the most recent conformance check
and suggestions for resolving the issues encountered.
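If you prefer to script this check rather than filter the Conformance Status column in the GUI, filtering a dataset listing for nonconformant entries might look like the following. The listing is simulated sample data, because the exact output format of the DataFabric Manager CLI is not shown in this procedure.

```shell
# Sketch: filter a dataset listing for Nonconformant entries, analogous to
# filtering the Conformance Status column on the Datasets tab. list_datasets
# emits simulated sample data, not real DataFabric Manager output.
list_datasets() {
  cat <<'EOF'
ds_payroll   Conformant
ds_vmware1   Nonconformant
ds_exchange  Conforming
ds_hyperv2   Nonconformant
EOF
}

# Print only the names of nonconformant datasets.
list_datasets | awk '$2 == "Nonconformant" { print $1 }'
```

Each name that this prints would be a candidate for opening the Conformance Details dialog box in the OnCommand console.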
After you finish
After you display the Conformance Details dialog box, you must address and resolve the issues that
are indicated by its warning and error messages.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
This procedure assumes you are viewing the Conformance Details dialog box that you displayed by
selecting a nonconformant dataset in the Datasets tab and clicking the button next to the
Conformance status display.
The dialog box displays Information, Error, Action, Reason, and Suggestion text about the
nonconformance condition.
Steps
1. In the Conformance Details dialog box, review the displayed text:
Error: Indicates the configuration operations that the OnCommand console cannot perform on
this dataset due to conformance issues.
Action: Indicates what the OnCommand console conformance engine did to discover the
conformance issue.
Reason: Indicates why the conformance issue occurred.
Suggestion: Indicates how you might resolve the conformance issue.
2. Based on the dialog box text, decide the best way to resolve the conformance issue.
If the dialog box text indicates that the OnCommand console conformance monitor cannot
automatically resolve the conformance issue, resolve this issue manually.
If the Suggestion text indicates that automatically resolving the conformance issues requires a
baseline transfer of data, first attempt to resolve the issue manually. If unsuccessful, consider
resolving the issue automatically even though doing so requires a baseline transfer of data.
If the Suggestion text indicates that the OnCommand console conformance monitor can
resolve the conformance issue automatically without reinitiating a baseline transfer of data,
first consider resolving the issue manually. If unsuccessful, resolve the issue automatically.
Page descriptions
Datasets tab
The Datasets tab enables you to create, edit, survey, and manage protection of your datasets.
From the Datasets tab you can launch the configuration of datasets of both virtual objects and
physical objects, monitor their status, back up dataset content on demand, suspend and resume
protection, and initiate restore operations.
Command buttons
Create
Enables you to create datasets to manage physical storage objects or virtual objects. The
Create command gives you the following sub-options:
If you select the Dataset with Hyper-V objects sub-option, starts the Create
Dataset dialog box for adding a Hyper-V virtual object dataset.
If you select the Dataset with VMware objects sub-option, starts the Create
Dataset dialog box for adding a VMware virtual object dataset.
If you select the Dataset with Storage objects sub-option, starts the NetApp
Management Console Add Dataset wizard for adding a storage dataset.
Edit
If you select a dataset of VMware virtual objects or Hyper-V virtual objects, starts
the OnCommand console Edit Dataset dialog box for editing datasets that hold
virtual objects.
If you select a dataset of storage objects, starts the NetApp Management Console
Edit Dataset window for editing a storage dataset.
If you select a dataset that still contains no objects, enables you to specify the type
of dataset that you want it to be (Dataset with Hyper-V objects, Dataset with
VMware objects, or Dataset with Storage objects).
Delete
Deletes the selected dataset or datasets and thereby removes the protection relationships
among its member objects.
More
Provides additional dataset commands, including the following:
Resume
Restore
Back Up Now: For datasets of VMware or Hyper-V objects, opens the Back Up Now
dialog box in the OnCommand console. For datasets of physical storage objects,
opens the Protect Now dialog box in NetApp Management Console.
Attach Storage Service
Detach Storage Service
Change Storage Service
Refresh
Datasets list
A list that provides information about existing datasets. Click a row in the list to view information in
the Details area about the selected dataset.
Name
Displays the name of the dataset.
Data Type
Displays the type of data that the dataset contains:
- Physical
- VMware
- Hyper-V
- Undefined (the dataset is empty of objects)
Overall Status
Displays the status derived from the combined status conditions for disaster
recovery, protection, conformance, space, and resources. Based on those other
status values, the overall status is displayed as a value such as Error or Warning.
Storage Service
Displays what storage service, if any, is attached to the dataset. A dataset that is
attached to a storage service uses the protection policy, provisioning policies,
resource pools, and vFiler unit configurations specified by that storage service.
Local Policy
Displays the name of the local protection policy, if any, that is attached to the
selected dataset. Datasets of virtual objects might have a local policy attached to
them.
Protection Policy
Displays the name of the protection policy that is either assigned directly to a
dataset of physical storage objects or to a storage service that is then assigned to
a dataset. This information is hidden by default.
Provisioning Policy
Displays the name of the provisioning policy currently assigned to the primary
node of the dataset. If a provisioning policy is assigned to a secondary node in
the dataset, that name is displayed in the details area when you select the
secondary node in the graph area. This information is hidden by default.
Space Status
Displays the status of the available space for the selected dataset node (OK,
Warning, Error, or Unknown).
Conformance Status
Displays the conformance status of the dataset (Conformant, Conforming, or
Nonconformant).
Protection Status
Displays the protection status of the dataset. Values include the following:
Baseline Failure
Initializing
Job Failure
Lag Error
Lag Warning
No Local Policy
No Protection Policy Attached
Not Protected
Protected
Protection Suspended
Uninitialized
Resource Status Displays the most severe of all current events on all direct and indirect members
of the dataset nodes. Values can be Emergency, Critical, Error, Warning, or
Normal.
Failed Over
Displays the failover state of the dataset:
Yes: Failover on the dataset was invoked and completed successfully, completed
with warnings, or completed with errors.
No: Failover on the dataset has not been invoked.
In Progress: Failover on the dataset is currently in progress.
Not Applicable: The dataset is not assigned a disaster recovery protection policy
and, therefore, is not capable of failover.
Application
Displays the name of the application that created an application dataset, such as
SnapManager for Oracle. This item is not included in the dataset list by default.
Application Version
Displays the version of the application that created the application dataset. This
item is not included in the dataset list by default.
Application Server
Displays the name of the server that runs the application that created the
application dataset. This item is not included in the dataset list by default.
Members list
A folder list of the physical storage object types or the virtual object types that are currently included
as members of the selected dataset.
Clicking the folder for an object type displays the names of the dataset members of that object
type.
The names of virtual objects are linked to a dataset inventory page for their object type.
The names of physical storage objects are only listed.
If a dataset of physical objects contains more than three members of an object type, clicking
More displays all members of that type in a popup dialog box.
If the dataset of virtual objects contains more than three members of an object type, clicking
More displays all members of that type in the inventory page for that object type.
Clicking the folder for an object type displays the names of the dataset's related objects of that
object type.
If the dataset contains more than three objects of that type, clicking More displays all objects of
that type, either in a popup dialog box or in the inventory page for that object type.
Graph area
The graphical representation of the nodes for the selected dataset is displayed in the lower section
of the page.
Overview tab
This tab displays the following status and general property details of the selected dataset:
Protection
Displays the protection status of the selected dataset. Values include the following:
Baseline Failure
Initializing
Job Failure
Lag Error
Lag Warning
No Protection Policy Attached
Not Protected
Protected
Protection Suspended
Uninitialized
Resource
Represents the most severe of all current events on all direct and indirect members
of the dataset nodes. Values can be Emergency, Critical, Error, Warning, or
Normal. For Emergency, Critical, Error, or Warning conditions, click the button
next to the status to evaluate the events and sources causing those conditions.
Space
Displays the status of the available space for the selected dataset node (OK,
Warning, Error, or Unknown). If any volume, qtree, or LUN of a dataset has space
allocation error or warning conditions, the dataset's space status indicates that
condition. You can select the dataset to scan its volumes, LUNs, or qtrees to
determine which member is the cause of the warning or error condition.
Failed over
Displays the failover state of the dataset.
Description
Displays the description of the dataset, if one is specified.
Owner
Displays the owner of the dataset, if one is specified.
Contact
Displays the e-mail contact address for this dataset if one is specified.
Time Zone
Displays the time zone in which the primary node of the selected dataset is located.
This detail applies only to empty datasets or datasets of physical objects.
Custom Label Displays values that the user might have defined for this dataset.
Primary Node tab
This subtab displays the schedule of local Snapshot copy backups to be executed on the primary
dataset node. If you change the name of the primary node, the title of this subtab matches your
change.
Local Backup
Schedule
Displays the names of the local backup schedules that are assigned to the
primary data node of this dataset.
Primary Node to Backup tab
If the selected dataset is configured with secondary backup or mirror protection, this subtab is
displayed. If you change the default names of your primary or secondary nodes, the title of this
subtab matches your changes.
This subtab lists the following information about the connection between the primary node and
secondary node:
Relationships Displays the number of existing backup or mirror relationships for the connection
between the volume and qtree objects on the primary node and volume and qtree
objects on the secondary node.
Schedule
Displays the name of the schedule that is assigned to the backup or mirror
connection.
Throttle
Displays the name of the throttle schedule, if any, that is assigned to the backup or
mirror connection.
Lag Status
Displays the worst current lag status for the backup or mirror connection.
Note: If the selected dataset contains multiple connections between the primary node and multiple
secondary or tertiary nodes, then a subtab similar to this one is displayed for every such
connection.
Backup tab
If the selected dataset is configured with secondary backup or mirror protection, this subtab is
displayed. If you change the name of the secondary node, the title of this subtab matches your
change.
This subtab lists the following information about the secondary node:
Provisioning Policy Lists the provisioning policy, if any, that is assigned to the secondary node.
Physical Resources Lists the physical resources that are assigned to the secondary node.
Resource Pools
Lists the resource pools, if any, that are assigned to the secondary node.
Note: If the selected dataset contains multiple secondary or tertiary nodes, then a subtab similar to
this one is displayed for every such node.
Options
Name
Enables you to name, rename, view, and edit the dataset properties and naming
formats of the current dataset.
Data
Enables you to view and edit the virtual object membership of this dataset.
Local Policy Enables you to assign a local policy to this dataset (to execute local protection of this
dataset's virtual object members).
Storage service
Enables you to select a storage service for this dataset (to execute remote protection of
this dataset's virtual object members) or review an existing storage service
assignment.
A storage service assignment cannot be changed.
The job times are based on the time settings of the systems running associated
host services.
Secondary node name table column
If the current dataset's topology includes secondary storage, lists the following
information related to secondary storage:
Storage systems and resource pools that provision the secondary storage node
of the current dataset.
The schedule for remote backup and mirror protection jobs between primary
and secondary storage nodes and retention times for the backed up data.
The listed jobs and retention times are those that are specified by the protection
policy that is associated with the storage service that is assigned to the current
dataset. The times are based on the time settings of the DataFabric Manager
server.
If the retention duration and retention count for a secondary node are set to 0,
the table displays the text "No Transfer (No retention)" to indicate that no actual
secondary backup of primary data has occurred.
Tertiary node name table column
If the current dataset's topology includes tertiary storage, lists the following
information related to tertiary storage:
Storage systems and resource pools that provision the tertiary storage node of
the current dataset.
The schedule for remote backup and mirror protection jobs between secondary
and tertiary storage nodes and retention times for the backed up data.
The listed jobs and retention times are those that are specified by the protection
policy that is associated with the storage service that is assigned to the current
dataset.
If the retention duration and retention count for a tertiary node are set to 0, the
table displays the text "No Transfer (No retention)" to indicate that no actual
tertiary backup of data has occurred.
Command buttons
Test
Conformance
Allows you to pretest the conformance of the latest modifications that you have
made to the dataset configuration in this dialog box before you save and apply
those modifications.
OK
Saves the latest changes that you have made to the data in the Create Dataset
dialog box or Edit Dataset dialog box as the latest configuration for this dataset.
Cancel
Cancels any changes you have made to the settings in the Create Dataset dialog
box or Edit Dataset dialog box since the last time you opened it.
Name area
The Name area displays administrative and naming property information about the current dataset of
virtual objects.
General Properties tab: Displays the name of the dataset; enables you to enter dataset description, owner, and contact information about the dataset; and enables you to assign a resource group to the dataset.
Naming Properties tab: Enables you to accept global naming formats for the related objects of this dataset or to configure dataset-level naming formats to be applied to related objects of this dataset.
Related objects are Snapshot copy, primary volume, secondary volume, or secondary qtree objects that are generated by local policy or storage service protection jobs on this dataset.
Owner: Enables you to enter the name for the owner of this dataset.
Contact
The Naming Properties tab of the Create Dataset dialog box or Edit Dataset dialog box enables you
to select or specify values for the following dataset-level Naming Settings.
Custom label
Specifies a dataset-specific identification string that can be included in the name of all objects that a
protection job generates for this dataset.
Use dataset name: Includes the dataset name in the name of all objects that a protection job generates for this dataset.
Use custom label: Enables you to enter a custom character string to be included in the name of all objects that a protection job generates for this dataset.
Snapshot copy
Specifies the name format used for Snapshot copies that are generated by protection jobs run on this
dataset. The display of some of these options depends on your previous configuration choices:
Use global naming format: If you previously chose this format in the Global Naming Settings Snapshot Copy area in the Setup Options dialog box, selecting this option applies the global naming format for Snapshot copies to all Snapshot copies that a protection job generates for this dataset.
Use custom format: Enables you to specify a dataset-level naming format to apply to all Snapshot copies that a protection job generates for this dataset. You can enter the following attributes (separated by the underscore character) in this field in any order:
%T (timestamp attribute)
Name preview: Displays a sample Snapshot copy name that uses the default or custom naming format that you selected or specified.
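As a rough sketch only (the console's internal expansion logic is not documented here), a dataset-level format containing the %T attribute might be expanded like this; the exact timestamp layout below is an assumption for illustration:

```python
from datetime import datetime

def expand_snapshot_format(fmt, now):
    # Expand the %T (timestamp) attribute described above.
    # The timestamp layout used here is an assumed example, not the
    # console's documented format.
    return fmt.replace("%T", now.strftime("%Y-%m-%d_%H%M%S"))

# A custom format consisting only of the %T attribute:
print(expand_snapshot_format("%T", datetime(2011, 7, 1, 13, 30, 0)))
# 2011-07-01_133000
```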
Secondary volume
Specifies the name format used for secondary volumes that are generated by protection jobs run on
this dataset. The display of some of these options depends on your previous configuration choices:
Use global naming format: If you previously chose this format in the Global Naming Settings Secondary Volume area in the Setup Options dialog box, selecting this option applies the global naming format for secondary volumes to all secondary volumes that a protection job generates for this dataset.
Use custom format: Selecting this option enables you to specify a dataset-level naming format to apply to all secondary volumes that a protection job generates for this dataset. You can enter the following attributes (separated by the underscore character) in this field in any order:
The custom label, if any, that is specified for the secondary volume's containing dataset. If no custom label is specified, then the dataset name is included in the secondary volume name. This attribute enables you to specify a custom string of alphanumeric characters, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.
%S (primary storage system name)
The name of the primary storage system
%V (primary volume name)
The name of the primary volume
%C (type)
The connection type (backup or mirror)
Name preview: Displays a sample secondary volume name that uses the default or custom naming format that you selected or specified.
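The custom-label and attribute rules above can be sketched as follows; the attribute order and underscore joining are assumptions for illustration, not the console's documented behavior:

```python
def build_secondary_volume_name(custom_label, dataset_name,
                                primary_system, primary_volume,
                                connection_type):
    # If no custom label is specified, the dataset name is used instead,
    # and any blank space in the label is converted to the letter x,
    # as described in the text above.
    label = (custom_label or dataset_name).replace(" ", "x")
    # %S = primary storage system, %V = primary volume, %C = connection
    # type (backup or mirror); the ordering here is an assumption.
    return "_".join([label, primary_system, primary_volume, connection_type])

print(build_secondary_volume_name("sales data", "ds1", "filer1", "vol1", "backup"))
# salesxdata_filer1_vol1_backup
```

With no custom label, the same call falls back to the dataset name, producing ds1_filer1_vol1_backup.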
Secondary qtree
Specifies how secondary qtrees for this dataset that are generated by local policies or storage service
protection jobs are named. Options and information fields include the following:
Use global naming format: Selecting this option applies the global naming format for secondary qtrees to all secondary qtrees that a protection job generates for this dataset.
Use custom format: Selecting this option enables you to specify a dataset-level naming format to apply to all secondary qtrees that a protection job generates for this dataset. You can enter the following attributes (separated by the underscore character) in this field in any order:
The custom label, if any, that is specified for the secondary volume's containing dataset. If no custom label is specified, then the dataset name is included in the secondary volume name. This attribute enables you to specify a custom string of alphanumeric characters, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.
%S (primary storage system name)
The name of the primary storage system
%V (primary volume name)
The name of the primary volume
%C (type)
The connection type (backup or mirror)
%1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix if required to distinguish secondary
volumes with otherwise matching names
Name preview: Displays a sample secondary qtree name that uses the default or custom naming format that you selected or specified.
Data area
The Data area of the Create Dataset dialog box or Edit Dataset dialog box provides tabs and options
that enable you to add various kinds of virtual object types to the current dataset.
Data tab
The Data tab enables you to filter and select the virtual objects to include in the dataset.
Group: Specifies the OnCommand console resource group from which you want to select virtual objects to include in the dataset.
Resource Type: Specifies the virtual object types that you want to include in the dataset. You cannot include both VMware object types and Hyper-V object types in one dataset.
Available Resources: Lists the virtual objects that you can select for inclusion in the dataset. Only virtual objects in the selected resource group that match the selected resource type are displayed. Any VMware datacenter objects that you include in a dataset cannot be empty; they must contain datastore or virtual machine objects for successful backup.
Selected Resources: Lists the virtual objects that you have selected for inclusion in the dataset.
Selects the storage service that you want to assign to this dataset.
Selecting a storage service using this option displays the basic information for
that storage service in the content area.
Storage services enabled for disaster recovery support are not displayed.
After a storage service is selected and assigned to a dataset, the assignment
cannot be changed.
Name
Description
Owner
Contact: Displays the contact e-mail address of the person in charge of the selected storage service.
Backup Script Path
Topology: Displays the name and the graphical topology of the protection policy that the selected storage service uses to execute remote protection of the selected dataset.
If the current dataset's topology includes secondary storage, lists the following
information related to secondary storage:
Storage systems and resource pools that provision the secondary storage node
of the current dataset.
The schedule for remote backup and mirror protection jobs between primary
and secondary storage nodes and retention times for the backed up data.
The listed jobs and retention times are those that are specified by the protection
policy that is associated with the storage service that is assigned to the current
dataset.
Policy Name
Description
Add
Delete
Schedule list: Displays details of the local backup schedules that are in effect for the displayed local policy. For each local backup schedule, the following details are displayed:
Schedule Type
Start Time: The time of day that the local backups start for the associated schedule
End Time
Recurrence
Retention: The period of time that the backup copies associated with this schedule remain on the storage systems before becoming subject to automatic purging
Backup Options: Additional options that you can enable for the selected local policy.
Create VMware Snapshot
Include independent disks
Allow saved state backups
Start a remote backup after local backup
Issue a warning if there are no backups for: Specifies a period of time after which the OnCommand console issues a warning event if no local backup has successfully finished.
Issue an error if there are no backups for: Specifies a period of time after which the OnCommand console issues an error event if no local backup has successfully finished.
Backup Script Path: Specifies a path to an optional backup script (located on the system upon which the host service is installed) that can specify additional operations to be executed in association with local backups.
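As an illustrative sketch only, a backup script invoked by the host service might branch on the phase argument (the console passes "prebackup" or "postbackup" as the first argument, as in the .bat example elsewhere in this chapter); the return values and the actions in the comments are hypothetical:

```python
import sys

def main(argv):
    # argv[0] is the script path; argv[1] is the phase passed by the
    # host service. Everything else here is illustrative.
    if len(argv) < 2:
        return "no-phase"
    phase = argv[1]
    if phase == "prebackup":
        # e.g. quiesce an application before the Snapshot copy is made
        return "quiesced"
    if phase == "postbackup":
        # e.g. resume the application and log the finished backup
        return "resumed"
    return "ignored"

if __name__ == "__main__":
    print(main(sys.argv))
```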
Dataset Dependencies: If clicked, displays the other datasets that use this local policy and would thus be affected by changes made to the settings in this content area. This button appears only if the local policy is also assigned to other datasets.
Save: Saves any settings or changes made to settings in this content area for a new or an existing local policy. The OnCommand console enables this button if you make or modify any setting in this content area.
Cancel: Cancels any changes to the dataset's local policy settings that have not yet been saved during the current session.
Backups
Understanding backups
Types of backups
You can perform scheduled or on-demand local backups, or remote backups of datasets. Depending on your needs, the different backup types offer different ways to protect your data.
Scheduled local backups: You can create scheduled local backups by adding or editing datasets and their application policies. The host service runs local backups, so even when the OnCommand console is down, your backups continue to run.
On-demand local backups: You can create on-demand local backups as you need them. On-demand backups apply to datasets. You can add specific virtual machines or datastores to existing or new datasets for backup. You can also select specific settings for on-demand backups that might differ from the local policy that might be attached to a dataset, including starting a remote backup after the local backup operation.
Remote backups: You can create remote backups by assigning a storage service to the selected dataset. The storage service you assign to the dataset determines when and how the remote backup occurs. To create a remote backup, you must first create or edit a local dataset backup, or perform an on-demand dataset backup. During dataset creation, you can add the storage service to the dataset you want to back up.
The following arguments apply to scripts that run before the backup occurs:
prebackup
resourceids: Specifies the colon-separated list of resource IDs that are backed up.
datasetid
backupid
snapshots: Specifies the comma-separated list of Snapshot copies that constitute the backup. You should use one of the following formats:
storage system:/vol/volx:snapshot
storage system:/vol/volx/lun:snapshot
storage system:/vol/volx/lun/qtree:snapshot
Example
The following script (.bat file) is invoked after the backup:
echo ********************* > C:\post.txt
IF %1 == postbackup echo %2 >> C:\post.txt
IF %1 == postbackup echo %3 >> C:\post.txt
IF %1 == postbackup echo %4 >> C:\post.txt
IF %1 == postbackup echo %5 >> C:\post.txt
IF %1 == postbackup echo %6 >> C:\post.txt
IF %1 == postbackup echo %7 >> C:\post.txt
IF %1 == postbackup echo %8 >> C:\post.txt
IF %1 == postbackup echo %9 >> C:\post.txt
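The snapshots argument formats listed above (storage system, path, and Snapshot copy separated by colons, entries separated by commas) can be parsed with a short sketch like the following; the dictionary keys are illustrative, not part of any documented API:

```python
def parse_snapshots(arg):
    # Each entry has the form system:path:snapshot, where path is
    # /vol/volx, /vol/volx/lun, or /vol/volx/lun/qtree, as documented
    # above for the snapshots argument.
    result = []
    for entry in arg.split(","):
        system, path, snapshot = entry.split(":")
        result.append({"system": system, "path": path, "snapshot": snapshot})
    return result

snaps = parse_snapshots("filer1:/vol/vol1:snap1,filer2:/vol/vol2/lun1:snap2")
print(len(snaps))  # 2
```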
Unlike VMware backups, you cannot mount or unmount Hyper-V backups by clicking a button. Instead, you can mount Hyper-V backups by using the Snapshot copy, virtual hard disk (VHD), LUN, and storage system information available in the OnCommand console GUI.
You cannot mount a backup on the same or a different ESX server if that backup is already mounted. You must unmount the backup from the first ESX server before mounting it on a different ESX server.
You can mount a local backup and a remote backup on any ESX host that is managed by the
same host service that was used when the backup was created.
If you include the same datastore in multiple backups and those backups are mounted, that
datastore is mounted multiple times.
These mounted datastores can be differentiated because each name includes the mount timestamp and the dataset name.
Backup and restore of mounted objects is not supported.
If any data is written to the mounted datastore, that data is lost when you unmount the backup.
If a backup is mounted, you cannot delete it, even if it has expired, until you unmount the backup.
While mounting a remote mirror backup, if the corresponding primary mirror backup has already
been deleted, the mount request fails with a backup not found error.
After you mount a backup, the time it takes to copy data from the datastore depends on your
network bandwidth and whether this datastore is on a secondary storage system.
VSS requestor
The VSS requestor is a backup application, such as the Hyper-V plug-in or NTBackup. It initiates VSS backup and restore operations. The requestor also specifies Snapshot copy attributes for the backups it initiates.
VSS writer
The VSS writer owns and manages the data to be captured in the Snapshot copy. The Hyper-V plug-in is an example of a VSS writer.
VSS provider
The VSS provider is responsible for the creation and management of the Snapshot copy. A
provider can be either a hardware provider or a software provider:
A hardware provider integrates storage array-specific Snapshot copy and cloning functionality
into the VSS framework. The Data ONTAP VSS Hardware Provider integrates the SnapDrive
service and storage systems running Data ONTAP into the VSS framework.
Note: The Data ONTAP VSS Hardware Provider is installed automatically as part of the
SnapDrive software installation.
1. Select Start > Run and enter the following command to open a Windows command prompt:
cmd
1. Navigate to System Tools > Event Viewer > Application in MMC and look for an event with
the following values.
Source: Navsspr
Event ID: 4089
Note: VSS requires that the provider initiate a Snapshot copy within 10 seconds. If this time
limit is exceeded, the Data ONTAP VSS Hardware Provider logs Event ID 4364. This limit
could be exceeded due to a transient problem. If this event is logged for a failed backup, retry
the backup.
you want to provide with local protection requires a separate local policy with a schedule that does
not overlap the schedule of any other local policy in effect.
SnapManager for Hyper-V does not automatically delete backups after you remove the protection policy. When you no longer need a backup, you must manually delete it in SnapManager for Hyper-V.
Reinstalling SnapManager for Hyper-V: After you have transitioned all of your SnapManager for Hyper-V dataset information to the OnCommand console and uninstalled SnapManager for Hyper-V, you should not reinstall SnapManager for Hyper-V.
Managing backups
Performing an on-demand backup of virtual objects
You can protect your virtual machines or datastores by adding them to an existing or new dataset and
performing an on-demand backup.
Before you begin
You must have reviewed the Guidelines for performing an on-demand backup on page 277
You must have reviewed the Requirements and restrictions when performing an on-demand
backup on page 279
You must have added the virtual objects to an existing dataset or have created a dataset and added
the virtual objects that you want to back up.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
You must have the following information available:
Dataset name
Retention duration
Backup settings
Backup script location
Backup description
If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently
restoring those virtual machines, the backup might fail.
Steps
If you want to back up...
Then...
You can monitor the status of your backup from the Jobs tab.
Related references
You must select the dataset that you want to back up.
You can define the retention duration and the backup settings for your on-demand backup, as needed.
Retention: You can choose to keep a backup until you manually delete it, or you can assign a retention duration. By specifying a length of time to keep the on-demand local backup, you can override the retention duration in the local policy you assigned to the dataset for this backup. The retention duration of a local backup maps to a retention type for the remote backup.
A combination of the remote backup retention type and the storage service is used to determine the remote backup retention duration. For example, if you specify a local backup retention duration of two days, the retention type of the remote backup is Daily. The dataset storage service then verifies how long daily remote backups are kept and applies this duration to the backup. This is the retention duration of the remote backup.
The remote backup retention type corresponding to the local backup retention duration is one of the following: Hourly, Daily, Weekly, or Monthly.
Backup settings: You can choose your on-demand backup settings based on the type of virtual objects you want to back up.
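To make the duration-to-type mapping concrete, the following sketch maps a local retention duration to a remote retention type; the only documented data point is that a two-day local retention maps to Daily, so the boundary values below are assumptions chosen purely for illustration:

```python
def remote_retention_type(local_retention_hours):
    # Boundary values are assumptions, not documented thresholds;
    # the documented example is only that 2 days maps to Daily.
    if local_retention_hours < 24:          # under a day (assumed)
        return "Hourly"
    if local_retention_hours < 7 * 24:      # under a week (assumed)
        return "Daily"
    if local_retention_hours < 30 * 24:     # under a month (assumed)
        return "Weekly"
    return "Monthly"

# The documented example: two days of local retention -> Daily.
print(remote_retention_type(48))  # Daily
```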
Allow saved state backup (Hyper-V only)
Create VMware snapshot (VMware only)
Include independent disks (VMware only)
Backup script path: You can specify a script that is invoked before and after the local backup. The script is invoked on the host service, and the path is local to the host service. If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.
Backup description: You can provide a description for the on-demand backup so that you can easily find it when you need it.
Virtual machines or datastores must first belong to a dataset before backing up.
You can add virtual objects to an existing dataset or create a new dataset and
add virtual objects to it.
Hyper-V specific requirements: Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB of free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine.
Hyper-V virtual machine configuration files, snapshot copy files, and VHDs
must reside on Data ONTAP LUNs, otherwise backup operations fail.
VMware specific requirements: Backup operations of datasets containing empty VMware datacenters or datastores will fail. All datacenters must contain datastores or virtual machines to successfully perform a backup.
Virtual disks must be contained within folders in the datastore. If virtual disks
exist outside of folders on the datastore, and that data is backed up, restoring
the backup could fail.
NFS backups might take more time than VMFS backups because it takes more time for VMware to commit snapshots in an NFS environment.
Hyper-V specific restrictions: Partial backups are not supported. If the Hyper-V VSS writer fails to back up one of the virtual machines in the backup and the failure occurs at the Hyper-V parent host, the backup fails for all of the virtual machines in the backup.
You must have reviewed the Guidelines for performing an on-demand backup on page 277
You must have reviewed the Requirements and restrictions when performing an on-demand
backup on page 279
You must have added the virtual objects to an existing dataset or have created a dataset and added
the virtual objects that you want to back up.
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
You must have the following information available:
Dataset name
Retention duration
Backup settings
Backup script location
Backup description
If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently
restoring those virtual machines, the backup might fail.
Steps
If you have already established local policies for the dataset, that information automatically
appears for the local protection settings for the on-demand backup. If you change the local
protection settings, the new settings override any existing application policies for the dataset.
5. If you want a remote backup to begin after the local backup has finished, select the Start remote
backup after local backup box.
6. Click Back Up Now.
After you finish
You can monitor the status of your backup from the Jobs tab.
Related references
You must select the dataset that you want to back up.
Local protection settings: You can define the retention duration and the backup settings for your on-demand backup, as needed.
Retention: You can choose to keep a backup until you manually delete it, or you can assign a retention duration. By specifying a length of time to keep the on-demand local backup, you can override the retention duration in the local policy you assigned to the dataset for this backup. The retention duration of a local backup maps to a retention type for the remote backup.
A combination of the remote backup retention type and the storage service is used to determine the remote backup retention duration. For example, if you specify a local backup retention duration of two days, the retention type of the remote backup is Daily. The dataset storage service then verifies how long daily remote backups are kept and applies this duration to the backup. This is the retention duration of the remote backup.
The remote backup retention type corresponding to the local backup retention duration is one of the following: Hourly, Daily, Weekly, or Monthly.
Backup settings: You can choose your on-demand backup settings based on the type of virtual objects you want to back up.
Allow saved state backup (Hyper-V only)
Create VMware snapshot (VMware only)
Include independent disks (VMware only)
Backup script path: You can specify a script that is invoked before and after the local backup. The script is invoked on the host service, and the path is local to the host service. If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.
Backup description: You can provide a description for the on-demand backup so that you can easily find it when you need it.
Clustered virtual machine considerations (Hyper-V only)
Dataset backups of clustered virtual machines take longer to complete when the virtual machines run
on different nodes of the cluster. When virtual machines run on different nodes, separate backup
operations are required for each node in the cluster. If all virtual machines run on the same node,
only one backup operation is required, resulting in a faster backup.
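The node-grouping behavior described above can be sketched as follows; this illustrates only the counting logic (one backup operation per cluster node that hosts at least one virtual machine in the dataset), not actual console code:

```python
from collections import defaultdict

def backup_operations_needed(vm_to_node):
    # Group the dataset's virtual machines by the cluster node they run
    # on; each node with at least one VM requires its own backup
    # operation, as described in the text above.
    nodes = defaultdict(list)
    for vm, node in vm_to_node.items():
        nodes[node].append(vm)
    return len(nodes)

# All VMs on one node -> a single, faster backup operation.
print(backup_operations_needed({"vm1": "nodeA", "vm2": "nodeA"}))  # 1
# VMs spread across two nodes -> two separate operations.
print(backup_operations_needed({"vm1": "nodeA", "vm2": "nodeB"}))  # 2
```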
Requirements and restrictions when performing an on-demand backup
You must be aware of the requirements and restrictions when performing an on-demand backup.
Some requirements and restrictions apply to all types of objects and some are specific to Hyper-V or
VMware virtual objects.
Requirements
Virtual machines or datastores must first belong to a dataset before backing up.
You can add virtual objects to an existing dataset or create a new dataset and
add virtual objects to it.
Hyper-V specific requirements: Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB of free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine.
Hyper-V virtual machine configuration files, snapshot copy files, and VHDs
must reside on Data ONTAP LUNs, otherwise backup operations fail.
VMware specific requirements: Backup operations of datasets containing empty VMware datacenters or datastores will fail. All datacenters must contain datastores or virtual machines to successfully perform a backup.
Virtual disks must be contained within folders in the datastore. If virtual disks
exist outside of folders on the datastore, and that data is backed up, restoring
the backup could fail.
NFS backups might take more time than VMFS backups because it takes more time for VMware to commit snapshots in an NFS environment.
Hyper-V specific restrictions: Partial backups are not supported. If the Hyper-V VSS writer fails to back up one of the virtual machines in the backup and the failure occurs at the Hyper-V parent host, the backup fails for all of the virtual machines in the backup.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from
mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy
to be mountable, its associated mirror source backup copy must still exist on the source node.
Steps
A dialog box appears with a link to the mount job; when you click the link, the Jobs tab appears.
After you finish
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If there are virtual objects in use from the previously mounted datastores of a backup, the unmount operation fails and the backup's state reverts to not mounted. You must manually clean up the backup before mounting it again.
If all the datastores of the backup are in use, the unmount operation fails, but this backup's state changes to mounted. You can unmount the backup after determining that the datastores are not in use.
Steps
If the ESX server becomes inactive or restarts during an unmount operation, the job is terminated, the mount state remains mounted, and the backup stays mounted on the ESX server.
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from
mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy
to be mountable, its associated mirror source backup copy must still exist on the source node.
Steps
After you finish
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If there are virtual objects in use from the previously mounted datastores of a backup, the unmount operation fails and the backup's state reverts to not mounted. You must manually clean up the backup before mounting it again.
If all the datastores of the backup are in use, the unmount operation fails, but this backup's state changes to mounted. You can unmount the backup after determining that the datastores are not in use.
Steps
If the ESX server becomes inactive or reboots during an unmount operation, the job is terminated, the mount state remains mounted, and the backup stays mounted on the ESX server.
You can monitor the status of your mount and unmount jobs in the Jobs tab.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
You can locate a specific backup copy by searching on one of the following criteria:
Steps
You can locate multiple backup versions by inserting a comma between search terms, and you can clear the search field to view all backups.
3. Click Find.
Related references
Two Snapshot copy names are displayed for a Hyper-V backup. You must choose the Snapshot copy
with the suffix _backup to mount the backup. This ensures that you select the copy containing
application-consistent data.
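As a small illustration of the selection rule above (the snapshot names are hypothetical), picking the application-consistent copy amounts to choosing the name with the _backup suffix:

```python
def pick_mountable_snapshot(snapshot_names):
    # A Hyper-V backup shows two Snapshot copy names; per the rule
    # above, the one ending in _backup holds application-consistent
    # data and is the one to mount.
    for name in snapshot_names:
        if name.endswith("_backup"):
            return name
    return None

print(pick_mountable_snapshot(["nightly_0500", "nightly_0500_backup"]))
# nightly_0500_backup
```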
Connecting to a LUN in a Snapshot copy
You can connect to a LUN in a Snapshot copy by using either a FlexClone volume or a read/write connection to the LUN, depending on the version of Data ONTAP installed on your storage system.
Before you begin
You must have the FlexClone license enabled to connect to a LUN that resides on a volume with a
SnapMirror or SnapVault destination.
Steps
1. Under SnapDrive in the left MMC pane, expand the instance of SnapDrive you want to manage,
then expand Disks and select the disk you want to manage.
2. Expand the LUN whose Snapshot copy you want to connect, then click on Snapshot Copies to
display the list of Snapshot copies. Select the Snapshot copy you want to connect.
3. From the menu choices at the top of MMC, navigate to Action > Connect Disk to launch the
Connect Disk wizard.
4. In the Connect Disk Wizard, click Next.
5. In the Provide a Storage System Name, LUN Path and Name panel, the information for the
LUN and Snapshot copy you selected is automatically filled in. Click Next.
6. In the Select a LUN Type panel, Dedicated is automatically selected because a Snapshot copy
can be connected only as a dedicated LUN. Click Next.
7. In the Select LUN Properties panel, either select a drive letter from the list of available drive
letters or type a volume mount point for the LUN you are connecting, then click Next.
When you create a volume mount point, type the drive path that the mounted drive will use: for
example, G:\mount_drive1\.
8. In the Select Initiators panel, select the FC or iSCSI initiator for the LUN you are connecting
and click Next.
9. In the Select Initiator Group management panel, specify whether you will use automatic or
manual igroup management.
If you specify Automatic igroup management: SnapDrive uses existing igroups, one igroup per initiator, or, when necessary, creates new igroups for the initiators you specified in the Select Initiators panel. Click Next.
If you specify Manual igroup management:
a. In the Select Initiator Groups panel, select from the list the igroups to which you want the new LUN to belong.
Note: A LUN can be mapped to an initiator only once.
OR
Click Manage Igroups and, for each new igroup you want to create, type a name in the Igroup Name text box, select initiators from the initiator list, click Create, and then click Finish to return to the Select Initiator Groups panel.
b. Click Next.
10. In the Completing the Connect Disk Wizard panel, perform the following actions:
a. Verify all the settings.
b. If you need to change any settings, click Back to go back to the previous wizard panels.
c. Click Finish.
Result
The newly connected LUN appears under Disks in the left MMC pane.
Viewing the contents of a LUN
You can view the contents in a LUN, including VHDs and other files. Viewing the contents of the
LUN enables you to confirm that you have the correct data before performing a restore operation on
the whole Hyper-V backup.
Before you begin
You must have installed Windows 2008 R2 and the Windows Disk Management Snap-In.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. To view the contents of a specific VHD, use Windows Explorer to locate the VHD in the LUN
you mounted using SnapDrive for Windows.
2. Using the Windows Disk Management Snap-In, right-click the VHD and select Attach VHD.
3. Specify the VHD location and click OK.
The VHD is mounted on the Hyper-V parent host.
4. Verify the contents of the VHD.
5. Using the Windows Disk Management Snap-In, right-click the VHD and select Detach VHD.
After you finish
You can now disconnect the LUN using SnapDrive for Windows to unmount the disk mounted from
the Snapshot copy.
Disconnecting a LUN
You can use the SnapDrive for Windows MMC snap-in to disconnect a dedicated or shared LUN, or
a LUN in a Snapshot copy or in a FlexClone volume.
Before you begin
Make sure that neither Windows Explorer nor any other Windows application is using or
displaying any file on the LUN you intend to disconnect. If any files on the LUN are in use, you
will not be able to disconnect the LUN except by forcing the disconnect.
If you are disconnecting a disk that contains volume mount points, change, move, or delete the
volume mount points on the disk first before disconnecting the disk containing the mount points;
otherwise, you will not be able to disconnect the root disk. For example, disconnect G:
\mount_disk1\, then disconnect G:\.
Before you decide to force a disconnect of a SnapDrive LUN, be aware of the following
consequences:
Any cached data intended for the LUN at the time of forced disconnection is not committed to
disk.
Any mount points associated with the LUN are also removed.
A pop-up message announcing that the disk has undergone "surprise removal" appears in the
console session.
Under ordinary circumstances, you cannot disconnect a LUN that contains a file being used by an
application such as Windows Explorer or the Windows operating system. However, you can force a
disconnect to override this protection. When you force a disk to disconnect, it results in the disk
being unexpectedly disconnected from the Windows host.
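The mount-point guideline above (disconnect nested mount points such as G:\mount_disk1\ before the root disk G:\) amounts to ordering the paths deepest-first. The following is an illustrative sketch of that ordering, not a SnapDrive API; the helper name and paths are hypothetical.

```python
def disconnect_order(mount_points):
    """Sort Windows mount paths deepest-first so that nested mount
    points are disconnected before the root disk that contains them."""
    # A path with more backslash-separated components is deeper.
    return sorted(mount_points,
                  key=lambda p: p.rstrip("\\").count("\\"),
                  reverse=True)

# Per the example above: disconnect G:\mount_disk1\ before G:\
order = disconnect_order(["G:\\", "G:\\mount_disk1\\"])
```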
Steps
1. Under SnapDrive in the left MMC pane, expand the instance of SnapDrive you want to manage,
then expand Disks and select the disk you want to manage.
2. From the menu choices at the top of MMC, navigate to either Action > Disconnect Disk to
disconnect normally, or Action > Force Disconnect Disk to force a disconnect.
3. When prompted, click Yes to proceed with the operation.
Note: This procedure will not delete the folder that was created at the time the volume mount
point was added. After you remove a mount point, an empty folder will remain with the same
name as the mount point you removed.
The icons representing the disconnected LUN disappear from both the left and right MMC
panels.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
You can locate a specific backup copy by searching one of the following criteria:
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from
mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy
to be mountable, its associated mirror source backup copy must still exist on the source node.
Steps
Monitoring backups
Monitoring local backup progress
You can monitor the progress of your backup job to see whether it is running, succeeded, or failed.
Steps
Page descriptions
Backups tab
The Backups tab displays scheduled and on-demand backup versions that contain virtual objects, but
does not display storage backup versions. You can view detailed information about each virtual
object backup version and restore virtual objects.
Command buttons
Edit
Delete
Deletes the dataset backup. The Delete button is disabled for mounted backups.
Mount
Enables you to mount a selected VMware backup to an ESX server if you want to
verify its content before restoring it.
Unmount
Enables you to unmount a VMware backup after you mount it on an ESX server and
verify its contents.
Refresh
Search field Enables you to search for a backup by description, resource name, or vendor object
ID.
Find
Clear
Backups list
Backup ID
Dataset
Version
Description
Specifies whether the backup is local or remote, as well as backup and mirror
information. Values are Local, Local (Primary data) for Hyper-V, Remote
(Backup), and Remote (Mirror).
VMware Snapshot Displays as Yes if the backup contains a VMware snapshot, No if it
does not, and Not Applicable if it contains Hyper-V virtual objects.
Mount State
Specifies the mount state of the VMware backup. Values are Not Mounted,
Mounted, Mounting, and Unmounting.
Type
Restorable Entities
Restore button
Resource
Displays the virtual machines or datastores that belong to the specified backup
in the Backups list.
Datastore type
Displays the type of datastore. Values are VMFS or NFS. If the virtual
machine resides on an NFS datastore, and belongs to a local backup, the ESX
Host Name field is disabled.
Is Data ONTAP
Is template
Specifies whether the resource is a template. If the resource is a template, the Start
virtual machine after restore check box is disabled.
State
The Groups drop-down list is not applicable to the Manage Backups window.
Related references
Restore
Understanding restore
Restoring data from backups
The OnCommand console allows you to restore your virtual machines and datastores from legacy
backups and from backups taken of newly created datasets that contain your virtual machines and
datastores. The OnCommand console supports restore from local and remote backups and from
backups that contain VMware-based snapshots.
Backup selection
The OnCommand console allows you to browse the Backups tab or the Server tab for
backups to restore from, when determining which datastore, virtual machine, or virtual
disk files in the virtual machine to restore. When you select a datastore, all virtual
machines in the datastore are restored.
The table of backups provides centralized management of VMware and Hyper-V entities. You can
filter the table to show only backups with a backup ID, dataset, description, node name (full or
partial), VMware snapshot, mount state, or resource name of a datastore or virtual machine.
You can specify multiple backup names and IDs by entering them in a comma-separated
list. The result lists all of the backups that match at least one of the given names
or IDs.
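The comma-separated search semantics described above can be sketched as follows. This is an illustrative sketch of the matching logic only; the field names and sample records are hypothetical, not the console's actual data model.

```python
def filter_backups(backups, query):
    """Return the backups whose ID or name matches any term in a
    comma-separated query string."""
    terms = {t.strip() for t in query.split(",") if t.strip()}
    return [b for b in backups
            if b["backup_id"] in terms or b["name"] in terms]

backups = [
    {"backup_id": "101", "name": "nightly_vm1"},
    {"backup_id": "102", "name": "weekly_ds2"},
]
# One backup matches by ID, the other by name.
matches = filter_backups(backups, "101, weekly_ds2")
```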
Related tasks
Original location
The backup of an entire datastore, a single virtual machine, or a virtual machine's
disk files is restored to the original location. You set this location by choosing
The entire virtual machine option.
Different location
The backup of the virtual machine disk files is restored to a different location. You
select the destination location by setting the Particular virtual disks option.
The following arguments apply to scripts that run before the restore operation occurs:
prerestore
resourceids
Specifies the colon-separated list of resource IDs that are backed up.
backupid
snapshots
storage system:/vol/volx:snapshot
storage system:/vol/volx/lun:snapshot
storage system:/vol/volx/lun/qtree:snapshot
resourceids
Specifies the colon-separated list of resource IDs that are being backed up.
backupid
Managing restore
Restoring data from backups created by the OnCommand console
You can restore a datastore, virtual machine, or its disk files to its original location or an alternate
location. From the Backup Management panel, you can sort the backup listings by vendor type to
help you find your backups.
From the Backup Management panel, you can do the following:
Restore a datastore, virtual machine, or its disk files from a local or remote backup to an
original location.
Restore virtual machine disk files from a local or remote backup to a different location.
Restore from a backup that has a VMware snapshot.
Restoring a datastore
You can use the OnCommand console to restore a datastore. By doing so, you overwrite the existing
content with the backup you select.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If you use a PowerShell script, you should use the drive letter convention. For other types of scripts,
you can use either the drive letter convention or the Universal Naming Convention.
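The two path conventions mentioned above can be told apart mechanically: drive-letter paths begin with a letter and a colon (for example, G:\), while Universal Naming Convention (UNC) paths begin with two backslashes. The sketch below is illustrative only; the function name and the example script paths are hypothetical.

```python
import re

def path_convention(path):
    """Classify a script path as drive-letter (G:\\...), UNC
    (\\\\server\\share\\...), or unknown."""
    if re.match(r"^[A-Za-z]:\\", path):
        return "drive letter"
    if path.startswith("\\\\"):
        return "UNC"
    return "unknown"

path_convention("G:\\scripts\\pre_restore.ps1")   # drive-letter path
path_convention("\\\\server\\share\\script.cmd")  # UNC path
```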
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
The process for restoring a VMware virtual machine differs from restoring a Hyper-V virtual
machine in that you can restore an entire virtual machine or its disk files. Once you start the
restoration, you cannot stop the process, and you cannot restore from a backup of a virtual machine
after you delete the dataset the virtual machine belonged to.
If you use a PowerShell script, you should use the drive letter convention. For other types of scripts,
you can use either the drive letter convention or the Universal Naming Convention.
Steps
Description
Entire virtual machine
Restores the contents of your virtual machine from a Snapshot copy to its original
location. The Start virtual machine after restore checkbox is enabled if you select
this option and the virtual machine is registered.
Particular virtual disks
Restores the contents of the virtual disks on a virtual machine to a different location.
This option is enabled if you uncheck the Entire virtual machine option. You can set a
destination datastore for each virtual disk.
6. In the ESX host name field, select the name of the ESX host. The ESX host is used to mount the
virtual machine components.
This option is available if you want to restore virtual disk files or the virtual machine is on a
VMFS datastore.
7. In the Pre/Post Restore Script Path field, type the name of the script that you want to run before
or after the restore operation.
8. From this wizard, click Restore to begin the restoration.
Related tasks
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Before attempting a restore operation on a Hyper-V virtual machine, you must ensure that
connectivity to the storage system exists. If there is no connectivity, the restore operation fails.
About this task
If you use a PowerShell script, you should use the drive letter convention. For other types of scripts,
you can use either the drive letter convention or the Universal Naming Convention.
If storage system volumes are renamed, you cannot restore a virtual machine created prior to
renaming the volumes.
Steps
Description
Restores the contents of your virtual machine from a Snapshot copy and
restarts the virtual machine after the operation completes.
Description
Pre/Post Restore Script: Runs a script that is stored on the host service before or after the restore
operation.
The Restore wizard displays the location of the virtual hard disk (.vhd) file.
6. From this wizard, click Restore to begin the restoration.
Monitoring restore
Viewing restore job details
After you start the restore job, you can track the progress of the restore job from the Jobs tab and
monitor the job for possible errors.
Steps
Reports
Understanding reports
Reports management
You can print, export, and share data in the reports that are generated by the OnCommand console.
You can also schedule a report and send the report schedule to one or more users.
If you want to do this...
Print a report
Export a report
Select the report, click the toolbar icon, and select the Parameters option.
This option is valid only for the Events reports.
Share a report
Schedule a report
Delete a report
Select the column, click the arrow on the right, and click
the Header option.
Select the column, click the arrow on the right, and click
Group > Add Group.
Remove groups
Select the column, click the arrow on the right, and click
Group > Delete Inner Group.
This option is displayed only when data is organized into
groups.
Select the column, click the arrow on the right, and click
Group > Hide Detail.
This option is displayed only when data is organized into
groups.
Select the column, click the arrow on the right, and click
Group > Page Break.
Hide columns
Select the column, click the arrow on the right, and click
Column > Hide Column.
Select the column, click the arrow on the right, and click
Column > Show Column.
Delete a column
Select the column, click the arrow on the right, and click
Column > Delete Column.
Compute a column
Select the column, click the arrow on the right, and click
Column > New Computed Column.
Reorder columns
Select the column, click the arrow on the right, and click
Column > Reorder Columns.
Select the column, click the arrow on the right, and click
Column > Do Not Repeat Values.
Select the column, click the arrow on the right, and click
Column > Repeat Values.
Aggregate data
Select the column, click the arrow on the right, and click
Aggregation.
Filter data
Select the column, click the arrow on the right, and click
Filter > Filter.
Sort data
Select the column, click the arrow on the right, and click
Sort.
Format a column
Select the column, click the arrow on the right, and click
Format > Font.
Select the column, click the arrow on the right, and click
Format > Conditional Formatting.
Warning
Error
The object is still performing without service disruption, but its performance might be
affected.
Critical
The object is still performing but service disruption might occur if corrective action is
not taken immediately.
Emergency The object unexpectedly stopped performing and experienced unrecoverable data loss.
You must take corrective action immediately to avoid extended downtime.
Unknown
The object is in an unknown transitory state. This status is displayed only for a brief
period.
Managing reports
Scheduling reports
You can use the Reports tab to schedule reports to be generated and sent by e-mail message to one or
more users on a recurring basis at a specified date and time. For example, you can schedule a report
to be sent as e-mail, in the HTML format, every Monday.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
The new schedule for a report is saved in the DataFabric Manager server, and is displayed in the
Saved Settings.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Sharing reports
You can share a report with one or more users. The report is sent by e-mail message to the specified
users instantly.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If you customize a report, you should save the changes before you share it. If not, when you share the
report, the changes are not displayed. You can save changes made to a custom report without
changing the report's current name. For detailed reports, you can save the changes with a new report
name.
Steps
5. Click Ok.
Related references
Deleting a report
You can delete one or more custom reports when they are no longer necessary.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Page descriptions
Reports tab
The Reports tab enables you to view detailed information about the reports that you generate. You
can search for a specific report, save a detailed report as a custom report, and delete a custom report.
You can also share and schedule a report.
Reports tab details
You can view the following information in the Viewer tab:
Navigation tree Displays detailed and custom reports along with the available subcategories (for
example, Events, Inventory, and Storage capacity). You can click the name of a
report in the navigation tree to view the report contents in the reporting area.
Search filter
Enables you to search for a specific report by entering the report name in the
search filter.
Toolbar
Enables you to navigate to specific pages of the report, export the report, undo or
redo your actions, and so on.
Reporting area Displays, in tabular format or as a combination of charts and tabular data, the
contents of a selected report.
Command buttons
The command buttons enable you to perform the following tasks for a selected report:
Save
Saves the changes made to a custom report without changing the report's current
name.
Note: This option is disabled for detailed reports.
Save As
Displays the Save As dialog box, which enables you to save changes made to a
report and to give the report a new name.
Delete
Enables you to delete one or more custom reports when they are no longer necessary.
Note: This option is disabled for detailed reports.
Schedule Enables you to schedule the report to be generated on a recurring basis at a specified
date and time. The report is sent by e-mail message to one or more users at the specified
date and time.
Share
Enables you to share a report with one or more users. The report is sent by e-mail
message to the specified users instantly.
Refresh
Note: You can also save, delete, schedule, or share a report by right-clicking the selected report in
the navigation tree.
Saved Settings
Saved Settings displays all of the schedules for a report. You can create a new schedule by selecting
the New option. You can rename or delete a schedule by right-clicking the schedule and choosing the
appropriate option. You can also modify a schedule by selecting the schedule and making the
required changes.
Properties
E-mail
Specifies either the e-mail address or the alias of the administrator or the user to whom
you want to send the report schedule. You can specify one or more entries, separated
by commas. This is a mandatory field.
Format
Specifies the format in which you want to schedule the report. The HTML option is
selected by default.
Frequency Specifies the frequency at which you want to schedule the report. The Hourly at
Minute option is selected by default.
Scope
Specifies the group, storage system, resource pool, or quota user for which the report is
generated.
Note: For groups, you can navigate only to the direct members of a group.
Command buttons
The command buttons enable you to perform the following tasks:
Browse Enables you to browse through the available resources. The resource you select then
defines the scope of the generated report.
Apply
Ok
Updates the properties that you specify for a report schedule, and closes the Schedule
Report dialog box.
Cancel Enables you to undo the schedule report configuration and closes the Schedule Report
dialog box.
Properties
E-mail Specifies either the e-mail address or the alias of the administrator or the user with whom
you want to share the report. You can specify one or more entries, separated by commas.
This is a mandatory field.
Subject Specifies the subject of the e-mail. By default, the name of the report is displayed.
Format Specifies the format in which you want to share the report. The HTML option is selected
by default.
Scope
Specifies the group, storage system, resource pool, or quota user for which the report is
generated.
Note: For groups, you can navigate only to the direct members of a group.
Command buttons
The command buttons enable you to perform the following tasks:
Browse Enables you to browse through the available resources. The resource you select then
defines the scope of the generated report.
Ok
Cancel Enables you to undo the share report configuration and closes the Share Report dialog
box.
Events reports
Related references
Page descriptions
Events Current report
The Events Current report displays information about events that are current and yet to be resolved.
The information includes the name, severity, and cause of the event.
Note: To display the charts and icons in a report, you must ensure that a DNS mapping is
established between the client machine from which you are starting the Web connection and the
host name of the system on which the DataFabric Manager server is installed.
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
Severity
Event
Triggered On
Displays the date and time when the event was generated.
Acknowledged By Displays the name of the administrator who acknowledged the event.
Acknowledged
Displays the date and time when the event was acknowledged.
Source
Displays the name of the object with which the event is associated.
Related references
Note: To display the icons in a report, you must ensure that a DNS mapping is
established between the client machine from which you are starting the Web
connection and the host name of the system on which the DataFabric Manager server
is installed.
Severity
Event
Triggered On
Displays the date and time when the event was generated.
Acknowledged By Displays the name of the administrator who acknowledged the event.
Acknowledged
Displays the date and time when the event was acknowledged.
Source
Displays the name of the object with which the event is associated.
Resolved On
Displays the date and time when the event was resolved.
Resolved By
Related references
Inventory reports
Understanding inventory reports
What inventory reports are
Inventory reports provide information about objects such as aggregates, volumes, qtrees, and LUNs.
Inventory report types are as follows:
Aggregates report
File Systems report
LUNs report
Qtrees report
Storage Systems report
vFiler Units report
Volumes report
Storage Services report
Storage Service Policies report
Storage Service Datasets report
Page descriptions
Aggregates report
The Aggregates report displays information such as the type, state, and status of the aggregate, and
the SnapLock feature in the aggregate.
Note: To display the icons in a report, you must ensure that a DNS mapping is established between
the client machine from which you are starting the Web connection and the host name of the
system on which the DataFabric Manager server is installed.
Aggregate
Storage System
Displays the name of the storage system that contains the aggregate.
Type
Aggregate
Striped Aggregate
Block Type
Displays the block format of the aggregate as 32_bit or 64_bit. By default, this
column is hidden.
RAID
Displays the RAID protection scheme. The RAID protection scheme can be one of
the following:
raid0
raid4
raid_dp
State
Displays the current state of the aggregate. An aggregate can be in one of the
following three states:
Offline
Restricted Some operations such as parity reconstruction are allowed, but data
access is not allowed.
Online
Status
Displays the current status of the aggregate based on the events generated for the
aggregate. The status can be Normal, Warning, Error, Critical, Emergency, or
Unknown.
Mirrored
Displays the type of the SnapLock feature (if it is enabled) used in the aggregate:
SnapLock Compliance
SnapLock Enterprise
No
Related references
Note: To display the icons in a report, you must ensure that a DNS mapping is
established between the client machine from which you are starting the Web
connection and the host name of the system on which the DataFabric Manager server
is installed.
Type
File System
Storage Server
Displays the name of the storage server. The storage server can be a storage
controller, Vserver, or a vFiler unit that contains the volume.
Status
Displays the current status of the volume based on the events generated for a
specific volume, qtree, or LUN. The status can be Normal, Warning, Error,
Critical, Emergency, or Unknown.
Related references
LUNs report
The LUNs report displays information about the LUNs and LUN initiator groups in your storage
server.
Note: To display the icons in a report, you must ensure that a DNS mapping is established between
the client machine from which you are starting the Web connection and the host name of the
system on which the DataFabric Manager server is installed.
LUN Path
Displays the path name of the LUN (volume or qtree that contains the LUN).
Initiator Group (LUN ID)
Displays the name of the initiator group to which the LUN is mapped.
Description
Displays the description (comment) that you specified when creating the LUN
on your storage server. By default, this column is hidden.
Size (GB)
Storage Server
Displays the name of the storage server that contains the LUN. The storage
server can be a storage controller or a vFiler unit.
Status
Displays the current status of the LUN. The status can be Normal, Warning,
Error, Critical, Emergency, or Unknown. The status of a LUN is determined
by the DataFabric Manager server based on the information that it obtains
from the storage controller or the vFiler unit in which the LUN exists. For
example, if the storage controller or the vFiler unit reports that a LUN is
offline, the DataFabric Manager server displays the status of the LUN as
Warning.
Read/Sec (Bytes)
Displays the rate of bytes (number of bytes per second) read from the LUN.
By default, this column is hidden.
Write/Sec (Bytes)
Displays the rate of bytes (number of bytes per second) written to the LUN.
By default, this column is hidden.
Operations/Sec
Displays the rate of the total operations performed on the LUN. By default,
this column is hidden.
Related references
Qtrees report
The Qtrees report displays information about all the qtrees in a volume. You can monitor the capacity
and status of the qtree, and the used and available space in the qtree.
You can monitor and manage only qtrees created by the user. Therefore, the default qtree, qtree 0, is
not monitored or managed.
Note: To display the icons in a report, you must ensure that a DNS mapping is established between
the client machine from which you are starting the Web connection and the host name of the
system on which the DataFabric Manager server is installed.
Qtree
Displays the name of the qtree. The icon indicates whether the qtree is
clustered or nonclustered.
Storage Server
Displays the name of the storage server that contains the qtree. The storage
server can be a storage controller or a vFiler unit that contains the qtree.
Volume
Status
Displays the current status of the qtree based on the events generated for the
qtree. The status can be Normal, Warning, Error, Critical, Emergency, or
Unknown.
Used Capacity (GB)
Disk Space Limit (GB)
Displays the hard limit on disk space as specified in the /etc/quotas file of
the storage system. By default, this column is hidden.
Possible Addition (GB)
Displays the amount of additional storage that can be installed on the storage
server to increase the available space for this qtree.
Possible Available (GB)
Displays the total amount of storage (currently available and possible addition)
that is available for increasing the capacity of the qtree.
Related references
Note: To display the charts and icons in a report, you must ensure that a DNS mapping
is established between the client machine from which you are starting the Web
connection and the host name of the system on which the DataFabric Manager server is
installed.
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
Type
Status
Displays the current status of the storage system based on the events generated
for the storage system. The status can be Normal, Warning, Error, Critical,
Emergency, or Unknown.
Storage System
Model
Serial Number
Displays the serial number of the storage system. The system serial number is
usually provided on the chassis and is used to identify a system.
OS Version
Displays the version of the operating system running on the storage system.
Firmware
Version
System ID
Related references
Status
Displays the current status of the vFiler unit based on the events generated for
the vFiler unit. The status can be Normal, Warning, Error, Critical, Emergency,
or Unknown.
vFiler Units
IP Space
System ID
Displays the universal unique identifier (UUID) of the vFiler unit. By default,
this column is hidden.
Ping Status
Displays the status of the ping request sent to the vFiler unit. A vFiler unit might
be up or down. The ping status is displayed as "Down" if the vFiler unit has
stopped. The ping status is displayed as "Down (inconsistent)" if the vFiler unit
state is inconsistent.
Ping Timestamp
Displays the date and time when the vFiler unit was last queried.
Down Timestamp
Displays the date and time when the vFiler unit went offline.
Related references
Volumes report
The Volumes report displays information about the type, state, and status of volumes, and the RAID
protection scheme.
Note: To display the icons in a report, you must ensure that a DNS mapping is established between
the client machine from which you are starting the Web connection and the host name of the
system on which the DataFabric Manager server is installed.
Volume
Displays the name of the volume. The icon indicates whether the volume is
clustered or nonclustered.
Aggregate
Storage Server
Displays the name of the storage server that contains the volume. The storage
server can be a storage controller, Vserver, or a vFiler Unit.
Type
Block Type
Displays the block format of the volume as 32_bit or 64_bit. By default, this
column is hidden.
RAID
Displays the RAID protection scheme. The RAID protection scheme can be one of
the following:
raid0
raid4
raid_dp
State
Displays the state of the volume. A volume can be in one of the following three
states (also called mount states):
Online
Offline
Restricted
Status
Displays the current status of the volume based on the events generated for the
volume. The status can be Normal, Warning, Error, Critical, Emergency, or
Unknown.
Parent
Displays the parent volume from which the clone is derived. By default, this
column is hidden.
Clones
Related references
Storage Service
Description
Protection Policy
Primary Provisioning Policy
Dataset Count
Displays the number of datasets that are associated with the storage
service.
Related references
Protection Policy
Policy Node
Displays the name of the data protection policy node, depending on the type
of protection policy that is assigned to the storage service.
Provisioning Policy Displays the provisioning policy that is assigned to the policy node of the
storage service.
vFiler Template
Displays the vFiler template that specifies the configuration settings that are
required to create a new vFiler unit.
The vFiler template is assigned to the policy node of the storage service.
Resource Pools
Displays the resource pools that are associated with the policy node of the
storage service.
Related references
Dataset
Protection Policy
Primary Provisioning Policy
Related references
Capacity reports
Committed capacity reports
Capacity growth reports
Space reservation reports
Space efficiency reports
Capacity reports
Capacity reports provide information about the total capacity, used space, and free space available in
the storage object.
You can view the following capacity reports:
The input/output measurement reports provide information about the data read from or written to
each dataset node. The input/output measurement values are presented in the following reports:
The Usage Metric reports include information only from the most recent 12 months.
Guidelines for solving usage metric report issues
Following certain guidelines helps you avoid situations in which you cannot view or generate a usage metric report, or in which a report contains excess data.
The following guidelines can help you to avoid issues related to report generation:
Reports cannot be created for the destination node if a mirror relationship is not created for the
primary node.
If a mirror relationship is not created for the primary node, the destination volumes are
deleted. Therefore, metrics are not calculated, because there are no volumes in the dataset of the
destination node.
Reports cannot be generated for the second node of a node pair if it is not accessible by the
DataFabric Manager server.
Reports cannot be generated for a dataset if the space utilization monitors or the input/output
monitors are turned off.
The Notes field of a usage metric report might indicate an overcharge if a dataset has one or more
qtrees in the primary node.
Having one or more qtrees in the primary node results in the dataset's volume information being
included in the metric computation of the qtrees.
Physical Used Data Space formula
Total Data Space = Total space of all the volumes in the dataset node
Used Snapshot Space = Sum of the physical space used by the volume Snapshot copies
Snapshot Reserve formula
Guaranteed Space formula
volx, voly, volz are the volumes of the dataset's primary node.
Each volume has two samples. The samples are sx1, sx2 for volx.
All the samples are collected at the same time.
Maximum space utilization = MAX [(sx1 + sy1 + sz1), (sx2 + sy2 + sz2)]
Average space utilization = AVG [(sx1 + sy1 + sz1), (sx2 + sy2 + sz2)]
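The computation above can be sketched in a few lines of Python. This is an illustrative sketch, not product code; the function name and sample values are assumptions.

```python
def space_utilization(samples_by_volume):
    """Maximum and average space utilization across aligned samples.

    samples_by_volume maps a volume name to its ordered list of space
    samples (in GB), all collected at the same times.
    """
    # Sum the samples taken at each collection time across all volumes.
    totals = [sum(column) for column in zip(*samples_by_volume.values())]
    return max(totals), sum(totals) / len(totals)

# Two samples per volume, mirroring sx1, sx2 for volx and so on.
peak, average = space_utilization({
    "volx": [10, 14],
    "voly": [5, 7],
    "volz": [8, 6],
})
# totals are (10+5+8)=23 and (14+7+6)=27, so peak=27 and average=25.0
```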
Example: Space utilization computation when sample data is not present for
some volumes
Assume the following about a dataset:
volx, voly, and volz are the volumes of the dataset's primary node.
Samples sz1, sz2, sz3, and sz4 are collected at time t1, t2, t3, and t4, respectively, for
volume volz.
Maximum space utilization = MAX [(sx1 + sy1 + sz1), (sy2 + sz2), sz3, sz4]
Average space utilization = AVG [(sx1 + sy1 + sz1), (sy2 + sz2), sz3, sz4]
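When sample data is missing for some volumes, each per-time total sums only the samples that exist at that time, as the formula above shows. The following sketch assumes samples are already grouped by collection time; the values are illustrative.

```python
def space_utilization_sparse(samples_by_time):
    """Max and average utilization when some volumes have no sample at a
    given time: each total sums only the samples available at that time."""
    totals = [sum(samples) for samples in samples_by_time.values()]
    return max(totals), sum(totals) / len(totals)

# Mirrors the example: all three volumes report at t1, two at t2,
# and only volz at t3 and t4 (sample values are illustrative).
peak, average = space_utilization_sparse({
    "t1": [10, 5, 8],   # sx1, sy1, sz1
    "t2": [7, 6],       # sy2, sz2
    "t3": [9],          # sz3
    "t4": [4],          # sz4
})
```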
How input/output measurement values are calculated
The total input/output measurement value is the sum of the total data read from and written to all the
volumes in each dataset node. You can use either the Data ONTAP APIs or the dfTables file to view
that sample data.
How total input/output measurement values are calculated
You can calculate the total data read and written for each dataset node. The total input/output
measurement value is the sum of data read and written for all volumes in a dataset node, at specified
intervals for a specified period.
Example: Input/output measurement for data collected from different volumes
Assume the following about a dataset:
volx, voly, and volz are the volumes of the dataset's primary node.
Each volume has three samples for an input/output metric. Let the samples be sx1, sx2, and
sx3 for volx, collected at times t1, t2, and t3, respectively.
Total Data Read for volx between interval t1 and t3 = (sx1-sx0) + (sx2-sx1) + (sx3-sx2)
Total Data Read from a dataset node between time t1 and t3 = { [(sx1-sx0) + (sx2-sx1) +
(sx3-sx2)] + [(sy1-sy0) + (sy2-sy1) + (sy3-sy2)] + [(sz1-sz0) + (sz2-sz1) + (sz3-sz2)] }
The total data read between time t01 to t3, where t01 is the timestamp between t0 and t1, is
calculated by normalizing the t0 and t1 samples.
Therefore, Total Data Read for volx for the interval between t01 and t3 = [(sx1 - sx0) * (t1 - t01)/(t1 - t0)] + (sx2 - sx1) + (sx3 - sx2)
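The delta-and-normalize calculation described above can be sketched as follows. This is an illustrative reading of the formula, with made-up counter values; it is not product code.

```python
def total_data_read(samples, times, t_start):
    """Total data read computed from cumulative counter samples.

    samples[0..n] are counter values taken at times[0..n]; t_start is a
    timestamp between times[0] and times[1], so the first delta is scaled
    by the fraction of that interval that falls after t_start (the
    normalization described above).
    """
    first = (samples[1] - samples[0]) * (times[1] - t_start) / (times[1] - times[0])
    rest = sum(samples[i + 1] - samples[i] for i in range(1, len(samples) - 1))
    return first + rest

# sx0..sx3 = 100, 140, 180, 260 collected at t0..t3 = 0, 4, 8, 12;
# t01 = 2 is halfway through the first interval, so half of that
# delta (20 of 40) counts, plus the later deltas 40 and 80.
reads = total_data_read([100, 140, 180, 260], [0, 4, 8, 12], t_start=2)
# reads == 140.0
```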
By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server
issues the alarm only once per event. You can configure the alarm to repeat until you receive an
acknowledgment.
Note: If you want to set an alarm for a specific aggregate, you must create a group with that aggregate as the only member.
To free disk space, ask your users to delete files that are no longer needed
from volumes contained in the aggregate that generated the event.
You must add one or more disks to the aggregate that generated the event.
Note: After you add a disk to an aggregate, you cannot remove it
without first destroying all flexible volumes present in the aggregate to
which the disk belongs. You must destroy the aggregate after all the
flexible volumes are removed from the aggregate.
Aggregate Nearly
Full (%)
The value for this threshold must be lower than the value for the Aggregate Full
threshold for the DataFabric Manager server to generate meaningful events.
Event generated: Aggregate Almost Full
Event severity: Warning
Corrective action
Perform one or more of the actions mentioned in Aggregate Full.
Aggregate
Overcommitted
(%)
You must create new free blocks in the aggregate by adding one or more
disks to the aggregate that generated the event.
Note: You must add disks with caution. After you add a disk to an
aggregate, you cannot remove it without first destroying all flexible
volumes present in the aggregate to which the disk belongs. You must
destroy the aggregate after all the flexible volumes are destroyed.
You must temporarily free some already occupied blocks in the aggregate
by taking unused flexible volumes offline.
Note: When you take a flexible volume offline, it returns any space it
uses to the aggregate. However, when you bring the flexible volume
online again, it requires the space again.
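The overcommitment checks above reduce to comparing committed space against total capacity. The following sketch is illustrative; the function names and the 100 percent default standing in for the Aggregate Overcommitted (%) setting are assumptions, not product behavior.

```python
def committed_pct(committed_gb, total_gb):
    """Committed space as a percentage of aggregate capacity."""
    return 100.0 * committed_gb / total_gb

def is_overcommitted(committed_gb, total_gb, threshold_pct=100.0):
    # threshold_pct stands in for the Aggregate Overcommitted (%) setting;
    # the 100.0 default is an assumption for illustration only.
    return committed_pct(committed_gb, total_gb) >= threshold_pct

is_overcommitted(120.0, 100.0)  # → True: 120 percent of capacity is committed
```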
Aggregate Nearly
Overcommitted
(%)
Aggregate
Snapshot Reserve
Full Threshold
(%)
Note: A newly created traditional volume tightly couples with its containing aggregate so that the
capacity of the aggregate determines the capacity of the new traditional volume. Therefore, you
should synchronize the capacity thresholds of traditional volumes with the thresholds of their
containing aggregates.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
set a Qtree Full Threshold Interval to a nonzero value. By default, the Qtree Full
Threshold Interval is set to zero. The Qtree Full Threshold Interval specifies the
time during which the condition must persist before the event is generated. If the
condition persists for the specified amount of time, DataFabric Manager server
generates a Qtree Full event.
For example, if the monitoring cycle time is 60 seconds and the threshold
interval is 90 seconds, the threshold event is generated only if the condition
persists for two monitoring intervals.
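The 60-second/90-second example above amounts to rounding the threshold interval up to whole monitoring cycles. This sketch of that behavior is illustrative; the function name is an assumption.

```python
import math

def cycles_before_event(monitoring_cycle_s, threshold_interval_s):
    """Consecutive monitoring cycles the condition must persist before the
    event is generated, per the behavior described above (illustrative)."""
    if threshold_interval_s <= 0:
        return 1  # a zero interval means the event fires on the first cycle
    return math.ceil(threshold_interval_s / monitoring_cycle_s)

# The example above: a 60-second cycle with a 90-second threshold interval
# requires the condition to persist for two monitoring intervals.
cycles_before_event(60, 90)  # → 2
```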
Default value: 90 percent
Event generated: Qtree Full
Event severity: Error
Corrective action
Perform one or more of the following actions:
Ask users to delete files that are no longer needed, to free disk space.
Qtree Nearly Full Threshold (%)
Description: Specifies the percentage at which a qtree is considered nearly full.
Default value: 80 percent
Event severity: Warning
Corrective action
Perform one or more of the following actions:
Ask users to delete files that are no longer needed, to free disk space.
Increase the hard disk space quota for the qtree.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
User quota thresholds
You can apply a user quota threshold to all the user quotas present in a volume or a qtree.
When you configure a user quota threshold for a volume or qtree, the settings apply to all user quotas
on that volume or qtree.
DataFabric Manager server uses the user quota thresholds to monitor the hard and soft quota limits
configured in the /etc/quotas file of each storage system.
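A monitor that reads hard limits from /etc/quotas lines might look roughly like the sketch below. The field layout assumed here (target, type, disk limit, files limit) and the sample line are illustrative assumptions, not an authoritative statement of the quotas file format; consult the Data ONTAP documentation for the real syntax.

```python
def parse_quota_line(line):
    """Pull the quota target, type, and hard limits from one quotas-file line.

    The field layout assumed here (target, type, disk limit, files limit)
    and the sample line below are assumptions for illustration only.
    """
    fields = line.split()
    return {
        "target": fields[0],
        "type": fields[1],
        "disk_hard_limit": fields[2] if len(fields) > 2 else None,
        "files_hard_limit": fields[3] if len(fields) > 3 else None,
    }

parse_quota_line("jdoe user@/vol/vol1 100M 75K")
```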
Volume capacity thresholds and events
DataFabric Manager server features thresholds to help you monitor the capacity of flexible and
traditional volumes. You can configure alarms to send notification whenever an event related to the
capacity of a volume occurs. You can also take corrective actions based on the event generated. For
the Volume Full threshold, you can configure an alarm to send notification only when the condition
persists over a specified period.
By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server
issues the alarm only once per event. You can configure the alarm to repeat until it is acknowledged.
Note: If you want to set an alarm for a specific volume, you must create a group with that volume
as the only member.
The Volume Full Threshold Interval specifies the time during which the
condition must persist before the event is triggered. If the condition
persists for the specified time, DataFabric Manager server generates a
Volume Full event.
Ask your users to delete files that are no longer needed, to free disk
space.
For flexible volumes containing enough aggregate space, you can
increase the volume size.
For traditional volumes containing aggregates with limited space, you
can increase the size of the volume by adding one or more disks to the
aggregate.
Note: Add disks with caution. After you add a disk to an aggregate,
you cannot remove it without destroying the volume and its
aggregate.
modify existing ones. For more information about the Snapshot copy
reserve, see the Data ONTAP Data Protection Online Backup and
Recovery Guide.
Volume Nearly Full Threshold (%)
Description: Specifies the percentage at which a volume is considered nearly full.
Default value: 80. The value for this threshold must be lower than the value for the Volume Full Threshold in order for DataFabric Manager server to generate meaningful events.
Event generated: Volume Almost Full
Event severity: Warning
Corrective action
Perform one or more of the actions mentioned in Volume Full.
Volume Space
Reserve Nearly
Depleted Threshold
(%)
Volume Space
Reserve Depleted
Threshold (%)
Volume Quota
Overcommitted
Threshold (%)
Volume Quota
Nearly
Overcommitted
Threshold (%)
Create new free blocks by increasing the size of the volume that
generated the event.
Permanently free some of the occupied blocks in the volume by
deleting unnecessary files.
Volume Growth
Event Minimum
Change (%)
Volume Snap
Reserve Full
Threshold (%)
instructions on how to identify Snapshot copies you can delete, see the
Operations Manager Help.
User Quota Full
Threshold (%)
User Quota Nearly Full Threshold (%)
Description: Specifies the value (percentage) at which a user is considered to have consumed most of the allocated space (disk space or files used) as specified by the user quota. The user quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, DataFabric Manager server generates a User Disk Space Quota Almost Full event or a User Files Quota Almost Full event.
Default value: 80
Event generated: User Quota Almost Full
Volume No First
Snapshot
Threshold (%)
Volume Nearly No
First Snapshot
Threshold (%)
that its capacity is determined by the capacity of the aggregate. For this reason, you should
synchronize the capacity thresholds of traditional volumes with the thresholds of their containing
aggregates.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
Page descriptions
Aggregates Capacity report
The Aggregates Capacity report displays information about the used and available space in an
aggregate and its capacity.
Note: To display the charts and icons in a report, you must ensure that a DNS mapping is
established between the client machine from which you are starting the Web connection and the
host name of the system on which the DataFabric Manager server is installed.
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
Aggregate
Storage System
Displays the name of the storage system that contains the aggregate.
Snap Reserve Total (GB) Displays the size of the Snapshot reserve for this aggregate.
Snap Reserve Used (%)
Aggregate Used Capacity (GB)
Displays the amount of space used for data in the aggregate. By default, this column is hidden.
Aggregate Available
Capacity (%)
Snapshot Autodelete
Snapshots Disabled
Status
Overcommitted
Threshold (%)
Nearly Overcommitted
Threshold (%)
Related references
Qtree
Displays the name of the qtree. An icon indicates whether the qtree is clustered or nonclustered.
Storage Server
Displays the name of the storage server that contains the qtree. The storage
server can be a storage controller or a vFiler unit that contains the qtree.
Volume
Status
Displays the current status of the qtree based on the events generated for the
qtree. The status can be Normal, Warning, Error, Critical, Emergency, or
Unknown.
Used (%)
Displays the percentage of storage space used by the qtree. By default, this
column is hidden.
Disk Space Soft Limit (MB)
Displays the soft limit on disk space as specified in the /etc/quotas file of
the storage system. By default, this column is hidden.
Disk Space Hard Limit (MB)
Displays the hard limit on disk space as specified in the /etc/quotas file
of the storage system. By default, this column is hidden.
Available Capacity (%)
Displays the percentage of available space in the qtree that is not committed.
By default, this column is hidden.
Full Threshold (%)
Displays the limit, as a percentage, at which a qtree is considered full.
Nearly Full Threshold (%)
Displays the limit, as a percentage, at which a qtree is considered nearly full.
Files Used (%)
Displays the percentage of space used by files in the qtree. By default, this
column is hidden.
Related references
Type
Status
Displays the current status of the storage system based on the events
generated for the storage system. The status can be Normal, Warning,
Error, Critical, Emergency, or Unknown.
Storage System
Volume Used
Capacity (GB)
Displays the amount of space used for data in all the volumes. By default,
this column is hidden.
Volume Total
Capacity (GB)
Displays the total space available for data in all the volumes.
Volume Used
Capacity (%)
Displays the percentage of space used for data in all the volumes.
Aggregate Used
Capacity (GB)
Aggregate Total
Capacity (GB)
Aggregate Used
Capacity (%)
Related references
the client machine from which you are starting the Web connection and the host name of the
system on which the DataFabric Manager server is installed.
User Name
In this case, the DataFabric Manager server displays joe, finance\joe in the User
Name column.
When the user name of a storage system cannot be reported, the DataFabric
Manager server reports one of the following:
File System
Displays the name, path, and quota information of the volumes or qtrees on
which the user quota or group quota is enabled.
Status
Displays the status of a user's quotas. The status can be Normal, Warning, Error,
Critical, Emergency, or Unknown.
If the status for a user is not Normal, an event related to the user's quotas has
occurred. For details about the events, you must go to the Events tab.
Disk Space
Used (MB)
Displays the total amount of disk space used. By default, this column is hidden.
Disk Space
Threshold
(MB)
Displays the disk space threshold as specified in the /etc/quotas file of the
storage system.
Note: This threshold is different from the user quota thresholds that you can
configure in the DataFabric Manager server.
Disk Space Soft Limit (MB)
Displays the soft limit on disk space as specified in the /etc/quotas file of the
storage system. By default, this column is hidden.
Disk Space
Hard Limit
(MB)
Displays the hard limit on disk space as specified in the /etc/quotas file of the
storage system. By default, this column is hidden.
Disk Space
Used (%)
Files Used
Displays the total number of files used. By default, this column is hidden.
Files Soft Limit Displays the soft limit on files as specified in the /etc/quotas file of the
storage system. By default, this column is hidden.
Files Hard
Limit (Million)
Displays the hard limit on files as specified in the /etc/quotas file of the
storage system. By default, this column is hidden.
SID
Nearly Full
Threshold (%)
Displays the percentage value at which a user is likely to consume most of the
allocated space (disk space or files used) as specified by the user's quota (hard
limit in the /etc/quotas file).
If this threshold is crossed, the DataFabric Manager server generates a User Disk
Space Quota Almost Full event when the disk space is consumed or a User Files
Quota Almost Full event when the file space is consumed.
Full Threshold
(%)
Displays the percentage value at which a user is likely to consume the entire
allocated space (disk space or files used) as specified by the user's quota (hard
limit in the /etc/quotas file).
If this threshold is crossed, the DataFabric Manager server generates a User Disk
Space Quota Full event when the disk space is consumed or a User Files Quota
Full event when the file space is consumed.
Related references
Volume
Displays the name of the volume. An icon indicates whether the volume is clustered or nonclustered.
Aggregate
Storage Server
Displays the name of the storage server that contains the volume. The
storage server can be a storage controller, Vserver, or a vFiler unit.
Available Capacity
(GB)
Displays the amount of space available for data in the volume. By default,
this column is hidden.
Used Capacity (GB) Displays the amount of space that is used for data, in GB, in the volume. By
default, this column is hidden.
Total Capacity (GB) Displays the total space available for data in the volume.
Used Capacity (%)
Displays the amount of space that is used for data, in percentage, in the
volume.
Used Snapshot
Space (GB)
Displays the amount of space used to store Snapshot copies in the volume.
This value can be larger than the specified size of the Snapshot reserve.
Used Snapshot
Space (%)
Available Capacity
(%)
Status
Displays the current status of the volume based on the events generated for
the volume. The status can be Normal, Warning, Error, Critical, Emergency,
or Unknown.
Nearly Full Threshold (%)
Displays the limit, as a percentage, at which a volume is considered nearly full.
Files Used (%)
Displays the percentage of files used by the volume. By default, this column
is hidden.
Related references
Aggregate
Storage System
Displays the name of the storage system that contains the aggregate.
Type
Bytes Committed (GB)
Traditional Aggregate
Striped Aggregate
Status
Displays the current status of the aggregate based on the events generated for the
aggregate. The status can be Normal, Warning, Error, Critical, Emergency, or
Unknown.
Related references
the client machine from which you are starting the Web connection and the host name of the
system on which the DataFabric Manager server is installed.
Volume
Displays the name of the volume. An icon indicates whether the volume is clustered or nonclustered.
Aggregate
Storage Server
Displays the name of the storage server that contains the volume. The
storage server can be a storage controller, a Vserver, or a vFiler unit.
Quota OverCommitted Space (GB)
Displays the amount of physical space in the qtrees that can be used
before the system generates the Volume Quota OverCommitted event.
Used Capacity (GB)
Committed (GB)
Committed (%)
Displays the total amount of storage allocated for this volume, if the
volume autosize option is disabled on this volume.
Displays the maximum size to which the volume can grow, if the
volume autosize option is enabled on this volume.
Related references
established between the client machine from which you are starting the Web connection and the
host name of the system on which the DataFabric Manager server is installed.
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
Aggregate
Storage System
Displays the name of the storage system that contains the aggregate.
Data Days to Full
Displays the number of days required for the aggregate to reach the
Aggregate Full threshold (in terms of capacity), based on the daily growth
rate (GB) value.
Daily Growth Rate (GB)
Displays, in GB, the amount of disk space used in the aggregate if the
amount of change between the last two samples continues for 24 hours. The
default sample collection interval is four hours.
For example, if an aggregate uses 10 GB of disk space at 2 pm and 12 GB at
6 pm, the daily growth rate (GB) for this aggregate is 2 GB.
Displays the percentage of the total space currently in use in the aggregate.
Committed %
Committed (GB)
Total Capacity (GB) Displays the total amount of space in the aggregate. By default, this column
is hidden.
Used Capacity (GB) Displays the amount of used space in the aggregate. By default, this column
is hidden.
Related references
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
Qtree
Displays the name of the qtree. An icon indicates whether the qtree is clustered or nonclustered.
Storage Server
Displays the name of the storage server that contains the qtree. The storage
server can be a storage controller or a vFiler unit that contains the qtree.
Volume
Data Days to
Full
Displays the estimated amount of time left before this qtree runs out of storage
space.
If the time is less than one day, the current storage status of the qtree is
displayed.
Daily Growth
Rate (GB)
Displays, in GB, the amount of disk space used in the qtree if the amount of
change between the last two samples continues for 24 hours.
Daily Growth
Rate (%)
Displays the percentage of change in the disk space used in the qtree if the
amount of change between the last two samples continues for 24 hours.
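The Data Days to Full estimate above reduces to dividing the remaining free space by the daily growth rate. The following sketch is illustrative; the function name and sample numbers are assumptions, not the product's exact calculation.

```python
def data_days_to_full(total_gb, used_gb, daily_growth_gb):
    """Estimated days before the qtree (or volume) runs out of space,
    assuming the recent daily growth rate continues. Illustrative only."""
    if daily_growth_gb <= 0:
        return None  # no growth, so no projected full date
    return (total_gb - used_gb) / daily_growth_gb

data_days_to_full(total_gb=100, used_gb=88, daily_growth_gb=2)  # → 6.0
```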
Related references
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
Volume
Displays the name of the volume. An icon indicates whether the volume is clustered or nonclustered.
Aggregate
Storage Server
Displays the name of the storage server that contains the volume. The storage
server can be a storage controller, Vserver, or a vFiler unit.
Daily Growth Rate (GB)
Displays, in GB, the amount of disk space used in the volume, if the amount
of change between the last two samples continues for 24 hours.
Data Days To Full
Displays the estimated time left before this volume runs out of storage space.
If the estimated time is less than one day, the current storage status of the
volume is displayed.
Daily Growth Rate (%)
Displays the percentage of change in the used space in the volume, if
the change between the last two samples continues for 24 hours.
Related references
The following details are displayed for volumes that have space-reserved files.
Displays the name of the volume. An icon indicates whether the volume is clustered or nonclustered.
Aggregate
Storage Server
Displays the name of the storage server that contains the volume. The
storage server can be a storage controller, Vserver, or a vFiler unit.
Fractional Reserve
(%)
Controls the size of the overwrite reserve. If the fractional reserve is less
than 100 percent, the reserved space for all the space-reserved files in that
volume is reduced to the fractional reserve percentage. By default, this
column is hidden.
Reservation Used
(GB)
Displays the total space reservation used for overwrites in this volume. By
default, this column is hidden.
Reservation Available (GB)
Displays the amount of free space remaining in the space reservation. By
default, this column is hidden.
Space Reserve Total
(GB)
Displays the total size of the space reserve for this volume.
Space Reservation
Used (GB)
Status
Displays the current status of the volume based on the events generated
for the volume. The status can be Normal, Warning, Error, Critical,
Emergency, or Unknown.
Space Reserve
Depleted Threshold
(%)
Related references
established between the client machine from which you are starting the Web connection and the
host name of the system on which the DataFabric Manager server is installed.
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
The following details are displayed when space savings are enabled on the aggregate.
Aggregate
Storage System
Displays the name of the storage system that contains the aggregate.
Displays the active file system data of all the deduplicated volumes in
the aggregate without deduplication space savings.
Volume Enabled
Available Capacity (GB) Displays the amount of space available for data in the aggregate.
Total Capacity (GB)
Related references
Chart
You can customize the data displayed in the chart, change the subtype and format of the chart, and
export the data from the chart.
Report details
The following details are displayed when space savings are enabled on the volume.
Volume
Displays the name of the volume. An icon indicates whether the volume is clustered or nonclustered.
Storage Server
Displays the name of the storage server that contains the volume. The
storage server can be a storage controller, a Vserver, or a vFiler unit.
Dedupe Status
Displays the active file system data in the volume with deduplication space
savings.
Dedupe Space
Savings (GB)
Displays the active file system data in the volume without deduplication
space savings (that is, if deduplication has not been enabled on the
volume).
Available Capacity
(GB)
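The dedupe savings columns above compare the active file system data without dedupe savings against the data with dedupe savings. This sketch of that difference is illustrative; the function name and values are assumptions.

```python
def dedupe_savings(data_without_dedupe_gb, data_with_dedupe_gb):
    """Deduplication space savings: the gap between the active file system
    data without and with dedupe savings (names and values illustrative)."""
    saved_gb = data_without_dedupe_gb - data_with_dedupe_gb
    saved_pct = 100.0 * saved_gb / data_without_dedupe_gb if data_without_dedupe_gb else 0.0
    return saved_gb, saved_pct

dedupe_savings(200.0, 150.0)  # → (50.0, 25.0): 50 GB saved, 25 percent
```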
Related references
Storage Service
Displays the name of the storage service associated with the dataset at the
time of computing the metrics. If the storage service changes for a dataset, the
new dataset details are displayed in a new row. The field is blank if a storage
service is not associated with the dataset.
Protection Policy
Displays the name of the protection policy associated with the dataset at the
time of computing the metrics. If the protection policy changes for a dataset,
the new dataset details are displayed in a new row. The field is blank if a
protection policy is not associated with the dataset.
Dataset Node
Provisioning
Policy
Displays the name of the provisioning policy associated with the dataset at
the time of computing the metrics. If the provisioning policy changes for a
dataset, the new dataset details are displayed in a new row. The field is blank
if a provisioning policy is not associated with the dataset.
Effective Used
Data Space
Displays the space used by user data in the dataset node, without accounting
for data sharing.
Physical Used
Data Space
Displays the actual space used by user data in the dataset node, accounting for
the space saved by data sharing.
Used Snapshot
Space
Displays the physical space used by the volume Snapshot copies in the
dataset node.
Displays the space allocated to the dataset's primary node. The field is blank
for non-primary nodes.
Snapshot Reserve
Displays the space allocated for Snapshot copies in the dataset's primary
node. The field is blank for non-primary nodes.
Total Space
Displays the space allocated for data and Snapshot copies in the dataset's
primary node. The field is blank for non-primary nodes.
Guaranteed Space Displays the physical space allocated to the dataset node.
Metric Period
Displays the period (in days) for which metrics are calculated for the dataset.
Deleted
Notes
Related references
Storage Service
Displays the name of the storage service associated with the dataset at the
time of computing the metrics. If the storage service changes for a dataset, the
new dataset details are displayed in a new row. The field is blank if a storage
service is not associated with the dataset.
Protection Policy
Displays the name of the protection policy associated with the dataset at the
time of computing the metrics. If the protection policy changes for a dataset,
the new dataset details are displayed in a new row. The field is blank if a
protection policy is not associated with the dataset.
Dataset Node
Provisioning
Policy
Displays the name of the provisioning policy associated with the dataset at
the time of computing the metrics. If the provisioning policy changes for a
dataset, the new dataset details are displayed in a new row. The field is blank
if a provisioning policy is not associated with the dataset.
Effective Used
Data Space
Displays the space used by user data in the dataset node, without accounting
for data sharing.
Physical Used
Data Space
Displays the actual space used by user data in the dataset node, accounting for
the space saved by data sharing.
Used Snapshot
Space
Displays the physical space used by the volume Snapshot copies in the
dataset node.
Displays the space allocated in the dataset's primary node. The field is blank
for non-primary nodes.
Snapshot Reserve
Displays the space allocated for Snapshot copies in the dataset's primary
node. The field is blank for non-primary nodes.
Total Space
Displays the space allocated for data and Snapshot copies in the dataset's
primary node. The field is blank for non-primary nodes.
Guaranteed Space Displays the physical space allocated to the dataset node.
Metric Period
Displays the period (in days) for which metrics are calculated for the dataset.
Deleted
Comments
Notes
Related references
Storage Service
Displays the name of the storage service associated with the dataset at the time
of computing the metrics. If the storage service changes for a dataset, the new
dataset details are displayed in a new row. The field is blank if a storage service
is not associated with the dataset.
Protection Policy Displays the name of the protection policy associated with the dataset at the
time of computing the metrics. If the protection policy changes for a dataset,
the new dataset details are displayed in a new row. The field is blank if a
protection policy is not associated with the dataset.
Dataset Node
Provisioning
Policy
Displays the name of the provisioning policy associated with the dataset at the
time of computing the metrics. If the provisioning policy changes for a dataset,
the new dataset details are displayed in a new row. The field is blank if a
provisioning policy is not associated with the dataset.
Data Read
Displays the total data read by the user from all volumes of the dataset node.
Data Written
Displays the total data written by the user to all volumes of the dataset node.
Metric Period
Displays the period (in days) for which metrics are calculated for the dataset.
Deleted
Notes
Comments
Related references
Storage Service
Displays the name of the storage service associated with the dataset at the
time of computing the metrics. If the storage service changes for a dataset, the
new dataset details are displayed in a new row. The field is blank if a storage
service is not associated with the dataset.
Protection Policy
Displays the name of the protection policy associated with the dataset at the
time of computing the metrics. If the protection policy changes for a dataset,
the new dataset details are displayed in a new row. The field is blank if a
protection policy is not associated with the dataset.
Dataset Node
Provisioning
Policy
Displays the name of the provisioning policy associated with the dataset at
the time of computing the metrics. If the provisioning policy changes for a
dataset, the new dataset details are displayed in a new row. The field is blank
if a provisioning policy is not associated with the dataset.
Timestamp
Effective Used
Data Space
Displays the space used by user data in the dataset node, without accounting
for data sharing.
Physical Used
Data Space
Displays the actual space used by user data in the dataset node, accounting for
the space saved by data sharing.
Used Snapshot Space: Displays the physical space used by the volume Snapshot copies in the
dataset node.
Total Data Space: Displays the space allocated in the dataset's primary node. The field is blank for
non-primary nodes.
Snapshot Reserve: Displays the space allocated for Snapshot copies in the dataset's primary node.
The field is blank for non-primary nodes.
Total Space: Displays the space allocated for data and Snapshot copies in the dataset's primary
node. The field is blank for non-primary nodes.
Guaranteed Space: Displays the physical space allocated to the dataset node.
Deleted
Notes
Comments
Related references
Storage Service: Displays the name of the storage service associated with the dataset at the time of
computing the metrics. If the storage service changes for a dataset, the new dataset details are
displayed in a new row. The field is blank if a storage service is not associated with the dataset.
Protection Policy: Displays the name of the protection policy associated with the dataset at the time
of computing the metrics. If the protection policy changes for a dataset, the new dataset details are
displayed in a new row. The field is blank if a protection policy is not associated with the dataset.
Dataset Node: Displays the name of the dataset node.
Provisioning Policy: Displays the name of the provisioning policy associated with the dataset at the
time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset
details are displayed in a new row. The field is blank if a provisioning policy is not associated with
the dataset.
Timestamp
Effective Used Data Space: Displays the space used by user data in the dataset node, without
accounting for data sharing.
Physical Used Data Space: Displays the actual space used by user data in the dataset node,
accounting for the space saved by data sharing.
Used Snapshot Space: Displays the physical space used by the volume Snapshot copies in the
dataset node.
Snapshot Reserve: Displays the space allocated for Snapshot copies in the dataset's primary node.
Total Space: Displays the space allocated for data and Snapshot copies in the dataset's primary
node.
Guaranteed Space: Displays the physical space allocated to the dataset node.
Deleted
Notes
Related references
is collected. For example, if the interval is set to one hour, the report can collect and display the data
read for a dataset at each hour.
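The per-interval figures in such a report come from differencing cumulative counters between consecutive samples. The following sketch is illustrative only; the sample values and the derivation are assumptions, not the product's implementation:

```python
# Hypothetical hourly samples of a cumulative "data read" counter for one
# dataset: (hour, cumulative bytes read).
samples = [
    (0, 0),
    (1, 4_000),
    (2, 9_500),
    (3, 9_500),   # no reads during this hour
    (4, 16_000),
]

# The data read in each interval is the difference between consecutive samples.
per_interval = [
    (t2, c2 - c1)
    for (t1, c1), (t2, c2) in zip(samples, samples[1:])
]
print(per_interval)  # [(1, 4000), (2, 5500), (3, 0), (4, 6500)]
```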
Dataset
Storage Service: Displays the name of the storage service associated with the dataset at the time of
computing the metrics. If the storage service changes for a dataset, the new dataset details are
displayed in a new row. The field is blank if a storage service is not associated with the dataset.
Protection Policy: Displays the name of the protection policy associated with the dataset at the time
of computing the metrics. If the protection policy changes for a dataset, the new dataset details are
displayed in a new row. The field is blank if a protection policy is not associated with the dataset.
Dataset Node: Displays the name of the dataset node.
Provisioning Policy: Displays the name of the provisioning policy associated with the dataset at the
time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset
details are displayed in a new row. The field is blank if a provisioning policy is not associated with
the dataset.
Timestamp
Data Read: Displays the total data read by the user from all volumes of the dataset node.
Data Written: Displays the total data written by the user to all volumes of the dataset node.
Deleted
Notes
Related references
Database schema
How to access DataFabric Manager server data
By using third-party tools, you can create customized reports from the data you export from the
DataFabric Manager server. By default, you cannot access the DataFabric Manager server views. To
access the views, you must create a database user and then enable database access for that user.
Before you can create and give access to a database user, you must have the CoreControl capability.
The CoreControl capability allows you to perform the operations needed to create and manage
database users and their access.
All of these operations can be performed only through the CLI. For more information about the CLI
commands, see the DataFabric Manager server manual (man) pages.
You can use a third-party reporting tool to connect to the DataFabric Manager server database and
access the following views:
alarmView
cpuView
designerReportView
datasetIOMetricView
datasetSpaceMetricView
datasetUsageMetricCommentView
hbaInitiatorView
hbaView
initiatorView
reportOutputView
sanhostLunView
usersView
volumeDedupeDetailsView
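Once a database user has been granted access, a reporting tool or script can issue ordinary SELECT statements against these views. As an illustration only, the following sketch uses an in-memory SQLite table named alarmView to stand in for the real view; the actual connection parameters, driver, and credentials depend on your installation and are not shown here:

```python
# Illustrative only: the DataFabric Manager server database is not SQLite.
# An in-memory table mimics the alarmView view to show the query shape a
# third-party reporting tool would use.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE alarmView "
    "(alarmId INTEGER, alarmEventSeverity TEXT, alarmDisabled TEXT)"
)
conn.executemany(
    "INSERT INTO alarmView VALUES (?, ?, ?)",
    [(1, "Critical", "No"), (2, "Warning", "Yes")],
)

# A reporting tool issues an ordinary SELECT against the exposed view.
rows = conn.execute(
    "SELECT alarmId, alarmEventSeverity FROM alarmView WHERE alarmDisabled = 'No'"
).fetchall()
print(rows)  # [(1, 'Critical')]
```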
alarmView

Column name               Type                     Length   Description
alarmId                   Unsigned integer
alarmScript               Varchar                  254
alarmScriptRunAs          Varchar                  64
alarmTrapHosts            Varchar                  254
alarmGroupId              Unsigned integer
alarmGroupName            Varchar                  1024
alarmEventClass           Varchar                  254
alarmEventType            Varchar                  128
alarmEventSeverity        Varchar                  16
alarmEventTimeFrom        Date time
alarmEventTimeTo          Date time
alarmsRepeatNotify        Varchar
alarmRepeatInterval       Unsigned small integer
alarmDisabled             Varchar
alarmEmailAddrs           Varchar                  254
alarmEventName            Varchar                  256
alarmEventPageLoginName   Text                     32767
alarmAdminEmailLoginName  Text                     32767
alarmAdminPageAddress     Text                     32767
alarmAdminEmailAddress    Text                     32767
alarmPageAddrs            Varchar                  254
cpuView

Column name               Type                     Length   Description
cpuId                     Unsigned integer
cpuBusyPercentInterval    Float
cpuStatTimestamp          Timestamp
designerReportView

Column name               Type                     Length   Description
drId                      Unsigned integer         4
drCliName                 Varchar                  64
drGuiName                 Varchar                  64
drIsCustom                Unsigned integer         4
drDescription             Varchar                  1024
datasetIOMetricView

Column name                        Data type          Length   Description
dsIOMetricDatasetId                unsigned int       4        Dataset ID
dsIOMetricDatasetName              varchar            255      Dataset name
dsIOMetricProtectionPolicyName     varchar            255
dsIOMetricProtectionPolicyId       unsigned int       4
dsIOMetricStorageServiceName       varchar            255
dsIOMetricStorageServiceId         unsigned int       4
dsIOMetricNodeName                 varchar            255
dsIOMetricProvisioningPolicyName   varchar            255
dsIOMetricProvisioningPolicyId     unsigned int       4
dsIOMetricMetricTimestamp          timestamp
dsIOMetricTotalDataRead            unsigned big int
dsIOMetricTotalDataWritten         unsigned big int
dsIOMetricDeletedTimestamp         timestamp
dsIOMetricOvercharge               bit
dsIOMetricPartialData              bit
dsIOMetricCommentId                unsigned int       4
datasetSpaceMetricView

Column name                              Data type          Length   Description
dsSpaceMetricDatasetId                   unsigned int                Dataset ID
dsSpaceMetricDatasetName                 varchar            255      Dataset name
dsSpaceMetricProtectionPolicyName        varchar            255
dsSpaceMetricProtectionPolicyId          unsigned int
dsSpaceMetricNodeName                    varchar            255
dsSpaceMetricStorageServiceName          varchar            255
dsSpaceMetricStorageServiceId            unsigned int
dsSpaceMetricProvisioningPolicyName      varchar            255
dsSpaceMetricProvisioningPolicyId        unsigned int
dsSpaceMetricTimestamp                   timestamp
dsSpaceMetricAvgEffectiveUsedDataSpace   unsigned big int
dsSpaceMetricAvgPhysicalUsedDataSpace    unsigned big int
dsSpaceMetricAvgUsedSnapshotSpace        unsigned big int
dsSpaceMetricAvgTotalDataSpace           unsigned big int
dsSpaceMetricAvgSnapshotReserve          unsigned big int
dsSpaceMetricAvgTotalSpace               unsigned big int
dsSpaceMetricAvgGuaranteedSpace          unsigned big int
dsSpaceMetricMaxEffectiveUsedDataSpace   unsigned big int
dsSpaceMetricMaxPhysicalUsedDataSpace    unsigned big int
dsSpaceMetricMaxUsedSnapshotSpace        unsigned big int
dsSpaceMetricMaxTotalDataSpace           unsigned big int
dsSpaceMetricMaxSnapshotReserve          unsigned big int
dsSpaceMetricMaxTotalSpace               unsigned big int
dsSpaceMetricMaxGuaranteedSpace          unsigned big int
dsSpaceMetricDeletedTimestamp            timestamp
dsSpaceMetricOvercharge                  bit
dsSpaceMetricPartialData                 bit
dsSpaceMetricCommentId                   unsigned int
datasetUsageMetricCommentView

Column name                 Data type          Length   Description
datasetId                   unsigned int                Dataset ID
dsUsageMetricCommentName    varchar            255
dsUsageMetricCommentValue   varchar            255
hbaInitiatorView

Column name   Type               Length   Description
initiatorId   Unsigned integer
hbaId         Unsigned integer
hbaView

Column name   Type               Length   Description
hbaId         Unsigned integer
hbaName       Varchar            64
initiatorView

Column name     Type               Length   Description
initiatorId     Unsigned integer
iGroupId        Unsigned integer
initiatorName   Varchar            255
reportOutputView

Column name                Type                     Length   Description
reportOutputId             Unsigned integer                  Report output ID
reportScheduleId           Unsigned integer
reportId                   Unsigned integer
reportName                 Varchar                  64
reportBaseCatalog          Varchar                  64
reportOutputTargetObjId    Unsigned big integer
reportOutputTimestamp      Timestamp
reportOutputRunStatus      Unsigned small integer
reportOutputRunBy          Varchar                  255
reportOutputFailureReason  Varchar                  255
reportOutputFileName       Varchar                  128
sanhostLunView

Column name     Type               Length   Description
shlunId         Unsigned integer   4        LUN ID
hostId          Unsigned integer   4        SAN host ID
shInitiatorId   Unsigned integer   4
shlunpathId     Unsigned integer   4
usersView

Column name               Type                     Length   Description
userId                    Unsigned integer
userNearlyFullThreshold   Unsigned small integer   2
userFullThreshold         Unsigned small integer
volumeDedupeDetailsView

Column name                       Type                     Length   Description
volumeId                          Unsigned integer         4        Volume ID
volumeOverDedupeThreshold         Unsigned small integer   2
volumeNearlyOverDedupeThreshold   Unsigned small integer   2
Administration
Users and roles
Understanding users and roles
What RBAC is
RBAC (role-based access control) provides the ability to control who has access to various features
and resources in DataFabric Manager server.
How RBAC is used
Applications use RBAC to authorize user capabilities. Administrators use RBAC to manage groups
of users by defining roles and capabilities.
For example, if you need to control user access to resources, such as groups, datasets, and resource
pools, you must set up administrator accounts for them. Additionally, if you want to restrict the
information these administrators can view and the operations they can perform, you must apply roles
to the administrator accounts you create.
Note: RBAC permission checks occur in the DataFabric Manager server. RBAC must be
configured using the Operations Manager console or command line interface.
Note: A user who is part of the local administrators group is treated as a super-user and is granted full control.
GlobalDataProtection
GlobalDataset
GlobalDelete
GlobalHostService
GlobalEvent
GlobalFullControl: Enables you to view and perform any operation on any object in the DataFabric
Manager server database and configure administrator accounts. You cannot apply this role to
accounts with group access control.
GlobalMirror
GlobalRead
GlobalRestore
GlobalWrite
GlobalProvisioning
GlobalPerfManagement
Related information
The VI administrator needs the following operation permissions for the group created for the VI
administrator role:
DFM.Database: All
DFM.BackManager: All
DFM.ApplicationPolicy: All
DFM.Dataset: All
DFM.Resource: Control

Policies
The VI administrator needs the following operation permission for each policy template, located
under Local Policies, that you want the VI administrator to be able to copy:
DFM.ApplicationPolicy

Storage services
The VI administrator needs the following operation permissions for each of the storage services that
you want to allow the VI administrator to use:
DFM.StorageService: Read

Protection Policies
These are the policies contained within the storage services that you selected above:
DFM.Policy: All
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
1. Click the Administration menu, then click the Users and Roles option.
A separate browser window opens to the Administrators page in the Operations Manager console.
2. Configure users and roles.
For more information, see the Operations Manager Help.
3. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the
OnCommand console.
Related references
Groups
Understanding groups
You can perform management tasks for the groups that you create in the OnCommand console.
However, you cannot perform management tasks for the global group.
Related references
Datacenters
Datastores
ESX Servers
Host Agents
Host services
Hyper-V Servers
Hyper-V VMs
Virtual Centers
VMware VMs
Datasets
Local policies
Resource pools
Storage services
Aggregates
Clusters
LUNs
Qtrees
SRM paths
Storage controllers
vFiler Units
Vservers
Volumes
Configuring groups
Creating groups
You can create groups to contain multiple objects so that you can easily manage these objects. You
can create a group directly under the global group or create a subgroup under a parent group you
already created.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must review the Guidelines for creating groups on page 378.
When you create a group, you can add an object to the group membership only if you have
permission to view that object.
Steps
The new group appears in the Groups list. You can select any group in the Groups list from the
Groups menu.
Related references
Deleting groups
You can delete groups that you no longer find useful.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Deleting a group removes only the group container from the OnCommand database. The objects
contained in the deleted group are not removed from the database. When you delete a group, you also
delete all its subgroups, if any. If you want to preserve the subgroups, you must move them to a
different parent group before deleting the current parent group.
Steps
Managing groups
Editing groups
You can edit a group name, add or delete members of a group, and modify the contact information of
a group from the Edit Group dialog box.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Copying a group
You can copy a group and assign it to a different parent group from the Copy To dialog box. When
you copy a group, you create a copy of the selected group, and assign the copy to a different parent
group.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
To copy a group to a new parent group, you must be logged in as an administrator with Database
Write capability on the new parent group.
Steps
Moving a group
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
To move a group to a new parent group, you must have the following capabilities:
The group you want to move must not already exist in the target group. If the group already exists, an
appropriate message is displayed, and you cannot perform the move operation.
Steps
Page descriptions
Groups tab
The Groups tab enables you to view existing groups and perform tasks such as creating, deleting,
modifying, moving, and copying groups.
Command buttons
The command buttons enable you to perform the following management tasks for a selected group:
Create
Launches the Create Group dialog box, which enables you to create groups with different
member types.
Edit
Launches the Edit Group dialog box. You can edit the group name, add or delete group
members, or modify group contact information.
Delete
Copy
Launches the Copy To dialog box. When you copy a group, you create a copy of the
selected group and assign it to a different parent group.
Move
Launches the Move To dialog box. When you move a group, you assign the selected
group to a new parent group. If the group you moved has any subgroups, those subgroups
are also moved to the new parent, maintaining the same hierarchical structure and
membership.
Owner
Resource Tag
Annual Rate (currency unit/GB): Specifies the amount to charge for storage space usage per GB
per year.
Used (%)
Status: Displays the current status of each group. The status can be Normal, Information, Warning,
Error, Critical, Emergency, or Unknown.
ID
Members tab
The Members tab displays detailed information about the selected group.
The Members tab displays the current status of each group as mentioned in the groups list, and
includes the following additional information:
Member Name: Specifies the name of the group member.
Member Type: Specifies the object type of the group member.
Status: Displays the current status of the group member. The status can be Normal, Information,
Warning, Error, Critical, Emergency, or Unknown.
Member Of: Specifies the name of the parent group to which the group member belongs.
Graph tab
The Graph tab displays information about the performance of the selected group. You can select the
graph you want to view from the drop-down menu in the area.
You can display information for a specified time period, such as one day, one week, one month,
three months, or one year. By clicking the export icon, you can export the graphical data in CSV
format.
Related references
Properties
You can create groups by specifying properties such as group name, owner name, e-mail address of
the owner, and annual rate.
Name
Owner
Specifies the e-mail address of the user who owns the group.
Resource Tag
Annual Rate (Per GB): Specifies the amount to charge for storage space usage per GB per year.
You must enter a value in the x.y notation, where x is the integer part of the number and y is the
fractional part. For example, to specify an annual charge rate of $150.55, you must enter 150.55.
Member Type
Available
Members
Displays the list of members based on the object type selected. You can use the
filter to search for the objects. You can use appropriate arrow keys to move the
objects to the list on the right.
Selected
Members
Command buttons
You can use command buttons to perform the following management tasks:
Create
Cancel
Does not save the group configuration and closes the Create Group dialog box.
General tab
You can edit group properties such as the group name, owner name, and e-mail address of the owner.
Name
Owner
Specifies the e-mail address of the user who owns the group.
Resource Tag Specifies the resource tag of the group. This is a system-generated custom comment
field.
Group Member tab
You can edit properties of group members such as member type, available members, and selected
members.
Member Type
Available
Members
Displays the list of members based on the object type selected. You can use
appropriate arrow keys to move the member types to the list on the right.
Selected Members
Chargeback tab
You can edit chargeback properties of groups such as annual rate and format of the annual rate.
Annual Rate (Per GB): Specifies the amount to charge for storage space usage per GB per year.
You must enter a value in the x.y notation, where x is the integer part of the number and y is the
fractional part. For example, to specify an annual charge rate of $150.55, you must enter 150.55.
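The annual rate feeds the chargeback calculation: the yearly charge for a group is its used space in GB multiplied by the rate. A minimal sketch, with a hypothetical helper name and made-up numbers:

```python
# Hypothetical chargeback helper: the function name and sample values are
# illustrative, not part of the OnCommand console.
def annual_charge(used_gb: float, annual_rate_per_gb: float) -> float:
    """Yearly charge for storage space usage, rounded to cents."""
    return round(used_gb * annual_rate_per_gb, 2)

# A group using 200 GB at an annual rate of 150.55 per GB:
print(annual_charge(200, 150.55))  # 30110.0
```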
Command buttons
You can use command buttons to perform the following management tasks:
OK
Cancel: Does not save the modifications to the group configuration, and closes the Edit Group
dialog box.
Alarms
Understanding alarms
Alarm configuration
DataFabric Manager server uses alarms to notify you when events occur. DataFabric Manager server
sends the alarm notification to one or more specified recipients in different formats: an e-mail
notification, a pager alert, an SNMP trap sent to a traphost, or a script that you wrote (you should
attach the script to the alarm).
You should determine the events that cause alarms, whether the alarm repeats until it is
acknowledged, and how many recipients an alarm has. Not all events are severe enough to require
alarms, and not all alarms are important enough to require acknowledgment. Nevertheless, to avoid
multiple responses to the same event, you should configure DataFabric Manager server to repeat
notification until an event is acknowledged.
Note: DataFabric Manager server does not automatically send alarms for the events.
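The repeat behavior described above amounts to one notification when the event fires, then one per repeat interval until the event is acknowledged. The following simulation is a sketch of that schedule, not the DataFabric Manager server implementation:

```python
# Sketch of repeat notification: illustrative only.
def notifications_sent(repeat_interval_min: int, acknowledged_at_min: int) -> list[int]:
    """Return the minutes at which notifications go out: one immediately,
    then one per repeat interval until the event is acknowledged."""
    times = [0]                      # initial notification when the event occurs
    t = repeat_interval_min
    while t < acknowledged_at_min:   # repeat until acknowledgment
        times.append(t)
        t += repeat_interval_min
    return times

# Event acknowledged after 50 minutes, repeat interval of 15 minutes:
print(notifications_sent(15, 50))  # [0, 15, 30, 45]
```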
Configuring alarms
Creating alarms for events
The OnCommand console enables you to configure alarms for immediate notification of events. You
can also configure alarms even before a particular event occurs. You can add an alarm based on the
event, event severity type, or event class from the Create Alarm dialog box.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must have your mail server configured so that the DataFabric Manager server can send e-mails
to specified recipients when an event occurs.
You must have the following information available to add an alarm:
The event name, event class, or event severity type that triggers the alarm.
The recipients and the modes of event notifications.
The period during which the alarm is active.
DFM.Event.Write
DFM.Alarm.Write
Alarms you configure based on the event severity type are triggered when that event severity level
occurs.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must have your mail server configured so that the DataFabric Manager server can send e-mails
to specified recipients when an event occurs.
You must have the following information available to add an alarm:
DFM.Event.Write
DFM.Alarm.Write
Alarms you configure for a specific event are triggered when that event occurs.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
You must have the following capabilities to perform this task:
DFM.Event.Write
DFM.Alarm.Write
Steps
The new configuration is immediately activated and displayed in the alarms list.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
Page descriptions
Alarms tab
The Alarms tab provides a single location from which you can view a list of alarms configured based
on event, event severity type, and event class. You can also perform various actions from this
window, such as edit, delete, test, and enable or disable alarms.
Command buttons
The command buttons enable you to perform the following management tasks for a selected event:
Create
Launches the Create Alarm dialog box in which you can create an alarm based on event,
event severity type, and event class.
Edit
Launches the Edit Alarm dialog box in which you can modify alarm properties.
Delete
Test
Tests the selected alarm to check its configuration, after creating or editing the alarm.
Event
Event Severity
Group
Enabled
Start
Displays the time at which the selected alarm becomes active. By default, this
column is hidden.
End
Displays the time at which the selected alarm becomes inactive. By default, this
column is hidden.
Repeat Interval (Minutes): Displays the time period (in minutes) at which the DataFabric Manager
server repeats the notification until the event is acknowledged or resolved. By default, this column
is hidden.
Repeat Notify
Event Class
Displays the class of event that is configured to trigger an alarm. By default, this
column is hidden.
You can configure a single alarm for multiple events using the event class. The
event class is a regular expression that contains rules, or pattern descriptions, that
typically use the word "matches" in the expression. For example, the
userquota.*|qtree.* expression matches all user quota or qtree events.
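Because the event class is a regular expression matched against event names, you can check a candidate expression outside the product. The expression below is the documented example; the event names are hypothetical:

```python
# Check an event-class regular expression against sample event names.
import re

event_class = re.compile(r"userquota.*|qtree.*")

events = ["userquota.full", "qtree.almost-full", "volume.offline"]  # hypothetical names
matched = [e for e in events if event_class.match(e)]
print(matched)  # ['userquota.full', 'qtree.almost-full']
```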
Details area
Apart from the alarm details displayed in the alarms list, you can view other additional properties of
the alarms in the area below the alarms list.
Effective Time Range
Administrators (Email Address)
Administrators (Pager Number)
SNMP Trap Host: The SNMP traphost system that receives the alarm notification in the form of
SNMP traps.
Script Path: The name and path of the script that is run when an alarm is triggered.
Related references
Event Options
You can create an alarm based on event name, event severity type, or event class:
Group
Displays the group that receives an alert when an event or event type triggers an
alarm.
Event
Event
Severity
Displays the severity types of the event that triggers an alarm. The event severity
types are Normal, Information, Warning, Error, Critical, and Emergency.
Event Class
Notification Options
You can specify alarm notification properties by selecting one of the following check boxes:
SNMP Trap Host
E-mail Administrator
(Admin Name)
Page Administrator
(Admin Name)
E-mail Addresses
(Others)
Script Path
Specifies the name of the script that is run when the alarm is triggered.
Repeat Interval
(Minutes)
Command buttons
You can use command buttons to perform the following management tasks for a selected event:
Create
Cancel
Does not save the alarm configuration and closes the Create Alarm dialog box.
Host services
Understanding host services
What a host service is
The host service is software that runs on a physical machine, a Hyper-V parent, or in a virtual
machine. The host service software includes plug-ins that enable the DataFabric Manager server to
discover, back up, and restore virtual objects, such as virtual machines and datastores. The host
service also enables you to view virtual objects in the OnCommand console.
Guidelines for managing host services
Resource discovery by a host service can be initiated manually by an administrator and by default,
automatic notification is available in response to changes in resources. When you make changes to
the virtual infrastructure, the results are available immediately because of the automatic notification
from the host service to the DataFabric Manager server. You can manually start a rediscovery job to
see your changes. You might need to refresh the host service information to see the updates in the
OnCommand console.
During the process of installing the NetApp OnCommand management software, you must register at
least one host service with the DataFabric Manager server and with the virtual infrastructure
(VMware or Hyper-V). You can register additional host services after installation, from the Host
Services tab accessible from the Administration menu in the OnCommand console. After
registration, you can monitor and manage host services from the Host Services tab.
Note: If the Hyper-V parent is part of a cluster, you must install the OnCommand Host Package on
each node of the cluster and all the cluster nodes must have the same TCP/IP port number to
enable communication between host services on different nodes. You must register and authorize
each node with the same DataFabric Manager server.
Note: When you register a host service with the DataFabric Manager server, you can type the fully
qualified domain name or IP address in the IPv4 format.
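A registration front end could pre-check the typed value against the two accepted forms, an IPv4 address or a fully qualified domain name. This helper is purely hypothetical; the OnCommand console performs its own validation:

```python
# Hypothetical pre-check for the host-service address field described above.
import ipaddress

def is_valid_host_service_address(value: str) -> bool:
    try:
        ipaddress.IPv4Address(value)   # accepts IPv4 format only, per the note
        return True
    except ValueError:
        # Crude FQDN check: at least two dot-separated, non-empty labels.
        labels = value.split(".")
        return len(labels) >= 2 and all(
            l.isalnum() or "-" in l for l in labels if l
        )

print(is_valid_host_service_address("192.0.2.10"))       # True
print(is_valid_host_service_address("hs1.example.com"))  # True
```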
The host service is included as part of the installation of the OnCommand Host Package. You can
install multiple host services on multiple vCenter Servers, virtual machines, or Hyper-V parents.
Note: In a Hyper-V cluster only, if you manually shut down a host service on the node that is
designated as the owner of a cluster while the node is active, the host services on both the cluster
and the node become inactive.
Note: The OnCommand Host Package upgrade does not force host services to reregister with
DataFabric Manager server. Therefore, if you unregister a host service from DataFabric Manager
server prior to an OnCommand Host Package upgrade, you must manually register the host service
to DataFabric Manager server after the upgrade is finished.
Messages from host services are stored persistently in the DataFabric Manager server database
hsNotifications table so that even if DataFabric Manager server goes down, information is not lost
and incomplete operations are automatically restarted or resumed after the server comes back up
again. This table continues to grow over time, and can quickly become huge in a large environment.
You can use the following global options to manage the size of this table:
hsNotificationsMaxCount
hsNotificationsPurgingInterval
The host service firewall must be disabled for the administration and management ports.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
DataFabric Manager server does not support a host service created as a generic service from failover
cluster manager in Microsoft Windows.
If you unregister a cluster-level host service, DataFabric Manager server does not automatically
register the host service when you re-register the node. You must re-register or add the host service
by using the cluster IP address.
Attention: If you change the name of the machine after installing the OnCommand Host Package,
you must uninstall the OnCommand Host Package and perform a fresh installation.
Steps
1. Click the Administration menu, then click the Host Services option.
2. In the Host Services tab, click Add.
3. In the Add Host Service dialog box, type the IP address or the DNS name of the host on which
the host service is installed.
4. Accept the administrative port number, which is entered automatically by default.
This is the port that is used by plug-ins to discover information about the host service. If the port
number has been changed in the host service, type in the changed port number.
5. Click Add.
Result
The host service is added and registered with the DataFabric Manager server.
Tip: If you see an error stating that the requested operation did not complete in 60 seconds, wait
several minutes and then click Refresh to see if the host service was actually added.
Attention: Host services can be registered with only one DataFabric Manager server at a time.
Before you register a host service with a new DataFabric Manager server, you must first manually
unregister the host service from the old DataFabric Manager server. To unregister a host service
you must use the DataFabric Manager server hsid command.
After you finish
To make the host service fully operational, you might need to authorize the host service. In a
VMware environment, you must edit the host service to add the vCenter Server credentials.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Verifying that a host service is registered with the DataFabric Manager server on page 396
2. Authorizing a host service to access storage system credentials on page 397
3. Associating a host service with the vCenter Server on page 397
4. Associating storage systems with a host service on page 399
5. Editing storage system login and NDMP credentials from the Host Services tab on page 400
Related references
A host service can be registered with the DataFabric Manager server during installation, or later from
the OnCommand console. However, you might want to verify that the registration is still valid when
troubleshooting problems or prior to performing an action involving a host service, such as adding a
storage system to a host service.
Steps
1. Click the Administration menu, then click the Host Services option.
2. In the Host Services tab, verify that the host service is displayed in the list.
If the host service is not displayed in the list, you must add and configure the new host service.
Administration | 397
Authorizing a host service to access storage system credentials
Before you begin
The host service must be registered with the DataFabric Manager server prior to performing this task.
About this task
DataFabric Manager server does not support a host service created as a generic service from failover
cluster manager in Microsoft Windows.
Steps
If you do not have storage systems associated with the host service, you must associate at least one
storage system to be able to perform backups.
After you finish editing the host service properties, you can view job progress from the Jobs subtab
on the Manage Host Services window and you can view details about each job from the Jobs tab.
Associating a host service with the vCenter Server
In a VMware environment, you must authorize each host service and associate it with a vCenter
Server. This provides part of the communication needed for discovery, monitoring, backup, and
recovery of virtual server objects such as virtual machines and datastores.
Before you begin
The host service must be registered with the DataFabric Manager server prior to performing this task.
Have the following information available:
Authorization is required to create backup jobs because it allows the host service to access the
storage system credentials.
DataFabric Manager server does not support a host service created as a generic service from failover
cluster manager in Microsoft Windows.
Steps
If you do not have storage systems associated with the host service, you must associate at least one
storage system to be able to perform backups.
After you finish editing the host service properties, you can view job progress from the Jobs subtab
on the Manage Host Services window and you can view details about each job from the Jobs tab.
If you add a new storage system to associate with the host service, you must have the following
storage system information available:
IP address or name
Login and NDMP credentials
Access protocol (HTTP or HTTPS)
Steps
To associate storage systems shown in the Available Storage Systems list, select the system
names and click OK.
To associate a storage system not listed in Available Storage Systems, click Add, enter the
required information, and click OK.
The newly associated storage system displays in the Storage Systems area.
6. In the list of storage systems, verify that the status is Good for the login and NDMP credentials
for each storage system.
After you finish
If the login or NDMP status is other than Good for any storage system, you must edit the storage
system properties to provide the correct credentials before you can use that storage system.
After you finish editing the host service properties, you can view job progress from the Jobs subtab
on the Manage Host Services window and you can view details about each job from the Jobs tab.
Editing storage system login and NDMP credentials from the Host Services tab
You must have valid login and NDMP credentials for storage systems so they can be accessed by the
DataFabric Manager server. If the server cannot access the storage, your backups might fail.
Before you begin
You must have the following storage system information available:
IP address or name
Login and NDMP credentials
Access protocol (HTTP or HTTPS)
Steps
After you finish editing the storage system properties, you can view job progress from the Jobs
subtab on the Manage Host Services window and you can view details about each job from the Jobs
tab.
If you did not authorize the host service when you added it, you can also authorize it from the Edit
Host Service dialog box.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
If you are adding a new host service, see Configuring a new host service.
Attention: If you change the name of the machine after installing the OnCommand Host Package,
you must uninstall the OnCommand Host Package and perform a fresh installation.
Related references
Verifying that a host service is registered with the DataFabric Manager server on page 401
Associating a host service with the vCenter Server on page 402
Associating storage systems with a host service on page 403
Editing storage system login and NDMP credentials from the Host Services tab on page 404
Verifying that a host service is registered with the DataFabric Manager server
A host service can be registered with the DataFabric Manager server during installation, or later from
the OnCommand console. However, you might want to verify that the registration is still valid when
troubleshooting problems or prior to performing an action involving a host service, such as adding a
storage system to a host service.
Steps
1. Click the Administration menu, then click the Host Services option.
2. In the Host Services tab, verify that the host service is displayed in the list.
If the host service is not displayed in the list, you must add and configure the new host service.
Associating a host service with the vCenter Server
In a VMware environment, you must authorize each host service and associate it with a vCenter
Server. This provides part of the communication needed for discovery, monitoring, backup, and
recovery of virtual server objects such as virtual machines and datastores.
Before you begin
The host service must be registered with the DataFabric Manager server prior to performing this task.
Have the following information available:
Authorization is required to create backup jobs because it allows the host service to access the
storage system credentials.
DataFabric Manager server does not support a host service created as a generic service from failover
cluster manager in Microsoft Windows.
Steps
After you finish
If you do not have storage systems associated with the host service, you must associate at least one
storage system to be able to perform backups.
After you finish editing the host service properties, you can view job progress from the Jobs subtab
on the Manage Host Services window and you can view details about each job from the Jobs tab.
Associating storage systems with a host service
For each host service instance, you must associate one or more storage systems that host virtual
machines for the host service. This enables communication between the service and storage to ensure
that storage objects, such as virtual disks, are discovered and that host service features work properly.
Before you begin
If you add a new storage system to associate with the host service, you must have the following
storage system information available:
IP address or name
Login and NDMP credentials
Access protocol (HTTP or HTTPS)
Steps
To associate storage systems shown in the Available Storage Systems list, select the system
names and click OK.
To associate a storage system not listed in Available Storage Systems, click Add, enter the
required information, and click OK.
The newly associated storage system displays in the Storage Systems area.
6. In the list of storage systems, verify that the status is Good for the login and NDMP credentials
for each storage system.
If the login or NDMP status is other than Good for any storage system, you must edit the storage
system properties to provide the correct credentials before you can use that storage system.
After you finish editing the host service properties, you can view job progress from the Jobs subtab
on the Manage Host Services window and you can view details about each job from the Jobs tab.
Editing storage system login and NDMP credentials from the Host Services tab
You must have valid login and NDMP credentials for storage systems so they can be accessed by the
DataFabric Manager server. If the server cannot access the storage, your backups might fail.
Before you begin
You must have the following storage system information available:
IP address or name
Login and NDMP credentials
Access protocol (HTTP or HTTPS)
Steps
After you finish editing the storage system properties, you can view job progress from the Jobs
subtab on the Manage Host Services window and you can view details about each job from the Jobs
tab.
1. Click the Administration menu, then click the Host Services option.
2. In the Host Services tab, click Delete.
3. In the Delete Host Service Confirmation dialog box, click Yes to delete the host service or click
No to terminate the deletion request.
Result
The host service is deleted from DataFabric Manager server and the associated virtual objects are
removed from the inventory lists.
If you restart the host service on the server side, the host service attempts to register with the
DataFabric Manager server. You can prevent this by manually changing the dfm_server attribute in
HSServiceHost.exe.config to a different value before restarting the host service plug-in.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Copy the DataFabric Manager server keys to the new DataFabric Manager server.
2. Enter the following command to reload the SSL service:
dfm ssl service reload
3. Enter the following command for each host service:
dfm hs configure -i <New DataFabric Manager server IP> <host service name or ID>
Moving a host service to a different DataFabric Manager server
If you uninstall the DataFabric Manager server and start with a fresh database, or if you point your
host service to a new DataFabric Manager server with a brand-new database without first
unregistering the host service, you might need to clean up the host service repository, either by
reinstalling the host service or by removing the old, leftover data from the repository.
Before you begin
If any of the resources from the host service you want to move are in a dataset, they must be removed
from the dataset prior to unregistering the host service.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
This procedure describes how to manually delete the old dataset information rather than reinstalling
the host service.
Steps
1. Delete the host service by completing the following steps:
a. Click the Administration menu, then click the Host Services option.
b. In the Host Services tab, click Delete.
c. In the Delete Host Service Confirmation dialog box, click Yes.
2. Stop the host service by using the Service Control Manager on the host service machine.
3. Clean up the data by performing the following steps:
a. Remove policyenforcementdata.xml and eventrepository.xml from the data stores
folder in the host service installation directory.
b. Delete any leftover messages in the messages queues.
Message queue folders end with "queue" and are located in the installation directory.
c. Clean up the scheduled jobs.
This step is done from the Microsoft Windows Task Scheduler on the host service machine.
4. Restart the host service by using the Service Control Manager on the host service machine.
5. Re-register the host service with DataFabric Manager server.
Related tasks
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Step
1. Click the Administration menu, then click the Host Services option.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
Note: If the host service FQDN changes, DataFabric Manager server will be unable to discover
any new virtual objects for the host service until you delete it and then add it again.
Steps
1. Click the Administration menu, then click the Host Services option.
2. In the Host Services tab, select a host service, then click Rediscover.
Result
A discovery job is started for the selected host service. When the discovery job finishes, DataFabric
Manager server reflects the current list of configured VMware or Hyper-V hosts and virtual objects
managed by the host service. You might need to refresh the host service information to see the
updated list.
Related tasks
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Host Services option.
2. In the Host Services tab, click Refresh.
Result
The current list of host services is retrieved from the DataFabric Manager server.
Related references
Page descriptions
Host Services tab
You can view information about registered virtual host services; add, configure, edit, and delete host
services; rediscover virtual objects; and refresh the virtual object inventory display from the Host
Services tab. You can access this window by clicking Administration > Host Services.
Command buttons
Add
Opens the Add Host Service dialog box, which allows you to add a virtual host service
to the OnCommand console.
You can add a host service by using the Add button and then configure it later by
using the Edit button.
Edit
Configures the credentials for a selected host service, which enables the host to be
used by the OnCommand console. Typically, you first add a host service and then
configure it. Thereafter, you use the Edit button to edit the configuration for a host, if
needed.
Delete
Deletes the selected host service from the DataFabric Manager server.
Refresh
Refreshes the list of host services from the DataFabric Manager server.
Name
The name of the host on which the host service is installed. This might be a
fully qualified domain name if the host is on a domain.
IP Address
The IP address of the host on which the host service is installed.
Admin Port
The administrative port that is used by plug-ins to discover information about the host service.
Management Port The host service port that is used for management operations.
Version
Discovery Status
Indicates whether the discovery of the host service was successful. "Error"
indicates that the discovery was not completely successful. The Jobs tab at the
bottom of the list displays the reason that the discovery failed.
Status
Indicates whether the host service is running (up) or not running (down).
Name
Indicates the name of the storage system associated with the host service.
IP Address
System Status
Indicates whether the host service has valid credentials for the storage system.
NDMP Status
Indicates whether the DataFabric Manager server has valid Network Data Management
Protocol (NDMP) credentials for the storage system.
Login Status (Server)
Indicates whether the DataFabric Manager server has valid login credentials for the storage
system.
Transport Protocol
You can click on the storage system name to display the respective entry in
the storage inventory.
More Details tab
This section displays detailed information about the components of the selected host service. The
components that are listed vary depending upon the virtual infrastructure type the host service is
managing.
Type
Version
Jobs tab
This section displays information about the most recent jobs that ran on the host service. Host service
jobs are typically discovery and host service software upgrade jobs.
Job ID
Job Type
The type of job, which is determined by the policy assigned to the dataset or by the
direct request initiated by a user.
Description
A description of the job, taken from the policy configuration or from the job description
entered when the job was manually started.
Started By
The ID of the user who started the job.
Start
Status
End
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
1. Click the Administration menu, then click the Storage Systems user option.
A separate browser window opens to the Host Users page in the Operations Manager console.
2. Click the Local Users tab.
3. Configure the local users.
For more information, see the Operations Manager Help.
4. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the
OnCommand console.
Related references
DataFabric Manager server is installed. Administrators can also manage CIFS data through
configuration management.
List of configuration management tasks for storage systems
You can perform a variety of configuration management tasks by using the storage system
configuration management feature.
Following are some of the tasks you can perform:
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
1. Click the Administration menu, then click the Storage Systems Configuration option.
A separate browser window opens to the Storage System Configurations page in the Operations
Manager console.
2. Configure the storage system.
For more information, see the Operations Manager Help.
3. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the
OnCommand console.
Related references
vFiler configuration
Understanding vFiler unit configuration
List of configuration management tasks for vFiler units
You can perform a variety of configuration management tasks by using the vFiler units configuration
management feature.
Following are some of the tasks you can perform:
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.
Steps
1. Click the Administration menu, then click the vFiler Configuration option.
A separate browser window opens to the vFiler Configurations page in the Operations Manager
console.
2. Configure the vFiler units.
For more information, see the Operations Manager Help.
3. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the
OnCommand console.
Related references
Options
Page descriptions
Setup Options
You can configure the following options from the Setup Options dialog box:
Backup
The Backup option enables you to configure the lag threshold values for all volumes or specific
secondary volumes.
Default Thresholds
Specifies the Backup Manager monitoring intervals.
Snapshot copy
Specifies the global-level names for Snapshot copies that are generated by dataset protection jobs.
Primary volume
Specifies the global-level names of primary volumes that are generated by protection jobs.
Secondary volume
Specifies the global-level names of secondary volumes that are generated by protection jobs.
Secondary qtree
Specifies the global-level names of secondary qtrees that are generated by protection jobs.
Costing
The Costing option enables you to configure the chargeback settings to obtain billing reports for the
space used by a specific storage object or a group of objects.
Chargeback
Specifies the parameters you can configure to generate billing reports for the amount of space
used by a specific object or a group of objects.
Database Backup
The Database Backup option enables you to configure the backup destination directory and retention
count for the DataFabric Manager server database backup, and also manages existing backups.
Schedule
Specifies the parameters that you can configure to schedule a database backup.
Completed
Displays the ongoing database backups and the associated database backup events.
Default Thresholds
The Default Thresholds option enables you to configure the global default threshold values for
objects such as aggregates, volumes, qtrees, user quotas, resource pools, HBA ports, and hosts.
Aggregates
Specifies the global default threshold values for monitored aggregates.
Volumes
Specifies the global default threshold values for monitored volumes.
Other
Specifies the global default threshold values for host agents, HBA ports, qtrees, user quotas, and
resource pools.
Discovery
The Discovery option enables you to set host discovery options, discovery methods, timeout and
interval values. You can also configure networks, and settings for the discovery of storage objects
such as networks, storage systems (including clusters), host agents, and Open Systems SnapVault
agents.
Options
Specifies the discovery methods and the monitoring interval for discovery.
Addresses
Specifies the network addresses that are scanned for new hosts.
Credentials
Specifies the credentials for network addresses that are used for network and host discovery.
File SRM
The File SRM option enables you to configure the File SRM settings, such as the number of largest
files, recently modified files, least accessed files, and least modified files.
Options
Specifies the file parameters that you can configure.
LDAP
The LDAP option enables you to configure the LDAP settings to successfully retrieve data from the
LDAP server.
Authentication
Specifies the authentication settings that help the DataFabric Manager server to communicate
with the LDAP servers.
Server Types
Specifies settings that are configured to establish compatibility with the LDAP server.
Servers
Specifies the LDAP server properties and the last authentication status.
Monitoring
The Monitoring option enables you to configure the monitoring intervals for various storage objects
monitored by the DataFabric Manager server.
Storage
Specifies the monitoring parameters for storage objects.
Protection
Specifies the monitoring parameters for the protection of storage objects.
Networking
Specifies the monitoring parameters for networking objects.
Inventory
Specifies the monitoring parameters for inventory objects.
System
Specifies the monitoring parameters for system objects.
Management
The Management option enables you to configure the connection protocols settings for management
purposes.
Client
Specifies the HTTP and HTTPS settings that you can configure to establish a connection between
the client and the DataFabric Manager server.
Managed Host
Specifies settings that you can configure to establish a connection between the managed host and
the DataFabric Manager server.
Host Agent
Specifies settings that you can configure to establish a connection between the host agent and the
DataFabric Manager server.
Systems
The Systems option enables you to configure the system settings such as event notifications (e-mail
notification, pager alerts, and SNMP traps), create custom comment fields, and set audit log options.
Alarms
Specifies settings that you can configure to send event notifications in different formats.
Annotations
Enables you to create annotations for the DataFabric Manager server that can be assigned to any
resource objects.
Miscellaneous
Specifies miscellaneous settings that you can configure such as audit log options, credential TTL
cache, and options to preserve your local configuration settings.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves the recent changes and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the recent changes without closing the Setup Options dialog box.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Options
You can configure the lag threshold values using the following options:
SnapVault Replica Out-of-Date Threshold
Purge Backup Jobs
Specifies whether backup job files that are older than the designated period of time are purged.
The default is 12.86 weeks.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves the recent changes and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the recent changes without closing the Setup Options dialog box.
Easy location of secondary and tertiary storage backup items for restoration
Custom naming applied to Snapshot copies, secondary volumes, or secondary qtrees enables you
to easily identify the backup objects in which to locate files for restoration.
Easy identification of backup data by an assortment of criteria
Custom naming, configured with specific naming conventions and applied to the backup objects,
enables you to identify those objects by priority, business unit, administrator, backup time,
physical container, or logical container.
Customization of formats to maintain naming used by imported protection relationships
Custom naming can be used by the OnCommand console to assign naming formats for related
objects that match the related-object naming conventions originally used by imported protection
relationships.
Easy identification of backup data related to protection, SnapManager, or SnapDrive operations
A consistent naming convention enables you to easily find the right data to restore even if that
backup data is related to such disparate activities as the OnCommand console protection
operations, SnapManager activity, or SnapDrive activity.
Easy identification of source and secondary volumes in case of application shutdown
If the database application that is generating the backed up data shuts down, the naming formats
of the associated Snapshot copy, secondary volume, or secondary qtree objects, if set properly,
enable you to identify primary volumes from the names of the secondary volumes and Snapshot
copies that are associated with that application.
Custom naming and storage management tasks
If you are a storage administrator, configuration of custom naming enables you to specify a
company-wide naming convention for related objects at the global level or at the dataset level.
Custom naming and application management tasks
If you are an application administrator and have specified a particular dataset in which to store data
generated by a particular application, custom naming enables you to specify distinctive naming
conventions for that dataset's related object types. The distinctive naming enables you to track the
objects that are generated by that application more easily.
Naming settings by format strings
The OnCommand console naming settings enable you to enter custom format strings that contain
letter attributes for identifying information to include in the names of protection-related objects.
The attributes that you can include in the format strings cause such identifying characteristics as
timestamps, storage system names, custom labels, dataset names, and retention type to be included in
the name of a protection-related object type.
Naming settings by naming script
Naming scripts are user-authored scripts for naming some protection-related object types (Snapshot
copies, primary volumes, or secondary volumes) that are generated by protection jobs being executed
on a dataset.
For each supported protection-related object type (secondary qtrees are the only related object type
for which naming scripts are not supported), you can write a script that uses environment variables
supported by the DataFabric Manager server to generate a name for objects of that type that are
generated when a protection job is executed on a dataset. When you configure global naming settings
in the Setup Options dialog box, you can specify this naming script and path as an alternative to
accepting the default naming settings or to entering a custom naming format string in the Setup
Options dialog box itself.
You can specify naming scripts for global naming settings only. Naming scripts are not applied to a
dataset's protection-related objects if that dataset's dataset-level naming settings specify a custom
naming format string instead.
Naming script restrictions and precautions
As you author and apply naming scripts for your dataset's protection-related object types, keep in
mind the following points:
You can specify naming scripts for global naming settings only.
Naming scripts are not applied to a dataset's protection-related objects if that dataset's dataset-level naming settings specify a custom naming format string instead.
The output of your naming scripts must be tested.
The naming script must be in a location that is readable from the DataFabric Manager server.
You can assign a naming script only to the global naming settings for Snapshot copy, primary
volume, and secondary volume objects.
You cannot assign a naming script to secondary qtree objects.
A naming script does not apply to an object type in a dataset if that dataset has a dataset-level
custom naming format enabled for that object type.
If you specify an incomplete script path in the global naming settings, an error event and job
failure result when a protection job is run.
If you specify a script that does not exist, an error event and job failure result when a protection
job is run.
If the script generates the name of a volume that already exists, an error event and job failure
result when a protection job is run.
ENV_DATASET_NAME
ENV_DATASET_LABEL
ENV_NODE_ID
ENV_NODE_NAME
ENV_STORAGE_SYSTEM_ID
ENV_VOLUME_ID
ENV_VOLUME_NAME
ENV_TIMESTAMP
ENV_RETENTION_TYPE
ENV_NODE_NAME
ENV_DATASET_NAME
ENV_DATASET_LABEL
ENV_NODE_ID
ENV_NODE_NAME
ENV_PRI_VOLUME_NAME
ENV_CONNECTION_TYPE
ENV_DP_POLICY_ID
ENV_DP_POLICY_NAME
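A naming script receives variables such as those listed above in its environment when a protection job runs, and its output is used as the generated object name. The following minimal sketch shows the idea; the composition logic is purely illustrative (not a NetApp-supplied convention), and an actual naming script can be any executable that is readable from the DataFabric Manager server.

```python
import os

def generate_snapshot_name():
    """Compose a Snapshot copy name from environment variables that the
    DataFabric Manager server sets for a naming script. The defaults and
    the dataset_volume_retention_timestamp layout are illustrative only."""
    dataset = os.environ.get("ENV_DATASET_NAME", "dataset")
    volume = os.environ.get("ENV_VOLUME_NAME", "volume")
    retention = os.environ.get("ENV_RETENTION_TYPE", "")
    timestamp = os.environ.get("ENV_TIMESTAMP", "")
    # Join only the parts that are present.
    return "_".join(part for part in (dataset, volume, retention, timestamp) if part)

if __name__ == "__main__":
    # The script's standard output is taken as the generated object name.
    print(generate_snapshot_name())
```

Remember that the output of a naming script must be tested, and that the script applies only at the global level, as described in the restrictions above.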
%T (Timestamp)
Indicates the year, month, date, and time of the Snapshot copy. The timestamp
is in the format yyyy-mm-dd_hhmm (along with the UTC offset).
%R (Retention type) Indicates whether the Snapshot copy's retention class is hourly, daily,
weekly, monthly, or unlimited.
%L (Custom label)
Indicates the custom label to include in the names of the related objects
that are generated by protection jobs that are run on this dataset. If the
naming format for a related object type includes the Custom label
attribute, then the value that you specify is included in the related object
names. If you do not specify a value, then the dataset name is used as the
custom label. If you include a blank space in the custom label string, the
blank space is converted to the letter x in any Snapshot copy, volume, or
qtree object name that includes the custom label as part of its syntax.
%H (Storage system name)
Indicates the name of the storage system that contains the volume from
which a Snapshot copy is made.
%N (Volume name)
Indicates the name of the volume from which a Snapshot copy is made.
%A (Application fields)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)
Format string and resulting name examples:
%L_%R_%T_%H
my_data_hourly_2010-03-04_0330+0430_mgt-u35
%T_myunit_%L-mysection-%R
2010-03-04_0330_myunit_my_data-mysection-hourly
myunit-mydept-%R_%H_%T
myunit-mydept-hourly_mgt-u35_2010-03-04_0403-0800
%R_%T_%N_%A
hourly_2010-03-04_0330_myVol_qtree1_qtree2_qtree3
%L_%R_%H_%2
my_data_hourly_mgt-u35_01
my_data_hourly_mgt-u35_02
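The attribute substitution described in this section can be sketched as follows. This is an illustration of the documented rules, not the DataFabric Manager implementation; the sample attribute values are taken from the first example above.

```python
def render_name(format_string, values):
    """Substitute naming attributes (%L, %R, %T, %H, %N, ...) into a format string."""
    name = format_string
    for attribute, value in values.items():
        name = name.replace(attribute, value)
    return name

# Sample attribute values matching the first example above. A blank space in
# a custom label would be converted to the letter x, e.g. "my data" -> "myxdata".
values = {
    "%L": "my_data",                # custom label
    "%R": "hourly",                 # retention type
    "%T": "2010-03-04_0330+0430",   # timestamp with UTC offset
    "%H": "mgt-u35",                # storage system name
}
print(render_name("%L_%R_%T_%H", values))
# → my_data_hourly_2010-03-04_0330+0430_mgt-u35
```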
When an SMHV plug-in creates a Snapshot copy in a host system, the plug-in creates two
Snapshot copies for every backup.
The second Snapshot copy has the string "_backup" appended to the end of the Snapshot copy
name irrespective of the order of attribute selection.
When an SMHV plug-in creates a Snapshot copy, and the Snapshot copy name exceeds the SMHV
Snapshot copy character limit, the SMHV plug-in does not truncate the name by removing the
characters of the Application fields attribute.
Instead, it truncates the name by removing characters before the Application fields
attribute, from right to left.
For Snapshot copies created by a SMHV plug-in, if the Application fields attribute is not
specified, it is added automatically at the end of the naming format.
For Snapshot copies created by a SMVI plug-in, if the Application fields attribute is not
specified, it is not added to the naming format.
For Snapshot copies created by the NetApp Management Console data protection capability, if
the Application fields attribute is not mentioned, it is not added implicitly in the naming
format.
When SMVI plug-in creates a Snapshot copy, and the Snapshot copy name exceeds the SMVI
Snapshot copy character limit, SMVI plug-in does not truncate the name by removing the
characters of the Application fields attribute.
Instead, it truncates the name by removing characters before the Application fields
attribute, from right to left.
If you use scripts to generate Snapshot copy names, and the Snapshot copy is generated by a
SnapManager plug-in on the host system, the plug-in does not use the user script. Instead, the
plug-in uses the global naming format to create the Snapshot copy name. The user script is used
only if the Snapshot copy is created by the NetApp Management Console data protection
capability.
Snapshot copies created by the host system are in the local time zone of the host system.
Administration | 429
Including four digits reserved for suffixes, a Snapshot copy name cannot exceed 128 characters.
The Snapshot copy name, excluding the suffixes, can be no more than 124 characters. If the
generated Snapshot copy name exceeds 124 characters, then the name is truncated by removing
characters from right to left.
To avoid possible truncation of timestamp information from the Snapshot copy name, best
practice is to place the timestamp %T attribute at the left end of the format string.
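The truncation rule above (124 characters for the name, with four more reserved for suffixes, and characters removed from the right end) can be sketched as follows. The helper name is hypothetical and this is an illustration, not the server's actual implementation:

```python
MAX_BASE_LEN = 124  # name limit excluding the 4 characters reserved for suffixes

def truncate_snapshot_name(name):
    """Drop characters from the right end until the name fits the limit."""
    return name[:MAX_BASE_LEN]

long_name = "2010-03-04_0330_" + "x" * 150
short = truncate_snapshot_name(long_name)
assert len(short) == MAX_BASE_LEN
# Because truncation removes characters from the right, a timestamp placed
# at the left end of the format string survives truncation:
assert short.startswith("2010-03-04_0330_")
```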
%D (Dataset name)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)
Example formats and the resulting primary volume names:
Format: %L_%D
Resulting name: mydata_mydataset
Format: %L
Resulting name: mydata
Format: pri_%L
Resulting name: pri_mydata
Format: myunit-privol
Resulting name: myunit-privol
Format: %L_%D_%3
Resulting names: mydata_mydataset_001, mydata_mydataset_002
If the primary volume's naming format in one or more datasets is customized, then the primary
volumes generated in those datasets are named according to the dataset-level format.
If you specify the name of a primary volume to be provisioned in the OnCommand console user
interface, then the name that you specify in the user interface takes precedence over the
primary volume naming settings options.
%C (Type)
%S (Primary storage system name)
%V (Primary volume name)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)
Example formats and the resulting secondary volume names:
Format: %L_%C_%S_%V
Resulting name: mydata_backup_myhost1_myVol1
Format: %C-%S-%L-destVol
Resulting name: backup_myhost1_mydata_destVol
Format: %C_%L
Resulting name: backup_mydata
Format: %V
Resulting name: myVol1
Format: %C_%L_%1
Resulting names: backup_mydata_1, backup_mydata_2
If the secondary volume naming format in one or more datasets is customized, then the secondary
volumes generated in those datasets are named accordingly.
When backing up Open Systems SnapVault (OSSV) directories, if you include the %V
(Primary volume name) attribute in the naming format, %V is replaced with %S
(Primary storage system name). This is because OSSV does not have the concept of a volume.
%S (Primary storage system name)
%V (Primary volume name)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)
Example formats and the resulting secondary qtree names:
Format: %L_%S_%V_%Q
Resulting name: mydata_myhost1_myVol1_qtree1
Format: %L_%S_%Q
Resulting name: mydata_myhost1_qtree1
Format: %V_%Q
Resulting name: myVol1_qtree1
Format: %L_%S_%V_%Q_%3
Resulting names: mydata_myhost1_myVol1_qtree1_001, mydata_myhost1_myVol1_qtree1_002
If the secondary qtree naming format in one or more datasets is customized, the secondary qtrees
generated in those datasets are named accordingly.
When taking a backup of Open Systems SnapVault directories, if you include the Primary
qtree name attribute in the naming format, and if the directory path contains non-ASCII
characters, or if the directory path is / (slash), then the Primary qtree name attribute is
replaced with the directory ID.
Managing naming settings options
Globally customizing naming of protection-related objects
To improve recognition and usability, you can globally customize the naming formats of all
protection-related objects (Snapshot copies, primary volumes, secondary volumes, or secondary
qtrees that are generated by the OnCommand console protection or provisioning operations).
Before you begin
You must have reviewed the Guidelines for globally customizing naming of protection-related
objects on page 436
You must have reviewed the Requirements and restrictions when customizing naming of
protection-related objects on page 438
You must have the custom naming information available.
Configuring global custom naming for a related object type applies to all objects of that type that are
generated by OnCommand console protection or provisioning jobs, unless those objects belong to a
dataset that is already configured with dataset-level custom naming for the object type in question.
Steps
1. In the OnCommand console Setup Options dialog box, select Naming Settings and select the
object type (Snapshot copy, primary volume, secondary volume, or secondary qtree) whose
global naming format you want to customize.
2. Customize the naming settings for the selected object type.
Select Use naming format if you want to customize naming by selecting and ordering
naming attributes.
Then use the name format field to complete your selection and ordering of naming attributes.
Select Use naming script if you want to customize naming by a pre-authored script.
The OnCommand console applies the custom global naming format to all objects of the customized
type that are generated by protection or provisioning jobs. An exception is made only for those
objects that belong to a dataset that has dataset-level naming customized for it.
Related references
Note: You do not have to assign a policy to create a new dataset. You can assign a policy to the
dataset later by running the Dataset Policy Change wizard.
If you intend to customize global naming by script, you should specify the path location of a pre-authored script and the name of an application to read and execute that script.
Note: The output of the naming script determines the name that is generated and given to the
dataset's related objects for which the script is written. If the script generates an error message,
part of that error message might be included in the names generated for the objects in question.
Snapshot copy naming attributes
Attributes that can be specified and ordered to customize global Snapshot copy naming include the following:
%T (Timestamp) (required)
%R (Retention type) (retention class of the Snapshot copy)
%L (Custom label) (custom label of the containing dataset, if one exists)
%H (Storage system name) (storage system of the containing volume)
%N (Volume name) (the containing volume)
%A (Application fields)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)
Primary volume naming attributes
Attributes that can be specified and ordered to customize global primary volume naming include the following:
%L (Custom label) (custom label of the containing dataset, if one exists)
%D (Dataset name) (actual name of the containing dataset)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)
Secondary volume naming attributes
Attributes that can be specified and ordered to customize global secondary volume naming include the following:
%L (Custom label) (custom label of the containing dataset, if one exists)
%C (Type) (connection type: backup or mirror)
%S (Primary storage system name) (storage system of the volume being backed up or mirrored)
%V (Primary volume name) (volume being backed up or mirrored)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)
Secondary qtree naming attributes
Attributes that can be specified and ordered to customize global secondary qtree naming include the following:
%Q (Primary qtree name) (qtree being backed up)
%L (Custom label) (custom label of the containing dataset, if one exists)
%S (Primary storage system name) (storage system of the volume being backed up or mirrored)
%V (Primary volume name) (volume being backed up or mirrored)
%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)
Primary volume
The following are the naming restrictions for primary volumes:
Secondary volume
The following are the naming restrictions for secondary volumes:
At least one attribute must be enabled at any point in time, or there must be some free-form text.
In case of a name conflict, numerical suffixes are appended to the names.
If the Fan-in feature is enabled for a backup destination, and two or more qtrees from different
primary volumes are backed up into the same secondary volume, then the Primary storage
system name and Primary volume name attributes of one randomly selected source volume
are used to form the name of the secondary volume.
For example, if host1:/vol1/qtr1, host2:/vol2/qtr2, and host3:/vol3/qtr3 are backed up to one
secondary volume, then all the names for the secondary qtrees in that volume include one
common <Host name> and <Volume name> attribute combination character string. That
common string is either "host1_vol1", "host2_vol2", or "host3_vol3".
If the Custom label attribute is included in the naming format, but no custom name exists for a
dataset, the resulting names use the actual dataset name instead.
A secondary volume name can be up to 60 characters long. If the generated secondary volume
name exceeds 60 characters, then the name is truncated by removing characters from left to
right. An additional 4 digits are reserved for suffixes; therefore, the maximum length of a
secondary volume name, including the suffix, is 64 characters.
A secondary volume name can contain only ASCII alphanumeric characters and _ (underscore).
Any other characters cause errors. A secondary volume name cannot start with a number.
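The secondary volume name restrictions above can be checked with a small validation sketch. This is an illustrative Python helper based only on the rules as stated; it is not part of any NetApp tool:

```python
import re

def is_valid_secondary_volume_name(name):
    """Check the documented rules: ASCII letters, digits, and underscore only;
    must not start with a number; at most 64 characters including a possible
    4-character suffix."""
    if len(name) > 64:
        return False
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None

assert is_valid_secondary_volume_name("backup_mydata_01")
assert not is_valid_secondary_volume_name("1backup")      # starts with a number
assert not is_valid_secondary_volume_name("backup-data")  # hyphen not allowed
```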
Secondary qtree
The following are the naming restrictions for secondary qtrees:
Unsupported characters in the root directory path are converted to the letter "x".
When taking a backup of Open Systems SnapVault directories, if you include the Primary
qtree name attribute in the naming format, and if the directory path contains non-ASCII
characters, or if the directory path is / (slash), then the Primary qtree name attribute is
replaced with the directory ID.
Page descriptions
Global Naming Settings Snapshot Copy area
You can customize the Snapshot copy naming settings in the Setup Options dialog box to determine
global-level names for Snapshot copies generated by dataset protection jobs.
Options
You can specify a Snapshot copy name at the global level by specifying either a naming format or
the path to a naming script.
Note: The global-level Snapshot copy naming settings apply to any Snapshot copy that is
generated by an OnCommand console protection job that is executed on a dataset that does not
have dataset-level Snapshot copy naming settings specified. Dataset-level Snapshot copy naming
settings take precedence over global-level naming settings.
Use Naming Format
Selecting this option specifies that global-level Snapshot copy naming is determined by
the format that is specified in the Name Format field.
Name Format
Enables you to specify global-level naming attributes of any Snapshot copy that is
generated by OnCommand console protection jobs. You can type the following attributes
(separated by the underscore character) in this field in any order:
%T (timestamp attribute)
%R (retention type attribute)
The retention class of the Snapshot copy
%L (custom label attribute)
The custom label, if any, that is specified for the containing dataset. If you do not
specify a value, then the dataset name is used as the custom label. If you include a
blank space in the custom label string, the blank space is converted to letter x in any
Snapshot copy, volume, or qtree object name that includes the custom label as part of
its syntax.
%H (storage system name attribute)
The name of the storage system that contains the volume from which a
Snapshot copy is made
%N (volume name attribute)
The name of the volume from which a Snapshot copy is made
%A (application fields attribute)
Data inserted by outside applications into the name of the Snapshot
copy. In the case of regular datasets, %A contains a list of qtrees on the
volume for which the Snapshot copy is made.
%1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish
Snapshot copies with otherwise matching names
Displays a sample Snapshot copy name based on the attributes that you
entered in the Name Format field.
For example, if the Snapshot copy naming format is customized as
%T_%R_%L_%H_%N_%A, and if a dataset with custom label "mydata" has some
data backed up "hourly" from primary to the backup destination mgt-u35:/myVol,
then the name of the Snapshot copy on myVol is
"2010-03-04_03.30.45+0430_hourly_mydata_mgt-u35_myVol_(Application fields)".
Use Naming Script
Selecting this option specifies at a global level the path and name of a user-authored
naming script. The script specifies how a Snapshot copy that is generated by
OnCommand console protection jobs is named. Naming scripts for Snapshot copies
apply to datasets of physical storage objects only. They do not apply to datasets of
virtual objects. Host services will use the Name Format instead of the script.
Script Path
The path and file name of a user-supplied naming script. The naming
script must be in a location that is readable by the DataFabric Manager
server.
Run As:
The name of the authorized user under whose identity the operations
that are specified in the naming script are executed.
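The way a naming script's output might become the generated object name can be sketched as follows. The actual interface between the DataFabric Manager server and a naming script is not described here; this Python sketch assumes, for illustration only, that the script prints the desired name to standard output:

```python
import os
import subprocess
import tempfile

def name_from_script(script_path, interpreter="sh"):
    """Run a user-supplied naming script and use its stdout as the object name.
    Only the first output line is used here; note that a script that emits an
    error message could leak part of that message into the generated name."""
    result = subprocess.run(
        [interpreter, script_path], capture_output=True, text=True, check=True
    )
    return result.stdout.splitlines()[0].strip()

# Demonstration with a throwaway script (hypothetical; any executable works):
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("echo my_custom_name\n")
    script = f.name
generated = name_from_script(script)
os.remove(script)
assert generated == "my_custom_name"
```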
Note: The global-level primary volume naming settings apply to any primary volume that is
generated by an OnCommand console protection job that is executed on a dataset that does not
have dataset-level primary volume naming settings specified. Dataset-level primary volume
naming settings take precedence over global-level naming settings.
Use Naming Format
Selecting this option specifies that global-level primary volume naming is determined
by the format that is specified in the Name Format field.
Name Format
Enables you to specify at a global level the naming attributes that are to be included
in the name of any primary volume that is generated by OnCommand console jobs.
You can type the following attributes (separated by the underscore character) in this
field in any order:
%L (custom label)
The custom label, if any, that is specified for the primary volume's
containing dataset. If no custom label is specified, then the dataset
name is included in the primary volume name.
%D (dataset name)
The actual name of the dataset in which a volume was created
%1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish
primary volumes with otherwise matching names
Displays a sample primary volume name based on the attributes that you
entered in the Name Format field.
For example, if the primary volume naming format is customized as %L_%D,
and a new volume is provisioned through the OnCommand console in a dataset
named "mydataset" with the custom label "mydata", then the name of the
primary volume is "mydata_mydataset."
Use Naming Script
Selecting this option specifies at a global level the path location and name of a
user-authored naming script. The script specifies how a primary volume that is
generated by OnCommand console protection jobs is named. Host services will use
the Name Format instead of the script.
Script Path
The path and file name of a user-supplied naming script. The naming
script must be in a location that is readable by the DataFabric Manager
server.
Run As:
The name of the authorized user under whose identity the operations
that are specified in the naming script are executed.
Use Naming Format
Selecting this option specifies that global-level secondary volume naming is determined
by the format that is specified in the Name Format field of this option.
Name Format
Enables you to specify at a global level the naming attributes that are to be
included in the name of any secondary volume that is generated by
OnCommand console protection jobs. You can type the following
attributes (separated by the underscore character) in this field in any order:
%L (custom label)
The custom label, if any, that is specified for the secondary volume's
containing dataset. If no custom label is specified, then the dataset
name is included in the secondary volume name.
It enables you to specify a custom string of alphanumeric characters, . (period), _
(underscore), or - (hyphen) to include in the names of the related
objects that are generated by protection jobs that are run on this
dataset. If the naming format for a related object type includes the
Custom label attribute, then the value that you specify is included in
the related object names. If you do not specify a value, then the dataset
name is used as the custom label. If you include a blank space in the
custom label string, the blank space is converted to letter x in any
Snapshot copy, volume, or qtree object name that includes the custom
label as part of its syntax.
%S (primary storage system name)
The name of the primary storage system
%V (primary volume name)
The name of the primary volume
%C (type)
The connection type (backup or mirror)
%1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish
secondary volumes with otherwise matching names
Use Naming Script
Selecting this option specifies at a global level the path location and name of a
user-authored naming script. The script specifies how a secondary volume that is
generated by OnCommand console protection jobs is named. Host services will use
the Name Format instead of the script.
Script Path
The path and file name of a user-supplied naming script. The naming
script must be in a location that is readable by the DataFabric Manager
server.
Run As:
The name of the authorized user under whose identity the operations
that are specified in the naming script are executed.
Note: The global-level secondary qtree naming settings apply to any secondary qtree that is
generated by an OnCommand console protection job that is executed on a dataset that does not
have dataset-level secondary qtree naming settings specified. Dataset-level secondary qtree
naming settings take precedence over global-level naming settings.
Name Format
Enables you to specify at a global level the naming attributes that are to be included in
the name of any secondary qtree that is generated by OnCommand console protection
jobs. You can type the following attributes (separated by the underscore character) in
this field in any order:
%L (custom label)
The custom label, if any, that is specified for the secondary qtree's containing
dataset. If no custom label is specified, then the dataset name is included in the
secondary qtree name.
It enables you to specify a custom string of alphanumeric characters, . (period), _ (underscore),
or - (hyphen) to include in the names of the related objects that are generated by
protection jobs that are run on this dataset. If the naming format for a related object
type includes the Custom label attribute, then the value that you specify is
included in the related object names. If you do not specify a value, then the dataset
name is used as the custom label. If you include a blank space in the custom label
string, the blank space is converted to letter x in any Snapshot copy, volume, or
qtree object name that includes the custom label as part of its syntax.
%S (primary storage system name)
The name of the primary storage system
%V (primary volume name)
The name of the primary volume
%Q (primary qtree name)
The name of the primary qtree
%1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish secondary
qtrees with otherwise matching names.
Displays a sample secondary qtree name based on the attributes that you entered in the
Name Format field.
For example, if the secondary qtree naming format is customized as %L_%S_%V_%Q, and
if a dataset with custom label "mydata" has some data backed up "hourly" from primary
myhost1:/myvol1/qtree1 to the backup destination myhost2:/myvol2, then the name of
the secondary qtree is "mydata_myhost1_myvol1_qtree1".
What chargeback is
You can configure the DataFabric Manager server to collect data related to space usage by
individual appliances and file systems, or by groups of them. The statistical information that you
collect can be used for chargeback and for planning space utilization.
The chargeback reports provide an easy way to track space usage and to generate bills based on your
specifications. If your organization bills other organizations or groups in your company for the
storage services they use, you can use the chargeback reports.
Managing costing options
Editing chargeback options
You can edit the chargeback options for objects to customize the billing information.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Costing option.
3. Click the Chargeback option.
The Costing Chargeback area appears.
4. Specify the chargeback increment, currency display format, the amount to charge for disk space
usage, and the day of the month when the billing cycle begins and ends.
5. Click Save and Close.
Related references
Options
You can configure the following chargeback options:
Chargeback Increment
Displays how the charge rate is calculated. You can specify this setting only at
the global level. You can specify the following values for the Chargeback
Increment option:
Daily: Charges are variable and are adjusted based on the number of days in
the billing period. Formula used by the DataFabric Manager server to
calculate the charges: Annual Rate / 365 x number of days in the billing period
Monthly: Charges are fixed, with a flat rate for each billing period regardless
of the number of days in the period. Formula used by the DataFabric Manager
server to calculate the charges: Annual Rate / 12
Annual Charge Rate (Per GB)
Displays the amount to charge for storage space usage, per GB, per year.
You must specify the value in the x.y notation, where x is the integer part of the
number and y is the fraction. For example, to specify an annual charge rate of
$150.55, you must enter 150.55.
Day of the Month for Billing
Displays the day of the month when the billing cycle begins. You can specify
the following values for this option:
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the configuration settings.
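The two chargeback increment formulas described above can be expressed directly. This Python sketch restates the documented formulas; the function names and the per-GB usage parameter are illustrative only:

```python
def daily_charge(annual_rate, days_in_period, gb_used):
    """Daily increment: Annual Rate / 365 x number of days in the billing period,
    applied here per GB of usage."""
    return annual_rate / 365 * days_in_period * gb_used

def monthly_charge(annual_rate, gb_used):
    """Monthly increment: a flat Annual Rate / 12 per billing period, per GB."""
    return annual_rate / 12 * gb_used

# 100 GB at an annual charge rate of 150.55 per GB:
print(round(monthly_charge(150.55, 100), 2))   # -> 1254.58
print(round(daily_charge(150.55, 30, 100), 2))
```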
Archive
This backup process backs up your critical data in compressed form as a .zip file. The DataFabric
Manager server data is automatically converted to an archive format, and the DataFabric Manager
server stores the backup in a local or remote directory. You can easily move an archive-based
backup to a different system and restore it. However, this backup process is time-consuming.
Snapshot
This backup process uses Snapshot technology to back up the database. This approach makes the
backup process faster. However, you cannot transfer a Snapshot backup to a different system and
restore it.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Database Backup option.
3. Click the Completed option.
4. In the Database Backup Completed area, select the backup file you want to delete.
5. Click Delete.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can
confirm your authorization in advance.
You must log in to the DataFabric Manager server as an administrator with the GlobalFullControl
role.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Database Backup option.
3. Click the Schedule option.
4. In the Database Backup Schedule area, specify the database backup properties, such as backup
type, backup path, retention count, and schedule.
You can select between the Archive and Snapshot backup types.
5. Select Schedule.
You can configure the time of your database backup schedule in minutes, hours, days, and weeks.
6. Click Save and Close.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Database Backup option.
3. Click the Schedule option.
4. In the Database Backup Schedule area, select a backup type:
Archive
Snapshot
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Database Backup option.
3. Click the Schedule option.
4. In the Database Backup Schedule area, select a backup type:
Archive
Snapshot
Options
The following options are available in the Database Backup Schedule area:
Status
Backup Type
Archive: Performs only critical data backup in a compressed form using the ZIP
format. You can transfer the backup to a different system and restore it with ease;
however, this process is time-consuming. By default, Archive is enabled.
Snapshot: Uses Snapshot technology to perform the database backup. Although
this is a faster option, you cannot transfer the backup to a different system and
restore it. You can export a Snapshot backup to an Archive backup by using the
dfm backup export snapshot-name command.
Note: Snapshot backup is enabled only when both of the following conditions
exist:
Either SnapDrive for Linux 2.2.1 (and later) or SnapDrive for Windows 4.2
(and later) is installed.
The DataFabric Manager server data resides on a dedicated LUN managed by
SnapDrive.
Backup Path
Retention Count
Specifies the maximum number of backups that the OnCommand console can store
simultaneously. If you exceed this limit, the old backups are automatically deleted
to provide space for new backups. The default retention count is 0.
Hourly at Minute: Displays the time (in minutes) at which the hourly backup must be
performed.
Every (Hours): Displays the time (in hours) at which the backup must be performed.
Starting Every Day At: Displays the time during the day when the backup must start.
This backup is performed based on the time interval set to the Every (Hours) field.
Daily At: Displays the time during the day when the backup must start. This backup
is performed once every 24 hours.
Weekly On: Displays the day for the weekly backup schedule.
At: Displays the time for the weekly backup schedule.
Note: Hourly backups are not possible with Archive backup. By default, Snapshot
backup is selected when the DataFabric Manager server data resides on a LUN.
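The retention-count behavior (oldest backups deleted automatically once the limit is exceeded) can be sketched as follows. This Python illustration is not the server's implementation, and it assumes that a retention count of 0 means "no automatic deletion", which the documentation does not explicitly state:

```python
def prune_backups(backups, retention_count):
    """Keep at most retention_count backups, deleting the oldest first.
    backups is a list of (creation_time, file_name) tuples.
    Assumption: a retention count of 0 is treated as unlimited."""
    if retention_count <= 0:
        return backups, []
    ordered = sorted(backups)                # oldest first
    excess = len(ordered) - retention_count
    if excess <= 0:
        return ordered, []
    return ordered[excess:], ordered[:excess]

backups = [(1, "bak1.zip"), (3, "bak3.zip"), (2, "bak2.zip")]
kept, deleted = prune_backups(backups, 2)
print([name for _, name in deleted])  # -> ['bak1.zip']
```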
Command buttons
The command buttons enable you to save or cancel the setup options, and back up data.
Save and Close
Saves the configuration settings and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the configuration settings.
Back Up Now
Backs up the database immediately.
Options
You can view the following information about the completed database backup:
File Name
Displays the name of the database backup file.
File Size
Displays the size of the database backup file.
Creation Time
Displays the time when the database backup file was created.
Backup Events
Displays the events that are triggered during the database backup. You can also
view the time when each event was triggered.
Command buttons
The command buttons enable you to save or cancel the setup options, and delete the database backup.
Save and Close
Saves the configuration settings and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the configuration settings.
Delete
Deletes the selected database backup.
Aggregates
Volumes
Qtrees
Hosts
Resource pools
User quotas
HBA ports
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Default Thresholds option.
3. Click the Aggregates option.
4. In the Default Thresholds Aggregates area, specify the new values, as required.
5. Click Save and Close.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Default Thresholds option.
3. Click the Volumes option.
4. In the Default Thresholds Volumes area, specify the new values, as required.
5. Click Save and Close.
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task
User quota thresholds that you configure for a qtree do not apply to qtree quotas.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Default Thresholds option.
3. Click the Other option.
4. In the Default Thresholds Other area, specify the new values, as required.
5. Click Save and Close.
Related references
Options
The following thresholds apply to all monitored aggregates. You can override the default values
for any aggregate from the Aggregates View page.
Full Threshold (%)
Nearly Full Threshold (%)
Full Threshold Interval
Displays the time that a condition can persist before the event is generated.
If the condition persists for the specified amount of time, the DataFabric
Manager server generates an Aggregate Full event. Threshold intervals
apply only to error and informational events.
If the threshold interval is 0 seconds, or a value less than the aggregate
monitoring interval, the DataFabric Manager server continuously generates
Aggregate Full events until an event is resolved. If the threshold interval is
greater than the aggregate monitoring interval, the DataFabric Manager
server waits for the specified threshold interval (which includes two or
more monitoring intervals), and generates an Aggregate Full event only if
the condition persisted throughout the threshold interval.
For example, if the monitoring cycle time is 60 seconds and the threshold
interval is 90 seconds, the threshold event is generated only if the condition
persists for two monitoring cycles.
The default is 0 seconds.
Overcommitted Threshold (%)
Nearly Overcommitted Threshold (%)
Snapshot Reserve Full Threshold (%)
Snapshot Reserve Nearly Full Threshold (%)
Over-Deduplicated Threshold (%)
Displays the percentage of user data that can be deduplicated and stored on
a volume before the system generates a Volume Over Deduplicated event.
The default is 150%.
Nearly Over-Deduplicated Threshold (%)
Displays the percentage of user data that can be deduplicated and stored on
a volume before the system generates a Volume Nearly Over Deduplicated
event.
The default is 140%.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves the configuration settings and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the configuration settings.
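The relationship between a threshold interval and the monitoring interval, as described for the Full Threshold Interval option above, can be modeled with a small sketch. This Python illustration is a simplified model of the documented behavior, not the DataFabric Manager server's code:

```python
import math

def cycles_required(threshold_interval, monitoring_interval):
    """Number of consecutive monitoring cycles a condition must persist
    before an event is generated."""
    if threshold_interval <= monitoring_interval:
        return 1  # event generated as soon as the condition is observed
    return math.ceil(threshold_interval / monitoring_interval)

def should_generate_event(observations, threshold_interval, monitoring_interval):
    """observations: booleans, one per monitoring cycle (True = condition seen)."""
    needed = cycles_required(threshold_interval, monitoring_interval)
    return len(observations) >= needed and all(observations[-needed:])

# A 60-second monitoring cycle with a 90-second threshold interval: the
# condition must persist for two cycles, matching the documentation's example.
assert cycles_required(90, 60) == 2
assert should_generate_event([True, True], 90, 60)
assert not should_generate_event([False, True], 90, 60)
```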
Options
The following thresholds apply to all monitored volumes.
Nearly Full Threshold (%)
Full Threshold Interval
Quota Overcommitted Threshold (%)
Quota Nearly Overcommitted Threshold (%)
Growth Event Minimum Change (%)
Snap Reserve Full Threshold (%)
Displays the value at which the space reserved for making volume Snapshot
copies is considered full.
The default is 90%.
No First Snapshot Threshold (%)
Nearly No First Snapshot Threshold (%)
Displays the value at which a volume is considered to have consumed most
of the free space that it needs when the first Snapshot copy is created.
This option applies to volumes that contain space-reserved files, no
Snapshot copies, and a fractional overwrite reserve set to greater than 0, and
for which the sum of the space reservations for all LUNs in the volume is
greater than the free space available to the volume.
You should specify a limit that is less than the value specified for the
Volume No First Snapshot Threshold option.
The default is 80%.
Space Reserve Depleted Threshold (%)
Displays the value at which a volume is considered to have consumed all its
reserved space.
This option applies to volumes with LUNs, Snapshot copies, no free space,
and a fractional overwrite reserve of less than 100%. A volume that has
crossed this threshold is getting dangerously close to having write failures.
The default is 90%.
Space Reserve Nearly Depleted Threshold (%)
Snapshot Count Threshold
Displays the limit to the number of Snapshot copies allowed on the volume.
A volume is allowed up to 255 Snapshot copies.
Snapshot Age Threshold
Specifies the limit to the age of a Snapshot copy allowed for the volume.
The Snapshot copy age can be specified in seconds, minutes, hours, days, or
weeks.
The default is 52 weeks.
Nearly Over-Deduplicated Threshold (%)
Displays the percentage of user data that can be deduplicated and stored on
a volume before the system generates a Nearly Over-Deduplicated event.
Over-Deduplicated Threshold (%)
Displays the percentage of user data that can be deduplicated and stored on
a volume before the system generates an Over-Deduplicated event.
Cancel
Enables you to undo all the configuration settings and then closes the Setup
Options dialog box.
Save
Saves the configuration settings.
Options
The following information is available under Other thresholds:
HBA Port Too Busy Threshold (%)
Displays the percentage of maximum traffic the HBA port can handle without adversely affecting performance. If this threshold is crossed, the DataFabric Manager server generates an HBA Port Traffic High event.
The default is 90%.
Host CPU Too Busy Threshold (%)
Displays the percentage of maximum traffic the host CPU can handle without adversely affecting performance.
The default is 95%.
Host CPU Busy Threshold Interval
Specifies the maximum duration of a threshold breach that can persist before the CPU busy event is generated.
If the condition persists for the specified amount of time, the DataFabric Manager server generates a CPU-too-busy event. Threshold intervals apply only to error and informational events.
If the threshold interval is 0 seconds or a value less than the CPU monitoring interval, the DataFabric Manager server generates CPU-too-busy events as they occur.
If the threshold interval is greater than the CPU monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a CPU-too-busy event only if the condition persisted throughout the threshold interval. For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the event is generated only if the condition persists for two monitoring cycles.
The default is 15 minutes.
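The relationship between the threshold interval and the monitoring interval described above can be sketched as a small model. This is a minimal illustration of the documented rule, not product code; the function name and return convention are hypothetical.

```python
import math

def cycles_required(threshold_interval_s: float, monitoring_interval_s: float) -> int:
    """Number of consecutive monitoring cycles the busy condition must
    persist before a CPU-too-busy event is generated.

    Per the documentation: if the threshold interval is 0 or less than
    the monitoring interval, events are generated as they occur (one
    cycle); otherwise the condition must hold for every monitoring
    cycle that falls within the threshold interval.
    """
    if threshold_interval_s <= monitoring_interval_s:
        return 1  # events are generated as the condition is observed
    return math.ceil(threshold_interval_s / monitoring_interval_s)

# The documented example: a 60-second monitoring cycle and a 90-second
# threshold interval require the condition to persist for two cycles.
print(cycles_required(90, 60))   # 2
print(cycles_required(0, 60))    # 1
```

With the default 15-minute threshold interval and a 60-second monitoring cycle, this model requires the condition to persist for 15 consecutive cycles.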
Qtree Full Threshold (%)
Qtree Nearly Full Threshold (%)
Displays the percentage at which a qtree is considered nearly full. If this limit is exceeded, the DataFabric Manager server generates a Qtree Nearly Full event.
You should specify a limit that is less than the value specified for the Qtree Full Threshold option.
The default is 80%.
Qtree Full Threshold Interval
Displays the maximum duration of a threshold breach before the Qtree Full threshold event is generated.
If the condition persists for the specified amount of time, the DataFabric Manager server generates a Qtree Full event. Threshold intervals apply only to error and informational events.
If the threshold interval is 0 seconds or a value less than the qtree monitoring interval, the DataFabric Manager server continuously generates Qtree Full events until the event is resolved. If the threshold interval is greater than the qtree monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Qtree Full event only if the condition persisted throughout the threshold interval. For instance, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring cycles.
Minimum Change (%)
Displays the minimum change in qtree size (as a percentage of total volume size). If the change in qtree size is more than the specified value, and the growth is abnormal with respect to the qtree-growth history, the DataFabric Manager server generates a Qtree Growth Abnormal event.
The default is 1%.
User Quota Full Threshold (%)
Displays the value at which a user is considered to have consumed all the allocated space (disk space or files used) as specified by the user's quota (hard limit in the /etc/quotas file).
If this limit is exceeded, the DataFabric Manager server generates a User Disk Space Quota Full or User Files Quota Full event.
The default is 90%.
User Quota
Nearly Full
Threshold (%)
Displays the value at which a user is considered to have consumed most of the
allocated space (disk space or files used) as specified by the user's quota (hard
limit in the /etc/quotas file).
If this limit is exceeded, the DataFabric Manager server generates a User Disk
Space Quota Almost Full or User Files Quota Almost Full event.
You should specify a limit that is less than the value specified for the User Quota Full Threshold option.
The default is 80%.
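The two quota thresholds above can be illustrated with a small classification sketch. This is an assumption-laden model of the documented behavior, not product code; the function name, event strings, and strict-inequality semantics are illustrative.

```python
def quota_event(used_pct: float, full_pct: float = 90.0, nearly_full_pct: float = 80.0):
    """Classify a user's disk-space quota usage against the Full (default
    90%) and Nearly Full (default 80%) thresholds, returning the name of
    the event that would be generated, or None if no threshold is crossed.

    The documentation says the Nearly Full limit should be set below the
    Full limit, so that relationship is validated here.
    """
    if nearly_full_pct >= full_pct:
        raise ValueError("Nearly Full threshold should be below the Full threshold")
    if used_pct > full_pct:
        return "User Disk Space Quota Full"
    if used_pct > nearly_full_pct:
        return "User Disk Space Quota Almost Full"
    return None

print(quota_event(95))  # User Disk Space Quota Full
print(quota_event(85))  # User Disk Space Quota Almost Full
print(quota_event(50))  # None
```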
Resource Pool Full Threshold (%)
Resource Pool Nearly Full Threshold (%)
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
When the DataFabric Manager server is installed for the first time or upgraded, the global and network settings use SNMPv1 as the preferred version by default. However, you can configure the global and network settings to use SNMPv3 as the default version.
Guidelines for editing discovery options
You must follow a set of guidelines for changing the default values of the discovery options.
Interval
This option specifies the period after which the DataFabric Manager server scans
for new storage systems and networks.
You can change the default value if you want to increase the minimum time
interval between system discovery attempts. This option affects the discovery
interval only at the time of installation. After storage systems are discovered, the
user should determine the interval based on the number of networks and their size.
If you choose a longer interval, there might be a delay in discovering new storage
systems, but the discovery process is less likely to affect the network load.
The default is 15 minutes.
Timeout
This option specifies the time interval after which the DataFabric Manager server
considers a discovery query to have failed.
You can change the default value if you want to lengthen the time before
considering a discovery to have failed (to avoid discovery queries on a local area
network failing due to the long response times of a storage system).
Host discovery
This option enables the discovery of storage systems, host agents, and vFiler units through SNMP.
You can change the default value if any of the following situations exist:
All storage systems that you expected the DataFabric Manager server to discover have been discovered and you do not want the DataFabric Manager server to continue scanning for new storage systems.
You want to manually add storage systems to the DataFabric Manager server database.
Manually adding storage systems is faster than discovering storage systems in the following cases:
Host agent discovery
You can change the default value if you want to disable the discovery of LUNs or storage area network (SAN) hosts and host agents.
Network discovery
This option enables the discovery of networks, including SAN and cluster networks.
You can change the default value if you want the DataFabric Manager server to automatically discover storage systems on your entire network.
Note: When the Network Discovery option is enabled, the list of networks on the Networks to Discover page can expand considerably as the DataFabric Manager server discovers additional networks attached to previously discovered networks.
Network Discovery Limit (in hops)
This option sets the boundary of network discovery as a maximum number of hops (networks) from the DataFabric Manager server.
You can increase this limit if the storage systems that you want the DataFabric Manager server to discover are connected to networks that are more than 15 hops (networks) away from the network to which the DataFabric Manager server is attached. The other method for discovering these storage systems is to add them manually.
You can decrease the discovery limit if a smaller number of hops includes all the networks with storage systems you want to discover. For example, reduce the limit to six hops if there are no storage systems that must be discovered on networks beyond six hops. Reducing the limit prevents the DataFabric Manager server from using cycles to probe networks that contain no storage systems that you want to discover.
The default is 15 hops.
Networks to discover
This option enables you to manually add or delete networks that the DataFabric Manager server scans for new storage systems.
You can change the default value if you want to add a network to the DataFabric
Manager server that it cannot discover automatically, or you want to delete a
network for which you no longer want storage systems to be discovered.
Network Credentials
This option enables you to specify, change, or delete an SNMP community that the DataFabric Manager server uses for a specific network or host.
You can change the default value if storage systems and routers that you want to
include in the DataFabric Manager server do not use the default SNMP
community.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Discovery option.
3. Click the Credentials option.
4. In the Discovery Credentials area, select the network address you want to edit.
5. Click Edit.
6. In the Edit Network Credentials dialog box, under Preferred SNMP version, select one of the following options:
SNMP v1: If you select SNMP v1, you can configure the SNMP communities.
SNMP v3: If you select SNMP v3, you can configure the following:
Auth protocol: You can choose either MD5 or SHA. The default is MD5.
Login: Type your login information.
Password: Type your password.
Privacy password: Type your privacy password.
7. Click OK.
8. Click Save and Close.
Related references
Options
Displays the settings you can configure to discover storage objects and the time taken for discovery.
Host discovery
Host agent discovery
Enables (default) or disables the discovery of hosts running the NetApp Host Agent software.
vFiler Unit discovery
Host-initiated discovery
Other Discovery Methods
Displays the methods you can use to discover storage systems and networks.
SAN discovery
Cluster discovery
Network discovery
Default Discovery Options (except vFiler Units)
Displays default settings that are used by the DataFabric Manager server to start or end the discovery process.
Interval
Displays the minimum time interval at which the DataFabric Manager server scans for new storage systems and networks.
The default is 15 minutes.
Timeout
Displays the time interval after which the DataFabric Manager server
considers a discovery query to have failed.
The default is 5 seconds.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Options
Displays the properties of the discovered network address in a tabular format. You can sort data
based on the filters applied to the columns. Multiple filters and single column sorting can be applied
at the same time.
Address
Prefix Length
Hop Count
Last Searched
Displays the date and timestamp of the last searched network address.
Command buttons
The command buttons enable you to save or cancel the setup options, and add, edit, or delete network
addresses.
Add
Edit
Delete
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Enables you to undo all the configuration settings and then closes the Setup Options dialog box.
Save
Options
Displays, in tabular format, the properties of the discovered network credential. You can sort data
based on the filters applied to the columns. Multiple filters and single column sorting can be applied
at the same time.
Address
Prefix Length
Command buttons
The command buttons enable you to save or cancel the setup options, and add, delete, or edit network
credentials.
Add
Edit
Enables you to edit the selected network and configure the SNMP settings.
Delete
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
You can then decide how to most efficiently use your existing space.
How FSRM monitoring works
The DataFabric Manager server monitors directory paths that are visible to the host agent. Therefore,
if you want to enable FSRM monitoring of NetApp storage systems, the remote host must mount a
NetApp share using NFS or CIFS, or the host must use a LUN on the storage system.
Note: The DataFabric Manager server cannot obtain FSRM data for files that are located in NetApp volumes that are not exported through CIFS or NFS. Host agents can also gather FSRM data about other file system paths that are not on a NetApp storage system: for example, local disks or third-party storage systems.
Configuring File SRM options
Adding new file types for monitoring file-level statistics
The DataFabric Manager server collects file-system metadata by using File SRM. You can add new file
types from the Setup Options dialog box.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Options
You can configure the following File SRM setting options:
Largest Files (Max)
Displays the maximum number of largest files for each File SRM
path.
SRM File Types
Command buttons
The command buttons enable you to save or cancel the setup options, and add or delete SRM file
types.
Add
Delete
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Related references
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Options
The LDAP authentication setting options are as follows:
LDAP Is Enabled
Specifies whether LDAP is enabled or disabled. By default, LDAP is disabled.
LDAP Bind DN
Specifies the bind distinguished name (DN) that the DataFabric Manager server uses to identify itself to the LDAP server.
LDAP Bind Password
Specifies the password that the DataFabric Manager server uses to gain access to the bind distinguished name.
LDAP Base DN
Specifies the directory on the LDAP server that the DataFabric Manager server uses as the base for its searches. For example, dc=domain, dc=com.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Options
The LDAP server type setting options are as follows:
Templates
Product Line Specifies the template that provides the predefined LDAP settings designed to
make DataFabric Manager server compatible with your LDAP server. You can
select Netscape/iPlanet (default), UMich/OpenLDAP, or Lotus Domino from the
drop-down menu.
Netscape/iPlanet
Lotus Domino
Custom
Protocol Version
Specifies the LDAP protocol version. You can select either 2 or 3 from the drop-down menu.
The default is 3.
UID Attribute
Specifies the name of the attribute in the LDAP directory that contains user login names to be authenticated by the DataFabric Manager server.
The default is UID.
GID Attribute
Specifies a value that assigns the DataFabric Manager server group membership to LDAP users based on an attribute and value specified in their LDAP user objects.
UGID Attribute
Member Attribute
Specifies the attribute name that your LDAP server uses to store information about the individual members of a group.
The default is uniqueMember.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Options
The LDAP server setting options are as follows:
Displays the IP address or host name of the LDAP server that is used to
authenticate the user on the DataFabric Manager server.
Port
Last Used
Displays the date and timestamp of the most recent authentication success.
Last Failed
Displays the date and timestamp of the most recent permission failure.
Command buttons
The command buttons enable you to add, delete, save or cancel the setup options.
Add
Delete
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
The flowchart on this page summarizes event and alarm generation: if an abnormal status is received from the storage system, an event is generated. If an alarm is configured for the event, the alarm is generated. If repeat notification is configured for the alarm, the alarm is repeated until it is acknowledged.
SNMP queries
The DataFabric Manager server uses periodic SNMP queries to collect data from the storage systems
it discovers. The data is reported by the DataFabric Manager server in the form of tabular and
graphical reports and event generation.
The time interval at which an SNMP query is sent depends on the data being collected. For example,
although the DataFabric Manager server pings each storage system every minute to ensure that the
storage system is reachable, the amount of free space on the disks of a storage system is collected
every 30 minutes.
Guidelines for changing monitoring intervals
Although you should generally keep the default values, you might need to change some of the
options to suit your environment. All the monitoring option values apply to all storage systems in all
groups.
If you decrease the monitoring intervals, you receive more real-time data. However, the DataFabric
Manager server queries the storage systems more frequently, thereby increasing the network traffic
and the load on the DataFabric Manager server and the storage systems responding to the queries.
If you increase the monitoring interval, the network traffic and the storage system load are reduced.
However, the reported data might not reflect the current status or condition of a storage system.
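The trade-off described above can be quantified with a rough estimate of polling load. This is an illustrative back-of-the-envelope model (the function name is hypothetical, and it assumes one query per system per interval, which simplifies the server's actual per-monitor behavior):

```python
def queries_per_hour(num_systems: int, interval_minutes: float) -> float:
    """Approximate SNMP queries per hour for a single monitor, assuming
    each monitored system is polled once per monitoring interval."""
    return num_systems * (60.0 / interval_minutes)

# Halving a 30-minute interval to 15 minutes doubles the query load
# on the network and on the systems responding to the queries:
print(queries_per_hour(200, 30))  # 400.0
print(queries_per_hour(200, 15))  # 800.0
```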
Managing monitoring options
Editing storage options for monitoring
Although the DataFabric Manager server is configured with defaults that enable you to manage the
global default threshold values immediately, you might need to change some of the storage options to
suit your environment. You can change the storage options from the Setup Options dialog box.
Before you begin
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Related concepts
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
1. Click the Administration menu, then click the Setup Options option.
2. In the Setup Options dialog box, click the Monitoring option.
3. Click the Networking option.
4. In the Monitoring Networking area, specify the following settings:
The ping is declared successful only if the reply is received before the ping timeout interval.
5. Click Save and Close.
Related references
Options
You can use the monitoring options to configure monitoring intervals for various storage objects that
are monitored by DataFabric Manager server.
Cluster
Interval
Specifies the time at which the DataFabric Manager server gathers status
information from each cluster.
The default is 15 minutes.
Cluster Failover Interval
Specifies the time at which the DataFabric Manager server gathers high-availability configuration status information from each controller.
vFiler Unit Interval
Specifies the time at which the DataFabric Manager server gathers information about vFiler units that are configured or destroyed on hosting storage systems.
Virtual Server Interval
Specifies the time at which the DataFabric Manager server gathers information about virtual servers on hosting cluster systems.
The default is 1 hour.
User Quota Interval
Specifies the time at which the DataFabric Manager server collects the user quota
information from the monitored storage systems.
The default is 1 day.
Note: The process of collecting the user quota information from storage systems
Cancel
Enables you to undo all the configuration settings and then closes the Setup Options dialog box.
Save
Options
You can use the monitoring options to configure monitoring intervals for various protection objects
that are monitored by DataFabric Manager server.
Dataset Conformance
Specifies the time at which the DataFabric Manager server checks whether each dataset conforms to its protection policy.
The default is 1 hour.
Dataset Disaster Recovery Status
Specifies the time at which the DataFabric Manager server checks the disaster recovery status of each dataset.
The default is 15 minutes.
Dataset Protection Status
Specifies the time at which the DataFabric Manager server checks the protection status of each dataset.
The default is 15 minutes.
Resource Pool Space
Specifies the time at which the DataFabric Manager server checks the space usage in each resource pool.
The default is 1 hour.
SnapMirror
Snapshot
Specifies the time at which the DataFabric Manager server gathers Snapshot information from each storage system.
The default is 30 minutes.
SnapVault
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Enables you to undo all the configuration settings and then closes the Setup
Options dialog box.
Save
Options
You can use the monitoring options to configure monitoring intervals for various networking objects
that are monitored by DataFabric Manager server.
Ping
Specifies the time at which the DataFabric Manager server pings a storage system.
A short ping interval is recommended when you want to quickly detect if a storage
system is available. The minimum ping monitoring interval is 1 second.
The default is 1 minute.
Note: The actual ping interval depends on variables such as networking conditions
and the number of monitored hosts. Therefore, the ping interval can be longer than
the specified value.
Ping Method
Specifies the ping method that the DataFabric Manager server uses to check that a storage system is accessible. By default, the ICMP echo and SNMP ping method is selected.
You can select one of the following options from the drop-down menu:
Ping Timeout
Specifies the time after which a storage system is considered to be not responsive if the DataFabric Manager server does not receive a reply from the storage system to a ping request.
The default is 3 seconds.
Ping Retry Delay
Specifies the time period that the ping utility remains inactive before retrying an unresponsive host.
SNMP Retry Delay
When an SNMP timeout occurs, this option specifies the time between SNMP connection attempts.
The default is 4 seconds.
When an SNMP timeout occurs, the SNMP monitor attempts to reconnect to the device for the number of times specified in the SNMP retries option. If the number of retries is exceeded, the DataFabric Manager server generates a Host SNMP Not Responding event.
SNMP
Timeout
Specifies the time that can elapse before an SNMP timeout occurs.
The default is 5 seconds.
When an SNMP timeout occurs, the SNMP monitor attempts to reconnect to the
device for the number of times specified in the SNMP retries option. If the number of
retries is exceeded, the DataFabric Manager server generates a Host SNMP Not
Responding event.
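The timeout-and-retry behavior described above can be sketched as a simple retry loop. This is a minimal illustration of the documented logic, not product code; the function name, the callable interface, and the returned event string are assumptions for the sketch.

```python
def poll_with_retries(query, retries: int) -> str:
    """Illustrative SNMP polling loop: 'query' returns a value or raises
    TimeoutError. After the initial attempt plus 'retries' reconnection
    attempts all time out, a Host SNMP Not Responding event results."""
    for _ in range(1 + retries):
        try:
            return query()
        except TimeoutError:
            # In the real monitor, the SNMP retry delay elapses here
            # before the next connection attempt.
            continue
    return "Host SNMP Not Responding"

def always_timeout():
    raise TimeoutError

attempts = {"n": 0}
def flaky():
    # Succeeds on the third attempt, simulating a slow device.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError
    return "sysName=filer01"

print(poll_with_retries(always_timeout, retries=3))  # Host SNMP Not Responding
print(poll_with_retries(flaky, retries=3))           # sysName=filer01
```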
Fibre Channel
Specifies the time at which the DataFabric Manager server gathers status information
from each Fibre Channel switch.
The default is 5 minutes.
SAN Host
Specifies the time at which the DataFabric Manager server gathers information from
each SAN host.
The default is 5 minutes.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Enables you to undo all the configuration settings and then closes the Setup
Options dialog box.
Save
The DataFabric Manager server gathers information such as CPU usage, disk space and status, environmental status, file system, and global status from the storage system.
Options
You can use the monitoring options to configure monitoring intervals for various inventory objects
that are monitored by DataFabric Manager server.
CPU
Specifies the time at which the DataFabric Manager server gathers CPU usage
information from each storage object.
The default is 5 minutes.
Disk Free Space Specifies the time at which the DataFabric Manager server gathers available disk
space information from each storage object.
In addition to monitoring free disk space on storage systems, the DataFabric
Manager server also monitors disk space on the workstation. If the workstation is
running low on disk space, monitoring is turned off.
The default is 30 minutes.
Disk
Specifies the time at which the DataFabric Manager server gathers disk status
information, such as disks that are not functioning or spare disk count.
The default is 4 hours.
Environmental
File System
Specifies the time at which the DataFabric Manager server gathers file system
information from each storage object.
The default is 15 minutes.
Global Status
Specifies the time at which the DataFabric Manager server gathers global status
information from each storage object.
The default is 10 minutes.
Interface
Specifies the time at which the DataFabric Manager server gathers network
interface information from each storage object.
The default is 15 minutes.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Enables you to undo all the configuration settings and then closes the Setup Options dialog box.
Save
Options
You can use the monitoring options to configure monitoring intervals for various system objects that
are monitored by DataFabric Manager server.
Agent
Specifies the time at which the DataFabric Manager server gathers status
information from each host agent.
The default is 2 minutes.
Config
Conformance
Specifies the time at which the DataFabric Manager server verifies that the
configuration on the storage system conforms with the configuration
provided by the DataFabric Manager server.
The default is 4 hours.
Host RBAC
Specifies the time interval at which the host RBAC Monitor should run.
The default is 1 day.
License
Specifies the time at which the DataFabric Manager server gathers license
status information from each appliance.
The default is 4 hours.
Operation Count
SRM Host
System
Information
Specifies the time at which the DataFabric Manager server gathers system
information from each system.
The default is 1 hour.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Enables you to undo all the configuration settings and then closes the Setup
Options dialog box.
Save
HTTP or HTTPS
RSH or SSH
Administration Port
hosts.equiv authentication
Login Protocol
This option enables you to set the login protocols (RSH or SSH) that the DataFabric Manager server uses when connecting to the managed hosts for the following:
Login connections
Active/active configuration operations
The dfm run command for running commands on the storage system
You can change the default value if you want a secure connection for active/active configuration operations and for running commands on the storage system.
Administration Transport
Administration Port
This option enables you to configure the administration port that, along with the administration transport, is used to monitor and manage storage systems.
If you do not configure the port option at the storage system level, the default value for the corresponding protocol is used.
hosts.equiv option
This option enables users to authenticate storage systems when the user name and password are not provided.
You must change the default value if you have selected the global default option and if you do not want to set authentication for a specific storage system.
Note: If you do not set the transport and port options for a storage system, then the DataFabric
Manager server uses SNMP to get storage system-specific transport and port options for
communication. If SNMP fails, then the DataFabric Manager server uses the options set at the
global level.
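The fallback order described in the note above can be sketched as a small resolution function. This is an illustrative model only; the function name and the None-based "not set / failed" convention are assumptions, not part of the product.

```python
def resolve_transport(system_setting, snmp_reported, global_setting):
    """Illustrative resolution of a storage system's administration
    transport: an explicit per-system setting wins; otherwise the value
    obtained from the system over SNMP is used; if SNMP fails (modeled
    here as None), the option set at the global level applies."""
    if system_setting is not None:
        return system_setting
    if snmp_reported is not None:
        return snmp_reported
    return global_setting

print(resolve_transport(None, "https", "http"))    # https
print(resolve_transport(None, None, "http"))       # http
print(resolve_transport("https", "http", "http"))  # https
```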
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
3. In the Management Managed Host area, specify the new values, as required.
4. Click Save and Close.
Related concepts
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
Enable HTTP
Enables you to connect the clients to the DataFabric Manager server. By default, HTTP is enabled with the default port 8080.
Enable
HTTPS
Enables you to connect the clients to the DataFabric Manager server. By default,
HTTPS is enabled with the default port 8443.
Note: To enable HTTPS on the DataFabric Manager server, you must configure
SSL using the dfm ssl server setup command.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Options
You can configure the following options:
Login Protocol
Specifies the login protocol that the DataFabric Manager server must use
when connecting to managed hosts.
The default is Remote Shell (RSH).
Administration
Transport
Specifies the transport protocol that the DataFabric Manager server must
use when connecting to storage systems.
The default is HTTP.
Administration Port Specifies the port that the DataFabric Manager server must use when
connecting to storage systems.
The default is port 80.
Enable hosts.equiv
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves all the configuration settings and then closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Options
You can configure the following options:
Login
Specifies the administrator access level that the DataFabric Manager server
uses to connect with the host agent.
The default login option is guest.
You can specify the following login options for NetApp Host Agent:
guest
Enables the user to log in to the host agent to monitor its LUNs.
admin
Enables the user to log in to the host agent to monitor and manage its
LUNs and successfully execute file walk on directory structures (paths).
Monitoring Password
Specifies the password that the NetApp Host Agent uses to authenticate a "guest" user that has monitoring access privilege on the host agent. This value is the NetApp Host Agent software option, Monitoring API Password.
Management Password
Specifies the password that the NetApp Host Agent uses to authenticate an "admin" user account that has monitoring and management access privileges.
Administration Transport
Specifies the transport protocol that runs on the host agent. You can select either HTTP or HTTPS from the drop-down menu.
The default is HTTP.
Administration
Port
Specifies the port used for communication between the DataFabric Manager
server and NetApp Host Agent.
The default is port 4092.
CIFS Account
Specifies the CIFS account name. This information is required for host agents
running Windows during file walk on CIFS shares.
CIFS Password
Specifies the Host Agent CIFS password. This information is required for
Windows during file walk on CIFS shares.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves the recent changes and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the recent changes.
Annotations
You can add, edit, or delete user-defined properties such as the e-mail address of the owner, owner name, and resource tag information.
Audit Log
You can use the audit log option to set the global auditLogForever option to keep the audit log files forever in the default log directory of the DataFabric Manager server. You can view the specific operation in the audit.log file and determine who performed certain actions from the CLI.
Credential Cache
You can use the credential cache option to specify the Time-To-Live (TTL) for web responses cached by the DataFabric Manager server.
Storage System Configuration
You can use the storage system configuration option to manage local configuration file changes on all storage systems that are discovered and managed by the DataFabric Manager server.
Script Plugins
You can configure the script plug-ins to specify the search path that the DataFabric Manager server uses to find script interpreters.
Paged Tables
You can configure the number of rows for display in a table. This is applicable only to the Operations Manager console.
Note: This option is not applicable to OnCommand console list tables. It is applicable only to the Operations Manager console tables.
You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
Steps
4. Click Save and Close.
Related references
Options
You can configure the following options:
E-mail
Mail Server
Specifies the name of your mail server.
From Field
Specifies the e-mail address of the owner who is marked on all the e-mails.
Events
Purge Interval
Specifies the period of time after which events are removed from the DataFabric
Manager server database. The DataFabric Manager server evaluates events for
deletion on a daily basis.
SNMP Traps
Specifies the SNMP trap settings to be received from storage systems. By default, the SNMP trap option is enabled.
Listener Port
Specifies the UDP port on which the DataFabric Manager server Trap Listener
receives traps. To use this feature, you must also configure the DataFabric Manager
server as a trap destination in the systems you are monitoring. The DataFabric
Manager server Trap Listener communicates through port 162, by default.
Window Size
Specifies the period of time used, together with the Max Traps/Window option, to determine the number of SNMP traps that the trap listener can receive. The default is 5 minutes.
Max Traps/Window
Specifies the maximum number of SNMP traps that the workstation receives within
the time specified in the SNMP Trap Window Size option. The trap listener attempts
to limit the incoming rate of traps to this value.
The default is 250 traps per window.
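Taken together, Window Size and Max Traps/Window describe a rolling-window rate limit. The following Python sketch only illustrates the behavior described above, using the stated defaults of a 300-second window and 250 traps; the DataFabric Manager server's actual implementation is not part of this documentation.

```python
from collections import deque

class TrapWindowLimiter:
    """Accept at most max_traps trap arrivals per rolling window_seconds."""

    def __init__(self, window_seconds=300, max_traps=250):
        self.window = window_seconds
        self.max_traps = max_traps
        self.arrivals = deque()  # timestamps of accepted traps

    def accept(self, now):
        # Drop accepted arrivals that have aged out of the window.
        while self.arrivals and now - self.arrivals[0] >= self.window:
            self.arrivals.popleft()
        if len(self.arrivals) < self.max_traps:
            self.arrivals.append(now)
            return True
        return False  # this trap exceeds the per-window budget
```

With a 10-second window and a budget of 3, a fourth trap inside the window is refused, and acceptance resumes once earlier arrivals age out.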
User Quota Alerts
Enables you to configure or change the values of options that specify e-mail domains and enable or disable alerts based on quota events. By default, the user quota alerts option is enabled.
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves the recent changes and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the recent changes.
Options
You can configure the following options:
Name
Displays the list of comment field names that include both system-defined and user-defined annotations:
ownerEmail
Specifies the system-defined annotation used for owner's e-mail address.
ownerName
Specifies the system-defined annotation used for the name of the owner.
resourceTag
Specifies the system-defined annotation used for resource tag information.
Command buttons
Edit
Edits the selected annotation.
Delete
Deletes the selected annotation.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the recent changes.
Options
You can configure the following options:
Audit Log
Enables you to set the global option auditLogForever to keep the audit log
files forever in the default log directory of the DataFabric Manager server.
You can view the specific operation in the audit.log file and determine who
performed certain actions in the OnCommand console, the command line
interface, and the APIs. By default, the audit log option is enabled.
Credential Cache
Enables you to specify the Time-To-Live (TTL) for web responses cached by the DataFabric Manager server.
When a user authenticates to the DataFabric Manager server, the DataFabric
Manager server caches the web response and reuses the information to satisfy
subsequent authentication queries for the amount of time specified in this option.
TTL
Displays the Time-To-Live for LDAP server responses cached by the
DataFabric Manager server. By default, LDAP and Windows authentication
information is cached for 20 minutes.
Storage System Configuration
Enables you to manage local configuration file changes on all storage systems that are recognized and managed by the DataFabric Manager server.
Script Plugins
Enables you to specify the search path that the DataFabric Manager server uses
to find script interpreters. The value you enter for this option must be a string
that contains multiple paths, delimited by colons or semicolons, depending on
your DataFabric Manager server's platform.
Search Path
Displays the path for locating the script interpreters. The DataFabric Manager server uses this path information before using the system path when searching for script interpreters.
Paged Tables
Enables you to configure the number of rows displayed in tables in reports.
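Because the Search Path value is delimited by colons or semicolons depending on the platform, it follows the platform path-list convention that Python exposes as os.pathsep. A hypothetical resolver over such a value might look like the following sketch; find_interpreter is an illustrative name, not a DataFabric Manager API.

```python
import os

def find_interpreter(name, search_path):
    """Return the first file named `name` found in the colon- or
    semicolon-delimited `search_path`, or None if it is absent."""
    for directory in search_path.split(os.pathsep):
        if not directory:
            continue  # skip empty segments such as a trailing separator
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```

Directories earlier in the string win, which matches the documented behavior of consulting the configured path before the system path.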
Command buttons
The command buttons enable you to save or cancel the setup options.
Save and Close
Saves the recent changes and closes the Setup Options dialog box.
Cancel
Does not save the recent changes and closes the Setup Options dialog box.
Save
Saves the recent changes.
Note: A user who is part of the local administrators group is treated as a super-user and is automatically granted full access.
GlobalDataProtection
GlobalDataset
GlobalDelete
GlobalHostService
GlobalEvent
GlobalFullControl
Enables you to view and perform any operation on any object in the
DataFabric Manager server database and configure administrator
accounts. You cannot apply this role to accounts with group access
control.
GlobalMirror
GlobalRead
GlobalRestore
GlobalWrite
GlobalResourceControl
GlobalPerfManagement
Related information
Policies
The VI administrator needs the following operation permissions for the group created for the VI administrator role:
DFM.Database: All
DFM.BackManager: All
DFM.ApplicationPolicy: All
DFM.Dataset: All
DFM.Resource: Control
Policy templates
The VI administrator needs the following operation permission for each policy template, located under Local Policies, that you want the VI administrator to be able to copy:
DFM.ApplicationPolicy
Storage services
The VI administrator needs the following operation permission for each of the storage services that you want to allow the VI administrator to use:
DFM.StorageService: Read
Protection policies
These are the policies contained within the storage services that you selected above:
DFM.Policy: All
Understanding authentication
Authentication methods on the DataFabric Manager server
The DataFabric Manager server uses the information available in the native operating system for
authentication. The server does not maintain its own database of administrator names and passwords.
You can also configure the DataFabric Manager server to use Lightweight Directory Access Protocol
(LDAP). If you configure LDAP, then the server uses it as the preferred method of authentication.
Plug-ins
Hyper-V troubleshooting
Error: Vss Requestor - Backup Components failed with partial writer error.
Description
This message occurs when backing up a dataset using the Hyper-V plug-in. This
error causes the backup to fail for some of the virtual machines in the dataset.
The following message appears:
Error: Vss Requestor - Backup Components failed with partial
writer error.Writer Microsoft Hyper-V VSS Writer involved in
backup or restore operation reported partial failure. Writer
returned failure code 0x80042336. Writer state is 5.
Application specific error information:
Application error code: 0x1
Application error message: Failed component information:
Failed component: VM GUID XXX
Writer error code: 0x800423f3
Application error code: 0x8004230f
Application error message: Failed to revert to VSS
snapshot on the virtual hard disk 'volume_guid' of the
virtual machine 'vm_name'. (Virtual machine ID XXX)
The following errors appear in the Windows Application event log on the Hyper-V
host:
Volume Shadow Copy Service error: Unexpected error calling
routine GetOverlappedResult. hr = 0x80070057, The parameter
is incorrect.
Operation:
Revert a Shadow Copy
Context:
Execution Context: System Provider
Volume Shadow Copy Service error: Error calling a routine on
a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}.
Routine details RevertToSnapshot [hr = 0x80042302, A Volume
Shadow Copy Service component encountered an unexpected error.
Check the Application event log for more information.].
Operation:
Revert a Shadow Copy
Context:
Execution Context: Coordinator
Error: Failed to start VM. You might need to start the VM using Hyper-V
Manager
Description
After a successful restore operation, you might get an error message stating that
your Hyper-V virtual machine did not restart. The Hyper-V plug-in gives this error
because the virtual machine is not yet ready to start.
Corrective
action
Currently the Hyper-V plug-in waits two seconds before restarting the virtual
machine. You can configure a longer delay by adding the following attribute in the
Windows registry:
Key: System\CurrentControlSet\Services\OnCommandHyperV\Parameters
Attribute (DWORD): vm_restart_sleep
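Expressed as a registry-editor (.reg) fragment, the key and value above would look like the following sketch. The HKEY_LOCAL_MACHINE hive and the sample delay value are assumptions; this guide does not state the unit for vm_restart_sleep, so verify it before applying.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OnCommandHyperV\Parameters]
; sample value only; confirm the expected unit for this attribute
"vm_restart_sleep"=dword:0000000a
```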
This message occurs when you back up a dataset using the Hyper-V plug-in and the
following error appears in the Windows Application event log on the Hyper-V host.
A Shadow Copy LUN was not detected in the system and did not
arrive.
LUN ID: guid
Version: 0x0000000000000001
Device Type: 0x0000000000000000
Device TypeModifier: 0x0000000000000000
Operation:
Exposing Disks
Locating shadow-copy LUNs
PostSnapshot Event
Executing Asynchronous Operation
Context:
Execution Context: Provider
Provider Name: Data ONTAP VSS Hardware Provider
Provider Version: 6.1.0.4289
Provider ID: {ddd3d232-a96f-4ac5-8f7b-250fd91fd102}
Current State: DoSnapshotSet
Corrective
action
Error: Vss Requestor - Backup Components failed. Writer Microsoft Hyper-V VSS Writer involved in backup or restore encountered a retryable error
Description
If you receive a VSS retry error that causes your backup to fail, the Hyper-V plug-in
retries the backup three times with a wait of one minute between each attempt.
The following error message is displayed in the Hyper-V plug-in report and the
Windows Event log:
Error: Vss Requestor - Backup Components failed.Writer
Microsoft Hyper-V VSS Writer involved in backup or restore
encountered a retryable error. Writer returned failure code
0x800423f3. Writer state is XXX. For more information, see the
Hyper-V-VMMS event log in the Windows Event Viewer.
Corrective
action
You can configure the number of retries (retry count) and the duration of wait time
between the retries (retry interval) using the following registry keys:
Key: HKLM\System\CurrentControlSet\Services\OnCommandHyperV\Parameters
DWORD value in seconds: vss_retry_sleep (the time duration to wait between retries)
DWORD value: vss_retry (number of retries)
These settings are at the Hyper-V host level and the keys and values should be set on
the Hyper-V host for each virtual machine. If the virtual machine is clustered, the
keys should be set on each node in the cluster.
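As a sketch, the two values might be set through a .reg fragment like the following; the retry count and interval shown are examples, not defaults stated by this guide.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OnCommandHyperV\Parameters]
; time to wait between retries, in seconds
"vss_retry_sleep"=dword:0000003c
; number of retries
"vss_retry"=dword:00000006
```

Because the settings are per host, the same fragment would have to be imported on every node of a clustered configuration.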
After first configuring the Hyper-V plug-in, or after a failover, Hyper-V virtual objects take a long time to appear in the OnCommand console.
Cause
The Hyper-V plug-in uses SnapDrive for Windows to enumerate virtual machines.
With large numbers of virtual machines in a clustered setup, it can take SnapDrive
for Windows a significant amount of time to enumerate all of the virtual machines,
so the discovery of Hyper-V objects takes time.
Corrective
action
Depending on the size of your setup, discovery might take longer than you expect.
The Hyper-V plug-in does not support MBR LUNs for virtual machines running
on shared volumes or cluster shared volumes.
Cause
A Microsoft API issue returns different volume GUIDs when the cluster shared
volume disk ownership changes from active to passive, for example, from Node A to
Node B. The volume GUID is not the same as the GUID in the cluster disk
resource property. This issue also applies to virtual machines made highly
available using Microsoft Failover clustering.
Corrective
action
Related information
To successfully complete the backup operation, you need to fix the virtual machine that has the issue.
If that is not possible, you can temporarily move the virtual machine out of the dataset, or create a
dataset that only contains virtual machines known not to have a problem.
Space consumption when taking two snapshot copies for each backup
Issue
For every backup containing Hyper-V objects, two snapshots are created, which can lead to concerns over space consumption.
Cause
Corrective
action
The two snapshots are considered a pair. When the retention period ends for the
backup, both the snapshots are deleted. You should not manually delete the first
snapshot since it is necessary for restore operations.
Microsoft VSS only supports backing up VMs on the host that owns the Cluster Shared Volume (CSV), so CSV ownership moves between the nodes to create backups of the VMs on each host in the cluster.
When backing up a CSV, the Hyper-V plug-in creates two snapshots per host in the cluster that runs a VM from that CSV. This means that if you back up 15 VMs on a single CSV, and those VMs are evenly split across three Hyper-V servers, there will be a total of six snapshots per backup.
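The snapshot count described above is a simple function of how many cluster nodes run VMs from the CSV being backed up. The sketch below restates the rule from this section; it is an illustration, not a published formula.

```python
def snapshots_per_csv_backup(hosts_running_vms):
    """Two snapshots are created for each host in the cluster
    that runs a VM from the CSV being backed up."""
    return 2 * hosts_running_vms

# 15 VMs evenly split across three Hyper-V servers involves 3 hosts,
# so a backup of that CSV produces 2 * 3 = 6 snapshots.
```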
Virtual machine snapshot file location change can cause the Hyper-V plug-in backup to fail
If you change a virtual machine snapshot file location to a different Data ONTAP LUN after creating
the virtual machine, you should create at least one virtual machine snapshot using Hyper-V manager
before making a backup using the Hyper-V plug-in. If you change the snapshot file location to a
different LUN and do not make a virtual machine snapshot before making a backup, the backup
operation could fail.
Cause
The Hyper-V writer takes a hardware snapshot of all the LUNs in the virtual
machine using the SnapDrive for Windows VSS hardware provider.
Corrective
action
You can use a Microsoft hotfix that uses the default system provider (software provider) in the virtual machine to make the snapshot. As a result, the Data ONTAP VSS hardware provider is not used for snapshot creation inside the child OS and the backup speed increases. See Knowledge Base article 975354 on the Microsoft support site.
Related information
Cause
The Hyper-V plug-in restore operations delete the virtual machine configuration
information from the Hyper-V host before performing a restore operation. This
behavior is by design from the Microsoft Hyper-V Writer.
Corrective
action
Ensure that the backup schedule does not coincide with the restore operation, or that
the on-demand backup you want to perform does not overlap with a restore
operation of the same data.
You must have manually copied all of the virtual machine files, including the virtual machine configuration, VHDs, and virtual machine snapshot files, from the backup snapshot copy to the virtual machine's original path.
Steps
When you set the value to 1 and perform a restore operation, the Hyper-V plug-in does not copy
the data from the Data ONTAP snapshot copies to the original virtual machine location.
2. Restore the virtual machine from any backup using the Hyper-V plug-in or the Restore-Backup
PowerShell cmdlet.
The Hyper-V plug-in notifies the Hyper-V VSS writer to restore the virtual machine from
existing data.
3. When you are finished, delete the registry value created during Step 1.
When you perform a backup of a virtual machine that uses Windows Server 2003,
it repeatedly fails due to a retry error.
Corrective
action
Check the Windows Application event log inside the virtual machine for any VSS
errors. You can also see Knowledge Base article 940184 on the Microsoft support
site if you see the following error:
Volume Shadow Copy Service error: An internal inconsistency
was detected in trying to contact shadow copy service
Hyper-V VHDs do not appear in the OnCommand console after being properly
added to a virtual machine.
Cause
After a VHD is added to a virtual machine, a Windows WMI event for virtual
machine changes is not generated and pass-through LUNs are not properly listed
in the OnCommand console.
Corrective
action
Resend the storage system credentials for one of the storage systems managed by
the host service.
Copyright information
Copyright © 1994–2011 NetApp, Inc. All rights reserved. Printed in the U.S.A.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S.A. patents, foreign
patents, or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer
Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
NetApp, the NetApp logo, Network Appliance, the Network Appliance logo, Akorri,
ApplianceWatch, ASUP, AutoSupport, BalancePoint, BalancePoint Predictor, Bycast, Campaign
Express, ComplianceClock, Cryptainer, CryptoShred, Data ONTAP, DataFabric, DataFort, Decru,
Decru DataFort, DenseStak, Engenio, Engenio logo, E-Stack, FAServer, FastStak, FilerView,
FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexSuite, FlexVol, FPolicy, GetSuccessful,
gFiler, Go further, faster, Imagine Virtually Anything, Lifetime Key Management, LockVault,
Manage ONTAP, MetroCluster, MultiStore, NearStore, NetCache, NOW (NetApp on the Web),
Onaro, OnCommand, ONTAPI, OpenKey, PerformanceStak, RAID-DP, ReplicatorX, SANscreen,
SANshare, SANtricity, SecureAdmin, SecureShare, Select, Service Builder, Shadow Tape,
Simplicity, Simulate ONTAP, SnapCopy, SnapDirector, SnapDrive, SnapFilter, SnapLock,
SnapManager, SnapMigrator, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot,
SnapSuite, SnapValidator, SnapVault, StorageGRID, StoreVault, the StoreVault logo, SyncMirror,
Tech OnTap, The evolution of storage, Topio, vFiler, VFM, Virtual File Manager, VPolicy, WAFL,
Web Filer, and XBB are trademarks or registered trademarks of NetApp, Inc. in the United States,
other countries, or both.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. A complete and current list of
other IBM trademarks is available on the Web at www.ibm.com/legal/copytrade.shtml.
Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or
other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft
Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer,
RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and
SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries.
All other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such.
NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks.
NetApp, Inc. NetCache is certified RealSystem compatible.
Index
A
access roles (RBAC)
See RBAC
Add Dataset wizard
decisions to make for protection 200
administration
HTTP transport 494, 495
port 494–496
transport 495, 496
Administration Port option 491
Administration Transport option 491
administrator roles
list and descriptions 372, 373, 506, 507
See also RBAC
aggregate full threshold 94, 328
aggregate nearly overcommitted threshold 94, 328
aggregate overcommitted threshold 94, 328
aggregates
editing threshold conditions 454
monitoring inventory 98
relation to traditional volume 141, 333
viewing inventory 98
Aggregates Capacity Growth report 346, 347
Aggregates Capacity report 338
Aggregates Committed Capacity report 345
Aggregates report 314
Aggregates Space Savings report 351
Aggregates view 109–111, 114, 115
alarm conditions
event class 32, 33, 385, 387
event severity 32, 33, 385, 387
alarm formats
e-mail format 32, 33, 385, 387
pager alert 32, 33, 385, 387
script 32, 33, 385, 387
SNMP trap 32, 33, 385, 387
alarms
adding 30, 32, 37–41, 385, 390, 391
adding for a specific event 33, 387
alarm begin 32, 33, 385, 387
alarm end 32, 33, 385, 387
configurations 32, 385
configuring 41, 42, 392, 393
creating 30, 41, 42, 392, 393
deleting 40, 41, 390, 391
details 37
disabling 40, 41, 390, 391
editing 35, 40, 41, 43, 44, 388, 390, 391
enabling 40, 41, 390, 391
guidelines for creating 30
modifying 35, 43, 44, 388
notification 32, 385
repeat notification 32, 33, 385, 387
settings 496
testing 40, 41, 390, 391
viewing details 37
Alarms tab 29, 40, 41, 390, 391
alarmView 361
archive backup 448
associating storage systems 399, 403
audit log
file 501, 502
settings 496
authentication
LDAP 473, 509
methods 473, 509
AutoSupport
information provided for 19
Availability dashboard panel 22
B
backing up
virtual objects on-demand 61, 276
datasets on-demand 222, 280
Backup Management panel 299
backup options
editing 420
backup relationships 419
Backup Settings area
for configuring local policies 179
backups
before performing on-demand backup 62, 64, 223,
225, 277, 279, 281, 283
deleting 293
guidelines for mounting or unmounting backups in
a VMware environment 271
guidelines for performing on-demand backup 62,
64, 223, 225, 277, 279, 281, 283
Hyper-V saved-state backups 274
locating 288, 292
C
calculations
effective used data space 326
guaranteed space 326
physical used data space 326
Snapshot reserve 326
total data space 326
total space 326
used Snapshot space 326
chargeback rates for groups
configuring 446
CIFS
account 495, 496
password 495, 496
cluster-related objects 86
clusters
adding 100–102
deleting 100–102
editing settings 100–102
grouping 100–102
monitoring inventory 97
viewing inventory 97
Clusters view 100–102
configured threshold
reached 24
configuring
D
dashboard panels
Availability 22
Dataset Overall Status 25
descriptions 21
Events 23
External Relationship Lags 26
Fastest Growing Storage 24
Full Soon Storage 24
monitoring objects 22
Resource Pools 25
Unprotected Data 26
Data ONTAP
licenses, described 195
database backup
archive backup 448
deleting 448
process 448
scheduling to reoccur 449
Snapshot backup 448
starting 450
types 448
database backup option
completed 453
scheduling 451, 452
datacenters
viewing VMware inventory of 54
VMware Datacenters view 72, 73
DataFabric Manager server
verifying host service registration 396, 401
Dataset Overall Status dashboard panel 25
dataset-level naming settings
configuring while adding datasets of virtual objects
215
editing a dataset of virtual objects for 217
datasets
adding to manage physical storage objects 199
adding virtual objects to 209
adding, decisions to make for protection 200
attaching storage services to 229
best practices when configuring datasets of virtual
objects 187, 208, 209
changing storage services on 228
conformance conditions 241–243
conformance status values 238
conformance to policy, evaluating 239
creating for protection of virtual objects 203
decisions to make before adding to manage
physical storage objects 200
deleting backups 293
editing to specify naming settings 218
evaluating conformance issues 232, 246
general concepts 181
guidelines for adding a dataset of virtual objects
204–207
E
Edit Alarm dialog box 35, 43, 44, 388
Edit Group dialog box 384, 385
Edit Local Policy dialog box 176, 177
ESX host name field
selecting the host server for a virtual machine
restore 65, 300
ESX servers
viewing VMware inventory of 55
VMware ESX Servers view 73, 74
event purge interval 499, 500
F
Fastest Growing Storage dashboard panel 24
favorite topics, adding to list 15
File SRM
editing options 472
See also FSRM
File SRM area 472, 473
File Storage Resource Management (FSRM)
See FSRM
File Systems report 316
file types
adding 470
File SRM 470
file-level metadata
monitoring 470
file-level statistics 470
FlexClone
license, described 195
forced disconnect (of LUN) 291
FSRM
monitoring requirements 470
what FSRM does 470
Full Soon Storage dashboard panel 24
G
General Properties tab 260
global access control
precedence over group access control 371, 505
global groups 376
global naming settings
customizing 435
guidelines for customizing 436
requirements for customizing 438, 439
use of a naming script 423
Global Naming Settings Primary Volume area 442
Global Naming Settings Secondary Qtree area 444
Global Naming Settings Secondary Volume area 443
Global Naming Settings Snapshot Copy area 440
group access control
precedence over global access control 371, 505
groups
adding 381, 382
chargeback 384, 385
copying 380–382
creating 377, 383
deleting 378, 381, 382
editing 379, 384, 385
global 376
managing 378
member types 383–385
moving 380–382
what groups are 376
Groups tab 381, 382
growth rate
of storage space utilization 24
H
hbaInitiatorView 367
hbaView 367
health
of managed objects 23
Host Agent
editing options 493
login 495, 496
host services
adding 394
associating with vCenter Server 397, 402
I
initiatorView 367
interface groups 86
inventory options
editing 482
monitoring 482
inventory reports
overview 314
J
jobs
canceling 45
defined 45
monitoring 46
understanding 45
viewing details of restore 302
Jobs tab 46, 47, 51, 52
junctions 86
L
lag thresholds 419
LDAP
adding servers 474
authentication 473, 475, 476, 509
deleting servers 474
disabling authentication 474
editing authentication settings 475
enabling 475, 476
enabling authentication 474
server types 476, 477
servers 477, 478
template settings 475
LDAP Authentication area 475, 476
LDAP Server Types area 476, 477
LDAP Servers area 477, 478
licenses
Data ONTAP 195
LIFs
cluster management LIF 86
data LIF 86
node management LIF 86
local backups
scheduling for virtual objects 172, 211
local policies
adding 170
and local backup of virtual objects 167, 192, 193
Backup Settings area 179
copying 173
deleting 174
editing 171
effect of time zones 194
guidelines for adding or editing 169, 170
Name area 177
Policies tab 175, 176
Schedule and Retention area 177
scheduling local backup of virtual objects 172, 211
local protection
M
mail server
configuring 499, 500
configuring for alarm notifications 36, 389
managed host options
editing 492
guidelines for changing 491
overview 491
Management Console
how the OnCommand console works with 17
installing 18
management options
for clients 493, 494
for Host Agent 495, 496
for managed hosts 494, 495
miscellaneous options 501, 502
monitoring
dataset conformance to policy 239
flow chart of process 478
local backup progress 294
process 478
query intervals 480
system options 490, 491
monitoring options
for inventory 488, 489
for networking 486, 488
for protection 485, 486
for storage 484, 485
guidelines for changing 480
location to change 480
mounting backups
manually in a Hyper-V environment 288
MultiStore Option
license, described 195
N
Name area
for configuring local policies 177
namespace 120
namespaces 86
naming properties
custom label 200
Naming Properties tab 260–263
naming scripts
environment variables for naming primary volumes
425
environment variables for naming secondary
volumes 425
O
objects
status types 305
what objects are 376
on-demand backups
before performing 62, 64, 223, 225, 277, 279, 281,
283
P
paged tables 496
password
management 495, 496
monitoring 495, 496
percentage
of space availability 22
physical storage
adding
clusters 88
storage controllers 88
aggregates 85
Aggregates view 109–111, 114, 115
clusters 85
Clusters view 100–102
configuring
aggregate settings 91
cluster settings 90
storage controller settings 89
Deleted Objects view 117, 118
disks 85
Disks view 116, 117
grouping
aggregates 93
clusters 92
storage systems 92
monitoring
aggregate capacity thresholds and events 94,
328
discovery of storage systems 94
overview 85
Storage Controllers view 102–104, 107–109
storage systems 85
ping intervals
configuring 483
policies
evaluating dataset conformance to 239
monitoring dataset conformance to 239
Policies tab
Q
qtree threshold conditions
editing 455
qtrees
configuring quota settings 137
definition of 135
grouping 140
monitoring inventory 149
viewing inventory 149
Qtrees Capacity Growth report 348
Qtrees Capacity report 340
Qtrees report 318
Qtrees view 160–162
quota settings
monitoring inventory 149
viewing inventory 149
Quota Settings view 162, 163
quotas
configuring settings 139
monitoring quota settings inventory 149
process 135
viewing quota settings inventory 149
why you use 135
R
RBAC
capabilities 372, 373, 506, 507
default roles 372, 373, 506, 507
definition 371, 505
example of how to use 371, 505
how RBAC is used 371, 505
how roles relate to administrators 371, 505
related objects
configuring naming settings using the Add Dataset wizard 214
datasets
configuring naming settings using the Add Dataset wizard 214
definition of 181
global and dataset-level naming settings 421, 422
primary volume naming settings 429, 430
secondary qtrees naming settings 433, 434
secondary volume naming settings 431, 432
Snapshot copy naming settings 426, 428
specifying naming by editing a dataset 218
when to customize naming for 422, 423
remote backups 269
remote configuration 412
remote protection
assigning to virtual objects 210
of virtual objects 188
reportOutputView 367
reports
Aggregates 314
Aggregates Capacity 338
Aggregates Capacity Growth 346, 347
Aggregates Committed Capacity 345
Aggregates Space Savings 351
aggregating data 304
computing new columns 304
Datasets Average Space Usage Metric 353
Datasets Average Space Usage Metric Samples 356
Datasets IO Usage Metric 355
Datasets IO Usage Metric Samples 358
Datasets Maximum Space Usage Metric 354
Datasets Maximum Space Usage Metric Samples 357
deleting 303, 308
S
sanhostlunview 368
saved-state backups
how Hyper-V plug-in handles 274
Schedule and Retention area
for configuring local policies 177
Schedule Report dialog box 310
scheduled backups 269
scheduled reports log
viewing 307
script plug-ins
configuring 496
scripting
arguments 270, 298
backups 270
restore 298
searching for backup copies 288, 292
secondary qtrees
naming settings descriptions 433, 434
secondary volumes
naming settings descriptions 431, 432
secure connections 493, 494
selection of backups 297
server types
editing 475
servers
grouping virtual objects 59
Hyper-V Servers view 80, 81
Hyper-V VMs view 81–83
viewing Hyper-V server inventory 57
viewing Hyper-V VM inventory 58
viewing VMware datacenter inventory 54, 55
viewing VMware datastore inventory 56
viewing VMware virtual center inventory 53
viewing VMware virtual machine inventory 56
VMware Datacenters view 72, 73
VMware Datastores view 77, 78, 80
warning 305
storage capacity reports
overview 323
storage chargeback
configuring 446
storage controllers
adding 102–104, 107–109
deleting 102–104, 107–109
editing settings 102–104, 107–109
grouping 102–104, 107–109
monitoring inventory 98
viewing inventory 98
Storage Controllers view 102–104, 107–109
storage objects
adding again 117, 118
deleted by 117, 118
deleted date 117, 118
deleted objects 87
deleting 117, 118
restoring data from 230
undeleting 117, 118
viewing deleted objects 117, 118
storage options
editing 480
monitoring 480
Storage Service Datasets report 323
Storage Service Policies report 322
storage services
assigning to a dataset of virtual objects 210
attaching to existing datasets 229
changing on a dataset 228
for executing remote protection of virtual objects 188
overview 181
supplied with the product 189
Storage Services report 322
storage systems
adding users 412
associating with a host service 399, 403
authorizing host service access 397
configuration 501, 502
configuring 413, 496
Data ONTAP licenses, described 195
discovering 467, 468
login credentials 400, 404
managing configuration files 413
NDMP credentials 400, 404
remote configuration 412
Storage Systems Capacity report 341
Storage Systems report 319
T
table settings 496, 501, 502
test conformance check 243
thresholds
aggregate full 94, 328
aggregate full interval 94, 328
aggregate nearly full 94, 328
aggregate nearly overcommitted 94, 328
aggregate overcommitted 94, 328
time zone
effect on protection job schedules in datasets of virtual objects 194
traditional volumes
See volumes
transitioning legacy Hyper-V dataset information
manually 275
trends
of storage space utilization 24
troubleshooting
dataset conformance conditions 241–243
dataset conformance issues 246
dataset failure to conform 240
evaluating dataset conformance 239
listing nonconformant datasets 246
U
unmounting backups
manually in a Hyper-V environment 288
Unprotected Data dashboard panel 26
usage metric reports
guidelines for solving issues 326
user quotas
alerts 499, 500
editing 123
User Quotas Capacity report 342
Users and Roles capability
See RBAC
usersView 368
V
vCenter Server
associating a host service 397, 402
registering a host service 397, 402
version management
backups 269
vFiler units
configuration tasks to manage 414
configuring 415
defined 119
deleting 125–127, 129, 130
discovery 119
editing settings 120, 125–127, 129, 130
grouping 122, 125–127, 129, 130
monitoring inventory 124
viewing inventory 124
vFiler Units report 320
vFiler Units view 125–127, 129, 130
vFilers
discovery 119
editing settings 120
grouping 122
threshold settings 120
virtual centers
viewing VMware inventory of 53
VMware Virtual Centers view 71, 72
virtual disk files
selecting a backup 297
virtual inventory
adding virtual machines to 59
deleting virtual objects from 60
virtual machines
adding to inventory 59
restoring Hyper-V virtual machines 66, 301
restoring VMware virtual machines 65, 300
selecting a backup 297
viewing Hyper-V inventory 58
viewing VMware inventory 56
VMware VMs view 74–77
virtual object inventory
rediscovery 408
virtual objects
adding to a dataset 209
best practices when configuring datasets of virtual objects 187, 208, 209
configuring remote protection for 210
definition of 181
deleting from virtual inventory 60
discovery 53
grouping 59
guidelines for adding datasets containing virtual objects 204–207
local protection of 167, 192, 193
performing on-demand backups 61, 276
remote protection of 188
removing from a dataset 213
scheduling local backup for 172, 211
unprotecting 213
virtual servers
definition 120
deleting 130–134
editing settings 121, 130–134
grouping 123, 130–134
monitoring inventory 125
viewing inventory 125
virtual storage
grouping vFiler units 122
grouping Vservers 123
vFiler Units view 125–127, 129, 130
Vservers view 130–134
VMs (virtual machines)
See virtual machines
VMware
best practices when configuring datasets of VMware objects 187, 208, 209
guidelines for mounting or unmounting backups in a VMware environment 271
how virtual objects are discovered 53
local protection of virtual objects 167, 192, 193
objects that a dataset can include 186
remote protection of virtual objects 188
viewing datacenter inventory 54, 55
viewing datastore inventory 56
viewing virtual center inventory 53
viewing virtual machine inventory 56
VMware Datacenters view 72, 73
VMware Datastores view 77, 78, 80
VMware ESX Servers view 73, 74
VMware Virtual Centers view 71, 72
VMware VMs view 74–77
volume full threshold 141, 333
volume nearly full threshold 141, 333
volume threshold conditions
editing 454
volumeDedupeDetailsView 369
volumes
configuring quota settings 136
definition of 135
grouping 139
monitoring inventory 148
viewing inventory 148
Volumes Capacity Growth report 349
Volumes Capacity report 344
Volumes Committed Capacity report 346
Volumes report 321
Volumes Space Reservation report 349
Volumes Space Savings report 352
Volumes view 150–153, 156, 157
Vservers
definition 120
deleting 130–134
editing settings 121, 130–134
grouping 123, 130–134
monitoring inventory 125
W
welcome 15
window layout
customization 16
navigation 15