P3AG-5962
FUJITSU Storage
ETERNUS AX series All-Flash Arrays,
ETERNUS HX series Hybrid Arrays
ONTAP 9.8
1. exit........................................................................................................................ 40
2. history................................................................................................................... 41
3. man ...................................................................................................................... 42
4. redo ...................................................................................................................... 43
5. rows ...................................................................................................................... 44
6. set......................................................................................................................... 45
7. top ........................................................................................................................ 48
8. up.......................................................................................................................... 49
cluster ha show................................................................................................................................................80
Share Commands.........................................................................................................................................1998
First Edition
April 2021
This symbol indicates the possibility of serious or fatal injury if the product is not used properly.
This symbol indicates the possibility of minor or moderate personal injury, as well as
damage to the product and/or to other users and their property, if the product is not used properly.
Notice Symbols
The following notice symbols are used in this manual:
Indicates information that you need to observe when using the ETERNUS storage system.
Make sure to read the information.
Indicates information and suggestions that supplement the descriptions included in this
manual.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The exit command ends the current CLI session.
● Examples
The following example ends the current CLI session:
cluster1::> exit
Goodbye
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The history command displays the command history of the current CLI session. A numeric ID precedes
each command. Use this number with the redo command to re-execute that history item.
● Examples
The following example displays the command history of the current CLI session:
cluster1::> history
1 vserver show
2 man volume show
3 volume delete -vserver vs0 -volume temporary2
4 volume modify { -volume temp* } -state offline
cluster1::> redo 3
Related reference
4. redo (page 43)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The man command displays the manual page of the command you specify. If you do not specify a command,
the man command displays the man page index.
● Parameters
[<text>] - Valid CLI command
The command for which you'd like to see the manual page. The syntax of the command is the same as
the command itself. The man command supports abbreviations and tab completion of the command
name.
● Examples
The following example displays the manual page for the storage aggregate create command.
cluster1::> man storage aggregate create
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The redo command re-executes a command that has been executed previously in the current CLI session.
Specify a previously run command using:
• A string that matches part of a previous command. For example, if the only volume command you have
run is volume show, enter redo vol to re-execute the command.
• The numeric ID of a previous command, as listed by the history command. For example, enter redo 4 to
re-execute the fourth command in the history list.
• A negative offset from the end of the history list. For example, enter redo -2 to re-execute the command
that you ran two commands ago.
● Parameters
[<text>] - String, Event Number, or Negative Offset
Use this parameter to specify a string, a numeric ID from the command history, or a negative number
that identifies the command to be re-executed.
● Examples
The following example re-executes command number 10 in the command history:
cluster1::> redo 10
Related reference
2. history (page 41)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The rows command displays the number of rows that can be displayed in the current CLI session before the
interface pauses output. If you do not set this value, it adjusts automatically based on the actual height of
your terminal. If the actual height is undefined, the default number of rows is 24.
Specify a number to set the number of rows that can be displayed. Setting this value manually disables
auto-adjustment. Specify zero (0) to disable pausing.
You can also set this value using the set -rows command.
● Parameters
[<integer>] - Number of Rows the Screen Can Display
Use this parameter to specify the number of rows your terminal can display.
● Examples
The following example displays the current number of rows, then resets the number of rows to 48:
cluster1::> rows
36
cluster1::> rows 48
Related reference
6. set (page 45)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The set command changes attributes of the user interface.
● Parameters
[-privilege <PrivilegeLevel>] - Privilege Level
Use this parameter to specify the privilege level of the command session. Possible values are
• admin - Used for routine system management commands
• advanced - Used for infrequent, dangerous, or complicated commands
• diagnostic - Used for detailed diagnostic commands that are used only by support personnel
[-confirmations {on|off}] - Confirmation Messages
Use this parameter with the value on to specify that the interface prompt for confirmation before
executing potentially dangerous commands. Use this parameter with the value off to specify that the
interface not prompt for confirmation, even before potentially dangerous commands execute. The
default setting is on.
[-showallfields {true|false}] - Show All Fields
Use this parameter with the value true to specify that the interface display all field columns when
displaying tabular output. Use this parameter with the value false to specify that the interface display
only selected columns. The default setting is false.
[-showseparator <text>] - Show Separator
Use this parameter to specify the characters to use as the field separator. The field separator is used
between field columns when -showallfields is set to "true". The separator can be from one to three
characters in length. When specifying the separator, enclose it in quotation marks ("). Set the separator
to one or more spaces to disable this feature.
[-active-help {true|false}] - Active Help
Use this parameter with the value true to specify that pressing the question mark (?) key is sufficient to
execute a help request. Use this parameter with the value false to specify that you must press the
Return key after the question mark key to execute a help request. The default setting is true.
[-units {auto|raw|B|KB|MB|GB|TB|PB}] - Data Units
Use this parameter to specify the default units used when reporting data sizes. Possible values are:
• auto - Auto-scale data size for human-readable output
• raw - Bytes without unit designation
• B - Bytes
• KB - Kilobytes (1024 bytes, aka kibibytes)
• MB - Megabytes (KB x 1024, aka mebibytes)
• GB - Gigabytes (MB x 1024, aka gibibytes)
• TB - Terabytes (GB x 1024, aka tebibytes)
• PB - Petabytes (TB x 1024, aka pebibytes)
The default setting is auto.
[-rows <integer>] - Pagination Rows ('0' disables)
Use this parameter to specify the number of rows that can be displayed in the current CLI session before
the interface pauses output. If you do not set this value, it adjusts automatically based on the actual
height of your terminal. If the actual height is undefined, the default number of rows is 24.
Setting this value manually disables auto-adjustment. Specify zero (0) to disable pausing.
You can also set this value using the rows command.
[-vserver <text>] - Default Vserver
Use this parameter to specify the name of the Vserver to use as the default value for the -vserver
parameter of commands.
WARNING: Vserverized commands that have only a single required parameter, -vserver, allow the Vserver
to be specified positionally, without -vserver preceding it. Because of this, take care when using CLI
commands that do not require the -vserver parameter. For example, the "vserver nfs delete *" command
will ignore the "set -vserver" value because the parser considers the "*" to be the Vserver.
[-node <text>] - Default Node
Use this parameter to specify the name of the node to use as the default value for the -node parameter
of commands.
[-stop-on-error {true|false}] - Stop On Error
Use this parameter with the value true to specify that continuing commands should stop if they
encounter an error. Use this parameter with the value false to specify that continuing commands
should continue if they encounter an error.
[-prompt-timestamp {above|inline|none}] - Display Prompt Timestamp
Print the current date and time as a part of the prompt. The possible values are
• above - print the timestamp using the system timestamp format on the line above the remainder of
the prompt.
• inline - print the timestamp using the system timestamp format at the beginning of the line with the
remainder of the prompt.
• none - do not print the timestamp.
The default value is none.
● Examples
The following example sets the privilege level to advanced.
cluster1::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by Fujitsu personnel.
Do you wish to continue? (y or n): y
cluster1::*>
The following examples cause all columns to be shown in output rows, with a comma used as the field
separator.
node1,e0a,cluster,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:90:20:e9,true,physical,-,-,
-,-,-,-,-,
node1,e0b,cluster,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:90:20:f3,true,physical,-,-,
-,-,-,-,-,
node1,e0c,data,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:90:20:fd,true,physical,-,-,-,-,
-,-,-,
node1,e0d,data,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:90:20:07,true,physical,-,-,-,-,
-,-,-,
node2,e0a,cluster,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:2e:b6:62,true,physical,-,-,
-,-,-,-,-,
node2,e0b,cluster,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:2e:b6:6c,true,physical,-,-,
-,-,-,-,-,
node2,e0c,data,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:2e:b6:76,true,physical,-,-,-,-,
-,-,-,
node2,e0d,data,up,1500,true,true,full,full,auto,1000,full,none,00:0c:29:2e:b6:80,true,physical,-,-,-,-,
-,-,-,
[2/25/2016 16:38:38]
cluster1::>
Related reference
5. rows (page 44)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The top command changes the current working directory of the command prompt to the top-level
command directory.
● Examples
The following example returns the command prompt from the storage aggregate directory to the
top-level directory:
cluster1::storage aggregate> top
cluster1::>
Related reference
Storage aggregate Commands (page 970)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The up command, which can also be specified as two dots (..), changes the current working directory of the
command prompt to the directory that is up one level in the command hierarchy.
● Examples
The following example takes the command prompt up one level from the storage aggregate directory:
cluster1::storage aggregate> up
cluster1::storage>
Related reference
Storage aggregate Commands (page 970)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command modifies the options for application provisioning operations.
● Parameters
[-is-mixed-storage-services-allowed {true|false}] - Is Mixed Storage Services Allowed
Specifies whether mixed cost storage services are allowed for provisioning placement. If the value of this
parameter is false, only the aggregates closest to the performance requirements of the storage service
are used. If the value of this parameter is true, all aggregates with sufficient performance are
considered. The initial value for this option is false.
● Examples
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command displays options for application provisioning.
● Examples
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The autobalance aggregate show-aggregate-state command displays information about an
aggregate state that is considered by the Auto Balance Aggregate feature.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, the display will be limited to only those aggregates with a node that
matches the specified value.
[-aggregate <aggregate name>] - Name of the Aggregate
If this parameter is specified, the display will be limited to only that aggregate with a name that
matches the specified value.
[-total-size {<integer>[KB|MB|GB|TB|PB]}] - Total Size of the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with a total-size that
matches the specified value.
[-used-size {<integer>[KB|MB|GB|TB|PB]}] - Used Size of the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with a used-size that
matches the specified value.
[-aggregate-unbalanced-threshold {<integer>[KB|MB|GB|TB|PB]}] - Threshold When Aggregate Is
Considered Unbalanced
If this parameter is specified, the display will be limited to only those aggregates with a threshold that
matches the specified value.
[-outgoing-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Outgoing Volumes in the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with an outgoing-size
that matches the specified value. Outgoing size will be equal to the total size of the volumes that move
away from each of those aggregates.
[-incoming-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Incoming Volumes in the Aggregate
If this parameter is specified, the display will be limited to only those aggregates with an incoming-size
that matches the specified value. Incoming size will be equal to the total size of the volumes that move
toward each of those aggregates.
[-raidtype {raid_tec|raid_dp|raid4}] - RAID Type
If this parameter is specified, the display will be limited to only those aggregates with a raidtype that
matches the specified value.
[-home-cluster <UUID>] - Home Cluster ID
If this parameter is specified, the display will be limited to only those aggregates with a home-cluster ID
that matches the specified value.
[-is-hybrid {true|false}] - Aggregate Is a Hybrid
If this parameter is specified as true, the display will be limited to only hybrid aggregates. If the
parameter is specified as false, the display will be limited to only non-hybrid aggregates.
[-is-incoming-volume-thin {true|false}] - An Incoming Volume Is Thin
When you use thin provisioning for a volume, it can run out of space even if it has not yet consumed its
nominal size and you should carefully monitor space utilization to avoid unexpected errors due to the
volume running out of space. If this parameter is specified as true, the display will be limited to only
those aggregates which are the target of a move of thin volume. If the parameter is specified as false,
the display will be limited to only those aggregates which are not the target of a move of thin volume.
[-is-balanceable {true|false}] - Is Balanceable
If this parameter is specified as true, the display will be limited to only balanceable aggregates. If the
parameter is specified as false, the display will be limited to only non-balanceable aggregates.
[-is-move-target {true|false}] - Aggregate Is a Volume Move Target
If this parameter is specified as true, the display will be limited to only those aggregates which are
target of a volume move. If the parameter is specified as false, the display will be limited to only those
aggregates which are not the target of a volume move.
[-attributes <text>, ...] - Aggregate Attributes
If this parameter is specified, the display will be limited to only those aggregates with attributes that
match the specified values.
[-aggregate-available-threshold {<integer>[KB|MB|GB|TB|PB]}] - Threshold When Aggregate Is
Considered Balanced
If this parameter is specified, the display will be limited to only those aggregates which meet the
specified threshold to be considered as balanced.
● Examples
The following example displays information about the state for all aggregates in the cluster.
cluster1::*> autobalance aggregate show-aggregate-state
Aggregate: aggr_1
Total Size: 12.61GB
Used Size: 111.6MB
Outgoing Size: 0B
Incoming Size: 0B
Aggregate Used Space Threshold: 8.83GB
The following example displays information about all entries of the aggregate state, for all aggregates in
the cluster.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The autobalance aggregate show-unbalanced-volume-state command displays information about
a volume that is considered by the Auto Balance Aggregate feature.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, the display will be limited to only those volumes with a node that matches
the specified value.
[-DSID <integer>] - DSID of the Last Volume Queried
If this parameter is specified, the display will be limited to only those volumes with a DSID that matches
the specified value.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the display will be limited to only those volumes with an aggregate name
that matches the specified value.
[-volume-name <text>] - Name of the Volume
If this parameter is specified, the display will be limited to only that volume with a name that matches
the specified value.
[-last-threshold-crossed-time <MM/DD/YYYY HH:MM:SS>] - Last Time Threshold Crossed
If this parameter is specified, the display will be limited to only those volumes with a threshold crossing
time that matches the specified value.
[-last-placed-time <MM/DD/YYYY HH:MM:SS>] - Last Time Volume Was Moved
If this parameter is specified, the display will be limited to only those volumes with a last time they have
been moved that matches the specified value.
[-is-moving {true|false}] - Is Volume Currently Moving
If this parameter is specified as true, the display will be limited to only the moving volumes. If the
parameter is specified as false, the display will be limited to only the non-moving volumes.
[-is-quiesced {true|false}] - Is Volume Quiesced
If this parameter is specified as true, the display will be limited to only the quiesced volumes. If the
parameter is specified as false, the display will be limited to only the non-quiesced volumes.
[-total-footprint {<integer>[KB|MB|GB|TB|PB]}] - Total Size of the Volume
If this parameter is specified, the display will be limited to only those volumes with a total footprint that
matches the specified value.
[-attributes <text>, ...] - Volume's Attributes
If this parameter is specified, the display will be limited to only those volumes with attributes that
match the specified value.
[-last-checked <MM/DD/YYYY HH:MM:SS>] - Last Time Volume State Was Checked
If this parameter is specified, the display will be limited to only those volumes with a last time their
state was checked that matches the specified value.
● Examples
The following example displays information about all of the unbalanced volumes that the Auto Balance
Aggregate feature is aware of.
cluster1::*> autobalance aggregate show-unbalanced-volume-state
Volume: ro10
Footprint: 20.20MB
Volume: test
Footprint: 20.20MB
Last Time Over IOPS Threshold: 3/12/2014 16:20:18
Last Placed: 3/11/2014 10:16:42
Attributes: Over IOPS Threshold
In Mirror
Stabilizing
The following example displays all of the information that the Auto Balance Aggregate feature has collected
for all of the unbalanced volumes it is aware of.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The autobalance aggregate config modify command allows the user to customize the parameters
that determine when volumes should be considered for automatic move or recommendation by the Auto
Balance Aggregate feature.
● Parameters
[-is-enabled {true|false}] - Is the Auto Balance Aggregate Feature Enabled
This specifies whether the Auto Balance Aggregate feature is enabled and running.
[-aggregate-unbalanced-threshold-percent <integer>] - Threshold When Aggregate Is Considered
Unbalanced (%)
This specifies the space used threshold percentage that will cause the Auto Balance Aggregate feature to
consider an aggregate as unbalanced.
[-aggregate-available-threshold-percent <integer>] - Threshold When Aggregate Is Considered
Balanced (%)
This specifies the threshold percentage which will determine if an aggregate is a target destination for a
move. The Auto Balance Aggregate feature will attempt to move volumes from an unbalanced
aggregate until it is under this percentage.
● Examples
The following example displays a modification for the default configuration of the Auto Balance Aggregate
feature
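A hedged sketch using only the parameters documented above (the percentage values shown are
hypothetical):
cluster1::*> autobalance aggregate config modify -is-enabled true
-aggregate-unbalanced-threshold-percent 70 -aggregate-available-threshold-percent 40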
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The autobalance aggregate config show command displays information about parameters that
determine when volumes should be considered for automatic move or recommendation by the Auto Balance
Aggregate feature.
● Examples
The following example displays the default configuration for the Auto Balance Aggregate feature
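The command takes no required parameters; a minimal invocation is:
cluster1::*> autobalance aggregate config show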
The cluster commands enable you to create and manage Data ONTAP clusters.
cluster add-node
Expand the cluster by discovering and adding new nodes
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster add-node command discovers and adds new nodes to the cluster. When the -node-count
parameter is specified, the command attempts to add that many nodes to the cluster. The -node-ip
parameter can be specified to directly add a node. The -cluster-ips parameter can be specified to
directly add one or more nodes in parallel. Only one of the -node-count, -node-ip and -cluster-ips
parameters can be provided. The system node show-discovered command displays all the nodes
discovered on the local network.
Note
The node-count parameter is deprecated and may be removed in a future release of Data ONTAP.
Use the -cluster-ips parameter instead.
Note
The node-ip parameter is deprecated and may be removed in a future release of Data ONTAP. Use
the -cluster-ips parameter instead.
● Parameters
{ -cluster-ips <IP Address>, ... - List of Cluster Interface IP Addresses of the Nodes Being Added
This parameter contains a comma separated list of cluster interface IP addresses of the nodes in the
cluster you are creating. All the nodes specified in the list will be added to the cluster.
| -retry [true] - Retry a failed cluster add-node operation
Use this parameter to retry the most recently failed cluster add-node command with the originally
specified parameters. Retry is not supported if the cluster add-node command was originally run
with either the -node-count or -node-ip parameters.
| -node-count <integer> - (DEPRECATED)-Number of Nodes Being Added
Number of nodes to be added to the cluster. If fewer nodes are discovered, all the discovered nodes are
added to the cluster and the command will fail since there are fewer nodes than specified. If more nodes
are found than the number specified, the command will fail because there is no way to determine which
nodes you intend to add to the cluster.
Note
The -node-count parameter is supported on non-shared architecture platforms only.
[-node-names <text>, ...] - Node Names
This parameter contains a comma separated list of node names of all the nodes in the cluster you are
creating. The node names must have a one-to-one correspondence with the -cluster-ips parameter. The
names provided will be used to rename the nodes once they are added to the cluster.
[-foreground {true|false}] - Foreground Process
When set to false the command runs in the background as a job. The default is true, which causes the
command to return after the operation completes.
[-allow-mixed-version-join [true]] - Allow a Node At a Different Version to Join Cluster
This parameter allows nodes with different, but compatible versions of Data ONTAP to be added to the
cluster. A Data ONTAP best practice is to add nodes to the cluster that are of the same Data ONTAP
version as the nodes in the cluster, but that may not always be possible.
● Examples
The following example adds a node using -cluster-ips:
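The cluster interface IP address below is illustrative only:
cluster1::> cluster add-node -cluster-ips 192.0.2.21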
Related reference
system node show-discovered (page 1413)
cluster create (page 60)
cluster add-node-status
Show cluster expansion progress
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster add-node-status command displays the progress of nodes joining a cluster, as initiated by
the cluster create command or the cluster add-node command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node-uuid <UUID>] - Node UUID
Select the node that matches the specified node UUID.
[-node-name <text>] - Node Name
Select the nodes that match the specified node name.
[-cluster-ip <IP Address>] - IP Address of a Cluster Interface of Node
Select the nodes that match the specified cluster interface IP address.
● Examples
The following example shows the progress of a node add operation:
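Invoked without parameters, the command reports the join progress for the nodes being added:
cluster1::> cluster add-node-status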
Related reference
cluster create (page 60)
cluster add-node (page 58)
cluster create
Create a cluster
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster create command creates a cluster with one or more nodes. When the -node-count
parameter is specified, the command attempts to add that many nodes to the cluster. The -cluster-ips
parameter can be specified to add one or more nodes in parallel. Only one of the -node-count and
-cluster-ips parameters can be provided.
Note that single-node clusters do not require configuring the cluster network. A cluster network interface
must be configured before other nodes can join the cluster.
Note
The node-count parameter is deprecated and may be removed in a future release of Data ONTAP.
Use the -cluster-ips parameter instead.
● Parameters
[-license <License Code V2>] - (DEPRECATED)-Base License
Note
This parameter is deprecated and may be removed in a future release of Data ONTAP.
Use this optional parameter to specify the base license for the cluster. Obtain this value from your sales
or support representative.
● Examples
The following example creates a cluster named cluster1
The following example creates a cluster named cluster1 with node-count 4 on a non-shared architecture
platform.
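Hedged sketches of the two invocations (the cluster interface IP address is hypothetical, and any
additional setup parameters are omitted):
cluster1::> cluster create -cluster-ips 192.0.2.11
cluster1::> cluster create -node-count 4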
cluster join
(DEPRECATED)-Join an existing cluster using the specified member's IP address or by cluster name
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release of Data ONTAP. Use cluster
add-node from a node in the cluster instead.
The cluster join command adds a node to an existing cluster. Use the cluster create command to
create a cluster if one does not already exist.
Note that a cluster network interface must be configured for the cluster before other nodes can join the
cluster.
● Parameters
{ -clusteripaddr <IP Address> - IP Address of a Cluster Interface from a Node in the Cluster
Use this parameter to specify the IP address of a cluster interface. This must be the IP address of a cluster
interface of a node that is already in the cluster. This parameter is mutually exclusive with the
-cluster-name parameter.
| -cluster-name <text>} - (DEPRECATED)-Cluster Name of the Cluster to Join
Deprecated. Use this parameter to specify the name of an existing cluster to join.
[-allow-mixed-version-join [true]] - Allow a Node at a Different Version to Join Cluster
This parameter allows nodes with different, but compatible versions of Data ONTAP to join the cluster. A
Data ONTAP best practice is to join nodes to the cluster that are of the same Data ONTAP version as the
nodes in the cluster, but that may not always be possible.
[-node-name <text>] - Name to Use for the Node in the Cluster
This parameter specifies the name that the node will have when it is joined to the cluster.
● Examples
The following example joins the local node to a cluster. The IP address 192.0.2.66 is the address of a cluster
interface of a node that already belongs to the cluster.
cluster1::> cluster join -clusteripaddr 192.0.2.66
Related reference
cluster add-node (page 58)
cluster create (page 60)
cluster modify
Modify cluster node membership attributes
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster modify command modifies the cluster attributes of a node, including its eligibility to
participate in the cluster.
At the advanced privilege level, you can use the command to specify whether a node holds epsilon. Epsilon
is an extra fractional vote that enables quorum to form using slightly weaker requirements. For example,
two out of four eligible nodes are sufficient to form quorum if one of those two nodes holds epsilon.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node to modify. If you do not specify a node, the
command runs on the local node.
[-epsilon {true|false}] - Epsilon
Use this parameter with the value true to specify that the node holds Epsilon in the cluster. Use this
parameter with the value false to specify that the node does not hold Epsilon in the cluster. In a
cluster, only one node can be designated as Epsilon at any given time. You can designate a node as
Epsilon to add weight to its voting in a cluster with an even number of nodes.
[-eligibility {true|false}] - Eligibility
Use this parameter with the value true to specify that the node is eligible to participate in the cluster.
Use this parameter with the value false to specify that the node is not eligible to participate in the
cluster.
If you modify a node as ineligible to participate in the cluster, the command prompts you for
confirmation before it runs.
[-skip-quorum-check-before-eligible [true]] - Skip Quorum Check Before Setting Node Eligible
If this parameter is specified, quorum checks will be skipped prior to setting a node eligible. When
setting a node to eligible, the operation will continue even if there is a possible data outage due to a
quorum issue.
[-skip-quorum-check-before-ineligible [true]] - Skip Quorum Check Before Setting Node
Ineligible
If this parameter is specified, quorum checks will be skipped prior to setting a node ineligible. When
setting a node to ineligible, the operation will continue even if there is a possible data outage due to a
quorum issue.
● Examples
This example modifies a node to make it eligible to participate in the cluster.
The following example removes epsilon from the node named node0 and adds it to the node named node1:
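A sketch of the corresponding commands, assuming the node names used above (output omitted; it varies by cluster):
cluster1::*> cluster modify -node node1 -eligibility true
cluster1::*> cluster modify -node node0 -epsilon false
cluster1::*> cluster modify -node node1 -epsilon true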
cluster ping-cluster
Ping remote cluster interfaces and perform RPC server check
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster ping-cluster command probes network connectivity to remote cluster interfaces, and
performs an RPC server check.
● Parameters
-node <nodename> - Node
Use this parameter to send the ping from the node you specify.
[-use-sitelist {true|false}] - Use Sitelist for Cluster Interfaces
Use this parameter with the value true to specify that the command use the sitelist to determine any
incomplete cluster IP information. Use this parameter with the value false to specify that the
command not use the sitelist.
[-skip-rpccheck {true|false}] - Skip RPC Server Check
Use this parameter with the value true to specify that the command not perform the rpcinfo check of
remote hosts. Use this parameter with the value false to specify that the command perform the rpcinfo
check. The rpcinfo check checks the status of the RPC servers on the remote hosts. By default, the rpcinfo
check runs on the program number of the portmapper. Use the -rpc-prognum parameter to override this
default.
[-rpc-prognum <integer>] - RPC Server to Check
Use this parameter to override default behavior and run the rpcinfo check on the program number you
specify. By default, the rpcinfo check runs on the program number of the portmapper.
● Examples
The following example shows typical output for this command.
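A hypothetical invocation (the node name is illustrative; the connectivity and RPC-check output is omitted because it varies by cluster):
cluster1::*> cluster ping-cluster -node node1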
cluster remove-node
Remove a node from the cluster
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster remove-node command removes a node from a cluster.
Before you can remove a node from a cluster, you must shut down all of the node's shared resources, such as
virtual interfaces to clients. If any of a node's shared resources are still active, the command fails. The failure
message will display which active resources must be shut down before the node can be removed from the
cluster.
● Parameters
{ -node <nodename> - Node to Unjoin
Use this parameter to specify the name of the node to remove from the cluster.
| -cluster-ip <IP Address>} - IP Address of a Cluster Interface of Node to Unjoin
Use this parameter to specify the cluster IP of the node to remove from the cluster.
[-skip-quorum-check-before-unjoin [true]] - Skip Quorum Check before Unjoin
If this parameter is specified, quorum checks will be skipped prior to the remove-node command. The
operation will continue even if there is a possible data outage due to a quorum issue.
[-skip-last-low-version-node-check [true]] - Skip the Check That Prevents Unjoining the Last Low
Versioned Node
This parameter allows the node with the lowest version of Data ONTAP to be removed from the cluster.
● Examples
The following example shows how to remove the node named node4 from the cluster.
The following example forcibly removes the node from the cluster:
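Hypothetical invocations for both cases; the "forcible" form is interpreted here as skipping the quorum check, and output is omitted:
cluster1::*> cluster remove-node -node node4
cluster1::*> cluster remove-node -node node4 -skip-quorum-check-before-unjoin true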
cluster setup
Setup wizard
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
Use of this command to join a node to an existing cluster is deprecated and might be removed in a
future release of Data ONTAP. From a node in the cluster, use the system node show-discovered
command, and then use the cluster add-node command.
The cluster setup command runs the cluster setup wizard, which can be used to either create a cluster or
join a node to an existing cluster. When you run the cluster setup wizard, enter the appropriate information
at the prompts. You will be asked to provide the following information to create a cluster:
• Node management interface port, IP address, netmask, default gateway
• Cluster name
• Cluster base license key
Note
This parameter has been deprecated. It may be removed from a future release of Data ONTAP.
● Examples
The following example shows the create option of cluster setup.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
This system will send event messages and periodic reports to Fujitsu Technical
Support. To disable this feature, enter
autosupport modify -support disable
within 24 hours.
Otherwise, press Enter to complete cluster setup using the command line
interface:
Do you want to create a new cluster or join an existing cluster? {create, join}:
create
Do you intend for this node to be used as a single node cluster? {yes, no} [no]:
A cluster management interface on port e0d with IP address 192.0.2.60 has been created. You can use this address to connect to and manage the cluster.
SFO is licensed.
SFO will be enabled when the partner joins the cluster.
To complete cluster setup, you must join each additional node to the cluster
by running "system node show-discovered" and "cluster add-node" from a node in the cluster.
To complete system configuration, you can use either OnCommand System Manager
or the Data ONTAP command-line interface.
To access OnCommand System Manager, point your web browser to the cluster
management IP address (https://192.0.2.60).
cluster1::>
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
This system will send event messages and periodic reports to Fujitsu Technical
Support. To disable this feature, enter
autosupport modify -support disable
within 24 hours.
Otherwise, press Enter to complete cluster setup using the command line
interface:
Do you want to create a new cluster or join an existing cluster? {create, join}:
join
Enter the IP address of an interface on the private cluster network from the
cluster you want to join: 169.254.115.8
SFO is licensed.
SFO will be enabled when the partner joins the cluster.
To complete cluster setup, you must join each additional node to the cluster
by running "system node show-discovered" and "cluster add-node" from a node in the cluster.
To complete system configuration, you can use either OnCommand System Manager
or the Data ONTAP command-line interface.
To access OnCommand System Manager, point your web browser to the cluster
management IP address (https://192.0.2.60).
cluster1::>
Related reference
system node show-discovered (page 1413)
cluster add-node (page 58)
cluster create (page 60)
cluster join (page 61)
cluster show
Display cluster node members
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster show command displays information about the nodes in a cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-node-uuid <UUID>] - UUID (privilege: advanced)
Selects the nodes that match this parameter value.
[-epsilon {true|false}] - Epsilon (privilege: advanced)
Selects the nodes that match this parameter value. In a cluster, only one node can be designated as
Epsilon at any given time. You can designate a node as Epsilon to add weight to its voting in a cluster
with an even number of nodes.
[-eligibility {true|false}] - Eligibility
Selects the nodes that match this parameter value (true means eligible to participate in the cluster).
[-health {true|false}] - Health
Selects the nodes that match this parameter value (true means online).
● Examples
The following example displays information about all nodes in the cluster:
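A sketch of the invocation (output omitted; it varies by cluster):
cluster1::> cluster show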
The following example displays information about the node named node1:
Node: node1
Eligibility: true
Health: true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster contact-info modify command modifies contact information for the cluster
administrators. If any values contain spaces, you must enclose those values in quotes.
Use the cluster contact-info show command to display contact information for the cluster
administrators.
● Parameters
[-primary-name <text>] - Name of Primary Contact
Use this parameter to specify the name of the primary contact.
[-primary-phone <text>] - Phone Number of Primary Contact
Use this parameter to specify the phone number of the primary contact.
[-primary-alt-phone <text>] - Alternate Phone Number of Primary Contact
Use this parameter to specify the alternate phone number of the primary contact.
[-primary-email <text>] - Email Address or User ID of Primary Contact
Use this parameter to specify the email address of the primary contact.
[-secondary-name <text>] - Name of Secondary Contact
Use this parameter to specify the name of the secondary contact.
[-secondary-phone <text>] - Phone Number of Secondary Contact
Use this parameter to specify the phone number of the secondary contact.
[-secondary-alt-phone <text>] - Alternate Phone Number of Secondary Contact
Use this parameter to specify the alternate phone number of the secondary contact.
[-secondary-email <text>] - Email Address or User ID of Secondary Contact
Use this parameter to specify the email address of the secondary contact.
[-business-name <text>] - Business Name
Use this parameter to specify the name of the business responsible for this cluster.
[-address <text>] - Business Address
Use this parameter to specify the street address of the business responsible for this cluster.
[-city <text>] - City Where Business Resides
Use this parameter to specify the name of the city in which the business is located.
[-state <text>] - State Where Business Resides
Use this parameter to specify the name of the state or province in which the business is located.
[-country <Country Code>] - 2-Character Country Code
Use this parameter to specify the 2-character country code of the country in which the business is
located.
[-zip-code <text>] - Postal Code Where Business Resides
Use this parameter to specify the postal or ZIP code area in which the business is located.
● Examples
The following example changes the name and phone numbers of the secondary contact person for the
cluster.
cluster1::> cluster contact-info modify -secondary-name "John Doe" -secondary-phone 123.555.0156 -secondary-alt-phone 123.555.0178
The following example changes the mailing address of the business responsible for the cluster.
cluster1::> cluster contact-info modify -address "123 Example Avenue" -city Exampleville -state "New Example" -zip-code 99999 -country US
Related reference
cluster contact-info show (page 72)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster contact-info show command displays contact information for the cluster administrators.
● Examples
The following example shows example output for this command.
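A sketch of the invocation (output omitted; it reflects the contact information configured on the cluster):
cluster1::> cluster contact-info show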
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Delete references to logical interfaces that were displaced to another node due to a controller-replacement.
A LIF that has been displaced to another node has had its home-node modified to another node because no
network ports were available in the same broadcast domain on the original home-node. Deleting this entry
does not delete the LIF; it only deletes the entry from the displaced-lifs table, indicating that the LIF's current
home-node is considered restored.
● Parameters
-node <nodename> - Node
Selects the node from which the LIF was displaced.
-vserver <vserver> - Vserver
Selects the vserver on which the LIF resides.
-lif-name <text> - Lif Name
Selects the name of the LIF for which to display displaced information.
● Examples
The following example deletes displaced LIF information.
cluster1::> cluster controller-replacement network displaced-interface delete -vserver vs0 -lif lif1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Restore the original home node of logical interfaces that were displaced to another node due to a controller-replacement. A LIF that has been displaced to another node has had its home-node modified to another
node because no network ports were available in the same broadcast domain on the original home-node.
Restoring the home-node attempts to find a home-port on the original home node if a suitable port exists.
● Parameters
-node <nodename> - Node
Selects the node from which the LIF was displaced.
● Examples
The following example restores the home-node of a displaced LIF.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display logical interfaces that were displaced to another node due to a controller-replacement. A LIF that
has been displaced to another node has had its home-node modified to another node because no network
ports were available in the same broadcast domain on the original home-node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the node from which the LIF was displaced.
[-vserver <vserver>] - Vserver
Selects the vserver on which the LIF resides.
[-lif-name <text>] - Lif Name
Selects the name of the LIF for which to display displaced information.
[-original-home-node <nodename>] - Original Home Node
The original home-node that was assigned to the LIF prior to controller-replacement.
[-current-home-node <nodename>] - Current Home Node
The current home-node assigned to the LIF after controller-replacement.
● Examples
The following example displays the displaced LIF information.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Delete references to VLAN tags that were displaced due to a controller-replacement. A VLAN tag that has been
displaced is a tag that was based on a network port that either no longer exists, or was moved to a new
broadcast domain. Deleting an entry removes it from the displaced-vlans table; it does not re-create or delete any VLANs.
● Parameters
-node <nodename> - Node
Selects the node on which the displaced vlans reside.
-port <netport> - Original Base Port
The original base port where the vlans existed prior to controller-replacement.
● Examples
The following example deletes the displaced vlan-tag information.
cluster1::> cluster controller-replacement network displaced-vlans delete -node local -port e0c
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Restore VLAN tags that were displaced due to a controller-replacement. A VLAN tag that has been displaced
is a tag that was based on a network port that either no longer exists, or was moved to a new broadcast
domain. Restoring the vlan-tags re-creates them on the specified network port.
● Parameters
-node <nodename> - Node
Selects the node on which the displaced vlans reside.
-port <netport> - Original Base Port
The original base port where the vlans existed prior to controller-replacement.
-destination-port <netport> - Destination Port
The destination port where the vlan-tags will be restored.
● Examples
The following example restores vlan-tags displaced from port e0c onto port e0d.
cluster1::> cluster controller-replacement network displaced-vlans restore -node node1 -port e0c -dest
ination-port e0d
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display VLAN tags that were displaced due to a controller-replacement. A VLAN tag that has been displaced
is a tag that was based on a network port that either no longer exists, or was moved to a new broadcast
domain. The displaced-vlans restore command re-creates them on a specified network port.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the node on which the displaced vlans reside.
[-port <netport>] - Original Base Port
The original base port where the vlans existed prior to controller-replacement.
[-vlan-tags <integer>, ...] - Displaced VLANs
The vlan-tags that were assigned to the network port prior to controller-replacement.
● Examples
The following example displays the displaced vlan-tag information.
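A hypothetical invocation, following the naming of the delete and restore commands above (the node name is illustrative; output omitted):
cluster1::> cluster controller-replacement network displaced-vlans show -node node1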
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster date modify command sets the time zone, date, and time on every node in the cluster.
● Parameters
[-timezone <Area/Location Timezone>] - Time Zone
This parameter sets the timezone, specified in the Olson format.
{ [-date {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] - Date and Time
This parameter sets the date and time, in the format MM/DD/YYYY HH:MM:SS.
| [-dateandtime <[[[[[cc]yy]mm]dd]hhmm[.ss]]>] - Date and Time
This parameter sets the date and time information, in the format [[[[[cc]yy]mm]dd]hhmm[.ss]]. The
argument for setting the date and time is interpreted as follows:
• cc First 2 digits of the year (e.g., 20 for 2011).
• yy Last 2 digits of the year (e.g., 11 for 2011).
• mm Numeric month, a number from 01 to 12.
• dd Day, a number from 01 to 31.
• hh Hour, a number from 00 to 23.
• mm Minute, a number from 00 to 59.
• ss Second, a number from 00 to 59.
If the first two digits of the year are omitted, and the last two digits are greater than 68, a date in the
1900s is used. Otherwise, a date in the 2000s is used. If all four digits of the year are omitted, the
default is the current year. If the month or day is omitted, the default is the current month or day,
respectively. If the seconds are omitted, the default is set to 00. The system automatically handles the
time changes for Daylight Saving and Standard time, and for leap seconds and years.
| [-utcdateandtime | -u <[[[[[cc]yy]mm]dd]hhmm[.ss]]>]} - UTC Date and Time
This parameter sets the date and time information in Coordinated Universal Time (UTC), in the format
[[[[[cc]yy]mm]dd]hhmm[.ss]]. -u is an alias for -utcdateandtime. The argument for setting the date
and time is interpreted as follows:
• cc First 2 digits of the year (e.g., 20 for 2011).
• yy Last 2 digits of the year (e.g., 11 for 2011).
• mm Numeric month, a number from 01 to 12.
• dd Day, a number from 01 to 31.
• hh Hour, a number from 00 to 23.
• mm Minute, a number from 00 to 59.
• ss Second, a number from 00 to 59.
If the first two digits of the year are omitted, and the last two digits are greater than 68, a date in the
1900s is used. Otherwise, a date in the 2000s is used. If all four digits of the year are omitted, the
default is the current year. If the month or day is omitted, the default is the current month or day,
respectively. If the seconds are omitted, the default is set to 00. Time changes for Daylight Saving and
Standard time, and for leap seconds and years, are handled automatically.
● Examples
The following example sets the date and time to January 1 2011, at 1:00 a.m.:
The following example sets the date and time in the UTC format to May 22, 2011, at 09:25:00 a.m.:
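Sketches of the corresponding commands; the second uses the -u alias for -utcdateandtime with the [[[[[cc]yy]mm]dd]hhmm[.ss]] format:
cluster1::> cluster date modify -date "01/01/2011 01:00:00"
cluster1::> cluster date modify -u 201105220925.00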
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster date show command displays the time zone, date, and time settings for one or more nodes
in the cluster. By default, the command displays date and time settings for all nodes in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-utc ]
Displays date and time information in Coordinated Universal Time (UTC).
| [-utcdate ]
Displays date and time information in UTC.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-timezone <Area/Location Timezone>] - Time Zone
Selects the nodes that match this parameter value (specified in the Olson format).
[-date {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] - Date and Time
Selects the nodes that match this parameter value.
[-utc-date <MM/DD/YYYY HH:MM:SS>] - UTC Date and Time
Selects the nodes that match this parameter value.
[-dateandtime <[[[[[cc]yy]mm]dd]hhmm[.ss]]>] - Date and Time
Selects the nodes that match this parameter value (interpreted as follows):
• cc First 2 digits of the year (e.g., 20 for 2011).
• yy Last 2 digits of year (e.g., 11 for 2011).
• mm Numeric month, a number from 01 to 12.
• dd Day, a number from 01 to 31.
• hh Hour, a number from 00 to 23.
• mm Minute, a number from 00 to 59.
• ss Second, a number from 00 to 59.
[-utcdateandtime | -u <[[[[[cc]yy]mm]dd]hhmm[.ss]]>] - UTC Date and Time
-u is used as an alias for -utcdateandtime. Selects the nodes that match this parameter value
(interpreted as follows):
• cc First 2 digits of the year (e.g., 20 for 2011).
• yy Last 2 digits of year (e.g., 11 for 2011).
• mm Numeric month, a number from 01 to 12.
• dd Day, a number from 01 to 31.
• hh Hour, a number from 00 to 23.
• mm Minute, a number from 00 to 59.
• ss Second, a number from 00 to 59.
● Examples
The following example displays the date and time settings for all nodes in the cluster:
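A sketch of the invocation (output omitted; it varies by cluster):
cluster1::> cluster date show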
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster date zoneinfo load-from-uri command loads a new set of timezone zoneinfo data to
replace the version installed in the cluster. Releases of Data ONTAP software contain the timezone data that
is current at the time of release. If a time zone definition changes between Data ONTAP releases, an
update can be made to the released data. For instance, if a country changes when daylight saving time is
observed, an update to the cluster zoneinfo data may be required.
Only zoneinfo files provided by NetApp for use in Data ONTAP should be used with this command.
To update the zoneinfo database do the following:
• Place the file on a local web server that the cluster can access without a password.
• Execute the cluster date zoneinfo load-from-uri command, passing the Universal Resource
Identifier (URI) of the file as parameter.
Note
The command need only be executed once for the cluster. The data will be distributed to each node of
the cluster.
● Parameters
-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI of Timezone Zoneinfo Data
URI of the new zoneinfo file.
● Examples
The following example loads a new version of the timezone zoneinfo database to the cluster:
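A hypothetical invocation; the host and file name are illustrative, not an actual distribution location:
cluster1::> cluster date zoneinfo load-from-uri -uri http://www.example.com/zoneinfo.zip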
Related reference
cluster date zoneinfo show (page 79)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display information about the current timezone zoneinfo data.
● Examples
The following example shows the zoneinfo information for a cluster:
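A sketch of the invocation (output omitted; it shows the installed zoneinfo version):
cluster1::> cluster date zoneinfo show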
cluster ha commands
Manage high-availability configuration
cluster ha modify
Modify high-availability configuration of cluster management services
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster ha modify command enables or disables cluster high availability in a two-node cluster.
Enable high availability when performing some procedures, such as replacing hardware.
Note
This command is required to enable high availability if the cluster has only two nodes. Do not run this
command in a cluster that has three or more nodes.
Note
Cluster high availability for two-node clusters differs from the storage failover technology used
between two nodes for storage high availability.
● Parameters
[-configured {true|false}] - HA Configured
Use this parameter with the value true to enable high availability mode in the cluster. Use this
parameter with the value false to disable high availability mode in the cluster.
● Examples
The following example enables cluster high availability in a cluster.
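A sketch of the corresponding command:
cluster1::> cluster ha modify -configured true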
cluster ha show
Show high-availability configuration status for the cluster
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster ha show command displays the high-availability status of the cluster. Cluster high-availability mode applies only to two-node clusters.
● Examples
The following example displays the high-availability status for a two-node cluster:
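A sketch of the invocation (output omitted):
cluster1::> cluster ha show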
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster identity modify command changes a cluster's identity information.
● Parameters
[-name <Cluster name>] - Cluster Name
Use this parameter to specify a new name for the cluster.
• The name must contain only the following characters: A-Z, a-z, 0-9, "-" or "_".
• The first character must be one of the following characters: A-Z or a-z.
• The last character must be one of the following characters: A-Z, a-z or 0-9.
• The maximum supported length is 44 characters.
• The system reserves the following names: "all", "cluster", "local" and "localhost".
[-location <text>] - Cluster Location
Use this parameter to specify the physical location of the cluster. For example, "Lab 5".
[-contact <text>] - Cluster Contact
Use this parameter to specify contact information for the cluster, such as a name or e-mail address.
● Examples
The following example renames the current cluster to cluster2:
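A sketch of the corresponding command:
cluster1::> cluster identity modify -name cluster2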
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster identity show command displays the identity information of the cluster.
● Examples
The following example displays the cluster's UUID, name, serial number, location and contact information:
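A sketch of the invocation (output omitted; it varies by cluster):
cluster1::> cluster identity show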
The following example displays the cluster's UUID, name, serial number, location, contact information, and
RDB UUID:
Warning: These diagnostic commands are for use by Fujitsu personnel only.
Do you want to continue? {y|n}: y
cluster1::*>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image cancel-update command is used to cancel an update that is in either paused-by-user or paused-by-error state. The update cannot be canceled if it is not in a paused state.
● Examples
The following example displays a cancel-update operation:
Info: Canceling update. It may take a few minutes to finish canceling the update
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image pause-update command is used to pause a currently running update. The update
pauses at the next predefined pause point (for example, after validation, download to the boot device,
takeover completion, or giveback completion) which might take some time to reach. When the update
reaches the pause point, it transitions into the pause-by-user state.
● Examples
The following example displays pause-update operation:
Info: Pausing update. It may take a few minutes to finish pausing the update
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image resume-update command is used to resume an update that is currently paused in
the paused-by-user or paused-by-error state. If the update is not paused, an error is returned.
● Parameters
[-ignore-post-update-checks-failures {true|false}] - Ignore Post-update-checks Phase Failures
(privilege: advanced)
Specifies whether warnings and errors from the post-update-checks phase should be ignored. The default
value is false.
● Examples
The following example shows a resume-update operation:
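A sketch of the invocation (output omitted):
cluster1::> cluster image resume-update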
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image show command displays information about the version of Data ONTAP that is running
on each node and the date/time when it was installed. By default, the command displays the following
information:
• Node name
• Current version
• Installation date and time
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays information about the specified node.
[-version <text>] - Current Version
Displays information about the nodes running the specified version.
[-date <MM/DD/YYYY HH:MM:SS>] - Date Installed
Displays information about the nodes with the specified installation date.
● Examples
The following example displays information about currently running images on all nodes of the cluster:
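A sketch of the invocation (output omitted; it lists each node's current version and installation date):
cluster1::> cluster image show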
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image show-update-history command displays the update history for each node. By
default, the command displays the following information:
• Status
• Package version
• Start time
• Completion time
• Component ID
• Previous version
• Updated version
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-component-id <text>] - Component ID
Displays updates for the specified component.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Start Time
Displays updates with the specified start time.
[-package-version <text>] - Package Version
Displays updates for the specified package version.
[-status {successful|canceled|back-out}] - Status
Displays updates that completed with the specified status.
[-completion-time <MM/DD/YYYY HH:MM:SS>] - Completion Time
Displays updates with the specified completion time.
[-previous-version <text>] - Previous Version
Displays updates with the specified previous version.
[-updated-version <text>] - Updated Version
Displays updates with the specified updated version.
● Examples
The following example displays history of automated nondisruptive updates:
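A sketch of the invocation (output omitted; it varies by update history):
cluster1::> cluster image show-update-history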
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster image show-update-log command displays detailed information about the currently
running, or previously run nondisruptive updates. By default, the command displays the following
information:
• Phase
• Transaction
• Transaction ID
• Component ID
• Time stamp
• Status
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-trans-id <integer>] - Transaction ID
Displays information for the step associated with the specified transaction ID.
[-component-id {<nodename>|local}] - Component ID
Displays information for steps associated with the specified component.
[-phase {validation|prereq-updates|ontap-updates|package-management|default-phase|post-update-
checks}] - Transaction Phase
Displays information for steps associated with the specified update phase.
[-trans-name {initialize|mount-image|restart-hm|get-health|run-scripts|unmount-image|clear-alert|
post-restart-hm|cleanup-rd|synch-image|do-download-job|do-failover-job|do-giveback-job|check-
progress|complete-validation|invalid-task|default-task|do-postupdate-checks-task}] - Transaction Name
Displays information for steps associated with the specified transaction.
[-timestamp <MM/DD/YYYY HH:MM:SS>] - Timestamp
Displays information for steps associated with the specified timestamp.
[-status {waiting|started|completed|paused-on-error|paused-by-user|pause-pending|cancel-pending|
canceled|failed}] - Status
Displays information for steps matching the specified status.
● Examples
The following example displays information about automated nondisruptive update events:
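As a sketch (placeholder prompt; this command requires the advanced privilege level, conventionally indicated by an asterisk in the prompt):

```
cluster1::*> cluster image show-update-log
```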
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster image show-update-log-detail command displays detailed information about the
currently running and previously run nondisruptive update events. By default, the command displays the
following information:
• Node
• Transaction ID
• Time stamp
• Destination node
• Task phase
• Task name
• Task status
• Message
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays information only for the specified node.
[-task-id <integer>] - Task Id
Displays information only for the specified task ID.
[-posted-time <MM/DD/YYYY HH:MM:SS>] - Posted Time
Displays information that occurred at the specified time.
[-msg-seq-no <integer>] - Message Sequence
Displays information only for the specified message sequence number.
[-current-pid <integer>] - Process ID
Displays information only for the specified process ID.
[-destination <text>] - Task Target node
Displays information only for the specified destination node.
[-ndu-phase {validation|prereq-updates|ontap-updates|package-management|default-phase|post-
update-checks}] - Update phase
Displays information only for the specified phase.
[-task-name {initialize|mount-image|restart-hm|get-health|run-scripts|unmount-image|clear-alert|
post-restart-hm|cleanup-rd|synch-image|do-download-job|do-failover-job|do-giveback-job|check-
progress|complete-validation|invalid-task|default-task|do-postupdate-checks-task}] - Task Name
Displays information only for the specified task name.
[-status {created|ready-to-run|running|completed|failed|pause_req|paused|paused-error|cancel_req|
canceled|resume_req|default_status}] - Status Of Task
Displays information only for items with the specified status.
[-message <text>] - Update Log Message
Displays information only for items with the specified message.
[-msg-type <text>] - Type of Message
Displays information only for items with the specified message type.
[-src-info <text>] - Source Information
Displays information only for items for the specified source.
● Examples
The following example displays detailed information about automated nondisruptive updates:
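As a sketch (placeholder prompt; advanced privilege level required):

```
cluster1::*> cluster image show-update-log-detail
```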
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image show-update-progress command displays information about the current state of
an update. By default, the command displays the following information:
• Update phase
• Status
• Estimated Duration
• Elapsed Duration
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ndu-phase {validation|prereq-updates|ontap-updates|package-management|default-phase|post-
update-checks}] - Update Phase
Displays information about the specified update phase.
[-phase-status {in-progress|waiting|paused-by-user|paused-on-error|completed|canceled|failed|
pause-pending|cancel-pending}] - Phase Status
Displays information about progress matching the specified phase status.
[-phase-duration <text>] - Phase Duration
Displays information about progress matching the specified phase duration.
[-phase-comments <text>] - Phase Comments
Displays information about progress matching the specified phase comments.
● Examples
The following example shows the automated nondisruptive update of two nodes, nodeA and nodeB. In this
case, nodeA's update is waiting and nodeB's update is in progress; nodeB's giveback operation is underway.
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- ----------------- --------------- ---------------
Pre-update checks completed 00:10:00 00:00:02
Data ONTAP updates in-progress 01:23:00 00:32:07
Details:
cluster1::>
The following example shows the automated nondisruptive update of two nodes, nodeA and nodeB. In this
case, the automated nondisruptive update is paused-on-error in the "Data ONTAP updates" phase. nodeA's
update is waiting and nodeB's update has failed. "Status Description" shows nodeB's error and the action to take.
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- ----------------- --------------- ---------------
Pre-update checks completed 00:10:00 00:00:02
Data ONTAP updates paused-on-error 00:49:00 00:05:21
Details:
cluster1::>
The following example shows that the automated nondisruptive update is paused-on-error in the "Post-update
checks" phase, and "Status Description" shows the error and action.
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- --------------- --------------- ---------------
Data ONTAP updates completed 02:19:00 00:00:03
Post-update checks paused-on-error 00:10:00 00:00:02
Details:
cluster1::>
The following example shows that the automated nondisruptive update is completed on nodeA and nodeB.
Estimated Elapsed
Update Phase Status Duration Duration
-------------------- ----------------- --------------- ---------------
Pre-update checks completed 00:10:00 00:00:13
The following example shows the automated update of a two-node MetroCluster configuration with clusters
cluster_A and cluster_B. In this case, cluster_A's update is waiting and cluster_B's update is in progress;
cluster_B's switchback operation is underway.
Estimated Elapsed
Cluster Duration Duration Status
------------------------------ --------------- --------------- ----------------
cluster_A 00:00:00 00:00:00 waiting
cluster_B 00:00:00 00:06:42 in-progress
cluster_A::>
The following example shows that the automated update is completed on both cluster_A and cluster_B in a
two-node MetroCluster configuration.
Estimated Elapsed
Cluster Duration Duration Status
------------------------------ --------------- --------------- ----------------
cluster_A 00:00:00 00:20:44 completed
cluster_B 00:00:00 00:10:43 completed
cluster_A::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image update command is used to initiate a Data ONTAP update. The update is preceded by
a validation of the cluster to ensure that any issues that might affect the update are identified. There are
two types of updates of a cluster. A rolling update updates Data ONTAP one HA pair at a time. This type of
update is performed for clusters with fewer than eight nodes or when the -force-rolling option is
specified for clusters with eight or more nodes. A batch update is used for clusters of eight or more nodes,
and performs updates of multiple HA pairs at the same time.
There are predefined points in the update when the update can be paused (either by the user or by an
error). These pause points occur after validation, after download to the boot device, after takeover has
completed, and after giveback has completed.
● Parameters
-version <text> - Update Version
Specifies the Data ONTAP version to use to update the cluster.
[-nodes {<nodename>|local}, ...] - Node
Specifies the nodes that are to be updated. This parameter is not supported for updates of MetroCluster
configurations and for two-stage upgrades.
[-estimate-only [true]] - Estimate Only
Creates a report of the steps that occur during the update without actually doing them.
[-pause-after {none|takeover-giveback|all}] - Update Pause
Specifies that the update should pause at each predefined pause point (for example, after validation,
after download to the boot device, after takeover, and after giveback) during the update.
[-ignore-validation-warning {true|false}] - Ignore Validation
Specifies that the update should proceed even if the validation reports warnings.
[-skip-confirmation {true|false}] - Skip Confirmation
Specifies that a validation that does not detect any errors should not ask the user to confirm the
update, but simply proceed with the update.
[-force-rolling [true]] - Force Rolling Update
This option is used for clusters with eight or more nodes to specify that a rolling update (one HA pair at a
time) should be done. This parameter is not supported for single-node clusters and two-node
MetroCluster configurations.
[-stabilize-minutes <integer>] - Minutes to stabilize
Specifies the number of minutes that the update should wait after a takeover or giveback is completed.
This allows time for the clients to recover from the pause in I/O that occurs during takeover and
giveback. This parameter is not supported for single-node clusters.
● Examples
The following example shows the update operation:
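A sketch of the invocation (the prompt and the version value are placeholders; use a package version present in the cluster package repository):

```
cluster1::> cluster image update -version 9.8
```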
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image validate command checks for issues within the cluster that might lead to problems
during the update.
● Parameters
[-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
[-version <text>] - Update Version
Specifies the Data ONTAP version to use to validate the cluster.
[-rolling [true]] - Rolling Update
Specify this optional parameter on a cluster with eight or more nodes to perform a rolling-update check.
The default is to perform a batch-update check.
Note
This parameter is only supported on a cluster with eight or more nodes, and is not supported for
two-node MetroCluster.
● Examples
The following example shows the validate operation:
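A sketch of the invocation (the prompt and the version value are placeholders):

```
cluster1::> cluster image validate -version 9.8
```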
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image package delete command deletes the specified version of the package from the
package repository. The associated information about the package is also deleted from the update
database.
● Parameters
-version <text> - Version To Be Deleted
Specifies the package version that is to be deleted.
● Examples
The following example deletes the package with version 8.3:
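Based on the description above, the invocation is (prompt is a placeholder):

```
cluster1::> cluster image package delete -version 8.3
```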
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image package get command fetches a Data ONTAP package file specified by the URL into
the cluster. The package is stored in the cluster package repository, and the information from the package is
stored in the update database.
● Parameters
-url <text> - Package URL
Specifies the URL from which to get the package.
● Examples
The following example displays how to get a package from a URL:
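A sketch of the invocation (the URL, file name, and prompt are placeholders):

```
cluster1::> cluster image package get -url http://server.example.com/image.tgz
```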
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster image package show-repository command displays the package versions that are in the
cluster package repository. By default, the command displays the following information:
• Package version
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
This parameter specifies that detailed information should be displayed.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-download-ver <text>] - Downloaded Version
Displays packages with the specified download version.
[-component-name <text>, ...] - Component Name
Displays packages for the specified component.
[-component-version <text>, ...] - Component Version
Displays packages with the specified component version.
[-package-build-time <MM/DD/YYYY HH:MM:SS>] - Package Build Time
Displays packages with the specified build time.
● Examples
The following example displays the packages in the cluster package repository:
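A sketch of the invocation (prompt is a placeholder):

```
cluster1::> cluster image package show-repository
```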
Commands and methods used to manage the distributed kernel services of the cluster.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster kernel-service show command displays the following information from the master node
for each node in the cluster:
• Node name
• The quorum status of that node
• The availability status of that node
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-master-node {<nodename>|local}] - Node
The node in the cluster from which the information is being reported. If this parameter is not specified,
the command displays information about all nodes in the cluster.
[-cluster-node <text>] - Cluster Node
The node in the cluster that the listed information concerns. If this parameter is specified, the
command displays information only about the specified node.
[-status-quorum {out-of-quorum|in-quorum}] - Quorum Status
The quorum status of the node specified by -cluster-node. If this parameter is specified, the
command displays information only about the nodes with the specified state value.
[-status-avail {false|true|unknown}] - Availability Status
The availability status of the node specified by -cluster-node. If this parameter is specified, the
command displays information only about the nodes with the specified state value.
[-status-oper {unknown|operational|not-operational}] - Operational Status
The operational status of the node specified by -cluster-node. If this parameter is specified, the
command displays information only about the nodes with the specified state value.
● Examples
The following example displays information about all nodes in the cluster:
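A sketch of the invocation (placeholder prompt; advanced privilege level required):

```
cluster1::*> cluster kernel-service show
```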
Commands and methods used to manage the distributed kernel services of the cluster.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster kernel-service config modify command is used to manage the cluster kernel-service
subsystem for a node.
● Parameters
-node {<nodename>|local} - Node
The node in the cluster where the configuration is being modified.
[-kcs-enable-takeover {true|false}] - Initiated Takeover Enabled
This indicates whether the kernel service subsystem for this node will initiate a takeover of any node
determined to be out of quorum if allowed by the HA subsystem.
● Examples
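A sketch of an invocation using the documented parameters (the node name and prompt are placeholders):

```
cluster1::*> cluster kernel-service config modify -node node1 -kcs-enable-takeover true
```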
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster kernel-service config show command displays the configuration of the cluster
kernel-service subsystem for one or more nodes.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
The node in the cluster from which the information is being reported. If this parameter is not specified,
the command displays information about all nodes in the cluster.
[-clam-master-id <integer>] - CLAM Master Node ID
The node ID of the master node for the cluster. If this parameter is specified, the command displays
information only about the nodes with the specified state value.
[-kcs-enable-takeover {true|false}] - Initiated Takeover Enabled
Indicates if the distributed kernel service subsystem will initiate a takeover of any node determined to be
out of quorum if the HA subsystem allows it. If this parameter is specified, the command displays
information only about the nodes with the specified state value.
● Examples
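The output below corresponds to an invocation along these lines (a sketch; the prompt is a placeholder):

```
cluster1::*> cluster kernel-service config show
```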
Node: cluster1-01
Master Node: cluster1-01
Master Node ID: 1000
Enabled: true
Initiated Takeover Enabled: false
Initiated Core on Takeover Enabled: false
Current FSM State: Master: Waiting for heartbeat timeout
Running Version: 1
Quorum Epoch: 115
Voting Status: false
CHAAQ Enabled: true
Node: cluster1-02
Master Node: cluster1-01
Master Node ID: 1000
Enabled: true
Initiated Takeover Enabled: false
Initiated Core on Takeover Enabled: false
Current FSM State: Non-Master: Waiting for backoff timeout
Running Version: 1
Quorum Epoch: 115
Voting Status: false
CHAAQ Enabled: true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster log-forwarding create command creates log forwarding destinations for remote logging.
● Parameters
-destination <Remote InetAddress> - Destination Host
Host name or IPv4 or IPv6 address of the server to forward the logs to.
[-port <integer>] - Destination Port
The port that the destination server listens on.
[-protocol {udp-unencrypted|tcp-unencrypted|tcp-encrypted}] - Log Forwarding Protocol
The protocol used for sending messages to the destination. The protocol can be one of the
following values:
• udp-unencrypted - User Datagram Protocol with no security
• tcp-unencrypted - Transmission Control Protocol with no security
• tcp-encrypted - Transmission Control Protocol with Transport Layer Security (TLS)
[-verify-server {true|false}] - Verify Destination Server Identity
When this parameter is set to true, the identity of the log forwarding destination is verified by
validating its certificate. The value can be set to true only when the tcp-encrypted value is selected in
the protocol field. When this value is true, the remote server might be validated by OCSP. OCSP
validation for cluster logs is controlled with the security config ocsp enable -app audit_log
and security config ocsp disable -app audit_log commands.
[-facility <Syslog Facility>] - Syslog Facility
The syslog facility to use for the forwarded logs.
[-force [true]] - Skip the Connectivity Test
Normally, the cluster log-forwarding create command checks that the destination is reachable
via an ICMP ping, and fails if it is not reachable. Setting this value to true bypasses the ping check so
that the destination can be configured when it is unreachable.
● Examples
This example causes audit logs to be forwarded to a server at address 192.168.0.1, port 514 with USER
facility.
cluster1::> cluster log-forwarding create -destination 192.168.0.1 -port 514 -facility user
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster log-forwarding delete command deletes log forwarding destinations for remote logging.
● Parameters
-destination <Remote InetAddress> - Destination Host
Host name or IPv4 or IPv6 address of the server to delete the forwarding entry for.
-port <integer> - Destination Port
The port that the destination server listens on.
● Examples
This example deletes the forwarding of all logs to the server at address 1.1.1.1, port 514.
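Using the address and port from the description (prompt is a placeholder):

```
cluster1::> cluster log-forwarding delete -destination 1.1.1.1 -port 514
```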
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster log-forwarding modify command modifies log forwarding destinations for remote
logging.
● Parameters
-destination <Remote InetAddress> - Destination Host
The host name or IPv4 or IPv6 address of the server to be modified.
-port <integer> - Destination Port
The port that the destination server listens on.
[-verify-server {true|false}] - Verify Destination Server Identity
When this parameter is set to true, the identity of the log forwarding destination is verified by
validating its certificate. The value can be set to true only when the tcp-encrypted value is selected in
the protocol field. When this value is true, the remote server might be validated by OCSP. OCSP
validation for cluster logs is controlled with the security config ocsp enable -app audit_log
and security config ocsp disable -app audit_log commands.
[-facility <Syslog Facility>] - Syslog Facility
The syslog facility to use for the forwarded logs.
● Examples
This example modifies the facility of audit logs that are forwarded to the destination server at address
192.168.0.1, port 514.
cluster1::> cluster log-forwarding modify -destination 192.168.0.1 -port 514 -facility local1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster log-forwarding show command displays log forwarding information:
• Destination (IPv4/IPv6/hostname)
• Port number
• List of log classes
• Facility
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-destination <Remote InetAddress>] - Destination Host
If this optional parameter is specified, the command displays information about the forwarding
destinations with the specified host name, IPv4 or IPv6 address.
[-port <integer>] - Destination Port
If this optional parameter is specified, the command displays information about the forwarding
destinations with the specified ports.
[-protocol {udp-unencrypted|tcp-unencrypted|tcp-encrypted}] - Log Forwarding Protocol
If this optional parameter is specified, the command displays information about the forwarding
destinations with the specified protocols.
[-verify-server {true|false}] - Verify Destination Server Identity
If this optional parameter is specified, the command displays information about the forwarding
destinations with the specified verify-server values.
[-facility <Syslog Facility>] - Syslog Facility
If this optional parameter is specified, the command displays information about the forwarding
destinations with the specified facility.
● Examples
Verify Syslog
Destination Host Port Protocol Server Facility
------------------------ ------ --------------- ------ --------
192.168.0.1 514 udp-unencrypted false user
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer create command establishes a peer relationship between two clusters. Cluster
peering enables independent clusters to coordinate and exchange data.
Before creating a new cluster peer relationship, make sure that both clusters are individually healthy and
that there are no other peer relationships between the two clusters that might interfere with the new
relationship.
You can create a cluster peer relationship using the IPv4 or IPv6 protocol. You may not use both protocols
within a single relationship.
Use the cluster show and cluster peer show commands on each cluster to display health, peering
eligibility, and peering information about the two clusters.
● Parameters
[-peer-addrs <Remote InetAddress>, ...] - Remote Intercluster Addresses
Use this parameter to specify the names or IP addresses of the logical interfaces used for intercluster
communication. Separate the addresses with commas.
The addresses you provide here are associated with the remote cluster until you modify or delete the
relationship, regardless of whether the addresses are valid. Make sure to provide addresses which you
know will remain available on the remote cluster. You can use the hostnames of the remote cluster's
intercluster addresses, the IP addresses of the remote cluster's intercluster LIFs or both.
[-username <text>] - Remote User Name
Use this optional parameter to specify a username that runs a reciprocal cluster peer create
command on the peered cluster. If you choose not to use the reciprocal creation option, by not supplying
a username for reciprocal creation, you must run cluster peer create again on the remote cluster
to complete the peering relationship.
If you specify the username for the remote cluster, you will be prompted to enter the associated remote
password. These credentials are not stored; they are used only during creation to authenticate with the
remote cluster and to enable the remote cluster to authorize the peering request. The provided
username's profile must have access to the console application in the remote cluster.
Use the security login role show and security login show commands on each cluster to find
user names and their privilege levels.
[-no-authentication [true]] - Do Not Use Authentication
Use this optional parameter when omitting the -username parameter to indicate that you will create an
unauthenticated peering relationship.
[-timeout <integer>] - Operation Timeout (seconds) (privilege: advanced)
Use this optional parameter to specify a timeout value for peer communications. Specify the value in
seconds. The default timeout value is 60 seconds.
[-address-family {ipv4|ipv6}] - Address Family of Relationship
Use this optional parameter to specify the address family of the cluster peer relationship. The default is
based on existing relationships, existing local intercluster LIFs belonging to a particular address-family,
and the addresses supplied to the cluster peer create command.
[-offer-expiration {MM/DD/YYYY HH:MM:SS | {1..7}days | {1..168}hours | PnDTnHnMnS | PnW}] -
Passphrase Match Deadline
Specifying cluster peer create normally creates an offer to establish authentication with a cluster
that is a potential cluster peer to this cluster. Such offers expire unless they are accepted within some
definite time. Use this optional parameter to specify the date and time at which this offer should expire,
the time after which the offer will no longer be accepted.
[-rpc-connect-timeout <integer>] - Timeout for RPC Connect (seconds) (privilege: advanced)
Use this optional parameter to specify a timeout value for the RPC connect during peer communications.
Specify the value in seconds. The default timeout value is 10 seconds.
[-update-ping-timeout <integer>] - Timeout for Update Pings (seconds) (privilege: advanced)
Use this optional parameter to specify a timeout value for pings while updating remote cluster
information. Specify the value in seconds. The default timeout value is 5 seconds. This parameter applies
only to cluster peer relationships using the IPv4 protocol.
[-ipspace <IPspace>] - IPspace for the Relationship
Use this optional parameter to specify the IPspace within which the cluster peering relationship is to
operate. The default is to use the 'Default' IPspace.
● Examples
This example creates a peer relationship between cluster1 and cluster2. This reciprocal create executes the
create command on both the local cluster and the remote cluster. The cluster peer create command can use
the hostnames of cluster2's intercluster addresses, the IP addresses of cluster2's intercluster LIFs, or both.
Note that the admin user's password was typed at the prompt, but was not displayed.
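A sketch of the reciprocal invocation described above (the intercluster addresses and username are placeholders):

```
cluster1::> cluster peer create -peer-addrs cluster2-01.example.com,cluster2-02.example.com -username admin
```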
Remote Password:
This example shows coordinated peer creation. The cluster peer create command was issued locally on
each cluster. This does not require you to provide the username and password for the remote cluster. There
is a password prompt, but if you are logged in as the admin user, you may simply press enter.
Remote Password:
NOTICE: Addition of the local cluster information to the remote cluster has
failed with the following error: not authorized for that command. You may
need to repeat this command on the remote cluster.
Remote Password:
NOTICE: Addition of the local cluster information to the remote cluster has
failed with the following error: not authorized for that command. You may
need to repeat this command on the remote cluster.
This example shows a reciprocal cluster peer create over IPv6 addresses, that establishes a cluster peer
relationship with an IPv6 address family.
Remote Password:
This example shows creation of an authenticated peering relationship. It is an example of using the
coordinated method to create a cluster peer relationship. The cluster peer create command is issued
locally on each cluster. Before executing this pair of commands, a passphrase to be used with the
commands is chosen and given at the prompts. The passphrase can be any text; it is prompted for twice on
each cluster, and all four copies of the passphrase must agree. The passphrase does not echo on the screen.
The passphrase must be longer than the minimum length as specified by the cluster peer policy on
both clusters.
Notice: Now use the same passphrase in the "cluster peer create" command in the
other cluster.
This example creates a peer relationship between cluster1 and cluster2. This reciprocal create executes the
create command on both the local cluster and the remote cluster. The cluster peer create command can use
the hostnames of cluster2's intercluster addresses, the IP addresses of cluster2's intercluster LIFs or both.
Note that the admin user's password was typed at the prompt, but was not displayed. The -local-name
parameter is specified to create a local name used to identify the peer cluster in cases where the name of
the peer cluster is not unique or not descriptive.
The following example creates a peer relationship between cluster1 and cluster2 using system-generated
passphrases:
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Peer Cluster Name: cluster2
Initial Allowed Vserver Peers: -
The following example creates a cluster peer offer from cluster1 for an anonymous cluster, using a system-
generated passphrase with an offer expiration period of two days; cluster2 then accepts the offer using the
system-generated passphrase:
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Peer Cluster Name: Clus_7ShR (temporary generated)
Initial Allowed Vserver Peers: -
Expiration Time: 6/9/2017 08:16:10 +5:30
Intercluster LIF IP: 10.140.106.185
Cluster "cluster1" creates an offer with the initial-allowed-vserver-peers option set to Vservers "vs1" and
"vs2". The peer cluster "cluster2" uses the offer and creates a peer relationship with cluster1. Upon
successful establishment of the peer relationship, Vserver peer permission entries are created for the Vservers
"vs1" and "vs2" in cluster "cluster1" for the peer cluster "cluster2". The following example shows the usage of
the initial-allowed-vserver-peers option in the cluster peer creation workflow:
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Peer Cluster Name: Clus_7ShR (temporary generated)
Initial Allowed Vserver Peers: vs1,vs2
Expiration Time: 6/7/2017 09:16:10 +5:30
Intercluster LIF IP: 10.140.106.185
Related reference
security login role show (page 673)
security login show (page 655)
cluster show (page 70)
cluster peer show (page 114)
cluster peer policy commands (page 129)
cluster identity modify (page 81)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer delete command removes a peering relationship. It removes the relationship records,
state data, and all associated jobs.
Before removing the relationship, the command verifies that no resources depend on the relationship. For
example, if any SnapMirror relationships exist, the command denies the request to delete the peering
relationship. You must remove all dependencies for the deletion to succeed. The cluster peer delete
command removes only the local instance of the peer relationship. An administrator in the peer cluster must
use the cluster peer delete command there as well to completely remove the relationship.
● Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify the peering relationship to delete by specifying the name of the peered
cluster.
● Examples
This example shows a failed deletion due to a SnapMirror dependency.
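A sketch of the invocation, assuming the peer cluster is named cluster2:

cluster1::> cluster peer delete -cluster cluster2

If a SnapMirror relationship still depends on the peering relationship, the command is denied until that dependency is removed.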
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer modify command modifies the attributes of a peering relationship. When you modify
a peer relationship and specify -peer-addrs, all of the remote addresses must respond, must be
intercluster addresses, and must belong to the remote cluster that is being modified; or the modification
request is denied.
● Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify the peering relationship to modify by specifying the name of the peered
cluster.
[-peer-addrs <Remote InetAddress>, ...] - Remote Intercluster Addresses
Use this parameter to specify the names or IP addresses of the logical interfaces used for intercluster
communication. Separate the addresses with commas. The list of addresses you provide replaces the
existing list of addresses.
[-address-family {ipv4|ipv6}] - Address Family of Relationship
Use this parameter to specify the address family of the names specified with the peer-addrs parameter.
[-timeout <integer>] - Operation Timeout (seconds) (privilege: advanced)
Use this parameter to specify a timeout value for peer communications. Specify the value in seconds.
● Examples
This example modifies the peering relationship to use a new IP address in the remote cluster for intercluster
communications and revoke authentication.
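A sketch of such a modification, assuming 10.98.234.246 is the new intercluster address and cluster2 is the peer:

cluster1::> cluster peer modify -cluster cluster2 -peer-addrs 10.98.234.246 -auth-status-admin no-authentication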
View the existing cluster peer configuration using the following command:
Warning: You are removing authentication from the peering relationship with
cluster "cluster2". Use the "cluster peer modify" command on
cluster "cluster2" with the "-auth-status-admin
no-authentication" parameter to complete authentication removal from
the peering relationship.
The following example modifies the peering relationship to use authentication with the
-generate-passphrase option.
Notice: Use the below system-generated passphrase in the "cluster peer modify"
command in the other cluster.
Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
Expiration Time: 6/7/2017 09:16:10 +5:30
Peer Cluster Name: cluster2
Until then, the operational authentication state of the relationship remains as "pending".
Modify the cluster peer relationship in cluster2 with the use-authentication option and use the auto-generated
passphrase.
Related reference
cluster peer create (page 102)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer modify-local-name command modifies the local name for a remote cluster. The new
local name must be unique among all the local names for the remote clusters with which this cluster is
peered.
● Parameters
-name <text> - Cluster Peer Name
Use this parameter to specify the existing local name for a peer cluster.
-new-name <Cluster name> - Cluster Peer Local Name
Use this parameter to specify the new local name of the peer cluster. The new local name must conform
to the same rules as a cluster name.
● Examples
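A sketch of renaming the local name used for peer cluster cluster2; the new name "cluster2A" is illustrative:

cluster1::> cluster peer modify-local-name -name cluster2 -new-name cluster2A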
Related reference
cluster identity modify (page 81)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer ping command displays the status of the network mesh used by the peering
relationship. The command checks the network connection to each remote IP address known by the cluster.
This includes all intercluster addresses. It is possible for a known address to be temporarily absent during
the ping; such addresses are not checked.
The most useful parameters for diagnosing problems are -count and -packet-size. Use the -count and
-packet-size parameters to diagnose problems similarly to how you use them with the standard ping
utility.
To display network connection status within a cluster, use the network ping command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-originating-node {<nodename>|local}] - Node that Initiates Ping
Use this parameter to send the ping from the node you specify.
[-destination-cluster <Cluster name>] - Cluster to Ping
Use this parameter to specify the peer cluster you wish to ping.
[-destination-node <Peer Node Name>] - Node to Ping in Destination Cluster
Use this parameter to specify a specific node in the destination cluster to ping.
[-ip-address <IP Address>] - Active IP Address
Use this parameter to specify the active IP address you wish to ping.
[-count <integer>] - Ping Count
Use this parameter to specify the number of requests to be sent to the destination.
[-status {unknown_node|internal_error|unreachable|session_reachable|interface_reachable}] - Status
of Ping Operation
Use this parameter to display only ping results that have the status you specify.
[-timeout <integer>] - Ping Timeout in Seconds
Use this parameter to specify a timeout value in seconds for the ping operation.
[-packet-size <integer>] - Size of Packet
Use this parameter to specify the number of data bytes to be sent in the ping packet.
[-ttl <integer>] - Time to Live/ Number of Hops
Use this parameter to specify the maximum number of network hops a packet may make before it is
considered a failure.
[-response-time <double>] - Response Time (ms)
Use this parameter to display only nodes that have the response time (in milliseconds) that you specify.
This parameter is most useful when specified with a range of values, such as >500.
● Examples
This example shows a ping of cluster1 and cluster2 from cluster2. All nodes are reachable.
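A sketch of a diagnostic ping, with illustrative values for the -count and -packet-size parameters:

cluster1::> cluster peer ping -destination-cluster cluster2 -count 3 -packet-size 4096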
Related reference
network ping (page 371)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer show command displays information about the peering relationships between the
current cluster and other clusters. Cluster peering enables independent clusters to coordinate and exchange
data.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <text>] - Peer Cluster Name
Selects the peered clusters that match this parameter value.
[-cluster-uuid <UUID>] - Cluster UUID (privilege: advanced)
Selects the peered clusters that match this parameter value.
[-peer-addrs <Remote InetAddress>, ...] - Remote Intercluster Addresses
Selects the peered clusters that match this parameter value (remote-host name or IP address).
[-availability <availability>] - Availability of the Remote Cluster
Selects the peered clusters that match this parameter value. This parameter can have five different
values:
• Available - The peer cluster availability status will be Available only if all the nodes in the local cluster
are able to contact all the nodes in the remote cluster.
• Partial - The peer cluster availability status will be Partial only if some nodes in the local cluster are
not able to contact some or all nodes in the peer cluster.
• Unavailable - The peer cluster availability status will be Unavailable only if all the nodes in the local
cluster are not able to contact any node in the peer cluster.
• Pending - The peer cluster availability status will be Pending while the system is creating in-memory
health data.
• Unidentified - The peer cluster availability status will be Unidentified if the cluster peer offer is created
for an anonymous cluster and is unused. When the offer is used, the availability changes to one of the
statuses above.
Note: If one or more nodes in the local cluster are offline or unreachable, then those nodes are not used
to determine the availability status for the remote nodes.
[-rcluster <text>] - Remote Cluster Name
Selects the peered clusters that match this parameter value.
[-ip-addrs <Remote InetAddress>, ...] - Active IP Addresses
Selects the peered clusters that match this parameter value.
[-serialnumber <Cluster Serial Number>] - Cluster Serial Number
Selects the peered clusters that match this parameter value.
[-remote-cluster-nodes <text>, ...] - Remote Cluster Nodes
Selects the peered clusters that match this parameter value.
[-remote-cluster-health {true|false}] - Remote Cluster Health
Selects the peered clusters that match this parameter value.
• true - This means that there is cluster quorum in the peer cluster.
• false - This means that there is no cluster quorum in the peer cluster.
[-unreachable-local-nodes <text>, ...] - Unreachable Local Nodes
Selects the peered clusters that match this parameter value.
[-timeout <integer>] - Operation Timeout (seconds) (privilege: advanced)
Selects the peered clusters that match this parameter value.
[-address-family {ipv4|ipv6}] - Address Family of Relationship
Selects the peered clusters that have a relationship established using this protocol.
[-auth-status-admin {no-authentication|revoked|use-authentication}] - Authentication Status
Administrative
Selects the peered clusters that match this parameter value, which must be chosen from the following
values.
• no-authentication - The cluster peer relationship uses no authentication.
• use-authentication - The cluster peer relationship is authenticated.
• revoked - The cluster peer relationship is revoked until agreement can be reached.
[-auth-status-operational {ok|absent|pending|expired|revoked|declined|refused|ok-and-offer|
absent-but-offer|revoked-but-offer|key-mismatch|intent-mismatch|incapable}] - Authentication Status
Operational
Selects the peered clusters that match this parameter value, which must be one of the following values.
• ok - The clusters both use authentication and they have agreed on an authentication key.
• absent - The clusters agree not to use authentication.
• pending - This cluster has made an outstanding offer to authenticate with the other cluster, but
agreement has not yet been reached.
• expired - This cluster's offer to authenticate with the other cluster expired before agreement was
reached.
• revoked - This cluster has revoked any prior authentication agreement.
• declined - The other cluster has revoked the authentication agreement and is declining to
communicate with this cluster.
• refused - The other cluster actively refuses the communication attempts, perhaps because its part of
the peering relationship has been deleted.
• ok-and-offer - The clusters agree on an authentication key and are using it. In addition, this cluster
has made an outstanding offer to re-authenticate with the other cluster.
• absent-but-offer - The clusters currently agree that neither side requires authentication of the other,
but this cluster has made an outstanding offer to authenticate.
• revoked-but-offer - This cluster has revoked any authentication agreement, but it has made an
outstanding offer to authenticate.
• intent-mismatch - The two clusters disagree on whether authentication is required.
• key-mismatch - The two clusters both believe that they are authenticated, but one of the shared
secrets has become corrupted.
• incapable - The other cluster is no longer running a version of Data ONTAP that supports authenticated
cluster peering.
[-rpc-connect-timeout <integer>] - Timeout for RPC Connect (privilege: advanced)
Selects the peered clusters that match this parameter value.
[-update-ping-timeout <integer>] - Timeout for Update Pings (privilege: advanced)
Selects the peered clusters that match this parameter value.
[-last-updated <MM/DD/YYYY HH:MM:SS>] - Last Update Time
Selects the peered clusters that match this parameter value.
[-ipspace <IPspace>] - IPspace for the Relationship
Selects the peered clusters whose relationships are to cross the named local IPspace. The default value is
the IPspace name "Default". In relationships created before ONTAP 8.3.1, the initial value is "-" and is not
updated to "Default" until an action is taken on a cluster peer relationship, such as creating, modifying,
or deleting a relationship.
[-encryption-protocol-proposed {none|tls-psk}] - Proposed Setting for Encryption of Inter-Cluster
Communication
Selects the peered clusters that match the value of this parameter, that is, whether they are proposing
to encrypt their cross-cluster communication or not.
[-encryption-protocol {none|tls-psk}] - Encryption Protocol For Inter-Cluster Communication
Selects the peered clusters that match the value of this parameter, that is, whether they are encrypting
their cross-cluster communication or not.
[-psk-algorithm {akep2|jpake}] - Algorithm By Which the PSK Was Derived
Selects the peered clusters that match the value of this parameter, that is, which of the following
algorithms was used to derive the Pre-Shared Key between the clusters.
• akep2 - The key was obtained through the AKEP2-and-PBKDF2 algorithm.
• jpake - The key was obtained through the J-PAKE algorithm.
● Examples
This example shows the output of the cluster peer show command when all nodes in the local cluster are
able to contact all nodes in the remote peer cluster. Additionally, the peer relationship is authenticated and
operating correctly.
This example shows the output of the cluster peer show command when some nodes in the local cluster are
not able to contact some or all of the nodes in the remote peer cluster.
This example shows the output of the cluster peer show command when some nodes in the local cluster
cannot be contacted from the node where the command is executed, but all the other nodes, including the
node on which the command is executed, are able to contact all nodes in the remote peer cluster.
This example shows the output of the cluster peer show command when some nodes in the local cluster
cannot be contacted from the node where the command is executed, and the node on which the command is
executed is also not able to contact the remote peer cluster.
This example shows the output of the cluster peer show command when all the nodes in the local cluster are
not able to contact any nodes in the remote peer cluster.
This example shows the output of the cluster peer show command while the system is creating in-
memory health data.
This example shows the output of the cluster peer show command for the offer created for an
anonymous cluster:
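A sketch of the commands behind the examples above; the summary view lists each peer, and -instance shows all fields for a selected peer:

cluster1::> cluster peer show
cluster1::> cluster peer show -cluster cluster2 -instance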
The cluster peer connection commands provide you with the ability to observe, and to some extent
manage, the connections created on behalf of cluster peering, both for control and data access.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer connection show command displays information about the current TCP connections
and how they are supporting the set of peering relationships.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster-name <text>] - Remote Cluster
Selects the connections associated with the named peered cluster.
[-node {<nodename>|local}] - Node
Selects the connections hosted by the given node.
[-connection-type {mgmt-client|mgmt-server|data}] - Cluster Peering Connection Type
Selects the connections of the named type. This parameter can have one of three different values:
• mgmt-client - Management-plane client connections, created so that this node may make
management-plane requests of other nodes.
• mgmt-server - Management-plane server connections, over which this node services requests made by
other nodes' mgmt-client connections.
• data - Connections made between the data planes of different nodes.
[-index <integer>] - Index of Connection
Selects the connections with the given index value.
[-cluster-uuid <UUID>] - Cluster UUID (privilege: advanced)
Selects the connections to the cluster with the given cluster UUID.
[-auth-status-admin {no-authentication|revoked|use-authentication}] - Authentication Status
Administrative
Selects connections to the peered clusters whose intended authentication matches this parameter value.
[-auth-status-operational {ok|absent|pending|expired|revoked|declined|refused|ok-and-offer|
absent-but-offer|revoked-but-offer|key-mismatch|intent-mismatch|incapable}] - Authentication Status
Operational
Selects connections to the peered clusters whose authentication state matches this parameter value.
[-authenticated {true|false}] - Authenticated
Selects connections that have been authenticated, or not, according to this parameter value.
[-port <integer>] - Network Port
Selects the connections whose port matches this parameter value.
[-idle <[<integer>h][<integer>m][<integer>s]>] - Idle Time
Selects the connections whose idle times match this parameter value.
[-address <IP Address>] - Remote Network Address
Selects the connections that have this parameter value as the remote network address.
● Examples
This example shows the output of the cluster peer connection show command.
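A sketch of the command, optionally filtered to one peer; the peer name cluster2 is illustrative:

cluster1::> cluster peer connection show
cluster1::> cluster peer connection show -cluster-name cluster2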
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer health show command displays information about the health of the nodes in peer
clusters from the perspective of the nodes in the local cluster. The command obtains health information by
performing connectivity and status probes of each peer cluster's nodes from each node in the local cluster.
To enable quick access to remote cluster health information, remote cluster health status is periodically
checked and cached. These cached results enable users and system features to quickly assess the availability
of remote resources. By default, this command accesses cached results. Use the -bypass-cache true option
to force a current, non-cached check of remote cluster health.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-originating-node {<nodename>|local}] - Local Node
● Examples
The following example shows typical output for this command in a cluster of two nodes that has a peer
cluster of two nodes.
The following example shows detailed health information for node3 in cluster2 from the perspective of
node1 in cluster1.
cluster1::> cluster peer health show -originating-node node1 -destination-cluster cluster2 -destination-node node3 -instance
The cluster peer offer commands provide you with the ability to manage the authentication offers
that can be created by the cluster peer create and cluster peer modify commands.
Related reference
cluster peer create (page 102)
cluster peer modify (page 109)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer offer cancel command cancels an outstanding offer to authenticate with a
potentially peered cluster. After the command completes, the given cluster can no longer establish
authentication using the given authentication offer.
● Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify which offer should be cancelled, by specifying the name of the cluster to
which the offer is extended.
● Examples
The following example cancels the authentication offer to cluster2.
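A sketch of the cancellation described above:

cluster1::> cluster peer offer cancel -cluster cluster2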
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer offer modify command modifies the outstanding offer to authenticate with a
potentially peered cluster. Every authentication offer has an expiration time, after which the offer will no
longer be honored. This command is used to change that expiration time. To cancel the offer outright, use
the cluster peer offer cancel command instead.
● Parameters
-cluster <text> - Peer Cluster Name
Use this parameter to specify the offer to be modified by indicating the name of the cluster to which it
has been extended.
[-offer-expiration {MM/DD/YYYY HH:MM:SS | {1..7}days | {1..168}hours | PnDTnHnMnS | PnW}] -
Authentication Offer Expiration Time
Use this parameter to specify the new expiration time for the offer.
[-initial-allowed-vserver-peers <Vserver Name>, ...] - Vservers Initially Allowed for Peering
Use this optional parameter to specify the list of Vservers for which reciprocal Vserver peering with the
peer cluster should be enabled.
● Examples
This example modifies the expiration time for the authentication offer to push it out by an hour.
cluster1::> cluster peer offer modify -cluster cluster2 -offer-expiration "7/23/2013 16:45:47"
Related reference
cluster peer offer cancel (page 127)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer offer show command displays information about authentication offers still pending
with potential peer clusters. By default, the command displays information about all unexpired offers made
by the local cluster.
To display detailed information about a specific offer, run the command with the -cluster parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <text>] - Peer Cluster Name
Selects the offer that matches this parameter value.
[-auth-status-operational {ok|absent|pending|expired|revoked|declined|refused|ok-and-offer|
absent-but-offer|revoked-but-offer|key-mismatch|intent-mismatch|incapable}] - Authentication Status
Operational
Selects the offers that match this parameter value.
[-offer-creation <MM/DD/YYYY HH:MM:SS>] - Authentication Offer Creation Time
Selects the offers that match this parameter value.
[-offer-expiration {MM/DD/YYYY HH:MM:SS | {1..7}days | {1..168}hours | PnDTnHnMnS | PnW}] -
Authentication Offer Expiration Time
Selects the offers that match this parameter value.
[-initial-allowed-vserver-peers <Vserver Name>, ...] - Vservers Initially Allowed for Peering
Selects the offers that match this parameter value.
[-offer-creator <text>] - Authentication Offer Creator
Selects the offers that match this parameter value.
[-encryption-protocol-proposed {none|tls-psk}] - Encryption Protocol to Be Used For the
Relationship
Selects the offers that match this parameter value.
● Examples
The following example displays information about the outstanding authentication offers:
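A sketch of the commands; -cluster selects a single offer and -instance displays its details:

cluster1::> cluster peer offer show
cluster1::> cluster peer offer show -cluster cluster2 -instance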
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer policy modify command modifies the prevailing policy settings. One setting governs
whether unauthenticated cluster peer relationships can exist. The other setting specifies a minimum length
for passphrases.
● Parameters
[-is-unauthenticated-access-permitted {true|false}] - Is Unauthenticated Cluster Peer Access
Permitted
Use this parameter to specify whether unauthenticated peering relationships are allowed to exist.
Setting the parameter value to true allows such relationships to exist. Setting the value to false prevents
both the creation of unauthenticated peering relationships and the modification of existing peering
relationships to be unauthenticated. Setting the value to false is not possible if the cluster currently is in
any unauthenticated relationships.
[-passphrase-minlength <integer>] - Passphrase Length Minimum
Use this parameter to specify a minimum length for passphrases as given to the cluster peer
create or cluster peer modify commands in the future. The default value for this parameter is 8.
[-is-unencrypted-access-permitted {true|false}] - Is Unencrypted Cluster Peer Access Permitted
Use this parameter to specify whether peering relationships that do not use encryption are allowed to
exist. Setting the parameter value to true allows such relationships to exist. Setting the value to false
prevents both the creation of unencrypted peering relationships and the modification of existing peering
relationships to be unencrypted. Setting the value to false is not possible if the cluster currently is in any
unencrypted relationships.
● Examples
This example modifies the peering policy to disallow unauthenticated intercluster communications.
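A sketch of the modification described above:

cluster1::> cluster peer policy modify -is-unauthenticated-access-permitted false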
Related reference
cluster peer create (page 102)
cluster peer modify (page 109)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster peer policy show command displays the prevailing cluster peer authentication policy.
There are two policies at present: one to control whether any cluster peer relationships can be
unauthenticated, and one to control the minimum length for a passphrase. If the policy is set to preclude
unauthenticated peering relationships, then unauthenticated relationships cannot be created inadvertently.
Passphrases of less than the minimum length may not be used. By default, this minimum length is set to 8,
so passphrases must be 8 characters long or longer.
● Examples
This example shows the cluster peer policy when unauthenticated relationships may not be created
inadvertently.
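A sketch of the command:

cluster1::> cluster peer policy show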
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster quorum-service options modify command modifies the values of cluster quorum
services options.
● Parameters
[-ignore-quorum-warning-confirmations {true|false}] - Whether or Not Warnings Are Enabled
Specifies whether cluster quorum warnings and confirmations should be ignored when cluster
operations could negatively impact cluster quorum:
• Halting a node (system node halt)
• Rebooting a node (system node reboot)
• Issuing a planned takeover (storage failover takeover)
The default setting is false.
● Examples
The following example shows the usage of this command:
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by Fujitsu personnel.
Do you want to continue? {y|n}: y
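A sketch of the modify invocation at the advanced privilege level, assuming the option is being enabled:

cluster1::*> cluster quorum-service options modify -ignore-quorum-warning-confirmations true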
Related reference
system node halt (page 1404)
system node reboot (page 1407)
storage failover takeover (page 1146)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster quorum-service options show command displays the values of cluster quorum services
options.
● Examples
The following example shows the state of the ignore-quorum-warning-confirmations option when it is
false and when it is true.
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by Fujitsu personnel.
Do you want to continue? {y|n}: y
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster ring show command displays a cluster's ring-replication status. Support personnel might ask
you to run this command to assist with troubleshooting.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the rings that match this parameter value.
[-unitname {mgmt|vldb|vifmgr|bcomd|crs|availd}] - Unit Name
Selects the rings that match this parameter value. Possible values are:
• mgmt - The management application
• vldb - The volume location database
• vifmgr - The virtual-interface manager
● Examples
The following example displays information about all replication rings in a two-node cluster:
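A sketch of the command, optionally limited to one replication unit; the vldb unit is used as an illustration:

cluster1::> cluster ring show
cluster1::> cluster ring show -unitname vldb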
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster statistics show command displays the following information. Each item lists the current
value and, if applicable, the change (delta) from the previous reported value.
• CPU busy percentage
• Average of CPU busy percentage (advanced privilege level only)
• Total number of NFS and CIFS operations
• Number of NFS operations
• Number of CIFS operations
• Number of cache operations (advanced privilege level only)
• Total amount of network data received (advanced privilege level only)
• Total amount of network data sent (advanced privilege level only)
• Number of packets received (advanced privilege level only)
• Number of packets sent (advanced privilege level only)
• Busy percentage for the data network
• Amount of data received on the data network
• Amount of data sent on the data network
• Busy percentage for the cluster network
• Amount of data received on the cluster network
• Amount of data sent on the cluster network
• Amount of data read from disk
• Amount of data written to disk
At the diagnostic privilege level, the command displays the following information:
• Average of CPU busy percentage
• CPU busy percentage
• Total number of operations
• Number of NFS operations
• Number of CIFS operations
• Number of 7M Fcache operations
• Number of SpinFS operations
• Total amount of network traffic received
• Total amount of network traffic sent
• Percentage of data-network utilization
• Amount of data-network traffic received
• Amount of data-network traffic sent
• Percentage of cluster-network utilization
• Amount of cluster-network traffic received
• Amount of cluster-network traffic sent
• Amount of data read from disk
• Amount of data written to disk
• Number of packets received
• Number of packets sent
● Examples
The following example displays cluster statistics:
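The example itself is missing from this extract; a sketch of the invocation (the prompt and the counter names and values shown are illustrative, not actual output):

```shell
cluster1::> cluster statistics show
Counter                      Value    Delta
------------------------ --------- --------
CPU Busy:                      63%      +3
Total Ops:                    1235   +234/s
NFS Ops:                      1186   +224/s
CIFS Ops:                       49    +10/s
...
```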
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster time-service ntp key create command creates a cryptographic key that can be used to
verify that Network Time Protocol (NTP) packets are coming from a configured NTP server.
To use the created key, assign it to the required NTP server configuration using the cluster
time-service ntp server create or cluster time-service ntp server modify command.
Note
The id, key-type and value must all be configured identically on both the ONTAP cluster and the
external NTP time server for the cluster to be able to synchronize time to that server.
● Parameters
-id <integer> - NTP Symmetric Authentication Key ID
Uniquely identifies this key in the cluster. Must be an integer between 1 and 65535.
-type <sha1> - NTP Symmetric Authentication Key Type
The cryptographic algorithm that this key is used with. Only SHA1 is currently supported.
-value <text> - NTP Symmetric Authentication Key Value
A 40-character hexadecimal string that represents a cryptographic key shared with the NTP
server.
● Examples
The following example creates a new SHA-1 NTP symmetric authentication key.
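The example is not reproduced here; a command of this form would fit (the key value is an illustrative 40-character hexadecimal string, not a real shared key):

```shell
cluster1::> cluster time-service ntp key create -id 1 -type sha1
            -value 2e874852e7d41cda65b23915aa5544838b366c51
```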
Related reference
cluster time-service ntp server create (page 138)
cluster time-service ntp server modify (page 140)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Delete an NTP symmetric authentication key.
Note
It is not possible to delete a key that is referenced in an existing NTP server configuration. Remove all
references to this key using the cluster time-service ntp server modify or cluster time-
service ntp server delete commands before attempting to delete the key using this command.
● Parameters
-id <integer> - NTP Symmetric Authentication Key ID
Unique identifier of this key in the cluster.
● Examples
The following example deletes the NTP key with ID 1.
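The example is not reproduced here; the invocation would look like this (cluster prompt illustrative):

```shell
cluster1::> cluster time-service ntp key delete -id 1
```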
Related reference
cluster time-service ntp server modify (page 140)
cluster time-service ntp server delete (page 139)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster time-service ntp key modify command modifies a Network Time Protocol (NTP)
symmetric authentication key.
● Parameters
-id <integer> - NTP Symmetric Authentication Key ID
Unique identifier of this key in the cluster.
[-type <sha1>] - NTP Symmetric Authentication Key Type
The cryptographic algorithm that this key is used with. Only SHA1 is currently supported.
[-value <text>] - NTP Symmetric Authentication Key Value
A 40-character hexadecimal string that represents a cryptographic key shared with the NTP
server.
● Examples
The following example modifies the NTP key with ID 1 to have a new value.
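The example is not reproduced here; a command of this form would fit (the new key value is an illustrative 40-character hexadecimal string):

```shell
cluster1::> cluster time-service ntp key modify -id 1
            -value 0b6d524dca4610331cc6e44bee07b484dc9b1828
```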
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster time-service ntp key show command displays the configured Network Time Protocol
(NTP) symmetric authentication keys.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - NTP Symmetric Authentication Key ID
If this parameter is specified, the command displays the keys that match the specified key ID.
[-type <sha1>] - NTP Symmetric Authentication Key Type
If this parameter is specified, the command displays the keys that match the specified key type.
[-value <text>] - NTP Symmetric Authentication Key Value
If this parameter is specified, the command displays the keys that match the specified value.
● Examples
The following example displays information about the NTP authentication keys in the cluster:
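The example output is missing from this extract; it might look as follows (the key value and table layout are illustrative):

```shell
cluster1::> cluster time-service ntp key show
ID     Type     Value
------ -------- ----------------------------------------
1      sha1     2e874852e7d41cda65b23915aa5544838b366c51
1 entry was displayed.
```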
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster time-service ntp security modify command allows setting of security parameters
related to the Network Time Protocol (NTP) subsystem.
● Parameters
[-is-query-enabled {true|false}] - Is Querying of NTP Server Enabled?
Setting this parameter to true allows querying of the NTP subsystem from systems external to the
cluster. For example, querying a node using the standard "ntpq" command can be enabled by this
command. The default setting is false to protect against possible security vulnerabilities. If querying of
the NTP subsystem is disabled, the cluster time-service ntp status show command can be
used to obtain similar information. Although querying of the NTP subsystem from external hosts can be
disabled with this command, executing a local query to the localhost address is always enabled.
● Examples
The following example enables the querying of the NTP subsystem from clients external to the cluster:
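The example is not reproduced here; the invocation would look like this (advanced-privilege prompt illustrative):

```shell
cluster1::*> cluster time-service ntp security modify -is-query-enabled true
```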
Related reference
cluster time-service ntp status show (page 142)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster time-service ntp security show command displays the configuration of security
features related to the Network Time Protocol (NTP) subsystem.
● Examples
The following example displays the NTP security configuration of the cluster:
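The example output is missing from this extract; it might look as follows (field layout illustrative):

```shell
cluster1::*> cluster time-service ntp security show

Is Querying of NTP Server Enabled?: false
```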
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster time-service ntp server create command associates the cluster with an external
network time server for time correction and adjustment by using the Network Time Protocol (NTP).
The command resolves the time server host name to an IP address and performs several validation checks. If
an error is detected during validation, it is reported.
The validation checks performed by this command include the following:
• The NTP server replies to an NTP query with the specified protocol version.
• The NTP reply indicates that the external time server is synchronized to another time server.
• The distance and dispersion of the NTP reply from the "root" or source clock are within the required limits.
● Parameters
-server <text> - NTP Server Host Name, IPv4 or IPv6 Address
This parameter specifies the host name or IP address of the external NTP server that is to be associated
with the cluster for time correction and adjustment.
[-version {3|4|auto}] - NTP Version for Server (default: auto)
Use this parameter to optionally specify the NTP protocol version that should be used for communicating
with the external NTP server. If the external NTP server does not support the specified protocol version,
time exchange cannot take place.
The supported values for this parameter include the following:
• 3 - Use NTP protocol version 3, which is based on Internet Standard request for comments (RFC)
#1305.
• 4 - Use NTP protocol version 4, which is based on Internet Standard RFC #5905.
• auto - Have Data ONTAP select the NTP protocol version.
The default setting is auto.
[-is-preferred {true|false}] - Is Preferred NTP Server (default: false) (privilege: advanced)
Use this parameter to optionally specify whether the external NTP server is the primary time source for
correcting and adjusting the cluster time. The responses from this source are used unless its time is
outside the accepted selection range.
The default setting is false.
You use this parameter when a high quality radio (or GPS) based time server is being used with a set of
non-radio based backup time servers.
[-key-id <integer>] - NTP Symmetric Authentication Key ID
Use this parameter to optionally enable NTP symmetric key authentication for communication with the
specified time server. The ID must refer to a key created by the cluster time-service ntp key
create command and must have the same ID and value as a key configured on the specified time
server.
● Examples
The following example associates the cluster with an NTP server named ntp1.example.com.
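The example is not reproduced here; the invocation would look like this (cluster prompt illustrative):

```shell
cluster1::> cluster time-service ntp server create -server ntp1.example.com
```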
Related reference
cluster time-service ntp key create (page 135)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster time-service ntp server delete command removes the association between the
cluster and an external network time server that uses the Network Time Protocol (NTP).
● Parameters
-server <text> - NTP Server Host Name, IPv4 or IPv6 Address
This specifies the host name or IP address of an existing external NTP server that the cluster will
disassociate from.
● Examples
The following example disassociates an NTP server named ntp2.example.com from the cluster:
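The example is not reproduced here; the invocation would look like this (cluster prompt illustrative):

```shell
cluster1::> cluster time-service ntp server delete -server ntp2.example.com
```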
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster time-service ntp server modify command modifies the configuration of an existing
external network time server that uses the Network Time Protocol (NTP) for time correction and adjustment.
● Parameters
-server <text> - NTP Server Host Name, IPv4 or IPv6 Address
This parameter specifies the host name or IP address of an existing external NTP server that is to be
modified.
[-version {3|4|auto}] - NTP Version for Server (default: auto)
Use this parameter to optionally specify the NTP protocol version that should be used for communicating
with the external NTP server. If the external NTP server does not support the specified protocol version,
time exchange cannot take place.
The supported values for this parameter include the following:
• 3 - Use NTP protocol version 3, which is based on Internet Standard request for comments (RFC)
#1305.
• 4 - Use NTP protocol version 4, which is based on Internet Standard RFC #5905.
• auto - Have Data ONTAP select the NTP protocol version.
The default setting is auto.
[-is-preferred {true|false}] - Is Preferred NTP Server (default: false) (privilege: advanced)
Use this parameter to optionally specify whether the external NTP server is the primary time source for
correcting and adjusting the cluster time. The responses from this source are used unless its time is
outside the accepted selection range.
The default setting is false.
You use this parameter when a high quality radio (or GPS) based time server is being used with a set of
non-radio based backup time servers.
This parameter is available only at the advanced privilege level and higher.
[-is-authentication-enabled {true|false}] - Is NTP Symmetric Key Authentication Enabled
Use this parameter to optionally disable NTP symmetric key authentication for communication with the
specified time server. Using this parameter and selecting false disables the NTP symmetric key
authentication and clears the key-id parameter for the specified server. This parameter is not required
to enable NTP symmetric key authentication, but if specified as true the NTP symmetric authentication
key must also be specified using the key-id parameter.
[-key-id <integer>] - NTP Symmetric Authentication Key ID
Use this parameter to optionally enable NTP symmetric key authentication for communication with the
specified time server. The ID must refer to a key created by the cluster time-service ntp key
create command and must have the same ID and value as a key configured on the specified time
server.
● Examples
The following example modifies the NTP version of an NTP server named ntp1.example.com. The NTP
version is changed to 4.
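The example is not reproduced here; the invocation would look like this (cluster prompt illustrative):

```shell
cluster1::> cluster time-service ntp server modify -server ntp1.example.com -version 4
```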
Related reference
cluster time-service ntp key create (page 135)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster time-service ntp server reset command replaces the current configuration with one
of the selected configurations.
If no configuration, or more than one, is selected, the command fails.
● Parameters
[-use-public {true|false}] - Reset Server List to Public Identified Defaults (default: false)
When set to true, this specifies that the public NTP server list used by Data ONTAP should replace the
current configuration.
The default setting is false.
● Examples
The following example replaces the current time service configuration with the default public NTP server list
that is used by Data ONTAP.
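The example is not reproduced here; a session of this kind might look as follows (the prompt and the confirmation wording are illustrative):

```shell
cluster1::*> cluster time-service ntp server reset -use-public true

Warning: This command replaces the current NTP server configuration with the
         default public NTP server list.
Do you want to continue? {y|n}: y
```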
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The cluster time-service ntp server show command displays the association between the cluster
and external network time servers that use the Network Time Protocol (NTP).
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command only displays the fields that
you specify. For example: -fields server, version.
| [-instance ]}
If this parameter is specified, the command displays all the available field information.
[-server <text>] - NTP Server Host Name, IPv4 or IPv6 Address
If this parameter is specified, the command displays the external NTP servers that match the specified
server name or IP address.
[-version {3|4|auto}] - NTP Version for Server (default: auto)
If this parameter is specified, the command displays the external NTP servers that use the specified NTP
version.
[-is-preferred {true|false}] - Is Preferred NTP Server (default: false) (privilege: advanced)
If this parameter is specified, the command displays the external NTP server or servers that match the
specified preferred server status.
[-is-public {true|false}] - Is Public NTP Server Default (privilege: advanced)
If this parameter is specified, the command displays the information for the external NTP servers that
are either on the NTP server list defined by Data ONTAP ("true") or not on the list ("false").
[-is-authentication-enabled {true|false}] - Is NTP Symmetric Key Authentication Enabled
If this parameter is specified, the command displays the external NTP server or servers that require NTP
symmetric key authentication for communication.
[-key-id <integer>] - NTP Symmetric Authentication Key ID
If this parameter is specified, the command displays the external NTP server or servers that match the
specified symmetric authentication key ID.
● Examples
The following example displays information about all external NTP time servers that are associated with the
cluster:
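The example output is missing from this extract; it might look as follows (server names, columns, and values are illustrative):

```shell
cluster1::> cluster time-service ntp server show
                                     Is
Server                      Version  Authentication Enabled
--------------------------- -------- ----------------------
ntp1.example.com            auto     false
ntp2.example.com            auto     false
2 entries were displayed.
```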
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The cluster time-service ntp status show command displays the status of the associations
between the cluster and external network time servers that use the Network Time Protocol (NTP).
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information related to associations on the specified
node.
[-server <text>] - NTP Server Host Name, IPv4 or IPv6 Address
If this parameter is specified, the command displays information about the associations related to the
specified NTP server. The server should be specified as it is configured in the cluster time-service
ntp server show command.
[-server-address <IP Address>] - Server IP Address
If this parameter is specified, the command displays information about the associations related to the
NTP server with the specified IP address.
[-is-peer-reachable {true|false}] - Is Peer Reachable and Responding to Polls?
If this parameter is specified as true, the command displays information about associations with the NTP
servers that have been successfully polled.
[-is-peer-selected {true|false}] - Is Peer Selected as Clock Source?
If this parameter is specified as true, the command displays information about associations with the NTP
servers that have been selected as the current clock source.
[-selection-state <State of NTP Peer Selection>] - State of Server Selection
If this parameter is specified, the command displays information about associations with the specified
selection state.
[-selection-state-text <text>] - Description of Server Selection State
If this parameter is specified, the command displays information about associations with the specified
selection state description.
[-poll-interval <integer>] - Poll Interval (secs)
If this parameter is specified, the command displays information about associations that have the
specified polling interval.
[-time-last-poll <integer>] - Time from Last Poll (secs)
If this parameter is specified, the command displays information about associations that were last
polled the specified number of seconds ago.
[-offset <double>] - Offset from Server Time (ms)
If this parameter is specified, the command displays information about associations with the specified
offset from the NTP server.
[-delay <double>] - Delay Time to Server (ms)
If this parameter is specified, the command displays information about associations with the specified
round-trip delay to the NTP server.
[-jitter <double>] - Maximum Offset Error (ms)
If this parameter is specified, the command displays information about associations with the specified
offset error from the NTP server.
[-reachability <Hex String>] - Reachability of Server
If this parameter is specified, the command displays information about associations with the specified
reachability to the NTP server. Reachability is specified as a hexbyte that has a bit to represent the
success of each of the last eight polls of the specified server. A set bit represents a successful poll. The
least significant bit represents the most recent poll, the next most significant bit the poll before that,
and so on.
[-server-stratum <integer>] - Stratum of Server Clock
If this parameter is specified, the command displays information about associations with NTP servers
with the specified clock stratum.
[-server-reference <text>] - Reference Clock at Server
If this parameter is specified, the command displays information about associations with NTP servers
using the specified clock as reference.
[-reported-errors <NTP Peer and Packet Errors>, ...] - Reported Packet and Peer Errors
If this parameter is specified, the command displays information about associations with the specified
errors.
● Examples
The following example displays the status of the NTP associations of the cluster:
The following example displays the status of the association with the specified external NTP server:
Node: node-1
NTP Server Host Name, IPv4 or IPv6 Address: ntp1.example.com
Server IP Address: 10.56.32.33
Is Peer Reachable and Responding to Polls?: true
Is Peer Selected as Clock Source?: true
State of Server Selection: sys_peer
Description of Server Selection State: Currently Selected Server
Poll Interval (secs): 64
Time from Last Poll (secs): 1
Offset from Server Time (ms): 26.736
Delay Time to Server (ms): 61.772
Maximum Offset Error (ms): 3.064
Reachability of Server: 01
Stratum of Server Clock: 2
Reference Clock at Server: 10.56.68.21
Reported Packet and Peer Errors: -
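In the output above, "Reachability of Server: 01" is the hexbyte described under the -reachability parameter: only the most recent poll succeeded. As an illustrative aside (this is ordinary POSIX shell run on any workstation, not an ONTAP command), the register can be decoded bit by bit:

```shell
#!/bin/sh
# Decode an NTP reachability register: one bit per poll of the last eight,
# least significant bit is the most recent poll, a set bit means success.
reach=0x01        # value taken from the example output above
i=7
while [ "$i" -ge 0 ]; do
  if [ $(( (reach >> i) & 1 )) -eq 1 ]; then
    echo "poll -$i: success"
  else
    echo "poll -$i: missed"
  fi
  i=$((i - 1))
done
# For 0x01 this reports polls -7 through -1 as missed and poll -0 as success.
```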
Related reference
cluster time-service ntp server show (page 141)
The event commands enable you to work with system events and set up notifications.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event catalog show command displays information about events in the catalog. By default, this
command displays the following information:
• Message name of the event
• Severity of the event
• SNMP trap type of the event
To display detailed information about a specific event, run the command with the -message-name
parameter, and specify the name of the event. The detailed view adds the following information:
• Full description of the event
• Action to be taken to address the event
• Event's deprecation status
You can specify additional parameters to limit output to the information that matches those parameters. For
example, to display information only about events with an event name that begins with raid, enter the
command with the -message-name raid* parameter. The parameter value can either be a specific text
string or a wildcard pattern.
Alternatively, an event filter can also be specified to limit the output events.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-message-name <Message Name>] - Message Name
Selects the events that match this parameter value.
[-filter-name <text>] - Filter Name
Selects the events that match this parameter value. The parameter value indicates an existing filter
name that, when applied, permits the inclusion of the listed events.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Selects the events that match this parameter value.
[-description <text>] - Description
Selects the events that match this parameter value.
Note
Deprecated events may be removed in a future release of Data ONTAP.
● Examples
The following example displays the event catalog:
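The example output is missing from this extract; it might look as follows (the message names, severities, and trap types shown are illustrative, not actual catalog entries):

```shell
cluster1::> event catalog show -message-name raid.*
Message                          Severity       SNMP Trap Type
-------------------------------- -------------- --------------
raid.aggr.offline                ALERT          Standard
raid.disk.missing                INFORMATIONAL  Standard
...
```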
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The event config force-sync command forces a node's EMS configuration to be synchronized with the
cluster wide EMS configuration. The configuration is automatically synchronized among all nodes in the
cluster, but in rare cases a node may not be updated. This command simplifies the recovery from this issue.
The following example shows where this command is useful: An email destination is configured for all
CRITICAL level event occurrences. When the event is generated, all nodes generate an email except one. This
command forces that node to refresh a stale configuration.
● Parameters
[-node {<nodename>|local}] - Node
The node parameter specifies which controller will be synchronized.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Use the event config modify command to configure event notification and logging for the cluster.
● Parameters
[-mail-from <mail address>] - Mail From
Use this parameter to configure the email address from which email notifications will be sent. You can
configure the cluster to send email notifications when specific events occur. Use the event route
add-destinations and event destination create commands to configure email destinations for
events.
[-mail-server <text>] - Mail Server (SMTP)
Use this parameter to configure the name or IP address of the SMTP server used by the cluster when
sending email notification of events.
[-suppression {on|off}] - Event Throttling/Suppression (privilege: advanced)
Use this parameter to configure whether event suppression algorithms are enabled ("on") or disabled
("off"). The event processing system implements several algorithms to throttle events. The
documentation for event show-suppression command describes the suppression algorithms in
detail.
Note
The suppression parameter can disable both autosuppression and duplicate suppression, but
timer suppression cannot be disabled.
● Examples
The following command sets the "Mail From" address for event notifications to "admin@example.com" and
the "Mail Server" to "mail.example.com":
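The command itself is not reproduced here; it would look like this (cluster prompt illustrative):

```shell
cluster1::> event config modify -mail-from admin@example.com -mail-server mail.example.com
```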
Related reference
event route add-destinations (page 187)
event destination create (page 151)
event log show (page 171)
event config set-proxy-password (page 149)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Use the event config set-proxy-password command to set the password for authenticated access to
an HTTP or HTTPS proxy being used for EMS notifications. This password is used with the user name you
specify using the event config modify -proxy-user command to send EMS messages to REST API
destinations via the proxy you specify using the event config modify -proxy-url command. If you
enter the command without parameters, it prompts you for a password and for a confirmation of that
password. Enter the same password at both prompts. The password is not displayed.
● Examples
The following example shows successful execution of this command:
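The example is not reproduced here; a session of this kind might look as follows (prompt wording illustrative; the typed password is not echoed):

```shell
cluster1::> event config set-proxy-password

Enter the password:
Confirm the password:
```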
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event config show command displays information about the configuration of event notification and
event logging for the cluster.
"Mail From" is the email address that the event notification system uses as the "From" address for email
notifications.
"Mail Server" is the name or IP address of the SMTP server that the event notification system uses to send
email notification of events.
"Proxy URL" is the HTTP or HTTPS proxy server URL used by rest-api type EMS notification destinations if your
organization uses a proxy.
"Proxy User Name" is the user name for the HTTP or HTTPS proxy server if authentication is required.
"Suppression" indicates whether event suppression algorithms are enabled ("on") or disabled ("off"). The
event processing system implements several algorithms to throttle events.
Note
The suppression parameter can disable both autosuppression and duplicate suppression, but not
timer suppression.
"Console" indicates whether events are displayed on the console port ("on") or not displayed ("off").
● Examples
The following example displays the configuration of event notification for the cluster:
The following example displays the configuration of event notification with HTTP or HTTPS proxy:
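Neither example survives in this extract; the proxy variant might look as follows (the proxy URL, user name, and field layout are illustrative):

```shell
cluster1::> event config show

Mail From:        admin@example.com
Mail Server:      mail.example.com
Proxy URL:        http://proxy.example.com:3128
Proxy User Name:  proxyuser
Suppression:      on
Console:          on
```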
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification destination" command set.
The event destination create command creates a new event destination. An event destination is a list
of addresses that receive event notifications. These addresses can be e-mail addresses, SNMP trap hosts,
and syslog servers. Event destinations are used by event routes. Event routes describe which events
generate notifications, and event destinations describe where to send those notifications.
When you create a destination, you can add e-mail addresses, SNMP trap hosts, and syslog hosts to the
definition of the destination. Once the destination is fully defined, use the event route add-
destinations command to associate the destination with event routes so that notifications of those
events are sent to the recipients in the destination.
To see the current list of all destinations and their recipients, use the event destination show
command.
There are several default destinations provided for your use.
• allevents - A useful destination for all system events, though no events are routed to this destination by
default.
• asup - Events routed to this destination trigger AutoSupport(tm). Only use this destination to send
notifications to technical support. See system node autosupport for more information.
• criticals - A useful destination for critical events, though no events are routed to this destination by default.
• pager - A useful destination for all events that are urgent enough to page a system administrator, though
no events are routed to this destination by default.
• traphost - The default destination for all SNMP traps. You can also use the system snmp traphost add
command to add SNMP recipients to the traphost default destination.
To add recipients to the default destinations, use the event destination modify command.
You should not create a destination that sends events to more than one type of recipient. Use separate
destinations for e-mail, SNMP, and syslog activity. Also, use the traphost default destination for all SNMP
activity. You must not create any other destination that sends traps to SNMP trap hosts. The traphost default
destination is not required to be added to any event route.
● Parameters
-name <text> - Name
This mandatory parameter specifies the name of the event destination to create.
[-mail <mail address>, ...] - Mail Destination
Use this parameter to specify one or more e-mail addresses to which event notifications will be sent. For
events to properly generate e-mail notifications, the event system must also be configured with an
address and mail server from which to send mail. See event config modify for more information.
[-snmp <Remote IP>, ...] - SNMP Destination
To send traps to SNMP trap hosts, use this parameter with the host names or IP addresses of those trap
hosts.
[-syslog <Remote IP>, ...] - Syslog Destination
Use this parameter with the host names or IP addresses of any remote syslog daemons to which syslog
entries will be sent.
[-syslog-facility <Syslog Facility>] - Syslog Facility
This parameter optionally specifies a syslog facility with which the syslog is sent. Possible values for this
parameter are default, local0, local1, local2, local3, local4, local5, local6, and local7. If you specify the
default syslog facility, syslogs are tagged LOG_KERN or LOG_USER.
[-snmp-community <text>] - SNMP Trap Community
To specify an SNMP trap community, use this parameter with that string.
[-hide-parameters {true|false}] - Hide Parameter Values?
Use this parameter with the value "true" to hide event parameters by removing them from event
notifications. This is useful to prevent sensitive information from being sent over non-secure channels.
● Examples
The following example creates an event destination named support.email that e-mails events to the
addresses supportmgr@example.com, techsupport@example.com, and oncall@example.com.
This example creates an event destination named support.bucket01 that sends the notifications to a syslog
host.
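The two examples are not reproduced here; they would look like this (the syslog host name loghost.example.com is illustrative, as the text does not name one):

```shell
cluster1::> event destination create -name support.email
            -mail supportmgr@example.com,techsupport@example.com,oncall@example.com

cluster1::> event destination create -name support.bucket01 -syslog loghost.example.com
```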
Related reference
event config modify (page 148)
event route add-destinations (page 187)
event destination show (page 155)
system node autosupport commands (page 1415)
system snmp traphost add (page 1575)
event destination modify (page 153)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification destination" command set.
The event destination delete command removes a specified destination from the list of valid
destinations. An event destination is a list of addresses that receive event notifications. These addresses can
be e-mail addresses, SNMP trap hosts, and syslog servers. Event destinations are used by event routes.
Event routes describe which events generate notifications, and event destinations describe where to send
those notifications.
Once you delete a destination, you will not be able to add that destination to any event route.
You will not be able to delete a destination if it is in use by any event routes. To remove a destination from
all event routes, so that you can delete it, use the event route remove-destinations -messagename
* -destination name command.
There are several default destinations that cannot be deleted:
• allevents - A useful destination for all system events, though no events are routed to this destination by
default.
• asup - Events routed to this destination trigger AutoSupport(tm). Only use this destination to send
notifications to technical support. See system node autosupport for more information.
• criticals - A useful destination for critical events though no events are routed to this destination by default.
• pager - A useful destination for all events that are urgent enough to page a system administrator, though
no events are routed to this destination by default.
• traphost - The default destination for all SNMP traps. You can also use the system snmp traphost
delete command to delete SNMP recipients from the traphost default destination.
To see the current list of all destinations, use the event destination show command. To add a new
destination to the list, use the event destination create command.
● Parameters
-name <text> - Name
This mandatory parameter specifies the event destination to delete.
● Examples
The following example deletes an event destination named manager.pager:
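Using the mandatory -name parameter described above, the command is:
cluster1::> event destination delete -name manager.pager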
Related reference
event route remove-destinations (page 189)
system node autosupport commands (page 1415)
system snmp traphost delete (page 1576)
event destination show (page 155)
event destination create (page 151)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification destination" command set.
The event destination modify command changes the definition of an existing event destination. An
event destination is a list of addresses that receive event notifications. These addresses can be e-mail
addresses, SNMP traphosts, and syslog servers. Event destinations are used by event routes. Event routes
describe which events generate notifications, and event destinations describe where to send those
notifications.
Modifying a parameter writes over the existing value of the parameter. To extend a parameter, make sure to
include the current value of that parameter. For instance, to add an e-mail address to a destination, include
all of the current e-mail addresses assigned to that destination along with the new address. To see the
current definition of a destination, use the event destination show -name name command.
You must not create a destination that sends events to more than one type of recipient. Use separate
destinations for e-mail, SNMP, and syslog activity. Also, use the traphost default destination for all SNMP
activity. You should not create any other destination that sends to SNMP traphosts. The traphost default
destination is not required to be added to any event route.
● Parameters
-name <text> - Name
This mandatory parameter specifies the name of the event destination to modify.
[-mail <mail address>, ...] - Mail Destination
Use this parameter to specify one or more e-mail addresses to which event notifications will be sent. For
events to properly generate e-mail notifications, the event system must also be configured with an
address and mail server from which to send mail. See event config modify for more information.
[-snmp <Remote IP>, ...] - SNMP Destination
To send traps to SNMP trap hosts, use this parameter with the host names or IP addresses of those trap
hosts.
[-syslog <Remote IP>, ...] - Syslog Destination
Use this parameter with the host names or IP addresses of any remote syslog daemons to which syslog
entries will be sent.
[-syslog-facility <Syslog Facility>] - Syslog Facility
This parameter optionally specifies a syslog facility with which the syslog is sent. Possible values for this
parameter are default, local0, local1, local2, local3, local4, local5, local6, and local7. If you specify the
default syslog facility, syslogs are tagged LOG_KERN or LOG_USER.
[-snmp-community <text>] - SNMP Trap Community
To specify an SNMP trap community, use this parameter with that string.
[-hide-parameters {true|false}] - Hide Parameter Values?
Enter this parameter with the value "true" to hide event parameters by removing them from event
notifications. This is useful to prevent sensitive information from being sent over non-secure channels.
Enter it with the value "false" to turn off parameter hiding.
● Examples
The following example modifies an event destination named snmp.hosts to send events to SNMP trap hosts
named traphost1 and traphost2:
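Using the -snmp parameter described above, the command is:
cluster1::> event destination modify -name snmp.hosts -snmp traphost1,traphost2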
This example adds the e-mail address of a remote support facility to an existing list of e-mail recipients.
Name: support
Mail Destination: support.hq@company.com
SNMP Destination: -
Syslog Destination: -
Syslog Facility: -
SNMP Trap Community: -
Hide Parameter Values?: -
Name: support
Mail Destination: support.hq@company.com, support.remote@company.com
SNMP Destination: -
Syslog Destination: -
Syslog Facility: -
SNMP Trap Community: -
Hide Parameter Values?: -
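Because modifying -mail writes over the existing value, the new address must be listed together with the current one. A sketch of the modify command that produces the second state shown above:
cluster1::> event destination modify -name support -mail support.hq@company.com,support.remote@company.com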
Related reference
event config modify (page 148)
event destination show (page 155)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification destination" command set.
The event destination show command displays information about configured event destinations. An
event destination is a list of addresses that receive event notifications. These addresses can be e-mail
addresses, SNMP trap hosts, and syslog servers. Event destinations are used by event routes. Event routes
describe which events generate notifications, and event destinations describe where to send those
notifications.
Default destinations:
• allevents - A useful destination for all system events, though no events are routed to this destination by
default.
• asup - Events routed to this destination trigger AutoSupport(tm). Only use this destination to send
notifications to technical support. See system node autosupport for more information.
• criticals - A useful destination for critical events although no events are routed to this destination by
default.
• pager - A useful destination for all events that are urgent enough to page a system administrator, though
no events are routed to this destination by default.
• traphost - The default destination for all SNMP traps. You can also use the system snmp traphost
show command to view SNMP recipients for the traphost default destination.
To add recipients to the default destination, use the event destination modify command.
Note
While you can use both host names and IP addresses with parameters, only IP addresses are stored.
Unless all DNS and reverse-DNS lookups complete successfully, IP addresses rather than host names
might appear in command output.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-facility ]
Displays only the syslog destinations and syslog facilities.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Name
Selects the destinations that match this parameter value.
[-mail <mail address>, ...] - Mail Destination
Selects the destinations that match this parameter value.
[-snmp <Remote IP>, ...] - SNMP Destination
Selects the destinations that match this parameter value (SNMP trap hosts).
[-syslog <Remote IP>, ...] - Syslog Destination
Selects the destinations that match this parameter value (syslog event notification daemons).
[-syslog-facility <Syslog Facility>] - Syslog Facility
Selects the destinations that match this parameter value. Valid values are: default, local0, local1,
local2, local3, local4, local5, local6, and local7.
[-snmp-community <text>] - SNMP Trap Community
Selects the destinations that match this parameter value.
[-hide-parameters {true|false}] - Hide Parameter Values?
Selects the destinations that match this parameter value (true selects destinations that do not receive
full event parameters, false selects destinations that receive full event parameters). Event parameters
may be hidden to prevent sensitive information from being sent over non-secure channels.
● Examples
The following example displays information about all event destinations:
Hide
Name Mail Dest. SNMP Dest. Syslog Dest. Params
---------------- ----------------- ------------------ ------------------ ------
allevents - - logger.example.com -
asup - - - -
criticals oncall - - -
@example.com
pager pager@example.com - - -
support.email supportmgr - - -
@example.com,
techsupport
@example.com,
oncall
@example.com - - -
traphost - th0.example.com, - -
th1.example.com
Related reference
system node autosupport commands (page 1415)
system snmp traphost show (page 1577)
event destination modify (page 153)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter copy command copies an existing filter to a new filter. The new filter will be created
with rules from the source filter. For more information, see the event filter create command.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to copy.
-new-filter-name <text> - New Event Filter Name
Use this mandatory parameter to specify the name of the new event filter to create; the rules are copied into it.
● Examples
The following example copies an existing event filter named emer-wafl-events to a new filter named filter1:
Filter Name Rule Rule Message Name SNMP Trap Type Severity
Position Type
----------- -------- --------- ---------------------- --------------- --------
no-info-debug-events
2 exclude * * *
12 entries were displayed.
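A sketch of the copy command, using the two mandatory parameters described above:
cluster1::> event filter copy -filter-name emer-wafl-events -new-filter-name filter1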
Related reference
event filter create (page 158)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter create command creates a new event filter. An event filter is used to select the events
of interest and is made up of one or more rules, each of which contains the following three fields:
• name - event (message) name.
• severity - event severity.
• snmp-trap-type - event SNMP trap type.
These fields are evaluated for a match using a logical "AND" operation: name AND severity AND SNMP trap
type. Within a field, the specified values are evaluated with an implicit logical "OR" operation. So, if -
snmp-trap-type Standard, Built-in is specified, then the event must match Standard OR Built-in. The
wildcard matches all values for the field.
• Type - include or exclude. When an event matches an include rule, it will be included into the filter,
whereas it will be excluded from the filter if it matches an exclude rule.
Rules are checked in the order they are listed for a filter, until a match is found. There is an implicit rule at
the end that matches every event to be excluded. For more information, see the event filter rule
command.
There are three system-defined event filters provided for your use:
• default-trap-events - This filter matches all ALERT and EMERGENCY events. It also matches all Standard,
Built-in SNMP trap type events.
• important-events - This filter matches all ALERT and EMERGENCY events.
• no-info-debug-events - This filter matches all non-INFO and non-DEBUG messages (EMERGENCY, ALERT,
ERROR and NOTICE).
The system-defined event filters cannot be modified or deleted.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to create. An event filter name is 2
to 64 characters long. Valid characters are the following ASCII characters: A-Z, a-z, 0-9, "_", and "-". The
name must start and end with: A-Z, a-z, "_", or 0-9.
● Examples
The following example creates an event filter named filter1:
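With the mandatory -filter-name parameter, the command is simply:
cluster1::> event filter create -filter-name filter1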
Related reference
event filter rule commands (page 165)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter delete command deletes an existing event filter, along with all its rules.
The system-defined event filters cannot be deleted.
For more information, see the event filter create command.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to delete.
● Examples
The following example deletes an event filter named filter1:
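Using the mandatory -filter-name parameter described above, the command is:
cluster1::> event filter delete -filter-name filter1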
Related reference
event filter create (page 158)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter rename command is used to rename an existing event filter.
There are system-defined event filters provided for your use. The system-defined event filters cannot be
modified or deleted.
For more information, see the event filter create command.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to rename.
-new-filter-name <text> - New Event Filter Name
Use this mandatory parameter to specify the new name to which the event filter should be renamed.
● Examples
The following example renames an existing filter named filter1 as emer-wafl-events:
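Using the two mandatory parameters described above, the command is:
cluster1::> event filter rename -filter-name filter1 -new-filter-name emer-wafl-events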
Related reference
event filter create (page 158)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter show command displays all the event filters which are configured. An event filter is
used to select the events of interest and is made up of one or more rules, each of which contains the
following three fields:
• name - event (message) name.
• severity - event severity.
• snmp-trap-type - event SNMP trap type.
These fields are evaluated for a match using a logical "AND" operation: name AND severity AND SNMP trap
type. Within a field, the specified values are evaluated with an implicit logical "OR" operation. So, if -
snmp-trap-type Standard, Built-in is specified, then the event must match Standard OR Built-in. The
wildcard matches all values for the field.
• Type - include or exclude. When an event matches an include rule, it will be included into the filter,
whereas it will be excluded from the filter if it matches an exclude rule.
Rules are checked in the order they are listed for a filter, until a match is found. There is an implicit rule at
the end that matches every event to be excluded. For more information, see event filter rule
command.
There are three system-defined event filters provided for your use:
• default-trap-events - This filter matches all ALERT and EMERGENCY events. It also matches all Standard,
Built-in SNMP trap type events.
• important-events - This filter matches all ALERT and EMERGENCY events.
• no-info-debug-events - This filter matches all non-INFO and non-DEBUG messages (EMERGENCY, ALERT,
ERROR and NOTICE).
The system-defined event filters cannot be modified or deleted.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-filter-name <text>] - Filter Name
Selects the event filters that match this parameter value.
[-position <integer>] - Rule Position
Selects the event filters that match this parameter value.
[-type {include|exclude}] - Rule Type
Selects the event filters that match this parameter value. The rule types are as follows:
• include - Events matching this rule are included in the specified filter.
• exclude - Events matching this rule are excluded from the specified filter.
[-message-name <text>] - Message Name
Selects the event filters that match this parameter value.
[-severity <text>, ...] - Severity
Selects the events that match this parameter value. Severity levels:
• EMERGENCY - Disruption.
• ALERT - Single point of failure.
• ERROR - Degradation.
• NOTICE - Information.
• INFORMATIONAL - Information.
• DEBUG - Debug information.
• * - Includes all severities.
[-snmp-trap-type <text>, ...] - SNMP Trap Type
Selects the event filters that match this parameter value. The SNMP trap types are as follows:
• Standard - Traps defined in RFCs.
• Built-in - Enterprise traps specific to events.
• Severity-based - Traps specific to events that do not belong to the above two types.
• * - Includes all SNMP trap types.
● Examples
The following example displays the event filters:
The following example displays the event filters queried on the SNMP trap type value "Standard":
The following example displays the event filters with one or more rules that have no condition on the SNMP
trap type. Note that the wildcard character has to be specified in double-quotes. Without double-quotes,
output would be the same as not querying on the field.
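A sketch of the two queries described above (note the double-quoted wildcard in the second):
cluster1::> event filter show -snmp-trap-type Standard
cluster1::> event filter show -snmp-trap-type "*"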
Related reference
event filter rule commands (page 165)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter test command is used to test an event filter. When specified with a message name,
the command displays whether the message name is included or excluded from the filter. When specified
without a message name, the command displays the number of events from the catalog that match the
filter. For more information, see the event filter create command.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to test.
[-message-name <Message Name>] - Message Name
Use this optional parameter to specify the message name of the event to test against the filter.
● Examples
The following example tests an event filter named err-wafl-no-scan-but-clone:
Filter Name Rule Rule Message Name SNMP Trap Type Severity
Position Type
----------- -------- --------- ---------------------- --------------- --------
no-info-debug-events
2 exclude * * *
12 entries were displayed.
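A sketch of the test invocations; the message name wafl.scan.clone.split.cantLock in the second command is a hypothetical example:
cluster1::> event filter test -filter-name err-wafl-no-scan-but-clone
cluster1::> event filter test -filter-name err-wafl-no-scan-but-clone -message-name wafl.scan.clone.split.cantLock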
Related reference
event filter create (page 158)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter rule add command adds a new rule to an existing event filter. See event filter
create for more information on event filters and how to create a new event filter.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter to which the rule is added. Rules
cannot be added to system-defined event filters.
[-position <integer>] - Rule Position
Use this optional parameter to specify the position of the rule in the event filter. It should be in the
range (1..n-1), where 'n' is the position of the last rule, which is an implicit rule. Rules are checked in the
order they are listed for a filter, until a match is found.
-type {include|exclude} - Rule Type
Use this mandatory parameter to specify the type of the rule which determines whether to include or
exclude the events that match this rule.
[-message-name <text>] - Message Name
Use this parameter to specify the message name of the event to include or exclude from the filter.
[-severity <text>, ...] - Severity
Use this parameter to specify the list of severity values to match against the events. Enter multiple
severities separated by a comma. To enter all severities, the wild card (*) can be used. The wild card
cannot be specified with other severities. The default value is *.
[-snmp-trap-type <text>, ...] - SNMP Trap Type
Use this parameter to specify the list of SNMP trap type values to match against the events. Enter
multiple SNMP trap types separated by a comma. To enter all SNMP trap types, the wild card (*) can be
used. The wild card cannot be specified with other SNMP trap types. The default value is *.
● Examples
The following example adds a rule to an existing event filter "emer-and-wafl": All events with severity
EMERGENCY and message name starting with "wafl.*" are included in the filter. Not specifying the SNMP
trap type implies a default value of "*".
cluster1::> event filter rule add -filter-name emer-and-wafl -type include -message-name wafl.* -severity EMERGENCY
cluster1::> event filter show
Filter Name Rule Rule Message Name SNMP Trap Type Severity
Position Type
----------- -------- --------- ---------------------- --------------- --------
default-trap-events
1 include * * EMERGENCY, ALERT
2 include * Standard, Built-in
*
3 exclude * * *
emer-and-wafl
1 include wafl.* * EMERGENCY
2 exclude * * *
important-events
The following example adds a rule to the event filter "emer-and-wafl" at position 1: All events with severity
ALERT and message name starting with "wafl.scan.*" are included in the filter.
cluster1::> event filter rule add -filter-name emer-and-wafl -type include -message-name wafl.scan.* -position 1 -severity ALERT
The following example adds a rule to the event filter "emer-and-wafl" to include all "Standard" SNMP trap
type events:
cluster1::> event filter rule add -filter-name emer-and-wafl -type include -snmp-trap-type Standard
4 exclude * * *
important-events
1 include * * EMERGENCY, ALERT
2 include callhome.* * ERROR
3 exclude * * *
no-info-debug-events
1 include * * EMERGENCY, ALERT, ERROR, NOTICE
2 exclude * * *
12 entries were displayed.
Related reference
event filter create (page 158)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter rule delete command deletes a rule from an event filter. The position of all the rules
following the deleted rule is updated to maintain a contiguous sequence. Use the event filter show
command to view the filters and the rules associated with them.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter from which you want to delete the
rule. Rules cannot be deleted from system-defined filters.
-position <integer> - Rule Position
Use this mandatory parameter to specify the position of the rule to delete from the filter. It should be in
the range (1..n-1), where 'n' is the position of the last rule, which is an implicit rule.
● Examples
The following example deletes a rule at position 2 from an existing event filter "emer-and-wafl":
cluster1::> event filter rule delete -filter-name emer-and-wafl -position 2
Related reference
event filter show (page 162)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event filter rule reorder command moves a rule to a new position in an existing event filter.
Use the event filter show command to display all the event filters and the rules associated with them.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter from which you want to change the
position of the rule. Rules from system-defined event filters cannot be modified.
-position <integer> - Rule Position
Use this mandatory parameter to specify the position of the rule you want to change. It should be in the
range (1..n-1), where 'n' is the position of the last rule, which is an implicit rule.
-to-position <integer> - New Rule Position
Use this mandatory parameter to specify the new position to move the rule. It should be in the range
(1..n-1), where 'n' is the position of the last rule, which is an implicit rule.
● Examples
The following example changes the position of a rule from 1 to 2 from an existing event filter "emer-and-
wafl":
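Using the three mandatory parameters described above, the command is:
cluster1::> event filter rule reorder -filter-name emer-and-wafl -position 1 -to-position 2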
Related reference
event filter show (page 162)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event log show command displays the contents of the event log, which lists significant occurrences
within the cluster. Use the event catalog show command to display information about events that can
occur.
By default, the command displays EMERGENCY, ALERT and ERROR severity level events with the following
information, with the most recent events listed first:
• The time at which the event occurred
• The node on which the event occurred
• The severity of the event
• The event's message
To display detailed information about events, use one or more of the optional parameters that affect how
the command output is displayed and the amount of detail that is included. For example, to display all
detailed event information, use the -detail parameter.
To display NOTICE, INFORMATIONAL or DEBUG severity level events, use the -severity parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
Displays additional event information such as the sequence number of the event.
| [-detailtime ]
Displays detailed event information in reverse chronological order.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays a list of events for the node you specify. Use this parameter with the -seqnum parameter to
display detailed information.
[-seqnum <Sequence Number>] - Sequence#
Selects the events that match this parameter value. Use with the -node parameter to display detailed
information.
[-time <MM/DD/YYYY HH:MM:SS>] - Time
Selects the events that match this parameter value. Use the format: MM/DD/YYYY HH:MM:SS [+-
HH:MM]. You can specify a time range by using the ".." operator between two time statements.
Comparative time values are relative to "now". For example, to display only events that occurred within
the last minute:
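A sketch, assuming the comparative-time syntax accepted by this parameter:
cluster1::> event log show -time >1m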
Note
The month and date fields of this parameter are not zero-padded. These fields can be single
digits: for example, "7/1/2019 05:55:00".
[-ems-severity {NODE_FAULT|SVC_FAULT|NODE_ERROR|SVC_ERROR|WARNING|NOTICE|INFO|DEBUG|
VAR}] - EMS Severity (privilege: advanced)
Selects the events that match this parameter value. Severity levels:
• NODE_FAULT - Data corruption has been detected or the node is unable to provide client service
• SVC_FAULT - A temporary loss of service, typically a transient software fault, has been detected
• NODE_ERROR - A hardware error that is not immediately fatal has been detected
• SVC_ERROR - A software error that is not immediately fatal has been detected
• WARNING - A high-priority message that does not indicate a fault
• NOTICE - A normal-priority message that does not indicate a fault
• INFO - A low-priority message that does not indicate a fault
• DEBUG - A debugging message
• VAR - A message with variable severity, selected at runtime.
[-source <text>] - Source
Selects the events that match this parameter value (typically a software module).
[-message-name <Message Name>] - Message Name
Selects the events that match this parameter value (string). Message names are descriptive, so filtering
output by message name displays messages of a specific type.
[-event <text>] - Event
Selects the events that match this parameter value. The "event" field contains the full text of the event,
including any parameters. For example, a wafl.vol.offline event will contain the name of the volume
taken offline.
[-kernel-generation-num <integer>] - Kernel Generation Number (privilege: advanced)
Selects the events that match this parameter value. Only events that emanate from the kernel have
kernel generation numbers.
[-kernel-sequence-num <integer>] - Kernel Sequence Number (privilege: advanced)
Selects the events that match this parameter value. Only events that emanate from the kernel have
kernel sequence numbers.
[-action <text>] - Corrective Action
Selects the events that match this parameter value. The "action" field describes what steps, if any, you
must take to remedy the situation.
[-description <text>] - Description
Selects the events that match this parameter value. The "description" field describes why the event was
encountered and what it means.
[-filter-name <text>] - Filter Name
Selects the events that match this parameter value. Only events that were included by existing filters
that match this value are displayed.
[-severity <severity>, ...] - Severity
Selects the events that match this parameter value. Severity levels are as follows:
• EMERGENCY - Disruption.
• ALERT - Single point of failure.
• ERROR - Degradation.
• NOTICE - Information.
• INFORMATIONAL - Information.
• DEBUG - Debug information.
● Examples
To display all events, including ones with severity levels of NOTICE, INFORMATIONAL and DEBUG, specify
severity as follows:
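A sketch of the command; because DEBUG is the least severe level, "<=DEBUG" matches every severity:
cluster1::> event log show -severity <=DEBUG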
This example demonstrates how to use a range with the -time parameter to display all events that occurred
during an extended time period. It displays all events that occurred between 1:45pm and 1:50pm on
November 9, 2010.
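A sketch of the ranged query, assuming the MM/DD/YYYY HH:MM:SS format with the ".." range operator described above:
cluster1::> event log show -time "11/9/2010 13:45:00..11/9/2010 13:50:00"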
The -time parameter also accepts values that are relative to "now". The following example displays events
that occurred more than one hour ago:
Severity levels sort in the order opposite to what you might expect. The following example displays all
events that have a severity level of ERROR or more severe:
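Given the reversed sort order noted above, a sketch of the command is:
cluster1::> event log show -severity <=ERROR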
Related reference
event catalog show (page 145)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future major release of Data ONTAP.
Instead, use the "event notification history" command set.
The event mailhistory delete command deletes a record from the e-mail history.
To delete a record, you must know which node contains the record, and the record's sequence number. Use
the event mailhistory show command to view this information.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node that contains the e-mail history record to delete.
-seqnum <Sequence Number> - Sequence Number
Use this parameter to specify the sequence number of the e-mail history record to delete.
● Examples
The following example deletes all mail-history records on node1:
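A sketch, assuming the wildcard * is accepted for -seqnum to match all records:
cluster1::> event mailhistory delete -node node1 -seqnum *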
Related reference
event mailhistory show (page 175)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification history" command set.
The event mailhistory show command displays a list of the event notifications that have been e-
mailed. The command output depends on the parameters you specify with the command. By default, the
command displays basic information about all notification e-mails that were sent.
To display detailed information about a specific mail-history record, run the command with the -seqnum
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the mail-history records that match this parameter value.
[-seqnum <Sequence Number>] - Sequence Number
Selects the mail-history records that match this parameter value.
[-message-name <Message Name>] - Message Name
Selects the mail-history records that match this parameter value.
[-address <mail address>, ...] - Mail Address
Selects the mail-history records that match this parameter value.
[-time <MM/DD/YYYY HH:MM:SS>] - Transmission Time
Selects the mail-history records that match this parameter value.
[-message <text>] - Alert Message
Selects the mail-history records that match this parameter value (text pattern).
[-previous-time <MM/DD/YYYY HH:MM:SS>] - Previous Transmission Time
Selects the mail-history records that match this parameter value.
[-num-drops-since-previous <integer>] - Number of Drops Since Previous Transmission
Selects the mail-history records that match this parameter value (number of event drops since last
transmission).
● Examples
The following example displays detailed information about the mail-history record with the sequence
number 20520:
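An illustrative invocation (the "cluster1::>" prompt is a placeholder):
cluster1::> event mailhistory show -seqnum 20520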
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification create command is used to create a new notification of a set of events
defined by an event filter to one or more notification destinations.
● Parameters
-filter-name <text> - Filter Name
Use this mandatory parameter to specify the name of the event filter. Events that are included in the
event filter are forwarded to the destinations specified in the destinations parameter.
The filter name passed to this command must be an existing filter. For more information, see the event
filter create command.
-destinations <text>, ... - List of Event Notification Destinations
Use this mandatory parameter to specify the list of destinations to which the notification should be
forwarded. Enter multiple destinations separated by a comma.
The destination passed to this command must be an existing destination. For more information, see the
event destination create command.
● Examples
The following example creates an event notification for the filter named "filter1", with destinations
"email_dest", "snmp-traphost", and "syslog_dest":
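A sketch of the command (the "cluster1::>" prompt is a placeholder):
cluster1::> event notification create -filter-name filter1 -destinations email_dest,snmp-traphost,syslog_dest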
Related reference
event filter create (page 158)
event destination create (page 151)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification delete command deletes an existing event notification.
● Parameters
-ID <integer> - Event Notification ID
Use this parameter to specify the ID of the notification to be deleted.
● Examples
The following example shows the deletion of event notification with ID 1:
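An illustrative invocation (the "cluster1::>" prompt is a placeholder):
cluster1::> event notification delete -ID 1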
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification modify command is used to modify an existing notification.
● Parameters
-ID <integer> - Event Notification ID
Use this mandatory parameter to specify the ID of the notification to be modified.
[-filter-name <text>] - Event Filter Name
Use this parameter to specify the filter name to be modified.
[-destinations <text>, ...] - List of Event Notification Destinations
Use this parameter to specify the destinations to be modified. Enter multiple destinations separated by
a comma.
Provide the complete set of destinations to be modified; individual destinations cannot be added or
removed.
● Examples
The following example shows the modification of event notification with ID 1:
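An illustrative invocation; the prompt and the replacement destination list shown here are hypothetical:
cluster1::> event notification modify -ID 1 -destinations email_dest,snmp-traphost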
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification show command is used to display the list of existing event notifications.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ID <integer>] - Event Notification ID
Use this parameter to display the detailed information about the notification ID you specify.
[-filter-name <text>] - Event Filter Name
Use this parameter to display event notifications that use the filter-name you specify.
[-destinations <text>, ...] - List of Event Notification Destinations
Use this parameter to display event notifications that use the destinations you specify.
● Examples
The following example displays the event notification:
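An illustrative invocation (the "cluster1::>" prompt is a placeholder):
cluster1::> event notification show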
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification destination create command creates a new event notification destination
of either email or syslog type.
The following system-defined notification destination is configured for your use:
• snmp-traphost - This destination reflects the configuration in "system snmp traphost".
● Parameters
-name <text> - Destination Name
Use this mandatory parameter to specify the name of the notification destination that is to be created.
An event notification destination name must be 2 to 64 characters long. Valid characters are the
following ASCII characters: A-Z, a-z, 0-9, "_", and "-". The name must start and end with: A-Z, a-z, or 0-9.
{ -email <mail address> - Email Destination
Use this parameter to specify the email address to which event notifications will be sent. For events to
properly generate email notifications, the event system must also be configured with an address and
mail server from which the mail will be sent. See event config modify command for more
information.
| -syslog <text> - Syslog Destination
Use this parameter to specify the syslog server host name or IP address to which syslog entries will be sent.
| -rest-api-url <text> - REST API Server URL
Use this parameter to specify the REST API server URL to which event notifications will be sent. Enter the full
URL, which must start either with an http:// or https:// prefix. To specify a URL that contains a question
mark, press ESC followed by the "?". If an https:// URL is specified, then Data ONTAP verifies the identity
of the destination host by validating its certificate. If the Online Certificate Status Protocol (OCSP) is
enabled for EMS, then Data ONTAP uses that protocol to determine the certificate's revocation status.
Use the security config ocsp show -application ems command to determine if the OCSP-
based certificate revocation status check is enabled for EMS.
[-certificate-authority <text>] - Client Certificate Issuing CA
Use this parameter to specify the name of the certificate authority (CA) that signed the client certificate
that will be sent in case mutual authentication with the REST API server is required. There can be
multiple client certificates installed for the admin vserver in the cluster, and this parameter, along with
the certificate-serial parameter, uniquely identifies which one. Use the security certificate show
command to see the list of certificates installed in the cluster.
[-certificate-serial <text>]} - Client Certificate Serial Number
Use this parameter to specify the serial number of the client certificate that will be sent in case mutual
authentication with the REST API server is required.
● Examples
The following example shows the creation of a new event notification destination of type email called
"StorageAdminEmail":
The following example shows the creation of a new event notification destination of type rest-api called
"RestApi":
Related reference
event config modify (page 148)
security certificate show (page 566)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification destination delete command deletes an event notification destination.
The following system-defined notification destination is configured for your use:
• snmp-traphost - This destination reflects the configuration in "system snmp traphost". To remove
snmp-traphost addresses, use the system snmp traphost command.
● Parameters
-name <text> - Destination Name
Use this mandatory parameter to specify the name of an event destination to be removed.
● Examples
The following examples show the deletion of event notification destinations:
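An illustrative invocation; the prompt and the destination name are hypothetical:
cluster1::> event notification destination delete -name StorageAdminEmail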
Related reference
system snmp traphost commands (page 1575)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification destination modify command modifies an event notification destination.
The following system-defined notification destination is configured for your use:
• snmp-traphost - This destination reflects the configuration in "system snmp traphost". To modify traphost
addresses, use the system snmp traphost command.
● Parameters
-name <text> - Destination Name
Use this mandatory parameter to specify the name of an event notification destination to be modified.
The name of the destination must already exist.
{ [-email <mail address>] - Email Destination
Use this parameter to specify a new email address to replace the current address in the event
notification destination. The parameter is specified only when the event notification destination type is
already "email". It is not allowed to specify the parameter for a destination that already has another type
of destination address.
| [-syslog <text>] - Syslog Destination
Use this parameter to specify a new syslog server host name or IP address to replace the current address
of the event notification destination. The parameter is specified only when the event notification
destination type is already "syslog". It is not allowed to specify the parameter for a destination that
already has another type of destination address.
| [-rest-api-url <text>] - REST API Server URL
Use this parameter to specify a new REST API server URL to replace the current address of the event
notification destination. Enter the full URL, which must start either with an http:// or https:// prefix. To
specify a URL that contains a question mark, press ESC followed by the "?". If an https:// URL is specified,
then Data ONTAP verifies the identity of the destination host by validating its certificate. If the Online
Certificate Status Protocol (OCSP) is enabled for EMS, then Data ONTAP uses that protocol to determine
the certificate's revocation status. Use the security config ocsp show -application ems
command to determine if the OCSP-based certificate revocation status check is enabled for EMS. The
parameter is specified only when the event notification destination type is already "rest-api". It is not
allowed to specify the parameter for a destination that already has another type of destination address.
[-certificate-authority <text>] - Client Certificate Issuing CA
Use this parameter to specify a new value of the certificate authority (CA) to replace the current value in
the event notification destination. There can be multiple client certificates installed for the admin
vserver in the cluster, and this parameter, along with the certificate-serial parameter, uniquely identifies
which one. Use the security certificate show command to see the list of certificates installed in
the cluster.
[-certificate-serial <text>]} - Client Certificate Serial Number
Use this parameter to specify a new serial number of the client certificate to replace the current value in
the event notification destination.
● Examples
The following example shows the modification of event notification destinations:
syslog example.com
snmp-traphost snmp 10.30.40.10 (from "system snmp traphost")
3 entries were displayed.
The following example shows how to clear the client certificate configuration when mutual authentication
with the REST API server is no longer required:
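One plausible form, assuming a hypothetical destination name and that a dash ("-") clears each certificate field:
cluster1::> event notification destination modify -name RestApi -certificate-authority - -certificate-serial -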
Related reference
security certificate show (page 566)
system snmp traphost commands (page 1575)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification destination show command displays event notification destinations. Note:
in the case of a rest-api destination type, OCSP information is not included; it is available via the
security config ocsp show -application ems command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Destination Name
Use this optional parameter to display information of an event notification destination that has the
specified name.
[-type {snmp|email|syslog|rest-api}] - Type of Destination
Use this optional parameter to display information of event notification destinations that have the
specified destination type.
[-destination <text>, ...] - Destination
Use this optional parameter to display information of event notification destinations that have the
specified destination address. Enter multiple addresses separated by a comma.
[-server-ca-present {true|false}] - Server CA Certificates Present?
Use this optional parameter to display information of event notification destinations that have the
specified server-ca-present value. This field indicates whether certificates of the server-ca type
exist in the system. If not, event messages will not be sent to a rest-api type destination that has an
HTTPS URL.
[-certificate-authority <text>] - Client Certificate Issuing CA
Use this optional parameter to display information of event notification destinations that have the
specified certificate authority name.
[-certificate-serial <text>] - Client Certificate Serial Number
Use this optional parameter to display information of event notification destinations that have the
specified certificate serial number.
[-certificate-valid {true|false}] - Client Certificate Valid?
Use this optional parameter to display information of event notification destinations that have the
specified certificate-valid value. This field indicates whether the client certificate specified by the
certificate-authority and certificate-serial fields is valid. If not, and if the REST API server requires client
authentication, event messages will not be sent to the server.
● Examples
The following shows examples of "event notification destination show":
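An illustrative invocation (the "cluster1::>" prompt is a placeholder):
cluster1::> event notification destination show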
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event notification history show command displays a list of event messages that have been
sent to a notification destination. Information displayed by the command for each event is identical to that
of the event log show command. This command displays events sent to a notification destination while the
event log show command displays all events that have been logged.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-destination <text> - Destination
Specifies the destination to which event messages have been sent to be displayed.
[-node {<nodename>|local}] - Node
Displays a list of events for the node you specify. Use this parameter with the -seqnum parameter to
display detailed information.
[-seqnum <Sequence Number>] - Sequence#
Selects the events that match this parameter value. Use with the -node parameter to display detailed
information.
[-time <MM/DD/YYYY HH:MM:SS>] - Time
Selects the events that match this parameter value. Use the format: MM/DD/YYYY HH:MM:SS [+-
HH:MM]. You can specify a time range by using the ".." operator between two time statements.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Selects the events that match this parameter value. Severity levels are as follows:
• EMERGENCY - Disruption.
• ALERT - Single point of failure.
• ERROR - Degradation.
• NOTICE - Information.
• INFORMATIONAL - Information.
• DEBUG - Debug information.
[-message-name <Message Name>] - Message Name
Selects the events that match this parameter value (string). Message names are descriptive, so filtering
output by message name displays messages of a specific type.
[-event <text>] - Event
Selects the events that match this parameter value. This parameter is useful when entered with
wildcards. The "event" field contains the full text of the event, including any parameters. For example,
the wafl.vol.offline event displays the name of the volume that is taken offline.
● Examples
The following example displays all events that match the "important-events" filter and were forwarded
to the "snmp-traphost" destination:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification" command set.
The event route add-destinations command adds destinations to an event route. Any existing
destinations assigned to the route are not removed.
The destinations you add must already exist. See the documentation for the event destination create
command for information about creating destinations. To show all existing destinations and their attributes,
use the event destination show command. To remove destinations from an event route, use the event
route remove-destinations command.
You can use extended queries with such parameters as -severity and -snmp-support to specify multiple
events that meet certain criteria. See examples below that show how to use extended queries.
● Parameters
-message-name <Message Name> - Message Name
Specify the message name of the event you are modifying. You can use wildcards to specify a family of
events or type of event.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Use this optional parameter to specify a set of events that match this parameter value. You must use the
-message-name parameter with wildcards to specify the family of events or type of events.
-destinations <Event Destination>, ... - Destinations
Specify a comma-separated list of destinations to which notifications for the named event are sent.
These destinations will be added to any existing destinations assigned to this event route.
● Examples
The following example specifies that all RAID events go to the destinations named support.email,
mgr.email, and sreng.pager:
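An illustrative invocation, following the command form shown in this section (the prompt is a placeholder):
cluster1::> event route add-destinations -message-name raid* -destinations support.email,mgr.email,sreng.pager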
The following example specifies that all alert and emergency events go to the destination named test_dest:
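One plausible form using an extended severity query (the "<=ALERT" range syntax is assumed):
cluster1::> event route add-destinations {-severity <=ALERT} -destinations test_dest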
The following example specifies that all alert events that support an SNMP trap go to the destination named
traphost. In this example, because the -snmp-support parameter is specified as part of extended queries,
the -severity parameter must also be specified in the extended queries:
cluster1::> event route add-destinations {-snmp-support true -severity ALERT} -destinations traphost
Related reference
event destination create (page 151)
event destination show (page 155)
event route remove-destinations (page 189)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification" command set.
Use the event route modify command to modify an event's destination, frequency threshold, and time
threshold. The event's destination must already exist; see the documentation for the event destination
create command for information about creating destinations. The frequency threshold and time threshold
prevent multiple event notifications in a brief period of time.
You can use extended queries with such parameters as -severity and -snmp-support to specify multiple
events that meet certain criteria. See examples provided in the event route add-destinations
command manpage that show how to use extended queries.
The frequency threshold specifies the number of times an event occurs before a repeat notification of the
event is sent; for instance, a frequency threshold of 5 indicates that a notification is sent every fifth time an
event occurs. The time threshold specifies the number of seconds between notifications for an event; for
instance, a time threshold of 120 indicates that a notification is sent only if it has been two minutes or more
since the last notification for that event was sent.
If both the frequency threshold and time threshold are set, a notification is sent if either threshold is met.
For instance, if the frequency threshold is set to 5 and the time threshold is set to 120, and the event occurs
more than five times within two minutes, a notification is sent. If both thresholds are set to 0 (zero) or
empty ("-" or ""), there is no suppression of multiple event notifications.
● Parameters
-message-name <Message Name> - Message Name
Specify the message name of the event you are modifying. You can use wildcards to specify a family of
events or type of event.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Use this optional parameter to specify a set of events that match this parameter value. You must use the
-message-name parameter with wildcards to specify the family of events or type of events.
[-destinations <Event Destination>, ...] - Destinations
Use this optional parameter to specify a comma-separated list of destinations to which notifications for
the named event are sent. Using this parameter replaces the current list of destinations with the list of
destinations you specify. To add or remove individual destinations from the current list, use event
route add-destinations or event route remove-destinations.
[-frequencythreshold <integer>] - Number of Drops Between Transmissions
Specifies the number of event notifications that must occur within the timethreshold period before a
repeat notification is sent.
[-timethreshold <integer>] - Dropping Interval (Seconds) Between Transmissions
If multiple notifications of an event occur within this many seconds, only the first notification is sent.
Multiple notifications will be sent during this time period only if the frequencythreshold quantity is
exceeded.
● Examples
The following example modifies all RAID events to send messages to a destination named "support.email",
and specifies that repeat notifications are sent only if an event occurs more than five times within 60
seconds:
cluster1::> event route modify -message-name raid* -destinations support.email -frequencythreshold 5 -timethreshold 60
Related reference
event route add-destinations (page 187)
event route remove-destinations (page 189)
event destination create (page 151)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification" command set.
The event route remove-destinations command can be used to remove existing destinations from an
event route. This command removes only the specified destinations from the route, leaving any other
destinations assigned to that route.
The named destinations are not deleted, just removed from the specified event route. To delete a
destination entirely, use the event destination delete command. To show all existing destinations
and their attributes, use the event destination show command.
You can use extended queries with such parameters as -severity and -snmp-support to specify multiple
events that meet certain criteria. See examples provided in the event route add-destinations
command manpage that show how to use extended queries.
● Parameters
-message-name <Message Name> - Message Name
Specify the message name of the event you are modifying. You can use wildcards to specify a family of
events or type of event.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Use this optional parameter to specify a set of events that match this parameter value. You must use the
-message-name parameter with wildcards to specify the family of events or type of events.
-destinations <Event Destination>, ... - Destinations
Specify a comma-separated list of destinations to remove from the event's list of destinations.
● Examples
The following example specifies that the destination named "mgr.email" should no longer receive
notifications of RAID events.
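An illustrative invocation (the "cluster1::>" prompt is a placeholder):
cluster1::> event route remove-destinations -message-name raid* -destinations mgr.email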
Related reference
event destination delete (page 152)
event destination show (page 155)
event route add-destinations (page 187)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event catalog" command set.
This command displays information about event routes. Event routes describe which events generate
notifications. A route specifies what to watch for, whom to notify, and what to do should a particular event
occur. By default, the command displays the following information:
• Message name of the event
• Severity of the event
• Destinations for event notifications
• Frequency threshold for event notifications
• Time threshold for event notifications
To display detailed information about a specific event route, run the command with the -message-name
parameter, and specify the name of the message. The detailed view adds the following information:
• Full description of the event
• Action to be taken to address the event
You can specify additional parameters to limit output to the information that matches those parameters. For
example, to display information only about events with a message name that begins with "raid", run the
command with the -message-name raid* parameter. You can enter either a specific text string or a
wildcard pattern.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-message-name <Message Name>] - Message Name
Selects the event routes that match this parameter value.
[-severity {EMERGENCY|ALERT|ERROR|NOTICE|INFORMATIONAL|DEBUG}] - Severity
Selects the event routes that match this parameter value. Valid values:
• EMERGENCY - Disruption
• ALERT - Single point of failure
• ERROR - Degradation
• NOTICE - Information
• INFORMATIONAL - Information
• DEBUG - Debug information
[-action <text>] - Corrective Action
Selects the events that match this parameter value. This parameter is most useful when entered with
wildcards. The "action" field describes what steps, if any, you must take to remedy the situation.
[-description <text>] - Description
Selects the events that match this parameter value. This parameter is most useful when entered with
wildcards. The "description" field describes why the event was encountered and what it means.
[-snmp-support {true|false}] - Supports SNMP trap
Selects the event routes that match this parameter value.
[-destinations <Event Destination>, ...] - Destinations
Selects the event routes that match this parameter value. A destination is a list of email addresses,
SNMP clients, and syslogs.
[-frequencythreshold <integer>] - Number of Drops Between Transmissions
Selects the event routes that match this parameter value (number of events since previous notification).
[-timethreshold <integer>] - Dropping Interval (Seconds) Between Transmissions
Selects the event routes that match this parameter value.
● Examples
The following example displays information about all event routes:
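An illustrative invocation (the "cluster1::>" prompt is a placeholder):
cluster1::> event route show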
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification history" command set.
The event snmphistory delete command deletes an SNMP trap-history record. To delete a record, you
must know which node generated the event and the sequence number of that event in the trap
history.
Use the event snmphistory show command to display a list of trap-history records and their sequence
numbers.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node that contains the snmp history record to delete.
-seqnum <Sequence Number> - Sequence Number
Use this parameter to specify the sequence number of the SNMP trap-history record to delete.
● Examples
The following example deletes all SNMP trap-history records on node1:
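An illustrative invocation (the "cluster1::>" prompt is a placeholder, and the wildcard sequence number is assumed to be accepted):
cluster1::> event snmphistory delete -node node1 -seqnum *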
Related reference
event snmphistory show (page 192)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command has been deprecated. It may be removed from a future release of Data ONTAP. Instead,
use the "event notification history" command set.
The event snmphistory show command displays a list of event notifications that have been sent to
SNMP traps. The command output depends on the parameters specified with the command. By default, the
command displays general information about all trap-history records.
To display detailed information about a specific trap-history record, run the command with the -seqnum
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the trap-history records that match this parameter value (text pattern).
[-seqnum <Sequence Number>] - Sequence Number
Selects the trap-history records that match this parameter value (sequence number).
[-message-name <Message Name>] - Message Name
Selects the trap-history records that match this parameter value.
[-address <text>, ...] - SNMP Client Address
Selects the trap-history records that match this parameter value (IP address).
[-time <MM/DD/YYYY HH:MM:SS>] - Transmission Time
Selects the trap-history records that match this parameter value.
[-message <text>] - Alert Message
Selects the trap-history records that match this parameter value (text pattern).
[-previous-time <MM/DD/YYYY HH:MM:SS>] - Previous Transmission Time
Selects the trap-history records that match this parameter value.
[-num-drops-since-previous <integer>] - Number of Drops Since Previous Transmission
Selects the trap-history records that match this parameter value (number of event drops since last
transmission).
● Examples
The following example displays information about all SNMP trap-history records:
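An illustrative invocation (the "cluster1::>" prompt is a placeholder):
cluster1::> event snmphistory show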
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The event status show command summarizes information about occurrences of events. For detailed
information about specific occurrences of events, use the event log show command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the event records that match this parameter value. Events are tracked on a node-by-node basis,
rather than being rolled up cluster-wide.
[-message-name <Message Name>] - Message Name
Selects the event records that match this parameter value. The message name is a short descriptive
string. Filtering output by message name displays messages of a specific type.
[-indications <integer>] - Number of Indications
Selects the event records that match this parameter value. This parameter is most useful when used with
a range, such as using the range ">20" to display only events that have been posted more than 20
times.
[-drops <integer>] - Number of Drops
Selects the event records that match this parameter value.
[-last-time-occurred <MM/DD/YYYY HH:MM:SS>] - Last Indication Time
Selects the event records that match this parameter value.
[-last-time-dropped <MM/DD/YYYY HH:MM:SS>] - Last Suppressed Indication Time
Selects the event records that match this parameter value.
[-last-time-processed <MM/DD/YYYY HH:MM:SS>] - Last Processed Indication Time
Selects the event records that match this parameter value.
[-stat-starting-time <MM/DD/YYYY HH:MM:SS>] - Stat Starting Time
Selects the event records that match this parameter value.
[-last-hour-histogram <integer>, ...] - 60-minute Histogram (privilege: advanced)
Use this parameter with the -fields parameter to display the "last hour" histogram for each event
type. The last hour histogram records the number of times each event occurred in the last hour. The
histogram is divided into sixty buckets, and each bucket collects one minute's events. The buckets
display with the most recent event first.
[-last-day-histogram <integer>, ...] - 24-hour Histogram (privilege: advanced)
Use this parameter with the -fields parameter to display the "last day" histogram for each event type.
The last day histogram records the number of times each event occurred in the last day. The histogram is
divided into 24 buckets, and each bucket collects one hour's events. The buckets display with the most
recent event first.
[-last-week-histogram <integer>, ...] - 7-day Histogram (privilege: advanced)
Use this parameter with the -fields parameter to display the "last week" histogram for each event
type. The last week histogram records the number of times each event occurred in the last week. The
histogram is divided into 7 buckets, and each bucket collects one day's events. The buckets display with
the most recent event first.
[-severity {NODE_FAULT|SVC_FAULT|NODE_ERROR|SVC_ERROR|WARNING|NOTICE|INFO|DEBUG|VAR}] -
Severity
Selects events that have the event severity you specify. Severity levels sort with the most severe levels
first. Severity levels:
• NODE_FAULT - The node has detected data corruption, or is unable to provide client service.
• SVC_FAULT - The node has detected a temporary loss of service. Typically, this is caused by a transient
software fault.
• NODE_ERROR - The node has detected a hardware error that is not immediately fatal.
• SVC_ERROR - The node has detected a software error that is not immediately fatal.
• WARNING - A high-priority message that does not indicate a fault.
• NOTICE - A normal-priority message that does not indicate a fault.
• INFO - A low-priority message that does not indicate a fault.
• DEBUG - A debugging message.
• VAR - A message with variable severity, selected at run time.
● Examples
The following example displays recent event-occurrence status for node1:
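A representative invocation for this example, based on the syntax described above (the cluster1 prompt is illustrative and the output is omitted):

```
cluster1::> event status show -node node1
```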
The following example displays a summary of events which are warnings or more severe:
cluster1::> event status show -node node1 -severity <=warning -fields indications,drops,severity
node message-name indications drops severity
------- ------------------------ ----------- ----- --------
node1 api.output.invalidSchema 5463 840 WARNING
node1 callhome.dsk.config 1 0 WARNING
node1 callhome.sys.config 1 0 SVC_ERROR
node1 cecc_log.dropped 145 0 WARNING
node1 cecc_log.entry 5 0 WARNING
node1 cecc_log.entry_no_syslog 4540 218 WARNING
node1 cecc_log.summary 5 0 WARNING
node1 cf.fm.noPartnerVariable 5469 839 WARNING
node1 cf.fm.notkoverBadMbox 1 0 WARNING
node1 cf.fm.notkoverClusterDisable 1 0 WARNING
node1 cf.fsm.backupMailboxError 1 0 WARNING
node1 cf.takeover.disabled 23 0 WARNING
node1 cmds.sysconf.logErr 1 0 NODE_ERROR
node1 config.noPartnerDisks 1 0 NODE_ERROR
node1 fci.initialization.failed 2 0 NODE_ERROR
node1 fcp.service.adapter 1 0 WARNING
node1 fmmb.BlobNotFound 1 0 WARNING
node1 ha.takeoverImpNotDef 1 0 WARNING
node1 httpd.config.mime.missing 2 0 WARNING
The example below demonstrates using the -probability parameter and -fields parameter together to
display a list of events that might be suppressed.
Related reference
event config modify (page 148)
event log show (page 171)
Job Commands
The job commands enable you to manage jobs and schedules. A job is defined as any asynchronous
operation.
job delete
Delete a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job delete command deletes a job. Use the job show command to view a list of running jobs that
can be deleted.
● Parameters
-id <integer> - Job ID
The numeric ID of the job you want to delete. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example deletes the job that has ID 99:
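Based on the parameters documented above, the invocation would look like this (the cluster1 prompt is illustrative):

```
cluster1::> job delete -id 99
```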
Related reference
job show (page 198)
job pause
Pause a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job pause command pauses a job. Use the job resume command to resume a paused job. Use the
job show command to view a list of running jobs that can be paused.
● Parameters
-id <integer> - Job ID
The numeric ID of the job you want to pause. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example pauses the job that has ID 183:
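A representative invocation (cluster1 prompt illustrative):

```
cluster1::> job pause -id 183
```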
Related reference
job resume (page 198)
job show (page 198)
job resume
Resume a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job resume command resumes a job that was previously paused by using the job pause command.
Use the job show command to view a list of paused jobs that can be resumed.
● Parameters
-id <integer> - Job ID
The numeric ID of the paused job to be resumed. A job ID is a positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example resumes the paused job that has ID 183:
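A representative invocation (cluster1 prompt illustrative):

```
cluster1::> job resume -id 183
```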
Related reference
job pause (page 197)
job show (page 198)
job show
Display a list of jobs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job show command displays information about jobs. By default, the command displays information
about all current jobs.
To display detailed information about a specific job, run the command with the -id parameter.
You can specify additional parameters to select information that matches the values you specify for those
parameters. For example, to display information only about jobs running on a specific node, run the
command with the -node parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-inprogress ]
Displays the job ID, the job name, the owning Vserver, and the progress of the job.
| [-jobstate ]
Displays information about each job's state, including the queue state, whether the job was restarted,
and when the job will completely time out.
| [-sched ]
Displays the job ID, the job name, the owning Vserver, and the schedule on which the job runs.
| [-times ]
Displays the job ID, the job name, the owning Vserver, the time when the job was last queued, the time
when the job was last started, and the time when the job most recently ended.
| [-type ]
Displays the job ID, the job name, the job type, and the job category.
| [-jobuuid ] (privilege: advanced)
Displays the job ID, the job name, the owning Vserver, and the job UUID.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - Job ID
Selects the jobs that match the ID or range of IDs that you specify.
[-vserver <vserver name>] - Owning Vserver
Selects jobs that are owned by the specified Vserver.
[-name <text>] - Name
Selects the jobs that match this parameter value.
[-description <text>] - Description
Selects the jobs that match this parameter value.
[-priority {Low|Medium|High|Exclusive}] - Priority
Selects the jobs that match this parameter value.
[-node <nodename>] - Node
Selects the jobs that match this parameter value.
[-affinity {Cluster|Node}] - Affinity
Selects the jobs that match this parameter value.
[-schedule <job_schedule>] - Schedule
Selects the jobs that match this parameter value.
[-queuetime <MM/DD HH:MM:SS>] - Queue Time
Selects the jobs that match this parameter value.
[-starttime <MM/DD HH:MM:SS>] - Start Time
Selects the jobs that match this parameter value.
[-endtime <MM/DD HH:MM:SS>] - End Time
Selects the jobs that match this parameter value.
[-dropdeadtime <MM/DD HH:MM:SS>] - Drop-dead Time
Selects the jobs that match this parameter value.
[-restarted {true|false}] - Restarted?
Selects the jobs that match this parameter value.
[-state {Initial|Queued|Running|Waiting|Pausing|Paused|Quitting|Success|Failure|Reschedule|Error|
Quit|Dead|Unknown|Restart|Dormant}] - State
Selects the jobs that match this parameter value.
[-code <integer>] - Status Code
Selects the jobs that match this parameter value.
[-completion <text>] - Completion String
● Examples
The following example displays information about all jobs on the node named node1:
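The invocation for this example, based on the -node parameter described above (output omitted):

```
cluster1::> job show -node node1
```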
job show-bynode
Display a list of jobs by node
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job show-bynode command displays information about jobs on a per-node basis. The command
output depends on the parameters specified with the command. If no parameters are specified, the
command displays information about all jobs in the cluster that are currently owned by a node.
To display detailed information about a specific job, run the command with the -id parameter. The detailed
view includes all of the default information plus additional items.
You can specify additional parameters to display only information that matches the values you specify for
those parameters. For example, to display information only about jobs running on a specific node, run the
command with the -node parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the jobs that are associated with the node you
specify.
[-id <integer>] - Job ID
Use this parameter to display information only about the jobs that match the ID or range of IDs you
specify.
[-vserver <vserver name>] - Owning Vserver
Use this parameter with the name of a Vserver to display only jobs that are owned by that Vserver.
[-name <text>] - Name
Use this parameter to display information only about the jobs that match the job name you specify.
[-description <text>] - Description
Use this parameter to display information only about the jobs that match the description you specify.
[-affinity {Cluster|Node}] - Affinity
Use this parameter with an affinity value to display only jobs that match the affinity you specify.
[-username <text>] - User Name
Use this parameter with a username to display only jobs that are associated with that user.
● Examples
The following example displays information about all jobs on a per-node basis:
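A representative invocation (output varies by cluster and is omitted):

```
cluster1::> job show-bynode
```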
job show-cluster
Display a list of cluster jobs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job show-cluster command displays information about cluster-affiliated jobs. The command output
depends on the parameters specified with the command. If no parameters are specified, the command
displays information about all cluster-affiliated jobs.
To display detailed information about a specific job, run the command with the -id parameter. The detailed
view includes all of the default information plus additional items.
You can specify additional parameters to display only information that matches the values you specify for
those parameters. For example, to display information only about jobs running on a specific node, run the
command with the -node parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - Job ID
Use this parameter to display information only about the jobs that match the ID or range of IDs you
specify.
[-vserver <vserver name>] - Owning Vserver
Use this parameter with the name of a Vserver to display only jobs that are owned by that Vserver.
[-name <text>] - Name
Use this parameter to display information only about the jobs that match the job name you specify.
[-description <text>] - Description
Use this parameter to display information only about the jobs that match the description you specify.
[-priority {Low|Medium|High|Exclusive}] - Priority
Use this parameter to display information only about the jobs that match the priority you specify.
[-node <nodename>] - Node
Use this parameter to display information only about the jobs that are associated with the node you
specify.
[-affinity {Cluster|Node}] - Affinity
Use this parameter with an affinity value to display only jobs that match the affinity you specify.
[-schedule <job_schedule>] - Schedule
Use this parameter to display information only about the jobs that run on the schedule you specify.
[-queuetime <MM/DD HH:MM:SS>] - Queue Time
Use this parameter to display information only about the jobs that match the queue time you specify.
[-starttime <MM/DD HH:MM:SS>] - Start Time
Use this parameter to display information only about the jobs that match the start time you specify.
[-endtime <MM/DD HH:MM:SS>] - End Time
Use this parameter to display information only about the jobs that match the end time you specify.
[-dropdeadtime <MM/DD HH:MM:SS>] - Drop-dead Time
Use this parameter to display information only about the jobs that match the final timeout time you
specify.
[-restarted {true|false}] - Restarted?
Use this parameter to display information only about the jobs that match the restart value you specify.
[-state {Initial|Queued|Running|Waiting|Pausing|Paused|Quitting|Success|Failure|Reschedule|Error|
Quit|Dead|Unknown|Restart|Dormant}] - State
Use this parameter to display information only about the jobs that match the job state you specify.
[-code <integer>] - Status Code
Use this parameter to display information only about the jobs that match the status code you specify.
[-completion <text>] - Completion String
Use this parameter to display information only about the jobs that match the completion text you
specify.
[-jobtype <text>] - Job Type
Use this parameter to display information only about the jobs that match the job type you specify.
[-category <text>] - Job Category
Use this parameter to display information only about the jobs that match the job category you specify.
[-uuid <UUID>] - UUID
Use this parameter to display information only about the jobs that match the UUID you specify.
[-username <text>] - User Name
Use this parameter with a username to display only jobs that are associated with the user you specify.
● Examples
The following example displays information about all cluster-affiliated jobs:
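A representative invocation (output omitted):

```
cluster1::> job show-cluster
```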
job show-completed
Display a list of completed jobs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job show-completed command displays information about completed jobs. The command output
depends on the parameters you specify with the command. If you do not use any parameters, the command
displays information about all completed jobs.
To display detailed information about a specific job, run the command with the -id parameter. The detailed
view includes all of the default information plus additional items.
You can specify additional parameters to display only information that matches those parameters. For
instance, to display information only about jobs running on a specific node, run the command with the
-node parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-id <integer>] - Job ID
Use this parameter to display information only about the jobs that match the ID or range of IDs you
specify.
[-vserver <vserver name>] - Owning Vserver
Use this parameter with the name of a Vserver to display only jobs that are owned by that Vserver.
[-name <text>] - Name
Use this parameter to display information only about the jobs that match the name you specify.
[-description <text>] - Description
Use this parameter to display information only about the jobs that match the description you specify.
[-priority {Low|Medium|High|Exclusive}] - Priority
Use this parameter to display information only about the jobs that match the priority you specify.
[-node <nodename>] - Node
Use this parameter to display information only about the jobs that are associated with the node you
specify.
[-affinity {Cluster|Node}] - Affinity
Use this parameter with an affinity value to display only jobs that match the affinity you specify.
[-schedule <job_schedule>] - Schedule
If you use this parameter, the command displays information only about the jobs that have the schedule
you specify.
[-queuetime <MM/DD HH:MM:SS>] - Queue Time
If you use this parameter, the command displays information only about the jobs that have the queue
time you specify.
[-starttime <MM/DD HH:MM:SS>] - Start Time
Use this parameter to display information only about the jobs that have the start time you specify.
[-endtime <MM/DD HH:MM:SS>] - End Time
Use this parameter to display information only about the jobs that have the end time you specify.
[-dropdeadtime <MM/DD HH:MM:SS>] - Drop-dead Time
Use this parameter to display information only about the jobs that time out at the time you specify.
[-restarted {true|false}] - Restarted?
Use this parameter to display information only about the jobs that match the restart value you specify.
[-state {Initial|Queued|Running|Waiting|Pausing|Paused|Quitting|Success|Failure|Reschedule|Error|
Quit|Dead|Unknown|Restart|Dormant}] - State
Use this parameter to display information only about the jobs that match the job state you specify.
[-code <integer>] - Status Code
Use this parameter to display information only about the jobs that match the status code you specify.
[-completion <text>] - Completion String
Use this parameter to display information only about the jobs that match the completion text you
specify.
[-jobtype <text>] - Job Type
Use this parameter to display information only about the jobs that match the job type you specify.
[-category <text>] - Job Category
Use this parameter to display information only about the jobs that match the job category you specify.
[-uuid <UUID>] - UUID
Use this parameter to display information only about the jobs that match the UUID you specify.
[-username <text>] - User Name
Use this parameter with a username to display only jobs that are associated with that user.
● Examples
The following example displays information about all completed jobs:
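A representative invocation (output omitted):

```
cluster1::> job show-completed
```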
job stop
Stop a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job stop command stops a running job. A stopped job cannot be resumed. Use the job pause
command to pause a job so that you can later resume it. Use the job show command to view a list of
running jobs.
● Parameters
-id <integer> - Job ID
The numeric ID of the job to stop. A job ID is a positive integer.
● Examples
The following example stops the job that has ID 101:
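Based on the -id parameter documented above, the invocation would be:

```
cluster1::> job stop -id 101
```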
Related reference
job pause (page 197)
job show (page 198)
job unclaim
Unclaim a cluster job
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job unclaim command causes a cluster-affiliated job that is owned by an unavailable node to be
unclaimed by that node. Another node in the cluster can then take ownership of the job. Use the job
show-cluster command to obtain a list of cluster-affiliated jobs.
● Parameters
-id <integer> - Job ID
Use this parameter to specify the ID number of the job to unclaim.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example shows how to unclaim the cluster-affiliated job with the ID 27 that is owned by the
Vserver vs1:
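A sketch of the invocation; the asterisk in the prompt indicates the advanced privilege level this command requires (prompt name illustrative):

```
cluster1::*> job unclaim -id 27 -vserver vs1
```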
Related reference
job show-cluster (page 202)
job watch-progress
Watch the progress of a job
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job watch-progress command displays the progress of a job, and periodically updates that display.
You can specify the frequency of the updates.
● Parameters
-id <integer> - Job ID
The numeric ID of the job whose progress you want to monitor. A job ID is a positive integer.
● Examples
The following example shows how to monitor the progress of the job that has ID 222 on Vserver vs0. The
progress display updates every 3 seconds.
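A sketch of the invocation. The -interval flag for the refresh period is an assumption (only -id is documented in this section); verify the flag name in your release:

```
cluster1::> job watch-progress -id 222 -vserver vs0 -interval 3
```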
job history show
Display a history of jobs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job history show command displays a history of completed jobs with newer entries displayed first.
You can specify optional parameters to select information about job history items that match only those
parameters. For example, to display information about jobs that were completed on February 27 at noon,
run the command with -endtime "02/27 12:00:00".
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the completed jobs that match this parameter value.
[-record <Sequence Number>] - Record ID
Selects the completed jobs that match the record ID or range of record IDs you specify. Note that record
IDs are unique for each node, not for the cluster as a whole. As a result, there can be two records with
the same record ID within the cluster.
[-vserver <vserver name>] - Owning Vserver
Selects the completed jobs that are owned by the Vserver you specify.
[-id <integer>] - Job ID
Selects the completed jobs that match this parameter value.
[-endtime <MM/DD HH:MM:SS>] - End Time
Selects jobs that completed at the time you specify. This parameter is most useful when used with a
range of times.
[-starttime <MM/DD HH:MM:SS>] - Start Time
Selects completed jobs that were started at the time you specify. This parameter is most useful when
used with a range of times.
● Examples
The following example displays information about all completed jobs:
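A representative invocation (output omitted):

```
cluster1::> job history show
```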
The following example shows how to use a range with the -endtime parameter to select only the jobs
that ended between 8:15 and 8:16 on August 22nd.
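A sketch of the range query; the ".." range operator is assumed from the CLI's extended query conventions (verify against your release):

```
cluster1::> job history show -endtime "08/22 08:15:00".."08/22 08:16:00"
```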
job initstate show
Display init state for job managers
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The job initstate show command displays information about the initialization states of job-manager
processes.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-process <process_name>] - Process Name
Selects the nodes that match this parameter value.
[-initialized {true|false}] - Initialized?
Selects the nodes that match this parameter value (true means initialized; false means not
initialized).
[-cache-root <text>] - Cache Root
Selects the nodes that match this parameter value.
[-siteid <UUID>] - Site ID
Selects the nodes that match this parameter value.
[-hp-threads <integer>] - High Priority Threads
Selects the nodes that have the number of high-priority threads you specify.
[-mp-threads <integer>] - Medium Priority Threads
Selects the nodes that have the number of medium-priority threads you specify.
[-lp-threads <integer>] - Low Priority Threads
Selects the nodes that have the number of low-priority threads you specify.
[-tx-interval <integer>] - Transaction Interval
Selects the nodes that have the number of seconds you specify as their transaction interval.
[-initmsg <text>] - Initialization Message
Selects the nodes that match this parameter value.
[-thread-initmsg <text>] - Thread Initialization Message
Selects the nodes that match this parameter value. The thread initialization message contains
information about thread status. If there is no information to communicate, this message is empty.
[-recovery-enabled {true|false}] - Job Failover Enabled?
Selects the nodes that match this parameter value (true means enabled, false means not enabled).
[-ex-threads <integer>] - Exclusive Priority Threads
Selects the nodes that match this parameter value.
● Examples
The following example shows how to display general job-manager initialization-state information for a
cluster.
The following example shows how to display detailed job-manager initialization-state information for a
node named node0.
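Representative invocations for the two examples above; the -instance flag (documented in the parameter list) produces the detailed view, and the asterisk in the prompt indicates the advanced privilege level:

```
cluster1::*> job initstate show
cluster1::*> job initstate show -node node0 -instance
```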
job private delete
Delete a private job
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job private delete command deletes a private job. Private jobs are affiliated with a specific node
and do not use any cluster facilities, such as the replicated database.
If you use this command on a job that does not support the delete operation, the command returns an error
message.
Use the job private show command to view a list of private jobs that can be deleted.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node with which the private job is associated.
-id <integer> - Job ID
Use this parameter to specify the numeric ID of the private job to be deleted. A job ID is a positive
integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example shows how to delete the job that has ID 273 from the node named node2:
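Based on the parameters above, the invocation would be (advanced privilege prompt, name illustrative):

```
cluster1::*> job private delete -node node2 -id 273
```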
Related reference
job private show (page 212)
job private pause
Pause a private job
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job private pause command pauses a private job. Private jobs are affiliated with a specific node and
do not use any cluster facilities, such as the replicated database.
If you use this command to pause a job that does not support it, the command returns an error message.
Use the job private resume command to resume a paused private job.
Use the job private show command to view a list of private jobs.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node with which the private job is associated.
-id <integer> - Job ID
Use this parameter to specify the numeric ID of the private job to be paused. A job ID is a positive
integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example pauses the private job that has ID 99 on the node node1:
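A representative invocation (advanced privilege prompt, name illustrative):

```
cluster1::*> job private pause -node node1 -id 99
```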
Related reference
job private resume (page 212)
job private show (page 212)
job private resume
Resume a private job
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job private resume command resumes a private job that was paused by using the job private
pause command. Private jobs are affiliated with a specific node and do not use any cluster facilities, such as
the replicated database.
Use the job private show command to view a list of paused private jobs that can be resumed.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node with which the paused private job is associated.
-id <integer> - Job ID
Use this parameter to specify the numeric ID of the paused private job to be resumed. A job ID is a
positive integer.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example resumes the paused private job that has ID 99 on a node named node2:
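A representative invocation (advanced privilege prompt, name illustrative):

```
cluster1::*> job private resume -node node2 -id 99
```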
Related reference
job private pause (page 211)
job private show (page 212)
job private show
Display a list of private jobs
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job private show command displays information about private jobs. Private jobs are affiliated with a
specific node and do not use any cluster facilities, such as the replicated database.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-inprogress ]
Displays the job ID, name, owning Vserver, and progress of each private job.
| [-jobstate ]
Displays information about each private job's state, including the queue state, whether the job was
restarted, and when the job will completely time out.
| [-jobuuid ]
Displays the ID, name, owning Vserver, and UUID of each private job.
| [-sched ]
Displays the job ID, name, owning Vserver, and run schedule of each private job.
| [-times ]
Displays the queue time, start time, and end time of each private job.
| [-type ]
Displays the type and category of each private job.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the private jobs that match this parameter value.
[-id <integer>] - Job ID
Selects the private jobs that match the ID or range of IDs that you specify.
[-vserver <vserver name>] - Owning Vserver
Selects the private jobs that match this parameter value.
[-name <text>] - Name
Selects the private jobs that match this parameter value.
[-description <text>] - Description
Selects the private jobs that match this parameter value.
[-priority {Low|Medium|High|Exclusive}] - Priority
Selects the private jobs that match this parameter value.
[-schedule <job_schedule>] - Schedule
Selects the private jobs that match this parameter value.
[-queuetime <MM/DD HH:MM:SS>] - Queue Time
Selects the private jobs that match this parameter value.
[-starttime <MM/DD HH:MM:SS>] - Start Time
Selects the private jobs that match this parameter value.
[-endtime <MM/DD HH:MM:SS>] - End Time
Selects the private jobs that match this parameter value.
[-dropdeadtime <MM/DD HH:MM:SS>] - Drop-dead Time
Selects the private jobs that match this parameter value.
[-restarted {true|false}] - Restarted?
Selects the private jobs that match this parameter value.
[-state {Initial|Queued|Running|Waiting|Pausing|Paused|Quitting|Success|Failure|Reschedule|Error|
Quit|Dead|Unknown|Restart|Dormant}] - State
Selects the private jobs that match this parameter value.
● Examples
The following example displays information about all private jobs on the local node:
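Reconstructed from the parameters documented above, the invocation would look like the following (the cluster name and advanced-mode prompt are illustrative; output is omitted):

```
cluster1::*> job private show -node local
```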
job private show-completed
Display information about completed private jobs
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job private show-completed command displays information about completed private jobs. Private
jobs are affiliated with a specific node and do not use any cluster facilities, such as the replicated database.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Use this parameter to display information only about completed jobs that are associated with the user
you specify.
● Examples
The following example shows how to display information about all completed private jobs on the node
named node1:
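Assuming the -node selector common to the job private commands, the invocation would be similar to (prompt illustrative; output omitted):

```
cluster1::*> job private show-completed -node node1
```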
job private stop
Stop a private job
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job private stop command stops a running private job. A private job is a job that is associated with
a specific node and does not use cluster facilities. A stopped job cannot be restarted.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which the job is running.
-id <integer> - Job ID
This specifies the numeric ID of the job that is to be stopped.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
● Examples
The following example stops a private job with the ID 416 on a node named node0:
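Using the -node and -id parameters documented above, the invocation would be (prompt illustrative):

```
cluster1::*> job private stop -node node0 -id 416
```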
job private watch-progress
Display the progress of a private job
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The job private watch-progress command displays and periodically updates the progress of a private
job. A private job is a job that is associated with a specific node and does not use cluster facilities. You can
specify the frequency of the progress updates.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which the job is running.
-id <integer> - Job ID
This specifies the numeric ID of the job whose progress is to be monitored.
[-vserver <vserver name>] - Owning Vserver
Use this parameter to specify the name of the Vserver that owns the job.
[-interval <integer>] - Refresh Interval (seconds)
This optionally specifies, in seconds, the frequency of the updates.
● Examples
The following example monitors the progress of the private job that has ID 127 on a node named node1.
The progress is updated every 2 seconds.
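With the -node, -id, and -interval parameters documented above, the invocation would be (prompt illustrative):

```
cluster1::*> job private watch-progress -node node1 -id 127 -interval 2
```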
job schedule delete
Delete a schedule
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule delete command deletes a schedule. Use the job schedule show command to
display all current schedules.
You cannot delete any schedules that are in use by jobs. Use the job schedule show-jobs command to
display jobs by schedule.
You cannot delete any schedules that are referenced by:
• Volume Snapshot copy policy entries
• SnapMirror entries
• SIS policy entries
• configuration backup settings
You must remove all references to a schedule before you can delete it. If you attempt to delete a schedule
that is referenced, an error message will list which entries reference the schedule you want to delete. Use
the show command for each of the items listed by the error message to display which entries reference the
schedule. You may need to use the -instance parameter to display more detail.
● Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to delete a schedule. By default, the
schedule is deleted from the local cluster. In a MetroCluster configuration, the partner cluster can be
specified if the local cluster is in switchover state.
● Examples
The following example deletes a schedule named overnightbackup:
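Assuming a -name parameter analogous to the one documented for the cron and interval delete variants, the invocation would be (prompt illustrative):

```
cluster1::> job schedule delete -name overnightbackup
```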
Related reference
job schedule show (page 218)
job schedule show-jobs (page 219)
job schedule show
Display information about schedules
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule show command displays information about schedules.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <Cluster name>] - Cluster
Selects the schedules that match this parameter value.
[-name <text>] - Schedule Name
Selects the schedules that match this parameter value.
[-type {cron|interval|builtin}] - Schedule Type
Selects the schedules that match this parameter value.
[-description <text>] - Description
Selects the schedules that match this parameter value.
● Examples
The following example displays information about all schedules:
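With no selection parameters, the command lists every schedule; the invocation would be (prompt illustrative; output omitted):

```
cluster1::> job schedule show
```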
job schedule show-jobs
Display jobs by schedule
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The job schedule show-jobs command displays information about jobs that are associated with
schedules.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Schedule Name
Use this parameter to display information only about the jobs that are associated with the schedule you
specify.
[-affinity {Cluster|Node}] - Cluster / Node
Use this parameter to display information only about the jobs that match the affinity value you specify.
[-owner <text>] - Owner
Use this parameter to display information only about the jobs that are owned by the nodes you specify.
[-jobid <integer>] - ID
Use this parameter to display information only about the jobs that match the ID or range of IDs that you
specify.
[-jobname <text>] - Job Name
Use this parameter to display information only about the jobs that match the name you specify.
● Examples
The following example shows information about schedules that are associated with jobs:
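Run without selection parameters, the command shows all schedule/job associations; the invocation would be (prompt illustrative; output omitted):

```
cluster1::> job schedule show-jobs
```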
job schedule cron create
Create a cron schedule
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule cron create command creates a cron schedule. A cron schedule, like a UNIX cron job,
runs at a specified time. You can also specify months, days of the month, or days of the week on which the
schedule will run.
If you specify values for both days of the month and days of the week, they are considered independently.
For example, a cron schedule with the day specification Friday, 13 runs every Friday and on the 13th day of
each month, not just on every Friday the 13th.
● Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to create a cron schedule. By
default, the schedule is created on the local cluster. In a MetroCluster configuration, the partner cluster
can be specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter to specify the name of the cron schedule that you want to create.
[-month <cron_month>, ...] - Month
Use this parameter to specify months in which the schedule runs. Valid values are January, February,
March, April, May, June, July, August, September, October, November, December, and all. Specify "all" to
run the schedule every month.
[-dayofweek <cron_dayofweek>, ...] - Day of Week
Use this parameter to specify days of the week on which the schedule runs. Valid values are Sunday,
Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, and all. Specify "all" to run the schedule every day.
[-day <cron_dayofmonth>, ...] - Day
Use this parameter to specify days of the month on which the schedule runs. Valid values range from 1
to 31.
[-hour <cron_hour>, ...] - Hour
Use this parameter to specify the hours value of the time of day at which the schedule runs. Valid values
range from 0 (midnight) to 23 (11:00 p.m.). Specify "all" to run the schedule every hour.
-minute <cron_minute>, ... - Minute
Use this parameter to specify the minutes portion of the time of day at which the schedule runs. Valid
values range from 0 to 59.
● Examples
The following example creates a cron schedule named weekendcron that runs on weekend days (Saturday
and Sunday) at 3:00 a.m.
cluster1::> job schedule cron create -name weekendcron -dayofweek "Saturday, Sunday" -hour 3 -minute 0
job schedule cron delete
Delete a cron schedule
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule cron delete command deletes a cron schedule. Use the job schedule cron show
command to display all current cron schedules.
You cannot delete any cron schedules that are associated with jobs. Use the job schedule show-jobs
command to display jobs by schedule.
● Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to delete a cron schedule. By
default, the schedule is deleted from the local cluster. In a MetroCluster configuration, the partner
cluster can be specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing cron schedule to specify the cron schedule that you
want to delete.
● Examples
The following example deletes a cron schedule named midnightcron:
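Using the -name parameter documented above, the invocation would be (prompt illustrative):

```
cluster1::> job schedule cron delete -name midnightcron
```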
Related reference
job schedule cron show (page 222)
job schedule show-jobs (page 219)
job schedule cron modify
Modify a cron schedule
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule cron modify command modifies a cron schedule. A cron schedule, like a UNIX cron
job, runs at a specified time. You can also specify months, days of the month, or days of the week on which
the schedule runs. Use the job schedule cron show command to display all current cron schedules. See
the documentation for job schedule cron show for more information about how cron schedules work.
Modifying one parameter of a cron schedule does not affect the other parameters. For example, if a cron
schedule is set to run at 3:15 a.m. and you modify the "hour" parameter to 4, the schedule's new time is
4:15 a.m. To clear a parameter of the schedule, you must explicitly set that parameter to "0" or "-". Some
parameters can also be set to "all".
● Parameters
[-cluster <Cluster name>] - Cluster
Use this parameter to specify the cluster of an existing cron schedule you want to modify. The local
cluster is provided as the default value. In a MetroCluster configuration, the partner cluster can be
specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing cron schedule to specify the cron schedule you want to
modify.
[-month <cron_month>, ...] - Month
Use this parameter to specify a new "month" value for the cron schedule. Valid values are January,
February, March, April, May, June, July, August, September, October, November, December, or all. Specify
"all" to run the schedule every month.
[-dayofweek <cron_dayofweek>, ...] - Day of Week
Use this parameter to specify a new "day of week" value for the cron schedule. Valid values include
Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, or all. Specify "all" to run the schedule every day.
[-day <cron_dayofmonth>, ...] - Day
Use this parameter to specify a new "day of month" value for the cron schedule. Valid values range from
1 to 31.
[-hour <cron_hour>, ...] - Hour
Use this parameter to specify a new "hour of the day" value for the cron schedule. Valid values range
from 0 (midnight) to 23 (11:00 p.m.). Specify "all" to run the schedule every hour.
[-minute <cron_minute>, ...] - Minute
Use this parameter to specify a new "minute of the hour" value for the cron schedule. Valid values range
from 0 to 59.
● Examples
The following example modifies a cron schedule named weekendcron so that it runs at 3:15 a.m.:
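Using the -name, -hour, and -minute parameters documented above, the invocation would be (prompt illustrative):

```
cluster1::> job schedule cron modify -name weekendcron -hour 3 -minute 15
```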
Related reference
job schedule cron show (page 222)
job schedule cron show
Display information about cron schedules
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule cron show command displays information about cron schedules. A cron schedule runs
a job at a specified time on specified days.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <Cluster name>] - Cluster
Selects the cron schedules that match this parameter value.
[-name <text>] - Name
Selects the cron schedules that match this parameter value.
[-month <cron_month>, ...] - Month
Selects the cron schedules that match this parameter value. Valid values are January, February,
March, April, May, June, July, August, September, October, November, December, or all.
[-dayofweek <cron_dayofweek>, ...] - Day of Week
Selects the cron schedules that match this parameter value. Valid values include Sunday, Monday,
Tuesday, Wednesday, Thursday, Friday, Saturday, or all.
[-day <cron_dayofmonth>, ...] - Day
Selects the cron schedules that match this parameter value. Valid values range from 1 to 31.
[-hour <cron_hour>, ...] - Hour
Selects the cron schedules that match this parameter value.
[-minute <cron_minute>, ...] - Minute
Selects the cron schedules that match the minute or range of minutes that you specify.
[-description <text>] - Description
● Examples
The following example displays information about all current cron schedules:
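With no selection parameters, the invocation would be (prompt illustrative; output omitted):

```
cluster1::> job schedule cron show
```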
The following example displays information about the cron schedule named weekly:
Cluster: cluster1
Name: weekly
Month: -
Day of Week: Sunday
Day: -
Hour: 0
Minute: 15
Description: Sun@0:15
job schedule interval create
Create an interval schedule
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule interval create command creates an interval schedule. An interval schedule runs jobs at
specified intervals after the previous job finishes. For instance, if a job uses an interval schedule of 12 hours
and takes 30 minutes to complete, the job runs at the following times:
• Day one at 8:00 a.m. (the job's initial run)
• Day one at 8:30 p.m.
• Day two at 9:00 a.m.
• Day two at 9:30 p.m.
Each of the numerical parameters of the interval must be a whole number. These parameters can be used
individually, or combined to define complex time values. For example, use a value of 1 day, 12 hours to
create an interval of 1.5 days.
Large parameter values are converted into larger units. For example, if you create a schedule with an
interval of 36 hours, the job schedule interval show command will display it with an interval of 1 day
12 hours.
● Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to create an interval schedule. By
default, the schedule is created on the local cluster. In a MetroCluster configuration, the partner cluster
can be specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter to specify the name of the interval schedule you want to create.
[-days <integer>] - Days
Use this parameter to specify the "days" portion of the schedule's interval. A day is one calendar day.
[-hours <integer>] - Hours
Use this parameter to specify the "hours" portion of the schedule's interval.
[-minutes <integer>] - Minutes
Use this parameter to specify the "minutes" portion of the schedule's interval.
[-seconds <integer>] - Seconds
Use this parameter to specify the "seconds" portion of the schedule's interval.
● Examples
The following example creates an interval schedule named rollingdaily that runs six hours after the
completion of the previous occurrence of the job:
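Using the -name and -hours parameters documented above, the invocation would be (prompt illustrative):

```
cluster1::> job schedule interval create -name rollingdaily -hours 6
```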
Related reference
job schedule interval show (page 225)
job schedule interval delete
Delete an interval schedule
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule interval delete command deletes an interval schedule. Use the job schedule
interval show command to display all current interval schedules.
You cannot delete interval schedules that are currently being run. Use the job schedule show-jobs
command to display jobs by schedule.
● Parameters
[-cluster <Cluster name>] - Cluster
This parameter specifies the name of the cluster on which you want to delete an interval schedule. By
default, the schedule is deleted from the local cluster. In a MetroCluster configuration, the partner
cluster can be specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing interval schedule to specify the interval schedule you
want to delete.
● Examples
The following example deletes an interval schedule named rollingdaily:
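Using the -name parameter documented above, the invocation would be (prompt illustrative):

```
cluster1::> job schedule interval delete -name rollingdaily
```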
Related reference
job schedule interval show (page 225)
job schedule interval modify
Modify an interval schedule
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule interval modify command modifies an interval schedule. An interval schedule runs
jobs at a specified interval after the previous job finishes. Use the job schedule interval show
command to display all current interval schedules. See the documentation of job schedule interval
show for more information on how interval schedules work.
Modifying one parameter of a schedule's interval does not affect the other parameters. For example, if a
schedule's interval is 1 day 12 hours, and you modify the "hours" parameter to 16, the schedule's new
interval is 1 day 16 hours. To clear a parameter of the schedule's interval, you must explicitly set that
parameter to "0" or "-".
● Parameters
[-cluster <Cluster name>] - Cluster
Use this parameter to specify the cluster of an existing interval schedule you want to modify. The local
cluster is provided as the default value. In a MetroCluster configuration, the partner cluster can be
specified if the local cluster is in switchover state.
-name <text> - Name
Use this parameter with the name of an existing interval schedule to specify the interval schedule you
want to modify.
[-days <integer>] - Days
Use this parameter to specify a different "days" value for the schedule's interval.
[-hours <integer>] - Hours
Use this parameter to specify a different "hours" value for the schedule's interval.
[-minutes <integer>] - Minutes
Use this parameter to specify a different "minutes" value for the schedule's interval.
[-seconds <integer>] - Seconds
Use this parameter to specify a different "seconds" value for the schedule's interval.
● Examples
The following example sets the schedule named rollingdaily to run every eight hours:
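Using the -name and -hours parameters documented above, the invocation would be (prompt illustrative):

```
cluster1::> job schedule interval modify -name rollingdaily -hours 8
```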
Related reference
job schedule interval show (page 225)
job schedule interval show
Display information about interval schedules
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The job schedule interval show command displays information about interval schedules.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <Cluster name>] - Cluster
Selects the interval schedules that match this parameter value.
[-name <text>] - Name
Selects the interval schedules that match this parameter value.
[-days <integer>] - Days
Selects the interval schedules that match the day value or range of values you specify.
[-hours <integer>] - Hours
Selects the interval schedules that match the hour value or range of values you specify.
[-minutes <integer>] - Minutes
Selects the interval schedules that match the minute value or range of values you specify.
[-seconds <integer>] - Seconds
Selects the interval schedules that match the second value or range of values you specify.
[-description <text>] - Description
Selects the interval schedules that match the description you specify.
● Examples
The following example displays information about all interval schedules:
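With no selection parameters, the invocation would be (prompt illustrative; output omitted):

```
cluster1::> job schedule interval show
```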
lun create
Create a new LUN
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command creates a new LUN of a specific size. You cannot create a LUN at a path that already exists.
You must create LUNs at the root of a volume or qtree. You cannot create LUNs in the Vserver root volume.
You might find it useful to provide a meaningful path name for the LUN and containing volume. For
example, you might choose a name that describes how the LUN is used, such as the name of the
application, the type of data that it stores, or the user accessing the data. Examples are /vol/database/
lun0, /vol/finance/lun1, and /vol/bill/lun2.
It is recommended that you distribute LUNs across the cluster.
When you create a LUN, its actual size might be larger than the size you specified. The system
generates a message if the size of the LUN is different from what you specified.
By default, when you create a LUN, it is online and space-reserved. Use the lun offline command to
take a LUN offline. When you set space reservation to disabled, the LUN is non-space-reserved.
Note
For non-space reserved LUNs, write operations to that LUN might fail due to insufficient disk space. As
a result, the host application or operating system might crash.
Note
When you create a LUN from a file, that file cannot be deleted without deleting the LUN itself.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the new LUN. The LUN path cannot contain any files. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the new LUN.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the new LUN.
-lun <text>} - LUN Name
Specifies the new LUN name. A LUN name is a case-sensitive name and has the following requirements:
• Must contain one to 255 characters. Spaces are not allowed.
• Can contain the letters A-Z, a-z, numbers 0-9, "-", "_", "}", "{", and ".".
{ -size | -s <size> - LUN Size
Specifies the size of the LUN in bytes. You can specify a one-character multiplier suffix:
• c (1 byte)
• w (2 bytes)
• B (512 bytes)
• k (1024 bytes)
• M (k*k bytes)
• G (k*M bytes)
• T (M*M bytes)
[-use-exact-size [true]] - Use Exact Size (privilege: advanced)
Create the LUN using the exact value specified by the -size parameter instead of rounding the size to
best fit the LUN geometry. Size of the LUN must be a multiple of 512 bytes.
| -file-path | -f <text> - File Path
Creates a LUN using the file path as the source.
| [-foreign-disk <text>]} - Foreign Disk Serial number (privilege: advanced)
LUN is created with the same attributes (size, alignment, bytes per sector and so on) as the specified
foreign disk.
[-prefix-size | -P <size>] - Prefix Size (privilege: advanced)
Specifies the size of the prefix stream for the new LUN.
-ostype | -t <LUN Operating System Format> - OS Type
Specifies the OS type for the new LUN. On an All SAN Array, the following OS types are supported:
• aix - the LUN will be used with AIX.
• hyper_v - the LUN will be used with Microsoft Hyper-V.
• linux - the LUN will be used with Linux.
• vmware - the LUN will be used with VMware VMFS.
• windows - the LUN will be used with Microsoft Windows.
On all other clusters, the following OS types are supported:
• aix - the LUN will be used with AIX.
• hpux - the LUN will be used with HP-UX.
• hyper_v - the LUN will be used with Microsoft Hyper-V.
• linux - the LUN will be used with Linux.
• netware - the LUN will be used with NetWare.
• openvms - the LUN will be used with Open-VMS.
• solaris - the LUN will be used with Solaris slice partitioning.
• solaris_efi - the LUN will be used with Solaris EFI partitioning.
• vmware - the LUN will be used with VMware VMFS.
• windows - the LUN will be used with a Master Boot Record (MBR) partition table on Microsoft Windows
2003 or earlier.
• windows_2008 - the LUN will be used with Microsoft Windows 2008 or later.
• windows_gpt - the LUN will be used with a GUID Partition Type (GPT) partition table on Microsoft
Windows.
• xen - the LUN will be used with Xen
[-space-reserve {enabled|disabled}] - Space Reservation
Specifies whether the space reservation setting is enabled or disabled for the new LUN. If you set the
parameter to enabled, the LUN is space-reserved. If you set the parameter to disabled, the LUN is non-
space reserved. The default is enabled.
[-comment <text>] - Comment
A description for the LUN you want to create. If the comment string contains white space, you must
enclose the comment string in double quotes. The limit is 254 characters.
[-space-allocation {enabled|disabled}] - Space Allocation
Specifies the value for the space allocation attribute of the LUN. The space allocation attribute
determines if the LUN supports the SCSI Thin Provisioning features defined in the Logical Block
Provisioning section of the SCSI SBC-3 standard.
Specifying enabled for this parameter enables support for the SCSI Thin Provisioning features.
Specifying disabled for this parameter disables support for the SCSI Thin Provisioning features.
Hosts and file systems that do not support SCSI Thin Provisioning should not enable space allocation.
The default is disabled.
[-class {regular|protocol-endpoint|vvol}] - Class
Specifies the class of the new LUN. The class types are:
• regular - the LUN is for normal blocks protocol access. This is the default value.
• protocol-endpoint - the LUN is a vvol protocol endpoint.
• vvol - the LUN is a vvol data LUN.
{ [-qos-policy-group <text>] - QoS Policy Group
This optionally specifies which QoS policy group to apply to the LUN. This policy group defines
measurable service level objectives (SLOs) that apply to the storage objects with which the policy group
is associated. If you do not assign a policy group to a LUN, the system will not monitor and control the
traffic to it.
Note
If you specify this parameter for a LUN that you want to create from a file and that file belongs to
a QoS policy group, Data ONTAP adds the LUN to the specified policy group and removes the file
from its policy group. The file and the LUN that you created from it cannot both belong to QoS
policy groups.
● Examples
cluster1::> lun create -vserver vs1 -path /vol/vol1/lun1 -size 100M -ostype linux
lun delete
Delete the LUN
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command deletes a LUN from a specified Vserver and volume. If the LUN is mapped and online, the
force option is required to delete it.
If a LUN is mapped to an initiator group, you can unmap it by using the lun unmap command. If a LUN is
online, you can take it offline by using the lun offline command.
Note
If you create a LUN from a file, you cannot remove the file while the LUN is linked to it. If you want to
remove the file, you must first delete the LUN.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN you want to delete. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN you want to delete.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN you want to delete.
-lun <text>} - LUN Name
Specifies the LUN that you want to delete.
[-force | -f [true]] - Force Deletion of an Online and Mapped LUN
Force deletion of an online LUN that is mapped to an initiator group.
[-force-fenced [true]] - Force Deletion of a Fenced LUN
Force deletion of a LUN that is currently fenced.
● Examples
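A minimal deletion, using the path form documented above (Vserver and path names illustrative), would be:

```
cluster1::> lun delete -vserver vs1 -path /vol/vol1/lun1
```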
Related reference
lun mapping delete (page 267)
lun maxsize
Display the maximum possible size of a LUN on a given volume or qtree.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command returns the maximum size of LUNs for different OS types in a volume or qtree. The command
also includes possible maximum size for LUNs with Snapshots or without Snapshots. You can specify the
path of the volume or qtree to determine the maximum size of a LUN that you want to create within that
volume or qtree.
If you do not specify a path, the command returns the maximum LUN size for each OS type for all volumes
and qtrees in a cluster.
The available space in a volume can change over time which means that the size reported by lun maxsize
can change as well. In addition, the maximum LUN size allowed in a lun resize command may be less
than the size reported by lun maxsize.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Specifies the Vserver.
{ [-path <qtree path>] - Volume or Qtree Path
Specifies the path of the root volume or qtree. Examples of correct volume and qtree paths are /vol/vol1
and /vol/vol1/qtree1.
| [-volume <volume name>] - Volume Name
Specifies the volume that contains the LUN you want to get the maximum size for.
[-qtree <qtree name>]} - Qtree Name
Specifies the qtree that contains the LUN you want to get the maximum size for.
[-ostype | -t <text>] - OS Type
Specifies the OS type for the new LUN. On an All SAN Array, the following OS types are supported:
• aix - the LUN will be used with AIX.
• hyper_v - the LUN will be used with Microsoft Hyper-V.
• linux - the LUN will be used with Linux.
• vmware - the LUN will be used with VMware VMFS.
• windows - the LUN will be used with Microsoft Windows.
On all other clusters, the following OS types are supported:
• aix - the LUN will be used with AIX.
• hpux - the LUN will be used with HP-UX.
• hyper_v - the LUN will be used with Microsoft Hyper-V.
• linux - the LUN will be used with Linux.
• netware - the LUN will be used with NetWare.
• openvms - the LUN will be used with Open-VMS.
• solaris - the LUN will be used with Solaris slice partitioning.
• solaris_efi - the LUN will be used with Solaris EFI partitioning.
• vmware - the LUN will be used with VMware VMFS.
• windows - the LUN will be used with a Master Boot Record (MBR) partition table on Microsoft Windows
2003 or earlier.
• windows_2008 - the LUN will be used with Microsoft Windows 2008 or later.
• windows_gpt - the LUN will be used with a GUID Partition Type (GPT) partition table on Microsoft
Windows.
• xen - the LUN will be used with Xen
[-complete-ss-reserve <size>] - With Complete Snapshot Reserve
Shows the maximum size possible of a LUN if you have the complete Snapshot reserve enabled.
[-ss-reserve <size>] - With Snapshot Reserve
Shows the maximum size possible of a LUN if you have the Snapshot reserve enabled.
[-without-ss-reserve <size>] - Without Snapshot Reserve
Shows the maximum size possible of a LUN if you have no Snapshot reserve enabled.
● Examples
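A representative query, using the volume and OS-type parameters documented above (names illustrative; output omitted), would be:

```
cluster1::> lun maxsize -vserver vs1 -volume vol1 -ostype linux
```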
Related reference
lun resize (page 236)
lun modify
Modify a LUN
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command modifies LUN attributes. Because LUN modifications can result in data corruption or other
problems, we recommend that you call technical support if you are unsure of the possible consequences of
modifying a LUN.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path for the LUN you want to modify. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume for the LUN you want to modify.
-qtree <qtree name> - Qtree Name
Specifies the qtree for the LUN you want to modify.
-lun <text>} - LUN Name
Specifies the name for the LUN you want to modify. A LUN name is a case-sensitive name and has the
following requirements:
• Must contain one to 255 characters. Spaces are not allowed.
• Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_),
right bracket (}), left bracket ({) and period (.).
• Must start with a letter or number.
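The naming rules above can be captured in a short validation sketch. This is an illustrative helper only (the function name `is_valid_lun_name` is an assumption, not an ONTAP API):

```python
import re

# Validates a LUN name against the documented rules:
# 1-255 characters; letters, digits, '-', '_', '{', '}', '.';
# must start with a letter or digit; spaces are not allowed.
LUN_NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_.{}\-]{0,254}$")

def is_valid_lun_name(name: str) -> bool:
    return bool(LUN_NAME_RE.fullmatch(name))
```

For example, `lun1` and `data_{01}.bak` pass, while names containing spaces or starting with a hyphen do not.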
[-space-reserve {enabled|disabled}] - Space Reservation
Specifies whether the space reservation setting is enabled or disabled for a LUN. If you set the
parameter to enabled, the LUN is space-reserved. If you set the parameter to disabled, the LUN is
non-space-reserved. The default is enabled.
{ [-serial <text>] - Serial Number
Specifies the serial number for the LUN you want to modify.
The LUN serial number is a twelve-character alphanumeric string containing one or more of the
following:
• upper- and lower-case letters
• numbers
• the characters: &, <, >, /, -, #, $, %, *, +, =, ?, @, [, !, ], ^, ~
Some of the characters that are valid in a LUN serial number also have special meaning to the cluster
shell command line:
• The question mark (?) activates the command line active help. To type a question mark as part of
a LUN's serial number, you must first disable active help with the command set -active-help false.
Active help can later be re-enabled with the command set -active-help true.
• The number sign (#) indicates the beginning of a comment to the command line and will cause the
remainder of the line to be ignored. To avoid this, enclose the serial number in double quotes (").
Alternatively, the -serial-hex parameter can be used to set the LUN serial number specifying the
serial number encoded in hexadecimal form.
| [-serial-hex <Hex String>]} - Serial Number (Hex)
Specifies the serial number, encoded in hexadecimal form, for the LUN you want to modify. See the
description of the -serial parameter for additional details.
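As a sketch of the relationship between -serial and -serial-hex, the following assumes the hexadecimal form is simply the ASCII byte encoding of the serial string (an assumption for illustration; this manual does not specify the encoding):

```python
def serial_to_hex(serial: str) -> str:
    # Assumption: -serial-hex is the ASCII bytes of the serial number
    # rendered as hex digits (a 12-character serial -> 24 hex digits).
    return serial.encode("ascii").hex()

def hex_to_serial(hex_string: str) -> str:
    # Inverse operation: decode the hex digits back to the serial text.
    return bytes.fromhex(hex_string).decode("ascii")
```

Under this assumption, a serial containing shell-sensitive characters such as # or ? can be supplied unambiguously in its hex form.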
[-comment <text>] - Comment
Specifies the comment for the LUN you want to modify.
[-caching-policy <text>] - Caching Policy Name
Specifies the caching policy to use for the LUN. The available caching policies are:
• none - Does not cache any user data or metadata blocks.
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• meta - Read caches only metadata blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read, and randomly
written user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
The default caching policy is auto.
● Examples
lun move-in-volume
Move a LUN within a volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command moves a LUN to a new path in the same volume or renames a LUN. If you are organizing
LUNs in a qtree, the command moves a LUN from one qtree to another. You can perform a LUN move while
the LUN is online and serving data. The process is non-disruptive. Use the lun move start command to
move a LUN to a different volume within the same Vserver.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN you want to move. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume of the LUN you want to move.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree of the LUN you want to move.
-lun <text>} - LUN Name
Specifies the name of the LUN that you want to move.
{ -new-path <path> - New LUN Path
Specifies the new path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/
lun1.
| [-new-qtree <qtree name>] - New Qtree Name
Specifies the new qtree name that you want to move the LUN to.
-new-lun <text>} - New LUN Name
Specifies the new name of the LUN.
● Examples
cluster1::> lun move-in-volume -vserver vs1 -volume vol1 -lun lun1 -new-lun newlun1
Related reference
lun move start (page 275)
lun resize
Change the size of a LUN
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command resizes a LUN. You can resize a LUN that is mapped and online. However, to prevent any
potential problems, take the LUN offline before resizing it.
Reducing the size of a LUN can truncate the data in the LUN. The command therefore returns an error
when you attempt to reduce a LUN's size; to override the error, use the -force parameter.
When you increase the size of a LUN, the maximum resize size is based on the initial geometry of the LUN
and the currently available space in the volume. You will receive an error message if you exceed this limit.
The lun show -instance command reports the "Maximum Resize Size" for a LUN based on the initial
geometry. The lun maxsize command reports the maximum LUN size based on the available space. The
maximum size of the LUN is the smaller of the two limits reported by these commands.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN that you want to resize. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN that you want to resize.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN that you want to resize.
-lun <text>} - LUN Name
Specifies the LUN name that you want to resize.
[-force | -f [true]] - Force Reduce LUN Size
Overrides any warnings if you are reducing the size of the LUN. If you use this parameter without a
value, it is set to true, and the command does not prompt you when reducing the size of a LUN would
produce warnings. If you do not use this parameter, the command displays an error if reducing the size
of a LUN would create a problem.
[-size <size>] - New Size
Specifies the new size of the LUN. You can specify the size in bytes or by using one of the following
suffixes:
• c (1 byte)
• w (2 bytes)
• B (512 bytes)
• k (1024 bytes)
• M (k*k bytes)
• G (k*m bytes)
• T (m*m bytes)
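The suffix multipliers above can be expressed as a small conversion helper. This is illustrative only; `size_to_bytes` is not an ONTAP command, and the suffixes are treated as case-sensitive exactly as listed:

```python
# Multipliers for the documented, case-sensitive size suffixes:
# 'c' = 1 byte, 'w' = 2 bytes, 'B' = 512 bytes, 'k' = 1024 bytes,
# 'M' = k*k, 'G' = k*M, 'T' = M*M.
SUFFIX_BYTES = {
    "c": 1,
    "w": 2,
    "B": 512,
    "k": 1024,
    "M": 1024 ** 2,
    "G": 1024 ** 3,
    "T": 1024 ** 4,
}

def size_to_bytes(size: str) -> int:
    # A trailing suffix scales the number; a bare number is bytes.
    if size and size[-1] in SUFFIX_BYTES:
        return int(size[:-1]) * SUFFIX_BYTES[size[-1]]
    return int(size)
```

For example, the `-size 500M` used in the lun resize example below corresponds to 500 × 1024 × 1024 bytes.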
● Examples
cluster1::> lun resize -vserver vs1 -path /vol/vol1/lun1 -size 500M -force
Error: command failed: Reducing LUN size without coordination with the host system
may cause permanent data loss or corruption. Use the force flag to allow
LUN size reduction.
Related reference
lun show (page 237)
lun maxsize (page 230)
lun show
Display a list of LUNs
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The command displays information for LUNs. Use the -instance parameter to display additional LUN
details, such as serial number and space-reservation settings.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the LUNs that match this parameter value.
{ [-path <path>] - LUN Path
Selects the LUNs that match this parameter value. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1
| [-volume <volume name>] - Volume Name
Selects the LUNs that match this parameter value.
[-qtree <qtree name>] - Qtree Name
Selects the LUNs that match this parameter value.
[-lun <text>]} - LUN Name
Selects the LUNs that match this parameter value.
[-size | -s <size>] - LUN Size
Selects the LUNs that match this parameter value.
[-prefix-size | -P <size>] - Prefix Size (privilege: advanced)
Selects the LUNs that match the prefix stream size that you specify.
[-ostype | -t <LUN Operating System Format>] - OS Type
Selects the LUNs that match this parameter value. The operating system types are:
• aix - the LUN stores AIX data.
• hpux - the LUN stores HP-UX data.
• hyper_v - the LUN stores Windows Server 2008 or Windows Server 2012 Hyper-V data.
• linux - the LUN stores a Linux raw disk without a partition table.
• netware - the LUN stores NetWare data.
• openvms - the LUN stores Open-VMS data.
• solaris - the LUN stores a Solaris raw disk in a single-slice partition.
• solaris_efi - the LUN stores Solaris_EFI data.
• vmware - the LUN stores VMware data.
• windows - the LUN stores a raw disk type in a single-partition Windows disk using the Master Boot
Record (MBR) partitioning style.
• windows_gpt - the LUN stores Windows data using the GUID Partition Type (GPT) partitioning style.
• windows_2008 - the LUN stores Windows data for Windows 2008 or later systems.
• xen - the LUN stores Xen data.
[-space-reserve {enabled|disabled}] - Space Reservation
Selects the LUNs that match this parameter value. A value of enabled selects LUNs that are
space-reserved. A value of disabled selects LUNs that are non-space-reserved.
[-serial <text>] - Serial Number
Selects the LUNs that match this parameter value. Some of the characters that are valid in a LUN serial
number also have special meaning to the cluster shell command line:
• The number sign (#) indicates the beginning of a comment to the command line and will cause the
remainder of the line to be ignored. To avoid this, enclose the serial number in double quotes (").
• The less than (<), greater than (>), asterisk (*), and exclamation point (!) influence the query
behavior of the command. To use these as characters in a LUN's serial query, you must first press
escape (ESC). To use these characters to influence the query, enclose the serial number, or partial
serial number, in double quotes (") and apply <, >, *, or !, outside of the double quotes.
Alternatively, the -serial-hex parameter can be used to select LUNs using the serial number encoded
in hexadecimal form.
[-serial-hex <Hex String>] - Serial Number (Hex)
Selects the LUNs that match this parameter value. This parameter applies to the LUN serial number
encoded in hexadecimal form. See the description of the -serial parameter for additional details.
[-comment <text>] - Comment
Selects the LUNs that match this parameter value.
[-space-reserve-honored {true|false}] - Space Reservations Honored
Selects the LUNs that match this parameter value. A value of true selects LUNs that have their space
reservation honored by the container volume. A value of false selects LUNs that are
non-space-reserved.
[-space-allocation {enabled|disabled}] - Space Allocation
Selects the LUNs that match this parameter value. The space allocation attribute determines if the LUN
supports the SCSI Thin Provisioning features defined in the Logical Block Provisioning section of the SCSI
SBC-3 standard.
Specifying enabled for this parameter selects LUNs with support enabled for the SCSI Thin Provisioning
features.
Specifying disabled for this parameter selects LUNs with support disabled for the SCSI Thin Provisioning
features.
Hosts and file systems that do not support SCSI Thin Provisioning should not enable space allocation.
[-container-state {online|aggregate-offline|volume-offline|error}] - LUN Container State (privilege:
advanced)
Selects the LUNs that match this parameter value. The container states are:
• online- The LUN's aggregate and volume are online.
• aggregate-offline- The LUN's aggregate is offline.
• volume-offline- The LUN's volume is offline.
• error- An error occurred accessing the LUN's volume.
[-state {online|offline|nvfail|space-error|foreign-lun-error}] - State
Selects the LUNs that match this parameter value. The states are:
• online- The LUN is online.
• offline- The LUN is administratively offline, or a more detailed offline reason is not available.
• foreign-lun-error- The LUN has been automatically taken offline due to a media error on the associated
foreign LUN.
• nvfail- The LUN has been automatically taken offline due to an NVRAM failure.
• space-error- The LUN has been automatically taken offline due to insufficient space.
[-uuid <UUID>] - LUN UUID
Selects the LUNs that match this parameter value.
[-mapped {mapped|unmapped}] - Mapped
Selects the LUNs that match this parameter value. A value of mapped selects the LUNs that are mapped
to an initiator group.
[-block-size {512|4KB}] - Physical Size of Logical Block
Selects the LUNs that match this parameter value.
[-device-legacy-id <integer>] - Device Legacy ID
Selects the LUNs that match this parameter value.
[-device-binary-id <text>] - Device Binary ID
Selects the LUNs that match this parameter value.
[-device-text-id <text>] - Device Text ID
Selects the LUNs that match this parameter value.
[-read-only {true|false}] - Read Only
Selects the LUNs that match this parameter value.
[-restore-inaccessible {true|false}] - Fenced Due to Restore
Selects the LUNs that match the state you specify. A value of true means that a LUN is fenced for I/O and
management due to a restore operation.
[-size-used <size>] - Used Size
Selects the LUNs that match this parameter value.
[-max-resize-size <size>] - Maximum Resize Size
Selects the LUNs that match this parameter value.
[-creation-timestamp <MM/DD/YYYY HH:MM:SS>] - Creation Time
Selects the LUNs that match this parameter value.
[-class {regular|protocol-endpoint|vvol}] - Class
Selects the LUNs that match this parameter value.
[-node <nodename>] - Node Hosting the LUN
Selects the LUNs that match this parameter value.
[-qos-policy-group <text>] - QoS Policy Group
Selects the LUNs that match this parameter value.
A policy group defines measurable service level objectives (SLOs) that apply to the storage objects with
which the policy group is associated. If you do not assign a policy group to a LUN, the system will not
monitor and control the traffic to it.
[-qos-adaptive-policy-group <text>] - QoS Adaptive Policy Group
Selects the LUNs that match this parameter value.
An adaptive policy group defines measurable service level objectives (SLOs) that apply to the storage
objects with which the adaptive policy group is associated.
[-caching-policy <text>] - Caching Policy Name
Display the LUNs that match the specified cache.
A caching policy defines the caching behavior of this LUN at the Flash Cache level. If a caching policy is
not assigned to this LUN, the system uses the caching policy that is assigned to the containing volume
or Vserver. If a caching policy is not assigned to the containing volume or Vserver, the system uses the
default cluster-wide policy. The available caching policies are:
• none - Does not cache any user data or metadata blocks.
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• meta - Read caches only metadata blocks.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read, and randomly
written user data.
• all - Read caches all data blocks read and written. It does not do any write caching.
Default caching-policy is auto.
[-is-clone {true|false}] - Clone
Selects the LUNs that match this parameter value.
[-is-clone-autodelete-enabled {true|false}] - Clone Autodelete Enabled
Selects the LUNs that match this parameter value.
[-inconsistent-import {true|false}] - Inconsistent Import
Selects the LUNs that match this parameter value. A value of true means that the import of this LUN is
incomplete.
[-serial-7-mode <text>] - 7-mode Serial Number (privilege: advanced)
Selects the LUNs that match this parameter value.
LUNs transitioned from Data ONTAP 7-Mode are assigned new serial numbers for use with Clustered
Data ONTAP. The original 7-Mode serial number is displayed in this field for reference.
[-application <text>] - Application
Selects the LUNs that are part of an application that matches the parameter value.
[-include-offline-containers [true]] - Include LUNs on Offline Volumes and Aggregates (privilege:
advanced)
If true, include available information for LUNs in offline aggregates and offline volumes in the output.
By default, LUNs in offline aggregates and offline volumes are excluded from the output.
● Examples
The following example displays details of the LUN at path /vol/vol1/lun1 in Vserver vs1.
State: offline
LUN UUID: 76d2eba4-dd3f-494c-ad63-1995c1574753
Mapped: mapped
Block Size: 512
Device Legacy ID: -
Device Binary ID: -
Device Text ID: -
Read Only: false
Fenced Due to Restore: false
Used Size: 5MB
Maximum Resize Size: 64.00GB
Creation Time: 9/14/2016 13:55:09
Class: regular
Node Hosting the LUN: node1
QoS Policy Group: -
Caching Policy Name: -
Clone: false
Clone Autodelete Enabled: false
Inconsistent Import: false
Application: -
The following example displays information for the LUN with serial number 1r/wc+9Cpbls:
The following example displays all the LUNs on Vserver vs1 and volume vol1:
lun bind create
Bind a VVol LUN to a protocol endpoint
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command creates a new binding between a protocol endpoint and a vvol LUN. If a binding between the
specified endpoint and vvol already exists, the reference count for the binding is incremented by one.
Note
For optimal results, the protocol endpoint and vvol must be hosted by the same node in the cluster.
● Parameters
-vserver <Vserver Name> - Vserver name
Specifies the name of the Vserver.
-protocol-endpoint-path <path> - Protocol Endpoint
Specifies the path to the protocol endpoint. The specified LUN must already exist and be of class
"protocol-endpoint". Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
-vvol-path <path> - VVol Path
Specifies the path to the vvol. The specified LUN must already exist and be of the class "vvol". Examples
of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
● Examples
cluster1::*> lun bind create -vserver vs1 -protocol-endpoint-path /vol/VV1/PE1 -vvol-path /vol/VV3/234ace
lun bind destroy
Unbind a VVol LUN from a protocol endpoint
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
Decrement the reference count of the binding between a protocol endpoint and vvol LUN. If the resulting
reference count is zero, the binding is removed.
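The bind/destroy reference counting described here, together with the -force behavior documented below, can be modeled with a minimal in-memory sketch. This is a hypothetical illustration of the semantics, not the ONTAP implementation:

```python
# Each (protocol endpoint, vvol) pair carries a reference count:
# lun bind create increments it, lun bind destroy decrements it,
# and the binding is removed when the count reaches zero.
bindings: dict[tuple[str, str], int] = {}

def bind_create(pe_path: str, vvol_path: str) -> int:
    key = (pe_path, vvol_path)
    bindings[key] = bindings.get(key, 0) + 1
    return bindings[key]

def bind_destroy(pe_path: str, vvol_path: str, force: bool = False) -> int:
    # With force=True the binding is removed regardless of its count.
    key = (pe_path, vvol_path)
    count = 0 if force else bindings.get(key, 0) - 1
    if count <= 0:
        bindings.pop(key, None)  # reference count reached zero
        return 0
    bindings[key] = count
    return count
```

Two creates followed by two destroys return the pair to the unbound state, mirroring the command pairing described in the text.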
● Parameters
-vserver <Vserver Name> - Vserver name
Specifies the Vserver.
-protocol-endpoint-path <path> - Protocol Endpoint
Specifies the path of the protocol endpoint LUN. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
-vvol-path <path> - VVol Path
Specifies the path of the vvol LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/
lun1.
[-force [true]] - If true, unbind the Vvol completely even if the current reference count is greater than 1.
The default is false.
Completely remove the specified binding, regardless of the current reference count.
● Examples
lun bind show
Show the list of VVol bindings
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
Shows the configured VVol to protocol endpoint bindings.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the bindings that match this parameter value.
[-protocol-endpoint-msid <integer>] - PE MSID
Selects the bindings that match this parameter value.
[-protocol-endpoint-vdisk-id <text>] - PE Vdisk ID
Selects the bindings that match this parameter value.
[-vvol-msid <integer>] - VVol MSID
Selects the bindings that match this parameter value.
[-vvol-vdisk-id <text>] - VVol Vdisk ID
Selects the bindings that match this parameter value.
[-vserver-uuid <UUID>] - Vserver UUID
Selects the bindings that match this parameter value.
[-protocol-endpoint-path <path>] - Protocol Endpoint
Selects the bindings that match this parameter value. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
[-protocol-endpoint-node <nodename>] - PE Node
Selects the bindings that match this parameter value.
[-vvol-path <path>] - VVol
Selects the bindings that match this parameter value. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
[-vvol-node <nodename>] - VVol Node
Selects the bindings that match this parameter value.
[-secondary-lun <Hex 64bit Integer>] - Secondary LUN
Selects the bindings that match this parameter value.
[-is-optimal {true|false}] - Optimal binding
Selects the bindings that match this parameter value.
[-reference-count <integer>] - Reference Count
Selects the bindings that match this parameter value.
● Examples
lun copy cancel
Cancel a LUN copy operation before the new LUN has been created
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The lun copy cancel command cancels an ongoing LUN copy operation prior to creation of the new LUN.
The command fails if the LUN already exists at the destination path; in that case, use the lun delete
command to delete the LUN at the destination path.
All data transfers will be halted.
Note
This is an advanced command because the preferred way to cancel a LUN copy operation is to wait
until the new LUN becomes visible, and then use the lun delete command to delete the LUN.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
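The /vol/<volume>[/<qtree>]/<lun> format used throughout these commands can be parsed as follows. This is an illustrative helper; `parse_lun_path` is not an ONTAP command:

```python
def parse_lun_path(path: str):
    # Splits /vol/<volume>[/<qtree>]/<lun> into (volume, qtree, lun);
    # qtree is None when the LUN sits directly in the volume.
    parts = path.split("/")
    if len(parts) < 4 or parts[0] != "" or parts[1] != "vol":
        raise ValueError(f"not a LUN path: {path!r}")
    if len(parts) == 4:           # /vol/<volume>/<lun>
        return parts[2], None, parts[3]
    if len(parts) == 5:           # /vol/<volume>/<qtree>/<lun>
        return parts[2], parts[3], parts[4]
    raise ValueError(f"not a LUN path: {path!r}")
```

For example, /vol/vol1/lun1 yields ("vol1", None, "lun1") and /vol/vol1/qtree1/lun1 yields ("vol1", "qtree1", "lun1").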
● Examples
Related reference
lun delete (page 230)
lun copy modify
Modify an ongoing LUN copy operation
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun copy modify command modifies the maximum throughput of an ongoing copy operation.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
[-max-throughput {<integer>[KB|MB|GB|TB|PB]}] - Maximum Transfer Rate (per sec)
Specifies the maximum amount of data, in bytes, that can be transferred per second in support of this
operation.
Note
The specified value will be rounded up to the nearest megabyte.
● Examples
cluster1::> lun copy modify -vserver vs1 -destination-path /vol/vol2/lun2 -max-throughput 25MB
lun copy pause
Pause an ongoing LUN copy operation
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun copy pause command pauses an ongoing copy operation. Use the lun copy resume command
to resume the copy operation.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
● Examples
Related reference
lun copy resume (page 246)
lun copy resume
Resume a paused LUN copy operation
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun copy resume command resumes a paused copy operation.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
● Examples
Related reference
lun copy pause (page 246)
lun copy show
Display a list of LUNs currently being copied
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun copy show command shows information about LUNs currently being copied in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Destination Vserver Name
Selects LUN copy operations that match this parameter value.
[-destination-path <path>] - Destination Path
Selects LUN copy operations that match this parameter value.
[-source-vserver <vserver name>] - Source Vserver Name
Selects LUN copy operations that match this parameter value.
[-source-path <path>] - Source Path
Selects LUN copy operations that match this parameter value.
[-source-snapshot <snapshot name>] - Source Snapshot Name
Selects LUN copy operations that match this parameter value.
[-is-promoted-early {true|false}] - Is Destination Promoted Early
Selects LUN copy operations that match this parameter value.
[-max-throughput {<integer>[KB|MB|GB|TB|PB]}] - Maximum Transfer Rate (per sec)
Selects LUN copy operations that match this parameter value.
[-job-status {Preparing|Allocation-Map|Data|Destroying|Paused-Admin|Paused-Error|Complete|
Destroyed}] - LUN Copy Status
Selects LUN copy operations that match this parameter value. The possible values are:
• Preparing - the LUN copy job is in Preparing status.
• Allocation-Map - the LUN copy job is in Allocating status.
• Data - the LUN copy job is in Moving Data status.
• Destroying - the LUN copy job is in Destroying status.
• Paused-Admin - the LUN copy job is in Paused By Admin status.
• Paused-Error - the LUN copy job is in Paused By Error status.
• Complete - the LUN copy job is in Complete status.
• Destroyed - the LUN copy job is in Destroyed status.
● Examples
The example above displays information about all the LUN copy operations in the cluster.
lun copy start
Start copying a LUN from one volume to another within a cluster
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun copy start command initiates copying of a LUN from one volume to another. The destination
volume can be located in the same Vserver as the source volume (intra-Vserver copy) or in a different
Vserver (inter-Vserver).
Note
A cluster administrator must first create a Vserver peering relationship using vserver peer create
before initiating an inter-Vserver LUN copy operation.
● Parameters
-vserver <Vserver Name> - Destination Vserver Name
Specifies the name of the Vserver that will host the new LUN.
| -destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
-source-path <path>} - Source Path
Specifies the full path to the source LUN, in the format /vol/<volume>[/.snapshot/<snapshot>][/
<qtree>]/<lun>.
[-source-vserver <vserver name>] - Source Vserver Name
Optionally specifies the name of the Vserver hosting the LUN to be copied.
If this parameter is not specified, it is assumed that an intra-Vserver copy operation is being initiated.
The source volume is expected to be in the same Vserver as the destination volume.
[-promote-early [true]] - Promote Early
Optionally specifies that the destination LUN needs to be promoted early.
If the destination is promoted early, the new LUN will be visible immediately. However, Snapshot copies
of the volume containing the new LUN cannot be taken until the LUN copy operation reaches 'Moving
Data' status.
If the destination is promoted late, the new LUN will be visible only after it has been fully framed.
However, the LUN copy job will not block the creation of Snapshot copies of the volume containing the
new LUN.
If this parameter is not specified, the destination LUN will be promoted late.
[-max-throughput {<integer>[KB|MB|GB|TB|PB]}] - Maximum Transfer Rate (per sec)
Optionally specifies the maximum amount of data, in bytes, that can be transferred per second in
support of this operation. This mechanism can be used to throttle a transfer, to reduce its impact on the
performance of the source and destination nodes.
If this parameter is not specified, throttling is not applied to the data transfer.
Note
The specified value will be rounded up to the nearest megabyte.
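Rounding up to the nearest megabyte, as the note describes, amounts to a ceiling division (an illustrative sketch, assuming 1 MB = 1024 × 1024 bytes as in this manual's size-suffix table):

```python
MB = 1024 * 1024  # megabyte, per the documented 'M' suffix

def round_up_to_mb(throughput_bytes: int) -> int:
    # Ceiling division: any partial megabyte is rounded up to a full one.
    return -(-throughput_bytes // MB) * MB
```

So a requested rate of exactly 25 MB is kept as-is, while 25 MB plus one byte becomes 26 MB.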
● Examples
cluster1::> lun copy start -vserver vs2 -destination-path /vol/vol2/lun2 -source-vserver vs1 -source-path /vol/vol1/lun1
Starts an inter-Vserver copy of LUN lun1 from volume vol1 in Vserver vs1 to lun2 on volume vol2 in Vserver
vs2.
cluster1::> lun copy start -vserver vs1 -destination-path /vol/vol2/lun2 -source-path /vol/vol1/lun1
Starts an intra-Vserver copy of LUN lun1 from volume vol1 in Vserver vs1 to lun2 on volume vol2 in Vserver
vs1.
cluster1::> lun copy start -vserver vs1 -destination-path /vol/vol2/lun2 -source-path /vol/vol1/.snapshot/snap1/lun1
Related reference
lun copy resume (page 246)
vserver peer create (page 2275)
lun copy modify (page 245)
lun copy pause (page 246)
lun copy show (page 247)
lun igroup add
Add initiators to an initiator group
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command adds initiators to an existing initiator group (igroup). You can add an initiator to an initiator
group only if there are no LUN mapping conflicts. Mapping conflicts occur when an initiator is already paired
with a LUN. If you attempt to run this command and there are LUN mapping conflicts, the command returns
an error.
An initiator cannot be a member of two igroups of different OS types. For example, if you have an initiator
that belongs to a Solaris igroup, the command does not allow you to add this initiator to an AIX igroup.
When you add FCP initiators, you can specify an alias instead of the initiator's World Wide Port Name
(WWPN) or the iSCSI Qualified name (IQN).
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group to which you want to add a new initiator.
-initiator <text>, ... - Initiators
Specifies the initiator that you want to add. You can specify the WWPN, IQN, or alias of the initiator.
● Examples
cluster1::> lun igroup add -vserver vs1 -igroup ig1 -initiator iqn.1992-08.com.mv.mvinitiator
lun igroup bind
Bind an existing initiator group to a given portset
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command binds an initiator group to a port set so the host knows which LIFs or TPGs to access. When
you bind a port set to an igroup, the host knows which iSCSI or FCP LIF to access. If you do not bind an igroup
to a port set, and you map a LUN to the igroup, then the initiators in the igroup can access the LUN on any
port on the Vserver.
The initiator group cannot be bound to another port set when you use this command. If you attempt to bind
a port set to an initiator group that is already bound to an existing port set, the command returns an error.
You can only bind an initiator group to one port set at a time.
If the initiator group is bound, use the lun igroup unbind command to unbind the initiator group from
the port set. After the initiator group is unbound, you can bind it to another port set.
You can only bind an initiator group to a non-empty port set.
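The access rule described above — an initiator reaches a LUN through an igroup the LUN is mapped to, restricted to the bound port set if one exists, otherwise via any port on the Vserver — can be sketched as a hypothetical in-memory model (not ONTAP code):

```python
def can_access(initiator: str, port: str, lun: str,
               igroups: dict[str, set[str]],      # igroup -> member initiators
               lun_maps: dict[str, set[str]],     # LUN path -> mapped igroups
               portsets: dict[str, set[str]]) -> bool:  # igroup -> bound portset's ports
    # The initiator must belong to an igroup that the LUN is mapped to.
    for igroup in lun_maps.get(lun, set()):
        if initiator in igroups.get(igroup, set()):
            bound = portsets.get(igroup)
            # An unbound igroup allows access on any port; a bound one
            # allows access only through ports in the bound port set.
            if bound is None or port in bound:
                return True
    return False
```

With an igroup bound to a port set containing only lif1, an initiator in that igroup can reach the LUN through lif1 but not through lif2.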
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group that you want to bind a port set to.
-portset <text> - Portset Binding Igroup
Specifies the port set name that you want to bind an initiator group to.
● Examples
cluster1::> lun igroup bind -vserver vs1 -igroup ig1 -portset-name ps1
Related reference
lun igroup unbind (page 257)
lun igroup create
Create a new initiator group
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command creates a new initiator group (igroup). Use igroups to control which hosts have access to
specific LUNs. When you bind an igroup to a port set, a host in the igroup can access the LUNs only by
connecting to the target ports in the port set.
When you create an igroup, you can add multiple existing initiators by specifying them in a list, separating
them with commas. Later, you can add or remove initiators from the initiator group. Use the lun igroup
add command to add initiators. Use the lun igroup remove command to remove an initiator. Unless the
-initiator option is supplied, no initiators are added to a new igroup.
You can also bind a port set to an initiator when you create an initiator group. You can modify the port set
binding of an initiator group by using the lun igroup bind command or the lun igroup unbind
command.
The name you assign to an igroup is independent of the name of the host that is used by the host operating
system, host files, or Domain Name Service (DNS). If you name an igroup aix1, for example, it is not mapped
to the actual IP host name (DNS name) of the host.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
Note
It might be useful to provide meaningful names for igroups, ones that describe the hosts that can
access the LUNs mapped to them.
● Examples
cluster1::> lun igroup create -vserver vs1 -igroup ig1 -protocol mixed -ostype linux -initiator iqn.2001-04.com.example:abc123
Related reference
lun igroup add (page 250)
lun igroup remove (page 254)
lun igroup bind (page 250)
lun igroup unbind (page 257)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command deletes an existing initiator group. By default, you cannot delete an initiator group if LUN
maps for that initiator group exist. You need to unmap all the LUNs that are associated with that initiator
group before you can delete the initiator group. Use the lun unmap command to remove LUNs from an
initiator group.
You can specify the force option to delete an initiator group and remove existing LUN maps defined for
that initiator group.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group that you want to delete.
[-force | -f [true]] - Force
Deletes an initiator group and all associated LUN maps.
● Examples
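The following invocation is illustrative; the Vserver and igroup names are placeholders, not defaults:
cluster1::> lun igroup delete -vserver vs1 -igroup ig1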
Related reference
lun mapping delete (page 267)
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command disables the SAN AIX support across the cluster (all Vservers and all AIX initiator groups).
However, before you can disable SAN AIX support, you must remove all SAN AIX related objects from the
cluster. You need to unmap all the LUNs that are associated with the AIX initiator groups. Then you need to
delete all of the AIX initiator groups. Use the lun unmap command to remove LUNs from an initiator group.
Use the igroup delete command to delete an initiator group.
Note
This command is not intended to be used in normal operation. Use only when you are downgrading to
a release that does not support SAN AIX operation.
● Examples
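An illustrative invocation; run it only after all AIX initiator groups and their LUN maps have been removed:
cluster1::> lun igroup disable-aix-support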
Related reference
lun mapping delete (page 267)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command modifies an attribute for an initiator group. Currently, the only settable attribute is the
operating system.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group whose attribute you want to modify.
[-ostype | -t <Initiator Group OS Type>] - OS Type
Specifies the new operating system type for the initiator group. The operating system types are
• solaris
• windows
• hpux
• aix
• linux
• netware
• vmware
• openvms
• xen
• hyper_v
[-delete-on-unmap {true|false}] - Delete on Last Unmap
Specifies whether this initiator group is deleted automatically when it is no longer a member of a LUN
mapping relationship.
● Examples
cluster1::> lun igroup modify -vserver vs1 -igroup ig1 -ostype windows
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command removes an initiator from an initiator group. You can only remove an initiator if no existing
LUN maps are defined for that initiator group. You must unmap the LUNs from the initiator group with the
lun unmap command before you can remove initiators from the initiator group.
You can use the force option to remove an initiator and associated LUN maps.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group from which you want to remove an initiator.
-initiator <text>, ... - Initiators
Specifies the initiators that you want to remove.
● Examples
cluster1::> lun igroup remove -vserver vs1 -igroup ig1 -initiator iqn.1992-08.com.mv.mvinitiator
Related reference
lun mapping delete (page 267)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command renames an existing initiator group. When you rename an initiator group, this action does
not affect access to the LUNs mapped to the initiator group you want to rename.
An initiator group name is a case-sensitive name and must meet the following requirements:
• Must contain one to 96 characters. Spaces are not allowed.
• Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), colon
(:), and period (.).
• Must start with a letter or number.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group you want to rename.
-new-name <text> - New Igroup Name
Specifies the new name of the initiator group.
● Examples
cluster1::> lun igroup rename -vserver vs1 -igroup ig1 -new-name ignew1
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays status information for initiator groups (igroup). By default, the command displays
status for all initiator groups.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Specifies the Vserver.
[-igroup <text>] - Igroup Name
Selects the initiator groups that match this parameter value.
[-protocol <protocol_enum>] - Protocol
Selects the initiator groups that match this parameter value (FCP, iSCSI, or mixed).
[-ostype | -t <Initiator Group OS Type>] - OS Type
Selects the initiator groups that match this parameter value. The operating system types are
• solaris
• windows
• hpux
• aix
• linux
• netware
• vmware
• openvms
• xen
• hyper_v
[-portset | -a <text>] - Portset Binding Igroup
Selects the initiator groups that match this parameter value.
[-initiator <text>, ...] - Initiators
Selects the initiator groups that match this parameter value.
[-uuid <UUID>] - Igroup UUID
Selects the initiator groups that match this parameter value.
[-delete-on-unmap {true|false}] - Delete on Last Unmap
Selects the initiator groups that match this parameter value. A value of true displays all the initiator
groups that will be deleted automatically when they are no longer used in a LUN mapping relationship.
● Examples
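A minimal invocation that lists the initiator groups of one Vserver; the Vserver name is illustrative:
cluster1::> lun igroup show -vserver vs1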
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command unbinds an initiator group from a port set. When you unbind an initiator group from a port
set, all of the initiators in the initiator group have access to all target LUNs on all network interfaces.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-igroup <text> - Igroup Name
Specifies the initiator group that you want to unbind from the port set.
● Examples
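An illustrative invocation; the Vserver and igroup names are placeholders:
cluster1::> lun igroup unbind -vserver vs1 -igroup ig1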
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command creates an import relationship between a specified LUN and a specified foreign disk so you
can import the foreign disk data into a LUN.
The foreign disk must be marked as foreign using the storage disk set-foreign-lun command before
you can begin the import process.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN to which you import the foreign disk data.
-foreign-disk <text> - Foreign Disk Serial Number
Specifies the serial number of the Foreign Disk.
-path <path> - LUN Path
Specifies the path of the LUN where you want to import the data of the foreign disk to. Examples of
correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
● Examples
cluster1::> lun import create -vserver vs1 -path /vol/dvol1/lun1 -foreign-disk 6000B5D0006A0000006A020E00040000
Related reference
storage disk set-foreign-lun (page 1081)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command deletes the import relationship of a specified LUN or a specified foreign disk.
You cannot use this command if an import is in progress between the foreign disk and the LUN unless you
use the force option. The import must either complete successfully or be stopped before you delete the
import relationship.
You can use the lun import stop command to stop the data import, and then delete the import
relationship.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN whose import relationship you want to delete.
-path <path> - LUN Path
Specifies the path of the LUN whose import relationship you want to delete. Examples of correct LUN
paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| -foreign-disk <text>} - Foreign Disk Serial Number
Specifies the serial number of the foreign disk.
[-force {true|false}] - Force Delete
When set to true, stops an in-progress data import.
● Examples
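An illustrative invocation using the path form; the Vserver and LUN names are placeholders:
cluster1::> lun import delete -vserver vs1 -path /vol/dvol1/lun1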
Related reference
lun import stop (page 263)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command pauses the data import to a specified LUN.
This command does not reset all import checkpoints. To resume a paused import, use the lun import resume
command to restart from the last checkpoint taken before you paused the data import.
If you want to resume the data import from the beginning, use the lun import stop command. Then use the
lun import start command.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to pause the data import to.
-path <path> - LUN Path
Specifies the path of the LUN you want to pause the data import to. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
● Examples
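An illustrative invocation; the Vserver and LUN names are placeholders:
cluster1::> lun import pause -vserver vs1 -path /vol/dvol1/lun1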
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command prepares the cluster for a downgrade to a version of Data ONTAP earlier than 8.3.1 by
disabling the online LUN import feature. Before using this command, verify that all LUNs in import
relationships are offline by running lun show.
● Examples
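An illustrative invocation; the command takes no parameters:
cluster1::> lun import prepare-to-downgrade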
Related reference
lun show (page 237)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Resumes the data import to a specified LUN.
The import starts from the last checkpoint taken before you paused the data import.
If you want to resume the data import from the beginning, use the lun import stop command. Then use the
lun import start command.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to resume the data import to.
-path <path> - LUN Path
Specifies the path of the LUN that you want to resume the data import to. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
● Examples
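An illustrative invocation; the Vserver and LUN names are placeholders:
cluster1::> lun import resume -vserver vs1 -path /vol/dvol1/lun1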
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command displays information about the import relationships.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Displays import relationships for a specified Vserver.
[-foreign-disk <text>] - Foreign Disk Serial Number
Enables you to see the import relationship for a particular foreign disk with the specified serial number.
[-path <path>] - LUN Path
Enables you to see the import relationship for a particular LUN path. Examples of correct LUN paths
are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
[-import-home-node {<nodename>|local}] - Import Home Node
Enables you to see the node that initially started the data import and where the I/O for the foreign disk
is directed. If failover occurs, any in-progress data import restarts on the partner node.
[-import-current-node {<nodename>|local}] - Import Current Node
Displays the node that is currently importing data and where the I/O for the foreign disk is directed.
During giveback, if the import home node is different from the current node, the import restarts on the
initial node (import-home-node).
[-operation-in-progress {import|verify}] - Operation in Progress
Enables you to see the imports in progress or import verification in progress.
[-admin-state {stopped|started|paused}] - Admin State
Enables you to see the import relationships for a specified administrative state. For example, you can
list all the imports that have started in a cluster.
[-operational-state {in_progress|failed|completed|stopped|paused}] - Operational State
Enables you to see the import relationships for a specified operational state. For example, you can list
all the imports that have completed in a cluster.
[-percent-complete <integer>] - Percent Complete
Enables you to see the percentage of completion for both import and verification. If you want to see all
the complete imports and verifications, you would specify 100.
[-imported-blocks <integer>] - Blocks Imported
Enables you to see the number of blocks that have been imported to the LUN.
[-compared-blocks <integer>] - Blocks Compared
Enables you to see the number of LUN and foreign disk blocks that have been compared.
[-total-blocks <integer>] - Total Blocks
Enables you to see the total number of blocks that must be imported to complete the data import to a
LUN or the number of blocks that must be compared to complete the import verification.
[-estimated-remaining-duration {<seconds>|[<d> days] <hh>:<mm>[:<ss>]}] - Estimated
Remaining Duration
If this parameter is specified, the command displays import or verify operations that match the specified
time.
[-failure-reason <text>] - Failure Reason
Selects LUN import operations that match this parameter value.
[-max-throughput-limit {<integer>[KB|MB|GB|TB|PB]}] - Maximum Throughput Limit (per sec)
Selects the LUN import operations that match this parameter value. This value is the throughput limit at
which an import or verify will be throttled. By default, there is no throttling.
[-current-throughput {<integer>[KB|MB|GB|TB|PB]}] - Current Throughput (per sec)
Selects the LUN import operations that match this parameter value. This value is the current throughput
for an in-progress import or verify operation.
[-qos-policy-group <text>] - QoS Policy Group
Selects the LUN import operations that match this parameter value. This value is the QoS policy group
associated with an import relationship.
● Examples
cluster1::> lun import show
vserver path            foreign-disk                   admin state operational state percent complete
vs1     /vol/dvol1/lun1 vgv3040f46a:vgbr300s70:9.126L1 stop        -                 0%
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command initiates the data import to a specified LUN.
You must use the lun import create command to create an import relationship between a LUN and a foreign
disk before you can initiate the data import.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to import data to.
-path <path> - LUN Path
Specifies the path of the LUN that you want to import data to. Examples of correct LUN paths are /vol/
vol1/lun1 and /vol/vol1/qtree1/lun1.
● Examples
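An illustrative invocation; the Vserver and LUN names are placeholders:
cluster1::> lun import start -vserver vs1 -path /vol/dvol1/lun1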
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command stops the data import into a specified LUN.
After you stop the data import, if you start the import again using the lun import start command, the
import restarts from the beginning.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to stop importing data to.
-path <path> - LUN Path
Specifies the path of the LUN that you want to stop the data import to.
● Examples
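An illustrative invocation; the Vserver and LUN names are placeholders:
cluster1::> lun import stop -vserver vs1 -path /vol/dvol1/lun1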
Related reference
lun import start (page 263)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command throttles the speed of the import for a given LUN by specifying a maximum throughput limit
on the import.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN to which data from the foreign disk is imported.
-path <path> - LUN Path
Specifies the path of the LUN to which data from the foreign disk is imported. Examples of correct LUN
paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
-max-throughput-limit {<integer>[KB|MB|GB|TB|PB]} - Maximum Throughput Limit (per sec)
Specifies the maximum amount of throughput to be allocated for processing import requests on the
bound LUN. At the time of creation, the default is zero. A value of zero means that import traffic is
processed by the system at a best-effort rate along with ongoing user I/O. A non-zero value means that
the import is throttled at a rate that is at most the specified maximum throughput limit.
● Examples
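An illustrative invocation that caps the import at 100 MB per second; the names and the limit are placeholders:
cluster1::> lun import throttle -vserver vs1 -path /vol/dvol1/lun1 -max-throughput-limit 100MB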
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command compares the LUN and the foreign disk block by block; running it is optional. Before you can
start this verification, the operational state of the import must be stopped or completed. Use the lun
import show command to determine the operational state.
If a block mismatch occurs, the verification process stops.
Verification must be done offline. Ensure that the foreign disk and the LUN cannot be accessed by a host. To
prevent access to the LUN, take the LUN offline administratively using the lun offline command.
Note: The specified LUN must be in an import relationship with a foreign disk before you can verify the data
import.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to compare block by block with the foreign disk.
-path <path> - LUN Path
Specifies the path of the LUN that you want to compare the foreign disk to. Examples of correct LUN
paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
● Examples
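An illustrative invocation; the Vserver and LUN names are placeholders:
cluster1::> lun import verify start -vserver vs1 -path /vol/dvol1/lun1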
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command stops the block by block verification of the foreign disk and LUN data.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to stop block by block comparison with the foreign
disk.
-path <path> - LUN Path
Specifies the path of the LUN that you want to stop the block by block comparison. Examples of correct
LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
● Examples
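An illustrative invocation; the Vserver and LUN names are placeholders:
cluster1::> lun import verify stop -vserver vs1 -path /vol/dvol1/lun1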
Related reference
lun import verify start (page 264)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command is used before or after a data mobility event that modifies the owning node of the LUN to add
the new optimized nodes to the specified LUN mapping's reporting nodes.
For more information on managing reporting nodes in response to data mobility events, please see the
Data ONTAP SAN Administration Guide.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver containing the LUN.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/
lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN.
-lun <text>} - LUN Name
Specifies the LUN name.
-igroup | -g <text> - Igroup Name
Specifies the igroup the LUN is mapped to.
{ -local-nodes [true] - Add Nodes for Current LUN Location
Add the current LUN owner node and HA partner to the LUN mapping's reporting nodes.
This option should be used after a LUN mobility event to restore optimized access to the LUN.
| -destination-aggregate <aggregate name> - Add Nodes for Aggregate
Add the specified aggregate's owner node and HA partner to the LUN mapping's reporting nodes.
This option may be used prior to a LUN mobility event that changes the LUN's containing aggregate.
| -destination-volume <volume name> - Add Nodes for Volume
Add the specified volume's owner node and HA partner to the LUN mapping's reporting nodes.
This option may be used prior to a LUN mobility event that changes the LUN's containing volume.
| -all [true]} - Add All Nodes (privilege: advanced)
Set the LUN mapping to report on all nodes in preparation for a revert to a previous version of Data
ONTAP.
● Examples
cluster1::> lun mapping add-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup ig1
Add the current owner node and HA partner for the LUN mapping of /vol/vol1/lun1 to igroup ig1
cluster1::> lun mapping add-reporting-nodes -vserver vs1 -volume vol1 -lun * -igroup ig1 -destination-a
ggregate aggr2
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command maps a LUN to all of the initiators in an initiator group (igroup). After you map the LUN, the
LUN is visible to all initiators in the igroup.
Data ONTAP ensures that there are no LUN map conflicts whether the LUN is offline or online. A LUN map
conflict is a mapping that would violate either of the following rules:
• Each LUN can be mapped to an initiator only once. A LUN can be mapped to multiple igroups as long as
each igroup has a distinct set of initiators.
• LUN IDs must be unique such that every initiator has a unique ID for each LUN to which it is mapped. If
you map a LUN to an igroup, the LUN ID for that mapping cannot be reused by any of the initiators in that
igroup.
In order to determine if a LUN ID is valid for a mapping, Data ONTAP checks each initiator in the igroup to
make sure that the LUN ID is not used for another mapping that includes that initiator.
Note
Prior to mapping a LUN, you must have at least one iSCSI or FCP LIF provisioned on the LUN's owner
node and high-availability partner node.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver that contains the LUN you want to map.
{ -path <path> - LUN Path
Specifies the path of the LUN that you want to map. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN you want to map.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN you want to map.
-lun <text>} - LUN Name
Specifies the LUN name that you want to map.
-igroup | -g <text> - Igroup Name
Specifies the igroup that you want to map.
[-lun-id <integer>] - LUN ID
Specifies the LUN ID for the mapping. The LUN ID is specific to the mapping, not to the LUN itself. This is
used by the initiators in the igroup as the Logical Unit Number for the initiator when accessing the
storage.
[-additional-reporting-node <nodename>] - Additional Reporting Node (privilege: advanced)
Specifies an additional node to populate the -reporting-nodes list when creating the LUN mapping.
The specified node's high availability partner will be automatically populated as well. Use this parameter
when preferred data mobility destinations are known ahead of time and the appropriate paths can be
pre-configured.
● Examples
cluster1::> lun mapping create -vserver vs1 -path /vol/vol1/lun1 -igroup ig1 -lun-id 8
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command unmaps a LUN from an initiator group. After you use this command, the LUN is not visible to
any of the initiators in the initiator group.
● Parameters
-vserver <Vserver Name> - Vserver Name
Selects the LUN maps for the Vserver that matches the parameter value.
{ -path <path> - LUN Path
Specifies the path of the LUN you want to unmap. Examples of correct LUN paths are /vol/vol1/lun1
and /vol/vol1/qtree1/lun1.
| -volume <volume name> - Volume Name
Specifies the volume of the LUN you want to unmap.
-qtree <qtree name> - Qtree Name
Specifies the qtree of the LUN you want to unmap.
-lun <text>} - LUN Name
Specifies the name of the LUN you want to unmap.
-igroup | -g <text> - Igroup Name
Specifies the initiator group that you want to unmap the LUN from.
● Examples
cluster1::> lun mapping delete -vserver vs1 -path /vol/vol1/lun1 -igroup ig1
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command is used after a data mobility event to remove reporting nodes that are no longer required for
optimized access from the specified LUN mapping.
For more information on managing reporting nodes in response to data mobility events, please see the
Data ONTAP SAN Administration Guide.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver containing the LUN.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/
lun1.
| -volume <volume name> - Volume Name
Specifies the volume that contains the LUN.
[-qtree <qtree name>] - Qtree Name
Specifies the qtree that contains the LUN.
-lun <text>} - LUN Name
Specifies the LUN name.
-igroup | -g <text> - Igroup Name
Specifies the igroup the LUN is mapped to.
● Examples
cluster1::> lun mapping remove-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup ig1
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command lists the mappings between LUNs and initiator groups.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the LUN maps for the Vserver that matches the parameter value.
{ [-path <path>] - LUN Path
Selects the LUN maps for the LUN with the path that matches the parameter value. Examples of correct
LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/lun1.
| [-volume <volume name>] - Volume Name
Selects the LUN maps for the volumes that match the parameter value.
[-qtree <qtree name>] - Qtree Name
Selects the LUN maps for the qtrees that match the parameter value.
[-lun <text>]} - LUN Name
Selects the LUN maps for the LUNs with a name that matches the parameter value.
[-igroup | -g <text>] - Igroup Name
Selects the LUN maps for the igroup that matches the parameter value.
[-ostype <Initiator Group OS Type>] - Igroup OS Type
Selects the LUN maps for the initiator groups with the OS type that matches the parameter value.
[-protocol <protocol_enum>] - Igroup Protocol Type
Selects the LUN maps for initiator groups with a protocol that matches the parameter value. Possible
values include FCP, iSCSI, or mixed.
[-lun-id <integer>] - LUN ID
Selects the LUN maps with a LUN ID that matches the parameter value.
[-portset <text>] - Portset Binding Igroup
Selects the LUN maps for initiator groups bound to the portset that matches the parameter value.
[-alua {true|false}] - ALUA
Selects the LUN maps with ALUA settings that match the parameter value.
[-initiators | -n <text>, ...] - Initiators
Selects the LUN maps for initiator groups containing the initiators that match the parameter value.
[-node <nodename>] - LUN Node
Selects the LUN maps for nodes that match the parameter value.
[-reporting-nodes <nodename>, ...] - Reporting Nodes
Selects the LUN maps that match the parameter value.
● Examples
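A minimal invocation that lists the LUN maps of one Vserver; the Vserver name is illustrative:
cluster1::> lun mapping show -vserver vs1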
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The lun mapping show-initiator command lists the LUNs that are mapped to an initiator group
containing a specific initiator.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the LUN mappings for the vserver that you specify.
-initiator <text> - Initiator Name
Selects the LUN mappings for the initiator that you specify.
[-lun-id <integer>] - Logical Unit Number
Selects the LUN mappings with a LUN ID that you specify.
[-igroup <text>] - Igroup Name
Selects the LUN mappings for the initiator group that you specify.
[-path <path>] - LUN Path
Selects the LUN mappings for the LUN path that you specify.
[-node <nodename>] - LUN Node
Selects the LUN mappings for the LUNs which are being hosted on the node that you specify.
[-reporting-nodes <nodename>, ...] - Reporting Nodes
Selects the LUN mappings for the LUNs which have reporting nodes that you specify.
[-vserver-uuid <UUID>] - Vserver UUID
Selects the LUN mappings for the Vserver UUID that you specify.
● Examples
The following example displays the LUN mappings for initiator 20:10:0a:50:00:01:01:01 in Vserver vs1.
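A corresponding command line (output not shown):
cluster1::> lun mapping show-initiator -vserver vs1 -initiator 20:10:0a:50:00:01:01:01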
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The lun move cancel command cancels an ongoing LUN move operation prior to creation of the new LUN.
The command fails if the LUN already exists at the destination path; in that case, allow the current move
operation to complete and then move it back using the lun move start command.
All data transfers will be halted. If the source LUN was quiesced, it will be restored to normal operation.
Note
This is an advanced command because the preferred way to cancel a LUN move operation is to wait
until the new LUN becomes visible, and then move it back.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
● Examples
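An illustrative invocation; the Vserver name and destination path are placeholders:
cluster1::> lun move cancel -vserver vs1 -destination-path /vol/vol2/lun2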
Related reference
lun move start (page 275)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun move modify command modifies the maximum throughput of an ongoing move operation.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
-max-throughput {<integer>[KB|MB|GB|TB|PB]} - Maximum Transfer Rate (per sec)
Specifies the maximum amount of data, in bytes, that can be transferred per second in support of this
operation. This mechanism can be used to throttle a transfer, to reduce its impact on the performance of
the source and destination nodes.
Note
The specified value will be rounded up to the nearest megabyte.
● Examples
cluster1::> lun move modify -vserver vs1 -destination-path /vol/vol2/lun2 -max-throughput 25MB
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun move pause command pauses an ongoing move operation. Use the lun move resume command
to resume the move operation.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
● Examples
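No example is given in this manual; a hypothetical invocation follows (the Vserver name vs1 and the destination path are illustrative):
cluster1::> lun move pause -vserver vs1 -destination-path /vol/vol2/lun2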
Related reference
lun move resume (page 273)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun move resume command resumes a paused move operation.
● Parameters
{ -vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the destination LUN.
-destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
● Examples
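No example is given in this manual; a hypothetical invocation follows (the Vserver name vs1 and the destination path are illustrative):
cluster1::> lun move resume -vserver vs1 -destination-path /vol/vol2/lun2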
Related reference
lun move pause (page 272)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun move show command shows information about LUNs currently being moved in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects LUN move operations that match this parameter value.
[-destination-path <text>] - Destination Path
Selects LUN move operations that match this parameter value.
● Examples
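A minimal invocation with no selection parameters (output omitted here):
cluster1::> lun move show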
The example above displays information about all the LUN move operations in the cluster.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The lun move start command initiates moving of a LUN from one volume to another. The destination
volume can be located on the same node as the original volume or on a different node.
Note
Use the lun move-in-volume command if you want to rename the LUN or move it within the same
volume.
Note
This command does not support movement of LUNs that are created from files.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the name of the Vserver that will host the new LUN.
| -destination-path <path> - Destination Path
Specifies the full path to the new LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
-source-path <path>} - Source Path
Specifies the full path to the source LUN, in the format /vol/<volume>[/<qtree>]/<lun>.
[-promote-late [true]] - Promote Late
Optionally specifies that the destination LUN needs to be promoted late.
If the destination is promoted early, the new LUN will be visible immediately. However, Snapshot copies
of the volume containing the new LUN cannot be taken until the LUN move operation reaches 'Moving
Data' status.
If the destination is promoted late, the new LUN will be visible only after it has been fully framed.
However, the LUN move job will not block the creation of Snapshot copies of the volume containing the
new LUN.
If this parameter is not specified, the destination LUN will be promoted early.
[-max-throughput {<integer>[KB|MB|GB|TB|PB]}] - Maximum Transfer Rate (per sec)
Optionally specifies the maximum amount of data, in bytes, that can be transferred per second in
support of this operation. This mechanism can be used to throttle a transfer, to reduce its impact on the
performance of the source and destination nodes.
If this parameter is not specified, throttling is not applied to the data transfer.
Note
The specified value will be rounded up to the nearest megabyte.
● Examples
cluster1::> lun move start -vserver vs1 -destination-path /vol/vol2/lun2 -source-path /vol/vol1/lun1
Related reference
lun move resume (page 273)
lun move-in-volume (page 235)
lun move modify (page 272)
lun move pause (page 272)
lun move show (page 273)
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
Clears the persistent reservation for the specified LUN.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/
lun1.
| -volume <volume name> - Volume Name
Specifies the volume.
-lun <text> - LUN Name
Specifies the name of the LUN.
[-qtree <qtree name>]} - Qtree Name
Specifies the qtree.
● Examples
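No example is given in this manual; a hypothetical invocation follows (the Vserver name vs1 and the LUN path are illustrative):
cluster1::> lun persistent-reservation clear -vserver vs1 -path /vol/vol1/lun1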
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
Displays reservation information for a specified LUN in a Vserver. Unlike other show commands, the user
must specify the LUN.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
{ -path <path> - LUN Path
Specifies the path of the LUN. Examples of correct LUN paths are /vol/vol1/lun1 and /vol/vol1/qtree1/
lun1.
| -volume <volume name> - Volume Name
Specifies the volume.
-lun <text> - LUN Name
Specifies the name of the LUN.
[-qtree <qtree name>]} - Qtree Name
Specifies the qtree.
[-scsi-revision {scsi2|scsi3}] - SCSI Revision
Selects the reservations that match this parameter value.
[-entry-type {reservation|registration}] - Reservation or Registration
Selects the reservations that match this parameter value.
[-protocol {fcp|iscsi}] - Protocol
Selects the reservations that match this parameter value.
[-reservation-key <text>] - Reservation Key
Selects the reservations that match this parameter value.
[-reservation-type-code <text>] - Reservation Type
Selects the reservations that match this parameter value. The possible values for SCSI-3 reservations are:
• write exclusive
• exclusive access
• write exclusive registrants only
• exclusive access registrants only
• write exclusive all registrants
• exclusive access all registrants
and for SCSI-2 are:
• regular
• third party
[-initiator-name <text>] - Initiator Name
Selects the reservations that match this parameter value.
[-aptpl {true|false}] - Persist Through Power Loss
Selects the reservations that match this parameter value. If true, the reservation will be preserved over
a power loss. If false, it will not. This value is for SCSI-3 reservations only.
● Examples
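No example is given in this manual; a hypothetical invocation follows (the Vserver name vs1 and the LUN path are illustrative; output omitted):
cluster1::> lun persistent-reservation show -vserver vs1 -path /vol/vol1/lun1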
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command adds existing iSCSI and FCP LIFs to a port set. To create a new port set, use the lun portset
create command.
Use the network interface create command to create new LIFs.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-portset <text> - Portset Name
Specifies the port set you want to add the LIFs to.
-port-name <port_name>, ... - LIF or TPG Name
Specifies the LIF name you want to add to the port set.
● Examples
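No example is given in this manual; a hypothetical invocation follows (the Vserver, port set, and LIF names are illustrative):
cluster1::> lun portset add -vserver vs1 -portset ps1 -port-name lif1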
Related reference
lun portset create (page 279)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command creates a new port set for FCP and iSCSI. The port set name can include a maximum of 96
characters. You can add LIFs to the new port set. If you do not add a LIF to the port set, you create an empty
port set. To add LIFs to an existing port set, use the lun portset add command.
After you create a port set, you must bind the port set to an igroup so the host knows which FC or iSCSI LIFs
to access. If you do not bind an igroup to a port set, and you map a LUN to an igroup, then the initiators in
the igroup can access the LUN on any LIF on the Vserver.
Note
You cannot bind an igroup to an empty port set because the initiators in the igroup would have no
LIFs to access the LUN.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-portset <text> - Portset Name
Specifies the name of the new port set. A port set name is a case-sensitive name that must contain one
to 96 characters. Spaces are not allowed.
[-port-name <port_name>, ...] - LIF Or TPG Name
Specifies the name of the logical interface that you want to add to the portset you want to create.
{ [-protocol {mixed|fcp|iscsi}] - Protocol
Specifies if the portset protocol type is fcp, iscsi, or mixed. The default is mixed.
| [-fcp | -f [true]] - FCP
Specifies FCP as the protocol type of the new port set.
| [-iscsi | -i [true]]} - iSCSI
Specifies iSCSI as the protocol type of the new port set.
● Examples
Creates a port set ps1 on Vserver vs1 with the protocol type of mixed.
Creates a port set iscsips on Vserver vs1 with the protocol type of iscsi.
Creates a port set fcppc on Vserver vs1 with the protocol type of fcp.
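The command lines for the three examples described above were omitted; plausible reconstructions, using only the names given in the descriptions, are:
cluster1::> lun portset create -vserver vs1 -portset ps1 -protocol mixed
cluster1::> lun portset create -vserver vs1 -portset iscsips -protocol iscsi
cluster1::> lun portset create -vserver vs1 -portset fcppc -protocol fcp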
cluster1::> lun portset create -vserver vs1 -portset ps2 -protocol mixed -port-name l11
Related reference
lun portset add (page 278)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command deletes an existing port set. By default, you cannot delete a port set if it is bound to an
initiator group. If a port set is bound to an initiator group, you can do one of the following:
• specify the force option to unbind the port set from the initiator group and delete the port set.
• use the lun igroup unbind command to unbind the port set from the initiator group. Then you can
delete the port set.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-portset <text> - Portset Name
Specifies the port set you want to delete.
[-force | -f [true]] - Force
Forcibly unbinds the port set from the initiator group.
● Examples
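No example is given in this manual; a hypothetical invocation follows (the Vserver and port set names are illustrative):
cluster1::> lun portset delete -vserver vs1 -portset ps1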
Related reference
lun igroup unbind (page 257)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command removes a LIF from a port set.
You cannot remove the last LIF in a port set if the port set is bound to an initiator group (igroup). To remove
the last LIF in a port set, use the lun igroup unbind command to unbind the port set from the igroup.
Then you can remove the last LIF in the port set.
● Parameters
-vserver <Vserver Name> - Vserver Name
Specifies the Vserver.
-portset <text> - Portset Name
Specifies the port set you want to remove a LIF from.
-port-name <port_name>, ... - LIF or TPG Name
Specifies the LIF name you want to remove from the port set.
● Examples
cluster1::> lun portset remove -vserver vs1 -portset ps1 -port-name lif1
Related reference
lun igroup unbind (page 257)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the LIFs in a port set. By default, the command displays all LIFs in all port sets.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the port sets that match this parameter value.
[-portset <text>] - Portset Name
Selects the port sets that match this parameter value.
[-port-name <port_name>, ...] - LIF Or TPG Name
Selects the port sets that match this parameter value.
[-protocol {mixed|fcp|iscsi}] - Protocol
Selects the port sets that match this parameter value.
[-port-count <integer>] - Number Of Ports
Selects the port sets that match this parameter value.
[-igroups <igroup>, ...] - Bound To Igroups
Selects the port sets that match this parameter value.
● Examples
The example above displays the port sets that contain zero LIFs.
vs1 p1 iscsi - -
vs1 iscsips iscsi lif1 igroup1
2 entries were displayed.
The example above displays the port sets that have the iSCSI protocol.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The lun transition show command displays information about the LUN transition processing status of
volumes. If no parameters are specified, the command displays the following information about all volumes:
• Vserver name
• Volume name
• Transition status
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the volumes in the specified Vserver.
[-volume <volume name>] - Volume Name
Selects the volumes with the specified name.
[-status {none|complete|failed|active}] - Transition Status
Selects the volumes that match the specified transition status. The possible status values are:
• active - The volume is in an active SnapMirror transition relationship and not yet transitioned.
• complete - LUN transition has completed for the volume.
• failed - LUN transition has failed for the volume.
• none - The volume did not contain LUNs to transition from Data ONTAP 7-Mode.
[-vserver-uuid <UUID>] - Vserver UUID
Selects the volumes in the Vserver that matches the specified UUID.
[-node <nodename>] - Filer ID
Selects the volumes that match the specified node.
● Examples
The following example displays LUN transition information for all volumes in a Vserver named vs1:
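Based on that description, the command would be (output omitted):
cluster1::> lun transition show -vserver vs1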
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The lun transition start command starts LUN transition for the specified volume. Normally, transition
starts automatically when snapmirror break is issued for the volume; this command allows restarting
transition in the event the automatic process was interrupted or failed.
● Parameters
-vserver <Vserver Name> - Vserver Name
The name of the Vserver containing the volume. If only one data Vserver exists, you do not need to
specify this parameter.
-volume <volume name> - Volume Name
The name of the volume to restart LUN transition.
● Examples
The following example starts LUN transition on a volume named volume1 in a Vserver named vs1:
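The command line for this example was omitted; based on the description, it would be:
cluster1::> lun transition start -vserver vs1 -volume volume1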
Related reference
snapmirror break (page 750)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The lun transition 7-mode delete command deletes an untransitioned LUN copied from a Data
ONTAP 7-Mode system. This allows the administrator to recover space from the volume for LUNs that
cannot be transitioned to clustered Data ONTAP (for example, because the LUN has an unsupported OS
type) without disrupting LUNs that have already transitioned.
● Parameters
-vserver <Vserver Name> - Vserver Name
This specifies the name of the Vserver from which the LUN is to be deleted. If only one data Vserver
exists, you do not need to specify this parameter.
-path <path> - LUN Path
This specifies the path to the LUN to delete.
● Examples
The following example deletes the LUN /vol/vol1/lun1 in a Vserver named vs1:
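The command line for this example was omitted; based on the description, it would be:
cluster1::> lun transition 7-mode delete -vserver vs1 -path /vol/vol1/lun1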
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The lun transition 7-mode show command displays information about LUNs copied from a Data
ONTAP 7-Mode system. If no parameters are specified, the command displays the following information
about all 7-Mode LUNs:
• Vserver name
• LUN path
• Operating system type
• Size
• Whether or not the LUN has been transitioned to clustered Data ONTAP
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the 7-Mode LUNs in the specified Vserver.
[-path <path>] - LUN Path
Selects the 7-Mode LUNs with the specified path.
[-volume <volume name>] - Volume Name
Selects the 7-Mode LUNs that match the specified volume.
[-ostype <LUN Operating System Format>] - OS Type
Selects the 7-Mode LUNs that match the specified operating system type.
● Examples
The following example displays a summary of all 7-Mode LUNs for the volume vol1 in a Vserver named vs1:
The following example displays detailed information for the 7-Mode LUN /vol/vol1/lun2 in a Vserver named
vs1:
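The command lines for these two examples were omitted; based on the descriptions, they would be (output omitted):
cluster1::> lun transition 7-mode show -vserver vs1 -volume vol1
cluster1::> lun transition 7-mode show -vserver vs1 -path /vol/vol1/lun2 -instance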
metrocluster configure
Configure MetroCluster and start DR mirroring for the node and its DR group
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configure command creates a MetroCluster configuration on either all the nodes in
both MetroCluster clusters or solely on the nodes in a DR group. The command configures an HA partner, a
DR partner, and a DR auxiliary partner for each node and also starts NVRAM mirroring between the
configured DR partner nodes.
In MetroCluster, a DR group is a group of four nodes, two in each of the MetroCluster clusters:
• In the local cluster, a node and its HA partner,
• In the peer cluster, a node and its HA partner. These nodes are DR partners to the nodes in the local
cluster.
In a two-node MetroCluster configuration, a DR group is a group of two nodes, one in each of the
MetroCluster clusters.
There can be several DR groups in the MetroCluster configuration. MetroCluster provides synchronous DR
protection to all data sets belonging to nodes within a properly configured DR group.
Without the -node-name parameter, the metrocluster configure command configures all the DR groups in
both the MetroCluster clusters.
With the -node-name mynode parameter, the command configures both the mynode node and its HA partner
node from the local cluster, and its DR partner and DR auxiliary partner from the peer cluster. Before
running the metrocluster configure command, the aggregates and Vservers on each node must be
prepared for the MetroCluster configuration. Each node should have:
• At least one non-root, mirrored aggregate of size greater than 10GB. This non-root aggregate should not
have any volumes in it.
• No other non-root aggregates. Any other non-root, unmirrored aggregates and volumes should be
deleted.
• No Vservers other than Vservers of type "node" or "admin." Any Vservers that are not of type "node" or
"admin" should be deleted.
• A mirrored and healthy root aggregate.
After the command succeeds, all nodes in the local and remote clusters will have HA, DR, and DR auxiliary
partners, and NVRAM mirroring between the DR partners will be turned on. The same conditions apply before
running the metrocluster configure -node-name mynode command, except that only one DR group is
configured. For a MetroCluster over IP configuration, the metrocluster configuration-settings
commands must be completed before using the metrocluster configure command. The commands that
must be completed are:
• metrocluster configuration-settings dr-group create
• metrocluster configuration-settings interface create
• metrocluster configuration-settings connection connect
● Parameters
[-node-name {<nodename>|local}] - Node to Configure
This optional parameter specifies the name of a single node in the local cluster. The command creates
MetroCluster configuration on the local node specified by this parameter and the three other nodes
belonging to the same DR group.
[-refresh {true|false}] - Refresh Configuration (privilege: advanced)
This optional parameter specifies whether the node partner configuration steps should be performed again.
If this parameter is not specified, the MetroCluster configuration continues using the current node
partner information.
[-allow-with-one-aggregate {true|false}] - Override the Two Data Aggregates Requirement
(privilege: advanced)
This optional parameter specifies if MetroCluster configuration should be allowed with only one data
aggregate in each cluster. This option has no effect if two or more aggregates are present.
● Examples
The following example shows the creation of the MetroCluster configuration for a single DR group:
The following example shows the creation of the MetroCluster configuration for all DR groups:
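The command lines for these two examples were omitted; plausible invocations are shown below (the node name is illustrative; output omitted):
cluster1::> metrocluster configure -node-name nodeA1
cluster1::> metrocluster configure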
Related reference
metrocluster configuration-settings commands (page 327)
metrocluster configuration-settings dr-group create (page 340)
metrocluster configuration-settings interface create (page 344)
metrocluster configuration-settings connection connect (page 332)
metrocluster show (page 293)
metrocluster heal
Heal DR data aggregates and DR root aggregates
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster heal command heals DR data aggregates and DR root aggregates in preparation for a
DR switchback. You must issue this command twice to complete the two phases of the healing process: first
to heal the aggregates by resynchronizing the mirrored plexes and then to heal the root aggregates by
switching them back to the disaster site. The DR partner nodes must be powered off and remote disk shelves
must be powered on before running this command.
● Parameters
-phase {aggregates|root-aggregates} - MetroCluster Healing Phase
This parameter specifies the healing phase. The first phase, aggregates, heals aggregates by
resynchronizing mirrored plexes. The second phase, root-aggregates, heals the root aggregates of
partner nodes. Healing root aggregates switches them back to the disaster site, allowing the site to boot
up.
[-override-vetoes [true]] - Override All Soft Vetoes
This optional parameter overrides almost all heal operation soft vetoes. If this optional parameter is set
to true, the system overrides subsystem soft vetoes that might prevent the heal operation. Hard vetoes
cannot be overridden and can still prevent the heal operation.
● Examples
The following example performs the healing of both the aggregates and root aggregates:
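The command lines for this example were omitted; based on the two healing phases described above, they would be:
cluster1::> metrocluster heal -phase aggregates
cluster1::> metrocluster heal -phase root-aggregates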
metrocluster modify
Modify MetroCluster configuration options
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster modify command modifies MetroCluster parameters for nodes in the MetroCluster
configuration.
● Parameters
{ -auto-switchover-failure-domain <MetroCluster AUSO Failure Domain> - Cluster Level AUSO
Option
This parameter specifies the configuration of automatic switchover. Modifying auto-switchover-failure-
domain is not supported on a MetroCluster over IP configuration.
The parameter values are:
• auso-on-cluster-disaster - triggers an unplanned switchover if all nodes in a DR cluster are down.
• auso-on-dr-group-disaster - triggers an unplanned switchover if both nodes of a DR group are down.
• auso-disabled - automatic switchover is disabled.
On a MetroCluster over IP configuration, auto-switchover-failure-domain is set to auso-disabled;
otherwise, it is set to auso-on-cluster-disaster.
A change to the parameter affects only the local cluster where the metrocluster modify command is
used.
| -enable-unmirrored-aggr-deployment {true|false} - Enable or Disable Unmirrored Aggregate
Deployment
Enables or disables the deployment of unmirrored aggregates.
| -node-name {<nodename>|local} - Node to Change the Option On
This parameter is used to specify the node in the cluster for which the parameter needs to be modified.
[-automatic-switchover-onfailure [true]]} - Node Level AUSO Option (privilege: advanced)
This parameter is used to enable automatic switchover on node failures. The automatic-switchover-
onfailure parameter is not supported on a MetroCluster over IP configuration.
All nodes in a MetroCluster configuration must have this option enabled to enable automatic switchover
on failure.
● Examples
The following example shows the output of Metrocluster modification done on a node:
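The command line for this example was omitted; a hypothetical node-level modification is shown below (the node name is illustrative, and the -automatic-switchover-onfailure parameter requires the advanced privilege level as noted above; output omitted):
cluster1::> metrocluster modify -node-name nodeA1 -automatic-switchover-onfailure true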
Related reference
metrocluster configure (page 287)
metrocluster remove-dr-group
Remove a DR group from a MetroCluster configuration with multiple DR groups
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster remove-dr-group command removes a DR group from a MetroCluster configuration
with multiple DR groups.
The command removes the local nodes in the identified DR group from the MetroCluster configuration. The
command must be repeated on the partner cluster to remove the remote nodes in the identified DR group.
A DR group cannot be removed when the MetroCluster configuration has only a single DR group; two or
more DR groups must be configured. The command does not allow a user to unconfigure the entire
MetroCluster. Contact technical support to unconfigure the entire MetroCluster.
The following preparation steps must be completed on the local and partner clusters before removing a DR
group.
• Move all data volumes to another DR group.
• Move all MDV_CRS metadata volumes to another DR group.
• Delete all MDV_aud metadata volumes that may exist in the DR group to be removed.
• Delete all data aggregates in the DR group to be removed. Root aggregates are not deleted.
• Migrate all data LIFs to home nodes in another DR group.
• Migrate the cluster management LIF to a home node in another DR group. Node management and inter-
cluster LIFs are not migrated.
• Transfer epsilon to a node in another DR group.
The command is vetoed if the preparation steps are not completed on the local and partner clusters.
● Parameters
-dr-group-id <integer> - DR Group ID
This parameter identifies the DR group to be removed.
● Examples
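A hypothetical invocation is shown below (the DR group ID is illustrative); the warning and preparation-step messages that follow are the kind of confirmation the command prints:
cluster1::> metrocluster remove-dr-group -dr-group-id 2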
Warning: Nodes in the DR group that are removed from the MetroCluster
configuration will lose their disaster recovery protection.
Info: The following preparation steps must be completed on the local and
partner clusters before removing a DR group.
metrocluster show
Display MetroCluster configuration information
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster show command displays configuration information for the pair of clusters configured in
MetroCluster.
This command displays the following details about the local cluster and the DR partner cluster:
• Configuration State: This field specifies the configuration state of the cluster.
• Mode: This field specifies the operational mode of the cluster.
• AUSO Failure Domain: This field specifies the AUSO failure domain of the cluster.
● Parameters
[-periodic-check-status ]
If this option is used the MetroCluster periodic check status is displayed.
● Examples
The following example shows the output of the command before MetroCluster configuration is done:
The following example shows the output of the command after MetroCluster configuration is done only for
some DR groups:
Mode -
AUSO Failure Domain -
Remote: clusB
Configuration State partially-configured
Mode -
AUSO Failure Domain -
The following example shows the output of the command after MetroCluster configuration is done:
The following example shows the output of the command in switchover mode:
The following example shows the output of the command when the -periodic-check-status option is used:
Related reference
metrocluster node show (page 360)
metrocluster configure (page 287)
metrocluster switchback
Switch back storage and client access
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster switchback command initiates the switchback of storage and client access from
nodes in the DR site to their home nodes. The home nodes and storage shelves must be powered on and
reachable by nodes in the DR site. The metrocluster heal -phase aggregates and metrocluster heal
-phase root-aggregates commands must have successfully completed before running the metrocluster
switchback command.
● Parameters
[-override-vetoes | -f [true]] - Override All Soft Vetoes
This optional parameter overrides all switchback operation soft vetoes. If this optional parameter is
used, the system overrides subsystem soft vetoes that might prevent the switchback operation. Hard
vetoes cannot be overridden and can still prevent the switchback operation.
[-simulate [true]] - Simulate Switchback (privilege: advanced)
If this optional parameter is used, the system runs a simulation of the switchback operation to make
sure all the prerequisites for the operation are met. This parameter cannot be used with switchback
operations performed for switching back left-behind aggregates or for retrying a partially successful
switchback.
● Examples
The following is an example of how to start the switchback operation.
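The command line for this example was omitted; the basic invocation is:
cluster1::> metrocluster switchback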
Related reference
metrocluster heal (page 290)
metrocluster switchover
Switch over storage and client access
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster switchover command initiates the switchover of storage and client access from the
source cluster to the disaster recovery (DR) site. This command is to be used after a disaster that renders all
the nodes in the source cluster unreachable and powered off. It can also be used for negotiated switchover
when the outage of the source cluster is anticipated as in cases such as disaster recovery testing or a site
going offline for maintenance. If a switchover operation previously failed on certain nodes on the DR site
then issuing the command retries the operation on all of those nodes.
● Parameters
{ [-simulate [true]] - Simulate Negotiated Switchover (privilege: advanced)
If this optional parameter is used, the system runs a simulation of the negotiated switchover operation
to make sure all the prerequisites for the operation are met. This parameter cannot be used with
switchover with the -forced-on-disaster parameter.
[-controller-replacement [true]] - Indicate Controller Replacement
If this optional parameter is used, the nodes in the DR site will be halted during the negotiated
switchover operation, to facilitate controller replacement.
| [-forced-on-disaster [true]] - Force Switchover on Disaster
This optional parameter forces a switchover on disaster. This parameter should be used if all the nodes
on the disaster stricken site are powered off and unreachable. In the absence of this parameter, the
command attempts to perform a negotiated switchover operation.
[-force-nvfail-all [true]] - Sets in-nvfailed-state on All Volumes (privilege: advanced)
If this parameter is used, the switchover command will set the in-nvfailed-state parameter to true for all
volumes being switched over and will set the -dr-force-nvfail parameter to true for any volumes
that do not already have it enabled. This parameter has no effect when performing a negotiated
switchover.
[-retry-failed-nodes <Node name>, ...]} - Nodes to Switchover
This optional parameter takes the list of nodes that previously failed the switchover operation and
retries the switchover operation on each of those nodes. This parameter is applicable only for a switchover
with the -forced-on-disaster parameter.
[-override-vetoes [true]] - Override All Soft Vetoes
This optional parameter overrides all switchover operation soft vetoes. If this parameter is used, the
system overrides all subsystem soft vetoes that might prevent the switchover operation. Hard vetoes
cannot be overridden and can still prevent the switchover operation.
● Examples
When a disaster strikes one site, the metrocluster switchover command is issued on the disaster
recovery site as follows:
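The command line for this example was omitted; a plausible invocation for a forced switchover after a disaster is:
cluster1::> metrocluster switchover -forced-on-disaster true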
Related reference
metrocluster show (page 293)
metrocluster operation show (page 363)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check disable-periodic-check command disables the periodic checking of the
MetroCluster configuration.
After this command is run, the MetroCluster Check job will be prevented from periodically checking the
configuration for errors.
● Examples
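An illustrative invocation (the cluster prompt clusA is hypothetical):

clusA::> metrocluster check disable-periodic-check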
Related reference
metrocluster check enable-periodic-check (page 297)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check enable-periodic-check command enables the periodic checking of the
MetroCluster configuration.
After this command is run, the MetroCluster Check job will be able to run in the background and periodically
check the configuration for errors.
● Examples
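An illustrative invocation (the cluster prompt clusA is hypothetical):

clusA::> metrocluster check enable-periodic-check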
Related reference
metrocluster check disable-periodic-check (page 297)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check run command performs checks on the MetroCluster configuration and reports
configuration errors if any.
To run this command, at least one DR group needs to be configured. The command checks the following
parts of the configuration:
Node Configuration:
• node-reachable: This check verifies that the node is reachable.
• metrocluster-ready: This check verifies that the node is ready for MetroCluster configuration.
• local-ha-partner: This check verifies that the HA partner node is in the same cluster.
• ha-mirroring-on: This check verifies that HA mirroring for the node is configured.
• symmetric-ha-relationship: This check verifies that the relationship between the node and its HA partner
is symmetric.
• remote-dr-partner: This check verifies that the DR partner node is in the remote cluster.
• dr-mirroring-on: This check verifies that DR mirroring for the node is configured.
• symmetric-dr-relationship: This check verifies that the relationship between the node and its DR partner is
symmetric.
• remote-dr-auxiliary-partner: This check verifies that the DR auxiliary partner node is in the remote cluster.
• symmetric-dr-auxiliary-relationship: This check verifies that the relationship between the node and its DR
auxiliary partner is symmetric.
• storage-failover-enabled: This check verifies that storage failover is enabled.
• has-intercluster-lif: This check verifies that the node has an intercluster LIF.
• node-object-limit: This check verifies that the node object limit option for the node is turned on.
Aggregate Configuration:
• mirroring-status: This check verifies that the aggregate is mirrored.
• disk-pool-allocation: This check verifies that the disks belonging to this aggregate have been correctly
allocated to the right pools.
At the end of the check, the command displays a summary of the results. This summary output can be
viewed again by running metrocluster check show. If any of the rows in this output show warnings,
more details can be viewed by running the metrocluster check show command for that component.
● Parameters
[-skip-dr-simulation {true|false}] - Skip the DR Readiness Checks (privilege: advanced)
If this optional parameter is set to true, the switchover and switchback simulations are not run.
● Examples
The following example shows the execution of the command when there are no warnings:
Component Result
------------------- ---------
nodes ok
clusters ok
lifs ok
config-replication ok
aggregates ok
5 entries were displayed.
Command completed. Use the "metrocluster check show -instance" command or sub-commands in "metrocluster check" directory for detailed results.
The following example shows the execution of the command when there are some warnings:
Component Result
------------------- ---------
nodes warning
clusters ok
lifs ok
config-replication ok
aggregates ok
5 entries were displayed.
Command completed. Use the "metrocluster check show -instance" command or sub-commands in "metrocluster check" directory for detailed results.
Related reference
metrocluster check show (page 299)
metrocluster check node show (page 316)
metrocluster check cluster show (page 305)
metrocluster check config-replication show (page 308)
metrocluster check aggregate show (page 302)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check show command displays the results of the metrocluster check run
command.
This command displays the high-level verification results for each of the components. If there are any errors
for a component, running the show command for that component (for example metrocluster check
node show or metrocluster check aggregate show) will display more information about the warning.
Note
This command does not run the checks; it only displays the results of the checks. To see the
latest results, run the metrocluster check run command and then run this command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-timestamp <MM/DD/YYYY HH:MM:SS>] - Time of Check
This is the time at which the metrocluster check run command was last run in this cluster and
these results were produced. If this parameter is specified, only rows with this timestamp will be
displayed.
[-component <MetroCluster Check Components>] - Name of the Component
This is the name of the component. If this parameter is specified, only rows with this component will be
displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check for the component. If this parameter is specified, only rows with this result
will be displayed.
[-additional-info <text>] - Additional Information/Recovery Steps
This is the additional info for the verification for this component. This field will have detailed
information about the warning and recovery steps. If this parameter is specified, only rows with this
additional info will be displayed.
● Examples
The following example shows the execution of the command when there are no warnings:
Component Result
------------------- ---------
nodes ok
clusters ok
lifs ok
config-replication ok
aggregates ok
connections ok
6 entries were displayed.
The following example shows the execution of the command when there are some warnings:
Component Result
------------------- ---------
nodes warning
clusters ok
lifs ok
config-replication ok
aggregates ok
connections ok
6 entries were displayed.
The following example shows the execution of the command with -instance option:
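A sketch of the invocation only (the cluster prompt clusA is hypothetical; detailed output is omitted):

clusA::> metrocluster check show -instance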
Related reference
metrocluster check run (page 298)
metrocluster check node show (page 316)
metrocluster check aggregate show (page 302)
metrocluster check cluster show (page 305)
metrocluster check config-replication show (page 308)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check aggregate show command displays the results of aggregate checks
performed by the metrocluster check run command.
The command verifies the following aspects of the configuration of all aggregates in MetroCluster:
• mirroring-status: This check verifies that the aggregate is mirrored.
• disk-pool-allocation: This check verifies that the disks belonging to this aggregate have been correctly
allocated to the right pools.
Additional information about the warnings (if any) and recovery steps can be viewed by running the
command with the -instance option.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <Node name>] - Node Name
This is the name of the node for which the check was run. If this parameter is specified, only rows with
this node will be displayed.
[-aggregate <aggregate name>] - Name of the Aggregate
This is the name of the aggregate for which the check was run. If this parameter is specified, only rows
with this aggregate will be displayed.
[-check <MetroCluster Aggregate Check>] - Type of Check
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-cluster <Cluster name>] - Name of Cluster
This is the name of the cluster the node belongs to. If this parameter is specified, only rows with this
cluster will be displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check. If this parameter is specified, only rows with this result will be displayed.
[-additional-info <text>, ...] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for
the warning. If this parameter is specified, only rows with this additional info will be displayed.
● Examples
The following example shows the execution of the command in a MetroCluster configuration with two nodes
per cluster:
The following example shows the execution of the command with -instance option:
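Sketches of the two invocations described above (the cluster prompt clusA is hypothetical; output is omitted):

clusA::> metrocluster check aggregate show
clusA::> metrocluster check aggregate show -instance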
Related reference
metrocluster check run (page 298)
metrocluster check show (page 299)
metrocluster check node show (page 316)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check cluster show command displays the results of cluster checks performed by
the metrocluster check run command.
The command displays the results of the following cluster configuration checks:
• negotiated-switchover-ready: This check verifies that the cluster is ready for a negotiated switchover
operation.
• switchback-ready: This check verifies that the cluster is ready for a switchback operation.
• job-schedules: This check verifies that the job schedules between the local and remote clusters are
consistent.
• licenses: This check verifies that the licenses between the local and remote clusters are consistent.
• periodic-check-enabled: This check verifies that the periodic MetroCluster Check Job is enabled.
• onboard-key-management: This check verifies that the Onboard Key Management hierarchies are
consistent.
• external-key-management: This check verifies that the External Key Management configurations are
consistent.
Additional information about the warnings (if any) and recovery steps can be viewed by running the
command with the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-check {negotiated-switchover-ready|switchback-ready|job-schedules|licenses|periodic-check-
enabled|onboard-key-management|external-key-management}] - Type of Check
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-cluster <Cluster name>] - Cluster Name
This is the name of the cluster the check results apply to. If this parameter is specified, only rows
matching the specified cluster will be displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check. If this parameter is specified, only rows with this result will be displayed.
[-additional-info <text>] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for
the warning. If this parameter is specified, only rows with this additional info will be displayed.
● Examples
The following example shows the execution of the command in a MetroCluster configuration:
The following example shows the execution of the command with the -instance parameter:
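Sketches of the two invocations described above (the cluster prompt clusA is hypothetical; output is omitted):

clusA::> metrocluster check cluster show
clusA::> metrocluster check cluster show -instance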
Related reference
metrocluster check run (page 298)
metrocluster check show (page 299)
metrocluster check node show (page 316)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check config-replication show command displays the results of MetroCluster
configuration replication.
The command verifies the following aspects of MetroCluster configuration replication:
• Enabled: Verifies that MetroCluster configuration replication is enabled on the cluster.
• Running: Verifies that MetroCluster configuration replication is running on the cluster.
• Remote Heartbeat: Verifies that the MetroCluster configuration replication heartbeat with the remote
cluster is healthy.
• Last Heartbeat Sent: Prints the timestamp of the last MetroCluster configuration replication heartbeat
sent to the remote cluster.
• Last Heartbeat Received: Prints the timestamp of the last MetroCluster configuration replication heartbeat
received from the remote cluster.
• Storage Status: Verifies that MetroCluster configuration replication storage is healthy.
• Storage In Use: Prints the location of MetroCluster configuration replication storage.
• Storage Remarks: Prints the underlying root cause for unhealthy MetroCluster configuration storage.
• Vserver Streams: Verifies that MetroCluster configuration replication Vserver streams are healthy.
• Cluster Streams: Verifies that MetroCluster configuration replication Cluster streams are healthy.
Additional information about the warnings (if any) and recovery steps can be viewed by running the
command with the -instance option.
● Parameters
[-instance ]
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example shows the output of metrocluster check config-replication show:
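A sketch of the invocation only (the cluster prompt clusA is hypothetical; output is omitted):

clusA::> metrocluster check config-replication show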
Related reference
metrocluster check run (page 298)
metrocluster check show (page 299)
metrocluster check config-replication show-aggregate-eligibility (page 309)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check config-replication show-aggregate-eligibility command displays
the MetroCluster configuration replication aggregate eligibility.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
This is the aggregate name. If this parameter is specified, only rows with this aggregate will be
displayed.
[-hosted-configuration-replication-volumes <volume name>, ...] - Currently Hosted
Configuration Replication Volumes
This is the list of the configuration replication volumes hosted on this aggregate. If this parameter is
specified, only rows with these configuration replication volumes will be displayed.
[-is-eligible-to-host-additional-volumes {true|false}] - Eligibility to Host Another Configuration
Replication Volume
This is the eligibility of the aggregate to host additional configuration replication volumes. If this
parameter is specified, only rows with this eligibility will be displayed.
[-comment <text>] - Comment for Eligibility Status
This is a comment regarding the eligibility of the aggregate to host configuration replication volumes. If
this parameter is specified, only rows with this comment will be displayed.
● Examples
The following example shows the execution of the command in a MetroCluster configuration with thirteen
aggregates in the cluster:
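A sketch of the invocation only (the cluster prompt clusA is hypothetical; output is omitted):

clusA::> metrocluster check config-replication show-aggregate-eligibility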
Related reference
metrocluster check run (page 298)
metrocluster check show (page 299)
metrocluster check config-replication show (page 308)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster check config-replication show-capture-status command indicates whether
or not a configuration change that would prevent a negotiated switchover is currently being captured for
replication.
● Examples
The following example shows the execution of the command in a MetroCluster configuration when capture
is not in progress:
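A sketch of the invocation only (the cluster prompt is hypothetical; the asterisk denotes the advanced privilege level required by this command, and output is omitted):

clusA::*> metrocluster check config-replication show-capture-status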
Related reference
metrocluster check run (page 298)
metrocluster check show (page 299)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check connection show command displays the check results of connections for
nodes in a MetroCluster over IP configuration.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster specified by
uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays information for the matching cluster.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the command displays information for the matching node specified by
uuid.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
● Examples
The following example shows the output of the metrocluster check connection show command:
Check Result: ok
Home Port: e0f
10.140.113.249 10.140.113.214 DR Auxiliary completed
Check Result: ok
Home Port: e0g
10.140.113.25 10.140.113.248 HA Partner completed
Check Result: ok
Home Port: e0g
10.140.113.25 10.140.113.217 DR Partner completed
Check Result: ok
Home Port: e0g
10.140.113.25 10.140.113.215 DR Auxiliary completed
Check Result: ok
24 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check lif repair-placement command reruns LIF placement for those LIFs
displayed by the metrocluster check lif show command. This command is expected to be run after
the admin manually rectifies the LIF placement failures displayed in the metrocluster check lif show
command output. The command is successful if the LIF placement rerun does not encounter any LIF
placement failure. Confirm this by subsequently running the metrocluster check lif show
command.
● Parameters
-vserver <Vserver Name> - sync-source Vserver Name
This is the name of the sync source Vserver that has LIF placement failures as reported by the
metrocluster check lif show command. This input ensures that the command is run on the
specified Vserver.
[-lif <lif-name>] - Logical Interface Name
This is the Logical Interface name that belongs to the sync source Vserver that has a LIF placement
failure in the destination cluster as reported by the metrocluster check lif show command. This
input ensures that the command is run on the specified LIF only.
● Examples
The following example shows the execution of the command with a sync source Vserver and a LIF specified:
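A sketch of such an invocation, using hypothetical Vserver and LIF names (vs1 and lif1; the cluster prompt is also illustrative, and output is omitted):

clusA::> metrocluster check lif repair-placement -vserver vs1 -lif lif1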
The following example shows the execution of the command with only a sync-source Vserver specified:
Command completed. Run the "metrocluster check lif show" command for results.
clusA::>
Related reference
metrocluster check lif show (page 315)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check lif show command displays the LIF placement failures in the MetroCluster
configuration.
The command verifies the following aspects of the LIF placements of all the data LIFs in the MetroCluster
configuration:
• lif-placed-on-dr-node: This check verifies that the LIF is placed on the DR partner node.
• port-selection: This check verifies that the LIF is placed on the correct port.
The LIF placement failures are mostly fabric/network connectivity issues that require manual intervention.
Once the connectivity issues are resolved manually, the admin is expected to run the metrocluster check
lif repair-placement command to resolve the LIF placement issues for the sync source Vserver.
Additional information about the warnings (if any) and recovery steps can be viewed by running the
command with the -instance option.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <Cluster name>] - Name of the Cluster
This is the name of the cluster the LIF belongs to. If this parameter is specified, only rows with this
cluster will be displayed.
[-vserver <text>] - Name of the Vserver
This is the name of the Vserver in the MetroCluster configuration. If this parameter is specified, only
rows with this Vserver will be displayed.
[-lif <lif-name>] - Name of the Lif
This is the name of the LIF.
[-check <MetroCluster LIF placement Check>] - Description
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check performed. If this parameter is specified, only rows with this result will be
displayed.
[-additional-info <text>] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for
the warning. If this parameter is specified, only rows with this additional info will be displayed.
● Examples
The following example shows the execution of the command in a MetroCluster configuration with two nodes
per cluster:
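A sketch of the invocation only (the cluster prompt clusA is hypothetical; output is omitted):

clusA::> metrocluster check lif show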
Related reference
metrocluster check lif repair-placement (page 314)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check node show command displays the results of node checks performed by the
metrocluster check run command.
The command displays the results of the following node configuration checks:
• node-reachable: This check verifies that the node is reachable.
• metrocluster-ready: This check verifies that the node is ready for MetroCluster configuration.
• local-ha-partner: This check verifies that the HA partner node is in the same cluster.
• ha-mirroring-on: This check verifies that HA mirroring for the node is configured.
• ha-mirroring-op-state: This check verifies that the HA mirroring operation is online.
• symmetric-ha-relationship: This check verifies that the relationship between the node and its HA partner
is symmetric.
• remote-dr-partner: This check verifies that the DR partner node is in the remote cluster.
• dr-mirroring-on: This check verifies that DR mirroring for the node is configured.
• dr-mirroring-op-state: This check verifies that the DR mirroring operation is online.
• symmetric-dr-relationship: This check verifies that the relationship between the node and its DR partner is
symmetric.
• remote-dr-auxiliary-partner: This check verifies that the DR auxiliary partner node is in the remote cluster.
• symmetric-dr-auxiliary-relationship: This check verifies that the relationship between the node and its DR
auxiliary partner is symmetric.
• storage-failover-enabled: This check verifies that storage failover is enabled.
• has-intercluster-lif: This check verifies that the node has an intercluster LIF.
• node-object-limit: This check verifies that the node object limit option for the node is turned on.
• automatic-uso: This check verifies that the automatic USO option for the node is enabled.
Additional information about the warnings (if any) and recovery steps can be viewed by running the
command with the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <Node name>] - Node Name
This is the name of the node for which the check was run. If this parameter is specified, only rows with
this node will be displayed.
[-check <MetroCluster Node Check>] - Type of Check
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-cluster <Cluster name>] - Cluster Name
This is the name of the cluster the node belongs to. If this parameter is specified, only rows with this
cluster will be displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check. If this parameter is specified, only rows with this result will be displayed.
[-additional-info <text>] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for
the warning. If this parameter is specified, only rows with this additional info will be displayed.
● Examples
The following example shows the execution of the command in a MetroCluster configuration with two nodes
per cluster:
clusB-02
node-reachable ok
metrocluster-ready ok
local-ha-partner ok
ha-mirroring-on warning
ha-mirroring-op-state ok
symmetric-ha-relationship warning
remote-dr-partner ok
dr-mirroring-on ok
dr-mirroring-op-state ok
symmetric-dr-relationship ok
remote-dr-auxiliary-partner ok
symmetric-dr-auxiliary-relationship warning
storage-failover-enabled ok
has-intercluster-lif ok
node-object-limit ok
automatic-uso ok
64 entries were displayed.
The following example shows the execution of the command with the -instance parameter:
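A sketch of the invocation only (the cluster prompt clusA is hypothetical; detailed output is omitted):

clusA::> metrocluster check node show -instance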
Related reference
metrocluster check run (page 298)
metrocluster check show (page 299)
metrocluster check aggregate show (page 302)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster check volume show command displays the results of volume checks performed by
the metrocluster check run command.
The command displays the results of the following volume configuration checks:
• unmirrored-flexgroups: This check looks for FlexGroup volumes residing on unmirrored aggregates.
• mixed-flexgroups: This check looks for FlexGroup volumes residing on a mix of mirrored and unmirrored
aggregates.
Additional information about the warnings, if any, and recovery steps can be viewed by running the
command with the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
This is the name of the Vserver that contains the volume that the check results apply to. If this
parameter is specified, only rows matching the specified Vserver will be displayed.
[-volume <volume name>] - Volume Name
This is the name of the volume that the check results apply to. If this parameter is specified, only rows
matching the specified volume will be displayed.
[-check <MetroCluster Volume Check>] - Type of Check
This is the type of the check performed. If this parameter is specified, only rows with this check will be
displayed.
[-result {ok|warning|not-run|not-applicable}] - Result of the Check
This is the result of the check. If this parameter is specified, only rows with this result will be displayed.
[-additional-info <text>, ...] - Additional Information/Recovery Steps
This is additional information about the check. This field has more information and recovery steps for
the warning. If this parameter is specified, only rows with this additional info will be displayed.
● Examples
The following example shows the execution of the command in a MetroCluster configuration:
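A sketch of the invocation only (the cluster prompt clusA is hypothetical; output is omitted):

clusA::> metrocluster check volume show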
Related reference
metrocluster check run (page 298)
metrocluster check show (page 299)
metrocluster check node show (page 316)
metrocluster check aggregate show (page 302)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster config-replication cluster-storage-configuration modify command
modifies the configuration of storage used for configuration replication.
● Parameters
[-disallowed-aggregates <aggregate name>, ...] - Disallowed Aggregates
Use this parameter to set the list of storage aggregates that are not available to host storage for
configuration replication.
● Examples
The following example disallows two aggregates named aggr1 and aggr2:
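Based on the parameter description above, the invocation might look like the following (the aggregate names aggr1 and aggr2 come from the example statement; the cluster prompt is illustrative, and the asterisk denotes the advanced privilege level required by this command):

clusA::*> metrocluster config-replication cluster-storage-configuration modify -disallowed-aggregates aggr1,aggr2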
Related reference
metrocluster config-replication cluster-storage-configuration show (page 325)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster config-replication cluster-storage-configuration show command shows
details of the configuration of the storage used for configuration replication.
The information displayed is the following:
• Disallowed Aggregates - The list of storage aggregates that are configured as not allowed to host storage
areas.
• Auto-Repair - Displays true if the automatic repair of storage areas used by configuration replication is
enabled.
• Auto-Recreate - Displays true if the automatic recreation of storage volumes used by configuration
replication is enabled.
• Use Mirrored Aggregate - Displays true if storage areas for configuration replication are to be hosted on a
mirrored aggregate.
● Examples
The following is an example of the metrocluster config-replication cluster-storage-
configuration show command:
Disallowed Aggregates: -
Auto-Repair: true
Auto-Recreate: true
Related reference
metrocluster config-replication cluster-storage-configuration modify (page 325)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster config-replication resync-status show command displays the state of the
configuration synchronization operation between the two clusters in the MetroCluster configuration.
This command displays the following details about the local cluster and the peer cluster:
• Source: This is the source side whose configuration is being replicated to the destination side.
• Destination: This is the destination side where the configuration is being replicated to from the source
side.
• State: This is the state of the synchronization operation.
• % Complete: This is the completion percentage of the operation.
● Examples
The following example shows the output of the command when synchronization is in progress:
The following example shows the output of the command when synchronization from clusB to clusA is in
progress:
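A sketch of the invocation only (the cluster prompt clusA is hypothetical; output is omitted):

clusA::> metrocluster config-replication resync-status show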
Related reference
metrocluster show (page 293)
metrocluster check config-replication show (page 308)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings show-status command displays the configuration
settings status for nodes in a MetroCluster setup. If a DR group has not been created, then only the status
for nodes in the local cluster is displayed.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays detailed information about all nodes in the cluster
matching the specified cluster-uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays detailed information about all the nodes in the
specified cluster.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
[-configuration-status <text>] - Configuration Settings Status
If this parameter is specified, the command displays detailed information about all nodes with the
specified configuration status.
[-config-error-info <text>] - Configuration Error Information
If this parameter is specified, the command displays detailed information about all nodes with the
specified configuration error information.
● Examples
The following example shows the display of MetroCluster setup status:
Nodes do not have a valid platform-specific personality value (equivalent to HAOSC parameter on non-Apollo platforms) for a MetroCluster setup.
Output of the command when the MetroCluster setup uses IP links and before the "metrocluster configuration-settings dr-group create" command is run:
Output of the command after the "metrocluster configuration-settings dr-group create" command is run:
Output of the command after the "metrocluster configuration-settings interface create" command is run for every node:
Output of the command after the "metrocluster configuration-settings connection connect" command is run:
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster configuration-settings calibration measure command measures the
bandwidth and latency between local and remote nodes in a MetroCluster over IP configuration.
This command should not be run when the MetroCluster configuration is under a heavy load. Bandwidth
measurements will attempt to fully saturate network connections to the remote cluster and may disrupt the
quality of service.
● Examples
The following example shows the output for the calibration measure command in MetroCluster over IP
configurations:
Warning: This operation will attempt to fully saturate the network connection to
the remote cluster for measuring bandwidth. This may disrupt
performance and should not be run when MetroCluster is under heavy
load. Do you want to continue? {y|n}: y
Measuring latency and bandwidth between node "A1" (10.140.113.214) and node "B1" (10.140.113.26) over port "e0f".
Measuring latency and bandwidth between node "A1" (10.140.113.215) and node "B1" (10.140.113.27) over port "e0g".
Measuring latency and bandwidth between node "A2" (10.140.113.216) and node "B2" (10.140.113.25) over port "e0f".
Measuring latency and bandwidth between node "A2" (10.140.113.217) and node "B2" (10.140.113.28) over port "e0g".
Measurements complete. Use the "metrocluster configuration-settings calibration show" command to display the results.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings calibration show command shows the bandwidth
and latency between local and remote nodes in a MetroCluster over IP configuration.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-calibration-id <integer>] - Calibration ID
This field specifies the calibration ID.
[-subnet <IP Address>] - IP Subnet
This field specifies the IP subnet.
[-node-uuid <UUID>] - Node UUID
This field specifies the UUID of the node.
[-home-port {<netport>|<ifgrp>}] - Home Port
This field specifies the name of the port used to measure latency and bandwidth.
[-node {<nodename>|local}] - Node Name
This field specifies the name of the node.
● Examples
The following example shows the output for the calibration show command in MetroCluster over IP
configurations:
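For reference, a typical invocation might look as follows; the cluster prompt "clusA" is hypothetical, and the filter parameters described above (such as -node) are optional:
clusA::> metrocluster configuration-settings calibration show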
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster configuration-settings connection check command checks the settings of a
MetroCluster over IP configuration.
This command is used for MetroCluster configurations that are connected through IP links.
● Examples
The following example shows the output for the check command in MetroCluster over IP configurations:
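As documented here, the check takes no parameters; a typical invocation might look as follows (the cluster prompt "clusA" is hypothetical):
clusA::> metrocluster configuration-settings connection check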
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings connection connect command configures the
connections that mirror NV logs and access remote storage between partner nodes in a MetroCluster setup.
This command is used for MetroCluster setups that are connected through IP links. MetroCluster setups that
are connected through FC links will configure the FC connections automatically.
The metrocluster configuration-settings commands are run in the following order to set up
MetroCluster:
• metrocluster configuration-settings dr-group create,
• metrocluster configuration-settings interface create,
• metrocluster configuration-settings connection connect.
Before this command is run
• The DR groups must have been configured. Run the metrocluster configuration-settings dr-
group show command to verify that every node is partnered in a DR group.
• The network logical interfaces must have been configured on every node. Use the metrocluster
configuration-settings interface show command to verify that every node has network logical
interfaces configured to mirror NV logs and access remote storage.
After this command completes successfully, every node will:
• Have NV log mirroring configured and mirroring disabled. NV log mirroring will be enabled by the
metrocluster configure command.
• Have access to remote storage. Use the storage disk show -pool Pool1 command to view the
remote disks that are hosted on DR partner nodes.
The DR groups and network logical interfaces that were configured by the metrocluster
configuration-settings commands cannot be deleted after the connections have been configured. The
metrocluster configuration-settings connection disconnect command must be run to remove
the connections before the DR groups and network logical interfaces can be deleted.
● Examples
The following example shows configuration of connections in a MetroCluster over IP setup:
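As documented here, the command takes no parameters; a typical invocation might look as follows (the cluster prompt "clusA" is hypothetical):
clusA::> metrocluster configuration-settings connection connect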
Related reference
metrocluster configuration-settings commands (page 327)
metrocluster configuration-settings dr-group create (page 340)
metrocluster configuration-settings interface create (page 344)
metrocluster configuration-settings dr-group show (page 343)
metrocluster configuration-settings interface show (page 348)
metrocluster configure (page 287)
metrocluster configuration-settings connection disconnect (page 334)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings connection disconnect command removes the
connections between nodes in a DR group that are used to mirror NV logs and access remote storage.
This command cannot be run if a node in the DR group has remote disks assigned to the node. The assigned
ownership of remote disks can be removed by running the storage disk removeowner command.
The metrocluster configuration-settings commands are run in the following order to remove
MetroCluster over IP configuration:
• metrocluster configuration-settings connection disconnect,
• metrocluster configuration-settings interface delete,
• metrocluster configuration-settings dr-group delete.
● Parameters
-dr-group-id <integer> - DR Group ID
This parameter identifies the DR group to be disconnected.
● Examples
The following example illustrates removal of connections in a four-node MetroCluster setup:
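A typical invocation might look as follows; the cluster prompt and the DR group ID value are hypothetical:
clusA::> metrocluster configuration-settings connection disconnect -dr-group-id 1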
Related reference
storage disk removeowner (page 1078)
metrocluster configuration-settings commands (page 327)
metrocluster configuration-settings interface delete (page 347)
metrocluster configuration-settings dr-group delete (page 341)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings connection show command displays the connection
configuration information between the nodes in a MetroCluster setup.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster specified by
uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays information for the matching cluster.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the command displays information for the matching node specified by
uuid.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
[-home-port {<netport>|<ifgrp>}] - Home Port
If this parameter is specified, the command displays information for the matching home-port.
[-relationship-type <Roles of MetroCluster Nodes>] - Relationship Role Type
If this parameter is specified, the command displays information for the matching relationship-type.
[-source-address <IP Address>] - Source Network Address
If this parameter is specified, the command displays information for the matching source address.
[-destination-address <IP Address>] - Destination Network Address
If this parameter is specified, the command displays information for the matching destination address.
[-partner-cluster-uuid <UUID>] - Partner Cluster UUID
If this parameter is specified, the command displays information for the matching partner-cluster-uuid.
[-partner-node-uuid <UUID>] - Partner Node UUID
If this parameter is specified, the command displays information for the matching partner-node-uuid.
[-partner-node <text>] - Partner Node Name
If this parameter is specified, the command displays information for the matching partner-node.
[-partner-type <text>] - Partner Relationship Type
If this parameter is specified, the command displays information for the matching partner-type.
[-config-state <text>] - Configuration State
If this parameter is specified, the command displays information for the matching config-state.
[-config-error-info <text>] - Configuration Error Information
If this parameter is specified, the command displays information for the matching config-error-info.
● Examples
The following example shows the output of the metrocluster configuration-settings connection
show command:
Output of the command before the connections are established using the metrocluster configuration-settings connection connect command:
Output of the command after the connections are established using the metrocluster configuration-settings
connection connect command:
Related reference
metrocluster configuration-settings connection connect (page 332)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings dr-group create command partners the nodes that
will comprise a DR group in a MetroCluster setup.
This command is used for MetroCluster setups that are connected through IP links. MetroCluster setups that
are connected through FC links will configure DR groups automatically and do not require the
metrocluster configuration-settings commands.
The metrocluster configuration-settings commands are run in the following order to set up
MetroCluster:
• metrocluster configuration-settings dr-group create,
• metrocluster configuration-settings interface create,
• metrocluster configuration-settings connection connect.
Before running this command, cluster peering must be configured between the local and partner clusters.
Run the cluster peer show command to verify that peering is available between the local and partner
clusters. This command configures a local node and a remote node as DR partner nodes. The command also
configures the HA partner of the local node and the HA partner of the remote node as the other DR partner
nodes in the DR group.
● Parameters
-partner-cluster <Cluster name> - Partner Cluster Name
Use this parameter to specify the name of the partner cluster.
-local-node {<nodename>|local} - Local Node Name
Use this parameter to specify the name of a node in the local cluster.
-remote-node <text> - Remote Node Name
Use this parameter to specify the name of a node in the partner cluster that is to be the DR partner of
the specified local node.
● Examples
The following example shows the creation of the MetroCluster DR group:
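A typical invocation might look as follows, using the parameters described above; the cluster and node names (clusA, clusB, clusA-01, clusB-01) are hypothetical:
clusA::> metrocluster configuration-settings dr-group create -partner-cluster clusB -local-node clusA-01 -remote-node clusB-01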
Related reference
metrocluster configuration-settings commands (page 327)
metrocluster configuration-settings interface create (page 344)
metrocluster configuration-settings connection connect (page 332)
cluster peer show (page 114)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings dr-group delete command deletes a DR group and its
node partnerships that were configured using the metrocluster configuration-settings dr-group
create command.
This command cannot be run if the metrocluster configuration-settings interface create
command has configured a network logical interface on a network port provisioned for MetroCluster. The
metrocluster configuration-settings interface delete command must then be run to delete
the network logical interfaces on every node in the DR group.
The metrocluster configuration-settings commands are run in the following order to remove the
MetroCluster over IP configuration:
• metrocluster configuration-settings connection disconnect,
• metrocluster configuration-settings interface delete,
• metrocluster configuration-settings dr-group delete.
● Parameters
-dr-group-id <integer> - Dr group Id
This parameter identifies the DR group to be deleted.
● Examples
The following example shows the deletion of the MetroCluster DR group:
Warning: This command deletes the existing DR group relationship. Are you sure
you want to proceed ? {y|n}: y
[Job 279] Job succeeded: DR Group Delete is successful.
Related reference
metrocluster configuration-settings dr-group create (page 340)
metrocluster configuration-settings interface create (page 344)
metrocluster configuration-settings interface delete (page 347)
metrocluster configuration-settings commands (page 327)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings dr-group show command displays the DR groups and
their nodes in the MetroCluster over IP setup.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays information for the specified cluster.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the command displays information for the matching nodes uuid.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
[-dr-partner-node-uuid <UUID>] - DR Partner Node UUID
If this parameter is specified, the command displays information for the matching DR partner node uuid.
[-dr-partner-node <text>] - DR Partner Node Name
If this parameter is specified, the command displays information for the matching DR partner nodes.
● Examples
The following example illustrates the display of DR group configuration in a four-node MetroCluster setup:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings interface create command configures the network
logical interfaces that will be used on a node in a MetroCluster setup to mirror NV logs and access remote
storage.
This command is used for MetroCluster setups that are connected through IP links. MetroCluster setups that
are connected through FC links do not require the user to provision network logical interfaces to mirror NV
logs and access remote storage.
The metrocluster configuration-settings commands are run in the following order to set up
MetroCluster:
• metrocluster configuration-settings dr-group create,
• metrocluster configuration-settings interface create,
• metrocluster configuration-settings connection connect.
Before running this command, the node's DR group must be configured using the metrocluster
configuration-settings dr-group create command. Run the metrocluster configuration-
settings dr-group show command to verify that the node's DR group has been configured.
● Parameters
-cluster-name <Cluster name> - Cluster Name
Use this parameter to specify the name of the local or partner cluster.
-home-node <text> - Home Node
Use this parameter to specify the home node in the cluster which hosts the interface.
-home-port {<netport>|<ifgrp>} - Home Port
Use this parameter to specify the home port provisioned for MetroCluster.
-address <IP Address> - Network Address
Use this parameter to specify the network address to be assigned to the home port.
-netmask <Contiguous IP Mask> - Netmask
Use this parameter to specify the network mask to be assigned to the interface.
[-vlan-id <integer>] - Virtual LAN ID
Use this parameter to specify the VLAN ID.
● Examples
This example shows configuring a logical interface on a MetroCluster IP-capable port:
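A typical invocation might look as follows, using the parameters described above; the cluster name, node name, port, and addresses are hypothetical:
clusA::> metrocluster configuration-settings interface create -cluster-name clusA -home-node clusA-01 -home-port e0f -address 10.0.1.1 -netmask 255.255.255.0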
Related reference
metrocluster configuration-settings commands (page 327)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings interface delete command deletes the network
logical interface that was configured on a network port provisioned for MetroCluster.
This command cannot be run if the metrocluster configuration-settings connection connect
command has set up the connections between the nodes in a DR group. The metrocluster
configuration-settings connection disconnect command must then be run to remove the
connections.
The metrocluster configuration-settings commands are run in the following order to remove the
MetroCluster over IP configuration:
• metrocluster configuration-settings connection disconnect,
• metrocluster configuration-settings interface delete,
• metrocluster configuration-settings dr-group delete.
● Parameters
-cluster-name <Cluster name> - Cluster Name
Use this parameter to specify the name of the local or partner cluster.
-home-node <text> - Home Node
Use this parameter to specify the home node in the cluster which hosts the interface.
-home-port {<netport>|<ifgrp>} - Home Port
Use this parameter to specify the home port provisioned for MetroCluster.
● Examples
The following example shows the deletion of interface in a MetroCluster setup:
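A typical invocation might look as follows, using the parameters described above; the cluster name, node name, and port are hypothetical:
clusA::> metrocluster configuration-settings interface delete -cluster-name clusA -home-node clusA-01 -home-port e0f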
Related reference
metrocluster configuration-settings connection connect (page 332)
metrocluster configuration-settings connection disconnect (page 334)
metrocluster configuration-settings commands (page 327)
metrocluster configuration-settings dr-group delete (page 341)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings interface show command displays the network logical
interfaces that were provisioned for MetroCluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, the command displays information for the matching DR group.
[-cluster-uuid <UUID>] - Cluster UUID
If this parameter is specified, the command displays information for the matching cluster specified by
uuid.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, the command displays information for the matching cluster.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the command displays information for the matching nodes uuid.
[-node <text>] - Node Name
If this parameter is specified, the command displays information for the matching nodes.
[-home-port {<netport>|<ifgrp>}] - Home Port
If this parameter is specified, all interfaces with home-port set to this value are displayed.
[-address <IP Address>] - Network Address
If this parameter is specified, the command displays information for the matching network address.
[-netmask <Contiguous IP Mask>] - Netmask
If this parameter is specified, all interfaces with netmask set to this value are displayed.
[-gateway <IP Address>] - Gateway
If this parameter is specified, all interfaces with gateway set to this value are displayed.
[-config-state <text>] - Configuration State
If this parameter is specified, all interfaces with this field set to the specified value are displayed.
[-config-error-info <text>] - Configuration Error Information
If this parameter is specified, all interfaces with this field set to the specified value are displayed.
[-vlan-id <integer>] - Virtual LAN ID
If this parameter is specified, all interfaces with vlan-id set to this value are displayed.
● Examples
The following example illustrates the display of logical interfaces configured in a four-node MetroCluster setup:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings mediator add command configures the connection
between the mediator and all the nodes in a MetroCluster configuration.
Before this command is run, MetroCluster must be configured on all the clusters.
The mediator username and password are required to successfully add the mediator.
After this command completes successfully:
• Every node will have a connection with the mediator.
• The mediator disks will be assigned to the nodes in the MetroCluster configuration.
• AUSO will be enabled.
● Parameters
-mediator-address <IP Address> - Mediator IP Address
Specifies the IP address of the mediator to which the nodes in the MetroCluster configuration will be
connected.
● Examples
The following example connects the nodes to the mediator in a MetroCluster configuration over IP setup:
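A typical invocation might look as follows, using the parameter described above; the cluster prompt and the mediator IP address are hypothetical:
clusA::> metrocluster configuration-settings mediator add -mediator-address 10.0.2.5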
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings mediator remove command removes the connection
between the mediator and all the nodes in a MetroCluster configuration.
The mediator username and password are required to successfully remove the mediator.
● Examples
The following example removes the connections between the nodes and the mediator in a MetroCluster
configuration over IP setup:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster configuration-settings mediator show command displays the connection
status of the nodes with the Mediator in a MetroCluster configuration.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-mediator-address <IP Address>] - Mediator IP Address
Specifies the IP address of the mediator to which the nodes in the MetroCluster configuration are
connected.
[-port <integer>] - Mediator port number
Specifies the port of the mediator to which the nodes in the MetroCluster configuration are connected.
[-node {<nodename>|local}, ...] - Node Name
Specifies the nodes in the MetroCluster configuration which are connected to the mediator.
[-reachable {true|false}, ...] - Connection Status of the Mediator
Specifies the connection status of the nodes with the mediator in the MetroCluster configuration.
[-configured {true|false}, ...] - Mediator Configuration Status
Specifies the configuration status of the nodes with the mediator in the MetroCluster configuration.
● Examples
The following example shows the mediator connection status in a MetroCluster configuration over IP setup:
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster interconnect adapter modify command enables you to modify settings of the
MetroCluster interconnect adapter.
● Parameters
-node {<nodename>|local} - Node Name
This parameter specifies the node name.
-is-ood-enabled {true|false} - Is Out-of-Order Delivery Enabled?
This parameter specifies the out-of-order delivery setting on the adapter.
● Examples
The following example enables out-of-order delivery for the port 'fcvi_device_0' on the node 'clusA-01':
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster interconnect adapter show command displays interconnect adapter information
for the nodes in a MetroCluster configuration.
This command displays the following details about the local node and the HA partner node:
• Node: This field specifies the name of the node in the cluster.
• Adapter Name: This field specifies the name of the interconnect adapter.
• Adapter Type: This field specifies the type of the interconnect adapter.
• Link Status: This field specifies the physical link status of the interconnect adapter.
• Is OOD Enabled: This field specifies the out-of-order delivery status of the interconnect adapter.
• IP Address: This field specifies the IP address assigned to the interconnect adapter.
• Port Number: This field specifies the port number of the interconnect adapter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-connectivity ]
Displays the connectivity information from all the interconnect adapters to the connected nodes.
| [-switch ]
Displays details of switches connected to all the interconnect adapters.
| [-connectivity-hidden ] (privilege: advanced)
Displays additional connectivity information (IP address, Area ID, Port ID) from all the interconnect
adapters to the connected nodes.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Displays information only about the interconnect adapters that are hosted by the specified node.
[-adapter <text>] - Adapter
Displays information only about the interconnect adapters that match the specified name.
[-port-name <text>] - Port Name
Displays information only about the interconnect adapters that host the specified port name.
[-type <text>] - Adapter Type
Displays information only about the interconnect adapters that match the specified adapter type.
[-physical-status <text>] - Physical Status
Displays information only about the interconnect adapters that match the specified physical status.
[-wwn <text>] - Adapter Port World Wide Name
Displays information only about the interconnect adapters that match the specified world wide name.
[-address <text>] - IP Address
Displays information only about the interconnect adapters that match the specified IP address.
[-firmware-version <text>] - Firmware Version
Displays information only about the interconnect adapters that match the specified firmware version.
[-link-speed <text>] - Link Speed
Displays information only about the interconnect adapters that match the specified link speed.
[-link-speed-neg-type <text>] - Link Speed Negotiation Type
Displays information only about the interconnect adapters that match the specified negotiated link
speed type.
[-switch-name <text>] - Switch Name
Displays information only about the interconnect adapters that are connected to the specified switch.
[-switch-model <text>] - Switch Model
Displays information only about the interconnect adapters that are connected to the switch with the
specified model.
[-switch-wwn <text>] - Switch WWName
Displays information only about the interconnect adapters that are connected to the switch with the
specified world wide name.
[-switch-vendor <text>] - Switch Vendor
Displays information only about the interconnect adapters that are connected to the switch with the
specified vendor.
[-switch-status <text>] - Switch Status
Displays information only about the interconnect adapters that are connected to the switch with the
specified operational status.
[-switch-port-number <text>] - Switch Port Number
Displays information only about the interconnect adapters that are connected to the switch with the
specified port number.
[-switch-port-wwpn <text>] - Switch Port WWPN
Displays information only about the interconnect adapters that are connected to the switch with the
specified world wide port name.
[-remote-adapter-name-list <text>, ...] - Remote Adapter Name List
Displays information only about the interconnect adapters that are connected to the specified remote
adapters.
[-remote-adapter-wwn-list <text>, ...] - Remote Adapter WWName List
Displays information only about the interconnect adapters that are connected to the remote adapters
with the specified world wide names.
[-remote-adapter-address-list <text>, ...] - Remote Adapter IP Address List
Displays information only about the interconnect adapters that are connected to the remote adapters
with the specified IP addresses.
[-remote-adapter-port-id-list <Hex Integer>, ...] - Remote Adapter Port ID List
Displays information only about the interconnect adapters that are connected to the remote adapters
with the specified port IDs.
[-remote-adapter-domain-id-list <integer>, ...] - Remote Adapter Domain ID List
Displays information only about the interconnect adapters that are connected to the remote adapters
with the specified domain IDs.
[-remote-adapter-area-id-list <integer>, ...] - Remote Adapter Area ID List
Displays information only about the interconnect adapters that are connected to the remote adapters
with the specified Area IDs.
[-remote-partner-system-id-list <integer>, ...] - Remote Partner System ID List
Displays information only about the interconnect adapters that are connected to the remote nodes with
the specified System IDs.
[-remote-partner-name-list {<nodename>|local}, ...] - Remote Partner Name List
Displays information only about the interconnect adapters that are connected to the specified remote
nodes.
[-is-ood-enabled {true|false}] - Is Out-of-Order Delivery Enabled?
Displays information only about the interconnect adapters that match the specified out-of-order delivery
setting.
● Examples
The following example shows the output of the command during normal operation (neither cluster is in
switchover state):
The following example shows the output of the command after MetroCluster switchover is performed:
The following example shows the output of the command with the connectivity field during normal operation
(neither cluster is in switchover state):
clusA::> metrocluster interconnect adapter show -connectivity -node local -type FC-VI
WWName: 21:00:00:24:ff:32:01:68
PortNo: 1a
Remote Adapters:
Remote Adapters:
The following example shows the output of the command with the connectivity field after MetroCluster
switchover is performed:
clusA::> metrocluster interconnect adapter show -connectivity -node local -type FC-VI
Remote Adapters:
Remote Adapters:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster interconnect mirror show command displays NVRAM mirror information for the
nodes configured in a MetroCluster.
This command displays the following details about the local node and the HA partner node:
• Node: This field specifies the name of the node in the cluster.
• Partner Name: This field specifies the name of the partner node.
• Partner Type: This field specifies the type of the partner.
• Mirror Admin Status: This field specifies the administrative status of the NVRAM mirror between partner
nodes.
• Mirror Oper Status: This field specifies the operational status of the NVRAM mirror between partner nodes.
• Adapter: This field specifies the name of the interconnect adapter used for NVRAM mirroring.
• Type: This field specifies the type of the interconnect adapter used for NVRAM mirroring.
• Status: This field specifies the physical status of the interconnect adapter used for NVRAM mirroring.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, mirror details of the specified node are displayed.
[-partner-type {HA|DR|AUX}] - Partner Type
If this parameter is specified, mirror details of the specified partner type are displayed.
[-adapter <text>] - Adapter
If this parameter is specified, mirror details of the specified adapter are displayed.
[-type <text>] - Adapter Type
If this parameter is specified, mirror details of the specified adapter type are displayed.
[-status <text>] - Status
If this parameter is specified, mirror details of the adapter with the specified status are displayed.
[-mirror-oper-status {unknown|online|offline}] - Mirror Operational Status
If this parameter is specified, only mirror details with the specified operational status are displayed.
● Examples
The following example shows the output of the command during normal operation (neither cluster is in
switchover state):
The following example shows the output of the command after MetroCluster switchover is performed:
Node      Partner Name  Partner Type  Mirror Admin  Mirror Oper  Adapter        Type   Status
                                      Status        Status
--------- ------------- ------------- ------------- ------------ -------------- ------ ------
clusA-02  clusA-01      HA            enabled       online       cxgb3_0        iWARP  Up
                                                                 cxgb3_0        iWARP  Up
          clusB-02      DR            disabled      offline      fcvi_device_0  FC-VI  Up
                                                                 fcvi_device_1  FC-VI  Up
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster interconnect mirror multipath show command displays the NVRAM mirror
multipath policy for the nodes configured in a MetroCluster.
This command displays the following details about the local node and the HA partner node:
• Node: This field specifies the name of the node in the cluster.
• Multipath Policy: This field specifies the multipath policy used for NVRAM mirroring.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, mirror details of the specified node are displayed.
[-multipath-policy {no-mp|static-map|dynamic-map|round-robin}] - Multipath Policy
If this parameter is specified, nodes with the specified multipath policy are displayed.
● Examples
The following example shows the output of the command:
clusA-2 static-map
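As a sketch, the output can be narrowed to a single policy with the -multipath-policy parameter described above (cluster name hypothetical):
clusA::> metrocluster interconnect mirror multipath show -multipath-policy static-map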
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster node show command displays configuration information for the nodes in the
MetroCluster configuration.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-partners ]
If this option is used, the MetroCluster node partnership view is displayed.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-dr-group-id <integer>] - DR Group ID
If this parameter is specified, all nodes belonging to the specified DR group are displayed.
[-cluster <Cluster name>] - Cluster Name
If this parameter is specified, all nodes belonging to the specified cluster are displayed.
[-node <Node name>] - Node Name
If this parameter is specified, the specified node is displayed.
[-ha-partner <Node name>] - HA Partner Name
If this parameter is specified, the node with the specified HA partner is displayed.
[-dr-cluster <Cluster name>] - DR Cluster Name
If this parameter is specified, all nodes belonging to the specified cluster are displayed.
[-dr-partner <Node name>] - DR Partner Name
If this parameter is specified, the node with the specified DR partner is displayed.
[-dr-auxiliary <Node name>] - DR Auxiliary Name
If this parameter is specified, the node with the specified DR auxiliary partner is displayed.
[-node-uuid <UUID>] - Node UUID
If this parameter is specified, the node with the specified UUID is displayed.
[-ha-partner-uuid <UUID>] - HA Partner UUID
If this parameter is specified, the node with the specified HA partner is displayed.
[-dr-partner-uuid <UUID>] - DR Partner UUID
If this parameter is specified, the node with the specified DR partner is displayed.
[-dr-auxiliary-uuid <UUID>] - DR Auxiliary UUID
If this parameter is specified, the node with the specified DR auxiliary partner is displayed.
● Examples
The following example shows the output of the command before the MetroCluster configuration is done:
The following example shows the output of the command when some DR groups in the MetroCluster
configuration are not yet configured:
The following example shows the output of the command after all DR groups in the MetroCluster
configuration are configured:
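As a sketch, the partnership view and a DR-group filter described above can be requested as follows (cluster name and group ID hypothetical):
clusA::> metrocluster node show -partners
clusA::> metrocluster node show -dr-group-id 1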
Related reference
metrocluster configure (page 287)
metrocluster show (page 293)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster operation show command displays information about the most recent MetroCluster
operation run on the local cluster.
This command will display information about all MetroCluster commands except for the commands in the
metrocluster check directory. This command will not display any information after MetroCluster has
been completely unconfigured using the metrocluster unconfigure command.
● Examples
The following example shows the output of metrocluster operation show after running a
metrocluster configure command was successful:
Related reference
metrocluster check commands (page 297)
metrocluster configure (page 287)
metrocluster operation history show (page 364)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster operation history show command displays information about all the MetroCluster
operations run on the local cluster.
This command will display information about all MetroCluster commands except for the commands in the
metrocluster check directory. This command will not display any information after MetroCluster has
been completely unconfigured using the metrocluster unconfigure command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-operation-uuid <UUID>] - Identifier for the Operation
This is the UUID of the operation. If this parameter is specified, only the operation with this UUID is
displayed.
● Examples
The following example shows the output of metrocluster operation history show after some
MetroCluster operations have been performed:
Related reference
metrocluster check commands (page 297)
metrocluster operation show (page 363)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster transition disable command is used to end the transition from a MetroCluster FC
to a MetroCluster IP configuration. The command clears the transition-mode setting on the local and
partner clusters.
● Examples
The following example shows how to disable the transition mode:
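A sketch of the invocation, at the advanced privilege level indicated above:
clusA::*> metrocluster transition disable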
Related reference
metrocluster transition show-mode (page 367)
metrocluster transition enable (page 366)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster transition enable command is used to start the transition from a MetroCluster FC
to a MetroCluster IP configuration. The command sets the transition mode on the local and partner clusters.
● Parameters
-transition-mode <Mode For FC To Ip Transition> - Transition Mode
This parameter specifies the setting for the transition-mode.
[-force {true|false}] - Force enable
This optional parameter forces the setting for the transition-mode.
● Examples
The following example shows how to enable the non-disruptive transition mode:
The following example shows how to enable the disruptive transition mode:
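Sketches of both invocations, assuming the mode names non-disruptive and disruptive reported by metrocluster transition show-mode:
clusA::*> metrocluster transition enable -transition-mode non-disruptive
clusA::*> metrocluster transition enable -transition-mode disruptive -force true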
Related reference
metrocluster transition show-mode (page 367)
metrocluster transition disable (page 366)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster transition show-mode command is used to display the current setting of transition.
● Examples
The following example shows the output of the command when transition is not enabled:
The following example shows the output of the command when non-disruptive transition is enabled:
The following example shows the output of the command when disruptive transition is enabled:
disruptive
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster vserver recover-from-partial-switchback command executes the steps needed
for a Vserver to return to a healthy state after partial completion of a switchback.
● Examples
Related reference
metrocluster vserver recover-from-partial-switchover (page 368)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The metrocluster vserver recover-from-partial-switchover command executes the steps needed
for a Vserver to return to a healthy state after partial completion of a switchover.
● Examples
Related reference
metrocluster vserver recover-from-partial-switchback (page 368)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster vserver resync command resynchronizes the Vserver with its partner Vserver.
● Parameters
-cluster <Cluster name> - Cluster Name
Name of the cluster to which the Vserver belongs.
-vserver <vserver> - Vserver
Name of the Vserver to be resynchronized.
● Examples
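A sketch built from the two required parameters (cluster and Vserver names hypothetical):
clusA::> metrocluster vserver resync -cluster clusA -vserver vs1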
Related reference
metrocluster vserver show (page 369)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The metrocluster vserver show command displays configuration information for all pairs of Vservers in
MetroCluster.
● Parameters
{ [-fields <fieldname>, ...]
The command output includes the specified field or fields.
| [-creation-time ] (privilege: advanced)
Shows the last configuration modification time on the Vserver.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster <Cluster name>] - Cluster Name
Name of the cluster to which the Vserver belongs.
[-vserver <vserver>] - Vserver
Name of the Vserver.
[-partner-vserver <vserver>] - Partner Vserver
Name of the partner Vserver.
[-configuration-state {healthy|unhealthy|degraded|pending-setup|syncing|replication-paused|
pending-switchback}] - Configuration State
Configuration states include:
• healthy
• unhealthy
• degraded indicates that Vservers are not in sync
• syncing indicates that the Vserver configuration is being synchronized
● Examples
The following example shows the output of the command when partner Vservers are created:
Cluster: clusA
Partner Configuration
Vserver Vserver State
------------------- ---------------------- -----------------
clusA clusB healthy
vs1 vs1-mc healthy
Cluster: clusB
Partner Configuration
Vserver Vserver State
------------------- ---------------------- -----------------
clusB clusA healthy
3 entries were displayed.
The following example shows the output of the command when the partner Vserver creation is pending:
Cluster: clusA
Partner Configuration
Vserver Vserver State
------------------- ---------------------- -----------------
clusA clusB healthy
vs1 - pending-setup
Corrective Action: Create Ipspace ips1 on the partner cluster.
2 entries were displayed.
Related reference
metrocluster vserver resync (page 368)
The network commands enable you to manage the network interfaces in a cluster.
network ping
Ping
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network ping command displays whether a remote address is reachable and responsive, the number
of transmitted and received packets (if a count is specified), and their round-trip time. The command
requires a source node or logical interface from which the ping is sent, and a destination IP address. You can
specify the source node by name, or a logical interface and its Vserver.
● Parameters
{ -node <nodename> - Node
Use this parameter to send the ping from the node you specify.
| -lif <lif-name>} - Logical Interface
Use this parameter to send the ping from the logical interface you specify.
-vserver <vserver> - Vserver
Use this parameter to send the ping from the Vserver where the intended logical interface resides. The
default value is the system Vserver for cluster administrators.
[-use-source-port {true|false}] - (DEPRECATED)-Use Source Port of Logical Interface (privilege:
advanced)
This parameter is only applicable when the -lif parameter is specified. When set to true, the ping
packet will be sent out via the port which is currently hosting the IP address of the logical interface.
Otherwise, the ping packet will be sent out via a port based on the routing table.
Note
The use-source-port parameter is deprecated and may be removed in a future release of Data
ONTAP.
[-count <integer>] - Count
Use this parameter to specify the maximum number of ECHO_REQUESTS to be sent to the destination.
The default is 20 requests. In the absence of the -show-detail option, ping reports that the destination is
alive after receiving the first ECHO_REPLY response, independent of this value.
[-wait <integer>] - Packet Send Wait Time (secs)
Use this parameter to specify the number of seconds to wait between sending packets. The default is
one second.
[-flood [true]] - Flood Ping (privilege: advanced)
Use this parameter to execute the command in flood mode. In flood mode, the command issues pings as
fast as they are received, unless you specify a wait time.
[-disallow-fragmentation | -D [true]] - Disallow Packet Fragmentation
Use this parameter to prevent transport mechanisms from fragmenting ping packets in transit.
Preventing fragmentation assures consistent packet size, making it easier to see transport bottlenecks.
[-wait-response <integer>] - Packet Response Wait Time (ms)
Use this parameter to specify the number of milliseconds to wait for each response packet. The default is
10000 milliseconds (10 seconds).
● Examples
This example shows a ping from node xena to the destination server 10.98.16.164 with the server
responding that it is up and running.
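Reconstructed from the parameters above, the invocation takes this form (the reply line is typical ping reporting, not captured output):
cluster1::> network ping -node xena -destination 10.98.16.164
10.98.16.164 is alive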
network ping6
Ping an IPv6 address
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network ping6 command uses the ICMPv6 protocol's mandatory ICMP6_ECHO_REQUEST datagram to
elicit an ICMP6_ECHO_REPLY from a host or gateway. ICMP6_ECHO_REQUEST datagrams ("pings") have an
IPv6 header, and ICMPv6 header formatted as documented in RFC2463.
● Parameters
{ -node <nodename> - Node Name
Use this parameter to originate ping6 from the specified node.
| -lif <lif-name>} - Logical Interface
Use this parameter to originate ping6 from the specified logical interface.
-vserver <vserver name> - Vserver Name
Use this parameter to originate ping6 from the specified Vserver. The default value is the system Vserver
for cluster administrators.
-destination <Remote InetAddress> - Destination
Use this parameter to specify the IPv6 address of the destination node.
[-buffer-size | -b <integer>] - Socket Buffer Size
Use this parameter to set the socket buffer size.
[-count | -c <integer>] - Max Requests to Send/Receive
Use this parameter to specify the maximum number of requests and replies. The default value is 20.
[-reverse-lookup | -H [true]] - Reverse-lookup of IPv6 addresses
Use this parameter to specify reverse-lookup of IPv6 addresses. Unless this parameter is specified, the
ping6 command does not attempt reverse lookup.
[-interval | -i <integer>] - Wait between Packets (secs)
Use this parameter to specify the delay time between packets in seconds. The default value is 1 second.
This parameter is incompatible with the flood parameter.
[-preload | -l <integer>] - Send Packets as Fast as Possible (privilege: advanced)
Use this parameter if preload is required. If specified, ping6 sends that many packets as fast as possible
before falling into its normal mode of behavior.
[-use-source-port {true|false}] - Use Source Port of Logical Interface (privilege: advanced)
This parameter is only applicable when the -lif parameter is specified. When set to true, the ping
packet will be sent out via the port which is currently hosting the IP address of the logical interface.
Otherwise, the ping packet will be sent out via a port based on the routing table.
[-pattern | -p <text>] - Up to 16 'pad' Specified for Out Packet
Use this parameter to fill up to 16 'pad' bytes in the sent packet. This is useful for diagnosing
data-dependent problems in a network. For example, -pattern ff causes the sent packet to be filled with all
ones.
[-packet-size <integer>] - Packet Size
Use this parameter to specify the number of data bytes to be sent. The default is 56, which translates to
64 ICMP data bytes when combined with the 8 bytes of ICMP header data.
[-verbose | -v [true]] - Show All ICMP Packets
Use this parameter to get verbose output. Verbose output displays both source address and destination
addresses. Received ICMP packets other than ECHO_RESPONSE are listed. This parameter can be used
only in conjunction with the show-detail parameter.
[-show-detail | -s [true]] - Show Detail Output
Use this parameter to display detailed output about the ping.
[-flood | -f [true]] - Flood Ping (privilege: advanced)
Use this parameter to output packets as fast as they come back or one hundred times per second,
whichever is more. For every ECHO_REQUEST sent a period "." is printed, while for every ECHO_REPLY
received a backspace is printed. This provides a rapid display of how many packets are being dropped.
This can be very hard on a network and should be used with caution.
[-disable-fragmentation | -D [true]] - Disable Fragmentation.
Use this parameter to disallow fragmentation of the outgoing packets, if they do not fit in the Maximum
Transmission Unit.
● Examples
This example shows a ping6 from node 'node1' to the destination server ipv6.google.com with the server
responding that it is up and running.
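Reconstructed from the parameters above, the invocation takes this form:
cluster1::> network ping6 -node node1 -destination ipv6.google.com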
Related reference
network ping (page 371)
network traceroute (page 375)
network traceroute6 (page 376)
network test-path
Test path performance between two nodes
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network test-path command runs a performance test between two nodes. The command requires a
source node, destination node, destination cluster, and application, or session type. All tests are run using
intracluster or intercluster LIFs, depending on whether the test is between two nodes in the same cluster, or
between nodes in peered clusters.
The test itself is different from most bandwidth test tools. It creates a "session" consisting of TCP connections
between all possible paths between the nodes being tested. This is how internal Data ONTAP applications
communicate between nodes. This means the test is using multiple paths, and thus the bandwidth reported
might exceed the capacity of a single 10 Gb path.
● Parameters
-source-node {<nodename>|local} - Node Initiating Session
Use this parameter to specify the node that initiates the test. The source node must be a member
of the cluster in which the command is run.
-destination-cluster <Cluster name> - Cluster Containing Passive Node
Use this parameter to specify the destination cluster; the local cluster, or a peered cluster.
-destination-node <text> - Remote Node in Destination Cluster
Use this parameter to specify the destination node in the destination cluster.
-session-type {AsyncMirrorLocal|AsyncMirrorRemote|RemoteDataTransfer} - Type of Session to Test
The session type parameter is used to mimic the application settings used. A session consists of multiple
TCP connections.
• AsyncMirrorLocal: settings used by SnapMirror between nodes in the same cluster
• AsyncMirrorRemote: settings used by SnapMirror between nodes in different clusters
• RemoteDataTransfer: settings used by Data ONTAP for remote data access between nodes in the same
cluster
The default session-type is AsyncMirrorRemote.
● Examples
The following example runs a test between two nodes in the same cluster:
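A sketch of such a run, at the advanced privilege level indicated above (node and cluster names hypothetical):
cluster1::*> network test-path -source-node node1 -destination-cluster cluster1 -destination-node node2 -session-type AsyncMirrorLocal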
Related reference
network test-link commands (page 497)
network traceroute
Traceroute
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network traceroute command performs a network probe from a node to a specified IP address. The
command requires a source node or logical interface and a destination IP address. You can specify the
source node by name, or specify a logical interface and its Vserver. The traceroute is performed between the
source and destination.
● Parameters
{ -node <nodename> - Node
Use this parameter to originate the traceroute from the node you specify.
| -lif <lif-name>} - Logical Interface
Use this parameter to originate the traceroute from the specified network interface.
-vserver <vserver> - LIF Owner
Use this parameter to originate the traceroute from the Vserver where the intended logical interface
resides. The default value is the system Vserver for cluster administrators.
-destination <Remote InetAddress> - Destination
Use this parameter to specify the remote internet address destination of the traceroute.
[-maxttl | -m <integer>] - Maximum Number of Hops
Use this parameter to specify the maximum number of hops (time-to-live) setting used by outgoing
probe packets. The default is 30 hops.
[-numeric | -n [true]] - Print Hop Numerically
Use this parameter to print the hop addresses only numerically rather than symbolically and
numerically.
[-port <integer>] - Base UDP Port Number
Use this parameter to specify the base UDP port number used in probes. The default is port 33434.
[-packet-size <integer>] - Packet Size
Use this parameter to specify the size of probe packets, in bytes.
[-nqueries | -q <integer>] - Number of Queries
Use this parameter to specify the number of probes per hop. The default is 3 probes.
[-verbose | -v [true]] - Verbose Output
Use this parameter to display all received ICMP packets, rather than just TIME_EXCEEDED and
UNREACHABLE packets.
[-waittime | -w <integer>] - Wait Between Packets (secs)
Use this parameter to specify the time (in seconds) to wait for the response to a probe. The default is 5
seconds.
● Examples
This example shows a traceroute from node node1 to a destination address of 10.98.16.164, showing a
maximum of five hops.
5 * * *
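The invocation for this example can be reconstructed from the parameters above:
cluster1::> network traceroute -node node1 -destination 10.98.16.164 -maxttl 5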
network traceroute6
traceroute6
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network traceroute6 command performs a network probe from a node to a specified IPv6 address.
The command requires a source node or a logical interface and its Vserver, and a destination IPv6
address. The traceroute6 is performed between the source and destination.
● Parameters
{ -node <nodename> - Node
Use this parameter to originate traceroute6 from the node you specify. This parameter is available only
to cluster administrators.
| -lif <lif-name>} - Logical Interface
Use this parameter to originate traceroute6 from the logical interface you specify.
-vserver <vserver name> - LIF Owner
Use this parameter to originate traceroute6 from the Vserver you specify. The default value is the system
Vserver for cluster administrators.
[-debug-mode | -d [true]] - Debug Mode
Use this parameter to enable socket level debugging. The default value is false.
{ [-icmp6 | -I [true]] - ICMP6 ECHO instead of UDP
Use this parameter to specify the use of ICMP6 ECHO instead of UDP datagrams for the probes. The
default value is false.
| [-udp | -U [true]]} - UDP
Use this parameter to specify the use of UDP datagrams for the probes. The default value is true.
[-numeric | -n [true]] - Print Hops Numerically
Use this parameter to print the hop addresses only numerically rather than symbolically and
numerically. The default value is false.
[-verbose | -v [true]] - Verbose Output
Use this parameter to display all received ICMP packets, rather than just TIME_EXCEEDED and
UNREACHABLE packets. The default value is false.
[-first-hop | -f <integer>] - Number of Hops to Skip in Trace
Use this parameter to specify the number of hops to skip in trace. The default value is 1.
[-gateway | -g <Remote InetAddress>] - Intermediate Gateway
Use this parameter to specify the intermediate gateway.
[-hop-limit | -m <integer>] - Maximum Number of Hops
Use this parameter to specify the maximum hop limit, up to 255. The default value is 64 hops.
[-port | -p <integer>] - Base UDP Port Number
Use this parameter to specify the base UDP port number used in probes. The default value is port 33434.
[-nqueries | -q <integer>] - Number of Queries
Use this parameter to specify the number of probes per hop. The default value is 3 probes.
[-wait-time | -w <integer>] - Wait Between Packets (secs)
Use this parameter to specify the delay time between probes in seconds. The default value is 5 seconds.
-destination <Remote InetAddress> - Destination
Use this parameter to specify the remote IPv6 address destination of traceroute6.
[-packet-size <integer>] - Packet Size
Use this parameter to specify the size of probe packets, in bytes. The default value is 16 bytes for ICMP6
ECHO and 12 bytes for UDP datagrams.
● Examples
The following example shows traceroute6 from node node1 to the destination
fd20:8b1e:b255:4071:d255:1fcd:a8cd:b9e8.
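Reconstructed from the parameters above, the invocation takes this form:
cluster1::> network traceroute6 -node node1 -destination fd20:8b1e:b255:4071:d255:1fcd:a8cd:b9e8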
Related reference
network traceroute (page 375)
network ping6 (page 372)
network ping (page 371)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network arp create command creates a static ARP entry for a given Vserver. Statically created ARP
entries will be stored permanently in the Vserver context and will be used by the network stack.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver on which the ARP entry is created.
-remotehost <IP Address> - Remote IP Address
Use this parameter to specify the IP address to be added as an ARP entry.
-mac <MAC Address> - MAC Address
Use this parameter to specify the MAC address (Ethernet address) for the host specified with
-remotehost. Specify the MAC address as six hex bytes separated by colons.
● Examples
The following example creates a static ARP entry on Vserver vs1 for the remote host with the IP address
10.63.0.2 having MAC address 40:55:39:25:27:c1
cluster1::> network arp create -vserver vs1 -remotehost 10.63.0.2 -mac 40:55:39:25:27:c1
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network arp delete command deletes static ARP entries from the Vserver and from the network
stack.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver from which the ARP entry is deleted.
-remotehost <IP Address> - Remote IP Address
Use this parameter to specify the IP address of the ARP entry being deleted.
● Examples
The following example deletes the ARP entry for IP address 10.63.0.2 from the Vserver vs1.
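Built from the two parameters above, the invocation takes this form:
cluster1::> network arp delete -vserver vs1 -remotehost 10.63.0.2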
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network arp show command displays static ARP entries present in a given Vserver. This command will
not display dynamically learned ARP entries in the network stack. Use the network arp active-entry
show command to display dynamically learned ARP entries in the network stack.
● Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the ARP table.
| [-instance ]}
Use this parameter to display all the fields of the ARP table.
[-vserver <vserver name>] - Vserver Name
Use this parameter to display ARP entries that are specific to a given Vserver.
[-remotehost <IP Address>] - Remote IP Address
Use this parameter to display ARP entries for the specified IP address.
[-mac <MAC Address>] - MAC Address
Use this parameter to display the ARP entry for the specified MAC address.
● Examples
The following example displays static ARP entries from the Vserver vs1.
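The invocation takes this form:
cluster1::> network arp show -vserver vs1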
Related reference
network arp active-entry show (page 380)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network arp active-entry delete command deletes dynamically learned ARP entries from the
network stack of a node. To delete statically configured ARP entries use the network arp delete
command.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node in which the ARP entry is deleted.
-vserver <vserver> - System or Admin Vserver Name
Use this parameter to specify the name of the Vserver in which the ARP entry is deleted. Only Vservers
with a type of Admin or System have dynamically learned ARP entries.
-subnet-group <IP Address/Mask> - Subnet Group Name
Use this parameter to specify the name of the routing group in which the ARP entry is deleted.
-remotehost <text> - Remote IP Address
Use this parameter to specify the IP address to be deleted from the active ARP entries.
-port <text> - Port
Use this parameter to specify the name of the Port to be deleted from the active ARP entries.
● Examples
The following example deletes the active ARP entry with an IP address of 10.224.64.1, in subnet group
0.0.0.0/0, on port e0c of node cluster1-01 in the Admin Vserver cluster1:
cluster1::network arp active-entry*> delete -node cluster1-01 -vserver cluster1 -subnet-group 0.0.0.0/0
-remotehost 10.224.64.1 -port e0c
Related reference
network arp delete (page 378)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network arp active-entry show command displays ARP entries present in the network stack of
the node. The entries include both dynamically learned ARP entries and user-configured static ARP entries.
● Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the active ARP table.
| [-instance ]}
Use this parameter to display all the fields of the active ARP table.
[-node {<nodename>|local}] - Node
Use this parameter to display active ARP entries that are specific to a given node.
[-vserver <vserver>] - System or Admin Vserver Name
Use this parameter to display active ARP entries that are specific to a given System or Admin Vserver.
Data and Node Vservers will not have any active-arp entries.
[-subnet-group <IP Address/Mask>] - Subnet Group Name
Use this parameter to display active ARP entries that are specific to a given subnet group.
[-remotehost <text>] - Remote IP Address
Use this parameter to display active ARP entries for the specified IP address.
[-port <text>] - Port
Use this parameter to display active ARP entries for the specified Port name.
[-mac <text>] - MAC Address
Use this parameter to display the active ARP entry for the specified MAC address.
[-ipspace <IPspace>] - IPspace
Use this parameter to specify the IPspace associated with the System or Admin Vserver.
● Examples
The following example displays active ARP entries for the Admin Vserver cluster1:
Node: node-01
Vserver: cluster1
Subnet Group: 169.254.0.0/16
Remote IP Address MAC Address Port
----------------- ----------------- -------
169.254.106.95 0:55:39:27:d1:c1 lo
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network bgp config create command is used to create the border gateway protocol (BGP)
configuration for a node. It can be used to override the BGP parameters defined in the global BGP defaults.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which configuration details will be created.
-asn <integer> - Autonomous System Number
This parameter specifies the autonomous system number (ASN). The ASN attribute is a non-negative 16-
bit integer. It should typically be chosen from RFC6996 "Autonomous System (AS) Reservation for Private
Use" or the AS number assigned to the operator's organization.
-hold-time <integer> - Hold Time
This parameter specifies the hold time in seconds. The default value is 180.
-router-id <IP Address> - Router ID
This parameter specifies the local router ID. The router-id value takes the form of an IPv4 address. The
default router-id is initialized using a local IPv4 address in the admin Vserver.
● Examples
cluster1::> network bgp config create -node node1 -asn 10 -hold-time 180 -router-id 10.0.1.112
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network bgp config delete command deletes a node's border gateway protocol (BGP)
configuration. A BGP configuration cannot be deleted if there are BGP peer groups configured on the
associated node.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which the BGP configuration will be deleted.
● Examples
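The following invocation is illustrative (node1 follows the naming used in the create example above):
cluster1::> network bgp config delete -node node1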
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network bgp config modify command is used to modify a node's border gateway protocol (BGP)
configuration.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which BGP configuration will be modified.
[-asn <integer>] - Autonomous System Number
This parameter specifies the autonomous system number (ASN). The ASN attribute is a non-negative 16-
bit integer. It should typically be chosen from RFC6996 "Autonomous System (AS) Reservation for Private
Use" or the AS number assigned to the operator's organization.
[-hold-time <integer>] - Hold Time
This parameter specifies the hold time in seconds.
[-router-id <IP Address>] - Router ID
This parameter specifies the local router ID. The router-id value takes the form of an IPv4 address.
● Examples
cluster1::> network bgp config modify -node node1 -router-id 1.1.1.1 -asn 20
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network bgp config show command displays the border gateway protocol (BGP) configuration for
each node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter selects the BGP configurations that match the specified node.
[-asn <integer>] - Autonomous System Number
This parameter selects the BGP configurations that match the specified autonomous system number.
[-hold-time <integer>] - Hold Time
This parameter selects BGP configurations that match the specified hold time.
[-router-id <IP Address>] - Router ID
This parameter selects the BGP configurations that match the specified router ID.
● Examples
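For example, the following invocation (illustrative) displays the BGP configuration of every node:
cluster1::> network bgp config show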
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network bgp defaults modify command modifies the global defaults for border gateway protocol
(BGP) configurations.
● Parameters
[-asn <integer>] - Autonomous System Number
This parameter specifies the autonomous system number (ASN). The ASN attribute is a non-negative 16-
bit integer. It should typically be chosen from RFC6996 "Autonomous System (AS) Reservation for Private
Use", or the AS number assigned to the operator's organization. The default ASN is 65501.
[-hold-time <integer>] - Hold Time
This parameter specifies the hold time in seconds. The default value is 180.
● Examples
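The following invocation is illustrative (the ASN and hold-time values are placeholders):
cluster1::> network bgp defaults modify -asn 20 -hold-time 120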
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network bgp defaults show command displays the global defaults for border gateway protocol
(BGP) configurations.
● Examples
ASN            Hold Time
               (Seconds)
-------------  --------------
10             180
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network bgp peer-group create command is used to create a border gateway protocol (BGP) peer
group. A BGP peer group will advertise VIP routes for the list of vservers in the peer group's vserver-list
using the BGP LIF of the peer group. A BGP peer group will advertise VIP routes to a peer router using the
border gateway protocol. The address of the peer router is identified by the peer-address value.
● Parameters
-ipspace <IPspace> - IPspace Name
This parameter specifies the IPspace of the peer group being created.
-peer-group <text> - Peer Group Name
This parameter specifies the name of the peer group being created.
-bgp-lif <lif-name> - BGP LIF
This parameter specifies the BGP interface (BGP LIF) of the peer group being created.
-peer-address <IP Address> - Peer Router Address
This parameter specifies the IP address of the peer router for the peer group being created.
[-peer-asn <integer>] - Peer Router Autonomous number
This parameter specifies the peer router autonomous system number (ASN) in the peer group being
created. The default value is the value of the local node's ASN.
-route-preference <integer> - Route Preference
This parameter specifies the preference field in BGP update messages for VIP routes. If a router receives
multiple VIP route announcements for the same VIP LIF from different BGP LIFs, it will install the one
that has the highest preference value. The default route preference value is 100.
[-asn-prepend-type <ASN Prepend type>] - ASN prepend type (privilege: advanced)
This parameter specifies the ASN that will be prepended in the BGP attributes. The possible values are
local-asn and peer-asn. The default behaviour is not to prepend any ASN.
[-asn-prepend-count <integer>] - ASN prepend count (privilege: advanced)
This parameter specifies the number of times the ASN, as specified in asn-prepend-type, will be
prepended in the BGP path attributes. The default behaviour is not to prepend any ASN.
[-community <BGP community>, ...] - BGP Community (privilege: advanced)
This parameter specifies the communities that will be included in the BGP path attributes. The default
behaviour is not to include any community in BGP path attributes.
● Examples
cluster1::> network bgp peer-group create -peer-group group1 -ipspace Default -bgp-lif bgp_lif -peer-ad
dress 10.0.1.112
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network bgp peer-group delete command is used to delete border gateway protocol (BGP) peer
group configuration.
● Parameters
-ipspace <IPspace> - IPspace Name
This parameter specifies the IPspace of the BGP peer group being deleted.
-peer-group <text> - Peer Group Name
This parameter specifies the name of the BGP peer group being deleted.
● Examples
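For example (illustrative; group1 and Default follow the create example above):
cluster1::> network bgp peer-group delete -ipspace Default -peer-group group1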
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network bgp peer-group modify command is used to modify a border gateway protocol (BGP) peer
group configuration.
● Parameters
-ipspace <IPspace> - IPspace Name
This parameter specifies the IPspace of the peer group being modified.
-peer-group <text> - Peer Group Name
This parameter specifies the name of the peer group being modified.
[-peer-address <IP Address>] - Peer Router Address
This parameter specifies an updated value for the IP address of the peer router.
● Examples
cluster1::> network bgp peer-group modify -ipspace Default -peer-group peer1 -peer-address 10.10.10.10
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network bgp peer-group rename command is used to assign a new name to a BGP peer group.
● Parameters
-ipspace <IPspace> - IPspace Name
This parameter specifies the IPspace of the peer group being renamed.
-peer-group <text> - Peer Group Name
The name of the peer group to be updated.
-new-name <text> - New Name
The new name for the peer group.
● Examples
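The following invocation is illustrative (the peer group names are placeholders):
cluster1::> network bgp peer-group rename -ipspace Default -peer-group group1 -new-name group2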
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network bgp peer-group show command displays the BGP peer groups configuration.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ipspace <IPspace>] - IPspace Name
This parameter selects peer groups that match the specified IPspace.
[-peer-group <text>] - Peer Group Name
This parameter selects peer groups that match the specified name.
[-bgp-lif <lif-name>] - BGP LIF
This parameter selects peer groups that match the specified BGP Interface.
[-peer-address <IP Address>] - Peer Router Address
This parameter selects peer groups that match the specified peer router address.
[-peer-asn <integer>] - Peer Router Autonomous number
This parameter selects peer groups that match the specified autonomous system number.
[-state <BGP Session State>] - Peer Group State
This parameter selects peer groups that match the specified BGP session state.
[-bgp-node <nodename>] - BGP LIF Node
This parameter selects peer groups that match the specified bgp-node value. This value is calculated
based on the current node of the corresponding BGP LIF.
[-bgp-port <netport>] - BGP LIF Port
This parameter selects peer groups that match the specified bgp-port value. This value is calculated
based on the current port of the associated BGP LIF.
● Examples
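For example, the following invocation (illustrative) lists all configured peer groups:
cluster1::> network bgp peer-group show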
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network bgp vserver-status show command displays the per-node border gateway protocol
(BGP) status for each vserver. The BGP status for a particular vserver is "up" when at least one BGP peer
group supporting that vserver is able to communicate with its peer router.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter selects the BGP status that matches the specified node.
[-vserver <vserver name>] - Vserver
This parameter selects the BGP status for the specified vserver.
[-ipv4-status {unknown|unconfigured|up|down}] - IPv4 status
This parameter selects the BGP status that matches the specified status for IPv4 address family.
[-ipv6-status {unknown|unconfigured|up|down}] - IPv6 status
This parameter selects the BGP status that matches the specified status for IPv6 address family.
● Examples
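An illustrative invocation:
cluster1::> network bgp vserver-status show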
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network cloud routing-table create command creates a new external routing table.
● Parameters
-route-table-id <text> - Route Table ID
This parameter is used to provide the name of the external routing table to be created.
● Examples
The following example creates an external routing table "eni-123456":
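An invocation of the following form, using the -route-table-id parameter described above, accomplishes this (illustrative):
cluster1::> network cloud routing-table create -route-table-id eni-123456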
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network cloud routing-table delete command deletes an existing external routing table.
● Parameters
-route-table-id <text> - Route Table ID
This parameter is used to provide the name of an existing external routing table to be deleted.
● Examples
The following example deletes the external routing table "eni-123456":
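An invocation of the following form accomplishes this (illustrative):
cluster1::> network cloud routing-table delete -route-table-id eni-123456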
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network cloud routing-table show command retrieves the routing tables configured on the
mediator and displays them.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-route-table-id <text>] - Route Table ID
This parameter is the name of the external routing table to be shown.
● Examples
The following example shows the external routing tables for this cluster that are configured on the mediator:
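An invocation of this form (illustrative) produces that listing:
cluster1::> network cloud routing-table show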
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network connections active show command displays information about active network
connections.
Note
The results of this command set are refreshed independently every 30 seconds and might not reflect
the immediate state of the system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-print-ip-addresses ]
Print IP addresses for remote hosts -- do not attempt to resolve the addresses to a hostname.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the connections that match this parameter value.
[-cid <Cid>] - Connection ID
Selects the connections that match this parameter value.
[-vserver <vserver>] - Vserver
Selects the connections that match this parameter value.
[-lif-name <lif-name>] - Logical Interface Name
Selects the connections that match this parameter value.
[-local-address <IP Address>] - Local IP address
Selects the connections that match this parameter value.
[-local-port <integer>] - Local Port
Selects the connections that match this parameter value.
[-remote-ip <InetAddress>] - Remote IP Address
Selects the connections that match this parameter value.
[-remote-host <Remote IP>] - Remote Host
Selects the connections that match this parameter value.
[-remote-port <integer>] - Remote Port
Selects the connections that match this parameter value.
[-proto {UDP|TCP}] - Protocol
Selects the connections that match this parameter value. Possible values are tcp (TCP), udp (UDP), and
NA (not applicable).
[-lifid <integer>] - Logical Interface ID
Selects the connections that match this parameter value.
[-service <protocol service>] - Protocol Service
Selects the connections that match this parameter value. Possible values include: nfs, iscsi, and
loopback.
[-lru {yes|no}] - Least Recently Used
Selects the connections that match this parameter value.
[-blocks-lb {true|false}] - Connection Blocks Load Balance Migrate
Selects the logical interfaces that are blocked (true) or not blocked (false) from migrating due to an
active client connection.
● Examples
The following example displays information about active network connections for the node named node0:
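An invocation of this form (illustrative) produces that display:
cluster1::> network connections active show -node node0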
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network connections active show-clients command displays information about client
connections, including the client's IP address and the number of client connections.
Note
The results of this command set are refreshed independently every 30 seconds and might not reflect
the immediate state of the system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the connections on the node you specify.
[-vserver <vserver>] - Vserver
This parameter is used by the system to break down the output per vserver.
[-remote-address <Remote IP>] - Remote IP Address
Use this parameter to display information only about the connections that use the remote IP address
you specify.
[-count <integer>] - Client Count
Use this parameter to display only clients with the number of active client connections you specify.
● Examples
The following example displays information about active client connections:
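For example (illustrative invocation):
cluster1::> network connections active show-clients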
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network connections active show-lifs command displays the number of active connections on
each logical interface, organized by node and Vserver.
Note
The results of this command set are refreshed independently every 30 seconds and might not reflect
the immediate state of the system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the connections on the node you specify.
Note
This parameter has been deprecated and may be removed in a future version of Data ONTAP.
Use this parameter to display information only about data logical interfaces blocked from migrating and
the connection that is blocking it.
● Examples
The following example displays information about the servers and logical interfaces being used by all active
connections:
node2
vs1 datalif2 1 0
vs1 cluslif1 5 0
vs1 cluslif2 3 2
node3
vs1 datalif1 1 0
vs1 cluslif1 2 0
vs1 cluslif2 1 0
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network connections active show-protocols command displays the number of active
connections per protocol, organized by node.
Note
The results of this command set are refreshed independently every 30 seconds and might not reflect
the immediate state of the system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the connections on the node you specify.
[-vserver <vserver>] - Vserver
This parameter is used by the system to break down the output per vserver.
[-proto {UDP|TCP}] - Protocol
Use this parameter to display information only about the connections that use the network protocol you
specify. Possible values include tcp (TCP), udp (UDP), and NA (not applicable).
[-count <integer>] - Client Count
Use this parameter to display only protocols with the number of active client connections you specify.
● Examples
The following example displays information about all network protocols being used by active connections:
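An illustrative invocation:
cluster1::> network connections active show-protocols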
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network connections active show-services command displays the number of active
connections by protocol service, organized by node.
Note
The results of this command set are refreshed independently every 30 seconds and might not reflect
the immediate state of the system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the connections on the node you specify.
[-vserver <vserver>] - Vserver
This parameter is used by the system to break down the output per vserver.
[-service <protocol service>] - Protocol Service
Use this parameter to display information only about the connections that use the protocol service you
specify. Possible values include: nfs, iscsi, and loopback.
[-count <integer>] - Client Count
Use this parameter to display information only about protocol services with the number of active client
connections you specify.
● Examples
The following example displays information about all protocol services being used by active connections:
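For example (illustrative invocation):
cluster1::> network connections active show-services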
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network connections listening show command displays information about network connections
that are in an open and listening state.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the listening connections that match this parameter value.
[-mgmt-cid <integer>] - Management Connection ID
Selects the listening connections that match this parameter value.
[-vserver <vserver>] - Vserver
Selects the listening connections that match this parameter value.
[-cid <integer>] - System Connection ID
Selects the listening connections that match this parameter value.
[-lif-name <lif-name>] - Logical Interface Name
Selects the listening connections that match this parameter value.
[-local-address <IP Address>] - Local IP Address
Selects the listening connections that match this parameter value.
[-local-port <integer>] - Local Port
Selects the listening connections that match this parameter value.
[-remote-ip <InetAddress>] - Remote IP Address
Selects the listening connections that match this parameter value.
[-remote-host <Remote IP>] - Remote Host
Selects the listening connections that match this parameter value.
[-remote-port <integer>] - Remote Port
Selects the listening connections that match this parameter value.
[-proto {UDP|TCP}] - Protocol
Selects the listening connections that match this parameter value. Possible values include tcp (TCP), udp
(UDP), and NA (not applicable).
[-lifid <integer>] - Logical Interface ID
Selects the listening connections that match this parameter value.
[-service <protocol service>] - Protocol Service
Selects the listening connections that match this parameter value. Possible values include: nfs, iscsi, and
loopback.
[-lru {yes|no}] - Least Recently Used
Selects the listening connections that match this parameter value.
● Examples
The following example displays information about all listening network connections:
The following example displays detailed information about listening network connections for the node
named node0:
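Illustrative invocations for the two cases described above:
cluster1::> network connections listening show
cluster1::> network connections listening show -node node0 -instance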
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network device-discovery show command displays information about discovered devices. This
information may be useful in determining the network topology or investigating connectivity issues. By
default, the command displays the following information:
• Local interface
• Discovered device
• Discovered interface
• Discovered platform
● Parameters
{ [-fields <fieldname>, ...]
Include the specified field or fields in the command output. Use '-fields ?' to display the valid fields.
| [-instance ]}
Use this parameter to display detailed information about all fields.
[-node <nodename>] - Node
Displays the discovery ports that match the node name.
[-protocol {cdp|lldp}] - Protocol
Displays the devices that are discovered by the given protocol.
[-port <text>] - Port
Displays the discovery ports that match the physical network port. For example, e0a will display devices
discovered on port e0a.
[-discovered-device <text>] - Discovered Device
Displays the discovered devices that match the discovered device name.
[-interface <text>] - Discovered Device Interface
Displays the discovered devices that match this interface port name. The format is dependent on the
reporting device. For example: FastEthernet0/12
[-device-ip <IP Address>, ...] - Discovered Device IP Addresses
Displays the discovered devices that match the IP address(es). At present, only IPv4 addresses are
included. It is recommended to use wildcards around the desired value.
[-platform <text>] - Discovered Device Platform
Displays the discovery ports that contain the platform of discovered devices. For example: N5K-C5010P-
BF
[-version <text>] - Discovered Device Version
Displays the discovery ports that contain the version of discovered devices.
[-chassis-id <text>] - Discovered Device Chassis ID
Displays the discovered devices that match the chassis ID.
[-system-name <text>] - Discovered Device System Name
Displays the discovered devices that match the system name.
[-hold-time-remaining <integer>] - Discovered Device's Remaining Hold Time
Displays the discovered devices that match the remaining packet hold time in seconds. If an
advertisement from the device isn't received before this time reaches zero, the entry will expire and be
removed from the list. For example, "<120" will display discovered devices which will expire within the
next 120 seconds.
[-capabilities {router|trans-bridge|source-route-bridge|switch|host|igmp|repeater|phone}, ...] -
Discovered Device Capabilities
Displays the discovered devices that match the capability or capabilities. Possible values:
• "router" - Router
• "trans-bridge" - Trans Bridge
• "source-route-bridge" - Source Route Bridge
• "switch" - Switch
• "host" - Host
• "igmp" - IGMP
• "repeater" - Repeater
• "phone" - Phone
● Examples
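For example, the following invocation (illustrative) lists all discovered devices:
cluster1::> network device-discovery show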
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Modifies the FCP target adapter information.
The adapter argument is in the form Xy or Xy_z where X and z are integers and y is a letter. An example is 4a
or 4a_1.
You cannot bring an adapter offline until all logical interfaces connected to that adapter are offline. Use the
network interface modify command to take your logical interfaces offline.
The speed option sets the Fibre Channel link speed of an adapter. You can set adapters that support:
• 10Gb/s to 10 or auto
• 8Gb/s to 2, 4, 8 or auto
• 4Gb/s to 2, 4 or auto
• 2Gb/s to 2 or auto
By default, the link speed option is set to auto for auto negotiation. Setting the link speed to a specific value
disables auto negotiation. Under certain conditions, a speed mismatch can prevent the adapter from
coming online.
Note
The system reports the actual link speed with the "Data Link Rate (Gbit)" field in the output of
network fcp adapter show -instance.
● Parameters
-node {<nodename>|local} - Node
Specifies the node of the target adapter.
-adapter <text> - Adapter
Specifies the target adapter.
[-status-admin {down|up}] - Administrative Status
Specifies the desired (administrative) status of the adapter. To view the actual operational status, run
network fcp adapter show -fields status-oper.
[-speed {1|2|4|8|10|16|32|auto}] - Configured Speed
Specifies the adapter configuration speed in gigabits per second.
● Examples
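The following invocation is illustrative (the node, adapter, and speed values are placeholders; see the supported speed options listed above):
cluster1::> network fcp adapter modify -node node1 -adapter 4a -speed 4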
Related reference
network fcp adapter show (page 400)
network interface modify (page 414)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays FCP target adapter information. You can also use this information to determine if adapters are
active and online.
The adapter argument is in the form Xy or Xy_z where X and z are integers and y is a letter. An example is 4a
or 4a_1.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information only about the FCP target adapters that
are present on the specified node.
[-adapter <text>] - Adapter
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified name.
[-description <text>] - Description
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified description.
[-physical-protocol {fibre-channel|ethernet}] - Physical Protocol
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified physical protocol. Possible values are fibre-channel and ethernet.
[-max-speed {1|2|4|8|10|16|32|auto}] - Maximum Speed
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified maximum speed.
[-status-admin {down|up}] - Administrative Status
If this parameter is specified, the command displays information only about the FCP target adapters that
match the administrative state. Possible values are up and down.
[-status-oper <text>] - Operational Status
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified operational status.
[-status-extended <text>] - Extended Status
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified extended status.
[-portaddr <Hex Integer>] - Host Port Address
If this parameter is specified, the command displays information only about the FCP target adapters
connected with the specified fabric port address.
[-firmware-rev <text>] - Firmware Revision
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified firmware revision.
[-data-link-rate <integer>] - Data Link Rate (Gbit)
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified data link rate.
[-fabric-established {true|false}] - Fabric Established
If this parameter is specified, the command displays information only about the FCP target adapters that
match the specified fabric login establishment state.
● Examples
The following example displays information about the FCP adapters in cluster1.
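The command line itself is not reproduced in this excerpt; the detailed per-adapter output below is the form produced with the -instance parameter (the command name network fcp adapter show is an assumption, as it does not appear in this excerpt):
cluster1::> network fcp adapter show -instance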
Node: sti6280-021
Adapter: 0a
Description: Fibre Channel Target Adapter 0a (QLogic 2532 (2562), rev. 2, 8G)
Physical Protocol: fibre-channel
Maximum Speed: 8
Administrative Status: up
Operational Status: online
Extended Status: ADAPTER UP
Host Port Address: 30012c
Firmware Revision: 5.8.0
Data Link Rate (Gbit): 4
Fabric Established: true
Fabric Name: 20:14:54:7f:ee:54:b9:01
Connection Established: ptp
Is Connection Established: true
Mediatype: ptp
Configured Speed: auto
Adapter WWNN: 50:0a:09:80:8f:7f:8b:1c
Adapter WWPN: 50:0a:09:81:8f:7f:8b:1c
Switch Port: RTP-AG01-410B51:1/41
Form Factor Of Transceiver: SFP
Vendor Name Of Transceiver: OPNEXT,INC
Part Number Of Transceiver: TRS2000EN-SC01
Revision Of Transceiver: 0000
Serial Number Of Transceiver: T10H64793
FC Capabilities Of Transceiver: 10 (Gbit/sec)
Vendor OUI Of Transceiver: 0:11:64
Wavelength In Nanometers: 850
Date Code Of Transceiver: 10:08:17
Validity Of Transceiver: true
Connector Used: LC
Encoding Used: 64B66B
Is Internally Calibrated: true
Diagnostic Monitoring Type: 68
Status Monitoring Available: fa {Rx_Loss_of_Sig, Tx_Fault, Tx_Disable}
SFF-8472 Compliance: 5
Received Optical Power: 441.3 (uWatts)
Is Received Power In Range: true
SFP Transmitted Optical Power: 600.4 (uWatts)
Is Xmit Power In Range: true
DDM Status: 30
Is Xmit Disabled: false
Is Xmit In Fault: false
Is Receiver In LOS: false
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays the FCP topology interconnect elements for each adapter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to select the interconnect elements for adapters that are located on the node that
you specify.
[-adapter <text>] - Adapter
Use this parameter to select the interconnect elements for the specified adapter.
[-domain-id <integer>] - Domain Id
Use this parameter to select the interconnect elements with the specified domain identifier.
[-port-wwpn <text>] - Port WWPN
Use this parameter to select the interconnect elements with the specified port world wide name.
[-switch-name <text>] - Switch Name
Use this parameter to select the interconnect elements with the specified switch.
[-switch-vendor <text>] - Switch Vendor
Use this parameter to select the interconnect elements with the specified vendor.
[-switch-release <text>] - Switch Release
Use this parameter to select the interconnect elements with the specified release.
[-switch-wwpn <text>] - Switch WWPN
Use this parameter to select the interconnect elements with the specified world wide port name.
[-switch-wwn <text>] - Switch WWN
Use this parameter to select the interconnect elements with the specified world wide name.
[-port-count <integer>] - Port Count
Use this parameter to select the interconnect elements with the specified port count.
● Examples
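No example is included in this excerpt. A representative invocation, assuming the command name network fcp topology show (which is not shown above), is:
cluster1::> network fcp topology show -node node1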
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays the active zone set information.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to select the active zone set information for adapters that are located on the node
that you specify.
[-adapter <text>] - Adapter
Use this parameter to select the active zone set information for the specified adapter.
[-zoneset-name <text>] - Zoneset Name
Use this parameter to select the active zone set information for the specified zone set name.
[-zone-name <text>] - Zone Name
Use this parameter to select the active zone set information for the specified zone name.
[-unique <integer>] - Unique
Use this parameter to select the active zone set record with the specified unique index.
[-type-name <text>] - Type Name
Use this parameter to select the active zone set information with the specified symbolic type.
[-type <integer>] - Type
Use this parameter to select the active zone set information with the specified port type.
[-port-id <Hex Integer>] - Member Port Id
Use this parameter to select the active zone set information with the specified member port id.
[-domain-id <integer>] - Member Domain Id
Use this parameter to select the active zone set information with the specified member domain id.
[-port <integer>] - Member Port
Use this parameter to select the active zone set information with the specified member port.
[-wwn <text>] - Member WWN
Use this parameter to select the active zone set information with the specified member WWN.
[-zone-count <integer>] - Zone Count
Use this parameter to select the active zone set information with the specified number of zones.
[-zone-member-count <integer>] - Zone Member Count
Use this parameter to select the active zone set information with the specified number of zone members
in a zone.
[-contents <text>] - Member Contents
Use this parameter to select the active zone set information with the specified member contents, regardless of member type.
● Examples
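No example is included in this excerpt. A representative invocation, assuming the command name network fcp zoneset show (which is not shown above), is:
cluster1::> network fcp zoneset show -node node1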
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface create command creates a logical interface (LIF).
Note
A logical interface is an IP address associated with a physical network port. For logical interfaces using
NAS data protocols, the interface can fail over or be migrated to a different physical port in the event
of component failures, thereby continuing to provide network access despite the component failure.
Logical interfaces using SAN data protocols do not support migration or failover.
Note
On some cloud platforms, this operation might perform changes to the external route tables.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the LIF is created.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the name of the LIF that is created. For iSCSI and FC LIFs, the name cannot
be more than 254 characters.
[-service-policy <text>] - Service Policy
Use this parameter to specify a service policy for the LIF. If no policy is specified, a default policy will be
assigned automatically. Use the network interface service-policy show command to review
available service policies.
[-role {cluster|data|node-mgmt|intercluster|cluster-mgmt}] - (DEPRECATED)-Role
Note
This parameter has been deprecated and may be removed in a future version of ONTAP. Use the -
service-policy parameter instead.
Use this parameter to specify the role of the LIF. LIFs can have one of five roles:
• Cluster LIFs, which provide communication among the nodes in a cluster
• Intercluster LIFs, which provide communication among peered clusters
• Data LIFs, which provide data access to NAS and SAN clients
• Node-management LIFs, which provide access to cluster management functionality
• Cluster-management LIFs, which provide access to cluster management functionality
LIFs with the cluster-management role behave as LIFs with the node-management role except that
cluster-management LIFs can failover between nodes.
[-data-protocol {nfs|cifs|iscsi|fcp|fcache|none|fc-nvme|s3}, ...] - Data Protocol
Use this parameter to specify the list of data protocols that can be served by the LIF. The supported
protocols are NFS, CIFS, iSCSI, FCP, and FC-NVMe. NFS and CIFS are available by default when you create
a LIF. If you specify "none", the LIF does not support any data protocols. The values none, iscsi, fcp, and
fc-nvme cannot be combined with any other protocols.
Note
The data-protocol field must be specified when the LIF is created and cannot be modified later.
Note
The NFS protocol relies on firewall services included in the built-in "data" and "mgmt-nfs" firewall
policies. Assigning a different firewall policy might disrupt some NFS client implementations.
Note
A cluster LIF cannot be on the same subnet as a management or data LIF.
[-home-port {<netport>|<ifgrp>}] - Home Port
Use this parameter to specify the LIF's home port or interface group. The home port is the port or
interface group to which the LIF returns when the network interface revert command is run on
the LIF.
[-status-admin {up|down}] - Administrative Status
Use this parameter to specify whether the initial administrative status of the LIF is up or down. The
default setting is up. The administrative status can differ from the operational status. For example, if you
specify the status as up but a network problem prevents the interface from functioning, the operational
status remains down.
[-failover-policy {system-defined|local-only|sfo-partner-only|disabled|broadcast-domain-wide}] -
Failover Policy
Use this parameter to specify the failover policy for the LIF.
• system-defined - The system determines appropriate failover targets for the LIF. The default behavior
is that failover targets are chosen from the LIF's current hosting node and also from one other non-
partner node when possible.
• local-only - The LIF fails over to a port on the local or home node of the LIF.
• sfo-partner-only - The LIF fails over to a port on the home node or SFO partner only.
• broadcast-domain-wide - The LIF fails over to a port in the same broadcast domain as the home port.
• disabled - Failover is disabled for the LIF.
The failover policy for cluster logical interfaces is local-only and cannot be changed. The default failover
policy for data logical interfaces is system-defined. This value can be changed.
Note
Logical interfaces for SAN protocols do not support failover. Thus, such interfaces will always show
this parameter as disabled.
Note
The NFS data protocol relies on firewall services included in the built-in "data" and "mgmt-nfs"
firewall policies. Assigning a different firewall policy might disrupt some NFS client
implementations.
Note
Logical interfaces for SAN traffic do not support auto-revert. Thus, this parameter is always false
on such interfaces.
[-allow-lb-migrate {true|false}] - (DEPRECATED)-Load Balancing Migrate Allowed (privilege: advanced)
Note
This parameter has been deprecated and may be removed in a future version of Data ONTAP.
Use this parameter to specify whether load balancing migration is activated for this data LIF. The default
value of this parameter is false. If you set the value of this parameter to true, automatic revert
capability for this data LIF is disabled (the -auto-revert parameter is set to false). Also, data LIFs
that migrate as a result of load balancing adhere to network interface failover rules.
Note
During times when a LIF is hosting active NFSv4, CIFS, or NRV connections, load balancing based
LIF migrations between nodes will be temporarily disabled.
Note
Logical interfaces for SAN protocols do not support failover. Thus, this parameter cannot be
specified for such interfaces.
● Examples
The following example creates an IPv4 LIF named datalif1 and an IPv6 LIF named datalif2 on a Vserver
named vs0. Their home node is node0 and home port is e0c. The failover policy broadcast-domain-wide is
assigned to both LIFs. The firewall policy is data and the LIFs are automatically reverted to their home node
at startup and under other circumstances. The datalif1 has the IP address 192.0.2.130 and netmask
255.255.255.128, and datalif2 has the IP address 3ffe:1::aaaa and netmask length of 64.
cluster1::> network interface create -vserver vs0 -lif datalif1 -role data -home-node node0 -home-port
e0c -address 192.0.2.130 -netmask 255.255.255.128 -failover-policy broadcast-domain-wide -firewall-poli
cy data -auto-revert true
cluster1::> network interface create -vserver vs0 -lif datalif2 -role data -home-node node0 -home-port
e0c -address 3ffe:1::aaaa -netmask-length 64 -failover-policy broadcast-domain-wide -firewall-policy da
ta -auto-revert true
Related reference
network interface service-policy show (page 443)
network interface revert (page 418)
system services firewall policy show (page 1532)
system services firewall policy modify (page 1531)
network interface failover-groups commands (page 428)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface delete command deletes a logical interface (LIF) from a Vserver. Only
administratively down LIFs can be deleted. To make a LIF administratively down, use the network
interface modify command to set the "status-admin" parameter to "down".
Note
If the LIF is configured for a SAN protocol and is part of a port set, the LIF must be removed from the
port set before it can be deleted. To determine if a LIF is in a port set, use the lun portset show
command. To remove the LIF from the port set, use the lun portset remove command.
Note
On some cloud platforms, this operation might perform changes to the external route tables.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the logical interface to be deleted is located.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the logical interface to delete.
● Examples
The following example deletes a logical interface named cluslif3 that is located on a Vserver named vs0.
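The command line is not reproduced in this excerpt; based on the parameters described above, it would be:
cluster1::> network interface delete -vserver vs0 -lif cluslif3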
Related reference
network interface modify (page 414)
lun portset show (page 281)
lun portset remove (page 280)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface migrate command migrates a logical interface to a port or interface group on
the node you specify.
Note
Manual migration of a logical interface can take up to 15 seconds to complete. Also, when you
migrate a cluster logical interface, you must do so from the local node. Logical interface migration is a
best-effort command, and can only be completed if the destination node and port are operational.
Note
Logical interfaces for SAN protocols do not support migration. Attempts to do so will result in an error.
Note
On some cloud platforms, this operation might perform changes to the external route tables.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver that owns the logical interface that is to be migrated.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the logical interface that is to be migrated.
-destination-node <nodename> - Destination Node
Use this parameter to specify the node to which the logical interface is to be migrated.
[-destination-port {<netport>|<ifgrp>}] - Destination Port
Use this parameter to specify the port or interface group to which the logical interface is to be migrated.
[-force [true]] - Force Migrate Data LIF Flag (privilege: advanced)
Use this parameter to force the migration operation.
● Examples
The following example migrates a logical interface named datalif1 on a Vserver named vs0 to port e0c on a
node named node2:
cluster1::> network interface migrate -vserver vs0 -lif datalif1 -dest-node node2 -dest-port e0c
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface migrate-all command migrates all data logical interfaces from the node you
specify.
Note
Manual migration of a logical interface can take up to 15 seconds to complete. Logical interface
migration is a best-effort command and can only be completed if the destination node and port are
operational. Logical interface migration requires that the logical interface be pre-configured with valid
failover rules to facilitate failover to a remote node.
Note
Logical interfaces for SAN protocols do not support migration. Attempts to do so will result in an error.
Note
On some cloud platforms, this operation might perform changes to the external route tables.
● Parameters
-node <nodename> - Node
Use this parameter to specify the node from which all logical interfaces are migrated. Each data logical
interface is migrated to another node in the cluster, assuming that the logical interface is configured
with failover rules that specify an operational node and port.
[-port {<netport>|<ifgrp>}] - Port
Use this parameter to specify the port from which all logical interfaces are migrated. This option cannot
be used with asynchronous migrations. If this parameter is not specified, then logical interfaces will be
migrated away from all ports on the specified node.
● Examples
The following example migrates all data logical interfaces from the current (local) node.
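The command line is not reproduced in this excerpt; based on the -node parameter described above, it would be:
cluster1::> network interface migrate-all -node local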
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface modify command modifies attributes of a logical interface (LIF).
Note
You cannot modify some properties of an iSCSI or FCP LIF, such as -home-node or -home-port, if the
LIF is in a port set. To modify these properties, first remove the LIF from the port set. To determine if a
LIF is in a port set, use the lun portset show command. To remove the LIF from the port set, use
the lun portset remove command.
Note
On some cloud platforms, this operation might perform changes to the external route tables.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the LIF to be modified is located.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the name of the LIF that is to be modified.
[-service-policy <text>] - Service Policy
Use this parameter to modify the service policy associated with the LIF.
[-address <IP Address>] - Network Address
Use this parameter to modify the LIF's IP address.
Note
A cluster LIF cannot be on the same subnet as a data or management LIF.
Note
If you change this parameter for a cluster or management LIF, you must reboot the storage system
to force the change to take effect.
Note
The failover policy for cluster logical interfaces is local-only and cannot be changed. The default
failover policy for data logical interfaces is system-defined. This value can be changed.
Note
Logical interfaces for SAN protocols do not support failover. Thus, such interfaces always show this
parameter as disabled.
Note
The NFS data protocol relies on firewall services included in the built-in "data" and "mgmt-nfs"
firewall policies. Assigning a different firewall policy might disrupt some NFS client
implementations.
Note
Logical interfaces for SAN traffic do not support auto-revert. Thus, this parameter is always false
on such interfaces.
[-allow-lb-migrate {true|false}] - (DEPRECATED)-Load Balancing Migrate Allowed (privilege: advanced)
Note
This parameter has been deprecated and may be removed in a future version of Data ONTAP.
Use this parameter to modify whether or not load balancing migration is enabled for this data LIF. The
default value of this parameter is false. If you set the value of this parameter to true, the automatic
revert capability of the data LIF is disabled (the -auto-revert parameter is set to false). Also, data
LIFs that migrate as a result of load balancing adhere to network interface failover rules.
Note
During times when a LIF is hosting active NFSv4, CIFS, or NRV connections, load balancing based
LIF migrations between nodes will be temporarily disabled.
Note
Logical interfaces for SAN protocols do not support failover. Thus, this parameter cannot be
specified for such interfaces.
● Examples
The following example modifies a LIF named datalif1 on a Vserver named vs0. The LIF's netmask is
modified to 255.255.255.128.
cluster1::> network interface modify -vserver vs0 -lif datalif1 -netmask 255.255.255.128
Related reference
network interface revert (page 418)
system services firewall policy show (page 1532)
system services firewall policy modify (page 1531)
network interface failover-groups commands (page 428)
lun portset show (page 281)
lun portset remove (page 280)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Use the network interface rename command to change the name of an existing logical interface.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the logical interface to rename is located.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the name of the logical interface to rename.
-newname <text> - The new name for the interface
Use this parameter to specify the new name of the logical interface. For iSCSI and FC LIFs, the name
cannot be more than 254 characters.
● Examples
The following example renames a cluster logical interface named cluslif1 to cluslif4 on a Vserver named vs0.
cluster1::> network interface rename -vserver vs0 -lif cluslif1 -newname cluslif4
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface revert command reverts a logical interface that is not currently on its home
port to its home port, assuming that the home node and port are both operational. A logical interface's
home port is specified when the logical interface is created. Determine a logical interface's home port by
using the network interface show command.
Note
When you revert a cluster logical interface, you must do so from the local node.
Note
On some cloud platforms, this operation might perform changes to the external route tables.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the logical interface to be reverted is located.
-lif <lif-name> - Logical Interface Name
Use this parameter to specify the logical interface that is to be reverted.
Note
Logical interfaces for SAN protocols are always home. Thus, this command has no effect on such
interfaces. The same applies to logical interfaces for NAS protocols that are already home.
● Examples
The following example returns any logical interfaces that are not currently on their home ports to their
home ports.
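The command line is not reproduced in this excerpt; assuming wildcard patterns are accepted for the -vserver and -lif parameters, an invocation of this form would revert all eligible logical interfaces:
cluster1::> network interface revert -vserver * -lif *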
Related reference
network interface show (page 419)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network interface show command displays information about logical interfaces.
Running the command with the -failover parameter displays information relevant to logical interface
failover rules.
Running the command with the -status parameter displays information relevant to logical interface
operational status.
Running the command with the -by-ipspace parameter displays information relevant to logical interfaces
on a specific IPspace.
See the examples for more information.
You can specify additional parameters to display only information that matches those parameters. For
example, to display information only about logical interfaces whose operational status is down, run the
command with the -status-oper down parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-by-ipspace ]
Use this parameter to display logical-interfaces sorted by IPspace and Vserver.
| [-dns-zones ]
Use this parameter to display logical-interfaces and whether the interface is associated with a Domain
Name System (DNS) load balancing zone.
| [-failover ]
Use this parameter to display logical-interface failover information.
| [-status ]
Use this parameter to display detailed logical-interface status information.
| [-instance ]}
Use this parameter to display all the fields for the specified logical-interfaces.
[-vserver <vserver>] - Vserver Name
Use this parameter to display information only about logical interfaces on the Vserver you specify.
Use this parameter plus the -lif parameter to display detailed information only about the logical
interface you specify.
[-lif <lif-name>] - Logical Interface Name
Use this parameter to display information only about logical interfaces that match the name you specify.
Use this parameter with the -vserver parameter to display detailed information only about the logical
interface you specify.
[-service-policy <text>] - Service Policy
Use this parameter to display information only about logical interfaces that have the service policy you
specify.
[-services <LIF Service Name>, ...] - Service List
Use this parameter to display information only about logical interfaces that support all services in a
comma-separated list of service names.
[-role {cluster|data|node-mgmt|intercluster|cluster-mgmt}] - (DEPRECATED)-Role
Note
This parameter has been deprecated and may be removed in a future version of ONTAP. Use either
the -service-policy or -services parameter instead.
Use this parameter to display information only about logical interfaces that are associated with network
ports that have the role you specify.
[-data-protocol {nfs|cifs|iscsi|fcp|fcache|none|fc-nvme|s3}, ...] - Data Protocol
Use this parameter to display information only about logical interfaces that have the enabled data
protocols you specify.
[-address <IP Address>] - Network Address
Use this parameter to display information only about logical interfaces that match the IP address or
address range you specify.
[-netmask <IP Address>] - Netmask
Use this parameter to display information only about logical interfaces that have the netmask you
specify.
[-netmask-length <integer>] - Bits in the Netmask
Use this parameter to display information only about logical interfaces with a netmask that has the
number of bits you specify.
[-is-vip [true]] - Is VIP LIF
Use this parameter to display information only about logical interfaces that are VIP LIFs.
[-subnet-name <subnet name>] - Subnet Name
Use this parameter to display the logical interfaces that match the subnet name you specify.
[-home-node <nodename>] - Home Node
Use this parameter to display information only about logical interfaces that have the home node you
specify.
[-home-port {<netport>|<ifgrp>}] - Home Port
Use this parameter to display information only about logical interfaces that have the home port or
interface group you specify.
[-curr-node <nodename>] - Current Node
Use this parameter to display information only about logical interfaces that are currently located on the
node you specify.
[-curr-port {<netport>|<ifgrp>}] - Current Port
Use this parameter to display information only about logical interfaces that are currently located on the
port or interface group you specify.
[-status-oper {up|down}] - Operational Status
Use this parameter to display information only about logical interfaces that have the operational status
you specify.
[-status-extended <text>] - Extended Status
Use this parameter to display information only about logical interfaces that match the extended status
that you specify.
[-numeric-id <integer>] - Numeric ID (privilege: advanced)
Use this parameter to display information only about logical interfaces with the numeric ID (or range of
IDs) you specify. The numeric ID is an integer that identifies the logical interface in the cluster.
[-is-home {true|false}] - Is Home
Use this parameter to display information only about logical interfaces that are (true) or are not (false)
currently located on their home node and port.
[-status-admin {up|down}] - Administrative Status
Use this parameter to display information only about logical interfaces that have the administrative
status you specify.
[-failover-policy {system-defined|local-only|sfo-partner-only|disabled|broadcast-domain-wide}] -
Failover Policy
Use this parameter to display information only about logical interfaces that use the failover policy you
specify.
[-firewall-policy <policy>] - Firewall Policy
Use this parameter to display information only about logical interfaces that use the firewall policies you
specify.
[-auto-revert {true|false}] - Auto Revert
Use this parameter to display information only about logical interfaces that have the auto-revert setting
you specify.
[-sticky {true|false}] - Sticky Flag (privilege: advanced)
Use this parameter to display information only about logical interfaces that are "sticky". A sticky logical
interface is one that has been manually migrated to another node and is not subject to auto-revert
settings. A sticky logical interface remains at the migrated location until it is manually reverted or until
it fails over to another node.
[-dns-zone {<zone-name>|none}] - Fully Qualified DNS Zone Name
Use this parameter to display information only about logical interfaces in the specified DNS zone.
[-listen-for-dns-query {true|false}] - DNS Query Listen Enable
Use this parameter to display information only about logical interfaces that have the DNS query listen
value you specify.
[-allow-lb-migrate {true|false}] - (DEPRECATED)-Load Balancing Migrate Allowed (privilege:
advanced)
Note
This parameter has been deprecated and may be removed in a future version of Data ONTAP.
Use this parameter to display information only about logical interfaces for which load balancing
migration is activated (true) or not activated (false).
[-lb-weight {load|0..100}] - Load Balanced Weight (privilege: advanced)
Use this parameter to display information only about logical interfaces that have the load balancing
weight you specify.
[-failover-group <failover-group>] - Failover Group Name
Use this parameter to display information only about logical interfaces that are in the failover group you
specify. Logical interfaces in the same failover group are capable of failing over to the same set of ports.
[-wwpn <text>] - FCP WWPN
Use this parameter to display information only about logical interfaces that have the Fibre Channel
Protocol port identifier (World Wide Port Name) you specify.
[-address-family {ipv4|ipv6|ipv6z}] - Address family
Use this parameter to view the address family that is in use on the interface. Only IPv4 and IPv6 non-
zoned addresses can be configured. Configuration of IPv6z addresses is not allowed.
[-comment <text>] - Comment
Use this parameter to display information only about logical interfaces that have the comment you
specify.
[-ipspace <IPspace>] - IPspace of LIF
Use this parameter to display information only about logical interfaces on the IPspace you specify.
[-is-dns-update-enabled {true|false}] - Is Dynamic DNS Update Enabled?
Use this parameter to display information only about logical interfaces that have (true) or do not have
(false) dynamic DNS updates enabled for them.
[-probe-port <integer>] - Probe-port for Cloud Load Balancer
Use this parameter to display the probe-port for the logical interface in the Azure environment.
[-broadcast-domain <text>] - Broadcast Domain
Use this parameter to display the broadcast domain that contains the home port of the logical interface.
[-vserver-type <vserver type>] - Vserver Type
Use this parameter to display information only about logical interfaces owned by Vservers of the
specified type.
● Examples
The following example displays general information about all logical interfaces.
The following example displays failover information about all logical interfaces.
                               node0:e0e, node0:e0f,
                               node1:e0M, node1:e0d,
                               node1:e0e, node1:e0f
node0_mgmt1  node0:e0M  local-only  Default
             Failover Targets: node0:e0M, node0:e0d,
                               node0:e0e, node0:e0f
node1_mgmt1  node1:e0M  local-only  Default
             Failover Targets: node1:e0M, node1:e0d,
                               node1:e0e, node1:e0f
Cluster
node0_clus1  node0:e0a  local-only  Cluster
             Failover Targets: node0:e0a, node0:e0b
node0_clus2  node0:e0a  local-only  Cluster
             Failover Targets: node0:e0b, node0:e0a
node1_clus1  node1:e0a  local-only  Cluster
             Failover Targets: node1:e0a, node1:e0b
node1_clus2  node1:e0a  local-only  Cluster
             Failover Targets: node1:e0b, node1:e0a
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface start-cluster-check command initiates an accessibility check from every
logical interface to every aggregate. Automatic checks run periodically, but this command manually initiates
a check immediately.
This command produces no direct output. Any errors encountered during the check are reported in the event
log. See the event log show command for more information.
● Examples
This example shows an execution of this command, with all parameters and output.
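As a sketch, an invocation at the advanced privilege prompt would look like the following; the command produces no direct output, so any errors found must be read from the event log:

```
cluster1::*> network interface start-cluster-check
```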
Related reference
event log show (page 171)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface capacity show command displays the number of IP LIFs of role data supported
on the cluster, as well as the number of IP LIFs of role data currently configured on the cluster.
Note
The number of IP LIFs of role data that are supported on a node depends on the hardware platform
and the Cluster's Data ONTAP version. If one or more nodes in the cluster cannot support additional
LIFs, then none of the nodes in the cluster can support additional LIFs.
● Examples
The following displays the IP data LIF capacity.
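A representative invocation follows; the reported capacity values depend on the hardware platform and ONTAP version of the nodes in the cluster:

```
cluster1::> network interface capacity show
```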
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface capacity details show command displays the number of IP LIFs of role data that can be configured on each node, the number of IP LIFs of role data that are supported on each node, and the number of IP LIFs of role data that are configured to be homed on each node.
Note
The number of IP LIFs of role data that are supported on a node depends on the hardware platform
and the Cluster's Data ONTAP version. If one or more nodes in the cluster cannot support additional
LIFs, then none of the nodes in the cluster can support additional LIFs.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Use this parameter to specify the node for which to obtain data LIF capacity.
[-capacity-for-node <integer>] - Number of IP data LIFs that can be configured on the node
This parameter specifies the number of IP LIFs of role data that can be configured on the node at the
currently running Data ONTAP version. To view the version of a node, use the cluster image show
command.
[-limit-for-node <integer>] - Number of IP data LIFs that are supported on the node
This parameter specifies the number of IP LIFs of role data that are supported on the node at the current
effective cluster version (ECV). To view the version of a node, use the cluster image show command.
[-count-for-node <integer>] - Number of IP data LIFs that are assigned to the node
This parameter specifies the number of IP LIFs of role data currently configured to be homed on the
node. To view LIFs homed on this node, use the network interface show -home-node command.
● Examples
The following displays the IP data LIF capacity.
Related reference
cluster image show (page 83)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command identifies logical interfaces (LIFs) at risk of becoming inaccessible if their hosting nodes were
to experience an outage. The source-nodes parameter is the only required input.
The tuple <destination-nodes, vserver-name, lif-name> is sufficient to uniquely identify a record in the
returned listing. All fields other than source-nodes can be filtered on in the usual fashion. There are some
examples of this filtering below.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-destination-nodes <nodename>, ...] - Set Of Nodes Over Capacity
Use this parameter to display the nodes an at-risk LIF or LIFs could fail over to.
[-vserver-name <vserver>] - Vserver Name
Use this parameter to display only LIFs on the Vserver you specify.
[-lif-name <lif-name>] - LIF Name
Use this parameter to display at-risk information only about the LIF or LIFs whose name you specify.
-source-nodes <nodename>, ... - Nodes Going Down
List of nodes to test. At-risk LIFs currently hosted on these nodes will be identified. The list should
contain no more than half the nodes in the cluster.
[-over-amount <integer>] - Amount Capacity Exceeded
Use this parameter to select only at-risk LIFs associated with a set of destination nodes whose amount
over capacity matches the number you specify.
Note that the number of LIFs considered to be at risk may be higher than the actual amount by which a given set of nodes is over capacity. Once a set of nodes is determined to be potentially over capacity, all LIFs whose set of failover target nodes is an exact match are marked as at risk. The amount over capacity is an upper bound on the number of LIFs that could become unhosted if LIFs were to fail over in a random order, each to a target randomly selected from that LIF's configured failover targets.
[-failover-group <failover-group>] - Failover Group Name
Use this parameter to display information only about at-risk LIFs whose failover-group you specify.
[-failover-policy {system-defined|local-only|sfo-partner-only|disabled|broadcast-domain-wide}] -
Failover Policy
Use this parameter to display information only about at-risk LIFs whose failover-policy you specify.
● Examples
The following example shows all the at-risk LIFs for a specific two-node outage in a six-node cluster.
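As a sketch, an invocation testing a two-node outage (node names node1 and node5 are illustrative; -source-nodes is the only required parameter) would be:

```
cluster1::> network interface check failover show -source-nodes node1,node5
```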
The following example shows the same two-node outage scenario, but now with some filtering applied to
the results.
cluster1::> network interface check failover show -source-nodes node1,node5 -destination-nodes node2,node3,node4,node6 -failover-group Def*
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The network interface dns-lb-stats show command displays the statistics for DNS load-balancing
lookups for the zones belonging to the specified Vserver. These statistics represent the data for the Vserver
on the local node. The following counts can be seen in the statistics output:
• success-count : Number of successful lookups.
• authoritative-count : Number of authoritative answers sent.
• nonauthoritative-count : Number of non-authoritative answers sent.
• rr-set-missing-count : Number of times the RR set was missing.
• domain-missing-count : Number of times the domain could not be found.
• failure-count : Number of failed lookups.
• dropped-count : Number of lookups dropped.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver
Use this parameter to display DNS load-balancer statistics only for the specified Vservers.
[-zone <text>] - DNS Zone
Use this parameter to display DNS load-balancer statistics only for the specified DNS zones.
[-success-count <integer>] - Successful Lookup Count
Use this parameter to display DNS load-balancer statistics only for the specified number of successful
lookups.
[-authoritative-count <integer>] - Authoritative Answer Count
Use this parameter to display DNS load-balancer statistics only for the specified number of authoritative
answers sent.
[-nonauthoritative-count <integer>] - Non Authoritative Answer Count
Use this parameter to display DNS load-balancer statistics only for the specified number of non-
authoritative answers sent.
[-rr-set-missing-count <integer>] - RR Set Missing Count
Use this parameter to display DNS load-balancer statistics only for the specified number of times the RR
set was missing.
● Examples
The following example displays stats for the zone "x.com".
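A representative invocation at the advanced privilege prompt, using the illustrative zone name from the sentence above:

```
cluster1::*> network interface dns-lb-stats show -zone x.com
```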
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface failover-groups add-targets command enables you to add a list of
failover targets such as network ports, interface groups, or VLANs to an existing logical interface failover
group.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to extend.
-targets <<node>:<port>>, ... - Failover Targets
Use this parameter to specify the failover targets such as network ports, interface groups, or VLANs you
wish to add to the failover group.
● Examples
This example shows the failover group "clyde" being extended to include additional failover targets.
cluster1::> network interface failover-groups add-targets -vserver vs1 -failover-group clyde -targets xena1:e0c, xena1:e0d-100, xena2:a0a
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface failover-groups create command creates a grouping of failover targets for
logical interfaces on one or more nodes. Use this command to create a new failover group of network ports, interface groups, or VLANs.
Note
Interfaces for SAN protocols do not support failover. Such interfaces are not valid failover targets.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the name of the logical interface failover group that you want to create.
-targets <<node>:<port>>, ... - Failover Targets
Use this parameter to specify the list of failover targets (network ports, interface groups, or VLANs on a
node) belonging to this failover group.
● Examples
The following example shows how to create a failover group named failover-group_2 containing ports e1e
and e2e on node Xena.
cluster1::> network interface failover-groups create -vserver vs0 -failover-group failover-group_2 -targets xena:e1e,xena:e2e
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface failover-groups delete command deletes a logical interface failover group.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the name of the logical interface failover group to be deleted.
● Examples
The following example shows how to delete a failover group named failover-group_2.
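A representative invocation, reusing the illustrative Vserver and failover group names from the create example earlier in this section:

```
cluster1::> network interface failover-groups delete -vserver vs0 -failover-group failover-group_2
```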
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface failover-groups modify command enables you to modify the list of network ports, interface groups, or VLANs belonging to an existing logical interface failover group. The specified list overwrites the list of network ports, interface groups, and VLANs currently belonging to the logical interface failover group.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to modify.
[-targets <<node>:<port>>, ...] - Failover Targets
Use this parameter to specify the network ports, interface groups, or VLANs that you want the failover group to contain.
● Examples
This example shows the failover group "clyde" being modified to now contain the specified network ports.
cluster1::> network interface failover-groups modify -vserver vs1 -failover-group clyde -targets xena1:e0c, xena1:e0d-100, xena2:a0a
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface failover-groups remove-targets command enables you to specify a list of
failover targets such as network ports, interface groups, or VLANs to be removed from an existing logical
interface failover group.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to remove failover targets from.
-targets <<node>:<port>>, ... - Failover Targets
Use this parameter to specify the failover targets such as network ports, interface groups, or VLANs you
wish to remove from the failover group.
● Examples
This example shows the failover targets xena1:e0c and xena1:e0d-100 being removed from the failover
group "clyde".
cluster1::> network interface failover-groups remove-targets -vserver vs1 -failover-group clyde -targets xena1:e0c, xena1:e0d-100, xena2:a0a
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface failover-groups rename command enables you to rename an existing
logical interface failover group.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver from which this failover group is accessible.
-failover-group <text> - Failover Group Name
Use this parameter to specify the failover group that you want to rename.
-new-failover-group-name <text> - New name
Use this parameter to specify the new name of the failover group.
● Examples
This example shows the failover group "clusterwide" being renamed "clyde".
cluster1::> network interface failover-groups rename -vserver vs1 -failover-group clusterwide -new-failover-group-name clyde
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network interface failover-groups show command displays information about logical interface
failover groups.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver Name
Use this parameter to display information only about the logical interface failover groups that have the
target Vserver you specify.
[-failover-group <text>] - Failover Group Name
Use this parameter to display information only about the logical interface failover groups you specify.
[-targets <<node>:<port>>, ...] - Failover Targets
Use this parameter to display information only about the logical interface failover groups that have the failover target (physical port, interface group, or VLAN) you specify.
[-broadcast-domain <Broadcast Domain>] - Broadcast Domain
Use this parameter to display information only about the logical interface failover groups that have the
broadcast domain you specify.
● Examples
The following example displays information about all logical interface failover groups on a two node cluster.
                    node2:e1a, node2:e2a,
cluster1
         Default    node1:e0M, node1:e0a,
                    node1:e0b, node1:e0c,
                    node1:e0d, node2:e0M,
                    node2:e0a, node2:e0b,
                    node2:e0c, node2:e0d
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The network interface lif-weights show command displays the weights assigned to each LIF in a
DNS load-balancing zone in a Vserver.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver
Use this parameter to display information only for the specified Vservers.
[-zone <text>] - DNS Zone
Use this parameter to display information only for the specified DNS zones.
[-address <IP Address>] - Network Address
Use this parameter to display information only for the specified IP addresses.
[-weight <double>] - Load Balancer Weight
Use this parameter to display information only for the specified load balancer weights.
● Examples
The following example displays LIF weights for vserver "vs1".
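A representative invocation at the advanced privilege prompt, using the illustrative Vserver name from the sentence above:

```
cluster1::*> network interface lif-weights show -vserver vs1
```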
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy add-service command adds an additional service to an
existing service-policy. When an allowed address list is specified, the list applies to only the service being
added. Existing services included in this policy are not impacted.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be updated.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be updated.
-service <LIF Service Name> - Service entry to be added
Use this parameter to specify the name of the service to be added to the existing service policy.
[-allowed-addresses <IP Address/Mask>, ...] - Allowed address ranges for the service
Use this parameter to specify a list of subnet masks for addresses that are allowed to access this service.
Use the value 0.0.0.0/0 to represent the wildcard IPv4 address and ::/0 to represent the wildcard
IPv6 address.
● Examples
The following example shows the addition of a service to an existing service policy.
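As a sketch, with illustrative Vserver, policy, and service names:

```
cluster1::> network interface service-policy add-service -vserver cluster1 -policy custom1 -service management-ssh
```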
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy clone command creates a new service policy that includes
the same services and allowed addresses as an existing policy. Once the new service policy has been
created, it can be modified as necessary without impacting the original policy.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be cloned.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be cloned.
-target-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the vserver on which the new service policy should be created.
-target-policy <text> - New Service Policy Name
Use this parameter to specify the name of the new service policy.
● Examples
The following example shows the cloning of a service policy.
ipspace1
         default-intercluster    intercluster-core: 0.0.0.0/0
                                 management-https: 0.0.0.0/0

cluster1::> network interface service-policy clone -vserver cluster1 -policy custom1 -target-vserver ipspace1 -target-policy custom2

ipspace1
         custom2                 intercluster-core: 0.0.0.0/0
                                 management-core: 0.0.0.0/0
                                 management-ssh: 0.0.0.0/0
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy create command creates a new service policy with a list of
included services. LIFs can reference this policy to control the list of services that they are able to transport
on their network. Services can represent applications accessed by a LIF as well as applications served by this
cluster.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver on which the service policy will be created.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be created.
[-services <LIF Service Name>, ...] - Included Services
Use this parameter to specify a list of services that should be included in this policy.
[-allowed-addresses <IP Address/Mask>, ...] - Allowed Addresses
Use this parameter to specify a list of subnet masks for addresses that are allowed to access the services
in this policy. Use the value 0.0.0.0/0 to represent the wildcard IPv4 address and ::/0 to represent
the wildcard IPv6 address.
● Examples
The following example shows the creation of a service policy with no initial services.

cluster1
         empty     -

The following example shows the creation of a new service policy with a specified service list.

cluster1::> network interface service-policy create -vserver cluster1 -policy custom -services intercluster-core,management-ssh

cluster1
         custom    intercluster-core: 0.0.0.0/0
                   management-ssh: 0.0.0.0/0
         empty     -
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy delete command deletes an existing service policy.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be deleted.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be deleted.
● Examples
The following example shows the deletion of a service policy.
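A representative invocation, with illustrative Vserver and policy names:

```
cluster1::> network interface service-policy delete -vserver cluster1 -policy custom1
```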
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy modify-service command modifies the policy-specific
attributes of a service that is already included in a particular service policy. Other services in the policy are
not impacted by the change.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be updated.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be updated.
-service <LIF Service Name> - Service entry to be modified
Use this parameter to specify the name of the service to be updated.
-allowed-addresses <IP Address/Mask>, ... - Allowed address ranges for the service
Use this parameter to specify a list of subnet masks for addresses that are allowed to access this service.
Use the value 0.0.0.0/0 to represent the wildcard IPv4 address and ::/0 to represent the wildcard
IPv6 address.
● Examples
The following example shows the modification of a service on an existing service policy.
management-https: 0.0.0.0/0
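A hypothetical invocation consistent with this command's parameters (the Vserver, policy, and service names are illustrative):

```
cluster1::> network interface service-policy modify-service -vserver cluster1 -policy custom1 -service management-https -allowed-addresses 0.0.0.0/0
```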
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy remove-service command removes an individual service from an existing service policy. Other services in the policy are not impacted by the change.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be updated.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be updated.
-service <LIF Service Name> - Service entry to be removed
Use this parameter to specify the name of the service to be removed from the existing service policy.
● Examples
The following example shows the removal of a service from an existing service policy.
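As a sketch, with illustrative Vserver, policy, and service names:

```
cluster1::> network interface service-policy remove-service -vserver cluster1 -policy custom1 -service management-ssh
```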
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy rename command assigns a new name to an existing service
policy without disrupting the LIFs using the policy.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be renamed.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be renamed.
-new-name <text> - New Service Policy Name
Use this parameter to specify the new name for the service policy.
● Examples
The following example shows the renaming of a service policy.
cluster1::> network interface service-policy rename -vserver cluster1 -policy custom -new-name system
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network interface service-policy restore-defaults command restores a built-in service policy to its original state. The default list of services replaces any customizations that have been applied by an administrator, and all included services are updated to use the default allowed address list.
● Parameters
-vserver <vserver name> - Vserver
Use this parameter to specify the name of the Vserver of the service policy to be restored.
-policy <text> - Policy Name
Use this parameter to specify the name of the service policy to be restored.
● Examples
The following example shows the restoration of a service policy's default settings.
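A representative invocation, using the built-in default-intercluster policy (shown elsewhere in this chapter) as the illustrative target:

```
cluster1::> network interface service-policy restore-defaults -vserver cluster1 -policy default-intercluster
```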
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network interface service-policy show command displays existing service policies.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
Selects service policies that match the specified vserver name.
[-policy <text>] - Policy Name
Selects service policies that match the specified service policy name.
[-services <LIF Service Name>, ...] - Included Services
Selects service policies that contain all services in the specified list of service names.
[-service-allowed-addresses <text>, ...] - Service: Allowed Addresses
Selects service policies that contain all "<service>:<allowed-addresses>" in the specified list of
addresses.
● Examples
The following example displays the built-in service policies.
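A representative invocation, run without filtering parameters so that all policies are listed:

```
cluster1::> network interface service-policy show
```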
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network interface service show command displays available services for IP LIFs and the TCP or UDP ports that each service listens on. The listed ports correspond to the well-known ports on which each service can be expected to open a listening socket. Services that do not listen for ingress connections are presented with an empty port list.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-restrictions ] (privilege: advanced)
If you specify the -restrictions parameter, the command displays available services for IP LIFs and the usage restrictions for each service. The restrictions determine which LIFs are permitted to use each service and what restrictions the service implies for the LIFs that use it.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-service <LIF Service Name>] - Service Name
Selects services that match the specified service name.
[-ports <integer>, ...] - TCP/UDP Port Numbers
Selects services that contain all IP ports in the specified list.
[-protocol-ports <text>, ...] - Protocol: Port Numbers
Selects services that match the <protocol>:<port> combination.
[-vserver-policy <svc_vserver_policy>] - Vserver Restrictions
Selects services that match a specific vserver restriction.
[-failover-policy <svc_failover_policy>] - Failover Restrictions
Selects services that match a specific interface failover restriction.
● Examples
The following example displays the built-in services.
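A representative invocation, run without filtering parameters so that all services are listed:

```
cluster1::> network interface service show
```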
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
IPspaces are distinct IP address spaces in which Storage Virtual Machines (SVMs) reside. The "Cluster"
IPspace and "Default" IPspace are created by default. You can create more custom IPspaces when you need
your SVMs to have overlapping IP addresses, or you need more control over networking configurations for
cluster peering. Refer to the "Network Management Guide" for the maximum number of custom IPspaces supported on your system.
● Parameters
-ipspace <IPspace> - IPspace name
The name of the IPspace to be created.
• The name must contain only the following characters: A-Z, a-z, 0-9, ".", "-" or "_".
• The first character of each label, delimited by ".", must be one of the following characters: A-Z or a-z.
• The last character of each label, delimited by ".", must be one of the following characters: A-Z, a-z or
0-9.
• The maximum supported length is 47 characters.
• The system reserves the following names: "all", "local" and "localhost".
• The system provides the following IPspaces: "Cluster" and "Default".
● Examples
The following example creates IPspace "ips1".
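A representative invocation, using the illustrative IPspace name from the sentence above:

```
cluster1::> network ipspace create -ipspace ips1
```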
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Delete an IPspace that contains no ports or Vservers.
● Parameters
-ipspace <IPspace> - IPspace name
The name of the IPspace to be deleted. If the IPspace is associated with one or more logical-interfaces,
you must delete them before you can delete the IPspace.
● Examples
The following example deletes the IPspace "ips1".
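A representative invocation, using the illustrative IPspace name from the sentence above:

```
cluster1::> network ipspace delete -ipspace ips1
```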
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Rename an IPspace.
● Parameters
-ipspace <IPspace> - IPspace name
The name of the IPspace to be renamed.
-new-name <IPspace> - New Name
The new name for the IPspace.
• The name must contain only the following characters: A-Z, a-z, 0-9, ".", "-" or "_".
• The first character of each label, delimited by ".", must be one of the following characters: A-Z or a-z.
• The last character of each label, delimited by ".", must be one of the following characters: A-Z, a-z or
0-9.
• The maximum supported length is 47 characters.
• The system reserves the following names: "all", "cluster", "local" and "localhost".
● Examples
The following example renames IPspace "ips1" to "ips2".
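A representative invocation, using the illustrative IPspace names from the sentence above:

```
cluster1::> network ipspace rename -ipspace ips1 -new-name ips2
```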
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display network IPspaces.
● Parameters
{ [-fields <fieldname>, ...]
Specify the fields to be displayed for each IPspace.
| [-instance ]}
Display all parameters of the IPspace objects.
[-ipspace <IPspace>] - IPspace name
Display the names of the IPspaces.
[-ports <<node>:<port>>, ...] - Ports
The list of network ports assigned to each IPspace.
[-broadcast-domains <Broadcast Domain>, ...] - Broadcast Domains
The list of broadcast domains that belong to the IPspace.
[-vservers <vserver name>, ...] - Vservers
The list of Vservers assigned to each IPspace.
● Examples
The following example displays general information about IPspaces.
IPspace    Vserver List          Broadcast Domains
---------- --------------------- ------------------
Cluster    Cluster               -
Default    cluster1, vs1, vs2    br1, br2, br3
2 entries were displayed.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network ndp default-router delete-all command deletes default router lists from the specified
IPspace.
● Parameters
-ipspace <IPspace> - IPspace Name
Use this parameter to specify the IPspace where the default routers are to be deleted.
● Examples
The following example deletes default routers from IPspace ips1.
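An illustrative form of the command (the IPspace name is taken from the description above; output not shown):
cluster1::*> network ndp default-router delete-all -ipspace ips1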
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network ndp default-router show command displays Neighbor Discovery Protocol (NDP) default
routers learned on a specified port.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays the NDP default routers from the specified node.
● Examples
The following example displays NDP default routers on local port e0f.
Node: node1
IPspace: Default
Port Router Address Flag Expire Time
-------- -------------------------- -------------- --------------
e0f fe80::5:73ff:fea0:107 none 0d0h23m9s
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The network ndp neighbor create command creates a static Neighbor Discovery Protocol (NDP)
neighbor entry within a Vserver.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the NDP neighbor is to be created.
-neighbor <IP Address> - Neighbor Address
Use this parameter to specify the neighbor's IPv6 address.
-mac-address <MAC Address> - MAC Address
Use this parameter to specify the neighbor's MAC address.
● Examples
The following example creates an NDP neighbor entry within Vserver vs0.
cluster1::*> network ndp neighbor create -vserver vs0 -neighbor 20:20::20 -mac-address 10:10:10:0:0:1
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The network ndp neighbor delete command deletes a static Neighbor Discovery Protocol (NDP)
neighbor from a Vserver.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the NDP neighbor is to be deleted.
-neighbor <IP Address> - Neighbor Address
Use this parameter to specify the neighbor's IPv6 address.
● Examples
The following example deletes an NDP neighbor entry within Vserver vs0.
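A representative invocation (the neighbor address is illustrative; output not shown):
cluster1::*> network ndp neighbor delete -vserver vs0 -neighbor 20:20::20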
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The network ndp neighbor show command displays a group of static Neighbor Discovery Protocol (NDP)
neighbors within one or more Vservers. You can view static NDP neighbors within specified Vservers,
neighbors with specified IPv6 address, and neighbors with specified MAC address.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Displays the static NDP neighbors that have the specified Vserver as their origin.
[-neighbor <IP Address>] - Neighbor Address
Displays the static NDP neighbors that have the specified IPv6 address.
[-mac-address <MAC Address>] - MAC Address
Displays the static NDP neighbors that have the specified MAC address.
● Examples
The following example displays all of the static NDP neighbors configured on Vserver vs0.
20:20::20 01:01:01:01:01:01
2 entries were displayed.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network ndp neighbor active-entry delete command deletes a Neighbor Discovery Protocol
(NDP) neighbor entry on the specified port from a given Vserver's subnet group.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the neighbor entry is to be deleted.
-vserver <vserver> - System or Admin Vserver Name
Use this parameter to specify the System or Admin Vserver on which the neighbor entry is to be deleted.
-subnet-group <IP Address/Mask> - Subnet Group
Use this parameter to specify the subnet group from which the neighbor entry is to be deleted.
-neighbor <IP Address> - Neighbor
Use this parameter to specify the IPv6 address of the neighbor entry which is to be deleted.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the port on which the neighbor entry is to be deleted.
● Examples
The following example deletes a neighbor entry from the Admin Vserver cluster1:
cluster1::*> network ndp neighbor active-entry delete -vserver cluster1 -node local -subnet-group ::/0 -neighbor fe80:4::5:73ff:fea0:107 -port e0d
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network ndp neighbor active-entry show command displays Neighbor Discovery Protocol (NDP)
neighbor cache entries on one or more nodes. You can view NDP neighbors within specified nodes and
within specified System or Admin Vservers.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-verbose ]
Displays the expire time, state, is-router, and probe count fields.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays the NDP neighbors from the specified node.
[-vserver <vserver>] - System or Admin Vserver Name
Displays the NDP neighbors from the specified System or Admin Vserver. Data and Node Vservers do not
have dynamic NDP neighbors.
[-subnet-group <IP Address/Mask>] - Subnet Group
Displays the NDP neighbors in the specified subnet group.
[-neighbor <IP Address>] - Neighbor
Displays the NDP neighbors that have the specified IPv6 address.
[-port {<netport>|<ifgrp>}] - Port
Displays the NDP neighbors on the specified port.
[-mac-address <MAC Address>] - MAC Address
Displays the NDP neighbors that have the specified MAC address.
[-expire-time {[<integer>d][<integer>h][<integer>m][<integer>s]|never|expired}] - Expire Time
Displays the NDP neighbors that have the specified expire time.
[-state {<nostate|incomplete|reachable|stale|delay|probe|unknown>}] - State
Displays the NDP neighbors in the specified state.
[-is-router {true|false}] - Is Router
Displays the NDP neighbors that match the specified router status.
[-probe-count <integer>] - Probe Count
Displays the NDP neighbors with the specified probe count. Probe count is the number of times that this
neighbor's MAC address has been queried.
[-is-static {true|false}] - Is Static
Displays the NDP neighbors which are statically configured.
● Examples
The following example displays NDP neighbors on the Admin Vserver cluster1:
Node: node1
Vserver: cluster1
Subnet Group: ::/0
Neighbor MAC Address Port
------------------------------ -------------------- --------
fe80:4::5:73ff:fea0:107 00:05:73:a0:01:07 e0d
fe80:4::226:98ff:fe0c:b6c1 00:26:98:0c:b6:c1 e0d
fe80:4::4255:39ff:fe25:27c1 40:55:39:25:27:c1 e0d
3 entries were displayed.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network ndp prefix delete-all command deletes all prefixes learned from the specified IPspace.
● Parameters
-ipspace <IPspace> - IPspace Name
Use this parameter to specify the IPspace where the IPv6 prefixes are to be deleted.
● Examples
The following example deletes all IPv6 prefixes within IPspace ips1.
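An illustrative form of the command (the IPspace name is taken from the description above; output not shown):
cluster1::*> network ndp prefix delete-all -ipspace ips1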
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network ndp prefix show command displays IPv6 prefixes on one or more nodes.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-verbose ]
Displays the valid-lifetime, preferred-lifetime, origin and advertising-router fields.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays the IPv6 prefixes from the specified node.
[-ipspace <IPspace>] - IPspace name
Displays the IPv6 prefixes from the specified IPspace.
[-port {<netport>|<ifgrp>}] - Port
Displays the IPv6 prefixes on the specified port.
[-prefix <IP Address/Mask>] - Prefix
Displays the IPv6 prefixes with the specified prefix value.
[-flag {none|on-link|autonomous|on-link-autonomous}] - Flag
Displays the IPv6 prefixes with the specified flag. The flag indicates whether a prefix is on-link and
whether it can be used in autonomous address configuration.
[-valid-lifetime {<unsigned integer>|infinity}] - Valid Lifetime
Displays the IPv6 prefixes having the specified valid lifetime in seconds.
[-preferred-lifetime {<unsigned integer>|infinity}] - Preferred Lifetime
Displays the IPv6 prefixes having the specified preferred lifetime in seconds.
[-expire-time {[<integer>d][<integer>h][<integer>m][<integer>s]|never|expired}] - Expire Time
Displays the IPv6 prefixes having the specified expire time.
[-origin {router-advertise|renumber-request|static|kernel|unknown}] - Origin of the Prefix
Displays the IPv6 prefixes with the specified origin.
[-advertising-router <IP Address>, ...] - Router that Advertised the Prefix
Displays the IPv6 prefixes which are propagated by the specified router list.
● Examples
The following example displays IPv6 prefixes on port e0f.
Node: node1
IPspace: Default
Port Prefix Flag Expire Time
--------- ------------------------- ------------------ -------------
e0f fd20:8b1e:b255:814e::/64 on-link-autonomous 29d23h56m48s
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command enables or disables cluster health notifications on the specified node.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which the cluster health notification status will be modified.
[-enabled {true|false}] - Cluster Health Notifications Enabled
Setting this parameter to true enables cluster health notification. Setting it to false disables cluster
health notification.
● Examples
The following example modifies the cluster health notification status for a node:
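A representative invocation (node name and setting are illustrative; output not shown):
cluster1::> network options cluster-health-notifications modify -node node1 -enabled true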
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network options cluster-health-notifications show command displays whether the node's
cluster health notifications are enabled.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the node for which the cluster health notification status will be displayed.
[-enabled {true|false}] - Cluster Health Notifications Enabled
Selects the entries that match this parameter value.
● Examples
The following example displays the cluster health notification status for a node:
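A representative invocation (output not shown):
cluster1::> network options cluster-health-notifications show -node node1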
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command enables or disables the automatic detection of a switchless cluster. A switchless cluster
consists of two nodes where the cluster ports are directly connected without a switch between them.
● Parameters
[-enabled {true|false}] - Enable Switchless Cluster Detection
This parameter specifies whether switchless cluster detection is enabled or not. Setting this parameter to
true enables switchless cluster detection.
● Examples
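An illustrative invocation (the value shown is an example only):
cluster1::*> network options detect-switchless-cluster modify -enabled true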
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network options detect-switchless-cluster show command displays whether switchless cluster detection
is enabled.
● Examples
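An illustrative invocation (output not shown):
cluster1::*> network options detect-switchless-cluster show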
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command sets the state of IPv6 options for the cluster.
● Parameters
[-enabled [true]] - IPv6 Enabled
Setting this parameter to true enables IPv6 for the cluster. IPv6 cannot be disabled once it is enabled for
the cluster. Call technical support for guidance regarding disabling IPv6.
[-is-ra-processing-enabled {true|false}] - Router Advertisement (RA) Processing Enabled
Setting this parameter to true enables the cluster to process IPv6 router advertisements. Setting it to
false disables router advertisement processing by the cluster.
● Examples
The following example enables IPv6 Router Advertisement processing for the cluster:
cluster1::> network options ipv6 modify -is-ra-processing-enabled true
The following example disables IPv6 Router Advertisement processing for the cluster:
cluster1::> network options ipv6 modify -is-ra-processing-enabled false
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the current state of IPv6 options for the cluster.
● Examples
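An illustrative invocation (output not shown):
cluster1::> network options ipv6 show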
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command sets the state of the geometric mean algorithm for load balancing.
● Parameters
[-enable {true|false}] - Geometric Mean Algorithm for load balancing
Setting this parameter to true enables the geometric mean algorithm for load balancing. Setting it to
false disables the geometric mean algorithm for the cluster.
● Examples
The following example enables the geometric mean algorithm for load balancing.
cluster1::> network options load-balancing modify -enable true
The following example disables the geometric mean algorithm for load balancing.
cluster1::> network options load-balancing modify -enable false
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command displays whether the geometric mean load balancing algorithm is in use.
● Examples
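An illustrative invocation (output not shown):
cluster1::*> network options load-balancing show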
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network options multipath-routing modify command is used to modify cluster-wide multipath
routing configuration.
● Parameters
[-is-enabled {true|false}] - Is Multipath Routing Enabled
This parameter specifies whether multipath routing configuration is enabled or not. Setting this
parameter to true enables multipath routing for all nodes in the cluster.
● Examples
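An illustrative invocation (the value shown is an example only):
cluster1::*> network options multipath-routing modify -is-enabled true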
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network options multipath-routing show command displays the multipath routing
configuration for the cluster.
● Examples
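An illustrative invocation (output not shown):
cluster1::*> network options multipath-routing show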
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command disables the given port health monitors for the given IPspaces in the cluster.
● Parameters
-ipspace <IPspace> - IPspace Name
The name of the IPspace for which the specified port health monitors are disabled.
-health-monitors {l2-reachability|link-flapping|crc-errors|vswitch-link}, ... - List of Port Health
Monitors to Disable
The port health monitors to disable.
● Examples
The following example disables the "l2_reachability" health monitor for the "Default" IPspace.
Note
The status of the "link_flapping" monitor is unaffected by the command.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command enables the given port health monitors for the given IPspaces in the cluster.
● Parameters
-ipspace <IPspace> - IPspace Name
The name of the IPspace for which the specified port health monitors are enabled.
-health-monitors {l2-reachability|link-flapping|crc-errors|vswitch-link}, ... - List of Port Health
Monitors to Enable
The port health monitors to enable. When the l2_reachability health monitor is enabled, it runs in an
"unpromoted" state. While in this state, the monitor does not mark any ports as unhealthy due to the
l2_reachability health check. The monitor is promoted in the "Cluster" IPspace when the "Cluster"
broadcast domain is found to have passed the l2_reachability health check. An EMS event called
"vifmgr.hm.promoted" is generated when the health monitor is promoted for the IPspace.
● Examples
The following example enables the "l2_reachability" health monitor for the "Default" IPspace:
Note
The status of the "link_flapping" monitor is unaffected by the command.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command modifies the enabled port health monitors for the given IPspaces in the cluster.
● Parameters
-ipspace <IPspace> - IPspace Name
The name of the IPspace for which enabled port health monitors are modified.
[-health-monitors {l2-reachability|link-flapping|crc-errors|vswitch-link}, ...] - List of Enabled Port
Health Monitors
All of the port health monitors that you want to enable. This command enables any port health monitors
in this list that are currently disabled, and it disables any currently enabled monitors that are not in this
list. When the l2_reachability health monitor is enabled, it runs in an "unpromoted" state. While in this
state, the monitor does not mark any ports as unhealthy due to the l2_reachability health check. The
monitor is promoted in the "Cluster" IPspace when the "Cluster" broadcast domain is found to have
passed the l2_reachability health check. An EMS event called "vifmgr.hm.promoted" is generated
when the health monitor is promoted for the IPspace.
● Examples
The following example modifies the port health monitor configuration of the "Default" IPspace such that
only the "link_flapping" port health monitor is enabled.
Note
Only the specified monitor is enabled after the modify command is issued.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command displays the enabled port health monitors for the IPspaces in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ipspace <IPspace>] - IPspace Name
Displays the port health monitors that are enabled only for the given IPspace name.
[-health-monitors {l2-reachability|link-flapping|crc-errors|vswitch-link}, ...] - List of Enabled Port
Health Monitors
Displays the IPspaces that have the given monitors enabled.
● Examples
The following example lists all port health monitors that are enabled for all IPspaces in the cluster.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command sets the status of sending the start of authority (SOA) record in the DNS response.
● Parameters
[-enable {true|false}] - Enable sending SOA
Setting this parameter to true enables sending the start of authority (SOA) record in the DNS
response. Setting it to false disables sending the SOA record in the DNS response for the cluster.
● Examples
The following example enables sending the start of authority (SOA) record in the DNS response.
cluster1::> network options send-soa modify -enable true
The following example disables sending the start of authority (SOA) record in the DNS response.
cluster1::> network options send-soa modify -enable false
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command displays whether sending the start of authority (SOA) record in the DNS response is
enabled or not.
● Examples
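An illustrative invocation (output not shown):
cluster1::*> network options send-soa show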
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command sets whether the cluster network is in switchless or switched mode. A switchless cluster is
physically formed by connecting two nodes back-to-back, without a switch between them.
● Parameters
[-enabled {true|false}] - Enable Switchless Cluster
This parameter specifies whether the switchless cluster is enabled or not. Setting this parameter to true
enables the switchless cluster.
● Examples
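An illustrative invocation (the value shown is an example only):
cluster1::*> network options switchless-cluster modify -enabled true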
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network options switchless-cluster show command displays the attributes of a switchless cluster.
● Examples
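An illustrative invocation (output not shown):
cluster1::*> network options switchless-cluster show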
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network port delete command deletes a network port that is no longer physically present on the
storage system.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which the port is located.
-port {<netport>|<ifgrp>} - Port
This specifies the port to delete.
● Examples
The following example deletes port e0c from a node named node0. The command works only when the port
does not physically exist on the storage system.
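An illustrative form of the command (node and port names taken from the description above; output not shown):
cluster1::> network port delete -node node0 -port e0c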
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port modify command enables you to change the maximum transmission unit (MTU)
setting, autonegotiation setting, administrative duplex mode, and administrative speed of a specified
network port.
The MTU of ports that belong to broadcast-domains must be updated through the broadcast-domain
modify command.
Modification of a port's IPspace will only work before a node is added to a cluster, when the cluster version is
below Data ONTAP 8.3, or when the node is offline. To change the IPspace of a port once the node is in a
Data ONTAP 8.3 cluster, the port should be added to a broadcast-domain that belongs to that IPspace.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port is located.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the port that you want to modify.
[-mtu <integer>] - MTU
The port's MTU setting. The default setting for ports in the "Cluster" IPspace is 9000 bytes. All other ports
use a default value of 1500 bytes.
[-autonegotiate-admin {true|false}] - Auto-Negotiation Administrative
Whether the port uses Ethernet autonegotiation to determine the highest speed and duplex mode that
the port and its endpoint can support. The default setting when you create a port is true.
[-duplex-admin {auto|half|full}] - Duplex Mode Administrative
The administrative setting for the port's duplex mode. This is the duplex mode that you prefer the port
to use. Depending on network limitations, the operational value can be different from the
administrative setting. The default setting when you create a port is full.
[-speed-admin {auto|10|100|1000|10000|100000|40000|25000}] - Speed Administrative
The administrative speed setting, in megabits per second. This is the speed setting that you prefer the
port to use. Depending on network limitations, the operational value can be lower than the
administrative setting.
[-flowcontrol-admin {none|receive|send|full}] - Flow Control Administrative
The administrative flow control setting of the port. This is the flow control setting that you prefer the port
to use. Depending on network and port limitations, the operational value can be different from the
administrative setting.
[-up-admin {true|false}] - Up Administrative (privilege: advanced)
The administrative state of the port. If set to true, the port is used if it is operational. If set to false,
the port is configured down.
[-ipspace <IPspace>] - IPspace Name
Use this parameter to specify the IPspace the network port is assigned to. Modification of a port's
IPspace will only work before a node is added to a cluster, when the cluster version is below Data ONTAP
8.3, or when the node is offline. To change the IPspace of a port once the node is in a Data ONTAP 8.3
cluster, the port should be added to a broadcast-domain that belongs to that IPspace. If there is an
inconsistency between the broadcast-domain and IPspace, this parameter can be set to bring the
IPspace into alignment with the broadcast-domain.
[-ignore-health-status {true|false}] - Ignore Port Health Status (privilege: advanced)
Use this parameter to specify that the system ignore network port health status of the specified port for
the purpose of hosting a logical interface.
● Examples
The following example modifies port e0a on a node named node0 not to use auto-negotiation, to preferably
use half duplex mode, and to preferably run at 100 Mbps.
cluster1::> network port modify -node node0 -port e0a -autonegotiate-admin false -duplex-admin half -speed-admin 100
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network port show command displays information about network ports. The command output
indicates any inactive links, and lists the reason for the inactive status.
Some parameters can have "administrative" and "operational" values. The administrative setting is the
preferred value for that parameter, which is set when the port is created or modified. The operational value
is the actual current value of that parameter. Administrative and operational settings are not shown for
virtual ports; '-' is displayed instead. See the physical port hosting the target virtual port for these values.
If the operational duplex mode and speed of a port cannot be determined (for instance, if the link is down),
that port's status is listed as undef, meaning undefined. This is different from '-', meaning no value.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-health ]
Use this parameter to display detailed health information for the specified network ports.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the network ports that match this parameter value. Use this parameter with the -port
parameter to select a port.
[-port {<netport>|<ifgrp>}] - Port
Selects the network ports that match this parameter value. If you do not use this parameter, the
command displays information about all network ports.
[-link {off|up|down}] - Link
Selects the network ports that match this parameter value.
[-mtu <integer>] - MTU
Selects the network ports that match this parameter value.
[-autonegotiate-admin {true|false}] - Auto-Negotiation Administrative
Selects the network ports that match this parameter value.
[-autonegotiate-oper {true|false}] - Auto-Negotiation Operational
Selects the network ports that match this parameter value.
[-duplex-admin {auto|half|full}] - Duplex Mode Administrative
Selects the network ports that match this parameter value.
[-duplex-oper {auto|half|full}] - Duplex Mode Operational
Selects the network ports that match this parameter value.
[-speed-admin {auto|10|100|1000|10000|100000|40000|25000}] - Speed Administrative
Selects the network ports that match this parameter value.
[-speed-oper {auto|10|100|1000|10000|100000|40000|25000}] - Speed Operational
Selects the network ports that match this parameter value.
[-flowcontrol-admin {none|receive|send|full}] - Flow Control Administrative
Selects the network ports that match this parameter value.
[-flowcontrol-oper {none|receive|send|full}] - Flow Control Operational
Selects the network ports that match this parameter value.
[-mac <MAC Address>] - MAC Address
Selects the network ports that match this parameter value.
[-up-admin {true|false}] - Up Administrative (privilege: advanced)
Selects the network ports that match this parameter value.
[-type {physical|if-group|vlan|vip}] - Port Type
Selects the network ports that match this parameter value.
[-ifgrp-node <nodename>] - Interface Group Parent Node
Selects the network ports that match this parameter value.
[-ifgrp-port {<netport>|<ifgrp>}] - Interface Group Parent Port
Selects the network ports that match this parameter value.
[-ifgrp-distr-func {mac|ip|sequential|port}] - Distribution Function
Selects the network ports that match this parameter value.
[-ifgrp-mode {multimode|multimode_lacp|singlemode}] - Create Policy
Selects the network ports that match this parameter value.
[-vlan-node <nodename>] - Parent VLAN Node
Selects the network ports that match this parameter value.
● Examples
The following example displays information about all network ports.
Node: node1
Ignore
Speed(Mbps) Health Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a Cluster Cluster up 9000 auto/1000 healthy false
e0b Cluster Cluster up 9000 auto/1000 healthy false
e0c Default Default up 1500 auto/1000 degraded false
e0d Default Default up 1500 auto/1000 degraded true
Node: node2
Ignore
Speed(Mbps) Health Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a Cluster Cluster up 9000 auto/1000 healthy false
e0b Cluster Cluster up 9000 auto/1000 healthy false
e0c Default Default up 1500 auto/1000 healthy false
e0d Default Default up 1500 auto/1000 healthy false
The following example displays health information about all network ports.
node2
e0a up healthy false -
e0b up healthy false -
e0c up healthy false -
e0d up degraded false -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port show-address-filter-info command displays information about the port's
address filter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-node <nodename> - Node
Use this parameter to specify the node.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the port. For example, e0c.
[-num-total <integer>] - Total Number Of Entries
Selects the ports whose address filter has the specified total number of entries.
[-num-used <integer>] - Number Of Used Entries
Selects the ports whose address filter has the specified number of used entries.
[-used-entries <text>, ...] - The Used Entries
● Examples
The following example displays information about the given port's address filter on the specified node of the
cluster.
Node: node1
Total Number Number of
Port of Address Used Address Used Address
Name Filter Entries Filter Entries Filter Entries
---- -------------- -------------- ------------------
e0c 1328 3 U 0 a0 98 40 e 6
M 1 80 c2 0 0 e
M 1 0 5e 0 0 fb
Network port broadcast-domain commands. A broadcast domain object is used to define a layer 2 broadcast
domain network configuration. It is used to group the ports which belong to the same layer 2 network.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Add ports to a broadcast domain.
Note
The IPspace of the ports added will be updated to the IPspace of the broadcast-domain. The ports will
be added to the failover-group of the broadcast-domain. The MTU of the ports will be updated to the
MTU of the broadcast-domain.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The broadcast domain for this port assignment.
-ports <<node>:<port>>, ... - List of ports
The ports to be added to this broadcast domain.
● Examples
The following example adds the port "e0d" on node "cluster1-1" and port "e0d" on node "cluster1-2" to
broadcast domain "mgmt" in IPspace "Default".
cluster1::network port broadcast-domain> add-ports -ipspace Default -broadcast-domain mgmt -ports cluster1-1:e0d, cluster1-2:e0d
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Create a new broadcast domain.
Note
The IPSpace of the ports added will be updated to the IPSpace of the broadcast-domain. A failover-
group will be generated containing the ports of the broadcast-domain. The MTU of all of the ports in
the broadcast-domain will be updated to the MTU specified for the broadcast-domain.
● Parameters
[-ipspace <IPspace>] - IPspace Name
The IPspace to which the new broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain to be created. The name of the broadcast domain needs to be
unique within the IPspace.
-mtu <integer> - Configured MTU
MTU of the broadcast domain.
[-ports <<node>:<port>>, ...] - Ports
The network ports to be added to the broadcast domain. Ports need to be added to the broadcast
domain before interfaces can be hosted on the port. By default, no port will be added to the broadcast
domain.
● Examples
The following example creates broadcast domain "mgmt" in IPspace "Default" with an MTU of 1500 and
network ports e0c from node "gx1" and node "gx2".
cluster1::> network port broadcast-domain create -ipspace Default -broadcast-domain mgmt -mtu 1500 -ports gx1:e0c,gx2:e0c
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Delete a broadcast domain that contains no ports.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain to be deleted.
● Examples
The following example deletes the broadcast domain "mgmt" in IPspace "Default".
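Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch; the names are taken from the example description):

```
cluster1::> network port broadcast-domain delete -ipspace Default -broadcast-domain mgmt
```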
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Merges a broadcast domain into an existing broadcast domain.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The merging broadcast domain.
-into-broadcast-domain <Broadcast Domain> - Merge with This Layer 2 Broadcast Domain
The target broadcast domain for the merge operation.
● Examples
The following example merges broadcast domain "bd-mgmt" in IPspace "Default" to broadcast domain "bd-
data".
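Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch; the names are taken from the example description):

```
cluster1::> network port broadcast-domain merge -ipspace Default -broadcast-domain bd-mgmt -into-broadcast-domain bd-data
```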
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Modify a broadcast domain.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain.
[-mtu <integer>] - Configured MTU
MTU of the broadcast domain.
● Examples
The following example modifies the mtu attribute of broadcast domain "mgmt" in IPspace "Default" to 1500.
cluster1::network port broadcast-domain*> modify -ipspace Default -broadcast-domain mgmt -mtu 1500
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Move a broadcast domain to another IPspace.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain.
-to-ipspace <IPspace> - Name of the destination IPspace
The name of the destination IPspace.
● Examples
The following example moves the broadcast domain named "mgmt" from IPspace "Default" to IPspace
"Default-1".
cluster1::network port broadcast-domain> move -ipspace Default -broadcast-domain mgmt -to-ipspace Default-1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Remove port assignments from a broadcast domain.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The broadcast domain of the ports.
-ports <<node>:<port>>, ... - List of ports
The ports to be removed from the broadcast domain.
● Examples
The following example removes port "e0d" on node "cluster1-1" and port "e0d" on node "cluster1-2" from
broadcast domain "mgmt" in IPspace "Default".
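Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch; the names are taken from the example description):

```
cluster1::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain mgmt -ports cluster1-1:e0d,cluster1-2:e0d
```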
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Rename a broadcast domain.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the broadcast domain belongs.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The name of the broadcast domain.
● Examples
The following example renames the broadcast domain named "mgmt" to "mgmt2" in IPspace "Default".
cluster1::network port broadcast-domain> rename -ipspace Default -broadcast-domain mgmt -new-name mgmt2
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display broadcast domain information.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ipspace <IPspace>] - IPspace Name
Selects the broadcast domains that match the IPspace name.
[-broadcast-domain <Broadcast Domain>] - Layer 2 Broadcast Domain
Selects the broadcast domains that match the broadcast domain name.
[-mtu <integer>] - Configured MTU
Selects the broadcast domains that match the MTU value. This field is the MTU that was configured by
the user, which might be different from the operational MTU.
[-ports <<node>:<port>>, ...] - Ports
Selects the broadcast domains that contain the specified network ports. For example, node1:e0a will
display broadcast domains that contain the node1:e0a network port.
[-port-update-status {complete|in-progress|error|overridden-while-offline}, ...] - Port Update Status
Selects the broadcast domains that contain the network port status. For example, specifying "error" will
display broadcast domains that contain "Error" network port status.
[-port-update-status-details <text>, ...] - Status Detail Description
Selects the broadcast domains that contain the network port status detail text.
[-port-update-status-combined {complete|in-progress|error|overridden-while-offline}] - Combined
Port Update Status
Selects the broadcast domains that contain the combined network port status. For example, specifying
"error" will display broadcast domains that contain a combined network port status of "Error".
[-failover-groups <failover-group>, ...] - Failover Groups
Selects the broadcast domains that contain the failover groups.
[-subnet-names <subnet name>, ...] - Subnet Names
Selects the broadcast domains that contain the subnet name or names.
[-is-vip {true|false}] - Is VIP Broadcast Domain
Selects the broadcast domains that match a specific "is-vip" flag. Specifying "true" matches only
broadcast domains associated with the Virtual IP feature; "false" matches only broadcast domains that
do not.
● Examples
The following example displays general information about broadcast domains.
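In its simplest form, such a command would be the following (a sketch; no selection parameters are given, so all broadcast domains are listed):

```
cluster1::> network port broadcast-domain show
```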
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Splits ports from a broadcast domain into a new broadcast domain.
The following restrictions apply to this command:
• If the ports are in a failover group, all ports in the failover group must be provided. Use network
interface failover-groups show to see which ports are in failover groups.
• If the ports have LIFs associated with them, the LIFs cannot be part of a subnet's ranges and the LIF's
curr-port and home-port must both be provided. Use network interface show -fields subnet-name,
home-node, home-port, curr-node, curr-port to see which ports have LIFs associated with them and
whether the LIFs are part of a subnet's ranges. Use network subnet remove-ranges with the LIF's IP
address and -force-update-lif-associations set to true to remove the LIF's association with a
subnet.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace of the broadcast domain.
-broadcast-domain <Broadcast Domain> - Layer 2 Broadcast Domain
The broadcast domain to split.
-new-broadcast-domain <Broadcast Domain> - New Layer 2 Broadcast Domain Name
The new broadcast domain.
-ports <<node>:<port>>, ... - List of Ports
The ports to be split from this broadcast domain.
● Examples
The following example splits port "e0d" on node "cluster1-1" and port "e0d" on node "cluster1-2" from
broadcast domain "bd-mgmt" in IPspace "Default" to broadcast domain "bd-data".
cluster1::> network port broadcast-domain split -ipspace Default -broadcast-domain bd-mgmt -new-broadcast-domain bd-data -ports cluster1-1:e0d, cluster1-2:e0d
Related reference
network interface failover-groups show (page 431)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port ifgrp add-port command adds a network port to a port interface group. The port
interface group must already exist. You can create a port interface group by using the network port
ifgrp create command.
The following restrictions apply to port interface groups:
• A port that is already a member of a port interface group cannot be added to another port interface group.
• Cluster ports and management ports cannot be in a port interface group.
• A port to which a logical interface is already bound cannot be added to a port interface group.
• A port that already has an assigned failover role cannot be added to a port interface group.
• A VLAN port cannot be added to a port interface group.
• A port which attaches to a VLAN cannot be added to a port interface group.
• An interface group port cannot be added to a port interface group.
• A port that is assigned to a broadcast domain cannot be added to a port interface group.
• All ports in a port interface group must be physically located on the same node.
● Parameters
-node {<nodename>|local} - Node
The node on which the port interface group is located.
-ifgrp <ifgrp name> - Interface Group Name
The port interface group to which a port is to be added.
-port <netport> - Specifies the name of port.
The network port that is to be added to the port interface group.
[-skip-broadcast-domain-placement [true]] - Skip Placement Into Broadcast Domain (privilege:
advanced)
When specified along with the first port added to the ifgrp, the ifgrp will not be placed into any
broadcast domain.
● Examples
The following example adds port e0c to port interface group a1a on a node named node1:
cluster1::> network port ifgrp add-port -node node1 -ifgrp a1a -port e0c
Related reference
network port ifgrp create (page 475)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port ifgrp create command creates a port interface group. See the documentation for
the network port ifgrp add-port command for a list of restrictions on creating port interface groups.
● Parameters
-node {<nodename>|local} - Node
The node on which the port interface group will be created.
-ifgrp <ifgrp name> - Interface Group Name
The name of the port interface group that will be created. Port interface groups must be named using
the syntax "a<number><letter>", where <number> is an integer in the range [0-999] without leading
zeros and <letter> is a lowercase letter. For example, "a0a", "a0b", "a1c", and "a2a" are all valid port
interface group names.
-distr-func {mac|ip|sequential|port} - Distribution Function
The distribution function of the port interface group that will be created. Valid values are:
• mac - Network traffic is distributed based on MAC addresses
• ip - Network traffic is distributed based on IP addresses
• sequential - Network traffic is distributed in round-robin fashion from the list of configured, available
ports
• port - Network traffic is distributed based on the transport layer (TCP/UDP) ports
-mode {multimode|multimode_lacp|singlemode} - Create Policy
The create policy for the interface group that will be created. Valid values are:
• multimode - Bundle multiple member ports of the interface group to act as a single trunked port
• multimode_lacp - Bundle multiple member ports of the interface group using Link Aggregation
Control Protocol
• singlemode - Provide port redundancy using member ports of the interface group for failover
● Examples
The following example creates a port interface group named a0a on node node0 with a distribution function
of ip:
cluster1::> network port ifgrp create -node node0 -ifgrp a0a -distr-func ip -mode multimode
Related reference
network port ifgrp add-port (page 474)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port ifgrp delete command destroys a port interface group.
Note
When you delete an interface group port, it is automatically removed from failover rules and groups to
which it belongs.
● Parameters
-node {<nodename>|local} - Node
The node on which the port interface group is located.
-ifgrp <ifgrp name> - Interface Group Name
The port interface group that will be deleted.
● Examples
The following example deletes port interface group a0b from a node named node0.
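Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch; the names are taken from the example description):

```
cluster1::> network port ifgrp delete -node node0 -ifgrp a0b
```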
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port ifgrp remove-port command removes a network port from a port interface group.
● Parameters
-node {<nodename>|local} - Node
The node on which the port interface group is located.
-ifgrp <ifgrp name> - Interface Group Name
The port interface group from which a port will be removed.
-port <netport> - Specifies the name of port.
The network port that will be removed from the port interface group.
● Examples
The following example removes port e0d from port interface group a1a on a node named node1:
cluster1::> network port ifgrp remove-port -node node1 -ifgrp a1a -port e0d
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port ifgrp show command displays information about port interface groups. By default, it
displays information about all port interface groups on all nodes in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example displays information about all port interface groups.
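In its simplest form, such a command would be the following (a sketch; no selection parameters are given, so all port interface groups are listed):

```
cluster1::> network port ifgrp show
```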
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Repair network port configuration to match the detected reachable broadcast domain. If the reachability
scan has completed with a reachability-status of misconfigured-reachability, meaning the network port does
not have reachability to its configured broadcast domain, but has reachability to another broadcast domain,
then repairing the network port's reachability will assign the port to the detected broadcast domain.
Similarly, if the reachability scan has completed with a reachability status of no-reachability, then repairing
the network port's reachability will assign the port to an empty broadcast domain. LIFs configured on the
port will be adjusted to be configured on another port in their current broadcast domain if possible. VLANs on
the specified port that do not have reachability to their configured broadcast domain will be removed. If the
port was part of an ifgrp, the port will be removed from the ifgrp. If the port is not configured on a broadcast
domain and has no reachability to any existing broadcast domains, it will be configured in a newly created
broadcast domain.
● Parameters
-node {<nodename>|local} - Node
Selects the node on which the port resides.
-port <netport> - Port
Selects the port on which to repair configuration.
● Examples
The following example applies the scanned broadcast domain reachability information to the specified
port's configuration.
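A command of the following form is sketched below; it assumes the command path network port reachability repair implied by this section's description, and the node and port names are illustrative:

```
cluster1::> network port reachability repair -node node1 -port e0d
```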
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Scan network port configuration to detect reachable broadcast domains.
● Parameters
-node {<nodename>|local} - Node
Selects the node on which the port resides.
-port <netport> - Port
Selects the port on which to scan broadcast domain reachability.
● Examples
The following example applies the scanned broadcast domain reachability information to the specified
port's configuration.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display broadcast domain reachability information for the specified port. If a broadcast scan on the network
discovers that the port has reachability to broadcast domains other than the one on which it is expected,
the command lists the reachable broadcast domains and an appropriate reachability status message. If the
reachable broadcast domain matches the expected one, the reachability status is displayed as Ok.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
When true, additional details regarding which broadcast domains have been found to be reachable from
the specified network port are displayed.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the node on which the port resides.
[-port <netport>] - Port
Selects the port for which to display broadcast domain reachability information.
[-expected-broadcast-domain <<IPspace>:<Broadcast Domain>>] - Expected Broadcast Domain
The IPspace and broadcast domain currently assigned to the network port.
[-reachable-broadcast-domains <<IPspace>:<Broadcast Domain>>, ...] - Reachable Broadcast
Domains
The name of the IPspaces and broadcast domains that have reachability to the specified port, as
discovered by a broadcast domain ping scan.
[-reachability-status {unknown|ok|no-reachability|misconfigured-reachability|multi-domain-
reachability|internal-error}] - Reachability Status
The status of the broadcast domain reachability for the specified port. Ok if the expected broadcast-
domain matches the reachable-broadcast-domains, or an appropriate status otherwise.
[-unreachable-ports <<node>:<port>>, ...] - Unreachable Ports
The list of network ports that are expected in the same broadcast domain as the specified port but
cannot be reached, either because those ports are down or because there is no network connectivity to
those ports.
[-unexpected-ports <<node>:<port>>, ...] - Unexpected Ports
The list of network ports that are not expected in the same broadcast domain yet have network
connectivity to the specified port.
● Examples
The following example displays the broadcast domain reachability for the specified port.
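A command of the following form is sketched below; it assumes the command path network port reachability show implied by this section's description, and the node and port names are illustrative:

```
cluster1::> network port reachability show -node node1 -port e0d
```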
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network port vip create command creates a VIP port in the specified IPspace on the specified
node. Only one VIP port can be created per IPspace on the given node.
● Parameters
-node {<nodename>|local} - Node
The node where the VIP port should be created.
-port <netport> - Network Port
The name of the VIP port to be created, in the format v<slot-number><port-letter>.
-ipspace <IPspace> - IPspace Name
The IPspace where the VIP port should be created. The default value for this parameter is "Default",
which identifies the default IPspace.
● Examples
This example shows how to create a VIP port named v0a in ipspace ips on node1.
cluster1::> network port vip create -node node1 -port v0a -ipspace ips
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network port vip delete command deletes a VIP port.
● Parameters
-node {<nodename>|local} - Node
The node associated with the VIP port to be deleted.
-port <netport> - Network Port
The name of the VIP port to be deleted.
● Examples
This example shows how to delete VIP Port v0a on node1.
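Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch; the names are taken from the example description):

```
cluster1::> network port vip delete -node node1 -port v0a
```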
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port vip show command displays information about VIP ports.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The example below shows VIP port v0a is created in IPspace ips on node1.
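In its simplest form, such a command would be the following (a sketch; no selection parameters are given, so all VIP ports are listed):

```
cluster1::> network port vip show
```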
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port vlan create command attaches a VLAN to a network port on a specified node.
● Parameters
-node {<nodename>|local} - Node
The node to which the VLAN is to be attached.
Note
You cannot attach a VLAN to a cluster port.
● Examples
This example shows how to create VLAN e1c-80 attached to network port e1c on node1.
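Based on the example description, the command would take roughly the following form (a sketch; the -vlan-name parameter is assumed from the companion vlan delete command's parameters):

```
cluster1::> network port vlan create -node node1 -vlan-name e1c-80
```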
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port vlan delete command deletes a VLAN from a network port.
Note
When you delete a VLAN port, it is automatically removed from all failover rules and groups that use
it.
● Parameters
-node {<nodename>|local} - Node
The node from which the VLAN is to be deleted.
{ -vlan-name {<netport>|<ifgrp>} - VLAN Name
The name of the VLAN that is to be deleted.
| -port {<netport>|<ifgrp>} - Associated Network Port
The network port to which the VLAN is attached.
-vlan-id <integer>} - Network Switch VLAN Identifier
The ID tag of the VLAN to be deleted.
● Examples
This example shows how to delete VLAN e1c-80 from network port e1c on node1.
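Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch; the names are taken from the example description):

```
cluster1::> network port vlan delete -node node1 -vlan-name e1c-80
```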
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network port vlan show command displays information about network ports that are attached to
VLANs. The command output indicates any inactive links and lists the reason for the inactive status.
If the operational duplex mode and speed cannot be determined (for instance, if the link is down), they are
listed as undef, meaning undefined.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the VLAN network ports that match this parameter value.
{ [-vlan-name {<netport>|<ifgrp>}] - VLAN Name
Selects the VLAN network ports that match this parameter value.
| [-port {<netport>|<ifgrp>}] - Associated Network Port
Selects the VLAN network ports that match this parameter value. If neither this parameter nor -vlan-name
is used, the command displays information about all network ports.
[-vlan-id <integer>]} - Network Switch VLAN Identifier
Selects the VLAN network ports that match this parameter value.
[-mac <MAC Address>] - MAC address
Selects the VLAN network ports that match this parameter value.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network qos-marking modify command modifies the QoS marking values for different protocols, for
each IPspace.
● Parameters
-ipspace <IPspace> - IPspace name
Use this parameter to specify the IPspace for which the QoS marking entry is to be modified.
-protocol <text> - Protocol
Use this parameter to specify the protocol for which the QoS marking entry is to be modified. The
possible values are NFS, CIFS, iSCSI, SnapMirror, SnapMirror-Sync, NDMP, FTP, HTTP-admin, HTTP-filesrv,
SSH, Telnet, and SNMP.
[-dscp <integer>] - DSCP Marking Value
Use this parameter to specify the DSCP value. The possible values are 0 to 63.
[-is-enabled {true|false}] - Is QoS Marking Enabled
Use this parameter to enable or disable the QoS marking for the specified protocol and IPspace.
● Examples
The following example modifies the QoS marking entry for the NFS protocol in the Default IPspace:
cluster1::> network qos-marking modify -ipspace Default -protocol NFS -dscp 10 -is-enabled true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network qos-marking show command displays the QoS marking values for different protocols, for
each IPspace.
● Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the QoS marking table.
| [-instance ]}
Use this parameter to display all the fields of the QoS marking table.
[-ipspace <IPspace>] - IPspace name
Use this parameter to display the QoS marking entries for the specified IPspace.
[-protocol <text>] - Protocol
Use this parameter to display the QoS marking entries for the specified protocol. The possible values are
NFS, CIFS, iSCSI, SnapMirror, SnapMirror-Sync, NDMP, FTP, HTTP-admin, HTTP-filesrv, SSH, Telnet, and
SNMP.
[-dscp <integer>] - DSCP Marking Value
Use this parameter to display the QoS marking entries matching the specified DSCP value. The possible
values are 0 to 63.
[-is-enabled {true|false}] - Is QoS Marking Enabled
Use this parameter to display the QoS marking entries matching the specified flag.
● Examples
The following example displays the QoS marking entries for the Default IPspace.
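Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch following the example description):

```
cluster1::> network qos-marking show -ipspace Default
```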
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network route create command creates a static route within a Vserver.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the route is to be created.
-destination <IP Address/Mask> - Destination/Mask
Use this parameter to specify the IP address and subnet mask of the route's destination. The format for
this value is: address, slash ("/"), mask. 0.0.0.0/0 is a valid destination value for creating a default IPv4
route, and ::/0 is a valid destination value for creating a default IPv6 route.
-gateway <IP Address> - Gateway
Use this parameter to specify the IP address of the gateway server leading to the route's destination.
[-metric <integer>] - Metric
Use this parameter to specify the metric of the route.
● Examples
The following example creates default routes within Vserver vs0 for IPv4 and IPv6.
cluster1::> network route create -vserver vs0 -destination 0.0.0.0/0 -gateway 10.61.208.1
cluster1::> network route create -vserver vs0 -destination ::/0 -gateway 3ffe:1::1
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network route delete command deletes a static route from a Vserver.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver on which the route is to be deleted.
-destination <IP Address/Mask> - Destination/Mask
Use this parameter to specify the IP address and subnet mask of the route's destination. The format for
this value is: address, slash ("/"), mask. For example, 0.0.0.0/0 is a correctly formatted value for the
-destination parameter.
-gateway <IP Address> - Gateway
Use this parameter to specify the gateway on which the route is to be deleted.
● Examples
The following example deletes a route within Vserver vs0 for destination 0.0.0.0/0.
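Based on the parameter syntax above, the command for this example would take roughly the following form (a sketch; the gateway value is assumed from the preceding create example, since -gateway is required):

```
cluster1::> network route delete -vserver vs0 -destination 0.0.0.0/0 -gateway 10.61.208.1
```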
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network route show command displays a group of static routes within one or more Vservers. You can
view routes within specified Vservers, routes with specified destinations, and routes with specified gateways.
● Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only certain fields of the routing tables.
| [-instance ]}
Use this parameter to display all fields of the routing tables.
[-vserver <vserver>] - Vserver Name
Use this parameter to display only routes that have the specified Vserver as their origin.
[-destination <IP Address/Mask>] - Destination/Mask
Use this parameter to display only routes that have the specified IP address and subnet mask as their
destination. The format for this value is: address, slash ("/"), mask. For example, 0.0.0.0/0 is a valid
value for the -destination parameter.
[-gateway <IP Address>] - Gateway
Use this parameter to display only routes that have the specified IP address as their gateway.
[-metric <integer>] - Metric
Use this parameter to display only routes that have the specified metric.
[-ipspace <IPspace>] - IPspace Name
Use this parameter to optionally specify the IPspace associated with the Vserver. This parameter can be
used in conjunction with the Vserver parameter in order to configure the same route across multiple
Vservers within an IPspace.
[-address-family {ipv4|ipv6|ipv6z}] - Address family of the route
Use this parameter to display only the routes that have the specified address-family.
● Examples
The following example displays information about all routing groups.
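In its simplest form, such a command would be the following (a sketch; no selection parameters are given, so all routes are listed):

```
cluster1::> network route show
```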
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network route show-lifs command displays the association of static routes and Logical Interfaces
(LIFs) within one or more Vservers. You can view routes within specified Vservers, routes with specified
destinations, and routes with specified gateways.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver>] - Vserver Name
Use this parameter to display only routes that have the specified Vserver as their origin.
[-destination <IP Address/Mask>] - Destination/Mask
Use this parameter to display only routes that have the specified IP address and subnet mask as their
destination. The format for this value is: address, slash ("/"), mask. For example, 0.0.0.0/0 is a valid
value for the -destination parameter.
[-gateway <IP Address>] - Gateway
Use this parameter to display only routes that have the specified IP address as their gateway.
[-lifs <lif-name>, ...] - Logical Interfaces
Use this parameter to display only the routes that are associated with the specified Logical Interfaces
(LIFs).
[-address-family {ipv4|ipv6|ipv6z}] - Address Family
Use this parameter to display only the routes that belong to the specified address family.
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
The network route active-entry show command displays installed routes on one or more nodes. You
can view routes within specified nodes, within specified Vservers, routes in specified subnet groups, and
routes with specified destinations.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-verbose ]
Use this parameter to display the reference count, use, interface, and Path MTU fields.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example displays active routes on all nodes in Vserver vs0 with subnet-group 10.10.10.0/24.
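The command that produced the output below is not reproduced in this section; a sketch of the invocation, reconstructed from the description above (prompt and formatting are illustrative):

```
cluster1::*> network route active-entry show -vserver vs0 -subnet-group 10.10.10.0/24
```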
Vserver: vs0
Node: node1
Subnet Group: 10.10.10.0/24
Destination Gateway Interface Metric Flags
---------------------- ------------------- --------- ------ -----
default 10.10.10.1 e0c 0 UGS
Vserver: vs0
Node: node2
Subnet Group: 10.10.10.0/24
Destination Gateway Interface Metric Flags
---------------------- ------------------- --------- ------ -----
default 10.10.10.1 e0c 0 UGS
2 entries were displayed.
Related reference
network route show (page 486)
Network Subnet commands. These commands are used to define a subnet with its gateway and groups of IP
addresses. The IP addresses can be used to create interfaces.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Add new address ranges to a subnet.
Note
All addresses in a range must be the same address family (IPv4 or IPv6) and must have the same
subnet mask. Ranges that overlap or are next to existing ranges will be merged with the existing
ranges.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace in which the range resides.
-subnet-name <subnet name> - Subnet Name
The name of the subnet.
-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ... - IP Ranges
IP ranges to add to the subnet.
● Examples
The following example allocates addresses for subnet s1 in IPspace Default.
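The example invocation itself is not reproduced above; a minimal sketch of what it might look like (the address range shown is hypothetical, not taken from the original example):

```
# The range 10.98.1.1-10.98.1.20 below is a hypothetical illustration.
cluster1::> network subnet add-ranges -ipspace Default -subnet-name s1 -ip-ranges "10.98.1.1-10.98.1.20"
```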
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Create a new subnet.
● Parameters
[-ipspace <IPspace>] - IPspace Name
The IPspace to which the new subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to be created. The name of the subnet needs to be unique within the IPspace.
-broadcast-domain <Broadcast Domain> - Broadcast Domain
The broadcast domain to which the new subnet belongs.
-subnet <IP Address/Mask> - Layer 3 Subnet
The address and mask of the subnet.
[-gateway <IP Address>] - Gateway
The gateway of the subnet.
[-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ...] - IP Addresses or IP Address Ranges
The IP ranges associated with this subnet.
[-force-update-lif-associations [true]] - Change the subnet association
This command will fail if any service processor interfaces or network interfaces are using the IP addresses
in the ranges provided. Using this parameter will associate any manually addressed interfaces with the
subnet and will allow the command to succeed.
● Examples
The following examples create subnets named s1 and s6 in IPspace Default.
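The example invocations are not reproduced above; a sketch of what they might look like. The values for s6 are taken from the network subnet show output later in this section; the values for s1 are hypothetical:

```
# s1 values below are hypothetical illustrations.
cluster1::> network subnet create -ipspace Default -subnet-name s1 -broadcast-domain bd1 -subnet 192.168.1.0/24 -gateway 192.168.1.1 -ip-ranges "192.168.1.10-192.168.1.20"
cluster1::> network subnet create -ipspace Default -subnet-name s6 -broadcast-domain bd4 -subnet 192.168.6.0/24 -gateway 192.168.6.1 -ip-ranges "192.168.6.6-192.168.6.10"
```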
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Delete a subnet that contains no ports.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to be deleted.
[-force-update-lif-associations [true]] - Change the subnet association
This command will fail if the subnet has ranges containing any existing service processor interface or
network interface IP addresses. Setting this value to true will remove the network interface associations
with the subnet and allow the command to succeed. However, it will not affect service processor
interfaces.
● Examples
The following example deletes subnet s1 in IPspace Default.
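The example invocation is not reproduced above; a sketch, reconstructed from the parameters documented for this command (prompt illustrative):

```
cluster1::> network subnet delete -ipspace Default -subnet-name s1
```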
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Modify a subnet.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to modify.
[-subnet <IP Address/Mask>] - Layer 3 Subnet
The new address and mask of the subnet.
[-gateway <IP Address>] - Gateway
The new gateway address.
[-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ...] - IP Addresses or IP Address Ranges
The new IP ranges for this subnet.
[-force-update-lif-associations [true]] - Change the subnet association
This command will fail if any existing service processor interfaces or network interfaces are using IP
addresses in the IP ranges being added. It will also fail if any existing service processor interfaces or
network interfaces are using IP addresses in the IP ranges being removed. Using this parameter will
associate the interfaces with the IP addresses in the ranges being added to the subnet. It will also
remove the subnet's association with the interfaces with IP addresses in the IP ranges being removed
and will allow the command to succeed.
● Examples
The following example modifies the subnet address and gateway.
cluster1::> network subnet modify -ipspace Default -subnet-name s1 -subnet 192.168.2.0/24 -gateway 192.168.2.1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Remove address ranges from a subnet.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace in which the range resides.
-subnet-name <subnet name> - Subnet Name
The name of the subnet.
-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ... - IP Ranges
IP ranges to remove.
[-force-update-lif-associations [true]] - Force Update LIF Associations
This command will fail if any existing service processor interfaces or network interfaces are using IP
addresses in the ranges provided. Using this parameter will remove the subnet's association with those
interfaces and allow the command to succeed.
● Examples
The following example removes an address range with starting address of 10.98.1.1 from subnet s1 in
IPspace Default.
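The example invocation is not reproduced above; a sketch of what it might look like. Only the starting address 10.98.1.1 is stated in the example text; the ending address below is hypothetical:

```
# The ending address 10.98.1.20 is a hypothetical illustration.
cluster1::> network subnet remove-ranges -ipspace Default -subnet-name s1 -ip-ranges "10.98.1.1-10.98.1.20"
```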
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Rename a Subnet.
● Parameters
-ipspace <IPspace> - IPspace Name
The IPspace to which the subnet belongs.
-subnet-name <subnet name> - Subnet Name
The name of the subnet to rename.
-new-name <text> - New Name
The new name for the subnet.
● Examples
The following example renames subnet s1 to s3.
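The example invocation is not reproduced above; a sketch, reconstructed from the parameters documented for this command (prompt and IPspace assumed to be the Default IPspace):

```
cluster1::> network subnet rename -ipspace Default -subnet-name s1 -new-name s3
```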
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display subnet information.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ipspace <IPspace>] - IPspace Name
Selects the subnets that match the given IPspace name.
[-subnet-name <subnet name>] - Subnet Name
Selects the subnets that match the given subnet name.
[-broadcast-domain <Broadcast Domain>] - Broadcast Domain
Selects the subnets that match the given broadcast domain name.
[-subnet <IP Address/Mask>] - Layer 3 Subnet
Selects the subnets that match the given address and mask.
[-gateway <IP Address>] - Gateway
Selects the subnets that match the given gateway address.
[-ip-ranges {<ipaddr>|<ipaddr>-<ipaddr>}, ...] - IP Addresses or IP Address Ranges
Selects the subnets that match the given IP range.
[-total-count <integer>] - Total Address Count
Selects the subnets that match the given total address count.
[-used-count <integer>] - Used Address Count
Selects the subnets that match the given number of addresses allocated.
[-available-count <integer>] - Available Address Count
Selects the subnets that match the given number of addresses available.
● Examples
The following example displays general information about the subnets.
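The invocation that produced the output below is not shown; a sketch (prompt illustrative):

```
cluster1::> network subnet show
```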
IPspace: Default
Subnet Broadcast Avail/
Name Subnet Domain Gateway Total Ranges
--------- ---------------- --------- --------------- --------- ---------------
s4 192.168.4.0/24 bd4 192.168.4.1 5/5 192.168.5.6-192.168.5.10
s6 192.168.6.0/24 bd4 192.168.6.1 5/5 192.168.6.6-192.168.6.10
IPspace: ips1
Subnet Broadcast Avail/
Name Subnet Domain Gateway Total Ranges
--------- ---------------- --------- --------------- --------- ---------------
s10 192.168.6.0/24 bd10 192.168.6.1 0/0 -
3 entries were displayed.
Network tcpdump commands. The trace files generated during tcpdump packet captures are stored in /mroot/etc/log/packet_traces/.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network tcpdump show command shows currently running packet traces (via tcpdump) on a
matching node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Use this parameter optionally to show the details of running packet traces on a matching node.
[-port {<netport>|<ifgrp>}] - Port
Use this parameter optionally to show the details of a running packet trace on a matching network interface.
● Examples
The following example shows the details of running packet traces on nodes "node1" and "node2":
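The example invocation is not reproduced above; a sketch (shown without per-node filtering, so traces on all nodes, including "node1" and "node2", would be listed):

```
cluster1::> network tcpdump show
```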
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The network tcpdump start command starts packet tracing (via tcpdump) with the given parameters.
● Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node on which the packet trace should run.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the network interface for packet tracing.
[-address <IP Address>] - IP Address
Use this parameter to optionally specify the address for packet tracing.
[-protocol-port <integer>] - Protocol Port Number
Use this parameter to optionally specify the protocol port number for packet tracing.
[-buffer-size <integer>] - Buffer Size in KB
Use this parameter to optionally specify the buffer size for packet tracing. The default buffer size is 4 KB.
[-file-size <integer>] - Trace File Size in MB
Use this parameter to optionally specify the trace file size for packet tracing. The default trace file size is
1 GB.
[-rolling-traces <integer>] - Number of Rolling Trace Files
Use this parameter to optionally specify the number of rolling trace files for packet tracing. The default
number of rolling trace files is 2.
● Examples
The following example starts packet tracing on node "node1" with address "10.98.16.164", network port
"e0c", buffer size "10 KB", and protocol port number "10000":
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network tcpdump stop command stops a running packet trace (via tcpdump) on a given network interface. The trace files are stored in /mroot/etc/log/packet_traces/.
● Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node on which the packet tracing must be stopped.
-port {<netport>|<ifgrp>} - Port
Use this parameter to specify the network interface on which the packet tracing must be stopped.
● Examples
The following example stops a packet trace on network interface "e0a" from node "node1":
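The example invocation is not reproduced above; a sketch, reconstructed from the values stated in the example text (prompt illustrative):

```
cluster1::> network tcpdump stop -node node1 -port e0a
```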
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network tcpdump trace delete command deletes the tcpdump trace file from a matching node.
● Parameters
-node {<nodename>|local} - Node Name
Use this parameter to delete the tcpdump trace file from a matching node.
-trace-file <text> - Trace File
Use this parameter to specify the tcpdump trace file to be deleted.
● Examples
The following example deletes the list of tcpdump trace files from node "node1" using wildcard pattern:
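The example invocation is not reproduced above; a sketch of what it might look like. The wildcard pattern below is hypothetical, since the original pattern is not shown:

```
# The pattern "e0c*" is a hypothetical illustration of a wildcard match.
cluster1::> network tcpdump trace delete -node node1 -trace-file "e0c*"
```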
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The network tcpdump trace show command shows the list of tcpdump trace files located in the /mroot/etc/log/packet_traces/ directory.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Use this parameter to show the list of trace files of a matching node.
[-trace-file <text>] - Trace File
Use this parameter optionally to show the list of trace files with a matching trace-file name.
● Examples
The following example shows the list of trace files on nodes "node1" and "node2":
node1
e0a_20170314_115624.trc0
node2
e0c_20170314_115624.trc0
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network test-link run-test command runs a performance test between two nodes. The command
requires a source node, Vserver, and destination address.
Before executing the network test-link run-test command, the network test-link start-server command must be run to start a server on the node hosting the destination LIF. After all tests to that node are complete, the network test-link stop-server command must be run to stop the server.
The test results are stored non-persistently and can be viewed using the network test-link show
command. Results include input parameters, the bandwidth achieved, and the date and time of the test.
● Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node which initiates the test.
-vserver <vserver> - Vserver
Use this parameter to specify the Vserver used to access the destination LIF. The DC (Data Channel) Vserver option is available only in an ONTAP Select or ONTAP Cloud cluster; it is a special Vserver that hosts LIFs used to mirror data aggregates to the partner node.
-destination <Remote InetAddress> - Destination
Use this parameter to specify the destination IP address.
● Examples
The following example runs a test between the cluster LIFs, including the start and stop of the server side of
the test:
cluster1::*> network test-link run-test -node node2 -vserver Cluster -destination 172.31.112.173
Node: node2
Vserver: Cluster
Destination: 172.31.112.173
Time of Test: 4/22/2016 15:33:18
MB/s: 41.2678
Related reference
network test-link start-server (page 499)
network test-link stop-server (page 500)
network test-link show (page 498)
network test-link commands (page 497)
network test-path (page 374)
storage iscsi-initiator commands (page 1165)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network test-link show command displays the results of prior network test-link run-test
commands.
The test results are stored non-persistently and can be viewed using the network test-link show
command. Results include input parameters, the bandwidth achieved, and the date and time of the test.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-node {<nodename>|local} - Node
Selects the nodes that match this parameter value. Use this parameter to display the test results specific
to a node. By default, the test results across all nodes are shown.
-vserver <vserver> - Vserver
Use this parameter to display the test results specific to a Vserver. Use the DC (Data Channel) Vserver option only in an ONTAP Select or ONTAP Cloud cluster to show network performance of links hosting DC LIFs. The DC Vserver is a special Vserver that hosts LIFs used to mirror data aggregates to the partner node.
[-destination <Remote InetAddress>] - Destination
Use this parameter to display the test results associated with the specified destination.
[-timestamp <MM/DD/YYYY HH:MM:SS>] - Time of Test
Use this parameter to display the test results with the specified timestamp.
[-bandwidth <double>] - MB/s
Use this parameter to display the test results only matching the specified bandwidth.
● Examples
The following example runs a test between the cluster LIFs twice and then demonstrates the show
command results:
cluster1::*> network test-link run-test -node node2 -vserver Cluster -destination 172.31.112.173
Node: node2
Vserver: Cluster
Destination: 172.31.112.173
Time of Test: 4/25/2016 10:37:52
MB/s: 29.9946
cluster1::*> network test-link run-test -node node2 -vserver Cluster -destination 172.31.112.173
Node: node2
Vserver: Cluster
Destination: 172.31.112.173
Time of Test: 4/25/2016 10:38:32
MB/s: 39.8192
Related reference
network test-link run-test (page 497)
network test-link start-server (page 499)
network test-link stop-server (page 500)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network test-link start-server command starts the server side of the network test-link test
on the designated node.
Only one server at a time can be running for the network test-link command on a given node. If the
network test-link start-server command is issued and a server is already running on the node,
then the command is ignored, and the existing server continues to run.
The server started is listening on port 5201.
● Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node where the server is to be started.
● Examples
The following example starts a server:
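The example invocation is not reproduced above; a sketch of what it might look like. The node name "node1" is hypothetical, since the original example does not name a node:

```
# "node1" is a hypothetical node name.
cluster1::*> network test-link start-server -node node1
```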
Related reference
network test-link commands (page 497)
network test-link run-test (page 497)
network test-link stop-server (page 500)
network test-link show (page 498)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The network test-link stop-server command stops the network test-link server running on the
designated node.
● Parameters
-node {<nodename>|local} - Node Name
Use this parameter to specify the node where the server is to be stopped.
● Examples
The following example starts a server and stops it:
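The example invocations are not reproduced above; a sketch of what they might look like. The node name "node1" is hypothetical, since the original example does not name a node:

```
# "node1" is a hypothetical node name.
cluster1::*> network test-link start-server -node node1
cluster1::*> network test-link stop-server -node node1
```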
Related reference
network test-link commands (page 497)
network test-link run-test (page 497)
network test-link start-server (page 499)
network test-link show (page 498)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command modifies options that can be used to fine-tune ICMP protocol behavior.
● Parameters
-node {<nodename>|local} - Node
Set this parameter to indicate the node on which the ICMP tuning options are modified.
[-is-drop-redirect-enabled {true|false}] - Drop redirect ICMP
Set this parameter to true to drop redirect ICMP messages.
[-tx-icmp-limit <integer>] - Maximum number of ICMP packets sent per second
Sets the maximum number of ICMP messages, including TCP RSTs, that can be sent per second.
[-redirect-timeout <integer>] - Maximum seconds for route redirect timeout
Set this parameter to indicate the number of seconds after which the route is deleted. A value of
zero means infinity. The default value is 300 seconds.
● Examples
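No example appears above. Assuming the command documented here is network tuning icmp modify (the name is not stated in this section), an invocation might look like:

```
# Command name assumed; parameter values are hypothetical illustrations.
cluster1::> network tuning icmp modify -node node1 -is-drop-redirect-enabled true -redirect-timeout 300
```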
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the current state of the ICMP tuning options for the given node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays all ICMP tuning options.
[-node {<nodename>|local}] - Node
Specifies the node for which the ICMP tuning options are displayed.
[-is-drop-redirect-enabled {true|false}] - Drop redirect ICMP
Displays all entries that match the "is-drop-redirect-enabled" value.
[-tx-icmp-limit <integer>] - Maximum number of ICMP packets sent per second
Displays all entries that match the "tx-icmp-limit" value.
[-redirect-timeout <integer>] - Maximum seconds for route redirect timeout
Displays all the entries that match the "redirect-timeout" value.
● Examples
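No example appears above. Assuming the command documented here is network tuning icmp show (the name is not stated in this section), an invocation might look like:

```
# Command name and node name are assumptions for illustration.
cluster1::> network tuning icmp show -node node1
```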
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command modifies options that can be used to fine-tune ICMPv6 protocol behavior.
● Parameters
-node {<nodename>|local} - Node
Set this parameter to indicate the node on which the ICMPv6 tuning options are modified.
[-is-v6-redirect-accepted {true|false}] - Accept redirects via ICMPv6
Set this parameter to indicate whether redirect ICMPv6 messages are accepted.
[-redirect-v6-timeout <integer>] - Maximum seconds for route redirect timeout
Set this parameter to indicate the number of seconds after which the route is deleted. A value of
zero means infinity. The default value is 300 seconds.
[-tx-icmp6-err-limit <integer>] - Maximum number of ICMPv6 error messages sent per second
Sets the maximum number of ICMPv6 error messages that can be sent per second.
● Examples
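No example appears above. Assuming the command documented here is network tuning icmp6 modify (the name is not stated in this section), an invocation might look like:

```
# Command name assumed; parameter values are hypothetical illustrations.
cluster1::> network tuning icmp6 modify -node node1 -is-v6-redirect-accepted false
```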
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the current state of the ICMPv6 tuning options for the given node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays all ICMPv6 tuning options.
[-node {<nodename>|local}] - Node
Specifies the node for which the ICMPv6 tuning options are displayed.
[-is-v6-redirect-accepted {true|false}] - Accept redirects via ICMPv6
Displays all entries that match the "is-v6-redirect-accepted" value.
[-redirect-v6-timeout <integer>] - Maximum seconds for route redirect timeout
Displays all the entries that match the "redirect-v6-timeout" value.
[-tx-icmp6-err-limit <integer>] - Maximum number of ICMPv6 error messages sent per second
Displays all entries that match the "tx-icmp6-err-limit" value.
● Examples
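No example appears above. Assuming the command documented here is network tuning icmp6 show (the name is not stated in this section), an invocation might look like:

```
# Command name and node name are assumptions for illustration.
cluster1::> network tuning icmp6 show -node node1
```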
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command sets TCP tuning options on the node.
● Parameters
-node {<nodename>|local} - Node
Indicates on which node the TCP tuning options will be modified.
[-is-path-mtu-discovery-enabled {true|false}] - Path MTU discovery enabled
Enables path MTU discovery feature.
[-is-rfc3465-enabled {true|false}] - RFC3465 enabled
Enables the rfc3465 feature.
[-max-cwnd-increment <integer>] - Maximum congestion window segments incrementation
Sets the maximum congestion window increment, in segments, during slow start.
[-is-rfc3390-enabled {true|false}] - RFC3390 enabled
Enables the rfc3390 feature.
[-is-sack-enabled {true|false}] - SACK support enabled
Enables the selective ACK feature.
● Examples
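No example appears above. Assuming the command documented here is network tuning tcp modify (the name is not stated in this section), an invocation might look like:

```
# Command name assumed; parameter values are hypothetical illustrations.
cluster1::> network tuning tcp modify -node node1 -is-path-mtu-discovery-enabled true -is-sack-enabled true
```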
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the current state of the TCP tuning options for the given node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays all TCP tuning options.
[-node {<nodename>|local}] - Node
Specifies the node for which the TCP tuning options will be displayed.
[-is-path-mtu-discovery-enabled {true|false}] - Path MTU discovery enabled
Displays all entries that match the "is-path-mtu-discovery-enabled" value.
[-is-rfc3465-enabled {true|false}] - RFC3465 enabled
Displays all entries that match the "is-rfc3465-enabled" value.
[-max-cwnd-increment <integer>] - Maximum congestion window segments incrementation
Displays all entries that match the "max-cwnd-increment" value.
[-is-rfc3390-enabled {true|false}] - RFC3390 enabled
Displays all entries that match the "is-rfc3390-enabled" value.
[-is-sack-enabled {true|false}] - SACK support enabled
Displays all entries that match the "is-sack-enabled" value.
● Examples
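No example appears above. Assuming the command documented here is network tuning tcp show (the name is not stated in this section), an invocation might look like:

```
# Command name and node name are assumptions for illustration.
cluster1::> network tuning tcp show -node node1
```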
protection-type show
Display the supported protection types and available RPOs
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command displays the protection types available for application provisioning.
● Parameters
{ [-fields <fieldname>, ...]
Specifies fields that you want included in the output. You can use -fields ? to display the available
fields.
| [-instance ]}
Specifies the display of all available fields for each selected protection type.
[-vserver <vserver name>] - Vserver
Selects the protection types of Vservers that match the parameter value.
[-protection-type {local|remote}] - Protection Type
Selects the protection types that match the parameter value.
[-rpo-list <text>, ...] - List of available RPOs
Selects the protection types whose list of available RPOs matches the parameter value.
[-rpo-list-description <text>, ...] - List of descriptions of available RPOs
Selects the protection types whose list of description of available RPOs matches the parameter value.
[-description <text>] - Description of Protection Type
Selects the protection types with a description that matches the parameter value.
● Examples
The following example displays the protection types and the associated available RPOs for all Vservers in the
cluster.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos adaptive-policy-group create command creates a new adaptive policy group. After the
adaptive policy group is created, you can assign one or more storage objects to the policy. When a storage
object is assigned to an adaptive policy group, the maximum throughput QoS setting automatically adjusts
based on the storage object used space or the storage object allocated space. QoS minimum throughput
setting is calculated from the expected-iops parameter and the storage object allocated size. It is set only for
the storage objects that reside on AFF platforms.
After you create an adaptive policy group, use the storage object create command or storage object modify
command to apply the adaptive policy group to a storage object.
● Parameters
-policy-group <text> - Name
Specifies the name of the adaptive policy group. Adaptive policy group names must be unique and are
restricted to 127 alphanumeric characters including underscores "_" and hyphens "-". Adaptive policy
group names must start with an alphanumeric character. Use the qos adaptive-policy-group
rename command to change the adaptive policy group name.
-vserver <vserver name> - Vserver
Specifies the data Vserver to which this adaptive policy group belongs. You can apply this adaptive
policy group only to the storage objects contained in the specified Vserver. If the system has only one
Vserver, the command uses that Vserver by default.
-expected-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)} - Expected IOPS
Specifies the minimum expected IOPS per TB or GB allocated based on the storage object allocated size.
-peak-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)} - Peak IOPS
Specifies the maximum possible IOPS per TB or GB allocated based on the storage object allocated size
or the storage object used size.
[-absolute-min-iops <qos_tput>] - Absolute Minimum IOPS
Specifies the absolute minimum IOPS which is used as an override when the expected IOPS is less than
this value. The default value is computed as follows:
if expected-iops >= 6144/TB, then absolute-min-iops = 1000IOPS; if expected-iops >= 2048/TB
and expected-iops < 6144/TB, then absolute-min-iops = 500IOPS; if expected-iops >= 1/MB
and expected-iops < 2048/TB, then absolute-min-iops = 75IOPS.
[-expected-iops-allocation {used-space|allocated-space}] - Expected IOPS Allocation
Specifies the expected IOPS allocation policy. The allocation policy is either allocated-space or used-
space. When the expected-iops-allocation policy is set to allocated-space, the expected IOPS is
calculated based on the size of the storage object. When the expected-iops-allocation policy is set to
used-space, the expected IOPS is calculated based on the amount of data stored in the storage object
taking into account storage efficiencies. The default value is allocated-space.
[-peak-iops-allocation {used-space|allocated-space}] - Peak IOPS Allocation
Specifies the peak IOPS allocation policy. The allocation policy is either allocated-space or used-space.
When the peak-iops-allocation policy is set to allocated-space, the peak IOPS is calculated based on the
size of the storage object. When the peak-iops-allocation policy is set to used-space, the peak IOPS is
calculated based on the amount of data stored in the storage object taking into account storage
efficiencies. The default value is used-space.
[-block-size {ANY|4K|8K|16K|32K|64K|128K}] - Block Size
Specifies the I/O block size for the QoS adaptive policy group. The default value is "ANY". When a
block-size of "ANY" is specified, control is by IOPS only. When a block-size other than "ANY" is
specified, control is by both IOPS and bytes per second (bps), where bps is the product of IOPS and
block-size.
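The absolute-min-iops default described earlier in this section can be sketched as a piecewise function. This is an illustrative model only: the helper name and the normalization of all thresholds to IOPS/TB are assumptions made for this sketch, not part of the product.

```python
def default_absolute_min_iops(expected_iops_per_tb):
    """Illustrative model of the documented absolute-min-iops default.

    Tiers follow the parameter description, with thresholds expressed
    in IOPS/TB (an assumption made for this sketch).
    """
    if expected_iops_per_tb >= 6144:
        return 1000  # highest tier: 1000 IOPS
    elif expected_iops_per_tb >= 2048:
        return 500   # middle tier: 500 IOPS
    else:
        return 75    # lowest tier: 75 IOPS

# An expected-iops of 100IOPS/TB (as in the create example) falls in the
# lowest tier, so the default absolute-min-iops would be 75 IOPS.
print(default_absolute_min_iops(100))  # prints 75
```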
● Examples
Creates the "p1" adaptive policy group which belongs to Vserver "vs1" with expected-iops of 100IOPS/TB and
peak-iops of 1000IOPS/TB with default value for absolute-min-iops
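The example invocation is not reproduced above; a sketch, reconstructed from the values stated in the example text (prompt illustrative):

```
cluster1::> qos adaptive-policy-group create -policy-group p1 -vserver vs1 -expected-iops 100IOPS/TB -peak-iops 1000IOPS/TB
```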
Related reference
qos adaptive-policy-group rename (page 509)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos adaptive-policy-group delete command deletes an adaptive policy group from a cluster.
You cannot delete a policy group if a QoS workload associated with a storage object is assigned to it unless
you use the -force parameter. Using the -force parameter will delete all the QoS workloads for storage
objects associated with the specified adaptive policy groups.
Only user created adaptive policy groups can be deleted. Default adaptive policy groups are read only and
cannot be deleted.
● Parameters
-policy-group <text> - Name
Specifies the name of the adaptive policy group that you want to delete.
[-force [true]] - Force Delete Workloads for the QoS adaptive policy group (privilege: advanced)
Specifies whether to delete an adaptive policy group along with any underlying workloads.
● Examples
The following example deletes "p1" adaptive policy group:
Deletes the "p1" adaptive policy group along with any underlying QoS workloads.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos adaptive-policy-group modify command modifies an adaptive policy group.
Only user-created adaptive policy groups can be modified. Default adaptive policy groups are read-only and
cannot be modified.
● Parameters
-policy-group <text> - Name
Specifies the name of the adaptive policy group. Adaptive policy group names must be unique and are
restricted to 127 alphanumeric characters including underscores "_" and hyphens "-". Adaptive policy
group names must start with an alphanumeric character. Use the qos adaptive-policy-group
rename command to change the adaptive policy group name.
[-expected-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)}] - Expected IOPS
Specifies the minimum expected IOPS per TB or GB allocated based on the storage object allocated size.
QoS minimum throughput setting is calculated from the expected-iops parameter. It is set only for the
storage objects that reside on AFF platforms.
[-peak-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)}] - Peak IOPS
Specifies the maximum possible IOPS per TB or GB allocated based on the storage object allocated size
or the storage object used size.
[-absolute-min-iops <qos_tput>] - Absolute Minimum IOPS
Specifies the absolute minimum IOPS which is used as an override when the expected IOPS is less than
this value. The default value is computed as follows:
• If expected-iops >= 6144/TB, then absolute-min-iops = 1000IOPS.
• If expected-iops >= 2048/TB and expected-iops < 6144/TB, then absolute-min-iops = 500IOPS.
• If expected-iops >= 1/MB and expected-iops < 2048/TB, then absolute-min-iops = 75IOPS.
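The default computation above can be sketched in Python (a minimal illustration of the documented tiers, taking expected-iops in IOPS per TB; the function and parameter names are not part of ONTAP):

```python
def default_absolute_min_iops(expected_iops_per_tb: float) -> int:
    """Documented default for -absolute-min-iops, by expected-iops tier."""
    if expected_iops_per_tb >= 6144:
        return 1000   # expected-iops >= 6144/TB
    if expected_iops_per_tb >= 2048:
        return 500    # 2048/TB <= expected-iops < 6144/TB
    return 75         # expected-iops below 2048/TB (down to the 1/MB floor)

print(default_absolute_min_iops(8000))  # → 1000
```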
[-expected-iops-allocation {used-space|allocated-space}] - Expected IOPS Allocation
Specifies the expected IOPS allocation policy. The allocation policy is either allocated-space or used-
space. When the expected-iops-allocation policy is set to allocated-space, the expected IOPS is
calculated based on the size of the storage object. When the expected-iops-allocation policy is set to
used-space, the expected IOPS is calculated based on the amount of data stored in the storage object
taking into account storage efficiencies. The default value is allocated-space.
[-peak-iops-allocation {used-space|allocated-space}] - Peak IOPS Allocation
Specifies the peak IOPS allocation policy. The allocation policy is either allocated-space or used-space.
When the peak-iops-allocation policy is set to allocated-space, the peak IOPS is calculated based on the
size of the storage object. When the peak-iops-allocation policy is set to used-space, the peak IOPS is
calculated based on the amount of data stored in the storage object taking into account storage
efficiencies. The default value is used-space.
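How the two allocation policies translate a per-TB setting into an absolute IOPS value can be sketched as follows (an illustration only; the function and variable names are assumptions, not ONTAP internals):

```python
def effective_peak_iops(peak_iops_per_tb: float, allocated_tb: float,
                        used_tb: float, policy: str = "used-space") -> float:
    """Scale the per-TB peak setting by the size the allocation policy selects."""
    if policy == "allocated-space":
        size_tb = allocated_tb   # provisioned size of the storage object
    else:
        size_tb = used_tb        # amount of data stored, after storage efficiencies
    return peak_iops_per_tb * size_tb
```

For example, a 10 TB volume storing 2 TB of data with peak-iops set to 1000/TB gets a 2000-IOPS peak under used-space, but 10000 IOPS under allocated-space.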
[-block-size {ANY|4K|8K|16K|32K|64K|128K}] - Block Size
Specifies the I/O block size for the QoS adaptive policy group. The default value is "ANY". When a block-size
of "ANY" is specified, control is by IOPS only. When a block-size other than "ANY" is specified, control is by
both IOPS and bytes per second (bps), where bps is the product of IOPS and block-size.
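The bps control that a non-"ANY" block-size implies can be sketched as a simple product (an illustrative calculation, not ONTAP code):

```python
def bps_control(iops: int, block_size_bytes: int) -> int:
    """bps is the product of IOPS and block-size when block-size is not ANY."""
    return iops * block_size_bytes

# A 1000-IOPS control with a 4K (4096-byte) block size also implies
# a 4,096,000 bytes-per-second control.
print(bps_control(1000, 4096))  # → 4096000
```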
● Examples
The following example modifies the "p1" adaptive policy group with specified values.
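An illustrative command line for this example (the flag values shown here are assumptions for the sketch, taken from the parameters documented above):

```
cluster1::> qos adaptive-policy-group modify -policy-group p1 -expected-iops 1000IOPS/TB -peak-iops 2000IOPS/TB
```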
Related reference
qos adaptive-policy-group rename (page 509)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos adaptive-policy-group rename command changes the name of an existing adaptive policy
group.
● Parameters
-policy-group <text> - Name
Specifies the existing name of the adaptive policy group that you want to rename.
-new-name <text> - New adaptive policy group name
Specifies the new name of the adaptive policy group. Adaptive policy group names must be unique and
are restricted to 127 alphanumeric characters including underscores "_" and hyphens "-". Adaptive policy
group names must start with an alphanumeric character.
● Examples
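An illustrative rename invocation, sketched from the parameters above (the policy group names are hypothetical):

```
cluster1::> qos adaptive-policy-group rename -policy-group p1 -new-name p1_renamed
```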
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos adaptive-policy-group show command shows the current settings of the adaptive policy
groups on a cluster. You can view the list of adaptive policy groups and also the detailed information about
a specific adaptive policy group.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-policy-group <text>] - Name
Selects the adaptive policy groups that match this parameter value.
Adaptive policy groups define measurable service level objectives (SLOs) that adjust based on the
storage object used space or the storage object allocated space.
[-vserver <vserver name>] - Vserver
If this parameter is specified, the command displays information only about the adaptive policy groups
with a matching vserver.
[-uuid <UUID>] - Uuid
If this parameter is specified, the command displays information only about the adaptive policy groups
with the specified UUID.
[-pgid <integer>] - ID
If this parameter is specified, the command displays information only about the adaptive policy groups
that match the given policy group ID, which is an integer that uniquely identifies the adaptive policy
group.
[-expected-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)}] - Expected IOPS
If this parameter is specified, the command displays information only about the adaptive policy groups
with the specified minimum expected IOPS per TB or GB.
[-peak-iops {<integer>[IOPS[/{GB|TB}]] (default: TB)}] - Peak IOPS
If this parameter is specified, the command displays information only about the adaptive policy groups
with the specified maximum possible IOPS per TB or GB.
[-absolute-min-iops <qos_tput>] - Absolute Minimum IOPS
If this parameter is specified, the command displays information only about the adaptive policy groups
with the specified absolute minimum IOPS.
[-expected-iops-allocation {used-space|allocated-space}] - Expected IOPS Allocation
If this parameter is specified, the command displays information only about the adaptive policy groups
with the specified expected IOPS allocation policy used to compute the expected IOPS per TB or GB.
[-peak-iops-allocation {used-space|allocated-space}] - Peak IOPS Allocation
If this parameter is specified, the command displays information only about the adaptive policy groups
with the specified peak IOPS allocation policy used to compute the maximum possible IOPS per TB or GB.
[-block-size {ANY|4K|8K|16K|32K|64K|128K}] - Block Size
Specifies the I/O block size for the QoS adaptive policy group. The default value is "ANY". When a block-size
of "ANY" is specified, control is by IOPS only. When a block-size other than "ANY" is specified, control is by
both IOPS and bytes per second (bps), where bps is the product of IOPS and block-size.
[-num-workloads <integer>] - Number of Workloads
If this parameter is specified, the command displays information only about the adaptive policy groups
with the specified number of workloads.
● Examples
The example above displays all adaptive policy groups on the cluster.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos policy-group create command creates a new policy group. You can use a QoS policy group to
control a set of storage objects known as "workloads" - LUNs, volumes, files, or Vservers. Policy groups define
measurable service level objectives (SLOs) that apply to the storage objects with which the policy group is
associated.
After you create a policy group, you use the storage object create command or the storage object modify
command to apply the policy group to a storage object.
● Parameters
-policy-group <text> - Policy Group Name
Specifies the name of the policy group. Policy group names must be unique and are restricted to 127
alphanumeric characters including underscores "_" and hyphens "-". Policy group names must start with
an alphanumeric character. You use the qos policy-group rename command to change the policy
group name.
-vserver <vserver name> - Vserver
Specifies the data Vserver to which this policy group belongs. You can apply this policy group to only the
storage objects contained in the specified Vserver. For example, if you want to apply this policy group to
a volume, that volume must belong to the specified Vserver. Using this parameter does not apply the
policy group's SLOs to the Vserver. You need to use the vserver modify command if you want to apply
this policy group to the Vserver. If the system has only one Vserver, then the command uses that Vserver
by default.
[-max-throughput <qos_tput>] - Maximum Throughput
Specifies the maximum throughput for the policy group. A maximum throughput limit specifies the
throughput that the policy group must not exceed. It is specified in terms of IOPS or MB/s, or a
combination of comma-separated IOPS and MB/s. The range is one to infinity. A value of zero is accepted
but is internally treated as infinity.
The values entered here are case-insensitive, and the units are base ten. There should be no space
between the number and the units. The default value for max-throughput is infinity, which can be
specified by the special value "INF". Note that there is no default unit - all numbers except zero require
explicit specification of the units.
Two reserved keywords are available: "none", which removes a value, and "INF", which specifies the
maximum available value.
Examples of valid throughput specifications are: "100B/s", "10KB/s", "1gb/s", "500MB/s", "1tb/s",
"100iops", "100iops,400KB/s", and "800KB/s,100iops"
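The accepted throughput grammar can be illustrated with a small parser (an approximation written for this note, not ONTAP's own parsing; the reserved keywords "none" and "INF" are not modeled):

```python
import re

# Base-ten units, per the parameter description above.
_UNITS = {"b/s": 1, "kb/s": 10**3, "mb/s": 10**6, "gb/s": 10**9, "tb/s": 10**12}

def parse_throughput(spec: str):
    """Return (iops, bytes_per_second); either is None if not given."""
    iops = bps = None
    for part in spec.lower().split(","):
        # case-insensitive, no space between the number and the unit
        m = re.fullmatch(r"(\d+)(iops|[kmgt]?b/s)", part)
        if m is None:
            raise ValueError(f"bad throughput spec: {part!r}")
        n, unit = int(m.group(1)), m.group(2)
        if unit == "iops":
            iops = n
        else:
            bps = n * _UNITS[unit]
    return iops, bps

print(parse_throughput("100iops,400KB/s"))  # → (100, 400000)
```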
[-min-throughput <qos_tput>] - Minimum Throughput
Specifies the minimum throughput for the policy group. A minimum throughput specifies the desired
performance level for a policy group. It is specified in terms of IOPS or MB/s, or a combination of
comma-separated IOPS and MB/s.
The values entered here are case-insensitive, and the units are base ten. There should be no space
between the number and the units. The default value for min-throughput is "0". The default unit is IOPS.
One reserved keyword, "none", is available to remove a value.
Examples of valid throughput specifications are: "100B/s", "10KB/s", "1gb/s", "500MB/s", "1tb/s",
"100iops", "100iops,400KB/s", and "800KB/s,100iops"
[-is-shared {true|false}] - Is Shared
Specifies whether the policy group can be shared or not. The default value is "true". This parameter
specifies if the SLOs of the policy group are shared between the workloads or if the SLOs are applied
separately to each workload.
● Examples
Creates the "p1" policy group which belongs to Vserver "vs1" with default policy values.
Creates the "p2" policy group which belongs to Vserver "vs1" with the maximum throughput set to 500 MB/s.
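Illustrative command lines for these first two examples, following the form of the "p3" example below (defaults apply where flags are omitted):

```
cluster1::> qos policy-group create p1 -vserver vs1

cluster1::> qos policy-group create p2 -vserver vs1 -max-throughput 500MB/s
```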
cluster1::> qos policy-group create p3 -vserver vs1 -max-throughput 500MB/s -is-shared false
Related reference
qos policy-group rename (page 513)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos policy-group delete command deletes a policy group from a cluster. You cannot delete a
policy group if a QoS workload associated with a storage object is assigned to it unless you use the -force
parameter. Using the -force parameter deletes all the QoS workloads for storage objects associated with
the specified policy groups.
You can only delete user-defined policy groups. You cannot delete preset policy groups.
● Parameters
-policy-group <text> - Policy Group Name
Specifies the name of the policy group that you want to delete.
[-force [true]] - Force Delete Workloads for the QoS Policy Group (privilege: advanced)
Specifies whether to delete a policy group along with any underlying workloads.
● Examples
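An illustrative deletion, sketched from the parameters above (the policy group name "p1" is hypothetical):

```
cluster1::> qos policy-group delete -policy-group p1
```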
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos policy-group modify command modifies a user-created policy group.
● Parameters
-policy-group <text> - Policy Group Name
Specifies the name of the policy group that you want to modify.
[-max-throughput <qos_tput>] - Maximum Throughput
Specifies the maximum throughput for the policy group. A maximum throughput limit specifies the
throughput that the policy group must not exceed. It is specified in terms of IOPS or MB/s, or a
combination of comma-separated IOPS and MB/s. The range is one to infinity. A value of zero is accepted
but is internally treated as infinity.
The values entered here are case-insensitive, and the units are base ten. There should be no space
between the number and the units. The default value for max-throughput is infinity, which can be
specified by the special value "INF". Note there is no default unit - all numbers except zero require
explicit specification of the units.
Two reserved keywords are available: "none", which removes a value, and "INF", which specifies the
maximum available value.
Examples of valid throughput specifications are: "100B/s", "10KB/s", "1gb/s", "500MB/s", "1tb/s", and
"100iops".
[-min-throughput <qos_tput>] - Minimum Throughput
Specifies the minimum throughput for the policy group. A minimum throughput specifies the desired
performance level for a policy group. It is specified in terms of IOPS or MB/s, or a combination of
comma-separated IOPS and MB/s.
The values entered here are case-insensitive, and the units are base ten. There should be no space
between the number and the units. The default value for min-throughput is "0". The default unit is IOPS.
One reserved keyword, "none", is available to remove a value.
Examples of valid throughput specifications are: "100B/s", "10KB/s", "1gb/s", "500MB/s", "1tb/s",
"100iops", "100iops,400KB/s", and "800KB/s,100iops"
● Examples
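An illustrative modification, sketched from the parameters above (the flag values are assumptions for the sketch):

```
cluster1::> qos policy-group modify -policy-group p1 -max-throughput 100iops,400KB/s
```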
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos policy-group rename command changes the name of an existing policy group.
● Parameters
-policy-group <text> - Policy Group Name
Specifies the existing name of the policy group that you want to rename.
-new-name <text> - New Policy Group Name
Specifies the new name of the policy group. Policy group names must be unique and are restricted to
127 alphanumeric characters including underscores "_" and hyphens "-". Policy group names must start
with an alphanumeric character.
● Examples
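An illustrative rename, sketched from the parameters above (the names are hypothetical):

```
cluster1::> qos policy-group rename -policy-group p1 -new-name p1_renamed
```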
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos policy-group show command shows the current settings of the policy groups on a cluster. You
can display a list of the policy groups and you can view detailed information about a specific policy group.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-policy-group <text>] - Policy Group Name
Selects the policy groups that match this parameter value.
Policy groups define measurable service level objectives (SLOs) that apply to the storage objects with
which the policy group is associated.
[-vserver <vserver name>] - Vserver
Selects the policy groups that match this parameter value.
[-uuid <UUID>] - Uuid
Selects the policy groups that match this parameter value.
[-class <QoS Configuration Class>] - Policy Group Class
Selects the policy groups that match this parameter value.
[-pgid <integer>] - Policy Group ID
Selects the policy groups that match this parameter value. This ID uniquely identifies the policy group.
[-max-throughput <qos_tput>] - Maximum Throughput
Selects the policy groups that match this parameter value. A maximum throughput limit specifies the
throughput (in IOPS or MB/s) that the policy group must not exceed.
[-min-throughput <qos_tput>] - Minimum Throughput
Selects the policy groups that match this parameter value. A minimum throughput specifies the desired
performance level for a policy group.
[-num-workloads <integer>] - Number of Workloads
Selects the policy groups that match this parameter value.
[-throughput-policy <text>] - Throughput Policy
Selects the policy groups that match this parameter value. You can specify the throughput range in
terms of IOPS or data rate. For example, 0-INF, 0-400IOPS, 0-200KB/s, 0-400MB/s.
[-is-shared {true|false}] - Is Shared
Selects the policy groups that match this parameter value. This value specifies whether the policy group
is a shared policy group or not.
● Examples
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The qos settings throughput-floors-v2 command is used to enable or disable floors v2 on AFF. The
default is enabled. With floors v2 enabled, throughput floors can be met when controllers are heavily
utilized at the expense of higher latency on other workloads. Floors v2 applies to both QoS and Adaptive
QoS.
● Parameters
-enable {true|false} - enable or disable throughput floors v2 on AFF
Specifies whether floors v2 is enabled or disabled. If this parameter is set to false, floors v2 is
disabled.
● Examples
The following example disables floors v2 on AFF.
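Sketched from the parameter above (the command is available at the advanced privilege level):

```
cluster1::*> qos settings throughput-floors-v2 -enable false
```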
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos settings cache modify command modifies the existing default caching-policy. The list of
caching policies can be obtained from the qos settings cache show -fields cache-setting
command.
● Parameters
-cache-setting <text> - Cache Policy Name
Valid input for this parameter is any one of the listed caching-policies. This parameter is to be used
together with the -default parameter. If you use this parameter, the command modifies the specified
caching-policy based on the -default parameter.
[-default {true|false}] - Is Default?
Valid inputs to this parameter are true and false. Together with cache-setting, this parameter helps set
or unset a caching-policy as default.
● Examples
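An illustrative invocation that sets one of the listed caching-policies as the default (the chosen policy name is only an example):

```
cluster1::> qos settings cache modify -cache-setting random_read -default true
```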
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos settings cache show command shows the current caching-policies, the class to which they
belong, the number of workloads associated with each policy, and whether or not each is set as the default. The
following external-cache policies are available:
• none - Does not cache any user data or metadata blocks.
• auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly
overwritten user data blocks.
• all - Read caches all data blocks read and written. It does not do any write caching.
• all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten
user data blocks.
• all_read - Read caches all metadata, randomly read, and sequentially read user data blocks.
• all_read-random_write - Read caches all metadata, randomly read, and sequentially read user data blocks.
It also write caches randomly overwritten user data blocks.
• all_read_random_write - Read caches all metadata, randomly read, sequentially read and randomly
written user data.
• all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read, and
randomly written user data blocks. It also write caches randomly overwritten user data blocks.
• meta - Read caches only metadata blocks.
• meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
• noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read
caching.
• random_read - Read caches all metadata and randomly read user data blocks.
• random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
• random_read_write-random_write - Read caches all metadata, randomly read, and randomly written user
data blocks. It also write caches randomly overwritten user data blocks.
Note
In a caching-policy name, a hyphen (-) separates the read and write caching policies.
● Parameters
{ [-fields <fieldname>, ...]
The input to this parameter is one of the following: {cache-setting|class|default|num-workloads}. If you
use this parameter, the command displays information related to the specified input field.
| [-instance ]}
If you use this parameter, the command displays information about the caching-policies in a list format.
[-cache-setting <text>] - Cache Policy Name
The input to this parameter is any one of the above listed caching-policies. If you use this parameter, the
command displays information corresponding to the specified caching-policy.
[-class <QoS Configuration Class>] - Cache Policy Class
The input to this parameter is one of the following: {undefined|preset|user-defined|system-defined|
autovolume}. If you use this parameter, the command displays information corresponding to the
specified policy-group class.
[-default {true|false}] - Is Default?
The input to this parameter is true or false. If you use this parameter, the command displays
information corresponding to entries that have the specified default value.
[-num-workloads <integer>] - Number Of Workloads With This Policy
The input to this parameter is an integer. If you use this parameter, the command displays information
about policy-groups matching the specified number of workloads.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics characteristics show command displays data that characterizes the behavior of
QoS policy groups.
The command displays the following data:
• The QoS policy group name (Policy Group)
• Input/output operations performed per second (IOPS)
• Throughput achieved in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate
(Throughput)
• Request size in bytes (B) (Request size)
• The read percentage of total I/O (Read)
• Concurrency, which indicates the number of concurrent users generating the I/O traffic (Concurrency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total
IOPS used across all QoS policy groups. Other columns in this row are either totals or averages.
● Parameters
[-node {<nodename>|local}] - Node
Selects the policy groups that match this parameter value. If you do not specify this parameter, the
command displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default
value is 10.
| [-policy-group <text>] - QoS Policy Group Name
Selects the QoS policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
| [-adaptive-policy-group <text>]} - Adaptive QoS Policy Group Name
Selects the QoS adaptive policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
● Examples
The example above displays the characteristics of the 4 QoS policy groups with the highest IOPS values and
refreshes the display 100 times before terminating.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics latency show command displays the average latencies for QoS policy groups
across the various Data ONTAP subsystems.
● Parameters
[-node {<nodename>|local}] - Node
Selects the policy groups that match this parameter value. If you do not specify this parameter, the
command displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default
value is 10.
| [-policy-group <text>] - QoS Policy Group Name
Selects the QoS policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
| [-adaptive-policy-group <text>]} - Adaptive QoS Policy Group Name
Selects the QoS adaptive policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
● Examples
The example above displays latencies for the 3 QoS policy groups with the highest latencies and refreshes
the display 100 times before terminating.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics performance show command shows the current system performance levels that
QoS policy groups are achieving.
The command displays the following data:
• The QoS policy group name (Policy Group)
• Input/output operations performed per second (IOPS)
• Throughput in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Latency observed per request in microseconds (us) or milliseconds (ms) as appropriate (Latency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total
IOPS used across all QoS policy groups. Other columns in this row are either totals or averages.
● Parameters
[-node {<nodename>|local}] - Node
Selects the policy groups that match this parameter value. If you do not specify this parameter, the
command displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default
value is 10.
| [-policy-group <text>] - QoS Policy Group Name
Selects the QoS policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
| [-adaptive-policy-group <text>]} - Adaptive QoS Policy Group Name
Selects the QoS adaptive policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
● Examples
The example above displays the system performance for the 4 QoS policy groups with the highest IOPS and
it refreshes the display 100 times before terminating.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics resource cpu show command displays the CPU utilization for QoS policy groups
per node.
The command displays the following data:
• The QoS policy group name (Policy Group)
• CPU utilization observed in percentage (CPU)
The results displayed per iteration are sorted by total CPU utilization. Each iteration starts with a row that
displays the total CPU utilization across all QoS policy groups.
● Parameters
-node {<nodename>|local} - Node
Selects the policy groups that match this parameter value.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
● Examples
cluster1::> qos statistics resource cpu show -node nodeA -iterations 100 -rows 3
Policy Group CPU
-------------------- -----
-total- (100%) 9%
fast 1%
slow 3%
medium 5%
-total- (100%) 8%
slow 1%
fast 3%
medium 3%
The following example shows the output when the session privilege level is diagnostic.
cluster1::*> qos statistics resource cpu show -node nodeB -iterations 100 -rows 3
Policy Group CPU Wafl_exempt Kahuna Network Raid Exempt Protocol
-------------------- ----- ----------- ------ ------- ----- ------ --------
-total- (200%) 21% 0% 0% 0% 0% 18% 3%
fast 19% 0% 0% 0% 0% 16% 3%
medium 1% 0% 0% 0% 0% 1% 0%
slow 1% 0% 0% 0% 0% 1% 0%
-total- (200%) 22% 0% 0% 0% 0% 19% 3%
fast 18% 0% 0% 0% 0% 15% 3%
medium 2% 0% 0% 0% 0% 2% 0%
slow 2% 0% 0% 0% 0% 2% 0%
The example above displays the total CPU utilization for the 3 QoS policy groups with the highest CPU
utilization and it refreshes the display 100 times before terminating.
cluster1::> qos statistics resource cpu show -node local -iterations 100 -policy-group pg1
Policy Group CPU
-------------------- -----
-total- (100%) 7%
pg1 1%
-total- (100%) 7%
pg1 1%
-total- (100%) 7%
pg1 1%
-total- (100%) 10%
pg1 1%
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics resource disk show command displays the disk utilization for QoS policy groups
per node. The disk utilization shows the percentage of time spent on the disk during read and write
operations. The command displays disk utilization for system-defined policy groups; however, their disk
utilization is not included in the total utilization. The command only supports hard disks.
The command displays the following data:
• The QoS policy group name (Policy Group)
• Disk utilization (Disk)
• The number of HDD data disks utilized (Number of HDD Disks)
The results displayed are sorted by total disk utilization. Each iteration starts with a row that displays the
total disk utilization across all QoS policy groups.
● Parameters
-node {<nodename>|local} - Node
Selects the policy groups that match this parameter value.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default
value is 10.
| [-policy-group <text>] - QoS Policy Group Name
Selects the QoS policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
| [-adaptive-policy-group <text>]} - Adaptive QoS Policy Group Name
Selects the QoS adaptive policy group whose name matches the specified value. If you do not specify this
parameter, the command displays data for all QoS policy groups.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
● Examples
cluster1::> qos statistics resource disk show -node nodeA -iterations 100 -rows 3
Policy Group Disk Number of HDD Disks
The example above displays the total disk utilization for the 3 QoS policy groups with the highest disk
utilization and it refreshes the display 100 times before terminating.
cluster1::> qos statistics resource disk show -node local -iterations 100 -policy-group pg1
Policy Group Disk Number of HDD Disks
-------------------- ----- -------------------
-total- 3% 10
pg1 1% 24
-total- 3% 10
pg1 1% 24
-total- 3% 10
pg1 1% 24
-total- 3% 10
pg1 1% 24
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics volume characteristics show command displays data that characterizes the
behavior of volumes.
The command displays the following data:
• QoS volume name (Workload)
• QoS workload ID (ID)
• Input/output operations per second (IOPS)
• Throughput achieved in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate
(Throughput)
• Request size in bytes (B) (Request size)
● Parameters
[-node {<nodename>|local}] - Node
Selects the volumes that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed
range of values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the characteristic data that match this parameter value. Enter a complete volume name or press
the <Tab> key to complete the name. Wildcard query characters are not supported.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified and its value is true, the command displays data for FlexVols and Flexgroup constituents. Otherwise, it displays data for FlexVols and Flexgroups.
● Examples
The example above displays characteristics for the 3 volumes with the highest IOPS and it refreshes the
display 100 times before terminating.
cluster1::> qos statistics volume characteristics show -vserver vs0 -volume vs0_vol0 -iterations 100
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics volume latency show command displays the average latencies for volumes on
Data ONTAP subsystems.
The command displays the following data:
• The QoS volume name (Workload)
• The QoS workload ID (ID)
• Total latency observed per I/O operation (Latency)
• Latency observed per I/O operation in the Network subsystem (Network)
• Latency observed per I/O operation across the internally connected nodes in a Cluster (Cluster)
• Latency observed per I/O operation in the Data management subsystem (Data)
• Latency observed per I/O operation in the Storage subsystem (Disk)
• Latency observed per I/O operation for QoS Policy Group Ceiling (QoS Max)
• Latency observed per I/O operation for QoS Policy Group Floor (QoS Min)
• Latency observed per I/O operation for NVRAM transfer (NVRAM)
• Latency observed per I/O operation for Object Store (Cloud) operations
• Latency observed per I/O operation for FlexCache (FlexCache) operations
• Latency observed per I/O operation for Synchronous SnapMirror (SM Sync) operations
• Latency observed per I/O operation for Volume Activation (VA) operations
The results displayed per iteration are sorted by the total latency field. Each iteration starts with a row that
displays the average latency, in microseconds (us) or milliseconds (ms), observed across all volumes.
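The latency columns switch between microseconds (us) and milliseconds (ms) "as appropriate". A minimal Python sketch of such a formatter follows; this is not ONTAP code, and the 1000-microsecond threshold is an assumption chosen to match the sample output.

```python
# Sketch: format a latency value in us or ms, as the columns above do.
def format_latency(microseconds):
    # Assumption: values under 1 ms are shown in us, larger values in ms.
    if microseconds < 1000:
        return f"{microseconds:.2f}us"
    return f"{microseconds / 1000:.2f}ms"

print(format_latency(455))   # 455.00us
print(format_latency(4800))  # 4.80ms
```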
● Parameters
[-node {<nodename>|local}] - Node
Selects the volumes that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed
range of values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the latency data that match this parameter value. Enter a complete volume name or press the
<Tab> key to complete the name. Wildcard query characters are not supported.
[-iterations <integer>] - Number of Iterations
Specifies the number of times that the command refreshes the display with updated data before
terminating. If you do not specify this parameter, the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified and its value is true, the command displays data for FlexVols and Flexgroup constituents. Otherwise, it displays data for FlexVols and Flexgroups.
● Examples
The example above displays latencies for the 3 volumes with the highest latencies and it refreshes the
display 100 times before terminating.
cluster1::> qos statistics volume latency show -vserver vs0 -volume vs0_vol0 -iterations 100
Workload ID Latency Network Cluster Data Disk Qos Max Qos Min
NVRAM Cloud FlexCache SM Sync VA
--------------- ------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---
------- ---------- ---------- ---------- ----------
-total- - 455.00us 158.00us 0ms 297.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
vs0_vol0-wid1.. 15658 428.00us 155.00us 0ms 273.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 337.00us 130.00us 0ms 207.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
vs0_vol0-wid1.. 15658 316.00us 128.00us 0ms 188.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 464.00us 132.00us 0ms 332.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
vs0_vol0-wid1.. 15658 471.00us 130.00us 0ms 341.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 321.00us 138.00us 0ms 183.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
vs0_vol0-wid1.. 15658 302.00us 137.00us 0ms 165.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 418.00us 142.00us 0ms 276.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
vs0_vol0-wid1.. 15658 424.00us 143.00us 0ms 281.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics volume performance show command shows the current system performance that
each volume is achieving.
The command displays the following data:
• The QoS volume name (Workload)
• The QoS workload ID (ID)
• Input/output operations performed per second (IOPS)
• Throughput in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Latency observed per request in microseconds (us) or milliseconds (ms) as appropriate (Latency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total
IOPS used across all volumes. Other columns in this row are either totals or averages.
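One display iteration as described above (a leading "-total-" row, then the busiest volumes sorted by IOPS and truncated to the -rows limit) can be sketched in Python. This is an illustrative sketch only, not ONTAP code; the field and function names are hypothetical.

```python
# Sketch: build one iteration of the sorted, truncated display.
def build_iteration(samples, rows=10):
    # The leading row totals IOPS across all volumes.
    total = {"workload": "-total-", "iops": sum(s["iops"] for s in samples)}
    # Remaining rows are the busiest volumes, sorted by IOPS, at most `rows`.
    busiest = sorted(samples, key=lambda s: s["iops"], reverse=True)[:rows]
    return [total] + busiest

samples = [{"workload": "vs0_vol0", "iops": 526},
           {"workload": "vs0_vol1", "iops": 310},
           {"workload": "vs0_vol2", "iops": 120}]

for row in build_iteration(samples, rows=2):
    print(row["workload"], row["iops"])
# -total- 956, then vs0_vol0 526 and vs0_vol1 310
```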
● Parameters
[-node {<nodename>|local}] - Node
Selects the volumes that match this parameter value. If you do not specify this parameter, the command
displays data for the entire cluster.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed
range of values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the performance data that match this parameter value. Enter a complete volume name or press
the <Tab> key to complete the name. Wildcard query characters are not supported.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified and its value is true, the command displays data for FlexVols and Flexgroup constituents. Otherwise, it displays data for FlexVols and Flexgroups.
● Examples
The example above displays the system performance for the 3 volumes with the highest IOPS and it
refreshes the display 100 times before terminating.
cluster1::> qos statistics volume performance show -vserver vs0 -volume vs0_vol0 -iterations 100
Workload ID IOPS Throughput Latency
--------------- ------ -------- ---------------- ----------
-total- - 1278 639.17KB/s 404.00us
vs0_vol0-wid1.. 15658 526 263.17KB/s 436.00us
-total- - 1315 657.33KB/s 86.00us
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics volume resource cpu show command displays the CPU utilization for volumes
per node.
The command displays the following data:
• The QoS volume name (Workload)
• The QoS workload ID (ID)
• CPU utilization observed in percentage (CPU)
The results displayed per iteration are sorted by total CPU utilization. Each iteration starts with a row that
displays the total CPU utilization across all volumes.
● Parameters
-node {<nodename>|local} - Node
Selects the volumes that match this parameter value.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed
range of values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the CPU utilization data that match this parameter value. Enter a complete volume name or
press the <Tab> key to complete the name. Wildcard query characters are not supported.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified and its value is true, the command displays data for FlexVols and Flexgroup constituents. Otherwise, it displays data for FlexVols and Flexgroups.
● Examples
cluster1::> qos statistics volume resource cpu show -node nodeA -iterations 100 -rows 3
Workload ID CPU
--------------- ----- -----
-total- (100%) - 9%
vs0vol1-wid-102 102 5%
vs0vol2-wid-121 121 2%
vs2_vol0-wid-.. 212 2%
-total- (100%) - 8%
vs0vol1-wid-102 102 5%
vs0vol2-wid-121 121 2%
vs2_vol0-wid-.. 212 1%
The example above displays total CPU utilization for the 3 volumes with the highest CPU utilization and it
refreshes the display 100 times before terminating.
cluster1::> qos statistics volume resource cpu show -node local -vserver vs0 -volume vs0_vol1 -iterations 100
Workload ID CPU
--------------- ----- -----
-total- (100%) - 2%
vs0_vol1-wid7.. 7916 2%
-total- (100%) - 2%
vs0_vol1-wid7.. 7916 2%
-total- (100%) - 1%
vs0_vol1-wid7.. 7916 1%
-total- (100%) - 2%
vs0_vol1-wid7.. 7916 1%
-total- (100%) - 2%
vs0_vol1-wid7.. 7916 2%
The following example shows the output when the session privilege level is "diagnostic".
cluster1::*> qos statistics volume resource cpu show -node nodeB -iterations 100 -rows 3
Workload ID CPU Wafl_exempt Kahuna Network Raid Exempt Protocol
--------------- ----- ----- ----------- ------ ------- ----- ------ --------
-total- (200%) - 23% 0% 0% 0% 0% 18% 5%
vs0vol1-wid-102 102 18% 0% 0% 0% 0% 15% 3%
vs0vol2-wid-121 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
-total- (200%) - 24% 0% 0% 0% 0% 19% 5%
vs0vol1-wid-102 102 19% 0% 0% 0% 0% 16% 3%
vs0vol2-wid-121 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics volume resource disk show command displays the disk utilization for volumes
per node. Disk utilization is the percentage of time spent on the disk during read and write
operations. The command supports only hard disks.
The command displays the following data:
• The QoS volume name (Workload)
• The QoS workload ID (ID)
• Disk utilization (Disk)
• The number of HDD data disks utilized (Number of HDD Disks)
The results displayed are sorted by total disk utilization. Each iteration starts with a row that displays the
total disk utilization across all volumes.
● Parameters
-node {<nodename>|local} - Node
Selects the volumes that match this parameter value.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. The default setting is 10. The allowed
range of values is 1 to 20.
| -vserver <vserver name> - Vserver Name
Specifies the Vserver to which the volume belongs.
-volume <volume name>} - Volume Name
Selects the disk utilization data that match this parameter value. Enter a complete volume name or
press the <Tab> key to complete the name. Wildcard query characters are not supported.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified and its value is true, the command displays data for FlexVols and Flexgroup constituents. Otherwise, it displays data for FlexVols and Flexgroups.
● Examples
cluster1::> qos statistics volume resource disk show -node nodeB -iterations 100 -rows 3
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- (100%) - 30% 4
vs0vol1-wid101 101 12% 2
vs0vol2-wid121 121 10% 1
vol0-wid1002 1002 8% 1
-total- (100%) - 30% 4
vs0vol1-wid101 101 12% 2
The example above displays total disk utilization for the 3 volumes with the highest disk utilization and it
refreshes the display 100 times before terminating.
cluster1::> qos statistics volume resource disk show -node local -vserver vs0 -volume vs0_vol0 -iterations 100
Workload ID Disk Number of HDD Disks
--------------- ------ ------ -------------------
-total- - 5% 10
vs0_vol0-wid1.. 15658 1% 6
-total- - 5% 10
vs0_vol0-wid1.. 15658 1% 6
-total- - 6% 10
vs0_vol0-wid1.. 15658 2% 6
-total- - 6% 10
vs0_vol0-wid1.. 15658 2% 6
-total- - 6% 10
vs0_vol0-wid1.. 15658 2% 6
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics workload characteristics show command displays data that characterizes the
behavior of QoS workloads.
The command displays the following data:
• The QoS workload name (Workload)
• The QoS workload ID (ID)
• Input/output operations performed per second (IOPS)
• Throughput achieved in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate
(Throughput)
• Request size in bytes (B) (Request size)
• Read percentage from total IOPS (Read)
• Concurrency, which indicates the number of concurrent users generating the I/O traffic (Concurrency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total
IOPS used across all QoS workloads. Other columns in this row are either totals or averages.
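The note that the "-total-" row mixes totals and averages can be sketched in Python. This is an illustrative sketch only, not ONTAP code; in particular, the IOPS-weighted averaging of the read percentage is an assumption made for illustration.

```python
# Sketch: a "-total-" row that totals IOPS but averages the read percentage.
def totals_row(samples):
    iops = sum(s["iops"] for s in samples)  # total across workloads
    # Assumption: read percentage is averaged, weighted by each workload's IOPS.
    read = round(sum(s["iops"] * s["read_pct"] for s in samples) / iops)
    return {"workload": "-total-", "iops": iops, "read_pct": read}

samples = [{"workload": "file-test1_a", "iops": 100, "read_pct": 100},
           {"workload": "file-test1_b", "iops": 100, "read_pct": 50}]
print(totals_row(samples))  # {'workload': '-total-', 'iops': 200, 'read_pct': 75}
```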
● Parameters
[-node {<nodename>|local}] - Node
Selects the QoS workloads that match this parameter value. If you do not specify this parameter, the
command displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default
value is 10.
[-policy-group <text>] - QoS Policy Group Name
Selects the QoS workloads that belong to the QoS policy group specified by this parameter value. If you
do not specify this parameter, the command displays data for all QoS workloads.
| [-adaptive-policy-group <text>] - Adaptive QoS Policy Group Name
Selects the QoS workloads that belong to the QoS adaptive policy group specified by this parameter
value. If you do not specify this parameter, the command displays data for all QoS workloads.
| [-workload <text>] - QoS Workload Name
Selects the QoS workload that matches this parameter value. If you do not specify this parameter, the
command displays data for all QoS workloads.
| [-workload-id <integer>]} - QoS Workload ID
Selects the QoS workload that matches the QoS workload ID specified by this parameter value.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified and its value is true, the command displays data for FlexVols and Flexgroup constituents. Otherwise, it displays data for FlexVols and Flexgroups.
● Examples
The example above displays characteristics for the 4 QoS workloads with the highest IOPS and it refreshes
the display 100 times before terminating.
cluster1::> qos statistics workload characteristics show -iterations 100 -rows 2 -policy-group pg1
Workload ID IOPS Throughput Request size Read Concurrency
--------------- ------ -------- ---------------- ------------ ---- -----------
-total- - 243 546.86KB/s 2307B 61% 1
file-test1_a-.. 6437 34 136.00KB/s 4096B 100% 0
file-test1_c-.. 5078 33 133.33KB/s 4096B 100% 0
-total- - 310 3.09MB/s 10428B 55% 1
file-test1_a-.. 6437 36 142.67KB/s 4096B 100% 0
file-test1_b-.. 9492 35 138.67KB/s 4096B 100% 0
-total- - 192 575.71KB/s 3075B 71% 1
file-test1-wi.. 7872 39 157.33KB/s 4096B 100% 0
file-test1_c-.. 5078 38 153.33KB/s 4096B 100% 0
The example above displays the characteristics for the 2 QoS workloads belonging to QoS policy group pg1
with the highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload characteristics show -iterations 100 -workload-id 9492
Workload ID IOPS Throughput Request size Read Concurrency
--------------- ------ -------- ---------------- ------------ ---- -----------
-total- - 737 2.14MB/s 3045B 79% 1
file-test1_b-.. 9492 265 1058.67KB/s 4096B 100% 0
-total- - 717 4.26MB/s 6235B 80% 1
file-test1_b-.. 9492 272 1086.67KB/s 4096B 100% 1
-total- - 623 2.50MB/s 4202B 86% 0
file-test1_b-.. 9492 263 1050.67KB/s 4096B 100% 0
-total- - 595 2.11MB/s 3712B 89% 0
file-test1_b-.. 9492 266 1064.00KB/s 4096B 100% 0
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics workload latency show command displays the average latencies for QoS
workloads on Data ONTAP subsystems.
The command displays the following data:
• The QoS workload name (Workload)
• The QoS workload ID (ID)
• Total latency observed per I/O operation (Latency)
• Latency observed per I/O operation in the Network subsystem (Network)
• Latency observed per I/O operation across the internally connected nodes in a Cluster (Cluster)
• Latency observed per I/O operation in the Data management subsystem (Data)
• Latency observed per I/O operation in the Storage subsystem (Disk)
• Latency observed per I/O operation for QoS Policy Group Ceiling (QoS Max)
• Latency observed per I/O operation for QoS Policy Group Floor (QoS Min)
• Latency observed per I/O operation for NVRAM transfer (NVRAM)
• Latency observed per I/O operation for Object Store (Cloud) operations
• Latency observed per I/O operation for FlexCache (FlexCache) operations
• Latency observed per I/O operation for Synchronous SnapMirror (SM Sync) operations
• Latency observed per I/O operation for Volume Activation (VA) operations
The results displayed per iteration are sorted by the total latency field. Each iteration starts with a row that
displays the average latency, in microseconds (us) or milliseconds (ms), observed across all QoS workloads.
● Parameters
[-node {<nodename>|local}] - Node
Selects the QoS workloads that match this parameter value. If you do not specify this parameter, the
command displays data for the entire cluster.
[-iterations <integer>] - Number of Iterations
Specifies the number of times that the command refreshes the display with updated data before
terminating. If you do not specify this parameter, the command continues to run until you interrupt it by
pressing Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default
value is 10.
[-policy-group <text>] - QoS Policy Group Name
Selects the QoS workloads that belong to the QoS policy group specified by this parameter value. If you
do not specify this parameter, the command displays data for all QoS workloads.
| [-adaptive-policy-group <text>] - Adaptive QoS Policy Group Name
Selects the QoS workloads that belong to the QoS adaptive policy group specified by this parameter
value. If you do not specify this parameter, the command displays data for all QoS workloads.
| [-workload <text>] - QoS Workload Name
Selects the QoS workload that matches this parameter value. If you do not specify this parameter, the
command displays data for all QoS workloads.
| [-workload-id <integer>]} - QoS Workload ID
Selects the QoS workload that matches the QoS workload ID specified by this parameter value.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified and its value is true, the command displays data for FlexVols and Flexgroup constituents. Otherwise, it displays data for FlexVols and Flexgroups.
● Examples
The example above displays latencies for the 3 QoS workloads with the highest latencies and it refreshes the
display 100 times before terminating.
cluster1::> qos statistics workload latency show -iterations 100 -rows 2 -policy-group pg1
Workload ID Latency Network Cluster Data Disk Qos Max Qos Min
NVRAM Cloud FlexCache SM Sync VA
--------------- ------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---
------- ---------- ---------- ---------- ----------
-total- - 4.80ms 287.00us 0ms 427.00us 4.08ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1-wi.. 7872 9.60ms 265.00us 0ms 479.00us 8.85ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1_a-.. 6437 8.22ms 262.00us 0ms 424.00us 7.53ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 4.20ms 296.00us 0ms 421.00us 3.48ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1-wi.. 7872 8.70ms 211.00us 0ms 489.00us 8.00ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1_a-.. 6437 6.70ms 297.00us 0ms 464.00us 5.94ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 5.90ms 303.00us 0ms 1.71ms 3.88ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1-wi.. 7872 11.36ms 263.00us 0ms 2.06ms 9.04ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1_a-.. 6437 9.48ms 250.00us 0ms 2.30ms 6.93ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
The example above displays latencies for the 2 QoS workloads belonging to QoS policy group pg1 with the
highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload latency show -iterations 100 -workload-id 9492
Workload ID Latency Network Cluster Data Disk Qos Max Qos Min
NVRAM Cloud FlexCache SM Sync VA
--------------- ------ ---------- ---------- ---------- ---------- ---------- ---------- ----------
---------- ---------- ---------- ---------- ----------
-total- - 443.00us 273.00us 0ms 170.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1_b-.. 9492 440.00us 272.00us 0ms 168.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 577.00us 313.00us 0ms 264.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1_b-.. 9492 607.00us 316.00us 0ms 291.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 475.00us 291.00us 0ms 184.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1_b-.. 9492 476.00us 293.00us 0ms 183.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
-total- - 628.00us 284.00us 0ms 344.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
file-test1_b-.. 9492 591.00us 281.00us 0ms 310.00us 0ms 0ms 0ms
0ms 0ms 0ms 0ms 0ms
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics workload performance show command shows the current system performance
that each QoS workload is achieving.
The command displays the following data:
• The QoS workload name (Workload)
• The QoS workload ID (ID)
• Input/output operations performed per second (IOPS)
• Throughput in kilobytes per second (KB/s) or megabytes per second (MB/s) as appropriate (Throughput)
• Latency observed per request in microseconds (us) or milliseconds (ms) as appropriate (Latency)
The results displayed per iteration are sorted by IOPS. Each iteration starts with a row that displays the total
IOPS used across all QoS workloads. Other columns in this row are either totals or averages.
● Parameters
[-node {<nodename>|local}] - Node
Selects the QoS workloads that match this parameter value. If you do not specify this parameter, the
command displays data for the entire cluster.
● Examples
The example above displays the system performance for the 4 QoS workloads with the highest IOPS and it
refreshes the display 100 times before terminating.
cluster1::> qos statistics workload performance show -iterations 100 -rows 2 -policy-group pg1
Workload ID IOPS Throughput Latency
The example above displays the system performance for the 2 QoS workloads belonging to QoS policy group
pg1 with the highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload performance show -iterations 100 -workload-id 11201
Workload ID IOPS Throughput Latency
--------------- ------ -------- ---------------- ----------
-total- - 2866 10.92MB/s 905.00us
file-testfile.. 11201 674 2.63MB/s 889.00us
-total- - 2761 10.55MB/s 1054.00us
file-testfile.. 11201 638 2.49MB/s 1055.00us
-total- - 2810 10.58MB/s 832.00us
file-testfile.. 11201 685 2.68MB/s 909.00us
-total- - 2593 9.86MB/s 1092.00us
file-testfile.. 11201 632 2.47MB/s 964.00us
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics workload resource cpu show command displays the CPU utilization for QoS
workloads per node.
The command displays the following data:
• The QoS workload name (Workload)
• The QoS workload ID (ID)
• CPU utilization observed in percentage (CPU)
The results displayed per iteration are sorted by total CPU utilization. Each iteration starts with a row that
displays the total CPU utilization across all QoS workloads.
● Parameters
-node {<nodename>|local} - Node
● Examples
cluster1::> qos statistics workload resource cpu show -node nodeA -iterations 100 -rows 3
Workload ID CPU
--------------- ----- -----
-total- (100%) - 9%
vs0-wid-102 102 5%
file-bigvmdk-.. 121 2%
vs2_vol0-wid-.. 212 2%
-total- (100%) - 8%
vs0-wid-101 102 5%
file-bigvmdk-.. 121 2%
vs2_vol0-wid-.. 212 1%
The following example shows the output when the session privilege level is "diagnostic".
cluster1::*> qos statistics workload resource cpu show -node nodeB -iterations 100 -rows 3
Workload ID CPU Wafl_exempt Kahuna Network Raid Exempt Protocol
--------------- ----- ----- ----------- ------ ------- ----- ------ --------
-total- (200%) - 23% 0% 0% 0% 0% 18% 5%
vs0-wid-102 102 18% 0% 0% 0% 0% 15% 3%
file-bigvmdk-.. 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
-total- (200%) - 24% 0% 0% 0% 0% 19% 5%
vs0-wid-102 102 19% 0% 0% 0% 0% 16% 3%
file-bigvmdk-.. 121 3% 0% 0% 0% 0% 2% 1%
vs2_vol0-wid-.. 212 2% 0% 0% 0% 0% 1% 1%
The example above displays total CPU utilization for the 3 QoS workloads with the highest CPU utilization
and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload resource cpu show -node local -iterations 100 -rows 2 -policy-group pg1
Workload ID CPU
--------------- ----- -----
-total- (100%) - 41%
file-test1_b-.. 9492 16%
file-test1_c-.. 5078 16%
-total- (100%) - 43%
file-test1_c-.. 5078 17%
file-test1_b-.. 9492 16%
-total- (100%) - 40%
file-test1_c-.. 5078 16%
file-test1_b-.. 9492 15%
The example above displays total CPU utilization for the 2 QoS workloads belonging to QoS policy group pg1
with the highest IOPS and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload resource cpu show -node local -iterations 100 -workload-id 9492
Workload ID CPU
--------------- ----- -----
-total- (100%) - 15%
file-test1_b-.. 9492 3%
-total- (100%) - 14%
file-test1_b-.. 9492 3%
-total- (100%) - 14%
file-test1_b-.. 9492 2%
-total- (100%) - 13%
file-test1_b-.. 9492 3%
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The qos statistics workload resource disk show command displays the disk utilization for QoS
workloads per node. Disk utilization is the percentage of time spent on the disk during read and
write operations. The command displays disk utilization for system-defined workloads; however, their disk
utilization is not included in the total utilization. The command supports only hard disks.
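The exclusion of system-defined workloads from the total can be sketched in Python. This is an illustrative sketch only, not ONTAP code; the assumption here is that system-defined workloads are identifiable by their leading underscore (as with _RAID and _WAFL in the example output below).

```python
# Sketch: sum disk utilization, excluding system-defined workloads
# (identified here, by assumption, by a leading underscore in the name).
def total_disk_utilization(workloads):
    return sum(pct for name, pct in workloads if not name.startswith("_"))

workloads = [("vs0-wid101", 12), ("file-1-wid121", 10),
             ("vol0-wid1002", 8), ("_RAID", 20), ("_WAFL", 7)]
print(total_disk_utilization(workloads))  # 30, matching the -total- row
```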
● Parameters
-node {<nodename>|local} - Node
Selects the QoS workloads that match this parameter value.
[-iterations <integer>] - Number of Iterations
Specifies the number of times the display is refreshed before terminating. If you do not specify this
parameter, the command iterates until interrupted by Ctrl-C.
[-refresh-display {true|false}] - Toggle Screen Refresh Between Each Iteration
Specifies the display style. If true, the command clears the display after each data iteration. If false, the
command displays each data iteration below the previous one. The default is false.
{ [-rows <integer>] - Number of Rows in the Output
Specifies the number of busiest QoS policy groups to display. Valid values are from 1 to 20. The default
value is 10.
[-policy-group <text>] - QoS Policy Group Name
Selects the QoS workloads that belong to the QoS policy group specified by this parameter value. If you
do not specify this parameter, the command displays data for all QoS workloads.
| [-adaptive-policy-group <text>] - Adaptive QoS Policy Group Name
Selects the QoS workloads that belong to the QoS adaptive policy group specified by this parameter
value. If you do not specify this parameter, the command displays data for all QoS workloads.
| [-workload <text>] - QoS Workload Name
Selects the QoS workload that matches this parameter value. If you do not specify this parameter, the
command displays data for all QoS workloads.
| [-workload-id <integer>]} - QoS Workload ID
Selects the QoS workload that matches the QoS workload ID specified by this parameter value.
[-show-flexgroup-as-constituents {true|false}] - Display Flexgroups as Constituents
If this parameter is specified with the value true, the command displays data for FlexVol volumes and
FlexGroup constituents. Otherwise, it displays data for FlexVol volumes and FlexGroups.
● Examples
cluster1::> qos statistics workload resource disk show -node nodeB -iterations 100 -rows 3
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- (100%) - 30% 4
_RAID - 20% 4
vs0-wid101 101 12% 2
file-1-wid121 121 10% 1
vol0-wid1002 1002 8% 1
_WAFL - 7% 3
-total- (100%) - 30% 4
vs0-wid101 101 12% 2
file-1-wid121 121 10% 1
_RAID - 10% 4
vol0-wid1002 1002 8% 1
_WAFL - 7% 3
The example above displays total disk utilization for the 3 QoS workloads with the highest disk utilization
and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload resource disk show -node local -iterations 100 -rows 2 -policy-group pg1
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- - 3% 10
file-test1_a-.. 6437 6% 6
file-test1-wi.. 7872 6% 6
-total- - 3% 10
file-test1_a-.. 6437 5% 6
file-test1-wi.. 7872 5% 6
-total- - 3% 10
file-test1_a-.. 6437 6% 6
file-test1-wi.. 7872 6% 6
The example above displays total disk utilization for the 2 QoS workloads belonging to QoS policy group pg1
with the highest disk utilization and it refreshes the display 100 times before terminating.
cluster1::> qos statistics workload resource disk show -node local -iterations 100 -workload-id 6437
Workload ID Disk Number of HDD Disks
--------------- ------ ----- -------------------
-total- - 3% 10
file-test1_a-.. 6437 6% 6
-total- - 3% 10
file-test1_a-.. 6437 5% 6
-total- - 3% 10
file-test1_a-.. 6437 6% 6
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Deletes a workload from a cluster. When you delete a workload, the associated data object is no longer
controlled by its policy group.
You can only delete user-defined workloads. You cannot delete system-defined or preset workloads.
● Parameters
-workload <text> - Workload Name
Specifies the name of the workload that you want to delete.
● Examples
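The example for this command was lost in conversion. A minimal sketch of an invocation, assuming the command path is qos workload delete (as the surrounding qos workload sections suggest) and that a user-defined workload named wl1 exists on the cluster; the workload name is illustrative:

```
cluster1::*> qos workload delete -workload wl1
```

The ::*> prompt reflects the advanced privilege level this command requires.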
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Shows the current status of workloads on a cluster. Use this command to determine the types of workloads
that are currently on a cluster. The types of workloads include: system-defined, preset, and user-defined. The
system generates system-defined and preset workloads. You cannot create, modify, or delete these
workloads. Also, you can modify or delete a user-defined workload, but you cannot create one.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-workload <text>] - Workload Name
If you use this parameter, the command displays the workloads that contain the specified workload
name.
[-uuid <UUID>] - Workload UUID (privilege: advanced)
If you use this parameter, the command displays the workloads that contain the specified UUID.
[-class <QoS Configuration Class>] - Workload Class
If you use this parameter, the command displays the workloads that contain the specified class. The
Class options include system-defined, preset, and user-defined.
[-wid <integer>] - Workload ID
If you use this parameter, the command displays the workloads that contain the specified internal
workload ID.
[-category <text>] - Workload Category
If you use this parameter, the command displays the workloads that contain the specified category. The
category options include Scanner and Efficiency.
[-policy-group <text>] - Policy Group Name
If you use this parameter, the command displays the workloads that match the specified policy group
name.
[-read-ahead <text>] - Read-ahead Tunables
If you use this parameter, the command displays the workloads that contain the specified read-ahead
cache tunable.
[-vserver <vserver name>] - Vserver
If you use this parameter, the command displays the workloads that match the specified Vserver.
[-volume <volume name>] - Volume
If you use this parameter, the command displays the workloads that match the specified volume.
● Examples
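The captured example for this command was lost in conversion. A hedged sketch of a typical invocation, filtering by the -class parameter described above:

```
cluster1::> qos workload show -class user-defined
```

Output columns and values depend on the workloads present on the cluster, so no captured output is reproduced here.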
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The command displays cluster SAN configuration options.
● Examples
The following example displays the cluster SAN configuration.
The security commands enable you to manage security for the management interface.
security snmpusers
Show SNMP users
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security snmpusers command displays the following information about SNMP users:
• User name
• Authentication method
• Hexadecimal engine ID
• Authentication protocol
• Privacy protocol
• Security group
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
If this parameter is specified, the command displays information only about the SNMP user or users that
belong to the specified Vserver.
[-username <text>] - User Name
If this parameter is specified, the command displays information only about the SNMP user with the
specified user name.
[-authmethod <text>] - Authentication Method
If this parameter is specified, the command displays information only about the SNMP user or users that
use the specified authentication method. Possible values include the following:
• community - SNMP community strings
• usm - SNMP user security model
[-remote-switch-ipaddress <IP Address>] - Remote Switch IP Address
If this parameter is specified, the command displays information only about the remote SNMP user or
users that belong to the specified remote switch.
[-engineid <Hex String>] - Engine Id
If this parameter is specified, the command displays information only about the SNMP user or users that
use the specified engine ID, specified in hexadecimal format.
[-authprotocol <text>] - Authentication Protocol
If this parameter is specified, the command displays information only about the SNMP user or users that
use the specified authentication protocol.
[-privprotocol <text>] - Privacy Protocol
If this parameter is specified, the command displays information only about the SNMP user or users that
use the specified privacy protocol.
[-securitygroup <text>] - Security Group
If this parameter is specified, the command displays information only about the SNMP user or users that
belong to the specified security group.
● Examples
The following example displays information about all SNMP users:
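The captured output was lost in conversion. A minimal sketch of the invocation named in the description above:

```
cluster1::> security snmpusers
```

To narrow the list, combine it with the filtering parameters above, for example security snmpusers -authmethod usm.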
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security audit modify command modifies the following audit-logging settings for the
management interface:
• Whether get requests for the CLI are audited
• Whether get requests for the Data ONTAP API (ONTAPI) are audited
● Parameters
[-cliget {on|off}] - Enable Auditing of CLI Get Operations
This specifies whether get requests for the CLI are audited. The default setting is off.
[-httpget {on|off}] - Enable Auditing of HTTP Get Operations
This specifies whether get requests for the web (HTTP) interface are audited. The default setting is off.
[-ontapiget {on|off}] - Enable Auditing of Data ONTAP API Get Operations
This specifies whether get requests for the Data ONTAP API (ONTAPI) interface are audited. The default
setting is off.
● Examples
The following example turns off auditing of get requests for the CLI interface:
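The command line itself was lost in conversion; a sketch consistent with the -cliget parameter described above:

```
cluster1::> security audit modify -cliget off
```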
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security audit show command displays the following audit-logging settings for the management
interface:
• Whether get requests for the CLI are audited
• Whether get requests for the web (HTTP) interface are audited
• Whether get requests for the Data ONTAP API (ONTAPI) are audited
Audit log entries are written to the 'audit' log, viewable via the 'security audit log show' command.
● Examples
The following example displays the audit-logging settings for the management interface:
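The captured output was lost in conversion. A minimal sketch of the invocation:

```
cluster1::> security audit show
```

The output lists the cliget, httpget, and ontapiget settings described above.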
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security audit log show command displays cluster-wide audit log messages. Messages from each
node are interleaved in chronological order.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
This display option shows the individual fields of the audit record.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-timestamp <Date>] - Log Entry Timestamp
Selects the entries that match the specified input for timestamp. This will be in a human-readable
format <day> <month> <day of month> <hour>:<min>:<sec> <year> in the local timezone.
[-node {<nodename>|local}] - Node
Selects the entries that match the specified input for node.
[-entry <text>] - Log Message Entry
Selects the entries that match the specified input for entry.
[-session-id <text>] - Session ID
This is the session ID for this audit record. Each ssh/console session is assigned a unique session ID.
Each ZAPI/HTTP/SNMP request is assigned a unique session ID.
[-command-id <text>] - Command ID
This is useful with ssh/console sessions. Each command in a session is assigned a unique command ID.
Each ZAPI/HTTP/SNMP request does not have a command ID.
[-application <text>] - Protocol
This is the application used to connect to the cluster. Possible values include the following: internal,
console, ssh, http, ontapi, snmp, rsh, telnet, service-processor
[-location <text>] - Remote user location
The remote IP address or remote access point.
[-vserver <text>] - Vserver name
Storage Virtual Machine name
[-username <text>] - Username
Username
[-input <text>] - Command being executed
The operation being attempted
[-state {Pending|Success|Error}] - State of this audit request
State of this request
[-message <text>] - Additional information and/or error message
Additional information, which may be an error or an informative message.
● Examples
The following example displays specific fields based on a custom query:
cluster1::> security audit log show -fields application, location, state, input, message -location 10.60.* -state Error|Success -input v*|st* -timestamp >"Jul 10 12:00:00 2020"
timestamp                  node  application location     input                                      state   message
-------------------------- ----- ----------- ------------ ------------------------------------------ ------- -------
"Fri Jul 17 11:32:44 2020" node1 ssh         10.60.250.79 storage aggregate create test -diskcount 5 Success -
"Fri Jul 17 11:36:47 2020" node1 ssh         10.60.250.79 vserver create vs1                         Success -
"Fri Jul 17 11:37:33 2020" node1 ssh         10.60.250.79 volume create vol1                         Error   One of the following parameters is required: -aggregate, -aggr-list, -auto-provision-as
"Fri Jul 17 11:38:08 2020" node1 ssh         10.60.250.79 volume create vol1 -aggregate test         Success -
Manage Digital Certificates
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security certificate create command creates and installs a self-signed digital certificate, which
can be used for server authentication, for signing other certificates by acting as a certificate authority (CA),
or for Data ONTAP as an SSL client. The certificate function is selected by the -type field. Self-signed digital
certificates are not as secure as certificates signed by a CA. Therefore, they are not recommended in a
production environment.
● Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the name of the Vserver on which the certificate will exist.
-common-name <FQDN or Custom Common Name> - FQDN or Custom Common Name
This specifies the desired certificate name as a fully qualified domain name (FQDN) or custom common
name or the name of a person. The supported characters, which are a subset of the ASCII character set,
are as follows:
• Letters a through z, A through Z
• Numbers 0 through 9
• Asterisk (*), period (.), underscore (_) and hyphen (-)
The common name must not start or end with a "-" or a ".". The maximum length is 253 characters.
-type <type of certificate> - Type of Certificate
This specifies the certificate type. Valid values are the following:
• server - creates and installs a self-signed digital certificate and intermediate certificates to be used for
server authentication
• root-ca - creates and installs a self-signed digital certificate to sign other certificates by acting as a
certificate authority (CA)
• client - includes a self-signed digital certificate and private key to be used for Data ONTAP as an SSL
client
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note
This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
This specifies a certificate subtype. This optional parameter can have an empty value (the default). The
only valid value is as follows:
• kmip-cert - this is a Key Management Interoperability Protocol (KMIP) certificate
[-cert-name <text>] - Unique Certificate Name
This specifies the system's internal identifier for the certificate. It must be unique within a Vserver. If not
provided, it is automatically generated by the system.
● Examples
This example creates a server type, self-signed digital certificate for a Vserver named vs0 at a company
whose custom common name is www.example.com and whose Vserver name is vs0.
cluster1::> security certificate create -vserver vs0 -common-name www.example.com -type server
This example creates a root-ca type, self-signed digital certificate with a 2048-bit private key generated by
the SHA256 hashing function that will expire in 365 days for a Vserver named vs0 for use by the Software
group in IT at a company whose custom common name is www.example.com, located in Sunnyvale,
California, USA. The email address of the contact administrator who manages the Vserver is
web@example.com.
cluster1::> security certificate create -vserver vs0 -common-name www.example.com -type root-ca -size 2048
-country US -state California -locality Sunnyvale -organization IT -unit Software -email-addr web@example.com
This example creates a client type of self-signed digital certificate for a Vserver named vs0 at a company
that uses Data ONTAP as an SSL client. The company's custom common name is www.example.com and its
Vserver name is vs0.
cluster1::> security certificate create -vserver vs0 -common-name www.example.com -type client -size 2048 -
country US -state California -locality Sunnyvale -organization IT -unit Software -email-addr web@example.com
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command deletes an installed digital security certificate.
● Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the Vserver that contains the certificate.
-common-name <FQDN or Custom Common Name> - FQDN or Custom Common Name
This specifies the desired certificate name as a fully qualified domain name (FQDN) or custom common
name or the name of a person. The supported characters, which are a subset of the ASCII character set,
are as follows:
• Letters a through z, A through Z
• Numbers 0 through 9
• Asterisk (*), period (.), underscore (_) and hyphen (-)
The common name must not start or end with a "-" or a ".". The maximum length is 253 characters.
[-serial <text>] - Serial Number of Certificate
This specifies the certificate serial number.
-ca <text> - Certificate Authority
This specifies the certificate authority (CA).
-type <type of certificate> - Type of Certificate
This specifies the certificate type. Valid values are the following:
• server - includes server certificates and intermediate certificates
• root-ca - includes a self-signed digital certificate to sign other certificates by acting as a certificate
authority (CA)
• client-ca - includes the public key certificate for the root CA of the SSL client. If this client-ca certificate
is created as part of a root-ca, it will be deleted along with the corresponding deletion of the root-ca.
• server-ca - includes the public key certificate for the root CA of the SSL server to which Data ONTAP is a
client. If this server-ca certificate is created as part of a root-ca, it will be deleted along with the
corresponding deletion of the root-ca.
• client - includes a public key certificate and private key to be used for Data ONTAP as an SSL client
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note
This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
This specifies a certificate subtype. This optional parameter can have an empty value (the default). The
only valid value is as follows:
• kmip-cert - this is a Key Management Interoperability Protocol (KMIP) certificate
[-cert-name <text>] - Unique Certificate Name
This specifies the system's internal identifier for the certificate. It is unique within a Vserver.
● Examples
This example deletes a root-ca type digital certificate for a Vserver named vs0 in a company named
www.example.com with serial number 4F57D3D1.
cluster1::> security certificate delete -vserver vs0 -common-name www.example.com -ca www.example.com -type
root-ca -serial 4F57D3D1
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command generates a digital certificate signing request and displays it on the console. A certificate
signing request (CSR or certification request) is a message sent to a certificate authority (CA) to apply for a
digital identity certificate.
● Parameters
[-common-name <text>] - FQDN or Custom Common Name
This specifies the desired certificate name as a fully qualified domain name (FQDN) or custom common
name or the name of a person. The supported characters, which are a subset of the ASCII character set,
are as follows:
• Letters a through z, A through Z
• Numbers 0 through 9
• Asterisk (*), period (.), underscore (_) and hyphen (-)
The common name must not start or end with a "-" or a ".". The maximum length is 253 characters.
{ [-size <size of requested certificate in bits>] - (DEPRECATED)-Size of Requested Certificate in Bits
This specifies the number of bits in the private key. A larger size value provides for a more secure key.
The default is 2048. Possible values include 512, 1024, 1536, and 2048.
Note
This parameter has been deprecated in ONTAP 9.8 and may be removed in future releases of Data
ONTAP. Use the security-strength parameter instead.
Use this parameter to specify the Subject Alternate Name extension - a list of URIs.
[-dns-name <text>, ...] - DNS Name SAN
Use this parameter to specify the Subject Alternate Name extension - a list of DNS names.
[-ipaddr <IP Address>, ...] - IP Address SAN
Use this parameter to specify the Subject Alternate Name extension - a list of IP addresses.
● Examples
This example creates a certificate-signing request with a 2048-bit RSA private key generated by the SHA256
hashing function for use by the Engineering group in IT at a company whose custom common name is
www.example.com, located in Durham, NC, USA. The email address of the contact administrator who
manages the Vserver is web@example.com The request also specifies the subject alternative names, key-
usage and extended-key-usage extensions.
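The command line preceding this output was lost in conversion. A sketch consistent with the narrative above; the -country, -state, -locality, -organization, -unit, and -email-addr parameters mirror those shown for security certificate create and are assumptions here:

```
cluster1::> security certificate generate-csr -common-name www.example.com -size 2048
-country US -state NC -locality Durham -organization IT -unit Engineering -email-addr web@example.com
```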
Private Key :
-----BEGIN PRIVATE KEY-----
MIIG/AIBADANBgkqhkiG9w0BAQEFAASCBuYwggbiAgEAAoIBgQC6jzomD+zOHCzK
RiIRe9Fp/+PaMkCskyhlvL1n/Ig2yjB7fRdufnCd+/CLAgJQI/APV7eOErdOtYes
1Gf/4yoT6NADvIB259jEPnftwNxPjqKGRqMhJ7QSL3fJ/ogqa9QC+Ym9DdHtngRA
I5YePnrJD4tePfpth3BrvhLy7yWkKzdjDHx+CuH+/ofSueUoNBJC4VHfm7tlnKa1
EDsQkxjrZ/0AuBwU2FA+4/qmNzxo/tO/sEpcCeKevIMNygx1bxRzN9M+DMIOHnNL
7540/ez/twZ9FJDxpecNNVAtim6yt0OBXBldeC75k+hm4J/YBgFP7+t2rdWSBa5A
z6YZEwHnc7R9/hA6L5JVxdJZerfbDNyK/Uqxb+IH2HZP6RV3/xnsRJ5guoXGFj7S
fBpyyQxEbE65MuDH5cf78Se2OsjiuMX39KFgIhzLy+qnU7EVo8KW73p5uduIQ67L
pG8InZ6Bwzo55FsdYHl8gnRlndyt2PP93XPiS0G4bsThxRiQS+8CAwEAAQKCAYBW
fqtWFFIVaWi2y3dmJcL840AP3PaxTHURXkVund3FkU6TIncnqoWqKbHnsSHDaDYX
1vJqc3D7lBx4W+5v7DGJE4rGALKK7olIyzGtUJqUZCwkF0Hw0EijmdBvHYyiJmYg
jvN2bJ7lDTspRZaHJS6mY4eZRSEDgST1PyXn7krEZ6kBSju58G/BWt88KyX80s+Y
pIDiLIDg5pVAI2tPDvQhyI+7sqCKZZQm5GpEgB2JDIS+PgzryUWBlSMp1ICcPcgx
rarFZQi1Ne7qrp6FfKvPAO5XLyI0xhgm8fCMJUpxmEb80XY4FeRDzB42a0Z/YL0P
HhpWAI4ZRsDyDd5S7jwLZQ3Hl9WsKvj2/FRU6hWTP+maH/Vel35iLkygfZWUAjNY
F6B0SoBBd9bVeKDODXrD/CwVbuaKZGMaVOenZbczmFUVSi4HZGyqVRxX6WIxVoD0
MZXwWUoWZ32C6II3vp/ReAsouhCnKDKhqfrvH58xF82FTMMXBZ/kDy7k5IySylkC
gcEA4tpiV1eKzC/ft0sPUNmZB/snHfXC+xohzTygCg4L1Rf8zjDnUT/o9D8SRe1/
crkG7ZcjKvIdPz0tatyjyNMsZ9TDISiAJQJ8Et1+jBP0uy2qG+ab+Ub761BR5TX0
O78UcmtEyxaaDZsESWj+qYerG4E7zGZiTscTe2Jma5fPlS1ekyfNzk1GBtya9bIM
r991o/PahSmCz5iPxf4avYM/vQm2p+wIk+o6ZhJIAUlRFrCv8y9lYivQjw+tZA+G
bdE7AoHBANKHg0Jb5BLJmN/5/PLkkELhaZG+UNUngtm46dm/84+sqtdTcUHpqdHv
M/skRYDVERmI50QZ2HmzVC8J+zzs9r01VNNA+Tzcoi3eB3FPdDYPTDtLSzRfsC82
kix8d2uVs+rfmvKwT0XucNvMQjUyYDII7IJln1iIJp2XQZaNleqgyi65kni+6FrQ
EJ9gVD4PtCkX7rKo8csMITe6n+HZIzFpOY6BX0HU/4VGa+RQHGfGIdfKDOJ5AtyG
RPYVvZ1E3QKBwE520sT7FpsBhBPV9no0iWXlTOZj9wj7RO3EJmbT7OvL3DlFWP0V
afHxTtS5DPgVX3wWZqeYDt2sv2TS5CO2Rwmy4bs6Uvh6H4g27GpvDJshdFEqNpDG
KKR/p5PsUYnI0b2xtJ26N5a1I4pwsoTY1CozTQep8h7lZKusoVhdrgMfKjMj9V+C
AtKkw0RwTUsXs4z973tXnFNJpZEKDx21o/oyvebfESh4P7LGZ/lp7o42luU6Y4rN
NNoGxiZx6EFbuQKBwGbMltJTTmXCHKzZQ6NS6gJOUR9CX/QFLAamHUIfUY3JUU59
RyNZNnv1IluyVWHYKFZgnBSLzkF2yFeDtzMDvmObZAUXh9wpG+Prs5SnqGYxSBb3
6Av14XDcY7nnOOTGn6jDcMSqRLsv99nLvlR9ea1U4C+38XvoV3rB/dvG3PpJcxAn
uxbMmWamjEdWYSxAvMcIEZ0Zk5+DF8E/loxQW7fn2pv0HhBmMjLgtRQx7fzaKXJW
Db6UOkp2IbxL11+w3QKBwDloDgwB7ukGyFHf3RKy3YX0en1WGBesXONf1m2fjwOU
nojccfaGwAUdb6m60JuZFhJ3qZ4ecoloY4GxIKV5krvBg1buow/aqDDkKmVVYNO6
FUuXp+BbTBSxjfftSaog7y5Db5aecLXU5FLE+sVlrhp17s9h8Ur+O04SytSVh9JS
SkzHYv+4GybZqmOeF2U+whib8JXD2bJkSfNI1dZZhKVqoTUQfEAE3VFY0EHkVQwk
rLHmjspsUjKc4BKfVRGWJg==
-----END PRIVATE KEY-----
Note: Please keep a copy of your certificate request and private key for future
reference.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security certificate install command installs digital security certificates signed by a certificate
authority (CA) and the public key certificate of the root CA. Digital security certificates also include the
intermediate certificates to construct the chain for server certificates (the server type), client-side root CA
certificates (the client-ca type), or server-side root CA certificates (the server-ca type). With FIPS enabled, the
following restrictions apply to the certificate being installed:
• server/client/server-ca/client-ca - key size must be at least 2048 bits
• server/client - hash function must not be MD5 or SHA-1
• server-ca/client-ca (intermediate CA) - hash function must not be MD5 or SHA-1
• server-ca/client-ca (root CA) - hash function must not be MD5
● Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the Vserver that contains the certificate.
-type <type of certificate> - Type of Certificate
This specifies the certificate type. Valid values are the following:
• server - includes server certificates and intermediate certificates.
• client-ca - includes the public key certificate for the root CA of the SSL client
• server-ca - includes the public key certificate for the root CA of the SSL server to which Data ONTAP is a
client
• client - includes a self-signed or CA-signed digital certificate and private key to be used for Data ONTAP
as an SSL client
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note
This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
This specifies a certificate subtype. This optional parameter can have an empty value (the default). The
only valid value is as follows:
• kmip-cert - this is a Key Management Interoperability Protocol (KMIP) certificate
[-cert-name <text>] - Unique Certificate Name
This specifies the system's internal identifier for the certificate. It must be unique within a Vserver. If not
provided, it is automatically generated by the system.
● Examples
This example installs a CA-signed certificate (along with intermediate certificates) for a Vserver named vs0.
You should keep a copy of the private key and the CA-signed digital certificate
for future reference.
This example installs a CA certificate for client authentication for a Vserver named vs0.
You should keep a copy of the CA-signed digital certificate for future
reference.
This example installs a CA certificate for server authentication for a Vserver named vs0. In this case, Data
ONTAP acts as an SSL client.
You should keep a copy of the CA-signed digital certificate for future
reference.
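The interactive transcripts for these examples were lost in conversion. A sketch of the first invocation; security certificate install then prompts you to paste the certificate and, for the server type, the private key and any intermediate certificates:

```
cluster1::> security certificate install -vserver vs0 -type server
```

The client-ca and server-ca examples differ only in the -type value.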
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the details of an installed certificate, or of a certificate read from user
input.
● Parameters
-vserver <Vserver Name> - Vserver Name
Use this parameter to specify the Vserver that has the certificate installed.
{ [-cert-name <text>] - Installed Certificate Name
Use this parameter to specify the unique name of the installed certificate to read and display.
| [-cert-uuid <UUID>]} - Installed Certificate UUID
Use this parameter to specify the unique UUID of the installed certificate to read and display. If neither a
name nor a UUID is specified, the certificate is read from user input and displayed.
● Examples
The following example reads and prints the details of the certificate.
keyid:52:D8:88:3A:C8:9F:78:66:ED:89:F3:7B:38:70:94:C9:02:02:36:D0
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command allows the user to modify the name of an installed digital certificate. This command does not
alter the certificate itself.
● Parameters
-vserver <Vserver Name> - Vserver Name
This specifies the name of the Vserver on which the certificate exists.
-cert-name <text> - Existing Certificate Name
This specifies the current name of the certificate.
-new-name <text> - New Certificate Name
This specifies the desired name of the certificate. It must be unique among certificates in the Vserver.
● Examples
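The example was lost in conversion. A minimal sketch, assuming the command path is security certificate rename (inferred from the parameters above) and that a certificate named cert1 is installed on Vserver vs0; both names are illustrative:

```
cluster1::> security certificate rename -vserver vs0 -cert-name cert1 -new-name web-server-cert
```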
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays information about the installed digital certificates. Some details are displayed only
when you use the command with the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Name of Vserver
Selects the Vserver whose digital certificates you want to display.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-type <type of certificate>] - Type of Certificate
Selects the certificates that match this parameter value.
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note
This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
Selects the certificate subtype that matches the specified value. The valid values are as follows:
• kmip-cert - this is a Key Management Interoperability Protocol (KMIP) certificate
[-cert-name <text>] - Unique Certificate Name
This specifies the system's internal identifier for the certificate. It is unique within a Vserver.
[-size <size of requested certificate in bits>] - Size of Requested Certificate in Bits
Selects the certificates that match this parameter value.
[-start <Date>] - Certificate Start Date
Selects the certificates that match this parameter value.
[-expiration <Date>] - Certificate Expiration Date
Selects the certificates that match this parameter value.
[-public-cert <certificate>] - Public Key Certificate
Selects the certificates that match this parameter value.
[-country <text>] - Country Name
Selects the certificates that match this parameter value.
[-state <text>] - State or Province Name
Selects the certificates that match this parameter value.
● Examples
The examples below display information about digital certificates.
-----END CERTIFICATE-----
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Organization Unit (e.g. section): IT
Email Address (Contact Name): web@example.com
Protocol: SSL
Hashing Function: SHA256
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays information about the Data ONTAP generated digital certificates. Some
details are displayed only when you use the command with the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Name of Vserver
Selects the Vserver whose digital certificates you want to display.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-type <type of certificate>] - Type of Certificate
Selects the certificates that match this parameter value.
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note
This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
Selects the certificate subtype that matches the specified value. The valid values are as follows:
• kmip-cert - this is a Key Management Interoperability Protocol (KMIP) certificate
[-cert-name <text>] - Unique Certificate Name
This specifies the system's internal identifier for the certificate. It is unique within a Vserver.
[-size <size of requested certificate in bits>] - Size of Requested Certificate in Bits
Selects the certificates that match this parameter value.
[-start <Date>] - Certificate Start Date
Selects the certificates that match this parameter value.
● Examples
The examples below display information about Data ONTAP generated digital certificates.
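The captured output was lost in conversion. A hedged sketch; the command path security certificate show-generated is an assumption based on the description and the related reference below:

```
cluster1::> security certificate show-generated -vserver vs0
```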
Related reference
security certificate show (page 566)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays information about the default CA certificates that come pre-installed with Data
ONTAP. Some details are displayed only when you use the command with the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Name of Vserver
Selects the Vserver whose digital certificates you want to display.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-type <type of certificate>] - Type of Certificate
Selects the certificates that match this parameter value.
[-subtype <kmip-cert>] - (DEPRECATED)-Certificate Subtype
Note
This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
Selects the certificate subtype that matches the specified value. The valid values are as follows:
• kmip-cert - this is a Key Management Interoperability Protocol (KMIP) certificate
[-cert-name <text>] - Unique Certificate Name
This specifies the system's internal identifier for the certificate. It is unique within a Vserver.
[-size <size of requested certificate in bits>] - Size of Requested Certificate in Bits
Selects the certificates that match this parameter value.
[-start <Date>] - Certificate Start Date
Selects the certificates that match this parameter value.
[-expiration <Date>] - Certificate Expiration Date
Selects the certificates that match this parameter value.
[-public-cert <certificate>] - Public Key Certificate
Selects the certificates that match this parameter value.
[-country <text>] - Country Name
Selects the certificates that match this parameter value.
[-state <text>] - State or Province Name
Selects the certificates that match this parameter value.
[-locality <text>] - Locality Name
Selects the certificates that match this parameter value.
[-organization <text>] - Organization Name
Selects the certificates that match this parameter value.
[-unit <text>] - Organization Unit
Selects the certificates that match this parameter value.
[-email-addr <mail address>] - Contact Administrator's Email Address
Selects the certificates that match this parameter value.
[-protocol <protocol>] - Protocol
Selects the certificates that match this parameter value.
[-hash-function <hashing function>] - Hashing Function
Selects the certificates that match this parameter value.
[-self-signed {true|false}] - Self-Signed Certificate
Selects the certificates that match this parameter value.
[-is-root {true|false}] - Is Root CA Certificate?
Selects the certificates that match this parameter value.
[-authority-key-identifier <text>] - Authority Key Identifier
Selects the certificates that match this parameter value.
[-subject-key-identifier <text>] - Subject Key Identifier
Selects the certificates that match this parameter value.
● Examples
The examples below display information about the pre-installed truststore digital certificates.
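A representative command line for this display is shown below; the show-truststore command name is an
assumption for illustration, and the output is omitted:
cluster1::> security certificate show-truststore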
Related reference
security certificate show (page 566)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays information about user-installed digital certificates. Some details are displayed
only when you use the command with the -instance parameter. On systems upgraded to Data ONTAP 9.4 or
later, existing Data ONTAP generated certificates are also displayed by this command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
Note
This parameter has been deprecated in ONTAP 9.6 and may be removed in a future release of Data
ONTAP.
Selects the certificate subtype that matches the specified value. The valid values are as follows:
• kmip-cert - this is a Key Management Interoperability Protocol (KMIP) certificate
[-cert-name <text>] - Unique Certificate Name
This specifies the system's internal identifier for the certificate. It is unique within a Vserver.
[-size <size of requested certificate in bits>] - Size of Requested Certificate in Bits
Selects the certificates that match this parameter value.
[-start <Date>] - Certificate Start Date
Selects the certificates that match this parameter value.
[-expiration <Date>] - Certificate Expiration Date
Selects the certificates that match this parameter value.
[-public-cert <certificate>] - Public Key Certificate
Selects the certificates that match this parameter value.
[-country <text>] - Country Name
Selects the certificates that match this parameter value.
[-state <text>] - State or Province Name
Selects the certificates that match this parameter value.
[-locality <text>] - Locality Name
Selects the certificates that match this parameter value.
[-organization <text>] - Organization Name
Selects the certificates that match this parameter value.
[-unit <text>] - Organization Unit
Selects the certificates that match this parameter value.
[-email-addr <mail address>] - Contact Administrator's Email Address
Selects the certificates that match this parameter value.
[-protocol <protocol>] - Protocol
Selects the certificates that match this parameter value.
[-hash-function <hashing function>] - Hashing Function
Selects the certificates that match this parameter value.
[-self-signed {true|false}] - Self-Signed Certificate
Selects the certificates that match this parameter value.
[-is-root {true|false}] - Is Root CA Certificate?
● Examples
The examples below display information about user-installed digital certificates.
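A representative command line for this display is shown below; the show-user-installed command name and
the Vserver name vs0 are assumptions for illustration, and the output is omitted:
cluster1::> security certificate show-user-installed -vserver vs0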
Related reference
security certificate show (page 566)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command signs a digital certificate signing request and generates a certificate using a Self-Signed Root
CA certificate in either PEM or PKCS12 format. You can use the security certificate generate-csr
command to generate a digital certificate signing request.
● Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the name of the Vserver on which the signed certificate will exist.
-ca <text> - Certificate Authority to Sign
This specifies the name of the Certificate Authority that will sign the certificate.
-ca-serial <text> - Serial Number of CA Certificate
This specifies the serial number of the Certificate Authority that will sign the certificate.
[-expire-days <integer>] - Number of Days until Expiration
This specifies the number of days until the signed certificate expires. The default value is 365 days.
Possible values are between 1 and 3652.
[-format <certificate format>] - Certificate Format
This specifies the format of signed certificate. The default value is PEM. Possible values include PEM and
PKCS12.
[-destination {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...}] - Where to Send File
This specifies the destination to upload the signed certificate. This option can only be used when the
format is PKCS12.
[-hash-function <hashing function>] - Hashing Function
This specifies the cryptographic hashing function for the self-signed certificate. The default value is
SHA256. Possible values include SHA1, SHA256 and MD5.
● Examples
This example signs a digital certificate for a Vserver named vs0 using a Certificate Authority certificate that
has a ca of www.ca.com and a ca-serial of 4F4EB629 in PEM format using the SHA256 hashing function.
cluster1::> security certificate sign -vserver vs0 -ca www.ca.com -ca-serial 4F4EB629
-expire-days 36 -format PEM -hash-function SHA256
Signed Certificate: :
-----BEGIN CERTIFICATE-----
MIICwDCCAaigAwIBAgIET1oskDANBgkqhkiG9w0BAQsFADBdMREwDwYDVQQDEwh2
czAuY2VydDELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEyMDMwOTE2MTUx
M1oXDTEyMDQxNDE2MTUxM1owYDEUMBIGA1UEAxMLZXhhbXBsZS5jb20xCzAJBgNV
BAYTAlVTMQkwBwYDVQQIEwAxCTAHBgNVBAcTADEJMAcGA1UEChMAMQkwBwYDVQQL
EwAxDzANBgkqhkiG9w0BCQEWADBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQD1xWpz
-----END CERTIFICATE-----
This example signs a digital certificate that expires in 36 days and exports it in PKCS12 format to the
destination ftp://10.98.1.1//u/sam/sign.pfx for a Vserver named vs0, using a Certificate Authority
certificate that has a ca value of www.ca.com and a ca-serial value of 4F4EB629, with the MD5 hashing function.
cluster1::> security certificate sign -vserver vs0 -ca www.ca.com -ca-serial 4F4EB629
-expire-days 36 -format PKCS12 -destination ftp://10.98.1.1//u/sam/sign.pfx -hash-function MD5
Signed Certificate: :
-----BEGIN CERTIFICATE-----
MIICwDCCAaigAwIBAgIET1ot8jANBgkqhkiG9w0BAQsFADBdMREwDwYDVQQDEwh2
czAuY2VydDELMAkGA1UEBhMCVVMxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYD
VQQKEwAxCTAHBgNVBAsTADEPMA0GCSqGSIb3DQEJARYAMB4XDTEyMDMwOTE2MjEw
NloXDTEyMDQxNDE2MjEwNlowYDEUMBIGA1UEAxMLZXhhbXBsZS5jb20xCzAJBgNV
BAYTAlVTMQkwBwYDVQQIEwAxCTAHBgNVBAcTADEJMAcGA1UEChMAMQkwBwYDVQQL
EwAxDzANBgkqhkiG9w0BCQEWADBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQD1xWpz
oarXHSyDzv3T5QIxBGRJ0ACtgdjJuqtuAdmnKvKfLS1o4C90
-----END CERTIFICATE-----
Related reference
security certificate generate-csr (page 557)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command revokes a digital certificate signed by a Self-Signed Root CA.
● Parameters
-vserver <Vserver Name> - Name of Vserver
This specifies the name of the Vserver on which the certificate is stored.
-serial <text> - Serial Number of Certificate
This specifies the serial number of the certificate.
-ca <text> - Certificate Authority
This specifies the name of the Certificate Authority whose certificate will be revoked.
-ca-serial <text> - Serial Number of CA Certificate
This specifies the serial number of the Certificate Authority.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
This specifies a fully qualified domain name (FQDN) or custom common name or the name of a person.
This field is optional if ca-serial is specified.
● Examples
This example revokes a signed digital certificate for a Vserver named vs0 with serial as 4F5A2DF2 for a
Certificate Authority certificate that has a ca of www.ca.com and a ca-serial of 4F4EB629.
cluster1::> security certificate ca-issued revoke -vserver vs0 -serial 4F5A2DF2 -ca www.ca.com
-ca-serial 4F4EB629
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the following information about the digital certificates issued by the self-signed
root-ca:
• Vserver
• Serial number of certificate
• FQDN or custom common name or the name of a person
• Serial number of CA certificate
• Status (active, revoked)
• Certificate Authority
• Expiration date
• Revocation date
To display more details, run the command with the -instance parameter. This will add the following
information:
• Country name
• State or province name
• Locality name
• Organization name
• Organization unit
• Contact administrator's email address
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Name of Vserver
Selects the certificates that match this parameter value.
[-serial <text>] - Serial Number of Certificate
Selects the certificates that match this parameter value.
[-ca <text>] - Certificate Authority
Selects the certificates that match this parameter value.
[-ca-serial <text>] - Serial Number of CA Certificate
Selects the certificates that match this parameter value.
[-common-name <FQDN or Custom Common Name>] - FQDN or Custom Common Name
Selects the certificates that match this parameter value.
[-status <status of certificate>] - Status of Certificate
Selects the certificates that match this parameter value. Possible values include active and revoked.
[-expiration <Date>] - Certificate Expiration Date
Selects the certificates that match this parameter value.
[-revocation <Date>] - Certificate Revocation Date
Selects the certificates that match this parameter value.
[-country <text>] - Country Name (2 letter code)
Selects the certificates that match this parameter value.
[-state <text>] - State or Province Name (full name)
Selects the certificates that match this parameter value.
[-locality <text>] - Locality Name (e.g. city)
Selects the certificates that match this parameter value.
[-organization <text>] - Organization Name (e.g. company)
Selects the certificates that match this parameter value.
[-unit <text>] - Organization Unit (e.g. section)
Selects the certificates that match this parameter value.
[-email-addr <mail address>] - Email Address (Contact Name)
Selects the certificates that match this parameter value.
● Examples
The examples below display information about CA issued digital certificates.
Serial Number of
Vserver Serial Number Common Name CA's Certificate Status
---------- --------------- --------------------------- ---------------- -------
vs0 4F5A2C90 example.com 4F4EB629 active
Certificate Authority: vs0.cert
Expiration Date: Sat Apr 14 16:15:13 2012
Revocation Date: -
Vserver: vs0
Serial Number of Certificate: 4F5A2C90
Certificate Authority: vs0.cert
Serial Number of CA Certificate: 4F4EB629
FQDN or Custom Common Name: example.com
Status of Certificate: active
Certificate Expiration Date: Sat Apr 14 16:15:13 2012
Certificate Revocation Date: -
Country Name (2 letter code): US
State or Province Name (full name): California
Locality Name (e.g. city): Sunnyvale
Organization Name (e.g. company): example
Organization Unit (e.g. section): IT
Email Address (Contact Name): web@example.com
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command modifies the certificate management configuration information for the cluster.
● Parameters
[-min-security-strength <bits of security strength>] - Minimum Security Strength
Use this parameter to modify the allowed minimum security strength for certificates. The security bits
mapping to RSA and ECDSA key length are as follows:
Note
This does not affect root CA certificates.
● Examples
The following example modifies the minimum security strength allowed for certificates.
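A representative command line is shown below; the security certificate config modify command name is
assumed from the surrounding descriptions, and the value 112 is illustrative:
cluster1::*> security certificate config modify -min-security-strength 112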
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the certificate management configuration information for the cluster.
"min-security-strength" - If you specify this paramter, the command displays the minimum allowed security
strength for certificates.
"expiration-warn-threshold" - If you specify this paramter, the command displays the minimum number of
days before expiration date configured for event management system (EMS) notification of expiring
certificates.
● Examples
The following example lists minimum security strength certificate management configuration.
112
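A representative command line for this display is shown below; the security certificate config show
command name is assumed from the surrounding descriptions:
cluster1::> security certificate config show -fields min-security-strength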
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command allows the user to check if the node can use the installed set of CA certificates to establish a
secure connection with the specified server. If the connection attempt fails, the system reports which
expected certificates are missing. If the attempt succeeds, the system displays details of the certificates
used.
● Parameters
-vserver <Vserver Name> - Vserver Name
Use this parameter to specify the Vserver that needs the connectivity check.
-server <Hostname and Port> - Server Name
Use this parameter to specify the server to establish a connection with and look up the required CA
certificate.
● Examples
The following example demonstrates a missing CA certificate:
Error: command failed: Missing certificate with subject name: "CN = ExampleRoot, C = US"
CA certificate with cert-name "ExampleRoot" is already installed in the truststore. Use "security
certificate show -cert-name ExampleRoot" to see the details of the CA certificate.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security certificate truststore clear command deletes the pre-installed certificates of the
type 'server-ca'. If you delete these certificates, some of the applications performing SSL communication can
fail.
● Examples
The following example removes the default certificate bundle:
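A representative invocation at the advanced privilege level (output omitted):
cluster1::*> security certificate truststore clear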
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security certificate truststore load command installs default root certificates in the
certificate table of type 'server-ca'. These are the certificates required to validate any incoming server
certificate during the SSL handshake process. Note: This command only installs PEM formatted certificates.
● Parameters
{ [-path <text>] - File to load PEM certificates from
This specifies the path to the PEM formatted certificate bundle. This optional parameter can have an
empty value (the default).
| [-uri <text>]} - URL to download PEM certificates from
This specifies the URL from which to download the PEM formatted certificate bundle.
[-ontap-version <ontap_version>] - Certificates from specific ONTAP version
This specifies the ONTAP version in which the certificates were introduced. Only those certificates will be
loaded. This optional parameter can have an empty value (the default) which indicates that no filtering
on version is done.
● Examples
The following example installs the default certificate bundle:
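A representative invocation that installs the bundle from the default path (output omitted):
cluster1::*> security certificate truststore load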
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security config modify command modifies the existing cluster-wide security configuration. If you
enable FIPS-compliant mode, the cluster will automatically select only compliant TLS protocols (currently
TLSv1.2 and TLSv1.1). Non-compliant protocols are not enabled when FIPS-compliant mode is disabled. Use
the -supported-protocols parameter to include or exclude TLS protocols independently from the FIPS
mode. All protocols at or above the lowest version specified will be enabled, even those not explicitly
specified. By default, FIPS mode is disabled, and Data ONTAP supports the TLSv1.2 and TLSv1.1 protocols. For
backward compatibility, Data ONTAP supports adding SSLv3 and TLSv1 to the supported-protocols list when
FIPS mode is disabled. Use the -supported-cipher-suites parameter to control which TLS cipher suites
are permitted by the system. By default the supported-cipher-suites setting is
TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_GCM_SHA384,
TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA256,
TLS_RSA_WITH_CAMELLIA_128_CBC_SHA, TLS_RSA_WITH_CAMELLIA_256_CBC_SHA,
TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384,
TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA, TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA,
TLS_DHE_DSS_WITH_SEED_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA,
TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA, TLS_DHE_RSA_WITH_SEED_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
TLS_PSK_WITH_AES_128_CBC_SHA, TLS_PSK_WITH_AES_128_GCM_SHA256, TLS_PSK_WITH_AES_256_CBC_SHA,
TLS_PSK_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_SEED_CBC_SHA, TLS_SRP_SHA_WITH_AES_128_CBC_SHA,
TLS_SRP_SHA_WITH_AES_256_CBC_SHA, TLS_SRP_SHA_DSS_WITH_AES_128_CBC_SHA,
TLS_SRP_SHA_DSS_WITH_AES_256_CBC_SHA, TLS_SRP_SHA_RSA_WITH_AES_128_CBC_SHA,
TLS_SRP_SHA_RSA_WITH_AES_256_CBC_SHA. Select a cipher suite which is available with the corresponding
selected protocol. An invalid configuration may cause some functionality to fail to operate properly. Valid
values for supported-cipher-suites are listed at "https://www.iana.org/assignments/tls-parameters/tls-
parameters.xhtml" published by IANA. After modifying the security configuration, reboot all the nodes
manually.
● Parameters
-interface <SSL> - (DEPRECATED)-FIPS-Compliant Interface
Note
This parameter has been deprecated in ONTAP 9.8 and may be removed in a future release of Data
ONTAP.
[-supported-ciphers <Cipher String>] - (DEPRECATED)-Supported Ciphers
Note
This parameter has been deprecated in ONTAP 9.8 and may be removed in a future release of Data
ONTAP. Use the supported-cipher-suites parameter instead.
Selects the supported cipher suites for the selected interface. Default is ALL:!LOW:!aNULL:!EXP:!eNULL.
[-supported-cipher-suites <Cipher String>, ...] - Supported Cipher Suites
Selects the supported cipher suites for the selected interface.
● Examples
The following command enables FIPS mode in the cluster. (Default setting for FIPS mode is false)
The following command limits the supported protocols to just TLSv1.2 in the cluster. (Default setting for
supported protocols is TLSv1.2,TLSv1.1)
The following command limits the supported cipher suites in the cluster to the listed ciphers.
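Representative command lines for the three examples above are shown below; the parameter values are
illustrative, and an actual configuration should select cipher suites compatible with the enabled protocols:
cluster1::*> security config modify -is-fips-enabled true
cluster1::*> security config modify -supported-protocols TLSv1.2
cluster1::*> security config modify -supported-cipher-suites TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384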
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security config show command displays the security configurations of the cluster in advanced
privilege mode.
Default values are as follows:
• SSL FIPS mode: disabled
• Supported protocols: TLSv1.2,TLSv1.1
• Supported cipher suites: All suites for the listed protocols except those that have no authentication, low
encryption strength (less than 56 bits), or utilize 3DES or static DH key exchange.
Enabling FIPS mode will cause the entire cluster to use FIPS-compliant crypto operations only.
Use the security config modify command to change the protocols and cipher suites that the cluster
will support. When all the nodes in the cluster are updated with the modified settings, the -cluster-
security-config-ready value will be shown as yes.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface <SSL>] - (DEPRECATED)-FIPS-Compliant Interface
Note
This parameter has been deprecated in ONTAP 9.8 and may be removed in a future release of Data
ONTAP. As there only ever existed one valid value for this parameter, filtering on it has never
altered the results.
Displays configurations that match the specified value for the interface.
[-is-fips-enabled {true|false}] - FIPS Mode
Display configurations that match the specified value for FIPS mode.
[-supported-protocols {TLSv1.2|TLSv1.1|TLSv1|SSLv3}, ...] - Supported Protocols
Displays configurations that match the specified protocols.
[-supported-ciphers <Cipher String>] - (DEPRECATED)-Supported Ciphers
Note
This parameter has been deprecated in ONTAP 9.8 and may be removed in a future release of Data
ONTAP. Use the supported-cipher-suites parameter instead.
● Examples
The following example shows the default security configurations for a cluster.
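The display is produced by a command line of the following form; the output reproduced below is truncated:
cluster1::*> security config show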
TLS_SRP_SHA_RSA_WITH_AES_128_CBC_SHA,
TLS_SRP_SHA_RSA_WITH_AES_256_CBC_SHA
The following example shows the security configuration after FIPS mode has been enabled.
Related reference
security config modify (page 582)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security config ocsp disable command disables the OCSP-based certificate status check for
applications supporting SSL/TLS communications. For more information about the OCSP-based certificate
status check for applications supporting SSL/TLS communications, see the security config ocsp show
command.
● Parameters
-application <SSL/TLS Application supporting OCSP>, ... - Application Name
Use this parameter to specify the application to disable the OCSP support. To disable all applications, the
value 'all' can be used. Note: You cannot specify the value 'all' with other applications.
● Examples
The following example disables the OCSP support for AutoSupport and EMS applications:
The following example disables the OCSP support for all applications:
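Representative command lines for the two examples above (output omitted):
cluster1::*> security config ocsp disable -application autosupport,ems
cluster1::*> security config ocsp disable -application all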
Related reference
security config ocsp show (page 588)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security config ocsp enable command enables the OCSP-based certificate status check for
applications supporting SSL/TLS communications. For more information about the OCSP-based certificate
status check for applications supporting SSL/TLS communications, see the security config ocsp show
command.
● Parameters
-application <SSL/TLS Application supporting OCSP>, ... - List of Applications
Use this parameter to specify the application to enable the OCSP support. To enable all applications, the
value 'all' can be used. Note: You cannot specify the value 'all' with other applications.
● Examples
The following example enables the OCSP support for AutoSupport and EMS applications:
The following example enables the OCSP support for all applications:
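Representative command lines for the two examples above (output omitted):
cluster1::*> security config ocsp enable -application autosupport,ems
cluster1::*> security config ocsp enable -application all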
Related reference
security config ocsp show (page 588)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security config ocsp show command displays the support status of the OCSP-based certificate
status check for applications supporting SSL/TLS communications. If the OCSP support is enabled for an
application, this check is done in addition to the certificate chain validation as part of the SSL handshake
process. The OCSP-based certificate status check is done for all the certificates in the chain, provided that
each certificate has the OCSP URI access points mentioned in them. If no access points are specified, the
OCSP-based certificate revocation status check is ignored for that certificate and checking continues for the
rest of the certificates in the chain.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-application <SSL/TLS Application supporting OCSP>] - Application Name
Selects the application that matches this parameter value. Applications include:
• autosupport - AutoSupport
• audit_log - Audit Logging
• fabricpool - External capacity tiers
• ems - Event Management System
• kmip - Key Management Interoperability Protocol
• ldap_ad - Lightweight Directory Access Protocol - Active Directory (query and modify items in Active
Directory)
• ldap_nis_namemap - Lightweight Directory Access Protocol - NIS and Name Mapping (query Unix user,
group, netgroup and name mapping information)
[-is-ocsp-enabled {true|false}] - Is OCSP-based Certificate Status Check Enabled?
Selects the application that matches this parameter value.
● Examples
The following example displays the OCSP support for the applications supporting SSL/TLS communications:
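A representative invocation (output omitted):
cluster1::*> security config ocsp show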
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security config status show command displays the required reboot status of the nodes in the
cluster after security configuration settings have been modified using the security config modify
command. Use this command to monitor the status of the required reboot process. When all nodes have
rebooted, the cluster is ready to use the new security configuration settings.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Select the node whose reboot-status you want to display.
[-reboot-needed {true|false}] - Reboot Needed
Selects the nodes whose reboot-needed status matches this value. The reboot-needed status indicates
whether a node must be rebooted for the modified security configuration to take effect.
● Examples
The following example displays the status of a configuration change in a four-node cluster.
The following example shows the output of the command after the cluster reboot process is complete.
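Both status displays are produced by the same command; only the reported reboot-needed values differ:
cluster1::*> security config status show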
Related reference
security config modify (page 582)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security ipsec show-ikesa command displays information about IKE Security Associations (SA).
Running this command with the -node parameter displays information relevant to IKE SAs generated at the
specified node.
Running this command with the -vserver parameter displays information relevant to IKE SAs associated
with the specified vserver.
Running this command with the -policy-name parameter displays information relevant to IKE SAs created
based on the specified security policy.
You can specify additional parameters to display only information matching those parameters. For example,
to display IKE SAs associated with a specific local address, run the command with the -local-address
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the specified
fields. Notice that key fields are always displayed.
| [-instance ]}
If you specify the -instance parameter, the command displays all fields of the IKE SAs.
-node <nodename> - Node
This required parameter specifies the node from which the IKE SA information will be collected and
displayed.
[-vserver <vserver name>] - Vserver Name
Use this parameter to display the IKE SAs associated with the specified Vserver.
[-policy-name <text>] - Policy Name
Use this parameter to display the IKE SAs created based on the specified security policy.
[-local-address <text>] - Local Address
Use this parameter to display the IKE SAs with the specified local endpoint IP address.
[-remote-address <text>] - Remote Address
Use this parameter to display the IKE SAs with the specified remote endpoint IP address.
[-initiator-spi <text>] - Initiator SPI
Use this parameter to display the IKE SAs with the specified initiator Security Parameter Index (SPI).
[-responder-spi <text>] - Responder SPI
Use this parameter to display the IKE SAs with the specified responder SPI.
[-is-initiator {true|false}] - Is Initiator
Use this parameter to display the IKE SAs created when the given node matches the specified initiator
role: true means initiator role and false means responder role in IKE negotiation.
[-ike-version <integer>] - IKE Version
Use this parameter to display the IKE SAs created using the specified IKE version.
[-auth-method <IKE Authentication Method>] - Authentication Method
Use this parameter to display the IKE SAs created using the specified authentication method.
[-state <IKE SA State>] - IKE SA State
Use this parameter to display only the IKE SAs that are in the specified state.
[-cipher-suite <Cipher Suite Type>] - Cipher Suite
Use this parameter to display the IKE SAs created using the specified cipher suite.
[-lifetime <integer>] - Lifetime
Use this parameter to display the IKE SAs with the specified remaining lifetime. Notice that lifetime
keeps changing for the duration of the security association.
● Examples
This example displays all IKE SAs for node cluster1-node1:
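Based on the required -node parameter documented above, the command for this example would look like:

```
cluster-1::> security ipsec show-ikesa -node cluster1-node1
```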
This example displays selected fields of all IKE SAs for node cluster1-node1:
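A sketch of the selected-fields query; the field names below are assumptions inferred from the columns in the output that follows:

```
cluster-1::> security ipsec show-ikesa -node cluster1-node1 -fields vserver,policy-name,local-address,remote-address,initiator-spi,state
```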
vs1 Policy1
192.186.10.1 192.186.10.2 e658e5bc7ece199e ESTABLISHED
This example displays the instance view (all fields) for all IKE SAs associated with node cluster1-node1,
vserver vs1, created using policy Policy1:
cluster-1::> security ipsec show-ikesa -node cluster1-node1 -vserver vs1 -policy-name Policy1 -instance
Node: cluster1-node1
Vserver Name: vs1
Policy Name: Policy1
Local Address: 192.168.10.1
Remote Address: 192.168.10.2
Initiator SPI: e658e5bc7ece199e
Responder SPI: 9b61befff71e8ca2
Is Initiator: false
IKE Version: 2
Authentication Method: PSK
IKE SA State: ESTABLISHED
Cipher Suite: SUITEB_GCM256
Lifetime: 6000
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security ipsec show-ipsecsa command displays information about IPsec Security Associations
(SA).
Running the command with the -node parameter displays information relevant to IPsec SAs at the specified
node.
Running this command with the -vserver parameter displays information relevant to IPsec SAs associated
with the specified vserver.
Running this command with the -policy-name parameter displays information relevant to IPsec SAs
created using the specified security policy.
You can specify additional parameters to display only information matching those parameters. For example,
to display IPsec SAs only about a certain local address, run the command with the -local-address
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the specified
fields. Notice that key fields are always displayed.
| [-instance ]}
If you specify the -instance parameter, the command displays all fields of matching IPsec SAs.
-node <nodename> - Node
This required parameter specifies the node from which the IPsec SA information will be collected and
displayed.
● Examples
This example displays all IPsec SAs for node cluster1-node1:
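Based on the required -node parameter documented above, the command for this example would look like:

```
cluster-1::> security ipsec show-ipsecsa -node cluster1-node1
```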
This example displays selected fields of all IPsec SAs for node cluster1-node1:
This example displays selected fields of all IPsec SAs associated with node cluster1-node1:
This example displays the instance view (all fields) for all IPsec SAs associated with node cluster1-node1,
vserver vs1, created using policy Policy1:
cluster-1::> security ipsec show-ipsecsa -node cluster1-node1 -vserver vs1 -policy-name Policy1 -instance
Node: cluster1-node1
Vserver Name: vs1
Policy Name: Policy1
Inbound SPI: c68de9db
Outbound SPI: c84f913b
Local Address: 192.168.10.1
Remote Address: 192.168.10.2
IPsec Action: ESP_TRA
IPsec SA State: INSTALLED
Cipher Suite: SUITEB_GCM256
Inbound Bytes Processed: 4704
Inbound Pkts Processed: 56
Outbound Bytes Processed: 6720
Outbound Pkts Processed: 56
IPsec SA Lifetime Seconds: 1800
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command modifies IPsec configuration parameters.
● Parameters
[-is-enabled {true|false}] - Is IPsec Enabled
This parameter enables and disables IPsec on the storage system.
[-log-level <IPsec Log Level>] - IPsec Logging Level
This parameter sets the IPsec logging level, where logging level 0 means no logging, and logging level 5
is most verbose. Default value is 2.
[-replay-window <IPsec Replay Window Size>] - IPsec Replay Window Size
This parameter sets the IPsec replay window size. The possible values are 0, 64, 128, 256, 512 and 1024.
Default value is 0.
● Examples
The following example enables IPsec:
The following example sets the IPsec replay window size to 64:
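Assuming this section documents the security ipsec config modify command (the command name is not shown here; the parameter names are as documented above), the two examples might look like:

```
cluster-1::> security ipsec config modify -is-enabled true
cluster-1::> security ipsec config modify -replay-window 64
```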
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command shows the current IPsec configuration parameters.
● Examples
The following example shows the state of IPsec (enabled/disabled) and the IPsec logging level:
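Assuming the corresponding command is security ipsec config show (the command name is an assumption; this section only describes its behavior), the example might look like:

```
cluster-1::> security ipsec config show
```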
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command creates a new IPsec policy. The traffic to be protected is specified by the traffic selectors
(local-ip-subnets, remote-ip-subnets, local-ports, remote-ports, protocols).
● Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver to which the policy will belong. If there is only a single Vserver capable of
supporting IPsec, the Vserver parameter is implied.
-name <text> - Policy Name
This required parameter specifies the name of the policy which may be a text string (1-64 bytes), a
hexadecimal string (beginning with '0x') or a base64-encoded binary string (beginning with '0s').
-local-ip-subnets <IP Address/Mask>, ... - Local IP Subnets
This required parameter specifies the IPv4 or IPv6 subnet (address and mask, can be subnet or individual
address) representing the local address (range) to be protected by this policy.
-remote-ip-subnets <IP Address/Mask>, ... - Remote IP Subnets
This required parameter specifies the IPv4 or IPv6 subnet (address and mask, can be subnet or individual
address) representing the remote address (range) to be protected by this policy.
[-local-ports {<Number>|<StartingNumber>-<EndingNumber>}, ...] - Local Ports
This optional parameter specifies the logical port associated with the local address to be protected by this
policy. The port defaults to any port ('0-0' or '0'), but a single port or port range may be specified ('port
number' or 'port number-port number').
[-remote-ports {<Number>|<StartingNumber>-<EndingNumber>}, ...] - Remote Ports
This optional parameter specifies the logical port associated with the remote address to be protected by this
policy. The port defaults to any port ('0-0' or '0'), but a single port or port range may be specified ('port
number' or 'port number-port number').
[-protocols {<Protocol Number>|<Protocol Name>}, ...] - Protocols
This optional parameter specifies the protocol to be protected by this policy. The protocol defaults to
any protocol ('any' or '0') but a single protocol may be specified ('tcp', 'udp' or protocol number).
[-action <IPsec Action Type>] - Action
This optional parameter specifies the action to be performed when a packet meets the traffic selectors
described by this policy. The possible values are 'ESP_TRA' (IPsec protect traffic), 'DISCARD' (discard
matching traffic), or 'BYPASS' (send matching traffic in cleartext (not protected by IPsec)). NOTE: if the
action is 'ESP_TRA', then 'shared-key' becomes a required parameter. If the action is 'BYPASS' or 'DISCARD'
and a shared-key is provided, then the shared-key value will be ignored and discarded. The default value
is 'ESP_TRA'.
[-cipher-suite <Cipher Suite Type>] - Cipher Suite
This optional parameter specifies the suite of algorithms that will be used to protect the traffic. The
possible values are 'SUITEB_GCM256', 'SUITEB_GMAC256' and 'SUITE_AESCBC'. The default value is
'SUITEB_GCM256'.
[-ike-lifetime <integer>] - IKE Security Association Lifetime
This optional parameter specifies the lifetime of an IKE Security Association (in seconds). Shortly before
the expiration of the IKE-lifetime, a new IKE security association will be created and the existing IKE
security association (and child IPsec security associations) will be destroyed. The default value is 10800
seconds.
[-ipsec-lifetime <integer>] - IPsec Security Association Lifetime
This optional parameter specifies the lifetime of an IPsec Security Association (in seconds). Shortly
before the expiration of the ipsec-lifetime, a new IPsec security association will be created and the
existing IPsec security association will be destroyed. The default value is 3600 seconds.
[-ipsec-lifetime-bytes <integer>] - IPsec Security Association Lifetime (bytes)
This optional parameter specifies the byte lifetime of an IPsec Security Association. Shortly before the
expiration of the ipsec-lifetime-bytes (ipsec-lifetime-bytes have been processed by the IPsec security
association), a new IPsec security association will be created and the existing IPsec security association
will be destroyed. The default value is 0, which means there is no byte limit.
[-is-enabled {true|false}] - Is Policy Enabled
This optional parameter specifies whether the IPsec policy is enabled or not. Any policy that is created is
stored in a replicated database. The 'is-enabled' parameter determines if the policy will be included in
those evaluated when determining the best-matched policy to match the traffic selectors of the packet.
The default value is 'true'.
[-local-identity <text>] - Local Identity
This optional parameter specifies the local IKE endpoint's identity for authentication purposes. If this field
is not explicitly specified, the local-ip-subnets value is used as the identity.
[-remote-identity <text>] - Remote Identity
This optional parameter specifies the remote IKE endpoint's identity for authentication purposes. If this
field is not explicitly specified, the remote-ip-subnets value is used as the identity.
● Examples
This example creates an IPsec policy that protects matching traffic with IPsec, with all parameters
specified. The preshared key can be a text string of 18-128 bytes, a sequence of hexadecimal digits
beginning with '0x', or Base64-encoded binary data beginning with '0s'.
cluster-1::> security ipsec policy create -vserver vs_data1 -name Policy1
-local-ip-subnets 192.168.10.1/32 -remote-ip-subnets 192.168.20.1/32 -local-ports 4000
-remote-ports 5001 -protocols tcp -action ESP_TRA
-shared-key This_is_a_shared_key_for_ipsec_policy -ike-version 2 -cipher-suite SUITEB_GCM256
-ike-lifetime 4000 -ipsec-lifetime 1800 -ipsec-lifetime-bytes 104880 -is-enabled true
Enter the preshared key for IPsec Policy "Policy1" on Vserver "vs_data1":
This is an example of the creation of an IPsec policy that discards matching traffic:
cluster-1::> security ipsec policy create -vserver vs_data1 -name DiscardTraffic -local-ip-subnets 192.
168.10.1/32 -remote-ip-subnets 192.168.20.1/32 -action DISCARD
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command deletes an existing IPsec policy.
● Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver to which the policy belongs. If there is only a single Vserver capable of supporting
IPsec, the Vserver parameter is implied.
-name <text> - Policy Name
This required parameter specifies the name of the policy to be deleted. The name may be a text string
(1-64 bytes), a hexadecimal string (beginning with '0x') or a base64-encoded binary string (beginning
with '0s').
● Examples
This is an example of IPsec policy deletion where two or more Vservers are capable of supporting IPsec:
This is an example of IPsec policy deletion where only a single Vserver is capable of supporting IPsec:
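A sketch of the two deletions, using the parameters documented above; when only a single Vserver supports IPsec, -vserver can be omitted:

```
cluster-1::> security ipsec policy delete -vserver vs_data1 -name Policy1
cluster-1::> security ipsec policy delete -name Policy1
```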
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command modifies an existing IPsec policy. You cannot modify the name or vserver of a policy. Moving
a policy from one Vserver to another or renaming a policy requires that the existing policy be deleted and
then a new policy created in the desired Vserver with the desired name.
It is highly recommended that you set -is-enabled to false before making any other modifications to
the policy. This disables the policy and allows all existing IPsec and IKE Security Associations associated
with the policy to be flushed. You can then make the desired changes and set -is-enabled back to
true to re-enable the policy.
● Parameters
-vserver <vserver name> - Vserver
Specifies the Vserver to which the policy belongs. If there is only a single Vserver capable of supporting
IPsec, the Vserver parameter is implied.
-name <text> - Policy Name
This required parameter specifies the name of the policy which may be a text string (1-64 bytes), a
hexadecimal string (beginning with '0x') or a base64-encoded binary string (beginning with '0s').
[-local-ip-subnets <IP Address/Mask>, ...] - Local IP Subnets
This parameter specifies the IPv4 or IPv6 subnet (address and mask, can be subnet or individual
address) representing the local address (range) to be protected by this policy.
[-remote-ip-subnets <IP Address/Mask>, ...] - Remote IP Subnets
This parameter specifies the IPv4 or IPv6 subnet (address and mask, can be subnet or individual
address) representing the remote address (range) to be protected by this policy.
[-local-ports {<Number>|<StartingNumber>-<EndingNumber>}, ...] - Local Ports
This parameter specifies the logical port associated with the local address to be protected by this policy.
The value may be specified by 'port number' or 'port number-port number'.
[-remote-ports {<Number>|<StartingNumber>-<EndingNumber>}, ...] - Remote Ports
This parameter specifies the logical port associated with the remote address to be protected by this
policy. The value may be specified by 'port number' or 'port number-port number'.
[-protocols {<Protocol Number>|<Protocol Name>}, ...] - Protocols
This parameter specifies the protocol to be protected by this policy. The protocol may be specified as
'tcp', 'udp' or protocol number.
[-cipher-suite <Cipher Suite Type>] - Cipher Suite
This parameter specifies the suite of algorithms that will be used to protect the traffic. The possible
values are '256-bit AES-CBC', '256-bit AES-GCM' and '256-bit AES-GMAC'.
[-ike-lifetime <integer>] - IKE Security Association Lifetime
This parameter specifies the lifetime of an IKE Security Association (in seconds). Shortly before the
expiration of the IKE-lifetime, a new IKE security association will be created and the existing IKE security
association (and child IPsec security associations) will be destroyed.
[-ipsec-lifetime <integer>] - IPsec Security Association Lifetime
This parameter specifies the lifetime of an IPsec Security Association (in seconds). Shortly before the
expiration of the ipsec-lifetime, a new IPsec security association will be created and the existing IPsec
security association will be destroyed.
[-ipsec-lifetime-bytes <integer>] - IPsec Security Association Lifetime (bytes)
This parameter specifies the byte lifetime of an IPsec Security Association. Shortly before the expiration
of the ipsec-lifetime-bytes (ipsec-lifetime-bytes have been processed by the IPsec security association),
a new IPsec security association will be created and the existing IPsec security association will be
destroyed.
[-is-enabled {true|false}] - Is Policy Enabled
This parameter specifies whether the IPsec policy is enabled. Any policy that is created is
stored in a replicated database. The 'is-enabled' parameter determines if the policy will be included in
those evaluated when determining the best-matched policy to match the traffic selectors of the packet.
The default value is 'true'.
[-local-identity <text>] - Local Identity
This optional parameter specifies the local IKE endpoint's identity for authentication purposes. If this field
is not explicitly specified, the local-ip-subnets value is used as the identity.
[-remote-identity <text>] - Remote Identity
This optional parameter specifies the remote IKE endpoint's identity for authentication purposes. If this
field is not explicitly specified, the remote-ip-subnets value is used as the identity.
● Examples
The following example modifies the local-ip-subnets value of an IPsec policy:
cluster-1::> security ipsec policy modify -vserver vs_data1 -name Policy1 -local-ip-subnets 192.168.30.2/32
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security ipsec policy show command displays information about configured IPsec policies. All
parameters are optional. This command is supported only when IPsec is enabled.
Running the command with the -vserver parameter displays all policies associated with the specified
vserver.
You can specify additional parameters to display only information that matches those parameters. For
example, to display policies associated with a certain local ip subnet, run the command with the -local-
ip-subnets parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the specified
fields. Notice that key fields are always displayed.
| [-instance ]}
If you specify the -instance parameter, the command displays all fields of the policies.
[-vserver <vserver name>] - Vserver
If you specify this parameter, only policies associated with this Vserver will be displayed.
[-name <text>] - Policy Name
This parameter specifies the policy to be displayed.
[-local-ip-subnets <IP Address/Mask>, ...] - Local IP Subnets
If you specify this parameter, information about local-ip-subnets will be displayed.
[-remote-ip-subnets <IP Address/Mask>, ...] - Remote IP Subnets
If you specify this parameter, information about remote-ip-subnets will be displayed.
[-local-ports {<Number>|<StartingNumber>-<EndingNumber>}, ...] - Local Ports
If you specify this parameter, information about local-ports will be displayed.
[-remote-ports {<Number>|<StartingNumber>-<EndingNumber>}, ...] - Remote Ports
If you specify this parameter, information about remote-ports will be displayed.
[-protocols {<Protocol Number>|<Protocol Name>}, ...] - Protocols
If you specify this parameter, information about protocols will be displayed.
[-action <IPsec Action Type>] - Action
If you specify this parameter, information about action will be displayed.
[-cipher-suite <Cipher Suite Type>] - Cipher Suite
If you specify this parameter, information about cipher-suite will be displayed.
[-ike-lifetime <integer>] - IKE Security Association Lifetime
If you specify this parameter, information about ike-lifetime will be displayed.
● Examples
This example displays all policies in all Vservers:
This example displays all of the IPsec policies from a single Vserver:
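A sketch of the two queries above, using the security ipsec policy show command named in the Description:

```
cluster-1::> security ipsec policy show
cluster-1::> security ipsec policy show -vserver vs_data1
```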
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release. Use security key-manager
external add-servers instead.
This command adds the key management server at the indicated IP address to the list of up to four active
key management servers. The command fails if four key management servers are already configured.
This command is not supported when the Onboard Key Manager is enabled.
● Parameters
-address <IP Address> - IP Address
This parameter specifies the IP address of the key management server you want to use to store keys.
[-server-port <integer>] - Server TCP Port
This parameter specifies the TCP port on which the key management server will listen for incoming
connections.
● Examples
The following example adds the key management server with address 10.233.1.98, listening for incoming
connections on the default TCP port 5696, to the list of key management servers used by the external key
manager:
The following example adds the key management server with address 10.233.1.98, listening for incoming
connections on TCP port 15696, to the list of key management servers used by the external key manager:
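Assuming this deprecated command is security key-manager add (an assumption; the command name is not shown in this section), the two examples might look like:

```
cluster-1::> security key-manager add -address 10.233.1.98
cluster-1::> security key-manager add -address 10.233.1.98 -server-port 15696
```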
Related reference
security key-manager external add-servers (page 620)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release. Use security key-manager
key create instead.
This command creates a new authentication key (AK) and stores it on the configured key management
servers. The command fails if the configured key management servers are already storing more than 128
AKs. If the command fails for this reason, delete unused keys on your key management servers and try
the command again. This command is not supported when the Onboard Key Manager is
enabled.
● Parameters
[-key-tag <text>] - Key Tag
This parameter specifies the key tag that you want to associate with the new authentication key (AK).
The default value is the node name. This parameter can be used to help identify created authentication
keys (AKs). For example, the -key-tag parameter of the security key-manager query command can be
used to query for a specific key-tag value.
[-prompt-for-key {true|false}] - Prompt for Authentication Passphrase
If you specify this parameter as true, the command prompts you to enter an authentication passphrase
manually instead of generating it automatically. For security reasons, the authentication passphrase you
entered is not displayed at the command prompt. You must enter the authentication passphrase a
second time for verification. To avoid errors, copy and paste authentication passphrases electronically
instead of entering them manually. Data ONTAP saves the resulting authentication key/key ID pair
automatically on the configured key management servers.
● Examples
The following example creates an authentication key with the node name as the default key-tag value:
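Assuming this deprecated command is security key-manager create-key (an assumption based on the replacement command named above), the invocation producing the output below might be:

```
cluster-1::> security key-manager create-key
```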
Verifying requirements...
Node: node1
Creating authentication key...
Authentication key creation successful.
Key ID: 00000000000000000200000000000100D0F7C2462D626B739FE81B89F29A092F.
Node: node2
Key manager restore operation initialized.
Successfully restored key information.
Verifying requirements...
Node: node1
Creating authentication key...
Authentication key creation successful.
Key ID: 00000000000000000200000000000100B8297A6189BC24B9B84C1916ED576857.
Node: node2
Key manager restore operation initialized.
Successfully restored key information.
The following example creates an authentication key with a user-specified authentication passphrase:
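A sketch of this invocation, using the -prompt-for-key parameter documented above (the command name is assumed, as above):

```
cluster-1::> security key-manager create-key -prompt-for-key true
```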
Verifying requirements...
Node: node1
Creating authentication key...
Authentication key creation successful.
Key ID: 000000000000000002000000000001006268333F870860128FBE17D393E5083B.
Node: node2
Key manager restore operation initialized.
Successfully restored key information.
Related reference
security key-manager key create (page 635)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release. Use security key-manager
external remove-servers instead.
This command removes the key management server at the indicated IP address from the list of active key
management servers. If the indicated key management server is the sole storage location for any key that is
in use by Data ONTAP, you will be unable to remove the key server. This command is not supported when
the Onboard Key Manager is enabled.
● Parameters
-address <IP Address> - IP Address
This parameter specifies the IP address of the key management server you want to remove from use.
● Examples
The following example removes the key server at IP address 10.233.1.198 from the set of configured key
management servers:
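Assuming this deprecated command is security key-manager delete (an assumption; this section documents only the -address parameter), the example might look like:

```
cluster-1::> security key-manager delete -address 10.233.1.198
```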
Related reference
security key-manager external remove-servers (page 623)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Note
This command is deprecated and might be removed in a future release. Use security key-manager
onboard disable instead.
The security key-manager delete-key-database command permanently deletes the Onboard Key
Manager configuration from all nodes of the cluster.
● Examples
The following example deletes the Onboard Key Manager configuration from all nodes of the cluster:
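The invocation that produces the confirmation prompt below, using the command named in the Description:

```
cluster-1::> security key-manager delete-key-database
```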
Warning: This command will permanently delete all keys from the Onboard Key Manager.
Do you want to continue? {y|n}: y
Related reference
security key-manager onboard disable (page 642)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Note
This command is deprecated and may be removed in a future release. Use security key-manager
external disable instead.
Note
The keys stored by the external KMIP servers cannot be deleted by Data ONTAP, and must be deleted
by using external tools.
● Examples
The following example deletes the KMIP-server configuration from all nodes of the cluster:
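Assuming this deprecated command is security key-manager delete-kmip-config (an assumption based on the described behavior), the example might look like:

```
cluster-1::> security key-manager delete-kmip-config
```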
Related reference
security key-manager external disable (page 620)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Note
This command is deprecated and might be removed in a future release.
The security key-manager prepare-to-downgrade command disables the Onboard Key Manager
features that are not supported in releases prior to ONTAP 9.1.0. The features that are disabled are Onboard
Key Manager support for Metrocluster configurations and Volume Encryption (VE).
● Examples
The following example disables the Onboard Key Manager support for Metrocluster configurations and
Volume Encryption (VE):
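Using the command named in the Description above, the example would be:

```
cluster-1::> security key-manager prepare-to-downgrade
```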
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release. Use security key-manager
key query instead.
This command displays the IDs of the keys that are stored on the key management servers. This command
does not update the key tables on the node. To refresh the key tables on the nodes with the key
management server key tables, run the security key-manager restore command. This command is
not supported when the Onboard Key Manager is enabled.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the name of the node that queries the specified key management servers. If
this parameter is not specified, then all nodes will query the specified key management servers.
[-address <IP Address>] - IP Address
This parameter specifies the IP address of the key management server that you want to query.
[-key-id <key id>] - Key ID
If you specify this parameter, then the command displays only the key IDs that match the specified
value.
[-key-tag <text>] - Key Tag
If you specify this parameter, then the command displays only the key IDs that match the specified
value. The key-tag for Volume Encryption Keys (VEKs) is set to the UUID of the encrypted volume.
[-key-type <Key Usage Type>] - Key Type
If you specify this parameter, then the command displays only the key IDs that match the specified
value.
[-count <integer>] - (DEPRECATED)-Key Server's Total Key Count
The value count is deprecated and may be removed in a future release of Data ONTAP. This parameter
specifies the total number of keys stored in the key management servers. If you specify this parameter,
then the command displays only the key IDs retrieved from the key management servers whose total
key count matches the specified count number.
[-restored {yes|no}] - Key/Key ID Pair Present in Node's Key Table?
This parameter specifies whether the key corresponding to the displayed key ID is present in the
specified node's internal key table. If you specify 'yes' for this parameter, then the command displays the
key IDs of only those keys that are present in the system's internal key table. If you specify 'no' for this
parameter, then the command displays the key IDs of only those keys that are not present in the
system's internal key table.
[-key-manager-server-status {available|not-responding|unknown}] - Command Error Code
This parameter specifies the connectivity status of the key management server. If you specify this
parameter, then the command displays only the key IDs retrieved from the key management servers
with specified status.
● Examples
The following example shows all the keys on all configured key servers, and whether those keys have been
restored for all nodes in the cluster:
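A sketch of the query that produces the per-node output below, using the security key-manager query command named above:

```
cluster-1::> security key-manager query
```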
Node: node1
Key Manager: 10.0.0.10
Server Status: available
Node: node2
Key Manager: 10.0.0.10
Server Status: available
If any listed keys have "no" in the "Restored" column, run "security key-manager
restore" to restore those keys.
The following example shows all keys stored on the key server with address "10.0.0.10" from node "node1"
with key-tag "node1":
cluster-1::> security key-manager query -address 10.0.0.10 -node node1 -key-tag node1
Node: node1
Key Manager: 10.0.0.10
Server Status: available
If any listed keys have "no" in the "Restored" column, run "security key-manager
restore" to restore those keys.
The following example shows the Volume Encryption Key (VEK) with key-tag (i.e., volume UUID)
"301a4e57-9efb-11e7-b2bc-0050569c227f" on nodes where that key has not been restored:
Node: node2
Key Manager: 10.0.0.10
Server Status: available
If any listed keys have "no" in the "Restored" column, run "security key-manager restore" to restore
those keys.
Related reference
security key-manager key query (page 637)
security key-manager restore (page 610)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release. Use security key-manager
external restore instead.
This command retrieves and restores any current unrestored keys associated with the storage controller from
the specified key management servers. This command is not supported when the Onboard Key Manager is
enabled.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the name of the node that is to load the key IDs into its internal key table. If
not specified, all nodes retrieve keys into their internal key table.
[-address <IP Address>] - IP Address
If this parameter is specified, the command restores only from the key management server at the specified
IP address. If not specified, the command restores from all available key management servers.
[-key-id <key id>] - Key ID
If this parameter is specified, the command restores only the specified key IDs.
[-key-tag <text>] - Key Tag
This parameter specifies the value associated with the key ID pair at the time of their creation. If
specified, restore only key ID pairs associated with the specified key tag. If not specified, all key ID pairs
for the cluster are retrieved.
[-count <integer>] - (DEPRECATED)-Key Server's total Key Count
The value count is deprecated and may be removed in a future release of Data ONTAP. This parameter
specifies the total number of keys stored in the key management servers. If this parameter is specified,
then the command displays only the key IDs retrieved from the key management servers whose total
key count matches the specified count number.
[-key-manager-server-status {available|not-responding|unknown}] - Command Error Code
This parameter specifies the connectivity status of the key management server. If you specify this
parameter, the command displays only the key IDs retrieved from key management servers with the
specified status.
● Examples
The following command restores keys that are currently on a key server but are not stored within the key
tables on the cluster:
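A representative invocation (illustrative; the prompt is shown for context, and with no parameters the command restores keys on all nodes from all available key management servers) is:

cluster1::> security key-manager restore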
Node: node1
Key Manager: 10.0.0.10
Server Status: available
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
000000000000000002000000000005004d03aca5b72cd20b2f83eae1531c605e0000000000000000
Node: node2
Key Manager: 10.0.0.10
Server Status: available
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
000000000000000002000000000005004d03aca5b72cd20b2f83eae1531c605e0000000000000000
The following command loads any keys with the key tag "node1" that exist on the key server at IP address
10.0.0.10 but are not currently stored in the key tables of the nodes in the cluster. In this example, a key
with that key tag was missing from two nodes in the cluster:
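A representative invocation (illustrative), using the -address and -key-tag parameters documented above, is:

cluster1::> security key-manager restore -address 10.0.0.10 -key-tag node1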
Node: node1
Key Manager: 10.0.0.10
Server Status: available
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
Node: node2
Key Manager: 10.0.0.10
Server Status: available
Key IDs
-------------------------------------------------------
000000000000000002000000000001001d71f3b2468d7e16a6e6972d3e6645200000000000000000
Related reference
security key-manager external restore (page 624)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and might be removed in a future release. To set up an external key
manager, use security key-manager external enable; to set up the Onboard Key Manager,
use security key-manager onboard enable instead.
The security key-manager setup command enables you to configure key management. Data ONTAP
supports two mutually exclusive key management methods: external, via one or more Key Management
Interoperability Protocol (KMIP) servers, or internal, via the Onboard Key Manager. This command is used to
configure an external or internal key manager. When configuring an external key management server, this
command records networking information on all nodes that is used during the boot process to retrieve keys
needed for booting from the KMIP servers. For the Onboard Key Manager, this command prompts you to
configure a passphrase to protect internal keys in encrypted form.
This command can also be used to refresh missing onboard keys. For example, if you add a node to a cluster
that has the Onboard Key Manager configured, you will run this command to refresh the missing keys.
For the Onboard Key Manager in a MetroCluster configuration, if the security key-manager update-
passphrase command is used to update the passphrase on one site, then run the security key-
manager setup command with the new passphrase on the partner site before proceeding with any key-
manager operations.
● Parameters
[-node <nodename>] - Node Name
This parameter is used only with the Onboard Key Manager when a refresh operation is required (see
command description). This parameter is ignored when configuring external key management and
during the initial setup of the Onboard Key Manager.
[-cc-mode-enabled {yes|no}] - Enable Common Criteria Mode?
When configuring the Onboard Key Manager, this parameter is used to specify that Common Criteria
(CC) mode should be enabled. When CC mode is enabled, you will be required to provide a cluster
passphrase that is between 64 and 256 ASCII characters long, and you will be required to enter that
passphrase each time a node reboots.
[-sync-metrocluster-config {yes|no}] - Sync MetroCluster Configuration from Peer
When configuring the Onboard Key Manager in a MetroCluster configuration, this parameter is used to
indicate that the security key-manager setup command has been performed on the peer cluster,
and that the security key-manager setup command on this cluster should import the peer's
configuration.
● Examples
The following example creates a configuration for external key management:
Restart the key manager setup wizard with "security key-manager setup". To
accept a default or omit a question, do not enter a value.
Would you like to configure the Onboard Key Manager? {yes, no} [yes]: no
Would you like to configure the KMIP server environment? {yes, no} [yes]: yes
The following example creates a configuration for the Onboard Key Manager:
"exit" if you want to quit the key manager setup wizard. Any changes
you made before typing "exit" will be applied.
Restart the key manager setup wizard with "security key-manager setup". To
accept a default or omit a question, do not enter a value.
Would you like to configure the Onboard Key Manager? {yes, no} [yes]: yes
Enter the cluster-wide passphrase for the Onboard Key Manager. To continue the
configuration, enter the passphrase, otherwise type "exit":
Re-enter the cluster-wide passphrase:
After configuring the Onboard Key Manager, save the encrypted configuration data
in a safe location so that you can use it if you need to perform a manual recovery
operation. To view the data, use the "security key-manager backup show" command.
The following example creates a configuration for the Onboard Key Manager with Common Criteria mode
enabled:
Restart the key manager setup wizard with "security key-manager setup". To
accept a default or omit a question, do not enter a value.
Would you like to configure the Onboard Key Manager? {yes, no} [yes]: yes
Enter the cluster-wide passphrase for the Onboard Key Manager. To continue the
configuration, enter the passphrase, otherwise type "exit":
Re-enter the cluster-wide passphrase:
After configuring the Onboard Key Manager, save the encrypted configuration data
in a safe location so that you can use it if you need to perform a manual recovery
operation. To view the data, use the "security key-manager backup show" command.
Related reference
security key-manager external enable (page 621)
security key-manager onboard enable (page 642)
security key-manager update-passphrase (page 616)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release. Use security key-manager
external show instead.
This command displays the key management servers configured on the cluster. This command is not
supported when the Onboard Key Manager is enabled.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-status ]
If you specify this parameter, the command displays the status of each key management server.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the node for which you want to retrieve key management server status.
If this parameter is not specified, the key management server status is retrieved for all nodes.
[-address <IP Address>] - IP Address
If you specify this parameter, the command displays only the key management servers registered at the
specified address. Multiple key management servers may be displayed.
[-server-port <integer>] - Server TCP Port
If you specify this parameter, the command displays only key servers listening on this port.
● Examples
The following example lists all configured key management servers:
The following example lists all configured key management servers, the TCP port on which those servers are
expected to listen for incoming KMIP connections, and their server status:
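Representative invocations for these two listings (illustrative; the deprecated syntax documented above) are:

cluster1::> security key-manager show
cluster1::> security key-manager show -status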
Related reference
security key-manager external show (page 625)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the list of configured key managers.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
If you specify this parameter, then the command will list the key manager configured for the given
Vserver.
[-key-store <Key Store>] - Key Store
If you specify this parameter, then the command displays only the Vservers that have the given key-store
configured.
[-state <Key Store state>] - Key Store State
If you specify this parameter, then the command displays only the Vservers that have the given state
configured.
[-keystore-type <Key Store Type>] - Key Store Type (Azure/AWS etc)
If you specify this parameter, then the command displays only the Vservers that have the given keystore-
type configured. This parameter is used to specify a particular type of external key manager. If this
parameter is specified and -key-store is provided as "onboard", the security key-manager show-key-store
command does not return any entries.
● Examples
The following example shows all configured key managers in the cluster. In the example, the admin Vserver
has the Onboard Key Manager configured and the data Vserver "datavs1" has external key management
configured:
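A representative invocation (illustrative) is:

cluster1::> security key-manager show-key-store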
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Note
This command is deprecated and might be removed in a future release. Use security key-manager
onboard update-passphrase instead.
The security key-manager update-passphrase command provides a way to update the cluster-wide
passphrase, created initially by running the security key-manager setup command, that is used for the
Onboard Key Manager. This command prompts for the existing passphrase, and if that passphrase is correct
then the command prompts for a new passphrase.
When the security key-manager update-passphrase command is executed in a MetroCluster
configuration, then run the security key-manager setup command with the new passphrase on the
partner site before proceeding with any key-manager operations. This allows the updated passphrase to be
replicated to the partner site.
● Examples
The following example updates the cluster-wide passphrase used for the Onboard Key Manager:
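A representative invocation (illustrative; the command then prompts for the existing passphrase and the new passphrase) is:

cluster1::> security key-manager update-passphrase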
Warning: This command will reconfigure the cluster passphrase for the Onboard
Key Manager.
Do you want to continue? {y|n}: y
Related reference
security key-manager onboard update-passphrase (page 644)
security key-manager setup (page 612)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and might be removed in a future release. Use security key-manager
onboard show-backup instead.
This command displays the backup information for the Onboard Key Manager, which would be used to
recover the cluster in case of catastrophic situations. The information displayed is for the cluster as a whole
(not individual nodes). This command is not supported for an external key management configuration.
● Examples
The following example displays the Onboard Key Manager backup data for the cluster:
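A representative invocation (illustrative) is:

cluster1::> security key-manager backup show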
Related reference
security key-manager onboard show-backup (page 643)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command modifies the key management configuration options.
● Parameters
[-cc-mode-enabled {true|false}] - Enable Common Criteria Mode
This parameter modifies the configuration state of the Onboard Key Manager (OKM) Common Criteria
(CC) mode. CC mode enforces some of the policies required by the Common Criteria "Collaborative
Protection Profile for Full Drive Encryption-Authorization Acquisition" (FDE-AA cPP) and "Collaborative
Protection Profile for Full Drive Encryption-Encryption Engine" documents.
● Examples
The following command enables Common Criteria mode in the cluster:
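A representative invocation (illustrative), using the -cc-mode-enabled parameter documented above, is:

cluster1::> security key-manager config modify -cc-mode-enabled true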
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command displays the key management configuration options.
The "cc-mode-enabled" option reflects the current configuraton state for Common-Criteria (CC) mode for the
Onboard Key Manager. CC mode is an operational mode that enforces some of the policies required by the
Common Criteria "Collaborative Protection Profile for Full Drive Encryption-Authorization Acquisition" (FDE-
AA cPP) and "Collaborative Protection Profile for Full Drive Encryption-Encryption Engine" documents. The
feature can be enabled when the Onboard Key Manager is configured using the security key-manager
setup command or after the Onboard Key Manager is configured using the security key-manager
config modify command.
● Examples
The following example displays the state of all key-manager configuration options:
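A representative invocation (illustrative) is:

cluster1::> security key-manager config show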
Related reference
security key-manager setup (page 612)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command adds the key management servers of the given hosts and ports to the given Vserver's
external key manager's list of four possible key management servers. When adding key management
servers to the external key manager associated with the admin Vserver, you must run the same command
specifying the same set of key servers on the peer cluster. When adding key management servers to a data
Vserver, you can run the security key-manager external add-servers command on the active
cluster only, as the command is replicated to the peer cluster. However, you need to ensure that the key
management servers specified are reachable from both clusters. This command is not supported if external
key management is not enabled for the Vserver.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which to add the key management servers.
-key-servers <Hostname and Port>, ... - External Key Management Servers
Use this parameter to specify the list of additional key management servers that the external key
manager uses to store keys.
● Examples
The following example adds two key management servers to the list of servers used by the external key
manager for Vserver cluster-1. The first key management server's hostname is keyserver1.local and is
listening on the default port 5696, and the second key management server's IP is 10.0.0.20 and is listening
on port 15696:
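A representative invocation (illustrative) is:

cluster1::> security key-manager external add-servers -vserver cluster-1 -key-servers keyserver1.local,10.0.0.20:15696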
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command disables the external key manager associated with the given Vserver. If the key manager is in
use by ONTAP, you cannot disable it. When disabling the external key manager associated with the admin
Vserver, you must run the same command on the peer cluster. When disabling the external key manager for
a data Vserver, you can run the security key-manager external disable command on the active
cluster only, as the command is replicated on the peer cluster. This command is not supported when the
Onboard Key Manager is enabled for the given Vserver.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the external key manager is to be disabled.
● Examples
The following example removes the external key manager for Vserver cluster-1:
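A representative invocation (illustrative) is:

cluster1::> security key-manager external disable -vserver cluster-1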
Warning: This command will permanently delete the external key management
configuration for Vserver "cluster-1".
Do you want to continue? {y|n}: y
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command enables the external key manager associated with the given Vserver. This command is not
supported when a key manager for the given Vserver is already enabled. When enabling the external key
manager associated with the admin Vserver, you must run the same command specifying the same set of
key servers on the peer cluster. When enabling the external key manager for a data Vserver, you can run the
security key-manager external enable command on the active cluster only, as the configuration will
be replicated on the peer cluster. However, you must ensure that the key management servers specified in
the security key-manager external enable command are reachable from both clusters.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the external key manager is to be enabled.
-key-servers <Hostname and Port>, ... - List of External Key Management Servers
Use this parameter to specify the list of up to four key management servers that the external key
manager uses to store keys.
-client-cert <text> - Name of the Client Certificate
Use this parameter to specify the unique name of the client certificate that the key management servers
use to ensure the identity of Data ONTAP.
-server-ca-certs <text>, ... - Names of the Server CA Certificates
Use this parameter to specify the unique names of the server-ca certificates that Data ONTAP uses to ensure
the identity of the key management servers.
● Examples
The following example enables the external key manager for Vserver cluster-1. The command includes three
key management servers. The first key server's hostname is ks1.local and is listening on port 15696. The
second key server's IP address is 10.0.0.10 and is listening on the default port 5696. The third key server's
IPv6 address is fd20:8b1e:b255:814e:32bd:f35c:832c:5a09, and is listening on port 1234.
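A representative invocation (illustrative; the certificate name ClientCert is a placeholder) is:

cluster1::> security key-manager external enable -vserver cluster-1 -key-servers ks1.local:15696,10.0.0.10,fd20:8b1e:b255:814e:32bd:f35c:832c:5a09:1234 -client-cert ClientCert -server-ca-certs ServerCaCert1,ServerCaCert2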
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command modifies the external key manager configuration associated with the given Vserver. When
modifying the external key manager configuration associated with the admin Vserver, you must run the
same command specifying the same parameters on the peer cluster. When modifying the external key
manager configuration associated with a data Vserver, you can run the security key-manager
external modify command on the active cluster only as the configuration modifications are replicated on
the peer cluster. This command is not supported when external key management is not enabled for the
given Vserver.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which the key manager to be modified is located.
[-client-cert <text>] - Name of the Client Certificate
Use this parameter to modify the name of the client certificate that the key management servers use to
ensure the identity of Data ONTAP. If the keys of the new certificate do not match the keys of the
existing certificate, or if the TLS connectivity with key-management servers fails with the new certificate,
the operation fails. Running this command in the diagnostic privilege mode ignores failures and allows
the command to complete.
[-server-ca-certs <text>, ...] - Names of the Server CA Certificates
Use this parameter to modify the names of server-ca certificates that Data ONTAP uses to ensure the
identity of the key management servers. Note that the list provided completely replaces the existing list
of certificates. If the TLS connectivity with key-management servers fails with the new list of server-ca
certificates, the operation fails. Running this command in the diagnostic privilege mode ignores failures
and allows the command to complete.
● Examples
The following example updates the client certificate used with the key management servers:
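A representative invocation (illustrative; the certificate name NewClientCert is a placeholder) is:

cluster1::> security key-manager external modify -vserver cluster-1 -client-cert NewClientCert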
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command modifies configuration information for configured key management servers. When modifying
a key management server from the external key manager associated with the admin Vserver, you must run
the same command specifying the same set of parameters on the peer cluster. When modifying a key
management server from a data Vserver, you can run the security key-manager external modify-
server command on the active cluster only as the command is replicated on the peer cluster. However, if
the password associated with a key management server is modified, then you must run the security
key-manager external modify-server command specifying the same password on the peer cluster as
the password is not replicated between clusters. This command is supported only when external key
manager has been enabled for the given Vserver.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver on which to modify the key management server configuration.
-key-server <Hostname and Port> - External Key Server
Use this parameter to specify the key management server for which the command modifies the
configuration.
[-timeout <integer>] - Key Server I/O Timeout
Use this parameter to specify the I/O timeout, in seconds, for the selected key management server.
[-username <text>] - Authentication User Name
Use this parameter to specify the username with which Data ONTAP authenticates with the key
management server.
● Examples
The following example modifies the I/O timeout to 45 seconds for Vserver cluster-1, key server
keyserver1.local:
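A representative invocation (illustrative) is:

cluster1::> security key-manager external modify-server -vserver cluster-1 -key-server keyserver1.local -timeout 45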
The following example modifies the username and passphrase used to authenticate with key server
keyserver1.local:
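A representative invocation (illustrative; the username is a placeholder, and the command prompts for the passphrase) is:

cluster1::> security key-manager external modify-server -vserver cluster-1 -key-server keyserver1.local -username datavsuser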
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command removes the key management servers at the given hosts and ports from the given Vserver's
external key manager's list of key management servers. If any of the specified key management servers is
the sole storage location for any key in use by Data ONTAP, that key server cannot be removed. When
removing key management servers from the external key manager associated with the admin
Vserver, you must run the same command specifying the same set of key servers on the peer cluster. When
removing key management servers from a data Vserver, you can run the security key-manager
external remove-servers command on the active cluster only, as the command is replicated on the
peer cluster. This command is not supported when external key management is not enabled for the given
Vserver.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver from which the key management servers are to be removed.
● Examples
The following example removes the key management server keyserver1.local, listening on the default port
of 5696, and the key management server at IP 10.0.0.20, listening on port 15696:
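A representative invocation (illustrative; the -key-servers parameter is assumed from the command's usage, as it is not listed in the parameter section above) is:

cluster1::> security key-manager external remove-servers -vserver cluster-1 -key-servers keyserver1.local,10.0.0.20:15696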
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command retrieves and restores any current unrestored keys associated with the storage controller from
the specified key management servers. When restoring keys from the external key manager associated with
the admin Vserver, you must run the same command on the peer cluster. When restoring keys from a data
Vserver, you can run the security key-manager external restore command on the active cluster
only as the command is replicated on the peer cluster. This command is not supported when external key
management has not been enabled for the Vserver.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter specifies the name of the node that will load unrestored key IDs into its internal key
table. If not specified, all nodes retrieve unrestored keys into their internal key table.
[-vserver <vserver name>] - Vserver Name
This parameter specifies the Vserver for which to restore the keys. If not specified, this command restores
keys for all Vservers.
[-key-server <Hostname and Port>] - Key Server
If this parameter is specified, this command restores keys from the key management server identified by
the host and port. If not specified, this command restores keys from all available key management
servers.
[-key-id <Hex String>] - Key ID
If you specify this parameter, then the command restores only the key IDs that match the specified
value.
[-key-tag <text>] - Key Tag
If you specify this parameter, then the command restores only the key IDs that match the specified key-
tag. The key-tag for Volume Encryption Keys (VEKs) is set to the UUID of the encrypted volume. If not
specified, all key ID pairs for any key tags are restored.
● Examples
The following command restores keys that are currently on a key server but are not stored within the key
tables on the cluster. One key is missing for Vserver cluster-1 on node1, and another key is missing for
Vserver datavs on node1 and node2:
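A representative invocation (illustrative; with no parameters, keys are restored for all Vservers on all nodes) is:

cluster1::> security key-manager external restore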
Node: node1
Vserver: cluster-1
Key Server: 10.0.0.1:5696
Key ID
--------------------------------------------------------------------------------
00000000000000000200000000000100a04fc7303d9abd1e0f00896192fa9c3f0000000000000000
Node: node1
Vserver: datavs
Key Server: tenant.keyserver:5696
Key ID
--------------------------------------------------------------------------------
00000000000000000200000000000400a05a7c294a7abc1e0911897132f49c380000000000000000
Node: node2
Vserver: datavs
Key Server: tenant.keyserver:5696
Key ID
--------------------------------------------------------------------------------
00000000000000000200000000000400a05a7c294a7abc1e0911897132f49c380000000000000000
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the external key management servers configured on the cluster for a given Vserver.
No entries are displayed when external key management is not enabled for the given Vserver.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If you specify this parameter, then the command displays the key management servers for only the
given Vserver.
● Examples
The following example lists all configured key management servers for all Vservers:
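A representative invocation (illustrative) is:

cluster1::> security key-manager external show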
Vserver: datavs
Client Certificate: datavsClientCert
Server CA Certificates: datavsServerCaCert1, datavsServerCaCert2
Key Server
--------------------------------------------
keyserver.datavs.com:5696
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server
--------------------------------------------
10.0.0.10:1234
fd20:8b1e:b255:814e:32bd:f35c:832c:5a09:1234
ks1.local:1234
4 entries were displayed.
The following example lists all configured key management servers with more detail, including timeouts
and usernames:
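A representative invocation (illustrative; the -instance parameter documented above is one way to produce the detailed view) is:

cluster1::> security key-manager external show -instance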
Vserver: datavs
Client Certificate: datavsClientCert
Server CA Certificates: datavsServerCaCert1, datavsServerCaCert2
Key Server: keyserver.datavs.com:5696
Timeout: 25
Username: datavsuser
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server: 10.0.0.10:1234
Timeout: 25
Username:
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server: fd20:8b1e:b255:814e:32bd:f35c:832c:5a09:1234
Timeout: 25
Username:
Vserver: cluster-1
Client Certificate: AdminClientCert
Server CA Certificates: AdminServerCaCert
Key Server: ks1.local:1234
Timeout: 45
Username:
4 entries were displayed.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays connectivity information between Data ONTAP nodes and configured external key
management servers.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
If you specify this parameter, then the command displays the connectivity information for only the given
node.
[-vserver <vserver name>] - Vserver Name
If you specify this parameter, then the command displays the key management servers for only the
given Vserver.
[-key-server <Hostname and Port>] - Key Server
If you specify this parameter, then the command displays the connectivity information for only the given
key management server with the given name listening on the given port.
[-key-server-status {available|not-responding|unknown}] - Key Server Status
If you specify this parameter, then the command displays the connectivity information for only the key
management servers with the given status.
[-status-details <text>] - Key Server Status Details
If you specify this parameter, then the command displays the connectivity information for only the key
management servers with the given status details.
● Examples
The following example lists all configured key management servers for all Vservers:
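A representative invocation (illustrative) is:

cluster1::> security key-manager external show-status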
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command disables the Azure Key Vault (AKV) associated with the given Vserver. If the AKV is in use by
ONTAP, you cannot disable it. This command is not supported if AKV has not been enabled for the Vserver.
● Parameters
-vserver <Vserver Name> - Vserver
Use this parameter to specify the Vserver on which the AKV is to be disabled.
● Examples
The following example disables the AKV for Vserver v1.
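A representative invocation (illustrative; the command path security key-manager external azure disable is assumed, as the full path is not shown in this section):

cluster1::> security key-manager external azure disable -vserver v1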
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command enables the Azure Key Vault (AKV) associated with the given Vserver. An Azure application
and AKV must be deployed on the Azure portal prior to running this command. This command is not
supported for the admin Vserver, or if a key manager for the given data Vserver is already enabled. This
command is also not supported in a MetroCluster environment.
● Parameters
-vserver <Vserver Name> - Vserver
Use this parameter to specify the Vserver on which the AKV is to be enabled.
-client-id <text> - Application (Client) ID of Deployed Azure Application
Use this parameter to specify the client (application) ID of the deployed Azure application.
-tenant-id <text> - Directory (Tenant) ID of Deployed Azure Application
Use this parameter to specify the tenant (directory) ID of the deployed Azure application.
-name {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - Deployed Azure Key Vault DNS Name
Use this parameter to specify the DNS name of the deployed AKV.
-key-id {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - Key Identifier of AKV Key
Encryption Key
Use this parameter to specify the key identifier of the AKV Key Encryption Key (KEK).
● Examples
The following example enables the AKV for Vserver v1. An Azure application with client-id "4a0f9c98-
c5aa-4275-abe3-2780cf2801c3", tenant-id "8e21f23a-10b9-46fb-9d50-720ef604be98", client secret
"abcdef" and an AKV with DNS name "https://akv-keyvault.vault.azure.net" is deployed on the Azure portal.
An AKV KEK with DNS name "https://akv-keyvault.vault.azure.net/keys/key1/
a8e619fd8f234db3b0b95c59540e2a74" is created on the Azure portal for the AKV.
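A representative invocation (illustrative; the command path security key-manager external azure enable is assumed, and the command prompts for the client secret):

cluster1::> security key-manager external azure enable -vserver v1 -client-id 4a0f9c98-c5aa-4275-abe3-2780cf2801c3 -tenant-id 8e21f23a-10b9-46fb-9d50-720ef604be98 -name https://akv-keyvault.vault.azure.net -key-id https://akv-keyvault.vault.azure.net/keys/key1/a8e619fd8f234db3b0b95c59540e2a74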
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command results in the key hierarchy being protected by the user designated AKV key encryption key
(KEK). Prior to running this command, the user should have already made the necessary change on the
Azure portal to use a new KEK for their key vault. The key-id used in this command is the key ID associated
with the user's new AKV KEK. Upon successful completion of this command, the internal keys for the given
Vserver will be protected by the new AKV KEK.
● Parameters
-vserver <Vserver Name> - Vserver
This parameter specifies the Vserver for which ONTAP should rekey the AKV KEK.
-key-id {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - Key Identifier of a new AKV Key
Encryption Key
This parameter specifies the key id of the new AKV KEK that should be used by ONTAP for the provided
Vserver.
● Examples
The following command rekeys AKV KEK for data Vserver v1 using a new key-id key2.
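A sketch of the rekey invocation at the advanced-level prompt. Only the key name "key2" is given in the text, so the key version segment of the key-id URL is left as a placeholder:

```
cluster-1::*> security key-manager external azure rekey-external -vserver v1 -key-id https://akv-keyvault.vault.azure.net/keys/key2/<version-id>
```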
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command rekeys the internal Vserver key hierarchy by changing the top-level internal key encryption
key (KEK). Upon successful completion of the command, all keys in the Vserver key hierarchy will be
protected by the new top-level KEK.
● Parameters
-vserver <Vserver Name> - Vserver
This parameter specifies the Vserver for which ONTAP should rekey the SVM KEK.
● Examples
The following command rekeys the SVM KEK for data Vserver v1.
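A sketch of the invocation, assuming the advanced-level prompt:

```
cluster-1::*> security key-manager external azure rekey-internal -vserver v1
```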
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command retrieves and restores any current unrestored keys associated with the given Vserver to the
node's internal key tables. This command is not supported when AKV has not been enabled for the Vserver.
● Parameters
-vserver <Vserver Name> - Vserver
This parameter specifies the Vserver to which the missing keys will be restored.
● Examples
The following command restores missing keys for the data Vserver v1 (which has an AKV configuration) to the
internal key tables on the cluster.
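A sketch of the restore invocation (admin-level prompt assumed):

```
cluster-1::> security key-manager external azure restore -vserver v1
```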
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the Azure Key Vault (AKV) configuration for a given Vserver.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
If you specify this parameter, then the command displays only the AKV configuration for the given
Vserver.
[-client-id <text>] - Application (Client) ID of Deployed Azure Application
If you specify this parameter, then the command displays only the AKV configuration with the given
client id.
[-tenant-id <text>] - Directory (Tenant) ID of Deployed Azure Application
If you specify this parameter, then the command displays only the AKV configuration with the given
tenant id.
[-name {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...}] - Deployed Azure Key Vault DNS
Name
If you specify this parameter, then the command displays only the AKV configuration with the given key
vault name.
[-state {available|not-responding|unknown}] - Azure Key Vault Cluster State
If you specify this parameter, then the command displays only the AKV configuration with the given
state. The state can be available, not-responding, or unknown.
[-key-id {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...}] - Key Identifier of AKV Key
Encryption Key
If you specify this parameter, then the command displays only the AKV configuration with the given key
id.
[-unavailable-nodes <text>] - Names of Unavailable Nodes
If you specify this parameter, then the command displays only the AKV configuration with the given
unavailable-nodes.
● Examples
The following example lists all Vservers with AKV configuration.
Name State
------------------------------------------ -------
https://akv-keyvault.vault.azure.net unknown
Unavailable Nodes: node1
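A listing like the one above could be produced by an invocation such as (prompt assumed):

```
cluster-1::> security key-manager external azure show
```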
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command provides a way to update the client secret that is used for the Azure Key Vault (AKV)
configured for the given Vserver. The command is initially set by running the security key-manager
external azure enable command. This command is not supported if AKV has not been enabled for the
Vserver.
● Parameters
-vserver <Vserver Name> - Vserver
Use this parameter to specify the Vserver for which the AKV client secret is to be updated.
● Examples
The following example updates the AKV client secret for the data Vserver v1.
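A sketch of the invocation at the advanced-level prompt; the new client secret is entered interactively when the command prompts for it:

```
cluster-1::*> security key-manager external azure update-client-secret -vserver v1
```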
Related reference
security key-manager external azure enable (page 629)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command enables cluster administrators to modify the IP address and route information that the
external key manager uses at boot time to restore keys from external key servers.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to modify information on the node that you specify.
-address-type {ipv4|ipv6|ipv6z} - Address Type
Use this parameter to modify information for the address-type that you specify.
[-address <IP Address>] - Local Interface Address
Use this parameter to modify the IP address that the system will use at boot time to restore keys from
external key servers. This parameter implies -override-default true.
{ [-netmask <IP Address>] - Network Mask
Use this parameter to modify the IP netmask that the system will use at boot time to restore keys from
external key servers. This parameter can be used only with address-type ipv4. This parameter implies -override-default true.
| [-netmask-length <integer>]} - Bits in Network Mask
Use this parameter to modify the IP netmask length that the system will use at boot time to restore keys
from external key servers. This parameter implies -override-default true.
[-gateway <IP Address>] - Gateway
Use this parameter to modify the IP gateway that the system will use at boot time to restore keys from
external key servers. This parameter implies -override-default true.
[-port <Port Name>] - Network Port
Use this parameter to modify the port that the system will use at boot time to restore keys from external
key servers. The value that you specify cannot be a vlan or ifgrp port. This parameter implies -override-default true.
[-override-default {true|false}] - Override Default Setting?
Use this parameter to modify the system's selection of boot time IP address and route information.
When this value is false, the system will use the information associated with a node management LIF.
When this value is true, then the administrator has chosen to override the defaults.
● Examples
The following shows how to modify the port used by node "node2" at boot time to restore keys from external
IPv4 key servers. In the example, IPv6 is not enabled in the cluster, so the -address-type parameter
defaults to ipv4.
cluster-1::*> security key-manager external boot-interfaces modify -node node2 -port e0d
The following example shows how to modify the IP address and gateway parameters used by node "node1"
at boot time to restore keys from external IPv6 key servers.
cluster-1::*> security key-manager external boot-interfaces modify -node node1 -address-type ipv6 -addr
ess fd20:8b1e:b255:814e:749e:11a3:3bff:5820 -gateway fd20:8b1e:b255:814e::1
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command enables cluster administrators to view the IP address and route information that the external
key manager uses at boot time to restore keys from external key servers.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about boot-time IP address and route information for the
node that you specify.
[-address-type {ipv4|ipv6|ipv6z}] - Address Type
Use this parameter to display information only about boot-time IP address and route information for the
address-type that you specify.
[-address <IP Address>] - Local Interface Address
Use this parameter to display information only about boot-time IP address and route information for the
IP address that you specify.
[-netmask <IP Address>] - Network Mask
Use this parameter to display information only about boot-time IP address and route information for the
network mask that you specify.
[-netmask-length <integer>] - Bits in Network Mask
Use this parameter to display information only about boot-time IP address and route information for the
network mask length that you specify.
[-gateway <IP Address>] - Gateway
Use this parameter to display information only about boot-time IP address and route information for the
gateway that you specify.
[-port <Port Name>] - Network Port
Use this parameter to display information only about boot-time IP address and route information for the
port that you specify.
[-override-default {true|false}] - Override Default Setting?
Use this parameter to display information only about boot-time IP address and route information with
the override-default setting that you specify.
● Examples
The following example shows how to display the IP address and route information that the external key
manager uses at boot time to restore keys. In the example, IPv6 is not enabled in the cluster and, as a
result, the command displays information for only the IPv4 address-type. The override-default value is false
for all rows, which indicates that the system automatically configured the values based on the node
management LIF configuration on the nodes.
The following example shows how to display the IP address and route information that the external key
manager uses at boot time to restore keys. In the example, IPv6 is enabled in the cluster and, as a result,
the command displays information for both the IPv4 and IPv6 address-types. The override-default value is
false for most rows, which indicates that the system automatically configured the values based on the node
management LIF configuration on the nodes. The override-default value for node1 and address-type ipv4 is
true, which indicates an administrator has used the security key-manager external boot-interfaces modify command to override one or more fields, and that the values may differ from the
corresponding node management LIF.
Related reference
security key-manager external boot-interfaces modify (page 633)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command creates a new authentication key (AK) and stores it on the admin Vserver's key
management servers. The command fails if the configured key management servers are already storing
more than 256 AKs. If this command fails because there are more than 256 AKs in the cluster, delete unused
keys on the Vserver's key management servers and retry the command. This command is not supported
when external key management is not enabled for the admin Vserver.
● Parameters
[-key-tag <text>] - Key Tag
This parameter specifies the key tag to associate with the new authentication key (AK). The default
value is the node name. This parameter can be used to help identify created authentication keys (AKs).
For example, the security key-manager key query command's key-tag parameter can be used to
query for a specific key-tag value.
[-prompt-for-key {true|false}] - Prompt for Authentication Passphrase
If you specify this parameter as true, then the command prompts you to enter an authentication
passphrase manually instead of generating it automatically. For security reasons, the authentication
passphrase you entered is not displayed at the command prompt. You must enter the authentication
passphrase a second time for verification. To avoid errors, copy and paste authentication passphrases
electronically instead of entering them manually. Data ONTAP saves the resulting authentication
key/key ID pair automatically on the configured key management servers.
● Examples
The following example creates an authentication key with the node name as the default key-tag value:
The following example creates an authentication key with a user-specified authentication passphrase:
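Sketches of the two invocations described above (admin-level prompt assumed). The second form prompts twice for the authentication passphrase, which is not echoed:

```
cluster-1::> security key-manager key create
cluster-1::> security key-manager key create -prompt-for-key true
```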
Related reference
security key-manager key query (page 637)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command removes an authentication key from the configured key management servers on the admin
Vserver. The command fails if the given key is currently in use by Data ONTAP. This command is not
supported when external key management is not enabled for the admin Vserver.
● Parameters
-key-id <Hex String> - Authentication Key ID
Use this parameter to specify the key ID of the key that you want to remove.
● Examples
The following example deletes an authentication key:
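A sketch of the invocation at the advanced-level prompt, with the key ID left as a placeholder since the example does not name one:

```
cluster-1::*> security key-manager key delete -key-id <hex-key-id>
```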
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command provides a mechanism to migrate the existing keys of a data Vserver from the admin
Vserver's key manager to their own key manager or vice versa. The keys stay the same and the data is not
rekeyed; only the keys are migrated from one Vserver's key manager to another. After a successful migration
to the new key manager, the data Vserver keys are deleted from the previous key manager.
● Parameters
-from-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver whose key manager the keys are migrated from.
-to-vserver <vserver name> - Vserver Name
Use this parameter to specify the name of the Vserver whose key manager the keys are migrated to.
● Examples
The following example migrates the keys of "datavs" data Vserver from "cluster-1" admin Vserver's key
manager to "datavs" data Vserver's key manager:
The following example migrates the keys of "datavs" data Vserver from "datavs" data Vserver's key manager
to "cluster-1" admin Vserver's key manager:
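Sketches of both migration directions using the Vserver names from the text (advanced-level prompt assumed):

```
cluster-1::*> security key-manager key migrate -from-vserver cluster-1 -to-vserver datavs
cluster-1::*> security key-manager key migrate -from-vserver datavs -to-vserver cluster-1
```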
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the IDs of the keys that are stored in the configured key managers. This command
does not update the key tables on the node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to specify the name of the node that queries the specified key management servers.
If this parameter is not specified, then all nodes query the specified key management servers.
[-vserver <vserver name>] - Vserver Name
Use this parameter to specify the Vserver for which to list the keys.
[-key-server <Hostname and Port>] - Key Server
This parameter specifies the host and port of the key management server that you want to query. This
parameter is used only with external key managers.
[-key-id <Hex String>] - Key Identifier
If you specify this parameter, then the command displays only the key IDs that match the specified
value.
[-key-tag <text>] - Key Tag
If you specify this parameter, then the command displays only the key IDs that match the specified
value. The key-tag for Volume Encryption Keys (VEKs) is set to the UUID of the encrypted volume.
[-key-type <Key Usage Type>] - Key Type
If you specify this parameter, then the command displays only the key IDs that match the specified
value.
[-restored {true|false}] - Restored
This parameter specifies whether the key corresponding to the displayed key ID is present in the
specified node's internal key table. If you specify true for this parameter, then the command displays the
key IDs of only those keys that are present in the system's internal key table. If you specify false for this
parameter, then the command displays the key IDs of only those keys that are not present in the
system's internal key table.
[-key-store <Key Store>] - Key Store
Use this parameter to specify the key manager type from which to list the keys.
[-key-user <vserver name>] - Key User
If you specify this parameter, then the command displays only the key IDs that are used by the specified
Vserver.
[-key-manager <text>] - Key Manager
This parameter specifies the identity of the key manager. For external key managers, this is the
host and port of the key server; in other cases it is the name of the corresponding key
manager.
[-key-store-type <Key Store Type>] - Key Store Type
If you specify this parameter, then the command displays only the key IDs that are used by the specified
key manager type.
● Examples
The following example shows all of the keys on all configured key servers, and whether or not those keys
have been restored for all nodes in the cluster:
Node: node1
Vserver: cluster-1
Key Manager: onboard
Node: node1
Vserver: datavs
Key Manager: keyserver.datavs.com:5965
Key Manager Type: KMIP
Node: node2
Vserver: cluster-1
Key Manager: onboard
Key Manager Type: OKM
Node: node2
Vserver: datavs
Key Manager: keyserver.datavs.com:5965
Key Manager Type: KMIP
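Output like the listing above could come from a plain query with no filtering parameters (prompt assumed):

```
cluster-1::> security key-manager key query
```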
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and might be removed in a future release. Use security key-manager
key query instead.
This command displays the key IDs of the authentication keys (NSE-AK) and SVM keys (SVM-KEK) that are
available in Onboard Key Manager. This command is not supported for an external key management
configuration.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
If this parameter is specified, the command displays additional details about the key IDs.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information only about key IDs that are located on
the specified storage system.
[-key-store <Key Store>] - Key Store
If this parameter is specified, the command displays information only about key IDs that are managed
by the specified key management. For example, use onboard for the Onboard Key Manager.
[-key-id <text>] - Key Identifier
If this parameter is specified, the command displays information only about the specified key IDs.
[-key-tag <text>] - Key Tag
If this parameter is specified, the command displays information only about key IDs that have the
specified key tags.
[-key-location <text>] - Key Location
If this parameter is specified, the command displays information only about key IDs that are located on
the specified key location. For example, use local-cluster for the Onboard Key Manager.
[-used-by <Key Usage Type>] - Used By
If this parameter is specified, the command displays information only about key IDs that are associated
with the specified application usage of the keys. For example, "NSE-AK" would display key IDs only for
NSE drives.
[-restored {yes|no}] - Restored
If this parameter is specified, the command displays information only about key IDs that have the
specified value of restored keys. If restored is yes, then the corresponding key is available (normal). If
restored is no, use the security key-manager setup command to restore the key. See the man page
for security key-manager setup for details.
● Examples
The following example shows all keys stored in the Onboard Key Manager:
Node: node1
Key Store: onboard
Used By
--------
NSE-AK
Key ID: 000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
NSE-AK
Key ID: 000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
SVM-KEK
Key ID: 00000000000000000200000000000a0046df92864d4cece662b93beb7f5366100000000000000000
Node: node2
Key Store: onboard
Used By
--------
NSE-AK
Key ID: 000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
NSE-AK
Key ID: 000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
SVM-KEK
Key ID: 00000000000000000200000000000a0046df92864d4cece662b93beb7f5366100000000000000000
6 entries were displayed.
The following example shows a detailed view of all keys stored in the Onboard Key Manager:
Node: node1
Key Store: onboard
Key ID Key Tag Used By Stored In Restored
------ --------------- ---------- ------------------------------------ --------
000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
- NSE-AK local-cluster yes
000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
- NSE-AK local-cluster yes
00000000000000000200000000000a0046df92864d4cece662b93beb7f5366100000000000000000
- SVM-KEK local-cluster yes
Node: node2
Key Store: onboard
Key ID Key Tag Used By Stored In Restored
------ --------------- ---------- ------------------------------------ --------
000000000000000002000000000001001bc4c708e2a89a312e14b6ce6d4d49d40000000000000000
- NSE-AK local-cluster yes
000000000000000002000000000001005e89099721f8817e65e3aeb68be1bfca0000000000000000
- NSE-AK local-cluster yes
00000000000000000200000000000a0046df92864d4cece662b93beb7f5366100000000000000000
- SVM-KEK local-cluster yes
6 entries were displayed.
Related reference
security key-manager setup (page 612)
security key-manager key query (page 637)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command disables the Onboard Key Manager associated with the admin Vserver and permanently
deletes the Onboard Key Manager configuration associated with the admin Vserver.
● Examples
The following example disables the Onboard Key Manager for the admin Vserver:
Warning: This command will permanently delete all keys from Onboard Key
Manager.
Do you want to continue? {y|n}: y
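The warning prompt above follows an invocation like the following, at the advanced-level prompt the command requires:

```
cluster-1::*> security key-manager onboard disable
```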
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command enables the Onboard Key Manager for the admin Vserver.
● Parameters
[-cc-mode-enabled {yes|no}] - Enable Common Criteria Mode?
Use this parameter to specify whether Common Criteria (CC) mode should be enabled. When
CC mode is enabled, you are required to provide a cluster passphrase that is between 64 and 256 ASCII
characters long, and you are required to enter that passphrase each time a node reboots. CC mode cannot
be enabled in a MetroCluster configuration.
● Examples
The following example enables the Onboard Key Manager for the admin Vserver cluster-1:
After configuring the Onboard Key Manager, save the encrypted configuration data in a safe location so
that you can use it if you need to perform a manual recovery operation. To view the data, use the "security key-manager onboard show-backup" command.
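A sketch of the enable invocation (admin-level prompt assumed). The command interactively prompts for the cluster passphrase, which is entered twice and not echoed:

```
cluster-1::> security key-manager onboard enable
```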
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the backup information for the Onboard Key Manager for the admin Vserver, which
can be used to recover the cluster in case of catastrophic situations. The information displayed is for the
cluster as a whole (not individual nodes).
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example displays the Onboard Key Manager backup data for the admin Vserver:
b2IAAQAAAAMAAAAYAQAAAAAAADA5/
ccAAAAAIgAAAAAAAAAoAAAAAAAAAEOTcR0AAAAAAAAAAAAAAAACAAAAAAAJAGr3t
JA/LRzUQRHwv+1aWvAAAAAAAAAAACIAAAAAAAAAKAAAAAAAAACI8z/
bAAAAAAAAAAAAAAAAAgAAAAAAAQAbxMcI4qiaMS4Uts5tTUnUAAAAAAAAAAAkAAA
AAAAAAIAAAAAAAAAAqwxTcwAAAACkiwBAI3YeeV3jMFg5SmyjLSgoK/
qc8FAmMMcrRXY6uriulnL0WPB/
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAE5ldEFwcCBL
ZXkgQmxvYgABAAAAAwAAABgBAAAAAAAA1cNLLwAAAAAiAAAAAAAAACgAAAAAAAAA
Q5NxHQAAAAAAAAAAAAAAAAIAAAAAAAkAave0kD8tHNRBEfC/
7Vpa8AAAAAAAAAAAIgAAAAAAAAAoAAAAAAAAAJ4/
cQsAAAAAAAAAAAAAAAACAAAAAAABAF6JCZch+IF+ZeOutovhv8oAAAAAAAAAACQA
AAAAAAAAgAAAAAAAAAAN3Zq7AAAAALO7qD20+H8TuGgSauEHoqAyWcLv4uA0m2rr
H4nPQM0nrDRYRa9SCv8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAA
---------------------------END BACKUP---------------------------
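The backup blob above is displayed by an invocation such as (prompt assumed):

```
cluster-1::> security key-manager onboard show-backup
```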
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command synchronizes missing onboard keys on any node in the cluster. For example, if you add a
node to a cluster that has the Onboard Key Manager configured, you should then run this command to
synchronize the keys. In a MetroCluster configuration, if the security key-manager onboard enable
command is used to enable the Onboard Key Manager on one site, then run the security key-manager
onboard sync command on the partner site. In a MetroCluster configuration, if the security key-manager onboard update-passphrase command is used to update the passphrase on one site, then run
this command with the new passphrase on the partner site before proceeding with any key management
operations.
● Examples
The following example synchronizes the Onboard Key Manager key database across all nodes in the cluster.
In a MetroCluster configuration, this command synchronizes nodes in the local site.
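A sketch of the invocation (admin-level prompt assumed); the command prompts for the cluster passphrase:

```
cluster-1::> security key-manager onboard sync
```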
Related reference
security key-manager onboard enable (page 642)
security key-manager onboard update-passphrase (page 644)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command provides a way to update the cluster-wide passphrase that is used for the Onboard Key
Manager and initially created by running the security key-manager onboard enable command. This
command prompts for the existing passphrase, and if that passphrase is correct then the command prompts
for a new passphrase. When the Onboard Key Manager is enabled for the admin Vserver, run the security
key-manager onboard show-backup command after updating the passphrase and save the output for
emergency recovery scenarios. When the security key-manager onboard update-passphrase
command is executed in a MetroCluster configuration, then run the security key-manager onboard
sync command with the new passphrase on the partner site before proceeding with any key-manager
operations. This allows the updated passphrase to be replicated to the partner site.
● Examples
The following example updates the cluster-wide passphrase used for the Onboard Key Manager:
Warning: This command will reconfigure the cluster passphrase for onboard
key management.
Do you want to continue? {y|n}: y
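The warning prompt above follows an invocation like the following, at the advanced-level prompt; the command then prompts for the existing passphrase and, if it is correct, for the new one:

```
cluster-1::*> security key-manager onboard update-passphrase
```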
Related reference
security key-manager onboard enable (page 642)
security key-manager onboard show-backup (page 643)
security key-manager onboard sync (page 644)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command verifies the backup data and the passphrase of the Onboard Key Manager for the admin
Vserver.
● Examples
The following example displays the verification of the onboard key management backup data for the admin
Vserver:
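A sketch of the invocation (admin-level prompt assumed); the command prompts for the cluster passphrase to verify against the backup data:

```
cluster-1::> security key-manager onboard verify-backup
```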
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login create command creates a login method for the management utility. A login
method consists of a user name, an application (access method), and an authentication method. A user
name can be associated with multiple applications. It can optionally include an access-control role name. If
an Active Directory, LDAP, or NIS group name is used, then the login method gives access to users belonging
to the specified group. If the user is a member of multiple groups provisioned in the security login table,
then the user will get access to a combined list of the commands authorized for the individual groups.
● Parameters
-vserver <Vserver Name> - Vserver
This specifies the Vserver name of the login method.
-user-or-group-name <text> - User Name or Group Name
This specifies the user name or Active Directory, LDAP, or NIS group name of the login method. The
Active Directory, LDAP, or NIS group name can be specified only with the domain or nsswitch
authentication method and ontapi and ssh application. If the user is a member of multiple groups
provisioned in the security login table, then the user will get access to a combined list of the commands
authorized for the individual groups.
-application <text> - Application
This specifies the application of the login method. Possible values include console, http, ontapi, rsh,
snmp, service-processor, ssh, and telnet.
Setting this parameter to service-processor grants the user access to the Service Processor (SP). Because
the SP supports only password authentication, when you set this parameter to service-processor, you
must also set the -authentication-method parameter to password. Vserver user accounts cannot access
the SP. Therefore, you cannot use the -vserver parameter when you set this parameter to service-
processor.
-authentication-method <text> - Authentication Method
This specifies the authentication method for login. Possible values include the following:
• cert - SSL certificate authentication
• community - SNMP community strings
• domain - Active Directory authentication
• nsswitch - LDAP or NIS authentication
• password - Password
• publickey - Public-key authentication
• usm - SNMP user security model
• saml - SAML authentication
[-remote-switch-ipaddress <IP Address>] - Remote Switch IP Address
This specifies the IP address of the remote switch. The remote switch could be a cluster switch monitored
by cluster switch health monitor (CSHM) or a Fibre Channel (FC) switch monitored by MetroCluster
health monitor (MCC-HM). This parameter is applicable only when the application is snmp and
authentication method is usm (SNMP user security model).
-role <text> - Role Name
This specifies an access-control role name for the login method.
[-comment <text>] - Comment Text
This specifies comment text for the user account, for example, "Guest account". The maximum length is
128 characters.
[-is-ns-switch-group {yes|no}] - Whether Ns-switch Group
This specifies whether user-or-group-name is an LDAP or NIS group. Possible values are yes or no.
Default value is no.
[-second-authentication-method {none|publickey|password|nsswitch}] - Second Authentication
Method
This specifies the authentication method for the login. It will be used as the second factor for
authentication. Possible values include the following:
• password - Password
• publickey - Public-key authentication
• nsswitch - NIS or LDAP authentication
• none - default value
● Examples
The following example illustrates how to create a login that has the user name monitor, the application ssh,
the authentication method password, and the access-control role guest for Vserver vs:
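A representative command, reconstructed from the parameters described above (the prompt and cluster name are illustrative):
cluster1::> security login create -vserver vs -user-or-group-name monitor -application ssh -authentication-method password -role guest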
The following example illustrates how to create a login that has the user name monitor, the application
ontapi, the authentication method password, and the access-control role vsadmin for Vserver vs:
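A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login create -vserver vs -user-or-group-name monitor -application ontapi -authentication-method password -role vsadmin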
The following example illustrates how to create a login that has the user name monitor, the application ssh,
the authentication method publickey, and the access-control role guest for Vserver vs:
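A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login create -vserver vs -user-or-group-name monitor -application ssh -authentication-method publickey -role guest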
The following example illustrates how to create a login that has the user name monitor, the application
http, the authentication method cert, and the access-control role admin for Vserver vs:
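A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login create -vserver vs -user-or-group-name monitor -application http -authentication-method cert -role admin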
The following example illustrates how to create a login that has the Active Directory group name adgroup in
DOMAIN1, the application ssh, the authentication method domain, and the access-control role vsadmin for
Vserver vs:
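A representative command, reconstructed from the description; the group name uses the <domainname>\<username> format noted under the domain-tunnel command:
cluster1::> security login create -vserver vs -user-or-group-name "DOMAIN1\adgroup" -application ssh -authentication-method domain -role vsadmin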
The following example illustrates how to create a login that has a group name nssgroup in the LDAP or NIS
server, the application ontapi, the authentication method nsswitch, and the access-control role vsadmin for
Vserver vs. Here is-ns-switch-group must be set to yes:
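A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login create -vserver vs -user-or-group-name nssgroup -application ontapi -authentication-method nsswitch -role vsadmin -is-ns-switch-group yes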
The following example illustrates how to create a login that has the user name monitor, the application ssh,
the authentication method password, the second authentication method publickey and the access-control
role vsadmin for Vserver vs:
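A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login create -vserver vs -user-or-group-name monitor -application ssh -authentication-method password -second-authentication-method publickey -role vsadmin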
The following example illustrates how to create a login that has the user name monitor, the application ssh,
the authentication method password, the second authentication method none and the access-control role
vsadmin for Vserver vs:
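A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login create -vserver vs -user-or-group-name monitor -application ssh -authentication-method password -second-authentication-method none -role vsadmin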
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login delete command deletes a login method.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name of the login method.
-user-or-group-name <text> - User Name or Group Name
This specifies the user name or Active Directory, LDAP, or NIS group name of the login method that is to
be deleted. A user name can be associated with multiple applications.
-application <text> - Application
This specifies the application of the login method. Possible values include console, http, ontapi, rsh,
snmp, service-processor, ssh, and telnet.
-authentication-method <text> - Authentication Method
This specifies the authentication method of the login method. Possible values include the following:
• cert - SSL certificate authentication
• community - SNMP community strings
• domain - Active Directory authentication
• nsswitch - LDAP or NIS authentication
• password - Password
• publickey - Public-key authentication
• usm - SNMP user security model
• saml - SAML authentication
[-remote-switch-ipaddress <IP Address>] - Remote Switch IP Address
This specifies the IP address of the remote switch. The remote switch could be a cluster switch monitored
by cluster switch health monitor (CSHM) or a Fibre Channel (FC) switch monitored by MetroCluster
health monitor (MCC-HM). This parameter is applicable only when the application is snmp and
authentication method is usm (SNMP user security model).
● Examples
The following example illustrates how to delete a login that has the username guest, the application ssh,
and the authentication method password for Vserver vs:
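A representative command, reconstructed from the parameters described above (prompt illustrative):
cluster1::> security login delete -vserver vs -user-or-group-name guest -application ssh -authentication-method password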
The following example illustrates how to delete a login that has the username guest, the application ontapi,
and the authentication method cert for Vserver vs:
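A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login delete -vserver vs -user-or-group-name guest -application ontapi -authentication-method cert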
The following example illustrates how to delete a login that has the Active Directory group name adgroup in
DOMAIN1, the application ssh, and the authentication method domain for Vserver vs:
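A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login delete -vserver vs -user-or-group-name "DOMAIN1\adgroup" -application ssh -authentication-method domain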
The following example illustrates how to delete a login that has a group name nssgroup in the LDAP or NIS
server, the application ontapi, and the authentication method nsswitch for Vserver vs:
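A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login delete -vserver vs -user-or-group-name nssgroup -application ontapi -authentication-method nsswitch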
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login expire-password command expires a specified user account password, forcing the
user to change the password upon next login.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver to which the user account belongs.
-username <text> - Username
This specifies the user name of the account whose password you want to expire.
[-hash-function {sha512|sha256|md5}] - Password Hash Function
This optionally specifies the password-hashing algorithm used for encrypting the passwords that you
want to expire. The supported values are as follows:
• sha512 - Secure hash algorithm (512 bits)
• sha256 - Secure hash algorithm (256 bits)
• md5 - Message digest algorithm (128 bits)
[-lock-after <integer>] - Lock User Account After N days (privilege: advanced)
This optionally specifies the number of days after which the new password hash policy will be enforced.
The enforcement locks all user accounts whose passwords are still encrypted with the hash algorithm
specified by the -hash-function parameter.
● Examples
The following command expires the password of the 'jdoe' user account which belongs to the 'vs1' Vserver.
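A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login expire-password -vserver vs1 -username jdoe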
The following command expires all user account passwords that are encrypted with the MD5 hash function.
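A representative command, reconstructed from the description; the wildcard usage mirrors the -lock-after example shown below (prompt illustrative):
cluster1::> security login expire-password -vserver * -username * -hash-function md5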
The following command expires the password of any Vserver's user account named 'jdoe' that is encrypted
with the MD5 hash function.
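A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login expire-password -vserver * -username jdoe -hash-function md5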
The following command expires the password of the 'vs1' Vserver user account named 'jdoe' that is
encrypted with the MD5 hash function.
cluster1::> security login expire-password -vserver vs1 -username jdoe -hash-function md5
The following command expires all user account passwords that are encrypted with the MD5 hash function
and enforces the new password hash policy after 180 days.
cluster1::> security login expire-password -vserver * -username * -hash-function md5 -lock-after 180
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login lock command locks a specified account, preventing it from accessing the
management interface.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver to which the user account belongs.
-username <text> - Username
This specifies the user name of the account that is to be locked.
● Examples
The following example locks a user account named 'jdoe' which belongs to the Vserver 'vs1'.
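A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login lock -vserver vs1 -username jdoe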
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login modify command modifies the access-control role name of a login method. If the
user is a member of multiple groups provisioned in the security login table, then the user will get access to a
combined list of the commands authorized for the individual groups.
● Parameters
-vserver <Vserver Name> - Vserver
This specifies the Vserver name of the login method.
-user-or-group-name <text> - User Name or Group Name
This specifies the user name, Active Directory, LDAP, or NIS group name of the login method that is to be
modified. A user name can be associated with multiple applications. If the user is a member of multiple
groups provisioned in the security login table, then the user will get access to a combined list of the
commands authorized for the individual groups.
-application <text> - Application
This specifies the application of the login method. Possible values include console, http, ontapi, rsh,
snmp, service-processor, ssh, and telnet.
-authentication-method <text> - Authentication Method
This specifies the authentication method of the login method. Possible values include the following:
• cert - SSL certificate authentication
• community - SNMP community strings
• domain - Active Directory authentication
• nsswitch - LDAP or NIS authentication
• password - Password
• publickey - Public-key authentication
• usm - SNMP user security model
• saml - SAML authentication
[-remote-switch-ipaddress <IP Address>] - Remote Switch IP Address
This specifies the IP address of the remote switch. The remote switch could be a cluster switch monitored
by cluster switch health monitor (CSHM) or a Fibre Channel (FC) switch monitored by MetroCluster
health monitor (MCC-HM). This parameter is applicable only when the application is snmp and
authentication method is usm (SNMP user security model).
[-role <text>] - Role Name
This modifies the access-control role name for the login method.
[-comment <text>] - Comment Text
This specifies comment text for the user account, for example, "Guest account". The maximum length is
128 characters.
[-is-ns-switch-group {yes|no}] - Whether Ns-switch Group
This specifies whether user-or-group-name is an LDAP or NIS group. Possible values are yes or no.
Default value is no.
[-second-authentication-method {none|publickey|password|nsswitch}] - Second Authentication
Method
This specifies the authentication method for the login method. It will be used as the second factor for
authentication. Possible values include the following:
• password - Password
• publickey - Public-key authentication
• nsswitch - NIS or LDAP authentication
• none - default value
● Examples
The following example illustrates how to modify a login method that has the user name guest, the
application ontapi, and the authentication method password to use the access-control role guest for Vserver
vs:
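A representative command, reconstructed from the parameters described above (prompt illustrative):
cluster1::> security login modify -vserver vs -user-or-group-name guest -application ontapi -authentication-method password -role guest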
The following example illustrates how to modify a login method that has the user name guest, the
application ssh, and the authentication method publickey to use the access-control role vsadmin for Vserver
vs:
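A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login modify -vserver vs -user-or-group-name guest -application ssh -authentication-method publickey -role vsadmin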
The following example illustrates how to modify a login method that has the group name nssgroup, the
application ontapi, and the authentication method nsswitch to use the access-control role readonly for
Vserver vs. Here is-ns-switch-group must be set to yes:
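A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login modify -vserver vs -user-or-group-name nssgroup -application ontapi -authentication-method nsswitch -role readonly -is-ns-switch-group yes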
The following example illustrates how to modify a login method that has the user name guest, the
application ssh, and the authentication method publickey to use the second-authentication-method
password for Vserver vs:
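A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login modify -vserver vs -user-or-group-name guest -application ssh -authentication-method publickey -second-authentication-method password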
The following example illustrates how to modify a login method to have individual authentication methods
that have the user name guest, the application ssh, and the authentication method publickey to use the
second-authentication-method none for Vserver vs:
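A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login modify -vserver vs -user-or-group-name guest -application ssh -authentication-method publickey -second-authentication-method none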
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login password command resets the password for a specified user. The command
prompts you for the user's old and new password.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name of the login method.
-username <text> - Username
This optionally specifies the user name whose password is to be changed. If you do not specify a user,
the command defaults to the user name you are currently using.
● Examples
The following command initiates a password change for the 'admin' user account of the 'vs' Vserver.
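A representative command, reconstructed from the description; the command then prompts for the old and new passwords (prompt illustrative):
cluster1::> security login password -vserver vs -username admin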
The following command initiates a password change for the 'vs' Vserver user account named 'admin'. The
new password will be encrypted by using the SHA512 password-hashing algorithm.
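A plausible invocation, assuming this command also accepts the -hash-function parameter documented for security login expire-password above (the parameter and advanced-privilege prompt shown here are assumptions):
cluster1::*> security login password -vserver vs -username admin -hash-function sha512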
The following command initiates a password change for the 'vs' Vserver user account named 'admin'. The
new password will be encrypted by using the SHA256 password-hashing algorithm.
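A plausible invocation, under the same assumption as above that the -hash-function parameter is accepted here (parameter and prompt are assumptions):
cluster1::*> security login password -vserver vs -username admin -hash-function sha256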
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
If the password of the system administrator is not encrypted with an encryption type supported by releases
earlier than ONTAP 9.0, this command prompts the administrator for a new password and encrypts it using a
supported encryption type on each cluster or at each site in a MetroCluster configuration. In a MetroCluster
configuration, this command must be run on both sites. The passwords for all other users are marked as
"expired", which causes them to be re-encrypted using a compatible encryption type. The expired passwords
are replaced with an internally generated password, so the administrator must change the passwords for all
users before those users can log in. The users are prompted to change their password upon login. This
command disables the logging of unsuccessful login attempts. The command must be run by an unlocked
user with the cluster admin role from a clustershell session on the console device. If you fail to
run this command, the revert process fails.
● Parameters
-disable-feature-set <downgrade version> - Data ONTAP Version
This parameter specifies the Data ONTAP version that introduced the password feature set.
● Examples
The following command disables the logging of unsuccessful login attempts.
Warning: This command will disable the MOTD feature that prints unsuccessful login attempts.
Do you want to continue? {y|n}: y
cluster1::*>
The following command prompts the system administrator to enter a new password and encrypts it with a
hashing algorithm supported by releases earlier than Data ONTAP 9.0.
cluster1::*>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login show command displays the following information about user login methods:
• User name
• Application (console, http, ontapi, rsh, snmp, service-processor, ssh, or telnet)
• Authentication method (cert, community, domain, nsswitch, password, publickey, usm, or saml)
• Role name
• Whether the account is locked
• Whether the user name refers to nsswitch group
• Password hash function
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Displays the login methods that match the specified Vserver name.
[-user-or-group-name <text>] - User Name or Group Name
Displays the login methods that match this parameter value. Value can be a user name or Active
Directory, LDAP, or NIS group name.
[-application <text>] - Application
Displays the login methods that match the specified application type. Possible values include console,
http, ontapi, rsh, snmp, service-processor, ssh, and telnet.
[-authentication-method <text>] - Authentication Method
Displays the login methods that match the specified authentication method. Possible values include the
following:
• cert - SSL certificate authentication
• community - SNMP community strings
• domain - Active Directory authentication
• nsswitch - LDAP or NIS authentication
• password - Password
• publickey - Public-key authentication
• usm - SNMP user security model
• saml - SAML authentication
[-remote-switch-ipaddress <IP Address>] - Remote Switch IP Address
Displays the login methods that match the specified IP address of the remote switch. The remote switch
could be a cluster switch monitored by cluster switch health monitor (CSHM) or a Fibre Channel (FC)
switch monitored by MetroCluster health monitor (MCC-HM). This parameter is applicable only when the
application is snmp and authentication method is usm (SNMP user security model).
[-role <text>] - Role Name
Displays the login methods that match the specified role.
[-is-account-locked {yes|no}] - Account Locked
Displays the login methods that match the specified account lock status.
[-comment <text>] - Comment Text
Displays the login methods that match the specified comment text.
[-is-ns-switch-group {yes|no}] - Whether Ns-switch Group
This specifies whether user-or-group-name is an LDAP or NIS group. Possible values are yes or no.
[-hash-function {sha512|sha256|md5}] - Password Hash Function (privilege: advanced)
Displays the login methods that match the specified password-hashing algorithm. Possible values are:
• sha512 - Secure hash algorithm (512 bits)
• sha256 - Secure hash algorithm (256 bits)
• md5 - Message digest algorithm (128 bits)
[-second-authentication-method {none|publickey|password|nsswitch}] - Second Authentication
Method
Displays the login methods that match the specified authentication method to be used as the second
factor. Possible values include the following:
• password - Password
• publickey - Public-key authentication
• nsswitch - NIS or LDAP authentication
• none - default value
● Examples
The example below illustrates how to display information about all user login methods:
Vserver: cluster1
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
admin console password admin no none
admin http password admin no none
admin ontapi password admin no none
admin service-processor
password admin no none
admin ssh password admin no none
autosupport console password autosupport no none
Vserver: vs1.netapp.com
Second
User/Group Authentication Acct Authentication
Name Application Method Role Name Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
vsadmin http password vsadmin yes none
vsadmin ontapi password vsadmin yes none
vsadmin ssh password vsadmin yes none
9 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login unlock command unlocks a specified account, enabling it to access the
management interface.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver to which the user account belongs.
-username <text> - Username
This specifies the user name of the account that is to be unlocked.
● Examples
The following command unlocks a user account named jdoe which belongs to the Vserver vs1.
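A representative command, reconstructed from the description (prompt illustrative):
cluster1::> security login unlock -vserver vs1 -username jdoe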
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login whoami command displays the name and role of the user logged in at the current
console session. It takes no options or other parameters.
● Examples
The following example shows that the current session is logged in by using the 'admin' user account:
cluster1::> whoami
(security login whoami)
User: admin
Role: admin
Related reference
security login show (page 655)
security login create (page 646)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login banner modify command modifies the login banner. The login banner is printed
just before the authentication step during the SSH and console device login process.
● Parameters
-vserver <Vserver Name> - Vserver Name
Use this parameter to specify the Vserver whose banner will be modified. Use the name of the cluster
admin Vserver to modify the cluster-level message. The cluster-level message is used as the default for
data Vservers that do not have a message defined.
{ [-message <text>] - Login Banner Message
This optional parameter can be used to specify a login banner message. If the cluster has a login banner
message set, the cluster login banner will be used by all data Vservers as well. Setting a data Vserver's
login banner will override the display of the cluster login banner. To reset a data Vserver's login banner
to use the cluster login banner, use this parameter with the value "-".
If you use this parameter, the login banner cannot contain newlines (also known as end of lines (EOLs)
or line breaks). To enter a login banner message with newlines, do not specify any parameter. You will
be prompted to enter the message interactively. Messages entered interactively can contain newlines.
● Examples
This example shows how to enter a login banner interactively:
cluster1::>
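A plausible way to start the interactive session (illustrative; after you enter the command without -message, the CLI prompts you to type the banner text, which may contain newlines):
cluster1::> security login banner modify -vserver cluster1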
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login banner show command displays the login banner.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects login banners that match the specified value. Use the name of the admin Vserver to specify the
cluster-level login banner.
[-message <text>] - Login Banner Message
Selects login banners that match the specified value. By default, this command will not display
unconfigured, or empty, login banners. To display all banners, specify -message *.
● Examples
The following shows sample output from this command:
cluster1::>
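A plausible invocation with illustrative output (the exact column layout and banner text are assumptions):
cluster1::> security login banner show
Vserver: cluster1
Message
-----------------------------------------------------------------------------
Authorized users only!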
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command establishes a gateway (tunnel) for authenticating Windows Active Directory (AD) domain
users' access to the cluster.
Before using this command to establish the tunnel, the following must take place:
• You must use the security login create command to create one or more AD domain user accounts
that will be granted access to the cluster.
- The -authentication-method parameter of the security login create command must be set to 'domain'.
- The -user-or-group-name parameter of the security login create command must be set to a valid AD
domain user account that is defined in a Windows Domain Controller's Active Directory. The user account
must be specified in the format <domainname>\<username>, where "domainname" is the name of
the CIFS domain server.
• You must identify or create a CIFS-enabled data Vserver that will be used for Windows authentication with
the Active Directory server. This Vserver is the tunnel Vserver, and it must be running for this command to
succeed.
Only one Vserver can be used as the tunnel. If you attempt to specify more than one Vserver for the tunnel,
Data ONTAP returns an error. If the tunnel Vserver is stopped or deleted, AD domain users' authentication
requests to the cluster will fail.
● Parameters
-vserver <vserver> - Authentication Tunnel Vserver
This parameter specifies a data Vserver that has been configured with CIFS. This Vserver will be used as
the tunnel for authenticating AD domain users' access to the cluster.
● Examples
The following commands create an Active Directory domain user account ('DOMAIN1\Administrator') for the
'cluster1' cluster, create a data Vserver ('vs'), create a CIFS server ('vscifs') for the Vserver, and specify 'vs' as
the tunnel for authenticating the domain user access to the cluster.
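A plausible sequence, reconstructed from the description; the aggregate, root-volume, and domain arguments to vserver create and vserver cifs create are illustrative assumptions:
cluster1::> security login create -vserver cluster1 -user-or-group-name "DOMAIN1\Administrator" -application ssh -authentication-method domain -role admin
cluster1::> vserver create -vserver vs -aggregate aggr1 -rootvolume root_vs -rootvolume-security-style mixed
cluster1::> vserver cifs create -vserver vs -cifs-server vscifs -domain domain1.example.com
cluster1::> security login domain-tunnel create -vserver vs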
Related reference
security login create (page 646)
vserver create (page 1866)
vserver cifs create (page 1903)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login domain-tunnel delete command deletes the tunnel established by the
security login domain-tunnel create command. An error message will be generated if no tunnel
exists.
● Examples
The following command deletes the tunnel established by security login domain-tunnel create.
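The command takes no parameters (prompt illustrative):
cluster1::> security login domain-tunnel delete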
Related reference
security login domain-tunnel create (page 660)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login domain-tunnel modify command modifies or replaces the tunnel Vserver. If a
tunnel Vserver is not already specified, this command sets the tunnel Vserver to the Vserver that you
specify; otherwise, it replaces the current tunnel Vserver with that Vserver. If the tunnel Vserver is changed,
authentication requests through the previous Vserver will fail. See security login domain-tunnel create for
more information.
● Parameters
[-vserver <vserver>] - Authentication Tunnel Vserver
This parameter specifies a Vserver that has been configured with CIFS and is associated with a Windows
Domain Controller's Active Directory authentication. This Vserver will be used as an authentication
tunnel for login accounts so that they can be used with administrative Vservers.
● Examples
The following command modifies the tunnel Vserver for administrative Vserver.
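A representative command, reconstructed from the description (the Vserver name vs is illustrative):
cluster1::> security login domain-tunnel modify -vserver vs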
Related reference
security login domain-tunnel create (page 660)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login domain-tunnel show command shows the tunnel Vserver that was specified by
the security login domain-tunnel create or security login domain-tunnel modify
command.
● Examples
The example below shows the tunnel Vserver, vs, that is currently used as an authentication tunnel. If a
tunnel Vserver has not been specified, the output informs you that the table is currently empty.
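A plausible invocation (the output format is illustrative):
cluster1::> security login domain-tunnel show
Tunnel Vserver: vs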
Related reference
security login domain-tunnel create (page 660)
security login domain-tunnel modify (page 661)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login motd modify command updates the message of the day (MOTD).
There are two categories of MOTDs: the cluster-level MOTD and the data Vserver-level MOTD. A user logging
in to a data Vserver's clustershell will potentially see two messages: the cluster-level MOTD followed by the
Vserver-level MOTD for that Vserver. The cluster administrator can enable or disable the cluster-level MOTD
on a per-Vserver basis. If the cluster administrator disables the cluster-level MOTD for a Vserver, a user
logging into the Vserver will not see the cluster-level message. Only a cluster administrator can enable or
disable the cluster-level message.
● Parameters
-vserver <Vserver Name> - Vserver Name
Use this parameter to specify the Vserver whose MOTD will be modified. Use the name of the cluster
admin Vserver to modify the cluster-level message.
{ [-message <text>] - Message of the Day (MOTD)
This optional parameter can be used to specify a message. If you use this parameter, the MOTD cannot
contain newlines (also known as end of lines (EOLs) or line breaks). If you do not specify any parameter
other than the -vserver parameter, you will be prompted to enter the message interactively. Messages
entered interactively can contain newlines. Non-ASCII characters must be provided as Unicode UTF-8.
The message may contain dynamically generated content using the following escape sequences:
• \\ - A single backslash character.
• \b - No output: supported for compatibility with Linux only.
• \C - Cluster name.
• \d - Current date as set on the login node.
● Examples
This example shows how to enter a MOTD interactively:
cluster1::>
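A plausible way to start the interactive session (illustrative; entering the command without -message causes the CLI to prompt for the message text, which may contain newlines):
cluster1::> security login motd modify -vserver cluster1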
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login motd show command displays information about the cluster-level and data Vserver
clustershell message of the day (MOTD).
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver Name
Selects the message of the day entries that match this parameter value. Use the name of the cluster
admin Vserver to see the cluster-level MOTD.
[-message <text>] - Message of the Day (MOTD)
Selects the message of the day entries that match this parameter value.
[-is-cluster-message-enabled {true|false}] - Is Cluster-level Message Enabled?
Selects the message of the day entries that match this parameter value.
● Examples
The following example displays all message of the day entries:
Vserver: vs0
Is the Cluster MOTD Displayed?: true
Message
-----------------------------------------------------------------------------
Welcome to the Vserver!
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login publickey create command associates an existing public key with a user account.
This command requires that you enter a valid OpenSSH-formatted public key, a user name, an index number,
and, optionally, a comment.
● Parameters
-vserver <Vserver Name> - Vserver
This parameter optionally specifies the Vserver of the user for whom you are adding the public key.
-username <text> - Username
This parameter specifies the name of the user for whom you are adding the public key. If you do not
specify a user, the user named admin is specified by default.
[-index <integer>] - Index
This parameter specifies an index number for the public key. The default value is the next available
index value, starting with zero if it is the first public key created for the user.
-publickey <certificate> - Public Key
This specifies the OpenSSH public key, which must be enclosed in double quotation marks.
[-comment <text>] - Comment
This optionally specifies comment text for the public key. Note that comment text should be enclosed in
quotation marks.
● Examples
The following command associates a public key with a user named tsmith for Vserver vs1. The public key is
assigned index number 5 and the comment text is “This is a new key”.
cluster1::> security login publickey create -vserver vs1 -username tsmith -index 5 -publickey
"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAspH64CYbUsDQCdW22JnK6J
/vU9upnKzd2zAk9C1f7YaWRUAFNs2Qe5lUmQ3ldi8AD0Vfbr5T6HZPCixNAIza
FciDy7hgnmdj9eNGedGr/JNrftQbLD1hZybX+72DpQB0tYWBhe6eDJ1oPLob
ZBGfMlPXh8VjeU44i7W4+s0hG0E=tsmith@publickey.example.com"
-comment "This is a new key"
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login publickey delete command deletes a public key for a specific user. To delete a
public key, you must specify a user name and index number.
● Parameters
-vserver <Vserver Name> - Vserver
This parameter optionally specifies the Vserver of the user for whom you are deleting the public key.
-username <text> - Username
This parameter specifies the name of the user for whom you are deleting a public key. If you do not
specify a user, the user named admin is specified by default.
-index <integer> - Index
This parameter specifies an index number for the public key.
● Examples
The following command deletes the public key for the user named tsmith with the index number 5.
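A representative command, reconstructed from the description and the create example above (prompt illustrative):
cluster1::> security login publickey delete -vserver vs1 -username tsmith -index 5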
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login publickey load-from-uri command loads one or more public keys from a
Universal Resource Identifier (URI). To load public keys from a URI, you must specify a user name, the URI
from which to load them, and optionally, whether you want to overwrite the existing public keys.
● Parameters
-vserver <vserver name> - Vserver
This parameter optionally specifies the Vserver for the user associated with the public keys.
-username <text> - Username
This parameter specifies the username for the public keys. If you do not specify a username, the
username "admin" is used by default.
-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI to load from
This parameter specifies the URI from which the public keys will be loaded.
-overwrite {true|false} - Overwrite Entries
This parameter optionally specifies whether you want to overwrite existing public keys. The default value
for this parameter is false. If the value is true and you confirm the overwrite, the existing public keys
are overwritten with the new public keys. If the value is false or you do not confirm the overwrite, the
newly loaded public keys are appended to the list of existing public keys using the next available index.
● Examples
The following command shows how to load public keys for the user named tsmith from the URI ftp://
ftp.example.com/identity.pub. This user's existing public keys are not overwritten.
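A representative command, reconstructed from the description; -vserver is omitted here because it is optional (prompt illustrative):
cluster1::> security login publickey load-from-uri -username tsmith -uri ftp://ftp.example.com/identity.pub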
The following command shows how to load public keys for the user named tsmith from the URI ftp://
ftp.example.com/identity.pub. The user's existing public keys are overwritten if the user enters 'y' or
'Y' at the confirmation prompt. If the user enters 'n' or 'N', the existing public keys are not
overwritten, and the newly loaded public keys are appended to the list of existing public keys using the
next available index. The user and password credentials that you provide with this command are the
credentials used to access the server specified by the URI.
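The command takes the following form (reconstructed from the parameters above):
cluster1::> security login publickey load-from-uri -username tsmith -uri ftp://ftp.example.com/identity.pub -overwrite true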
Enter User:
Enter Password:
Warning: You are about to overwrite the existing publickeys for the user
"tsmith" in Vserver "vs0". Do you want to proceed? {y|n}:
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login publickey modify command modifies a public key and optionally its comment
text.
● Parameters
-vserver <Vserver Name> - Vserver
Specifies the Vserver for the user associated with the public key.
-username <text> - Username
Specifies the username for the public key. If you do not specify a username, the username 'admin' is
used by default.
-index <integer> - Index
Specifies the index number of the public key. The index number of the public key can be found by using
the security login publickey show command.
[-publickey <certificate>] - Public Key
Specifies the new public key. You must enclose the new public key in double quotation marks.
[-comment <text>] - Comment
Specifies the new comment text for the public key.
● Examples
The following command modifies the public key at index number 10 for the user named tsmith of Vserver
vs1.
cluster1::> security login publickey modify -vserver vs1 -username tsmith -index 10 -publickey
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDD+pFzFgV/2dlowKRFgym9K910H/u+BVTGitCtHteHyo8thmaXT
1GLCzaoC/12+XXiYKMRhJ00S9Svo4QQKUXHdCPXFSgR5PnAs39set39ECCLzmduplJnkWtX96pQH/bg2g3upFcdC6z9
c37uqFtNVPfv8As1Si/9WDQmEJ2mRtJudJeU5GZwZw5ybgTaN1jxDWus9SO2C43F/vmoCKVT529UHt4/ePcaaHOGTiQ
O8+Qmm59uTgcfnpg53zYkpeAQV8RdYtMdWlRr44neh1WZrmW7x5N4nXNvtEzr9cvb9sJyqTX1CkQGfDOdb+7T7y3X7M
if/qKQY6FsovjvfZD"
Related reference
security login publickey show (page 667)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login publickey show command displays information about public keys.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the public keys that match this parameter value.
[-username <text>] - Username
Selects the public keys that match this parameter value.
[-index <integer>] - Index
Selects the public keys that match this parameter value.
[-publickey <certificate>] - Public Key
Selects the public keys that match this parameter value.
● Examples
The example below displays public key information for the user named tsmith.
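A representative invocation (reconstructed from the parameters above; the tabular output is not shown):
cluster1::> security login publickey show -username tsmith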
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login rest-role create command creates an access-control role. An access-control role
consists of a role name and an API to which the role has access. It optionally includes an access level (none,
readonly, or all) for the API. After you create an access-control role, you can apply it to a management-utility
login account by using the security login modify or security login create commands.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be created.
-api <text> - api path
This specifies the API to which the role has access.
-access {none|readonly|all} - Access Level
This optionally specifies an access level for the role. Possible access level settings are none, readonly,
and all. The default setting is all.
● Examples
The following command creates an access-control role named "admin" for the vs1.example.com Vserver. The
role has all access to the API "/api/storage/volume".
cluster1::> security login rest-role create -role admin -api /api/storage/volume -access all -vserver
vs1.example.com
Related reference
security login modify (page 651)
security login create (page 646)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login rest-role delete command deletes an access-control role.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be deleted.
-api <text> - api path
This specifies the API to which the role has access.
● Examples
The following command deletes an access-control role with the role name readonly and the API "/api/
storage/volume" for Vserver vs.example.com.
cluster1::> security login rest-role delete -role readonly -api /api/storage/volume -vserver vs.exampl
e.com
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login rest-role modify command modifies an access-control role.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be modified.
-api <text> - api path
This specifies the API to which the role has access.
[-access {none|readonly|all}] - Access Level
This optionally specifies a new access level for the role. Possible access level settings are none, readonly,
and all. The default setting is all.
● Examples
The following command modifies an access-control role with the role name readonly and the API access
"/api/storage/volume" to have the access level readonly for Vserver vs.example.com:
cluster1::> security login rest-role modify -role readonly -api /api/storage/volume -access readonly -v
server vs.example.com
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login rest-role show command displays the following information about access-control
roles:
• vserver
• Role name
• API to which the role has access
• Access level (none, read-only, or all)
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the roles that match this parameter value.
[-role <text>] - Role Name
Selects the roles that match this parameter value. If this parameter and the -api parameter are both
used, the command displays detailed information about the specified access-control role.
[-api <text>] - api path
Selects the roles that match this parameter value. If this parameter and the -role parameter are both
used, the command displays detailed information about the specified access-control role.
[-access {none|readonly|all}] - Access Level
Selects the roles that match this parameter value.
● Examples
The example below displays information about all access-control roles:
Role Access
Vserver Name API Level
---------- ------------- ----------- -----------
vs vsadmin /api none
cluster1 readonly /api/storage none
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login role create command creates an access-control role. An access-control role
consists of a role name and a command or directory to which the role has access. It optionally includes an
access level (none, readonly, or all) and a query that applies to the specified command or command
directory. After you create an access-control role, you can apply it to a management-utility login account by
using the security login modify or security login create commands.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be created.
-cmddirname <text> - Command / Directory
This specifies the command or command directory to which the role has access. To specify the default
setting, use the special value "DEFAULT".
[-access {none|readonly|all}] - Access Level
This optionally specifies an access level for the role. Possible access level settings are none, readonly,
and all. The default setting is all.
[-query <query>] - Query
This optionally specifies the object that the role is allowed to access. The query object must be
applicable to the command or directory name specified by -cmddirname. The query object must be
enclosed in double quotation marks (""), and it must be a valid field name.
● Examples
The following command creates an access-control role named "admin" for the vs1.example.com Vserver. The
role has all access to the "volume" command but only within the "aggr0" aggregate.
cluster1::> security login role create -role admin -cmddirname volume -query "-aggr aggr0" -access all
-vserver vs1.example.com
Related reference
security login modify (page 651)
security login create (page 646)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login role delete command deletes an access-control role.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be deleted.
-cmddirname <text> - Command / Directory
This specifies the command or command directory to which the role has access. To specify the default
setting, use the special value "DEFAULT".
● Examples
The following command deletes an access-control role with the role name readonly and the command
access "volume" for Vserver vs.example.com.
cluster1::> security login role delete -role readonly -cmddirname volume -vserver vs.example.com
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login role modify command modifies an access-control role.
● Parameters
-vserver <Vserver Name> - Vserver
This optionally specifies the Vserver name associated with the role.
-role <text> - Role Name
This specifies the role that is to be modified.
-cmddirname <text> - Command / Directory
This specifies the command or command directory to which the role has access. To specify the default
setting for a role, use the special value "DEFAULT". This value can be modified only for the roles created
for the admin Vserver.
[-access {none|readonly|all}] - Access Level
This optionally specifies a new access level for the role. Possible access level settings are none, readonly,
and all. The default setting is all.
[-query <query>] - Query
This optionally specifies the object that the role is allowed to access. The query object must be
applicable to the command or directory name specified by -cmddirname. The query object must be
enclosed in double quotation marks (""), and it must be a valid field name.
● Examples
The following command modifies an access-control role with the role name readonly and the command
access "volume" to have the access level readonly for Vserver vs.example.com:
cluster1::> security login role modify -role readonly -cmddirname volume -access readonly -vserver vs.e
xample.com
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security login role prepare-to-downgrade command restores the predefined roles of all Vservers
to their definitions in releases earlier than Data ONTAP 8.3.2. You must run this command in advanced
privilege mode when prompted to do so during the release downgrade.
● Examples
The following command restores the predefined roles of all Vservers to their definitions in releases earlier than Data ONTAP 8.3.2.
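A representative invocation at the advanced privilege level (reconstructed from the description above):
cluster1::*> security login role prepare-to-downgrade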
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login role show command displays the following information about access-control roles:
• Role name
• Command or command directory to which the role has access
• Access level (none, read-only, or all)
• Query (detailed view only)
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the roles that match this parameter value.
[-role <text>] - Role Name
Selects the roles that match this parameter value. If this parameter and the -cmddirname parameter
are both used, the command displays detailed information about the specified access-control role.
[-cmddirname <text>] - Command / Directory
Selects the roles that match this parameter value. If this parameter and the -role parameter are both
used, the command displays detailed information about the specified access-control role.
[-access {none|readonly|all}] - Access Level
Selects the roles that match this parameter value.
[-query <query>] - Query
Selects the roles that match this parameter value.
● Examples
The example below displays information about all access-control roles:
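A representative invocation (reconstructed; the tabular output is not shown):
cluster1::> security login role show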
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security login role show-ontapi command displays Data ONTAP APIs (ONTAPIs) and the CLI
commands that they are mapped to.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-ontapi <text>] - ONTAPI Name
Use this parameter to view the corresponding CLI command for the specified API.
[-command <text>] - CLI Command
Use this parameter to view the corresponding API or APIs for the specified CLI command.
● Examples
The following command displays all Data ONTAP APIs and their mapped CLI commands:
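A representative invocation (reconstructed; the mapping table output is not shown):
cluster1::> security login role show-ontapi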
The following example displays all Data ONTAP APIs which are mapped to the specified CLI command:
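A representative invocation; the CLI command value "volume show" is illustrative only:
cluster1::> security login role show-ontapi -command "volume show"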
The following example displays the CLI command that is mapped to the specified Data ONTAP API:
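A representative invocation; the ONTAPI name volume-get-iter is illustrative only:
cluster1::> security login role show-ontapi -ontapi volume-get-iter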
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login role config modify command modifies user account and password restrictions.
For the password character restrictions documented below (uppercase, lowercase, digits, etc.), the term
"characters" refers to ASCII-range characters only - not extended characters.
● Parameters
-vserver <vserver name> - Vserver
This specifies the Vserver name associated with the profile configuration.
-role <text> - Role Name
This specifies the role whose account restrictions are to be modified.
[-username-minlength <integer>] - Minimum Username Length Required
This specifies the required minimum length of the user name. Supported values are 3 to 16 characters.
The default setting is 3 characters.
[-username-alphanum {enabled|disabled}] - Username Alpha-Numeric
This specifies whether a mix of alphabetic and numeric characters are required in the user name. If this
parameter is enabled, a user name must contain at least one letter and one number. The default setting
is disabled.
[-passwd-minlength <integer>] - Minimum Password Length Required
This specifies the required minimum length of a password. Supported values are 3 to 64 characters. The
default setting is 8 characters.
[-account-expiry-time <unsigned32_or_unlimited>] - Account Expires in Days
This specifies account expiration in days. The default setting is unlimited, which means the accounts
never expire. The account expiry time must be greater than the account inactive limit.
[-account-inactive-limit <unsigned32_or_unlimited>] - Maximum Duration of Inactivity before
Account Expiration (Days)
This specifies inactive account expiry limit in days. The default setting is unlimited, which means the
inactive accounts never expire. The account inactive limit must be less than account expiry time.
● Examples
The following command modifies the user-account restrictions for an account with the role name admin for
a Vserver named vs. The minimum size of the password is set to 12 characters.
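The command takes the following form (reconstructed from the description above):
cluster1::> security login role config modify -vserver vs -role admin -passwd-minlength 12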
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security login role config reset command resets the following role based access control
(RBAC) characteristics to their default values. The system prompts you to run this command if you revert to
Data ONTAP 8.1.2 or earlier. If you do not reset these characteristics, the revert process will fail.
• Minimum number of special characters required in password ("0")
• Password-expiration time, in days ("unlimited")
• Whether the password must be changed at the initial login ("disabled")
• Maximum number of failed login attempts permitted before the account is locked out ("0")
• Number of days that the user account is locked out after the maximum number of failed login attempts is
reached ("0")
● Examples
The following command resets the above mentioned RBAC characteristics of all cluster and Vserver roles to
their default values.
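A representative invocation at the advanced privilege level (reconstructed from the description above):
cluster1::*> security login role config reset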
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security login role config show command displays the following information about account
restrictions for management-utility user accounts:
• Role name -role
• Minimum size of the password, in characters -passwd-minlength
• Whether the password requires alphanumeric characters -passwd-alphanum
• Number of previous passwords that cannot be reused -disallowed-reuse
• Minimum number of days that must elapse before users can change their passwords -change-delay
You can display detailed information about the restrictions on a specific account by specifying the -role
parameter. This adds the following information:
• Minimum length of the user name, in characters -username-minlength
• Whether the user name requires alphanumeric characters -username-alphanum
• Minimum length of the password, in characters -passwd-minlength
• Whether the password requires alphanumeric characters -passwd-alphanum
• Minimum number of special characters required in password -passwd-min-special-chars
• Minimum number of lowercase characters required in password -passwd-min-lowercase-chars
• Minimum number of uppercase characters required in password -passwd-min-uppercase-chars
• Minimum number of digits required in password -passwd-min-digits
• Minimum number of days that must elapse before users can change their passwords -change-delay
• Whether the password must be changed at the initial login -require-initial-passwd-update
• Password-expiration time, in days -passwd-expiry-time
• Display warning message days prior to password expiry -passwd-expiry-warn-time
• Number of previous passwords that cannot be reused -disallowed-reuse
• Maximum number of failed login attempts permitted before the account is locked out -max-failed-
login-attempts
• Number of days for which the user account is locked after the maximum number of failed login attempts
is reached -lockout-duration
• Account-expiration time, in days -account-expiry-time
• Maximum duration of inactivity before account expiration, in days -account-inactive-limit
• Delay after each failed login attempt, in secs -delay-after-failed-login
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
Selects the profile configurations that match this parameter value.
[-role <text>] - Role Name
If this parameter is specified, the command displays detailed information about restrictions for the
specified user account.
[-username-minlength <integer>] - Minimum Username Length Required
Selects the profile configurations that match this parameter value.
[-username-alphanum {enabled|disabled}] - Username Alpha-Numeric
Selects the profile configurations that match this parameter value. Enabled means a user name must
contain both letters and numbers.
[-passwd-minlength <integer>] - Minimum Password Length Required
Selects the profile configurations that match this parameter value.
[-passwd-alphanum {enabled|disabled}] - Password Alpha-Numeric
Selects the profile configurations that match this parameter value. Enabled means a password must
contain both letters and numbers.
● Examples
The example below displays restriction information about all user accounts:
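A representative invocation (reconstructed; the tabular output is not shown):
cluster1::> security login role config show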
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security protocol modify command modifies the existing cluster-wide configuration of RSH and
Telnet. Enable RSH and Telnet in the cluster by setting the enabled field to true.
● Parameters
-application <text> - application
Selects the application. Supported values are rsh and telnet.
[-enabled {true|false}] - enabled
Enables or disables the corresponding application. The default value is false.
● Examples
The following command enables RSH in the cluster. The default setting for RSH is false:
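The command takes the following form (reconstructed from the parameters above):
cluster1::*> security protocol modify -application rsh -enabled true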
The following command enables Telnet in the cluster. The default setting for Telnet is false:
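The command takes the following form (reconstructed from the parameters above):
cluster1::*> security protocol modify -application telnet -enabled true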
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security protocol show command displays the cluster-wide configuration of RSH and Telnet in the
cluster in advanced privilege mode. RSH and Telnet are disabled by default. Use the security protocol
modify command to change the RSH and Telnet configuration that the cluster supports.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-application <text>] - application
Displays the insecure applications in the cluster.
[-enabled {true|false}] - enabled
Displays whether the application is enabled or disabled in the cluster.
● Examples
The following example shows the default security protocol configurations for a cluster:
Application Enabled
------------ -------------
rsh false
telnet false
The following example shows the security protocol configuration after RSH and Telnet have been enabled:
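Reconstructed from the output format shown above, the configuration would then display as:
Application Enabled
------------ -------------
rsh true
telnet true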
Related reference
security protocol modify (page 680)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security protocol ssh modify command modifies the existing cluster-wide configuration of SSH.
● Parameters
[-per-source-limit <integer>] - Per-Source Limit
Modifies the maximum number of SSH instances per source IP address on a per-node basis.
[-max-instances <integer>] - Maximum Number of Instances
Modifies the maximum number of SSH instances that can be handled on a per-node basis.
[-connections-per-second <integer>] - Connections Per Second
Modifies the maximum number of SSH connections per second on a per-node basis.
● Examples
The following example modifies cluster-wide SSH configuration:
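A representative invocation; the limit values shown are illustrative only:
cluster1::*> security protocol ssh modify -per-source-limit 48 -max-instances 128 -connections-per-second 20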
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security protocol ssh show command displays the cluster-wide SSH configuration in advanced
privilege mode. Use the security protocol ssh modify command to change the SSH configuration
that the cluster supports.
● Examples
The following example displays cluster-wide SSH configuration:
Per-Source Limit: 32
Maximum Number of Instances: 64
Connections Per Second: 10
Related reference
security protocol ssh modify (page 681)
These commands configure the Security Assertion Markup Language (SAML) Service Provider (SP) single
sign-on configuration. SAML SP functionality provides the ability for Data ONTAP to use an Identity Provider
(IdP) for authentication.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security saml-sp create command configures ONTAP with Security Assertion Markup Language
(SAML) Service Provider (SP) for single sign-on authentication. This command does not enable SAML SP;
it only configures it. Configuring and enabling SAML SP is a two-step process:
• Create a SAML SP configuration using security saml-sp create command.
• Enable SAML SP by using security saml-sp modify -is-enabled true
After the SAML SP configuration is created, it cannot be modified. It must be deleted and created again to
change any settings.
Note
This restarts the web server. Any HTTP/S connections that are active will be disrupted.
● Parameters
-idp-uri {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - Identity Provider (IdP) Metadata
Location
This is the URI of the desired identity provider's (IdP) metadata.
[-sp-host <Remote InetAddress>] - SAML Service Provider Host
This optionally specifies the host (IP address) that contains the SAML SP metadata.
● Examples
The following example configures ONTAP with SAML SP IdP information:
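A representative invocation; the IdP hostname idp.example.com is illustrative only:
cluster1::> security saml-sp create -idp-uri http://idp.example.com/metadata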
Related reference
security saml-sp modify (page 684)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security saml-sp delete command is used to remove the Security Assertion Markup Language
(SAML) Service Provider (SP). Running this command frees resources used by the SP. SAML SP services will
no longer be available after the SP is removed.
If the SAML SP is currently enabled, it is necessary to run security saml-sp modify -is-enabled
false before running security saml-sp delete. The security saml-sp modify -is-enabled false
command must be issued by a password-authenticated console application user or from a SAML-
authenticated command interface.
Note
This restarts the web server. Any HTTP/S connections that are active will be disrupted.
● Examples
The following example unconfigures SAML SP:
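The command takes the following form (reconstructed from the description above):
cluster1::> security saml-sp delete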
Related reference
security saml-sp modify (page 684)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security saml-sp modify command modifies the Security Assertion Markup Language (SAML)
Service Provider (SP) configuration for single sign-on authentication. This command is used to enable or
disable an existing SAML SP, security saml-sp modify -is-enabled true or false respectively.
This command will check the validity of the current SAML SP configuration before enabling the SP. Also, it is
necessary to use this command with the -is-enabled false parameter prior to deleting an existing SAML
SP configuration. SAML SP can only be disabled in this way by a password authenticated console application
user or from a SAML authenticated command interface. The delete command must be used if the SAML
configuration settings are to be changed, as only the is-enabled parameter can be modified.
Note
This may restart the web server. Any HTTP/S connections that are active may be disrupted.
● Parameters
[-is-enabled {true|false}] - SAML Service Provider Enabled
Use this parameter to enable or disable the SAML SP.
● Examples
The following example enables SAML SP:
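The command takes the following form (reconstructed from the description above):
cluster1::> security saml-sp modify -is-enabled true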
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security saml-sp repair command attempts to repair a failed SAML SP configuration on a given
node. The status of the individual nodes can be viewed using the security saml-sp status show
command.
Note
This restarts the web server. Any active HTTP/S requests to the web server will be disrupted.
● Parameters
-node {<nodename>|local} - Node
This identifies a single node that matches the input. The repair job will run on this node.
[-foreground {true|false}] - Foreground Process
When this parameter is set to false, the command runs in the background as a job. The default is true,
which causes the command to return only after the operation completes.
● Examples
The following example repairs a failed SAML SP configuration:
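A representative invocation at the advanced privilege level; the node name node1 is illustrative only:
cluster1::*> security saml-sp repair -node node1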
Warning: This restarts the web server. Any active HTTP/S requests to the web
server will be disrupted
cluster1::*>
Related reference
security saml-sp status show (page 686)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security saml-sp show command displays the Security Assertion Markup Language (SAML) Service
Provider (SP) configuration.
The Identity Provider (IdP) URI indicates the URI of the desired IdP's metadata.
The Service Provider (SP) host indicates the IP address containing the SAML SP metadata.
The Certificate Common Name indicates the SAML SP certificate's common name.
The Certificate Serial indicates the SAML SP certificate's serial number.
● Examples
The following example displays the SAML SP configuration:
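A representative invocation (reconstructed; output not shown):
cluster1::> security saml-sp show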
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The security saml-sp status show command displays the SAML Service Provider (SP) status for all
nodes in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This identifies the node in the cluster.
[-status {not-configured|config-in-progress|config-failed|config-success}] - Update Status
This identifies the SAML SP status on the specified node.
[-error-text <text>] - Error Text
This identifies the error text associated with the latest SAML SP update for this node.
[-is-enabled {true|false}] - SAML Service Provider Enabled
When this parameter is set to true it indicates that the SAML SP is enabled on this node. Similarly, when
this parameter is set to false, it indicates that the SAML SP is not enabled on this node.
● Examples
The following example displays the SAML SP status information for all nodes in the cluster.
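A representative invocation at the advanced privilege level (reconstructed; the per-node status table is not shown):
cluster::*> security saml-sp status show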
cluster::*>
The security session commands provide statistics to monitor management session activity and limit
configurations to control management session activity for specific categories.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security session kill-cli command is used to terminate CLI sessions. If the session being killed
is actively processing a non-read command, the kill will wait until the command is complete before
terminating the session. If the session being killed is actively processing a read (show) command, the kill
will wait until the current row is returned before terminating the session.
● Parameters
-node {<nodename>|local} - Node
Selects the sessions that match this parameter value. This identifies the node that is processing the
session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that is processing the session.
[-start-time <MM/DD HH:MM:SS>] - Start Time
Selects the sessions that match this parameter value. This identifies the start time of the current active
session.
-session-id <integer> - Session ID
Selects the sessions that match this parameter value. This number uniquely identifies a management
session within a given node.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver associated with this
management session.
[-username <text>] - Username
Selects the sessions that match this parameter value. This identifies the authenticated user associated
with this management session.
[-application <text>] - Client Application
Selects the sessions that match this parameter value. This identifies the calling application by name.
[-location <text>] - Client Location
Selects the sessions that match this parameter value. This identifies the location of the calling client
application. This is typically the IP address of the calling client, or "console" or "localhost" for console or
localhost connections.
[-idle-seconds <integer>] - Idle Seconds
Selects the sessions that match this parameter value. When a session is not actively executing a
command request (the session is idle), this indicates the time (in seconds) since the last request
completed.
[-state {pending|active|idle}] - Session State
Selects the sessions that match this parameter value. This identifies the state (pending, active, or idle)
of the session. The state is "pending" if the session has hit a session limit and is waiting for another
session to end. The state is "idle" for CLI sessions that are waiting at the command prompt. The state is
"active" if the session is actively working on a request.
[-request <text>] - Active Command
Selects the sessions that match this parameter value. This identifies the request (command) that is
currently being handled by the session.
● Examples
The following example illustrates killing a CLI session by specifying the node and the session id.
cluster1::>
cluster1::>
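A sketch of such an invocation, using the documented -node and -session-id parameters (the node name and session ID shown are illustrative):
cluster1::> security session kill-cli -node node1 -session-id 100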
The following example illustrates killing a CLI session by specifying the node and specifying a query on idle-
seconds.
cluster1::>
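A sketch of such an invocation, combining -node with a query on the documented -idle-seconds parameter (the node name and threshold shown are illustrative):
cluster1::> security session kill-cli -node node1 -idle-seconds >500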
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security session show command displays all active management sessions across the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that is processing the
session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that is processing the session.
[-start-time <MM/DD HH:MM:SS>] - Start Time
Selects the sessions that match this parameter value. This identifies the start time of the current active
session.
[-session-id <integer>] - Session ID
Selects the sessions that match this parameter value. This number uniquely identifies a management
session within a given node.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver associated with this
management session.
[-username <text>] - Username
Selects the sessions that match this parameter value. This identifies the authenticated user associated
with this management session.
[-application <text>] - Client Application
Selects the sessions that match this parameter value. This identifies the calling application by name.
[-location <text>] - Client Location
Selects the sessions that match this parameter value. This identifies the location of the calling client
application. This is typically the IP address of the calling client, or "console" or "localhost" for console or
localhost connections.
[-ipspace <IPspace>] - IPspace of Location
Selects the sessions that match this parameter value. This identifies the IPspace of the client location.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that
have been made thus far in the active session. The following commands are not counted: top, up, cd,
rows, history, exit.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that have
failed for any reason (including if they were blocked by configured limits).
[-max-time <integer>] - Maximum Time (ms)
Selects the sessions that match this parameter value. This identifies the maximum amount of time (in
milliseconds) that any request took for this session.
[-last-time <integer>] - Last Time (ms)
Selects the sessions that match this parameter value. This identifies the amount of time (in
milliseconds) that the last request took for this session.
[-total-seconds <integer>] - Total Seconds
Selects the sessions that match this parameter value. This identifies the total time (in seconds) that has
been taken by all completed requests for the current session; it does not include session idle time.
[-state {pending|active|idle}] - Session State
Selects the sessions that match this parameter value. This identifies the state (pending, active, or idle)
of the session. The state is "pending" if the session has hit a session limit and is waiting for another
session to end. The state is "idle" for CLI sessions that are waiting at the command prompt. The state is
"active" if the session is actively working on a request.
[-request <text>] - Request Input
Selects the sessions that match this parameter value. This identifies the request (command) that is
currently being handled by the session.
[-idle-seconds <integer>] - Idle Seconds
Selects the sessions that match this parameter value. When a session is not actively executing a
command request (the session is idle), this indicates the time (in seconds) since the last request
completed.
[-active-seconds <integer>] - Active Seconds
Selects the sessions that match this parameter value. When a session is actively executing a command
request, this indicates the time (in seconds) since the current request started.
● Examples
The following example illustrates displaying all active sessions across the cluster. In this example, we see
one active session on node node2 from the console application. We also see three active sessions on node
node1. One is from the console application and two are from the ssh application. Also one of the ssh
sessions is from user diag and the other ssh session is from user admin.
cluster1::>
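With no query parameters, the invocation is simply:
cluster1::> security session show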
The following example illustrates displaying all active sessions that have been idle for longer than 500
seconds.
cluster1::>
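A sketch of such an invocation, using a query on the documented -idle-seconds parameter (the threshold of 500 seconds comes from the example description):
cluster1::> security session show -idle-seconds >500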
These commands allow management session limits to be configured for specific categories, including
application, location, request, user, and Vserver. The default limits can be overridden for specific values
within each category by using advanced privilege level commands.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command allows creation of a default management session limit that does not yet exist. The default
limits can be overridden for specific values within each category by using advanced privilege level
commands.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-category {application|location|request|user|vserver} - Category
The session type for this default limit. The following categories are supported: application, location,
request, user, Vserver.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and category.
● Examples
The following example illustrates creating a default limit for management sessions using the same
application.
cluster1::> security session limit create -interface ontapi -category application -max-active-limit 8
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command allows deletion of a default management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-category {application|location|request|user|vserver} - Category
The session type for this default limit. The following categories are supported: application, location,
request, user, Vserver.
● Examples
The following example illustrates deleting all default limits for CLI management sessions.
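A sketch of such an invocation, using a wildcard query on the documented -category parameter to match every category:
cluster1::> security session limit delete -interface cli -category *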
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command allows modification of a default management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-category {application|location|request|user|vserver} - Category
The session type for this default limit. The following categories are supported: application, location,
request, user, Vserver.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and category.
● Examples
The following example illustrates modifying the default limit for CLI management sessions from the same
location.
cluster1::> security session limit modify -interface cli -category location -max-active-limit 4
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command shows the default management session limits that have been configured for each interface
and category.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
to which the limit applies.
[-category {application|location|request|user|vserver}] - Category
Selects the sessions that match this parameter value. This identifies the category for the limit. The
following categories are supported: application, location, request, user, and Vserver.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying the default limits for management sessions.
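With no query parameters, the invocation is simply:
cluster1::> security session limit show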
These commands allow management session limits to be configured for specific applications.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows creation of a per-application management session limit that does not yet exist.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-application <text> - Application
The specified application to which this limit applies. The limit with the application name -default- is the
limit used for any application without a specific configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and application.
● Examples
The following example illustrates creating a limit for management sessions from a custom application.
cluster1::*> security session limit application create -interface ontapi -application "custom_app" -max-active-limit 8
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows deletion of a per-application management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-application <text> - Application
The specified application to which this limit applies. The limit with the application name -default- is the
limit used for any application without a specific configured limit.
● Examples
The following example illustrates deleting a limit for management sessions from a custom application.
cluster1::*> security session limit application delete -interface ontapi -application "custom_app"
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows modification of a per-application management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-application <text> - Application
The specified application to which this limit applies. The limit with the application name -default- is the
limit used for any application without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and application.
● Examples
The following example illustrates modifying management session limits for some custom applications.
cluster1::*> security session limit application modify -interface ontapi -application custom* -max-active-limit 4
2 entries were modified.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command shows the per-application management session limits that have been configured for each
interface and application.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
to which the limit applies.
[-application <text>] - Application
Selects the sessions that match this parameter value. This identifies the application for the limit. The
limit with the application name -default- is the limit used for any application without a specific
configured limit.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying the per-application limits for ONTAPI management sessions.
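A sketch of such an invocation, restricting the output to the ONTAPI interface:
cluster1::*> security session limit application show -interface ontapi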
These commands allow management session limits to be configured for specific locations.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows creation of a per-location management session limit that does not yet exist.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-location <text> - Location
The specified location to which this limit applies. The limit with the location name -default- (in the
Default IPspace) is the limit used for any location (in any IPspace) without a specific configured limit.
[-ipspace <IPspace>] - IPspace of Location
This identifies the IPspace of the client location. If not specified, changes are made in the Default
IPspace.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and location.
● Examples
The following example illustrates creating a CLI limit for a specific location.
cluster1::*> security session limit location create -interface cli -location 10.98.16.164 -max-active-limit 1
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows deletion of a per-location management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-location <text> - Location
The specified location to which this limit applies. The limit with the location name -default- (in the
Default IPspace) is the limit used for any location (in any IPspace) without a specific configured limit.
[-ipspace <IPspace>] - IPspace of Location
This identifies the IPspace of the client location. If not specified, changes are made in the Default
IPspace.
● Examples
The following example illustrates deleting limits for management sessions from a specific set of locations.
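A sketch of such an invocation, using a wildcard query on the documented -location parameter (the address pattern shown is illustrative):
cluster1::*> security session limit location delete -interface cli -location 10.98.*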
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows modification of a per-location management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-location <text> - Location
The specified location to which this limit applies. The limit with the location name -default- (in the
Default IPspace) is the limit used for any location (in any IPspace) without a specific configured limit.
[-ipspace <IPspace>] - IPspace of Location
This identifies the IPspace of the client location. If not specified, changes are made in the Default
IPspace.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and location.
● Examples
The following example illustrates modifying management session limits for specific locations.
cluster1::*> security session limit location modify -interface * -location 10.98.* -max-active-limit 2
3 entries were modified.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command shows the per-location management session limits that have been configured for each
interface and location.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
to which the limit applies.
[-location <text>] - Location
Selects the sessions that match this parameter value. This identifies the location for the limit. The limit
with the location name -default- (only in the Default IPspace) is the limit used for any location (in any
IPspace) without a specific configured limit.
[-ipspace <IPspace>] - IPspace of Location
Selects the sessions that match this parameter value. This identifies the IPspace of the client location.
The default IPspace is Default.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying the per-location limits for management sessions.
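With no query parameters, the invocation is simply:
cluster1::*> security session limit location show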
These commands allow management session limits to be configured for specific requests.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows creation of a per-request management session limit that does not yet exist.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-request <text> - Request Name
The specified request to which this limit applies. The limit with the request name -default- is the limit
used for any request without a specific configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and request.
● Examples
The following example illustrates creating a limit for number of clients executing a specific API.
cluster1::*> security session limit request create -interface ontapi -request storage-disk-get-iter -max-active-limit 2
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows deletion of a per-request management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-request <text> - Request Name
The specified request to which this limit applies. The limit with the request name -default- is the limit
used for any request without a specific configured limit.
● Examples
The following example illustrates deleting custom limits that were configured for the volume commands
and APIs.
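A sketch of such an invocation, using wildcard queries on the documented -interface and -request parameters to match the volume commands and APIs:
cluster1::*> security session limit request delete -interface * -request volume*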
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows modification of a per-request management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-request <text> - Request Name
The specified request to which this limit applies. The limit with the request name -default- is the limit
used for any request without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and request.
● Examples
The following example illustrates modifying the limit on the number of clients simultaneously executing a
specific API.
cluster1::*> security session limit request modify -interface ontapi -request storage-disk-get-iter -max-active-limit 4
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command shows the per-request management session limits that have been configured for each
interface and request.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
to which the limit applies.
[-request <text>] - Request Name
Selects the sessions that match this parameter value. This identifies the request (command or API) for
the limit. The limit with the request name -default- is the limit used for any request without a specific
configured limit.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying the per-request limits for management sessions.
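With no query parameters, the invocation is simply:
cluster1::*> security session limit request show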
These commands allow management session limits to be configured for specific Vserver users.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows creation of a per-user management session limit that does not yet exist.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-user <text> - User
The specified user to which this limit applies. The limit with the user name -default- is the limit used for
any user without a specific configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface, Vserver, and user.
● Examples
The following example illustrates creating a per-user limit override for ONTAPI requests for the admin user in
the admin Vserver.
cluster1::*> security session limit user create -interface ontapi -vserver cluster1 -user admin -max-active-limit 16
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows deletion of a per-user management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-user <text> - User
The specified user to which this limit applies. The limit with the user name -default- is the limit used for
any user without a specific configured limit.
● Examples
The following example illustrates deleting all user-specific limits for CLI management sessions.
cluster1::*> security session limit user delete -interface cli -user !"-default-"
2 entries were deleted.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows modification of a per-user management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-user <text> - User
The specified user to which this limit applies. The limit with the user name -default- is the limit used for
any user without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface, Vserver, and user.
● Examples
The following example illustrates modifying the admin user's limit for CLI management sessions.
cluster1::*> security session limit user modify -interface cli -vserver cluster1 -user admin -max-active-limit 30
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command shows the per-user management session limits that have been configured for each interface,
Vserver, and user.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
to which the limit applies.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver for the limit. The
"Cluster" Vserver is used to limit Vservers that do not have a configured limit.
[-user <text>] - User
Selects the sessions that match this parameter value. This identifies the user for the limit. The limit with
the user name -default- is the limit used for any user without a specific configured limit.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying the per-user limits for CLI management sessions. In this
example, there is a default limit of 4 sessions for each user. That limit is expanded to 8 for the admin
Vserver. That limit is further expanded to 20 for the admin user in the admin Vserver.
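A sketch of such an invocation, restricting the output to the CLI interface:
cluster1::*> security session limit user show -interface cli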
These commands allow management session limits to be configured for specific Vservers.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows creation of a per-Vserver management session limit that does not yet exist.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
-max-active-limit <integer> - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and Vserver.
● Examples
The following example illustrates creating a per-Vserver limit override for ONTAPI requests on the admin
Vserver.
cluster1::*> security session limit vserver create -interface ontapi -vserver cluster1 -max-active-limit 4
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows deletion of a per-Vserver management session limit. The "Cluster" Vserver is used
when the specific Vserver does not have a configured limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
● Examples
The following example illustrates deleting all per-Vserver limits for management sessions except the default
limit.
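A sketch of such an invocation, using a negated query on the documented -vserver parameter to exclude the default "Cluster" Vserver entry:
cluster1::*> security session limit vserver delete -interface * -vserver !Cluster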
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command allows modification of a per-Vserver management session limit.
● Parameters
-interface {cli|ontapi|rest} - Interface
The interface (CLI, ONTAPI, or REST) to which the limit applies.
-vserver <vserver> - Vserver
The specified Vserver to which this limit applies. The "Cluster" Vserver is used to limit Vservers that do not
have a configured limit.
[-max-active-limit <integer>] - Max-Active Limit
The maximum number of concurrent sessions allowed for this interface and Vserver.
● Examples
The following example illustrates modifying the admin Vserver's limit for CLI management sessions.
cluster1::*> security session limit vserver modify -interface cli -vserver cluster1 -max-active-limit 40
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command shows the per-Vserver management session limits that have been configured for each
interface and Vserver.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
to which the limit applies.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver for the limit. The
"Cluster" Vserver is used to limit Vservers that do not have a configured limit.
[-max-active-limit <integer>] - Max-Active Limit
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying the per-Vserver limits for management sessions.
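With no query parameters, the invocation is simply:
cluster1::*> security session limit vserver show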
These commands provide historical statistics surrounding management session activity for specific
categories, including application, location, request, user, and Vserver.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security session request-statistics show-by-application command shows historical
statistics for management session activity, categorized by application name. CLI sessions are assigned an
application name based on the connection method (ssh, telnet, rsh, console, or ngsh). ONTAPI sessions
extract the application name from the ZAPI request. ONTAP looks for the application name in the following
three locations, in the following order of precedence:
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the
session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-application <text>] - Application
Selects the sessions that match this parameter value. This identifies the calling application by name.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that
have been made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were
blocked due to configured limits.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that failed
for any reason (including if they were blocked by configured limits).
[-max-time <integer>] - Maximum Time (ms)
Selects the sessions that match this parameter value. This identifies the maximum amount of time (in
milliseconds) that any request took.
[-last-time <integer>] - Last Time (ms)
Selects the sessions that match this parameter value. This identifies the amount of time (in
milliseconds) that the last request took.
[-active <integer>] - Number Active Now
Selects the sessions that match this parameter value. This identifies the number of currently active
sessions.
● Examples
The following example illustrates displaying historical statistics for all management session activity across
the cluster, categorized by application name.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a
specific node and for a specific application.
cluster1::>
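The example output is not reproduced in this excerpt. Based on the parameters documented above, a sketch of the second invocation, using the hypothetical node name node1, might be:
cluster1::> security session request-statistics show-by-application -node node1 -application ssh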
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security session request-statistics show-by-location command shows historical statistics
for management session activity, categorized by client location.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the
session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-location <text>] - Client Location
Selects the sessions that match this parameter value. This identifies the location of the calling client
application. This is typically the IP address of the calling client, or "console" or "localhost" for console or
localhost connections.
[-ipspace <IPspace>] - IPspace of Location
Selects the sessions that match this parameter value. This identifies the IPspace of the client location.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that
have been made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were
blocked due to configured limits.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that failed
for any reason (including if they were blocked by configured limits).
[-max-time <integer>] - Maximum Time (ms)
Selects the sessions that match this parameter value. This identifies the maximum amount of time (in
milliseconds) that any request took.
[-last-time <integer>] - Last Time (ms)
Selects the sessions that match this parameter value. This identifies the amount of time (in
milliseconds) that the last request took.
[-active <integer>] - Number Active Now
Selects the sessions that match this parameter value. This identifies the number of currently active
sessions.
[-max-active <integer>] - Max Number Active
Selects the sessions that match this parameter value. This identifies the maximum number of
concurrently active sessions.
[-last-active-seconds <integer>] - Seconds Since Last Session Start
Selects the sessions that match this parameter value. When a session is active, this indicates the time
(in seconds) since the last session started.
[-idle-seconds <integer>] - Idle Seconds
Selects the sessions that match this parameter value. When no sessions are active, this indicates the
time (in seconds) since the last session ended.
[-total-seconds <integer>] - Total Seconds
Selects the sessions that match this parameter value. This identifies the total time (in seconds) that
has been taken by all completed requests; it does not include session idle time.
[-average-time <integer>] - Average Time (ms)
Selects the sessions that match this parameter value. This identifies the mean time spent processing
requests.
[-success-percent <percent>] - Success Percent
Selects the sessions that match this parameter value. This identifies the percentage of successful
requests.
[-blocked-percent <percent>] - Blocked Percent
Selects the sessions that match this parameter value. This identifies the percentage of requests that
were blocked due to configured limits.
[-failed-percent <percent>] - Failed Percent
Selects the sessions that match this parameter value. This identifies the percentage of requests that
failed for any reason (including if they were blocked by configured limits).
[-max-active-limit <integer>] - Max-Active Limit (privilege: advanced)
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying historical statistics for all management session activity across
the cluster, categorized by location.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a
specific node and for a specific location.
cluster1::>
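The example output is not reproduced in this excerpt. A sketch of the second invocation, using the hypothetical node name node1 and the documentation IP address 192.0.2.1 as the client location, might be:
cluster1::> security session request-statistics show-by-location -node node1 -location 192.0.2.1
As described above, the location is typically the IP address of the calling client, or "console" or "localhost".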
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security session request-statistics show-by-request command shows historical statistics
for management session activity, categorized by request (command or API name).
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the
session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-request <text>] - Request Name
Selects the sessions that match this parameter value. This identifies the command associated with these
requests.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that
have been made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
[-blocked <integer>] - Blocked Requests
Selects the sessions that match this parameter value. This identifies the number of requests that were
blocked due to configured limits.
[-failed <integer>] - Failed Requests
Selects the sessions that match this parameter value. This identifies the number of requests that failed
for any reason (including if they were blocked by configured limits).
[-max-time <integer>] - Maximum Time (ms)
Selects the sessions that match this parameter value. This identifies the maximum amount of time (in
milliseconds) that any request took.
[-last-time <integer>] - Last Time (ms)
Selects the sessions that match this parameter value. This identifies the amount of time (in
milliseconds) that the last request took.
[-active <integer>] - Number Active Now
Selects the sessions that match this parameter value. This identifies the number of currently active
requests.
[-max-active <integer>] - Max Number Active
Selects the sessions that match this parameter value. This identifies the maximum number of
concurrently active requests.
[-last-active-seconds <integer>] - Seconds Since Last Request Start
Selects the sessions that match this parameter value. When requests are active, this indicates the time
(in seconds) since the last request started.
[-idle-seconds <integer>] - Idle Seconds
Selects the sessions that match this parameter value. When no requests are active, this indicates the
time (in seconds) since the last request ended.
[-total-seconds <integer>] - Total Seconds
Selects the sessions that match this parameter value. This identifies the total time (in seconds) that
has been taken by all completed requests; it does not include session idle time.
[-average-time <integer>] - Average Time (ms)
Selects the sessions that match this parameter value. This identifies the mean time spent processing
requests.
[-success-percent <percent>] - Success Percent
Selects the sessions that match this parameter value. This identifies the percentage of successful
requests.
[-blocked-percent <percent>] - Blocked Percent
Selects the sessions that match this parameter value. This identifies the percentage of requests that
were blocked due to configured limits.
[-failed-percent <percent>] - Failed Percent
Selects the sessions that match this parameter value. This identifies the percentage of requests that
failed for any reason (including if they were blocked by configured limits).
[-max-active-limit <integer>] - Max-Active Limit (privilege: advanced)
Selects the sessions that match this parameter value. This identifies the configured limit that is used to
throttle or reject requests.
● Examples
The following example illustrates displaying historical statistics for all management session activity on a
specific node, with a specific request query.
cluster1::>
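The example output is not reproduced in this excerpt. A sketch of the invocation, using the hypothetical node name node1 and a hypothetical wildcard request query, might be:
cluster1::> security session request-statistics show-by-request -node node1 -request volume*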
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security session request-statistics show-by-user command shows historical statistics for
management session activity, categorized by username. Entries for username 'autosupport' reflect
commands that are executed by the AutoSupport OnDemand feature.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the
session.
[-interface {cli|ontapi|rest}] - Interface
Selects the sessions that match this parameter value. This identifies the interface (CLI, ONTAPI, or REST)
that processed the session.
[-vserver <vserver>] - Vserver
Selects the sessions that match this parameter value. This identifies the Vserver associated with this
management session.
[-username <text>] - Username
Selects the sessions that match this parameter value. This identifies the authenticated user associated
with this management session.
[-total <integer>] - Total Requests
Selects the sessions that match this parameter value. This identifies the total number of requests that
have been made on a session. The following commands are not counted: top, up, cd, rows, history, exit.
● Examples
The following example illustrates displaying historical statistics for all management session activity across
the cluster, categorized by username.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a
specific node and for a specific username.
cluster1::>
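The example output is not reproduced in this excerpt. A sketch of the second invocation, using the hypothetical node name node1 and the built-in admin username, might be:
cluster1::> security session request-statistics show-by-user -node node1 -username admin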
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The security session request-statistics show-by-vserver command shows historical statistics
for management session activity, categorized by vserver.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the sessions that match this parameter value. This identifies the node that processed the
session.
● Examples
The following example illustrates displaying historical statistics for all management session activity across
the cluster, categorized by Vserver.
cluster1::>
The following example illustrates displaying historical statistics for management session activity on a
specific node, for a specific Vserver.
cluster1::>
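The example output is not reproduced in this excerpt. A sketch of the second invocation might be:
cluster1::> security session request-statistics show-by-vserver -node node1 -vserver vs0
Here node1 and vs0 are hypothetical names, and the -vserver filter is assumed from the command's category; it is not listed among the parameters in this excerpt.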
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security ssh add command adds SSH key exchange algorithms, ciphers, or MAC algorithms to the
existing configurations of the cluster or a Vserver. The added algorithms, ciphers, or MAC algorithms are
enabled on the cluster or Vserver. If you change the cluster configuration settings, they are used as the
default for all newly created Vservers. The existing SSH key exchange algorithms, ciphers, and MAC
algorithms remain unchanged in the configuration. If the specified key exchange algorithms, ciphers, or
MAC algorithms are already enabled in the current configuration, the command does not fail. Data
ONTAP supports the diffie-hellman-group-exchange-sha256 key exchange algorithm for SHA-2. Data ONTAP
also supports the diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, and diffie-hellman-
group1-sha1 SSH key exchange algorithms for SHA-1. The SHA-2 key exchange algorithm is more secure
than the SHA-1 key exchange algorithms. Data ONTAP also supports ecdh-sha2-nistp256, ecdh-sha2-
nistp384, ecdh-sha2-nistp521, and curve25519-sha256. Data ONTAP also supports the AES and 3DES
symmetric encryptions (also known as ciphers) of the following types: aes256-ctr, aes192-ctr, aes128-ctr,
aes256-cbc, aes192-cbc, aes128-cbc, aes128-gcm, aes256-gcm, and 3des-cbc. Data ONTAP supports MAC
algorithms of the following types: hmac-sha1, hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160,
umac-64, umac-128, hmac-sha2-256, hmac-sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm, hmac-md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and umac-128-etm.
● Parameters
-vserver <Vserver Name> - Vserver
Identifies the Vserver to which you want to add additional SSH key exchange algorithms or ciphers.
[-key-exchange-algorithms <algorithm name>, ...] - List of SSH Key Exchange Algorithms to Add
Adds the specified SSH key exchange algorithm or algorithms to the Vserver.
[-ciphers <cipher name>, ...] - List of SSH Ciphers to Add
Adds the specified cipher or ciphers to the Vserver.
[-mac-algorithms <MAC name>, ...] - List of SSH MAC Algorithms to Add
Adds the specified MAC algorithm or algorithms to the Vserver.
● Examples
The following command adds the diffie-hellman-group-exchange-sha256 and diffie-hellman-group-
exchange-sha1 key exchange algorithms for the cluster1 Vserver. It also adds the aes256-cbc and aes192-
cbc ciphers and the hmac-sha1 and hmac-sha2-256 MAC algorithms to the cluster1 Vserver.
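The example command itself is not reproduced in this excerpt. Based on the description above and the documented parameters, a sketch of the invocation would be:
cluster1::> security ssh add -vserver cluster1 -key-exchange-algorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1 -ciphers aes256-cbc,aes192-cbc -mac-algorithms hmac-sha1,hmac-sha2-256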
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security ssh modify command replaces the existing configurations of the SSH key exchange
algorithms or ciphers or MAC algorithms for the cluster or a Vserver with the configuration settings you
specify. If you modify the cluster configuration settings, they will be used as the default for all newly created
Vservers. Data ONTAP supports the diffie-hellman-group-exchange-sha256 key exchange algorithm for
SHA-2. Data ONTAP also supports the diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1,
and diffie-hellman-group1-sha1 SSH key exchange algorithms for SHA-1. The SHA-2 key exchange algorithm
is more secure than the SHA-1 key exchange algorithms. Data ONTAP also supports the AES and 3DES
symmetric encryptions (also known as ciphers) of the following types: aes256-ctr, aes192-ctr, aes128-ctr,
aes256-cbc, aes192-cbc, aes128-cbc, aes128-gcm, aes256-gcm, and 3des-cbc. Data ONTAP supports MAC
algorithms of the following types: hmac-sha1, hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160,
umac-64, umac-128, hmac-sha2-256, hmac-sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm, hmac-md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and umac-128-etm.
● Parameters
-vserver <Vserver Name> - Vserver
Identifies the Vserver for which you want to replace the existing SSH key exchange algorithm and cipher
configurations.
[-key-exchange-algorithms <algorithm name>, ...] - Key Exchange Algorithms
Enables the specified SSH key exchange algorithm or algorithms for the Vserver. This parameter also
replaces all existing SSH key exchange algorithms with the specified settings.
[-ciphers <cipher name>, ...] - Ciphers
Enables the specified cipher or ciphers for the Vserver. This parameter also replaces all existing ciphers
with the specified settings.
[-mac-algorithms <MAC name>, ...] - MAC Algorithms
Enables the specified MAC algorithm or algorithms for the Vserver. This parameter also replaces all
existing MAC algorithms with the specified settings.
[-max-authentication-retry-count <integer>] - Max Authentication Retry Count
Modifies the maximum authentication retry count for the Vserver.
● Examples
The following command enables the diffie-hellman-group-exchange-sha256 and diffie-hellman-group14-
sha1 key exchange algorithms for the cluster1 Vserver. It also enables the aes256-ctr, aes192-ctr and
aes128-ctr ciphers, hmac-sha1 and hmac-sha2-256 MAC algorithms for the cluster1 Vserver. It also modifies
the maximum authentication retry count to 3 for the cluster1 Vserver:
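The example command itself is not reproduced in this excerpt. Based on the description above and the documented parameters, a sketch of the invocation would be:
cluster1::> security ssh modify -vserver cluster1 -key-exchange-algorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 -ciphers aes256-ctr,aes192-ctr,aes128-ctr -mac-algorithms hmac-sha1,hmac-sha2-256 -max-authentication-retry-count 3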
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command downgrades the SSH configurations of all Vservers and the cluster to settings compatible with
releases earlier than Data ONTAP 9.2.0. This command also disables the max-authentication-retry feature.
You must run this command in advanced privilege mode when prompted to do so during the release
downgrade. Otherwise, the release downgrade process will fail.
● Examples
The following command downgrades the SSH security configurations of all Vservers and the cluster to
settings compatible with releases earlier than Data ONTAP 9.2.0.
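The command itself is not reproduced in this excerpt. Assuming the command being documented is security ssh prepare-to-downgrade (an assumption; the name does not appear here), and noting that it runs at the advanced privilege level:
cluster1::*> security ssh prepare-to-downgrade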
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security ssh remove command removes the specified SSH key exchange algorithms or ciphers from
the existing configurations of the cluster or a Vserver. The removed algorithms or ciphers are disabled on the
cluster or Vserver. If you change the cluster configuration settings, they are used as the default for all
newly created Vservers. If the SSH key exchange algorithms or ciphers that you specify with this command
are not currently enabled, the command does not fail. Data ONTAP supports the diffie-hellman-
group-exchange-sha256 key exchange algorithm for SHA-2. Data ONTAP also supports the diffie-hellman-
group-exchange-sha1, diffie-hellman-group14-sha1, and diffie-hellman-group1-sha1 SSH key exchange
algorithms for SHA-1. The SHA-2 key exchange algorithm is more secure than the SHA-1 key exchange
algorithms. Data ONTAP also supports ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, and
curve25519-sha256. Data ONTAP also supports the AES and 3DES symmetric encryption (also known as
ciphers) of the following types: aes256-ctr, aes192-ctr, aes128-ctr, aes256-cbc, aes192-cbc, aes128-cbc,
aes128-gcm, aes256-gcm, and 3des-cbc. Data ONTAP supports MAC algorithms of the following types: hmac-sha1, hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160, umac-64, umac-128, hmac-sha2-256, hmac-sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm, hmac-md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and umac-128-etm.
● Parameters
-vserver <Vserver Name> - Vserver
Identifies the Vserver from which you want to remove the SSH key exchange algorithms or ciphers.
[-key-exchange-algorithms <algorithm name>, ...] - List of SSH Key Exchange Algorithms to Remove
Removes the specified key exchange algorithm or algorithms from the Vserver.
[-ciphers <cipher name>, ...] - List of SSH Ciphers to Remove
Removes the specified cipher or ciphers from the Vserver.
[-mac-algorithms <MAC name>, ...] - List of SSH MAC algorithms to Remove
Removes the specified MAC algorithm or algorithms from the Vserver.
● Examples
The following command removes the diffie-hellman-group1-sha1 and diffie-hellman-group-exchange-sha1
key exchange algorithms from the cluster1 Vserver. It also removes the aes128-cbc and 3des-cbc ciphers and
the hmac-sha1-96 and hmac-sha2-256 MAC algorithms from the cluster1 Vserver.
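The example command itself is not reproduced in this excerpt. Based on the description above and the documented parameters, a sketch of the invocation would be:
cluster1::> security ssh remove -vserver cluster1 -key-exchange-algorithms diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1 -ciphers aes128-cbc,3des-cbc -mac-algorithms hmac-sha1-96,hmac-sha2-256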
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The security ssh show command displays the configurations of the SSH key exchange algorithms,
ciphers, MAC algorithms and maximum authentication retry count for the cluster and Vservers. The SSH
protocol uses a Diffie-Hellman based key exchange method to establish a shared secret key during the SSH
negotiation phase. The key exchange method specifies how one-time session keys are generated for
encryption and authentication and how the server authentication takes place. Data ONTAP supports the
diffie-hellman-group-exchange-sha256 key exchange algorithm for SHA-2. Data ONTAP also supports the
diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, and diffie-hellman-group1-sha1 key
exchange algorithms for SHA-1. Data ONTAP also supports ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, and curve25519-sha256. Data ONTAP also supports the AES and 3DES symmetric encryptions
(also known as ciphers) of the following types: aes256-ctr, aes192-ctr, aes128-ctr, aes256-cbc, aes192-cbc,
aes128-cbc, aes128-gcm, aes256-gcm and 3des-cbc. Data ONTAP supports MAC algorithms of the following
types: hmac-sha1, hmac-sha1-96, hmac-md5, hmac-md5-96, hmac-ripemd160, umac-64, umac-128, hmac-sha2-256, hmac-sha2-512, hmac-sha1-etm, hmac-sha1-96-etm, hmac-sha2-256-etm, hmac-sha2-512-etm, hmac-md5-etm, hmac-md5-96-etm, hmac-ripemd160-etm, umac-64-etm, and umac-128-etm.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Identifies the Vserver for which you want to display the SSH key exchange algorithm, cipher, and MAC
algorithm configurations.
[-key-exchange-algorithms <algorithm name>, ...] - Key Exchange Algorithms
Displays the Vserver or Vservers that have the specified key exchange algorithms enabled.
[-ciphers <cipher name>, ...] - Ciphers
Displays the Vserver or Vservers that have the specified ciphers enabled.
[-mac-algorithms <MAC name>, ...] - MAC Algorithms
Displays the Vserver or Vservers that have the specified MAC algorithm or algorithms.
[-max-authentication-retry-count <integer>] - Max Authentication Retry Count
Displays Vservers with a matching maximum authentication retry count value.
● Examples
The following command displays the enabled SSH key exchange algorithms, ciphers, MAC algorithms, and
maximum authentication retry count for the cluster and all Vservers. The cluster settings are used
as the default for all newly created Vservers:
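The command itself is not reproduced in this excerpt; the basic invocation, as described above, is:
cluster1::> security ssh show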
These commands control the use of encrypted HTTP on Vservers, including whether SSL is available, which
certificate is used for communication, and whether OCSP checks are enabled.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command modifies the configuration of encrypted HTTP (SSL) for Vservers in the cluster. Depending on
the requirements of the individual node's or cluster's web services (displayed by the vserver services
web show command), this encryption might or might not be used. If the Vserver does not have a certificate
associated with it, SSL will not be available.
● Parameters
-vserver <Vserver Name> - Vserver
Identifies a Vserver for hosting SSL-encrypted web services.
[-ca <text>] - Server Certificate Issuing CA
Identifies a Certificate Authority (CA) of a certificate to be associated with the instance of a given
Vserver. If this parameter, along with serial, is omitted during modification, a self-signed SSL certificate
can be optionally generated for that Vserver.
[-serial <text>] - Server Certificate Serial Number
Identifies a serial number of a certificate to be associated with the instance of a given Vserver. If this
parameter, along with ca, is omitted during modification, a self-signed SSL certificate can be optionally
generated for that Vserver.
[-common-name <FQDN or Custom Common Name>] - Server Certificate Common Name
Identifies the common name (CN) of a certificate to be associated with the instance of a given Vserver.
This parameter becomes optional if serial and ca are specified. You can use the security
certificate create and security certificate install commands to add new certificates to
Vservers.
Note
The use of self-signed SSL certificates exposes users to man-in-the-middle security attacks. Where
possible, obtain a certificate that is signed by a reputable certificate authority (CA) and use the
security certificate install command to configure it before enabling SSL on a Vserver.
Note
A NONCE is a unique identifier included in each OCSP request or OCSP response to prevent a replay
attack.
● Examples
The following example enables SSL server authentication for a Vserver named vs0 with a certificate that has
ca as www.example.com and serial as 4F4EB629.
cluster1::> security ssl modify -vserver vs0 -ca www.example.com -serial 4F4EB629 -server-enabled true
The following example disables SSL server authentication for a Vserver named vs0.
The following example enables SSL client authentication for a Vserver named vs0.
The following example disables SSL client authentication for a Vserver named vs0.
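The commands for the last three examples are not reproduced in this excerpt. Assuming the -server-enabled and -client-enabled parameters shown for security ssl show also apply to security ssl modify (an assumption), sketches of the three invocations would be:
cluster1::> security ssl modify -vserver vs0 -server-enabled false
cluster1::> security ssl modify -vserver vs0 -client-enabled true
cluster1::> security ssl modify -vserver vs0 -client-enabled false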
Related reference
security certificate create (page 554)
security certificate install (page 560)
vserver services web show (page 2482)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command displays the configuration of encrypted HTTP (SSL) for Vservers in the cluster. Depending on
the requirements of the individual node's or cluster's web services (displayed by the vserver services
web show command), this encryption might or might not be used. If the Vserver does not have a certificate
associated with it, SSL will not be available.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-ocsp ]
If you specify the -ocsp parameter, the command displays the Online Certificate Status Protocol
configuration.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Identifies a Vserver for hosting SSL-encrypted web services.
[-ca <text>] - Server Certificate Issuing CA
Filters the display of SSL configuration by specifying the Certificate Authority (CA) that issued the server
certificate.
[-serial <text>] - Server Certificate Serial Number
Filters the display of SSL configuration by specifying the serial number of a server certificate.
[-common-name <FQDN or Custom Common Name>] - Server Certificate Common Name
Filters the display of SSL configuration by specifying the common name for the server certificate.
[-server-enabled {true|false}] - SSL Server Authentication Enabled
Filters the display of SSL configuration according to whether the SSL server authentication is enabled or
disabled. Vservers have self-signed certificates automatically generated during their creation. These
Vserver self-signed certificates are server-enabled by default.
[-client-enabled {true|false}] - SSL Client Authentication Enabled
Filters the display of SSL configuration according to whether the SSL client authentication is enabled or
disabled. You can enable client authentication only when server authentication is enabled.
[-ocsp-enabled {true|false}] - Online Certificate Status Protocol Validation Enabled
Filters the display of SSL configuration when the Online Certificate Status Protocol validation is enabled.
[-ocsp-default-responder <text>] - URI of the Default Responder for OCSP Validation
Filters the display of SSL configuration according to the URI of the default responder for OCSP validation.
[-ocsp-override-responder {true|false}] - Force the Use of the Default Responder URI for OCSP
Validation
Filters the display of SSL configuration, which forces the use of the default responder URI for OCSP
validation.
[-ocsp-responder-timeout <[<integer>h][<integer>m][<integer>s]>] - Timeout for OCSP Queries
Filters the display of SSL configuration according to the timeout for queries to OCSP responders.
[-ocsp-max-response-age <unsigned32_or_unlimited>] - Maximum Allowable Age for OCSP Responses
(secs)
Filters the display of SSL configuration according to the maximum allowable age (freshness) in seconds
for the OCSP responses.
[-ocsp-max-response-time-skew <[<integer>h][<integer>m][<integer>s]>] - Maximum Allowable
Time Skew for OCSP Response Validation
Filters the display of SSL configuration according to the maximum allowable time difference for OCSP
responses (when validating their ThisUpdate and NextUpdate fields).
[-ocsp-use-request-nonce {true|false}] - Use a NONCE within OCSP Queries
Filters the display of SSL configuration by specifying whether the queries to the OCSP responders should
contain a NONCE or not.
Note
A NONCE is a unique identifier included in each OCSP request or OCSP response to prevent a replay
attack.
● Examples
The following example displays the configured certificates for Vservers.
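The example output is not reproduced in this excerpt; the basic invocation is:
cluster1::> security ssl show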
Related reference
vserver services web show (page 2482)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays information about the status of the Trusted Platform Module (TPM) device. By
default, this command displays the following information:
• Node name
• Availability of the device
• State of the device, if available
• Firmware version
• Firmware upgrade counter
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-is-available {yes|no}] - Is TPM Available?
Selects the nodes that match this parameter value.
• yes - The TPM device is mounted and available.
• no - The node does not support a TPM.
[-is-active {yes|no}] - Is TPM Active?
Selects the nodes that match this parameter value.
• yes - The TPM device is available and active.
• no - If the -is-available parameter is yes, the TPM device is mounted and available but is not
responding to TPM commands.
[-version <text>] - Firmware Version
Selects the nodes that match this firmware version.
[-upgrade-count <integer>] - Firmware Counter
Selects the nodes that match the specified number of remaining firmware upgrade attempts.
● Examples
The snaplock commands manage compliance-related functionality in the system. A volume created using
the volume create command becomes a SnapLock volume when it is created on a SnapLock aggregate. A
SnapLock aggregate is created using the storage aggr create command when the snaplock-type is
specified as either compliance or enterprise.
The snaplock compliance-clock command can be used to manage the ComplianceClock in the system.
The snaplock log command can be used to manage SnapLock log infrastructure.
Related reference
volume create (page 1627)
SnapLock Clock commands (page 727)
SnapLock Log commands (page 741)
Related reference
snaplock compliance-clock show (page 728)
volume snaplock show (page 1835)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snaplock compliance-clock initialize command initializes the System ComplianceClock from
the system clock. The System ComplianceClock can be initialized only once. Once initialized, the System
ComplianceClock cannot be changed, so make sure that the system clock is correct before initializing the
System ComplianceClock.
● Parameters
-node {<nodename>|local} - Node
Specifies the name of the node on which System ComplianceClock needs to be initialized.
[-force [true]] - Forces Initialization
If you use this parameter, the warning message displayed during the snaplock compliance-clock
initialize operation is suppressed.
● Examples
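The command that produces the warning shown below might be entered as follows (a representative sketch;
the node name is taken from the warning text):
cluster-1::> snaplock compliance-clock initialize -node node1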
Warning: You are about to initialize the secure ComplianceClock of the node
node1 to the current value of the node's system clock. This
procedure can be performed only once on a given node, so you should
ensure that the system time is set correctly before proceeding.
The current node's system clock is: Wed Nov 26 16:18:30 IST 2014
cluster-1::>
Related reference
snaplock compliance-clock show (page 728)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snaplock compliance-clock show command displays the System ComplianceClock of the nodes in
the cluster. It displays the following information:
• Node name
• ComplianceClock Time
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command will display ComplianceClock for that particular node only.
[-time <text>] - ComplianceClock Time of the Node
If this parameter is specified, the command will display nodes having the same -time value.
● Examples
The following example shows the output when the session privilege level is "diagnostic".
Node: node1
ComplianceClock Time: Mon Jan 12 11:37:55 IST 2015 +05:30
Node ID: 4040216954
ID: 1418640203778
ComplianceClock Time (secs): 1419002188
ComplianceClock Skew (secs): 1014
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The snaplock compliance-clock ntp modify command modifies the option to enable or disable the
SnapLock ComplianceClock synchronization with the system time. The ComplianceClock is synchronized only
when an NTP server has been configured so that the system time follows the NTP time and the skew
between the ComplianceClock time and the system time is greater than 1 day.
● Parameters
[-is-sync-enabled {true|false}] - Enable ComplianceClock sync to NTP system time
Specifies whether synchronization should be enabled or not. This is a cluster-wide option.
● Examples
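A representative invocation (the value shown is illustrative) might be:
cluster1::> snaplock compliance-clock ntp modify -is-sync-enabled true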
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The snaplock compliance-clock ntp show command will display ComplianceClock synchronization
setting. It will display the following information:
• is-sync-enabled - Displays if the option to synchronize the ComplianceClock with system time has been
enabled or not.
● Examples
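A representative invocation might be:
cluster1::> snaplock compliance-clock ntp show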
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention abort command aborts an ongoing Event Based Retention (EBR)
operation. This command only aborts operations that have not yet completed. Only a user with security
login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver on which the EBR operation is running.
-operation-id <integer> - Operation ID
Specifies the operation ID of the EBR operation to be aborted.
● Examples
The following example aborts an ongoing EBR operation with operation-id 16842754:
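A representative command line (the Vserver name "vs1" is a placeholder) might be:
vs1::> snaplock event-retention abort -vserver vs1 -operation-id 16842754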
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention apply command starts a new operation to apply the specified Event
Based Retention (EBR) policy to all files in the specified path. If a file is a regular file, it will be made a
WORM file and retained for a retention-period as defined by the specified policy name. If a file is already
WORM, its retention time will be extended to a retention-period as defined by the specified policy name,
starting from the current time. The retention time of a file will be extended only if the file's current retention
time is less than the new retention time value to be set. Only a user with security login role vsadmin-
snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver which has the EBR policy defined to be applied on one or more files.
-policy-name <text> - Policy Name
Specifies the name of the EBR policy to be applied on one or more files.
-volume <volume name> - Volume
Specifies the name of the SnapLock volume containing a file path or a directory path as specified by the
path parameter. The specified EBR policy is applied to one or more files depending on the value of path.
-path <text> - Path
Specifies the path relative to the volume root, of the form "/path". The path can be a path to a file
or a directory. The EBR policy is applied to all files under the specified path. To apply the EBR policy to all
files in a volume, specify the path as "/".
● Examples
The following example starts an EBR operation to apply a policy on files for specified volume:
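A representative command line (the Vserver, policy, and volume names are placeholders) might be:
vs1::> snaplock event-retention apply -vserver vs1 -policy-name p1 -volume slc -path /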
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention show command displays the status of an Event Based Retention (EBR)
operation. Information about completed operations is cleaned up an hour after completion. Only a user
with security login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays all EBR operations that match the specified Vserver.
[-operation-id <integer>] - Operation ID
If this parameter is specified, the command displays all EBR operations that match the specified
operation ID.
[-volume <volume name>] - Volume Name
If this parameter is specified, the command displays all EBR operations that match the specified volume.
The parameter specifies the volume on which EBR operation is running or has completed.
[-path <text>] - Path
If this parameter is specified, the command displays all EBR operations that match the specified path.
The parameter specifies the path on which EBR operation is running or has completed.
[-policy-name <text>] - Policy Name
If this parameter is specified, the command displays all EBR operations that match the specified policy
name. The parameter specifies the EBR policy name.
[-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] - Retention
Period
If this parameter is specified, the command displays all EBR operations that match the specified
retention period. The parameter specifies the retention period of the EBR policy.
[-num-files-processed <integer>] - Number of Files Processed
If this parameter is specified, the command displays all EBR operations that match the specified number
of processed files. The parameter specifies the number of files on which EBR policy was applied
successfully.
[-num-files-failed <integer>] - Number of Files Failed
If this parameter is specified, the command displays all EBR operations that match the specified number
of failed files. The parameter specifies the number of files on which the application of EBR policy failed.
[-num-files-skipped <integer>] - Number of Files Skipped
If this parameter is specified, the command displays all EBR operations that match the specified number
of skipped files. The parameter specifies the number of files on which the application of EBR policy was
skipped. A file that is under legal-hold will be skipped. If the retention time of a file is being shortened
as a result of applying the EBR policy, that file will also be skipped.
[-num-inodes-ignored <integer>] - Number of Inodes Ignored
If this parameter is specified, the command displays all EBR operations that match the specified number
of ignored inodes. The parameter specifies the number of inodes on which the application of EBR policy
was not attempted because they were not regular files.
[-operation-status {Unknown|In-Progress|Failed|Aborting|Completed}] - Operation Status
If this parameter is specified, the command displays all EBR operations that match the specified
operation status. The parameter specifies the operation status of an EBR operation.
[-status-details <text>] - Status Details
If this parameter is specified, the command displays all EBR operations that match the specified status
details. The parameter specifies the status details of an EBR operation.
● Examples
The following examples show the status of EBR operations for Vserver "vs1" and volume "slc" and the status
of event-retention operation for operation ID 16842753 respectively.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention show-vservers command is used to display the Vservers that have
SnapLock Event Based Retention (EBR) policies created.
● Parameters
[-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
● Examples
The following example displays all Vservers that have SnapLock EBR policies:
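A representative invocation might be:
cluster1::> snaplock event-retention show-vservers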
vs1
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention policy create command is used to create Event Based Retention
(EBR) policies for a Vserver. A policy consists of a policy-name and a retention-period. Only a user with
security login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which a policy needs to be created.
-name <text> - Policy Name
Specifies the name of the EBR policy to be created.
-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite} - Event
Retention Period
Specifies the retention period for an EBR policy.
● Examples
The following example creates a new EBR policy "p1" for Vserver "vs1" with a retention period of "10 years":
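A representative command line (names taken from the sentence above) might be:
vs1::> snaplock event-retention policy create -vserver vs1 -name p1 -retention-period "10 years"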
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention policy delete command is used to delete Event Based Retention
(EBR) policies for a Vserver. Only a user with security login role vsadmin-snaplock is allowed to perform this
operation.
● Parameters
-vserver <vserver name> - Vserver Name
If this parameter is specified, the command deletes all EBR policies that match the specified Vserver.
-name <text> - Policy Name
If this parameter is specified, the command deletes all EBR policies that match the specified name.
● Examples
The following example deletes retention policy "p1" for Vserver "vs1":
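A representative command line (names taken from the sentence above) might be:
vs1::> snaplock event-retention policy delete -vserver vs1 -name p1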
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention policy modify command is used to modify the retention period of an
Event Based Retention (EBR) policy for a Vserver. Only a user with security login role vsadmin-snaplock is
allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which retention period of a policy needs to be modified.
-name <text> - Policy Name
Specifies the name of the EBR policy for which the retention period needs to be modified.
[-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] - Event
Retention Period
Specifies the new value of retention period.
● Examples
The following example modifies the retention period of policy "p1" for Vserver "vs1" to "5 years":
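A representative command line (names taken from the sentence above) might be:
vs1::> snaplock event-retention policy modify -vserver vs1 -name p1 -retention-period "5 years"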
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock event-retention policy show command is used to show Event Based Retention (EBR)
policies for a Vserver. A policy consists of a policy-name and a retention-period. The command output
depends on the parameter or parameters specified. If no parameters are specified, all policies for all Vservers
will be displayed. If one or more parameters are specified, only those entries matching the specified values
will be displayed. Only a user with security login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays all EBR policies that match the specified Vserver.
[-name <text>] - Policy Name
If this parameter is specified, the command displays all EBR policies that match the specified name.
[-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] - Event
Retention Period
If this parameter is specified, the command displays all EBR policies that match the specified
-retention-period value.
● Examples
The following example displays all event-retention policies for vserver "vs1":
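A representative command line (the Vserver name is a placeholder) might be:
vs1::> snaplock event-retention policy show -vserver vs1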
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock legal-hold abort command aborts an ongoing legal-hold operation. The types of
legal-hold operations that can be aborted using this command are begin, end, and dump-files. This
command only aborts operations that have not yet completed. Only a user with security login role
vsadmin-snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the vserver on which the legal-hold operation is running.
-operation-id <integer> - Operation ID
Specifies the operation ID of the legal-hold operation to be aborted.
● Examples
The following example aborts an ongoing legal-hold operation with operation-id 16842754:
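A representative command line (the Vserver name "vs1" is a placeholder) might be:
vs1::> snaplock legal-hold abort -vserver vs1 -operation-id 16842754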
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock legal-hold begin command is used to place specified file or files under legal-hold for a
given litigation. Only a user with security login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver which owns the volume. The specified file or files to be placed under
legal-hold reside on this volume.
-litigation-name <text> - Litigation Name
Specifies the name of the litigation for which the file or files have to be placed under legal-hold.
-volume <volume name> - Volume
Specifies the name of the SnapLock compliance volume on which the file or files to be placed under
legal-hold reside.
-path <text> - Path
Specifies a path relative to the volume root. The path can be either a file path of the single file to be
placed under legal-hold or a directory path where all regular files under it must be placed under legal-
hold.
● Examples
The following example starts a legal-hold begin operation on file file1 in volume slc_vol1:
vs1::> snaplock legal-hold begin -litigation-name litigation1 -volume slc_vol1 -path /file1
SnapLock legal-hold begin operation is queued. Run "snaplock legal-hold show -operation-id 16842773 -instance" to view the operation status.
The following example starts a legal-hold begin operation on all files in the volume slc_vol1:
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock legal-hold dump-files command dumps the list of files under legal-hold for a given
Vserver, volume, and litigation to an auto-generated file in the user-specified path. Only a user with
security login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the list of files under legal-hold is to be dumped.
-litigation-name <text> - Litigation Name
Specifies the name of the litigation for which the list of files under legal-hold is to be dumped.
-volume <volume name> - Volume Name
Specifies the name of the SnapLock compliance volume for which the list of files under legal-hold is to
be dumped.
-output-volume <volume name> - Output Volume Name
Specifies the name of the output volume containing the output directory path where the list of files
under legal-hold is to be dumped. The output volume must be a regular read-write volume.
-output-directory-path <text> - Path Relative to Output Volume Root
Specifies the output directory path relative to the output volume root, where the list of files under legal-
hold is to be dumped. The output directory path should be of the form "/directory-path". If output needs
to be dumped on the volume root, specify the path as "/".
● Examples
The following example starts a legal-hold dump-files operation:
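A representative command line (all names are placeholders; "vol_rw" stands for a regular read-write
output volume) might be:
vs1::> snaplock legal-hold dump-files -vserver vs1 -litigation-name litigation1 -volume slc_vol1 -output-volume vol_rw -output-directory-path /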
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock legal-hold dump-litigations command dumps the list of litigations for a given Vserver
to a user-specified path. Under the user-specified path, a directory with an auto-generated name is
created. Under this directory, multiple files are created. Each file represents a unique litigation name that
was found in the given Vserver. Each file contains a list of volume names that have files under legal-hold
for that litigation. For example, if the file name is "lit1" and the contents of the file are "volume1" and
"volume2", then both these volumes have files under legal-hold for litigation "lit1". Only a user with
security login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the list of litigations is to be dumped.
[-volume <volume name>] - Volume Name
If this parameter is specified, the command displays the list of litigations for volume that matches the
specified value. The volume must be of type SnapLock compliance.
-output-volume <volume name> - Output Volume Name
Specifies the name of the output volume containing the output directory path where the list of
litigations is to be dumped. The output volume must be a regular read-write volume.
-output-directory-path <text> - Path Relative to Output Volume Root
Specifies the output directory path relative to the volume root, where the list of litigations is to be
dumped. The output directory path should be of the form "/directory-path". If output needs to be
dumped to the volume root, specify the path as "/".
● Examples
The following example starts a legal-hold dump-litigations job:
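A representative command line (names are placeholders; "vol_rw" stands for a regular read-write output
volume) might be:
vs1::> snaplock legal-hold dump-litigations -vserver vs1 -output-volume vol_rw -output-directory-path /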
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock legal-hold end command is used to release legal-hold on specified file or files for a given
litigation. Only a user with security login role vsadmin-snaplock is allowed to perform this operation.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver which owns the volume. The specified file or files to be released from
legal-hold reside on this volume.
-litigation-name <text> - Litigation Name
Specifies the name of the litigation for which the file or files have to be released from legal-hold.
-volume <volume name> - Volume
Specifies the name of the SnapLock compliance volume on which the file or files to be released from
legal-hold reside.
-path <text> - Path
Specifies a path relative to the volume root. The path can be either a file path of the single file to be
released from legal-hold or a directory path where all regular files under it must be released from legal-
hold.
● Examples
The following example starts a legal-hold end operation on file file1 in volume slc_vol1:
vs1::> snaplock legal-hold end -litigation-name litigation1 -volume slc_vol1 -path /file1
SnapLock legal-hold end operation is queued. Run "snaplock legal-hold show -operation-id 16842773 -instance" to view the operation status.
The following example starts a legal-hold end operation on all files in the volume slc_vol1:
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock legal-hold show command displays the status of a legal-hold operation. Information
about completed operations is cleaned up an hour after completion. Only a user with security login
role vsadmin-snaplock is allowed to perform this operation.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays all legal-hold operations that match the specified
Vserver.
[-operation-id <integer>] - Operation ID
If this parameter is specified, the command displays all legal-hold operations that match the specified
operation ID.
[-volume <volume name>] - Volume Name
If this parameter is specified, the command displays all legal-hold operations that match the specified
volume. The parameter specifies the volume on which legal-hold operation is running or has completed.
[-path <text>] - Path
If this parameter is specified, the command displays all legal-hold operations that match the specified
path. The parameter specifies the path on which legal-hold operation is running or has completed.
[-litigation-name <text>] - Litigation Name
If this parameter is specified, the command displays all legal-hold operations that match the specified
litigation name. The parameter specifies the legal-hold litigation name.
[-operation-type {unknown|begin|end|dump-files}] - Operation Type
If this parameter is specified, the command displays all legal-hold operations that match the specified
operation type. The parameter specifies the type of legal-hold operation.
[-operation-status {Unknown|In-Progress|Failed|Aborting|Completed}] - Operation Status
If this parameter is specified, the command displays all legal-hold operations that match the specified
operation status. The parameter specifies the status of legal-hold operation.
[-num-files-processed <integer>] - Number of Files Processed
If this parameter is specified, the command displays all legal-hold operations that match the specified
number of files processed. The parameter specifies the number of files on which legal-hold operation
was successful.
[-num-files-failed <integer>] - Number of Files Failed
If this parameter is specified, the command displays all legal-hold operations that match the specified
number of files failed. The parameter specifies the number of files on which legal-hold operation failed.
[-num-files-skipped <integer>] - Number of Files Skipped
If this parameter is specified, the command displays all legal-hold operations that match the specified
number of files skipped.
The parameter specifies the number of files on which legal-hold begin operation was skipped. The legal-
hold begin operation is skipped on a file if it is already under hold for a given litigation or if it is a hard
link to a file that is already under hold for a given litigation.
[-num-inodes-ignored <integer>] - Number of Inodes Ignored
If this parameter is specified, the command displays all legal-hold operations that match the specified
number of inodes ignored. The parameter specifies the number of inodes on which the legal-hold
operation was not attempted because they were not regular files.
[-status-details <text>] - Status Details
If this parameter is specified, the command displays all legal-hold operations that match the specified
status details. The parameter specifies the status details of a legal-hold operation.
● Examples
The following examples show the status of legal-hold operations for Vserver vs1 and volume slc_vol1 and
the status of legal-hold operation for operation ID 16842786 respectively:
Vserver: vs1
Volume: slc_vol1
Operation ID: 16842786
Litigation Name: litigation1
Path: /
Operation Type: begin
Status: Completed
Number of Files Processed: 100
Number of Files Failed: 15
Number of Files Skipped: 20
Number of Inodes Ignored: 0
Status Details: No error
The snaplock log commands manage SnapLock log infrastructure for the SnapLock feature. This
infrastructure provides the capability to record events that are required to provide an audit trail. These
commands enable you to create and initialize the SnapLock log configuration for the Vserver, modify
attributes associated with the SnapLock log configuration, and delete the auditlog configuration. Attributes
of a SnapLock log configuration include the following:
• SnapLock log volume
• Maximum log size
• Default retention period
The SnapLock log volume is a SnapLock Compliance volume or a SnapLock Enterprise volume. If a SnapLock
Enterprise volume is chosen as the log volume, the volume and the hosting aggregate cannot be deleted
until the volume expires. The SnapLock log infrastructure creates directories and files in this volume to store
the SnapLock log records.
The maximum log size specifies the maximum size of a log file that stores SnapLock log records. Once the
file reaches this size, it is archived and a new log file is created.
The default retention period is the time period for which the log file is retained, if the SnapLock log records
that are stored in the file do not carry any retention period.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock log create command creates a SnapLock log configuration for the Vserver. A
SnapLock log configuration consists of the volume used to store the log, the maximum size of the log file,
and the default period of time for which the log file should be retained.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the configuration needs to be created.
-volume <volume name> - Log Volume Name
Specifies the name of the volume that is used for logging. This must be a SnapLock Compliance volume.
[-max-log-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum Size of Log File
Specifies the maximum size of the log file. Once a log file reaches this limit, it is archived and a new log
file is created. This parameter is optional. The default value is 10MB.
[-default-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] -
Default Log Record Retention Period
Specifies the default period of time a record (which is logged) is retained. This parameter is optional.
The default value is "6 months".
● Examples
cluster1::> snaplock log create -volume vol1 -max-log-size 50MB -default-retention-period "1 year" -vserver vs1
[Job 47] Job succeeded: SnapLock log created for Vserver "vs1".
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock log delete command deletes the SnapLock log configuration associated with the Vserver.
This command closes all the active log files in the log volume and marks the volume as disabled for
SnapLock logging.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver whose SnapLock log configuration is deleted.
● Examples
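A representative command line (the Vserver name is a placeholder) might be:
cluster1::> snaplock log delete -vserver vs1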
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock log modify command modifies the SnapLock log configuration of the Vserver. The log
volume, maximum log file size, and default retention period can be modified. If the log volume is modified,
the active log files in the existing log volume are closed and the log volume is marked as disabled for
logging. The new log volume is enabled for logging.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which the SnapLock log configuration needs to be modified.
[-volume <volume name>] - Log Volume Name
Specifies the new log volume that is configured for this Vserver for logging.
[-max-log-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum Size of Log File
Specifies the new value for maximum log file size.
[-default-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] -
Default Log Record Retention Period
Specifies the new value for default retention period.
● Examples
cluster1::> snaplock log modify -volume vol1 -vserver vs1 -max-log-size 15MB
[Job 48] Job succeeded: SnapLock log modified for Vserver "vs1".
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock log show command displays the following information about the SnapLock log
infrastructure:
• Vserver name
• Volume name
• Maximum log size
• Default retention period
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, the command displays the log information for Vservers that match the
specified value.
[-volume <volume name>] - Log Volume Name
If this parameter is specified, the command displays the log configuration for volumes that match the
specified value.
[-max-log-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum Size of Log File
If this parameter is specified, the command displays the log configuration with a matching -max-log-
size value.
[-default-retention-period {{<integer> seconds|minutes|hours|days|months|years} | infinite}] -
Default Log Record Retention Period
If this parameter is specified, the command displays the log configuration with a matching -default-
retention-period value.
● Examples
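A typical invocation, using illustrative names, displays the log configuration of all Vservers:
cluster1::> snaplock log show
To display the configuration of a single Vserver:
cluster1::> snaplock log show -vserver vs1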
The snaplock log file commands manage the log files used for recording events that need to be
logged. Commands in this directory enable you to archive and close log files and display active log files.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock log file archive command archives the currently active log file by closing it and
creating a new active log file. If base-name is not provided, the command archives all active log files
associated with the Vserver. Otherwise, the command archives the active log file associated with the base-
name provided.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the name of the Vserver for which active log files need to be archived.
[-base-name {privileged-delete | system | legal-hold}] - Base Name of Log File
Specifies the log base-name, whose active log file needs to be archived. The base-name is the name of
the source of log records. Valid base-names are system, privileged-delete and legal-hold. Each base-
name has its own directory in which log files containing log records generated by base-name are stored.
● Examples
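For example, to archive the active log file for the privileged-delete base-name of Vserver vs1 (names are illustrative):
cluster1::> snaplock log file archive -vserver vs1 -base-name privileged-delete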
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snaplock log file show command displays the following information about the log files:
• Vserver name
• Volume name
• File path
• Expiry time of the file
• File size
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
If this parameter is specified, then log files in the Vserver that match the specified value are displayed.
[-base-name {privileged-delete | system | legal-hold}] - Base Name of Log File
If this parameter is specified, then the log files having a matching -base-name are displayed.
[-volume <volume name>] - Log Volume Name
If this parameter is specified, then the log files in volumes that match the specified value are shown.
[-file-path <text>] - Log File Path
If this parameter is specified, then the log files that match the specified value are displayed.
[-expiry-time <text>] - Log File Expiry Time
If this parameter is specified, then the log files having a matching -expiry-time value are displayed.
[-file-size {<integer>[KB|MB|GB|TB|PB]}] - File Size
If this parameter is specified, then the log files having a matching -file-size value are displayed.
● Examples
Vserver : vs1
Volume : vol1
Base Name : system
File Path : /vol/vol1/snaplock_log/system_logs/20160120_183756_GMT-present
Expiry Time : Wed Jul 20 18:37:56 GMT 2016
File Size : 560B
The snapmirror commands enable you to create and manage data protection mirrors, extended data
protection relationships, and load-sharing mirrors.
These commands are available to the cluster and Vserver administrators.
Note that there are "Pre 8.2" relationships: (1) load-sharing relationships; (2) data protection relationships
with the source volume on a storage system running clustered Data ONTAP 8.1; (3) data protection
relationships that existed before the source and destination storage systems were upgraded from clustered
Data ONTAP 8.1 to clustered Data ONTAP 8.2 and later and have not yet been converted to ones with Data
ONTAP 8.2 capabilities. These relationships have the same limitations as on Data ONTAP 8.1. In particular,
only snapmirror commands present on Data ONTAP 8.1 can be used for these relationships. The
"Relationship Capability" field, as shown in the output of the snapmirror show command, is set to "Pre 8.2"
for these relationships. "Pre 8.2" data protection relationships can only be created and managed by the
cluster administrator; load-sharing relationships which are all "Pre 8.2" can be created and managed by
either a cluster or Vserver administrator.
Data protection relationships that existed before the source and destination storage systems were upgraded
from Data ONTAP 8.1 will be auto-converted to "8.2 and above" with full capabilities when a Vserver peering
relationship is set up between the source and destination Vservers. Relationships that have both the source
and destination in the same Vserver and therefore require no Vserver peering relationship will be converted
on the first boot when all nodes in the storage system are running Data ONTAP 8.2 or later. Note that since
there is no "8.2 and above" implementation of load-sharing relationships, there is no conversion of load-
sharing relationships to "8.2 and above".
When a SnapMirror relationship is created with the type of the relationship specified as data protection
(DP), an extended data protection (XDP) relationship will be created. This relationship will provide the same
functional behavior as the DP type in terms of Snapshot copy selection and retention. This is achieved by
using a policy which has the rule to replicate all Snapshot copies. There is an exception to this relationship
creation behavior of overriding of the type DP. If a relationship of type DP in the opposite direction already
exists, the new relationship will also be of type DP. The type DP will eventually be deprecated and
applications should not use this type when creating new relationships. When the type is specified as DP or
when no type is specified, if a policy of type async-mirror is specified that does not have the rule to replicate
all Snapshot copies, the snapmirror create command will fail.
Related reference
snapmirror show (page 800)
snapmirror create (page 752)
snapmirror abort
Abort an active transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror abort command stops SnapMirror transfers that might have started and not completed.
A SnapMirror transfer is an operation on a given SnapMirror relationship and the relationship is identified by
its destination endpoint, which can be a volume, a Vserver, or a non-Data ONTAP endpoint. You identify the
SnapMirror relationship with this command and the command aborts the transfer for the relationship. For
load-sharing mirrors, the command also aborts transfers for other relationships that are part of the same
load-sharing set. For SolidFire destination endpoints, the snapmirror abort command is only supported if
the endpoint is in a SnapMirror relationship.
Load-sharing mirrors are either up to date and serving data to clients, or they are lagging and not serving
data to clients. If the snapmirror abort command identifies an up-to-date load-sharing mirror, then
SnapMirror transfers to the up-to-date load-sharing mirror and associated up-to-date load-sharing mirrors in
the set of load-sharing mirrors are aborted. If the snapmirror abort command identifies a lagging load-
sharing mirror, then only the SnapMirror transfer associated with the lagging load-sharing mirror is aborted.
After the snapmirror abort command successfully completes its operation, the volume on the receiving
side of the transfer might contain a restart checkpoint. The restart checkpoint can be used by a subsequent
transfer to restart and continue the aborted SnapMirror transfer.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The use of wildcards in parameter values is not supported from the source Vserver or cluster for relationships
with "Relationship Capability" of "8.2 and above".
You can use this command from the source or the destination Vserver or cluster for FlexVol volume
relationships.
For SnapMirror Synchronous relationships, this command aborts any ongoing transfer and takes the
relationship OutOfSync. This can result in primary client IO failure for relationships with a policy of type
strict-sync-mirror. Instead, the best practice recommendation is to use the snapmirror quiesce
command.
For Vserver SnapMirror relationships, this command must be run only from the cluster containing the
destination Vserver.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This parameter is
not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified,
parameters -destination-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -
destination-cluster must also be specified. This parameter is not supported for relationships with
non-Data ONTAP destination endpoints.
[-hard | -h [true]] - Discard Restart Checkpoint
If this option is specified true, the restart checkpoint is discarded and the destination volume is restored
to the last Snapshot copy that was successfully transferred. You can use the -hard option to discard the
restart checkpoint of a previous transfer attempt which forces the subsequent transfer to start with a
fresh Snapshot copy on the destination volume. This option can only be used from the destination
Vserver or cluster. This parameter is not supported for relationships with non-Data ONTAP endpoints.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the
default setting is true (the operation runs in the foreground). When set to true, the command will not
return until the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
● Examples
To stop the active SnapMirror replication to the destination volume vs2.example.com:dept_eng_dp_mirror1,
type the following command:
cluster2::> snapmirror abort -destination-path vs2.example.com:dept_eng_dp_mirror1
For relationships with "Relationship Capability" of "Pre 8.2", to stop the active SnapMirror replication to the
destination volume cluster2://vs2.example.com/dept_eng_dp_mirror1, type the following command:
cluster2::> snapmirror abort -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror1
To stop the active SnapMirror replication to the destination Vserver dvs1.example.com, type the following
command:
cluster2::> snapmirror abort -destination-path dvs1.example.com:
Under PVR control, to stop user-initiated active SnapMirror replication to the destination Consistency Group
cg_dst in Vserver vs2.example.com, type the following command:
cluster2::> snapmirror abort -destination-path vs2.example.com:/cg/cg_dst
Related reference
job stop (page 205)
snapmirror quiesce (page 779)
snapmirror show (page 800)
snapmirror break
Make SnapMirror destination writable
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror break command breaks a SnapMirror relationship between a source and destination
endpoint of a data protection mirror. The destination endpoint can be a Vserver, volume or SolidFire
endpoint. When Data ONTAP breaks the relationship, if the endpoint is a volume or SolidFire endpoint, the
destination is made read/write and can diverge from the source volume, client redirection is turned off on
the destination, the restart checkpoint is cleared, and the clients can see the latest Snapshot copy. If the
endpoint is a Vserver, the subtype of the destination Vserver is changed to default, volumes in the
destination Vserver are made read/write and the clients can now access the Vserver namespace for
modifications. For SolidFire destination endpoints, the snapmirror break command is only supported if
the endpoint is in a SnapMirror relationship.
Subsequent manual or scheduled SnapMirror updates to the broken relationship will fail until the
SnapMirror relationship is reestablished using the snapmirror resync command.
This command applies to data protection mirrors. For vault relationships, this command is only intended for
use when preparing for a Data ONTAP revert operation (see the -delete-snapshots parameter in
advanced privilege level). This command is not intended for use with load-sharing mirrors.
For relationships with a policy of type strict-sync-mirror or sync-mirror, the relationship must be Quiesced
before running the snapmirror break command.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The snapmirror break command must be used from the destination Vserver or cluster.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This parameter is
not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified,
parameters -destination-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -
destination-cluster must also be specified. This parameter is not supported for relationships with
non-Data ONTAP destination endpoints.
[-force | -f [true]] - Force
If this parameter is specified, the command proceeds without prompting for confirmation.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the
default setting is true (the operation runs in the foreground). When set to true, the command will not
return until the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
[-delete-snapshots [true]] - Delete Snapshots for Revert (privilege: advanced)
Using this parameter causes break to delete Snapshot copies on a vault destination so that the system
can be reverted. Note that the only Snapshot copies that will be deleted are those that were created
with the current version of Data ONTAP. Any Snapshot copies that might be present created with a
different version will not be deleted. This parameter is not supported for SnapLock Compliance
destinations.
● Examples
To stop the SnapMirror replication to the destination volume vs2.example.com:dept_eng_dp_mirror1, type
the following command:
cluster2::> snapmirror break -destination-path vs2.example.com:dept_eng_dp_mirror1
For relationships with "Relationship Capability" of "Pre 8.2", to stop the SnapMirror replication to the
destination volume cluster2://vs2.example.com/dept_eng_dp_mirror1, type the following command:
cluster2::> snapmirror break -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror1
To stop replication to the destination Vserver dvs1.example.com of a Vserver SnapMirror relationship, type
the following command:
cluster2::> snapmirror break -destination-path dvs1.example.com:
Under PVR control, to stop synchronous SnapMirror replication to the destination Consistency Group cg_dst in
Vserver vs2.example.com, type the following command:
cluster2::> snapmirror break -destination-path vs2.example.com:/cg/cg_dst
Related reference
snapmirror resync (page 792)
snapmirror show (page 800)
snapmirror create
Create a new SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror create command creates a SnapMirror relationship between a source and destination
endpoint. You can use this command to create a data protection relationship, an extended data protection
relationship, or a load-sharing relationship between FlexVol volumes. You can also use it to create a data
protection relationship between Vservers. A SnapMirror relationship between Vservers can only be created if
the system containing the source Vserver is also running Data ONTAP 8.3 or later. You can also use the
snapmirror create command to create an extended data protection relationship between FlexGroups.
FlexGroups only support extended data protection relationships. A SnapMirror relationship between
FlexGroups is only supported if the system containing the source FlexGroup volume is also running Data
ONTAP 9.1.0 or later. The source or destination of a FlexGroup SnapMirror relationship cannot be the source
or destination of any other SnapMirror relationship.
You can use the snapmirror create command to create a data protection relationship or an extended
data protection relationship between SnapLock source and SnapLock destination endpoints. When the
cluster containing the source is running ONTAP 9.5.0 or later, the default relationship type is extended data
protection (XDP), otherwise it is data protection (DP).
A SnapMirror relationship can also be created between a non-SnapLock source and a SnapLock destination
to WORM-protect Snapshot copies. In this case, the relationship type is always set to extended data
protection (XDP).
The snapmirror create command can be used to create an extended data protection (XDP) relationship
between a Data ONTAP volume and a non-Data ONTAP endpoint that supports SnapMirror (AltaVault,
SolidFire). AltaVault endpoints can only be used as destinations. SolidFire endpoints can be used as sources
or destinations.
The snapmirror create command can be used to create a synchronous relationship between FlexVol
volumes, which provides zero RPO data protection. SnapMirror Synchronous supports two policy types, sync-
mirror and strict-sync-mirror. Upon a permanent replication failure, the strict-sync-mirror variant restricts
further client IO on the primary, whereas the sync-mirror variant does not.
Before using this command to create a SnapMirror relationship between Vservers, you typically create a
source and destination Vserver using the vserver create command. The source Vserver should be of
subtype default and the destination Vserver of subtype dp-destination. Also, before creating the relationship
between Vservers, you must set up a Vserver peering relationship by using the vserver peer create command between
the source and destination Vservers. A Vserver relationship cannot be created between two Vservers within
the same cluster. The root volume of the destination Vserver will be read-write and data from the source
Vserver's root volume will not be replicated. Therefore there will be no volume level relationship created
between the root volumes of the two Vservers.
After creating the relationship, the destination Vserver must be initialized by using the snapmirror
initialize command.
Before using this command to create a volume SnapMirror relationship, you typically create a source and
destination volume using the volume create command. The source volume should be in the online state
and a read-write (RW) type. The destination volume should be in the online state and a data protection
(DP) type. For FlexGroup SnapMirror relationships, the source and destination FlexGroups must be spread
over the same number of aggregates as specified in the -aggr-list parameter with the same number of
constituents per aggregate as specified in the -aggr-list-multiplier parameter of the volume create
command.
When a FlexGroup SnapMirror relationship is created, normally hidden relationships are also created for the
constituent volumes. These relationships can be seen by using the -expand parameter of the snapmirror
show command. Source information for these relationships can be seen using the -expand parameter of
the snapmirror list-destinations command. Other SnapMirror commands are disabled for FlexGroup
constituent relationships and FlexGroup constituent volumes.
If all systems involved are running Data ONTAP version 8.2 and later, a Vserver peering relationship must be
set up using the vserver peer create command between the source and the destination Vservers to
create a relationship between the source and destination volumes. To enable interoperability with Data
ONTAP 8.1, if the source volume is on a storage system running clustered Data ONTAP 8.1, the cluster
administrator can create a data protection relationship between the source and destination volumes
without a Vserver peering relationship between the source and destination Vservers. These relationships are
managed the same way as on Data ONTAP 8.1 and the "Relationship Capability" field, as shown in the output
of the snapmirror show command, is set to "Pre 8.2".
Note
SnapMirror relationships, except load-sharing relationships, which are created between two volumes
which are both on a storage system running Data ONTAP version 8.2 and later have the "Relationship
Capability" field set to "8.2 and above".
Load-sharing mirrors must be confined to a single Vserver; they are not allowed to span Vservers. Load-
sharing relationships are created with the "Relationship Capability" field set to "Pre 8.2" even if both the
source and destination volumes are on a storage system running Data ONTAP version 8.2 and later. There is
no "8.2 and above" implementation for load-sharing relationships.
A set of load-sharing mirrors can have one or more destination volumes. You create separate SnapMirror
relationships between the common source volume and each destination volume to create the set of load-
sharing mirrors.
The source or destination of a load-sharing relationship cannot be the source or destination of any other
SnapMirror relationship.
After creating the relationship, the destination volume can be initialized using the snapmirror
initialize command. The destination volumes in a set of load-sharing mirrors are initialized using the
snapmirror initialize-ls-set command.
The snapmirror create command must be used from the destination Vserver or cluster.
● Parameters
{ -source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|<hostip:/
share/share-name>|<[vserver:]/cg/[cgname]>} - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-source-vserver <vserver name> - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, the
-source-vserver parameter must also be specified; for relationships with "Relationship Capability" of
"Pre 8.2", -source-cluster must also be specified. This parameter is not supported for relationships
with non-Data ONTAP source endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, the -destination-volume parameter must also be specified; for
relationships with "Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified.
This parameter is not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified, the
-destination-vserver parameter must also be specified; for relationships with "Relationship Capability"
of "Pre 8.2", -destination-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP destination endpoints.
[-type <snapmirrorType>] - Relationship Type
This parameter specifies the type of SnapMirror relationship that will be created. You can create a data
protection (DP) relationship, an extended data protection (XDP) relationship, a transition data
protection relationship between a Data ONTAP operating in 7-Mode system and a clustered Data ONTAP
system (TDP), or a load-sharing (LS) relationship. The default value is XDP for all relationships except for
Vserver DR relationships. FlexGroup volumes and non-Data ONTAP endpoints support only XDP
relationships. For FlexVol volume relationships, when DP is specified as the type, an XDP relationship
will be created unless a relationship of type DP in the opposite direction already exists. In that case the
new relationship will be of type DP.
[-vserver <vserver name>] - Managing Vserver
If this optional parameter is specified, it designates the managing Vserver. The managing Vserver is
authorized to use SnapMirror commands to manage the SnapMirror relationship. The -vserver
parameter is currently a reserved parameter.
[-schedule <text>] - SnapMirror Schedule
This optional parameter designates the name of the schedule which is used to update the SnapMirror
relationship. If you do not designate a schedule, updates are not scheduled, so you must update the
SnapMirror relationship manually using the snapmirror update command or, in the case of a set of
load-sharing mirrors, using the snapmirror update-ls-set command.
Note
You define and name a schedule using the job schedule cron create command.
[-policy <sm_policy>] - SnapMirror Policy
This optional parameter designates the name of the SnapMirror policy which is associated with the
SnapMirror relationship. In addition to the vaulting use case, extended data
protection (XDP) relationships support two more use cases, mirroring and unified mirror-vault. The exact
behavior of an extended data protection (XDP) relationship is governed by the snapmirror policy
associated with that relationship. In clustered Data ONTAP 8.3 the snapmirror policy has a new field
type to indicate how the relationships with that policy will behave. The supported types are async-mirror
(mirroring), vault (vaulting) and mirror-vault (unified mirroring and vault). For XDP relationships
between a Data ONTAP source volume and an AltaVault destination endpoint, only policies of type vault
are supported. For XDP relationships between a Data ONTAP source volume and a SolidFire destination
endpoint, only policies of type async-mirror without an all_source_snapshots rule are supported. For XDP
relationships between a SolidFire source endpoint and a Data ONTAP destination volume, only policies
of type async-mirror without an all_source_snapshots rule, and policies of type mirror-vault are
supported. SnapMirror policies of type async-mirror associated with FlexVol volume relationships when
relationship type DP is specified or when no relationship type is specified, must include the label
all_source_snapshots. Refer to the man page for the snapmirror policy create command for more
information.
Note
You define and name a policy using the snapmirror policy create command.
● Examples
To create an extended data protection relationship between the source endpoint
vs1.example.com:dept_eng, and the destination endpoint vs2.example.com:dept_eng_dp_mirror2, with the
default policy of MirrorAndVault, type the following command:
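A representative form of this command (the explicit -type and -policy flags are shown for clarity; with the default policy, -policy could be omitted):

```
snapmirror create -source-path vs1.example.com:dept_eng -destination-path vs2.example.com:dept_eng_dp_mirror2 -type XDP -policy MirrorAndVault
```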
To create an extended data protection relationship between the source FlexGroup vs1.example.com:fg_src
and the destination FlexGroup vs2.example.com:fg_dst, with the default policy of MirrorAndVault, type the
following command:
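A representative form (FlexGroup relationships use the same path syntax as FlexVol relationships; the explicit -type and -policy flags are assumptions):

```
snapmirror create -source-path vs1.example.com:fg_src -destination-path vs2.example.com:fg_dst -type XDP -policy MirrorAndVault
```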
To create a synchronous SnapMirror relationship between the source FlexVol vs1.example.com:vol_log, and
the destination FlexVol vs2.example.com:vol_log_sync_dp when the source cluster is running ONTAP 9.5 or
above, type the following command:
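A representative form; the built-in policy name Sync is an assumption standing in for any sync-mirror type policy:

```
snapmirror create -source-path vs1.example.com:vol_log -destination-path vs2.example.com:vol_log_sync_dp -policy Sync
```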
To create a strict synchronous SnapMirror relationship between the source FlexVol vs1.example.com:vol_log,
and the destination FlexVol vs2.example.com:vol_log_sync_dp when the source cluster is running ONTAP 9.5
or above, type the following command:
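A representative form; the built-in policy name StrictSync is an assumption standing in for any strict-sync-mirror type policy:

```
snapmirror create -source-path vs1.example.com:vol_log -destination-path vs2.example.com:vol_log_sync_dp -policy StrictSync
```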
To create a data protection mirror between the source endpoint cluster1://vs1.example.com/dept_eng, and
the destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror2 when the source cluster is
running Data ONTAP 8.1 software, type the following command:
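A representative form using the "Pre 8.2" cluster-qualified path format described above:

```
snapmirror create -source-path cluster1://vs1.example.com/dept_eng -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2 -type DP
```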
To create a load-sharing mirror between the source endpoint cluster1://vs1.example.com/mkt1, and the
destination endpoint cluster1://vs1.example.com/mkt1_ls1 with the schedule named 5min used to update
the relationship, type the following command:
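A representative form; load-sharing relationships are confined to a single cluster, and -schedule attaches the named schedule:

```
snapmirror create -source-path cluster1://vs1.example.com/mkt1 -destination-path cluster1://vs1.example.com/mkt1_ls1 -type LS -schedule 5min
```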
To create a SnapMirror relationship between the source Vserver vs1.example.com, and the destination
Vserver dvs1.example.com with the schedule named hourly used to update the relationship, type the
following command:
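A representative form; writing each Vserver endpoint with a trailing colon and no volume name follows the path format described above:

```
snapmirror create -source-path vs1.example.com: -destination-path dvs1.example.com: -schedule hourly
```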
To create an extended data protection (XDP) relationship between the Data ONTAP source endpoint
vs1.example.com:data_ontap_vol, and the AltaVault destination endpoint 10.0.0.11:/share/share1, type the
following command:
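A representative form; the policy name XDPDefault is an assumption standing in for a vault type policy, since only vault type policies are supported for AltaVault destinations:

```
snapmirror create -source-path vs1.example.com:data_ontap_vol -destination-path 10.0.0.11:/share/share1 -type XDP -policy XDPDefault
```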
To create an extended data protection (XDP) relationship between the SolidFire source endpoint
10.0.0.12:/lun/0001, and the Data ONTAP destination endpoint vs2.example.com:data_ontap_vol2, type the
following command:
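A representative form; the policy name MirrorLatest is an assumption standing in for any async-mirror policy without an all_source_snapshots rule:

```
snapmirror create -source-path 10.0.0.12:/lun/0001 -destination-path vs2.example.com:data_ontap_vol2 -type XDP -policy MirrorLatest
```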
Under PVR control to create a SnapMirror synchronous Consistency Group relationship with the following
attributes:
• It is between the source Consistency Group cg_src in Vserver vs1.example.com, and the destination
Consistency Group cg_dst in Vserver vs2.example.com.
• It has item mappings between lun1 and lun2 on volume srcvol and lun1 and lun2 on volume dstvol.
• It uses a policy named SmgrSync that has a policy type of smgr-mirror that the user has previously created.
type the following command:
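A representative form; the /cg/ path format follows the parameter description above, while the item-path syntax shown in -cg-item-mappings is an assumption:

```
snapmirror create -source-path vs1.example.com:/cg/cg_src -destination-path vs2.example.com:/cg/cg_dst -cg-item-mappings /vol/srcvol/lun1:@/vol/dstvol/lun1,/vol/srcvol/lun2:@/vol/dstvol/lun2 -policy SmgrSync
```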
Under PVR control to create a new item mapping between lun3 on volume srcvol and lun3 on volume dstvol
in the existing SnapMirror synchronous Consistency Group relationship that was created above, type the
following command:
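A representative form; the item-path syntax in -cg-item-mappings is an assumption based on the parameter description:

```
snapmirror create -source-path vs1.example.com:/cg/cg_src -destination-path vs2.example.com:/cg/cg_dst -cg-item-mappings /vol/srcvol/lun3:@/vol/dstvol/lun3
```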
Related reference
snapmirror update (page 834)
snapmirror update-ls-set (page 838)
job schedule cron create (page 219)
snapmirror policy commands (page 850)
snapmirror policy create (page 852)
vserver create (page 1866)
vserver peer create (page 2275)
snapmirror initialize (page 762)
volume create (page 1627)
snapmirror show (page 800)
snapmirror list-destinations (page 768)
lun create (page 227)
snapmirror delete (page 759)
snapmirror resync (page 792)
snapmirror resume (page 790)
snapmirror initialize-ls-set (page 767)
snapmirror delete
Delete a SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror delete command removes the SnapMirror relationship between a source endpoint and a
destination endpoint. The destination endpoint can be a Vserver, a volume, or a non-Data ONTAP endpoint.
The Vservers, volumes and non-Data ONTAP destinations are not destroyed and Snapshot copies on the
volumes are not removed.
For relationships with SolidFire endpoints, the SnapMirror source commands snapmirror release and
snapmirror list-destinations are not supported. Therefore, Snapshot copies that are locked by
SnapMirror on the source container cannot be cleaned up using the snapmirror release command. If the
source container is a Data ONTAP volume, in order to reclaim space captured in the base Snapshot copy on
the volume, issue a snapshot delete command specifying the -ignore-owners parameter in diag
privilege level. To reclaim space captured in a Snapshot copy locked by SnapMirror on a SolidFire system,
refer to SolidFire documentation.
The snapmirror delete command fails if a SnapMirror transfer for the SnapMirror relationship is in
progress for relationships with "Relationship Capability" of "Pre 8.2". For relationships with "8.2 and above"
capability the delete will succeed even if a transfer is in progress and the transfer will ultimately stop.
A set of load-sharing mirrors can contain multiple destination volumes, each destination volume having a
separate SnapMirror relationship with the common source volume. When used on one of the SnapMirror
relationships from the set of load-sharing mirrors, the snapmirror delete command deletes the specified
SnapMirror relationship from the set of load-sharing mirrors.
The snapmirror delete command preserves the read-write or read-only attributes of the volumes of a
SnapMirror relationship after the relationship is deleted. Therefore, a read-write volume that was the source
of a SnapMirror relationship retains its read-write attributes, and a data protection volume or a load-sharing
volume that was a destination of a SnapMirror relationship retains its read-only attributes. Similarly, the
subtype attribute of source and destination Vservers is not modified when a Vserver SnapMirror relationship
is deleted.
Note
When a SnapMirror relationship from a set of load-sharing mirrors is deleted, the destination volume
becomes a data protection volume and retains the read-only attributes of a data protection volume.
For relationships with a policy of type strict-sync-mirror or sync-mirror, the relationship must be Quiesced
before it can be deleted. This command is supported for SnapMirror relationships with the field "Relationship
Capability" showing as either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", the snapmirror delete command must
be used from the destination Vserver or cluster. The SnapMirror relationship information is deleted from the
destination Vserver, but no cleanup or deletion is performed on the source Vserver. The snapmirror
release command must be issued on the source Vserver to delete the source relationship information.
For relationships with "Relationship Capability" of "Pre 8.2", you can use this command from the source or
from the destination cluster. When used from the destination cluster, the SnapMirror relationship
information on the source and destination clusters is deleted. When used from the source cluster, only the
SnapMirror relationship information on the source cluster is deleted.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, the -source-volume parameter must also be specified; for relationships with
"Relationship Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not
supported for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, the
-source-vserver parameter must also be specified; for relationships with "Relationship Capability" of
"Pre 8.2", -source-cluster must also be specified. This parameter is not supported for relationships
with non-Data ONTAP source endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, the -destination-volume parameter must also be specified; for
relationships with "Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified.
This parameter is not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified, the
-destination-vserver parameter must also be specified; for relationships with "Relationship Capability"
of "Pre 8.2", -destination-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP destination endpoints.
[-force | -f [true]] - Force
If specified, the delete proceeds even if it cannot clean up all artifacts of the relationship.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the
default setting is true (the operation runs in the foreground). When set to true, the command will not
return until the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
● Examples
To delete the SnapMirror relationship with the destination endpoint vs2.example.com:dept_eng_dp_mirror4,
type the following command:
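A representative form of this command is:

```
snapmirror delete -destination-path vs2.example.com:dept_eng_dp_mirror4
```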
For relationships with "Relationship Capability" of "Pre 8.2", to delete the SnapMirror relationship with the
destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror4, type the following command:
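A representative form, using the "Pre 8.2" cluster-qualified path format:

```
snapmirror delete -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror4
```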
To delete the SnapMirror relationship with destination endpoint dvs1.example.com:, type the following
command:
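A representative form, with the Vserver endpoint written with a trailing colon:

```
snapmirror delete -destination-path dvs1.example.com:
```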
Under PVR control to delete the synchronous SnapMirror Consistency Group relationship with the destination
Consistency Group cg_dst in Vserver vs2.example.com and all item mappings, type the following command:
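A representative form; the /cg/ path format follows the destination-path parameter description:

```
snapmirror delete -destination-path vs2.example.com:/cg/cg_dst
```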
Under PVR control to delete the item mapping between lun3 on volume srcvol and lun3 on volume dstvol in
the SnapMirror synchronous Consistency Group relationship with the destination Consistency Group cg_dst in
Vserver vs2.example.com, type the following command:
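A representative form; reusing -cg-item-mappings to name the single mapping to delete, and the item-path syntax itself, are assumptions:

```
snapmirror delete -destination-path vs2.example.com:/cg/cg_dst -cg-item-mappings /vol/srcvol/lun3:@/vol/dstvol/lun3
```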
Related reference
snapmirror release (page 781)
snapmirror list-destinations (page 768)
snapmirror resync (page 792)
snapmirror resume (page 790)
snapmirror show (page 800)
snapmirror initialize
Start a baseline transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror initialize command initializes the destination Vserver, volume or a non-Data ONTAP
endpoint of a SnapMirror relationship. The command behaves differently between data protection (DP),
extended data protection (XDP) and load-sharing (LS) relationships.
If you specify a sync-mirror or strict-sync-mirror type policy, the snapmirror initialize command creates
and initializes a synchronous relationship and brings it InSync, providing zero RPO data protection.
For data protection (DP) and extended data protection (XDP) relationships, the snapmirror initialize
command initializes the destination volume.
For load-sharing (LS) relationships, the snapmirror initialize command initializes a new load-sharing
mirror in an existing set of load-sharing mirrors. If the command finishes before the start of a scheduled or
manual transfer of the set of load-sharing mirrors, the load-sharing mirror is up to date with the set of load-
sharing mirrors; otherwise, the load-sharing mirror will be brought up to date at the next scheduled or
manual transfer of the set of load-sharing mirrors.
The initial transfer to an empty destination volume is called a baseline transfer. During a baseline transfer
for a data protection (DP) or extended data protection (XDP) relationship, the snapmirror initialize
command takes a Snapshot copy on the source volume to capture the current image of the source volume.
For data protection relationships, the snapmirror initialize command transfers all of the Snapshot
copies up to and including the Snapshot copy created by it from the source volume to the destination
volume. For extended data protection (XDP) relationships, the snapmirror initialize command
behavior depends on the snapmirror policy associated with the relationship. If the policy type is async-
mirror then depending on the rules in the policy it can transfer either all the Snapshot copies up to and
including the Snapshot copy created by it or only the Snapshot copy created by it from the source volume to
the destination volume. For extended data protection (XDP) relationships with policy type vault or mirror-vault, the snapmirror initialize command transfers only the Snapshot copy created by it.
After the snapmirror initialize command successfully completes, the last Snapshot copy transferred is
made the exported Snapshot copy on the destination volume.
You can use the snapmirror initialize command to initialize a specific load-sharing mirror that is new
to the set of load-sharing mirrors. An initialize of the new load-sharing mirror should bring it up to date with
the other up-to-date destination volumes in the set of load-sharing mirrors.
Note
Using the snapmirror initialize command to initialize a set of load-sharing mirrors will not
work. Use the snapmirror initialize-ls-set command to initialize a set of load-sharing
mirrors.
If a SnapMirror relationship does not already exist, that is, the relationship was not created using the
snapmirror create command, the snapmirror initialize command will implicitly create the
SnapMirror relationship, with the same behaviors as described for the snapmirror create command
before initializing the relationship. This implicit create feature is not supported for Vservers.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", you can track the progress of the
operation using the snapmirror show command.
For relationships with "Relationship Capability" of "Pre 8.2", a job will be spawned to operate on the
SnapMirror relationship, and the job id will be shown in the command output. The progress of the job can
be tracked using the job show and job history show commands.
The snapmirror initialize command must be used from the destination Vserver or cluster.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, the -source-volume parameter must also be specified; for relationships with
"Relationship Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not
supported for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, the
-source-vserver parameter must also be specified; for relationships with "Relationship Capability" of
"Pre 8.2", -source-cluster must also be specified. This parameter is not supported for relationships
with non-Data ONTAP source endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, the -destination-volume parameter must also be specified; for
relationships with "Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified.
This parameter is not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified, the
-destination-vserver parameter must also be specified; for relationships with "Relationship Capability"
of "Pre 8.2", -destination-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP destination endpoints.
[-source-snapshot | -s <text>] - Source Snapshot
This optional parameter specifies the Snapshot copy that snapmirror initialize will use for the
baseline transfer. For data protection (DP) relationships, the baseline transfer will include all of the
Snapshot copies up to and including the specified Snapshot copy. For extended data protection (XDP)
relationships, the snapmirror initialize command behavior depends on the snapmirror policy
associated with the relationship. If the policy type is async-mirror then depending on the rules in the
policy it can transfer either all the Snapshot copies up to and including the specified Snapshot copy or
only the specified Snapshot copy from the source volume to the destination volume. For extended data
protection (XDP) relationships with policy type vault or mirror-vault, the snapmirror initialize
command transfers only the specified Snapshot copy. This parameter is not supported for SnapMirror Synchronous
relationships and relationships with "Relationship Capability" of "Pre 8.2".
[-type <snapmirrorType>] - Snapmirror Relationship Type
Specifies the type of SnapMirror relationship if a relationship is implicitly created. This parameter is the
same as the one used in the snapmirror create command.
[-policy <sm_policy>] - SnapMirror Policy
This optional parameter designates the name of the SnapMirror policy which is associated with the
SnapMirror relationship. If you do not designate a policy, the current policy will be retained. This
parameter is not applicable to relationships with "Relationship Capability" of "Pre 8.2".
Note
You define and name a policy using the snapmirror policy create command.
[-throttle | -k <throttle_type>] - Throttle (KB/sec)
This optional parameter limits the network bandwidth used for the transfer. For SnapMirror Synchronous
relationships the throttle value is applicable only for asynchronous transfers performed as part of
user-initiated operations. The -throttle parameter does not affect load-sharing transfers and transfers for
other relationships with "Relationship Capability" of "Pre 8.2" confined to a single cluster.
[-transfer-priority {low|normal}] - Transfer Priority
This optional parameter specifies the priority at which the transfer runs. The default value for this
parameter is the value in the SnapMirror policy associated with this relationship. This parameter is not
applicable to relationships with a "Relationship Capability" of "Pre 8.2".
[-cg-item-mappings {<source volume>:@<destination volume>|<source item path>:@<destination
item path>}, ...] - Consistency Group Item Mappings
This optional parameter specifies a list of the consistency group (CG) item mappings. It is considered
only if the supplied source and destination path values denote valid CG paths. For CG level relationships,
this parameter must be specified. The value for this parameter must contain mappings of one or more
pairs of constituent source and destination volumes of the form srcvol1:@dstvol1,....
[-is-auto-expand-enabled {true|false}] - Is Auto Expand Enabled
This optional parameter specifies whether or not a FlexGroup SnapMirror relationship and its destination
FlexGroup should be auto-expanded if the source FlexGroup is expanded. This parameter is supported
only for FlexGroup SnapMirror relationships. If this initialize is creating a new SnapMirror relationship,
the default value is true. If it is not creating a new relationship, if a value is specified, it must match the
current value for the existing relationship. If the parameter is not specified the existing value will be
retained.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the
default setting is true (the operation runs in the foreground). When set to true, the command will not
return until the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
● Examples
To start the initial transfer for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror2 after the relationship has been created with the snapmirror
create command, type the following command:
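A representative form of this command is:

```
snapmirror initialize -destination-path vs2.example.com:dept_eng_dp_mirror2
```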
For relationships with "Relationship Capability" of "Pre 8.2", to start the initial transfer for the SnapMirror
relationship with the destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror2 after the
relationship has been created with the snapmirror create command, type the following command:
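A representative form, using the "Pre 8.2" cluster-qualified path format:

```
snapmirror initialize -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2
```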
To create a data protection mirror relationship between the source endpoint vs1.example.com:dept_mkt,
and the destination endpoint vs2.example.com:dep_mkt_dp_mirror, and start the initial transfer, type the
following command:
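A representative form; specifying both endpoints triggers the implicit create described above, and the explicit -type DP flag is an assumption:

```
snapmirror initialize -source-path vs1.example.com:dept_mkt -destination-path vs2.example.com:dep_mkt_dp_mirror -type DP
```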
To create a data protection mirror relationship between the source endpoint cluster1://vs1.example.com/
dept_mkt, and the destination endpoint cluster2://vs2.example.com/dep_mkt_dp_mirror, and start the initial
transfer when the source cluster is running Data ONTAP 8.1 software, type the following command:
snapmirror initialize -source-path cluster1://vs1.example.com/dept_mkt -destination-path cluster2://vs2.example.com/dep_mkt_dp_mirror -type DP
To create an extended data protection (XDP) relationship between the Data ONTAP source endpoint
vs1.example.com:data_ontap_vol, and the AltaVault destination endpoint 10.0.0.11:/share/share1, and start
the initial transfer, type the following command:
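A representative form; the policy name XDPDefault is an assumption standing in for a vault type policy, since only vault type policies are supported for AltaVault destinations:

```
snapmirror initialize -source-path vs1.example.com:data_ontap_vol -destination-path 10.0.0.11:/share/share1 -type XDP -policy XDPDefault
```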
To start the initial transfer for the Vserver SnapMirror relationship with destination endpoint
dvs1.example.com: after the relationship was created with the snapmirror create command, type the
following command:
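The command for this example is missing; a plausible invocation from the destination cluster (the `dcluster1::>` prompt is illustrative) is:

```
dcluster1::> snapmirror initialize -destination-path dvs1.example.com:
```

For Vserver SnapMirror relationships the destination path is the destination Vserver name followed by a colon, with no volume component.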
To initialize the SnapMirror Synchronous relationship between FlexVols vol_log and vol_log_sync_dp and
bring it to InSync, after it is created using the snapmirror create command, type the following
command:
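The command for this example is missing; a plausible invocation, where the `vs2.example.com::>` prompt and destination Vserver name are illustrative assumptions, is:

```
vs2.example.com::> snapmirror initialize -destination-path vs2.example.com:vol_log_sync_dp
```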
To create a strict SnapMirror Synchronous relationship between FlexVols vol_log and vol_log_sync_dp, to
initialize it and bring it to InSync, type the following command:
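The command for this example is missing; a plausible invocation, where the Vserver names and the use of the predefined StrictSync policy are illustrative assumptions, is:

```
vs2.example.com::> snapmirror initialize -source-path vs1.example.com:vol_log -destination-path vs2.example.com:vol_log_sync_dp -policy StrictSync
```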
Under PVR control to create a SnapMirror synchronous Consistency Group relationship with the following
attributes:
• It is between the source Consistency Group cg_src in Vserver vs1.example.com, and the destination
Consistency Group cg_dst in Vserver vs2.example.com.
• It has item mappings between lun1 and lun2 on volume srcvol and lun1 and lun2 on volume dstvol.
• It uses a policy named SmgrSync that has a policy type of smgr-mirror that the user has previously created.
and to initialize it and bring it to InSync, type the following command:
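The command for this example is missing; a plausible sketch follows, in which the `vs2.example.com::>` prompt and, in particular, the -cg-item-mappings value are assumptions made for illustration (verify the exact item-mapping syntax against the snapmirror create reference):

```
vs2.example.com::> snapmirror initialize -source-path vs1.example.com:/cg/cg_src -destination-path vs2.example.com:/cg/cg_dst -policy SmgrSync -cg-item-mappings srcvol:@dstvol
```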
Under PVR control to initialize the relationship with destination Consistency Group cg_dst in Vserver
vs2.example.com that has been created with the snapmirror create command and bring it InSync, type
the following command:
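The command for this example is missing; a plausible invocation (the `vs2.example.com::>` prompt is illustrative) is:

```
vs2.example.com::> snapmirror initialize -destination-path vs2.example.com:/cg/cg_dst
```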
Related reference
snapmirror policy commands (page 850)
snapmirror create (page 752)
snapmirror policy create (page 852)
snapmirror modify (page 772)
snapmirror initialize-ls-set (page 767)
snapmirror show (page 800)
job show (page 198)
job history show (page 207)
snapmirror initialize-ls-set
Start a baseline load-sharing set transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror initialize-ls-set command initializes and updates a set of load-sharing mirrors. This
command is usually used after the snapmirror create command is used to create a SnapMirror
relationship for each of the destination volumes in the set of load-sharing mirrors. The initial transfers to
empty load-sharing mirrors are baseline transfers done in parallel. During a baseline transfer Data ONTAP
takes a Snapshot copy on the source volume to capture the current image of the source volume and
transfers all of the Snapshot copies on the source volume to each of the destination volumes.
After the snapmirror initialize-ls-set command successfully completes, the last Snapshot copy
transferred is made the exported Snapshot copy on the destination volumes.
The parameter that identifies the set of load-sharing mirrors is the source volume. Data and Snapshot copies
are transferred from the source volume to all up-to-date destination volumes in the set of load-sharing
mirrors.
Use the snapmirror initialize command to add and initialize a new destination volume to an existing
set of load-sharing mirrors.
Note
Even if the load-sharing set only has one mirror, you still need to use the snapmirror initialize-ls-set
command to initialize the set. The snapmirror initialize command can only be used to initialize a new
destination volume if the load-sharing set has already been initialized.
This command is only supported for SnapMirror relationships with the field "Relationship Capability" showing
as "Pre 8.2" in the output of the snapmirror show command.
● Parameters
{ -source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|<hostip:/
share/share-name>|<[vserver:]/cg/[cgname]>} - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
● Examples
To initialize the group of load-sharing mirrors for the source endpoint //vs1.example.com/dept_eng, type the
following command:
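The command for this example is missing; a plausible invocation, with an assumed `cluster1::>` prompt, is:

```
cluster1::> snapmirror initialize-ls-set -source-path //vs1.example.com/dept_eng
```

The source volume identifies the whole load-sharing set, so no destination path is given.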
Related reference
snapmirror create (page 752)
snapmirror initialize (page 762)
snapmirror show (page 800)
snapmirror list-destinations
Display a list of destinations for SnapMirror sources
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror list-destinations command displays information including the destination
endpoints, the relationship status, and transfer progress, for SnapMirror relationships whose source
endpoints are in the current Vserver if you are in a Vserver context, or the current cluster if you are in a
cluster context.
The command might display several relationships that have the same source and destination endpoints, but
have different relationship IDs. If this is the case, some of the information is stale. It corresponds to
relationships that have been deleted on the destination Vserver or cluster, and have not been released yet
on the source Vserver or source cluster.
The relationships and the information displayed are controlled by the parameters that you specify. If no
parameters are specified, the command displays the following information associated with each SnapMirror
relationship whose source endpoint is in the current Vserver if you are in a Vserver context, or the current
cluster if you are in a cluster context:
• Source path
• Relationship Type
• Destination Path
• Relationship Status
• Transfer Progress
• Progress Last Updated
• Relationship ID
Note the following limitations on the information displayed by the snapmirror list-destinations
command:
• The "Relationship Status" field is not valid after the node hosting the source volume joins the cluster
quorum, until at least one transfer is performed on the SnapMirror relationship.
• "Transfer Progress" and "Progress Last Updated" fields are only valid if a Snapshot copy transfer is in
progress.
• The "Relationship ID" field is not valid for Vserver SnapMirror relationships.
• The "Relationship Status", "Transfer Progress", and "Progress Last Updated" fields are not valid for FlexGroup
relationships, but they are valid for FlexGroup constituent relationships.
The -instance and -fields parameters are mutually exclusive and select the fields that are displayed.
The -instance parameter, if specified, displays detailed information about the relationships. The -fields
parameter specifies which fields should be displayed. The other parameters of the snapmirror
list-destinations command select the SnapMirror relationships for which the information is displayed.
This command is not supported for SnapMirror relationships with non-Data ONTAP endpoints.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command only displays the fields that
you have specified.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all
relationships selected.
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
Selects SnapMirror relationships that have a matching source path name.
| [-source-vserver <vserver name>] - Source Vserver
Selects SnapMirror relationships that have a matching source Vserver name.
[-source-volume <volume name>]} - Source Volume
Selects SnapMirror relationships that have a matching source volume name.
{ [-destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Destination Path
Selects SnapMirror relationships that have a matching destination path name.
| [-destination-vserver <vserver name>] - Destination Vserver
Selects SnapMirror relationships that have a matching destination Vserver name.
[-destination-volume <volume name>]} - Destination Volume
Selects SnapMirror relationships that have a matching destination volume name.
● Examples
To display summary information about all relationships whose source endpoints are in the current cluster,
type the following command:
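The command itself is missing from this example; it is plausibly the bare form below, with an illustrative `cluster1::>` prompt, which would produce output like that shown:

```
cluster1::> snapmirror list-destinations
```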
Progress
Source Destination Transfer Last Relationship
Path Type Path Status Progress Updated ID
----------- ----- ------------ ------- --------- --------- ----------------
vserver1.example.com:dp_s1
DP vserver2.example.com:dp_d1
Idle - - 06b4327b-954f-11e1-af65-123478563412
vserver1.example.com:xdp_s1
XDP vserver2.example.com:xdp_d1
Idle - - a9c1db0b-954f-11e1-af65-123478563412
vserver2.example.com:
DP dvserver2.example2.com:
Idle - - -
3 entries were displayed.
To display summary information about all relationships whose source endpoints are in the current Vserver,
type the following command:
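The command itself is missing from this example; a plausible form, run from a Vserver context with an illustrative `vs1.example.com::>` prompt, is:

```
vs1.example.com::> snapmirror list-destinations
```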
Progress
Source Destination Transfer Last Relationship
Path Type Path Status Progress Updated ID
----------- ----- ------------ ------- --------- --------- ----------------
vserver1.example.com:dp_s1
DP vserver2.example.com:dp_d1
Idle - - 06b4327b-954f-11e1-af65-123478563412
vserver1.example.com:xdp_s1
XDP vserver2.example.com:xdp_d1
Idle - - a9c1db0b-954f-11e1-af65-123478563412
2 entries were displayed.
To display detailed information about SnapMirror relationships whose source endpoints are in the current
Vserver, type the following command:
To display summary information about all relationships including constituent relationships whose source
endpoints are in the current Vserver, type the following command:
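The command for this example is missing; a plausible form is shown below, where the -expand option, which show-style commands use to include FlexGroup constituent entries, is an assumption here:

```
vs1.example.com::> snapmirror list-destinations -expand
```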
Related reference
snapmirror show (page 800)
snapmirror modify
Modify a SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror modify command allows you to change one or more properties of SnapMirror
relationships. The key parameter that identifies any SnapMirror relationship is the destination endpoint. The
destination endpoint can be a Vserver, a volume, or a non-Data ONTAP endpoint.
For load-sharing mirrors, a change to a property affects all of the SnapMirror relationships in the set of load-
sharing mirrors. Destination volumes in a set of load-sharing mirrors do not have individual property
settings.
Changes made by the snapmirror modify command do not take effect until the next manual or
scheduled update of the SnapMirror relationship. Changes do not affect updates that have started and have
not finished yet.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The snapmirror modify command must be used from the destination Vserver or cluster.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This parameter is
not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified,
parameters -destination-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -
destination-cluster must also be specified. This parameter is not supported for relationships with
non-Data ONTAP destination endpoints.
[-vserver <vserver name>] - Managing Vserver
If this optional parameter is specified, designates the managing Vserver. The managing Vserver is
authorized to use some snapmirror commands to manage the SnapMirror relationship. The -vserver
option is currently a reserved option.
[-schedule <text>] - SnapMirror Schedule
This optional parameter designates the name of the schedule which is used to update the SnapMirror
relationship. If you do not designate a schedule, updates are not scheduled, so you must update the
SnapMirror relationship manually using the snapmirror update command or, in the case of a set of
load-sharing mirrors, using the snapmirror update-ls-set command.
Note
You define and name a schedule using the job schedule cron create command.
Note
You define and name a policy using the snapmirror policy create command.
This optional parameter specifies whether or not a FlexGroup SnapMirror relationship and its destination
FlexGroup should be auto-expanded if the source FlexGroup is expanded. This parameter is supported
only for FlexGroup SnapMirror relationships. If you do not specify the parameter, the current value of
auto expand will be retained.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the
default setting is true (the operation runs in the foreground). When set to true, the command will not
return until the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
● Examples
To change the schedule to halfhour for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror2, type the following command:
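The command for this example is missing; a plausible invocation (illustrative `vs2.example.com::>` prompt) is:

```
vs2.example.com::> snapmirror modify -destination-path vs2.example.com:dept_eng_dp_mirror2 -schedule halfhour
```

As with other snapmirror commands, modify is issued from the destination Vserver or cluster.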
For relationships with "Relationship Capability" of "Pre 8.2", to change the schedule to halfhour for the
SnapMirror relationship with the destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror2,
type the following command:
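The command for this example is missing; a plausible "Pre 8.2" form, with an assumed `cluster2::>` destination-cluster prompt, is:

```
cluster2::> snapmirror modify -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2 -schedule halfhour
```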
To change the schedule to halfhour for the Vserver SnapMirror relationship with destination endpoint
dvs1.example.com:, type the following command:
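The command for this example is missing; a plausible invocation from the destination cluster (the `dcluster1::>` prompt is illustrative) is:

```
dcluster1::> snapmirror modify -destination-path dvs1.example.com: -schedule halfhour
```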
To change the policy associated with the synchronous SnapMirror Consistency Group relationship with the
destination Consistency Group cg_dst in Vserver vs2.example.com to the policy Sync2, type the following
command:
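The command for this example is missing; a plausible invocation (illustrative `vs2.example.com::>` prompt) is:

```
vs2.example.com::> snapmirror modify -destination-path vs2.example.com:/cg/cg_dst -policy Sync2
```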
Related reference
snapmirror update (page 834)
snapmirror update-ls-set (page 838)
job schedule cron create (page 219)
snapmirror policy create (page 852)
snapmirror show (page 800)
snapmirror promote
Promote the destination to read-write
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror promote command performs a failover to the destination volume of a SnapMirror
relationship. This command changes the destination volume from a read-only volume to a read-write
volume and makes the destination volume assume the identity of the source volume. The command then
destroys the original source volume. The destination volume must be a load-sharing volume. Note that you
can promote a load-sharing volume that has been left in read-write state by a previously failed promote
operation.
Client accesses are redirected from the original source volume to the promoted destination volume. The
view clients see on the promoted destination volume is the latest transferred Snapshot copy, which might
lag behind the view clients had of the original source volume before the promote.
The SnapMirror relationship is always deleted as part of the promotion process.
It is possible that the original source volume is the source of multiple SnapMirror relationships. For such a
configuration, the promoted destination volume becomes the new source volume of the other SnapMirror
relationships.
This command is only supported for SnapMirror relationships with the field "Relationship Capability" showing
as "Pre 8.2" in the output of the snapmirror show command.
The snapmirror promote command fails if a SnapMirror transfer is in progress for any SnapMirror
relationship with "Relationship Capability" of "Pre 8.2" involving the original source volume. It does not fail if
a SnapMirror transfer is in progress for a relationship with "Relationship Capability" of "8.2 and above".
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This parameter is
not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified,
parameters -destination-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -
destination-cluster must also be specified. This parameter is not supported for relationships with
non-Data ONTAP destination endpoints.
[-force | -f [true]] - Force
If this parameter is specified, the command proceeds without prompting for confirmation.
● Examples
To promote a mirror named dept_eng_ls_mirror1 to be the source read-write volume for mirroring and client
access, type the following command:
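The command for this example is missing; because snapmirror promote is only supported for "Pre 8.2" relationships, the cluster-qualified path format applies. A plausible invocation, where the prompt and the vs1.example.com Vserver name are illustrative assumptions, is:

```
cluster1::> snapmirror promote -destination-path //vs1.example.com/dept_eng_ls_mirror1
```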
Related reference
snapmirror show (page 800)
snapmirror protect
Start protection for Vservers and volumes
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror protect command establishes SnapMirror protection for the specified Vserver or a list of
volumes. For each endpoint, the command creates a data protection destination endpoint in the Vserver
specified by the -destination-vserver parameter, creates an extended data protection (XDP)
SnapMirror relationship, and starts the initialization of the SnapMirror relationship. This command must be
used from the destination Vserver or cluster. This command is not supported for FlexGroup volume
constituents or non ONTAP endpoints.
● Parameters
[-source-cluster <Cluster name>] - Source Cluster
This optional parameter specifies the source cluster name. This parameter is valid only when a single
Vserver is specified in the -path-list parameter.
-path-list {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|<hostip:/share/
share-name>|<[vserver:]/cg/[cgname]>}, ... - Path List
This parameter specifies the list of source endpoints to be protected. The list is a comma separated list of
paths of the form vserver:volume or vserver:, for example vs1.example.com:dept_eng1,
vs1.example.com:dept_eng2 or vs1.example.com:. If the list contains a Vserver endpoint, then the
Vserver is the only endpoint that can be specified, and the list cannot contain a mixture of volume and
Vserver endpoints.
[-destination-vserver <vserver name>] - Destination Vserver
This parameter specifies the Vserver in which to create the destination volumes of the SnapMirror
relationships. When protecting a single Vserver, this parameter specifies the destination Vserver
endpoint for protection.
[-destination-vserver-ipspace <IPspace>] - Destination Vserver IPSpace Name
This optional parameter specifies the IPspace the Vserver will be assigned to. If left unspecified, the
Vserver will be assigned to the default IPspace. This parameter is supported while protecting a Vserver.
[-schedule <text>] - SnapMirror Schedule
This optional parameter designates the name of the schedule which is used to update the SnapMirror
relationships.
-policy <sm_policy> - SnapMirror Policy
This parameter designates the name of the SnapMirror policy which is associated with the SnapMirror
relationships.
[-auto-initialize {true|false}] - Auto Initialize
This optional parameter specifies whether or not initialization of the SnapMirror relationships should be
started after the relationships are created. The default value for this parameter is true.
[-destination-volume-prefix <text>] - Destination Volume Name Prefix
This optional parameter designates the prefix for the destination volume name. For example if the
source path is of the form vserver:volume and the destination-volume-prefix specified is prefix_ and no
destination-volume-suffix is specified, then the destination volume name will be prefix_volume_dst or
possibly prefix_volume_1_dst if a name conflict is encountered. If both prefix and suffix are specified as
prefix_ and _suffix, then the destination volume name will be prefix_volume_suffix or
prefix_volume_1_suffix, if a name conflict is encountered. This parameter is not supported for Vserver
endpoints.
[-destination-volume-suffix <text>] - Destination Volume Name Suffix
This optional parameter designates the suffix for the destination volume name. If you do not designate
a suffix, a destination volume name with suffix _dst will be used. For example if the source path is of the
form vserver:volume, and the suffix specified is _DP, the destination volume will be created with the
name volume_DP or volume_1_DP if a name conflict is encountered. If both prefix and suffix are
specified as prefix_ and _suffix, then the destination volume name will be prefix_volume_suffix or
prefix_volume_1_suffix, if a name conflict is encountered. This parameter is not supported for Vserver
endpoints.
[-support-tiering {true|false}] - Provision Destination Volumes on FabricPools
This optional parameter specifies whether or not FabricPools are selected when provisioning a FlexVol
volume or a FlexGroup volume during protection workflows. When this parameter is set to true, only
FabricPools are used; when set to false, only non-FabricPools are used. Tiering support for a FlexVol
volume can be changed by moving the volume to the required aggregate. Tiering support for a
FlexGroup volume can be changed by moving all of the constituents to the required aggregates. The
default value is false. This parameter is supported only for FlexVol volumes and FlexGroup volumes.
[-tiering-policy {snapshot-only|auto|none|all}] - Destination Volume Tiering Policy
This optional parameter specifies the tiering policy to apply to the destination FlexVol volume or
FlexGroup volume. This policy determines whether the user data blocks of a FlexVol volume or FlexGroup
volume in a FabricPool will be tiered to the capacity tier when they become cold. FabricPool combines
flash (performance tier) with an object store (external capacity tier) into a single aggregate. The default
tiering policy is 'snapshot-only' for a FlexVol volume and 'none' for a FlexGroup volume. The temperature
of a FlexVol volume or FlexGroup volume block increases if it is accessed frequently and decreases when
it is not.
The available tiering policies are:
• snapshot-only - This policy allows tiering of only the FlexVol volume or FlexGroup volume Snapshot
copies not associated with the active file system. The default minimum cooling period is 2 days. The -
tiering-minimum-cooling-days parameter can be used to override the default using the volume
modify command after the destination FlexVol volume or FlexGroup volume has been created.
• auto - This policy allows tiering of both snapshot and active file system user data to the capacity tier.
The default cooling period is 31 days. The -tiering-minimum-cooling-days parameter can be
used to override the default using the volume modify command after the destination FlexVol
volume or FlexGroup volume has been created.
• none - FlexVol volume or FlexGroup volume blocks will not be tiered to the capacity tier.
• all - On a DP FlexVol volume or FlexGroup volume this policy allows all transferred user data
blocks to start in the capacity tier.
This parameter is supported only for FlexVol volumes and FlexGroup volumes.
● Examples
To establish SnapMirror protection for the source volumes vs1.example.com:dept_eng1 and
vs1.example.com:dept_eng2 using destination-vserver vs2.example.com and policy MirrorAllSnapshots type
the following command:
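The command for this example is missing; a plausible invocation from the destination cluster (the `cluster2::>` prompt is illustrative) is:

```
cluster2::> snapmirror protect -path-list vs1.example.com:dept_eng1,vs1.example.com:dept_eng2 -destination-vserver vs2.example.com -policy MirrorAllSnapshots
```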
To establish SnapMirror protection for the source Vserver vs1.example.com which is on cluster cluster1
creating a destination-vserver named vs1dp.example.com and using policy MirrorAllSnapshots type the
following command:
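The command for this example is missing; a plausible invocation from the destination cluster (the `cluster2::>` prompt is an assumption), using the -source-cluster parameter described above, is:

```
cluster2::> snapmirror protect -path-list vs1.example.com: -destination-vserver vs1dp.example.com -policy MirrorAllSnapshots -source-cluster cluster1
```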
snapmirror quiesce
Disable future transfers
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror quiesce command disables future transfers for a SnapMirror relationship. If there is no
transfer in progress, the relationship becomes "Quiesced".
If there is a transfer in progress, it is not affected, and the relationship becomes "Quiescing" until the transfer
completes. If the current transfer aborts, it will be treated like a future transfer and will not restart.
If applied to a load-sharing (LS) SnapMirror relationship, all the relationships in the load-sharing set will be
quiesced.
If applied to a relationship with a policy of type strict-sync-mirror or sync-mirror, any ongoing operation is
aborted, and if the relationship is InSync, synchronous replication is stopped. If the replication policy type is
"strict-sync-mirror", client I/O disruption on the primary is not enforced. A new common Snapshot copy is
created if the relationship is InSync, unless a recent one exists. The relationship becomes "Quiescing" until
these operations complete.
When a SnapMirror relationship is quiesced, it remains quiesced across reboots and fail-overs.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The snapmirror quiesce command must be used from the destination Vserver or cluster.
The relationship must exist on the destination Vserver or cluster. When issuing snapmirror quiesce, you
must specify the destination endpoint. The specification of the source endpoint of the relationship is
optional.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This parameter is
not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified,
parameters -destination-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -
destination-cluster must also be specified. This parameter is not supported for relationships with
non-Data ONTAP destination endpoints.
● Examples
To quiesce the SnapMirror relationship with the destination endpoint vs2.example.com:dept_eng_mirror2,
type the following command:
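Assuming the -destination-path syntax documented above:

snapmirror quiesce -destination-path vs2.example.com:dept_eng_mirror2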
For relationships with "Relationship Capability" of "Pre 8.2", to quiesce the SnapMirror relationship with the
destination endpoint cluster2://vs2.example.com/dept_eng_mirror2, type the following command:
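Assuming the "Pre 8.2" cluster-qualified path format:

snapmirror quiesce -destination-path cluster2://vs2.example.com/dept_eng_mirror2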
To quiesce the Vserver SnapMirror relationship with the destination endpoint dvs1.example.com:, type the
following command:
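Assuming the Vserver path format (Vserver name followed by a colon):

snapmirror quiesce -destination-path dvs1.example.com: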
Under PVR control to quiesce the synchronous SnapMirror Consistency Group relationship with the
destination Consistency Group cg_dst in Vserver vs2.example.com, type the following command:
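Assuming the Consistency Group path format [vserver:]/cg/[cgname] documented above:

snapmirror quiesce -destination-path vs2.example.com:/cg/cg_dst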
Related reference
snapmirror show (page 800)
snapmirror resume (page 790)
snapmirror release
Remove source information for a SnapMirror relationship
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror release command removes the relationship information from the source Vserver. The
command also removes any Snapshot copy owner tags and any Snapshot copies which were created for the
specified relationship from the source volumes. It does not destroy any volumes or Vservers. This command
must be used from the source Vserver or cluster.
For relationships with a policy of type strict-sync-mirror, it additionally resumes primary volume access if the
IO was restricted because the relationship was OutOfSync.
You can use the snapmirror list-destinations command to display source Vservers' relationship
information. This information is populated during the first SnapMirror transfer, not when the snapmirror
create command is issued.
This command is not supported for SnapMirror relationships with the field "Relationship Capability" showing
as "Pre 8.2" in the output of the snapmirror show command.
This command is not supported for SnapMirror relationships with non-Data ONTAP endpoints.
The snapmirror release operation fails if a SnapMirror transfer for the SnapMirror relationship is in a
data phase of the transfer.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
Specifies the source endpoint of the SnapMirror relationship in one of two formats. The normal format
includes the names of the Vserver (vserver), and/or volume (volume). A format which also includes the
name of the cluster (cluster) is also provided for consistency with other snapmirror commands. The form
of the pathname which includes the cluster name cannot be used when operating in a Vserver context.
| [-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameter -source-volume must also be specified.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameter -
source-vserver must also be specified.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
Specifies the destination endpoint of the SnapMirror relationship in one of two formats. The normal
format includes the names of the Vserver (vserver), and/or volume (volume). A format which also
includes the name of the cluster (cluster) is also provided for consistency with other snapmirror
commands. The form of the pathname which includes the cluster name cannot be used when operating
in a Vserver context.
| -destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameter -destination-volume must also be specified.
[-destination-volume <volume name>]} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified, parameter
-destination-vserver must also be specified.
[-relationship-info-only [true]] - Remove relationship info only (skip cleanup of snapshots)
If this parameter is specified, the cleanup of Snapshot copies is bypassed and only the source
relationship information is removed. It is recommended to specify this parameter only when the source
volume is not accessible.
[-relationship-id <UUID>] - Relationship ID
This optional parameter specifies the relationship identifier of the relationship. It must be specified
when information for more than one relationship with the same source and destination paths is present.
This parameter is not supported for Vserver SnapMirror relationships.
[-force | -f [true]] - Force
If this parameter is specified, the command proceeds without prompting for confirmation.
● Examples
To release the source information for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror4, type the following command:
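Assuming the documented -destination-path syntax:

snapmirror release -destination-path vs2.example.com:dept_eng_dp_mirror4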
To release the source information for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror4, and relationship-id 5f91a075-6a72-11e1-b562-123478563412,
type the following command:
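Assuming the same syntax with the -relationship-id parameter added:

snapmirror release -destination-path vs2.example.com:dept_eng_dp_mirror4 -relationship-id 5f91a075-6a72-11e1-b562-123478563412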
To release the source information for the SnapMirror relationship with the destination endpoint
dvs1.example.com:, type the following command:
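Assuming the Vserver path format:

snapmirror release -destination-path dvs1.example.com: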
Under PVR control to release the source information for the synchronous SnapMirror Consistency Group
relationship with the destination Consistency Group cg_dst in Vserver vs2.example.com, type the following
command:
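Assuming the Consistency Group path format [vserver:]/cg/[cgname]:

snapmirror release -destination-path vs2.example.com:/cg/cg_dst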
Under PVR control to release just the source information but not remove the Snapshot copies that might be
needed for a subsequent resync for the synchronous SnapMirror Consistency Group relationship with the
destination Consistency Group cg_dst in Vserver vs2.example.com, type the following command:
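Assuming the -relationship-info-only parameter described above:

snapmirror release -destination-path vs2.example.com:/cg/cg_dst -relationship-info-only true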
Related reference
snapmirror list-destinations (page 768)
snapmirror create (page 752)
snapmirror show (page 800)
snapmirror restore
Restore a Snapshot copy from a source volume to a destination volume
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror restore command restores the entire contents of a Snapshot copy or one or more files,
LUNs or NVMe namespaces of a Snapshot copy from one volume to another volume.
The destination volume of a snapmirror restore, to which one or more files, LUNs or NVMe namespaces are being restored, can simultaneously be the source volume of a snapmirror update.
For a file, LUN or NVMe namespace restore the following steps are performed:
• Create the RST SnapMirror relationship.
• If any file, LUN or NVMe namespace being restored does not exist on the destination volume, create all
such files, LUNs or NVMe namespaces.
• Prevent client I/O to files, LUNs or NVMe namespaces to which data is being restored on the destination
volume.
• Revoke locks and space reservations held by NAS clients for files being restored.
• Copy the contents of all source files, LUNs or NVMe namespaces to the corresponding file, LUN or NVMe
namespace on the destination volume.
• Allow client I/O to files, LUNs or NVMe namespaces to which data has been restored on the destination
volume.
• Delete the RST SnapMirror relationship.
Note
Some file restore operations require a Snapshot copy to be created. This Snapshot copy is temporary; it is deleted before the operation completes.
Since client I/O is not allowed to files, LUNs or NVMe namespaces being restored, client I/O to files, LUNs or
NVMe namespaces being restored should be quiesced. Mapped LUNs or NVMe namespaces remain mapped
throughout the operation. SAN clients do not need to rediscover a mapped LUN that has been restored.
Restoring an NVMe namespace on top of another NVMe namespace with a different attribute relevant to
NVMe protocol accessibility (like size) is not supported.
If snapmirror restore fails or is aborted, the RST relationship remains. Use the snapmirror show
command with the destination volume to display the reason for the error. An EMS is also generated when a
failure occurs. There are two options to recover when restore fails or is aborted:
• Take corrective action suggested by the EMS and reissue the original command.
• Use snapmirror restore -clean-up-failure along with specifying the destination volume to cancel
the request.
When specifying -clean-up-failure to cancel a file restore request, the following steps are performed:
• Any files to which client I/O is not allowed are removed.
• Any Snapshot copy created for use during a file restore operation is deleted.
• The RST SnapMirror relationship is deleted.
Note
LUNs to which client I/O is not allowed remain. For LUNs to which client I/O is not allowed, do one of
the following:
• Use the snapmirror restore command to restore data to the LUN. Once the command completes
successfully, client I/O to the LUN is allowed.
• Delete the LUN using the lun delete command with the -force-fenced parameter.
Note
Similarly, NVMe namespaces to which client I/O is not allowed remain. For NVMe namespaces to
which client I/O is not allowed, do one of the following:
• Use the snapmirror restore command to restore data to the NVMe namespace. Once the command
completes successfully, client I/O to the NVMe namespace is allowed.
• Delete the NVMe namespace using the vserver nvme namespace delete command with the -skip-
mapped-check parameter.
The snapmirror restore command must be used from the destination Vserver or cluster.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
Specifies the source endpoint in one of three formats. The basic format includes the names of the
Vserver (vserver) and volume (volume). A format which also includes the name of the cluster (cluster) is
supported for consistency with other snapmirror commands. The form of the pathname which includes
the cluster name is not valid when operating in a Vserver context. A non-Data ONTAP source endpoint
(for example, AltaVault) can be specified in the form hostip:/share/share-name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the cluster in which the source volume resides. This parameter is not needed; it is provided for
consistency with other snapmirror commands. If this parameter is specified, the -source-vserver
and -source-volume parameters must also be specified. This parameter is not valid when operating in
a Vserver context. This parameter is not supported if the source is a non-Data ONTAP endpoint.
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. If this parameter is specified, the -source-
volume parameter must also be specified. This parameter is not supported if the source is a non-Data
ONTAP endpoint.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, the -source-
vserver parameter must also be specified. This parameter is not supported if the source is a non-Data
ONTAP endpoint.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
Specifies the destination endpoint in one of two formats. The basic format includes the names of the
Vserver (vserver) and volume (volume). A format that also includes the name of the cluster (cluster) is
supported for consistency with other snapmirror commands. The form of the pathname which includes
the cluster name is not valid when operating in a Vserver context.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the cluster in which the destination volume resides. This parameter is not needed; it is
provided for consistency with other snapmirror commands. If this parameter is specified, the -
destination-vserver and -destination-volume parameters must also be specified. This
parameter is not valid when operating in a Vserver context. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver. If this parameter is specified, the -destination-volume parameter
must also be specified.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume. If this parameter is specified, the -destination-vserver parameter
must also be specified.
[-source-snapshot | -s <text>] - Source Snapshot
When restoring the entire contents of a Snapshot copy, this optional parameter identifies the Snapshot
copy to be restored from the source volume to the destination volume. The default value is the latest
snapshot on the source volume. When restoring one or more files, LUNs or NVMe namespaces from a
Snapshot copy, this parameter is required.
[-throttle | -k <throttleType>] - Throttle (KB/sec)
This optional parameter limits the network bandwidth used for the restore transfer when the source and
destination volumes belong to different clusters. It sets the maximum rate (in Kbytes/sec) at which data
can be transferred between the clusters during the operation. To fully use the network bandwidth
available between the clusters, set the throttle value to unlimited or 0. The minimum throttle value is
four Kbytes/sec, so if you specify a throttle value between 1 and 4, it will be treated as if you specified 4.
[-transfer-priority {low|normal}] - Transfer Priority
This optional parameter specifies the priority at which the transfer runs. The default value for this
parameter is normal.
[-disable-storage-efficiency [true]] - Disable Storage Efficient Transfer
The default behavior of restore is to preserve storage efficiency when possible. Use this optional
parameter to turn off storage efficiency for data transferred over the wire and written to the destination
volume.
[-clean-up-failure [true]] - Clean up after Failure
Use this optional parameter to recover from an aborted or failed restore operation. Any temporary RST
relationship is removed from the destination Vserver. An attempt is made to remove any temporary RST
relationship from the source Vserver. If cleaning up an incomplete restore of the entire contents of a
Snapshot copy and the destination volume was read-write prior to the failed or aborted restore
operation, it is converted back to read-write if necessary, while removing all data transferred or copied
during the restore operation. If cleaning up an incomplete restore of one or more files, LUNs or NVMe
namespaces of a Snapshot copy, any file to which client I/O is not allowed is deleted.
[-tries <unsigned32_or_unlimited>] - Tries Limit
Specifies the total number of attempts to transfer data in cases where a transfer is interrupted by an
error that SnapMirror can recover from. The value of this parameter must be a positive integer or
unlimited.
[-force | -f [true]] - Force
If this parameter is specified, the command proceeds without prompting for confirmation.
[-file-list <<source path>[,@<destination path>]>, ...] - File List
Specifies the files, LUNs or NVMe namespaces to be restored. The list can contain specifications for up to
8 files, LUNs or NVMe namespaces. Specification for each file, LUN or NVMe namespace consists of a
source_path and an optional destination_path, and is of the form 'source_path[,@destination_path]'.
source_path is required and is the path of the file from the source Snapshot copy, e.g. /dira/file1 or /lun1.
The source path does not include the Snapshot name nor the source volume name. The path to each file
to be restored in the active file system of the destination volume is the same as the path specified by
source_path, unless an optional destination_path is specified. destination_path begins with the @
symbol followed by the path of the file from the root of the active file system of the destination volume,
e.g. @/file1 or @/dira/lun1. Each source_path and destination_path is a separate entity in the list of
paths. A destination_path is associated with the source_path that immediately precedes it. If this
parameter is specified, -source-snapshot must also be specified. Examples:
/dira/file1
/dira/file1,@/dirb/file2
/dira/file1,@/dirb/file2,/dirc/file3
● Examples
The following example does an incremental restore between the restore source volume
vs2.example.com:dept_eng_dp_mirror2 and the restore destination volume vs1.example.com:dept_eng:
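Assuming the documented source and destination path syntax:

snapmirror restore -source-path vs2.example.com:dept_eng_dp_mirror2 -destination-path vs1.example.com:dept_eng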
The following example restores /file3 from the source Snapshot copy snap3 on the source volume
vs2.example.com:dept_eng_dp_mirror2 to the active file system of the restore destination volume
vs1.example.com:dept_eng:
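Assuming the -source-snapshot and -file-list syntax described above:

snapmirror restore -source-path vs2.example.com:dept_eng_dp_mirror2 -destination-path vs1.example.com:dept_eng -source-snapshot snap3 -file-list /file3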
The following example restores /file3 from the source Snapshot copy snap3 on the source volume
vs2.example.com:dept_eng_dp_mirror2 to /file3.new in the active file system of the restore destination
volume vs1.example.com:dept_eng:
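Assuming the @destination_path rename syntax described above:

snapmirror restore -source-path vs2.example.com:dept_eng_dp_mirror2 -destination-path vs1.example.com:dept_eng -source-snapshot snap3 -file-list /file3,@/file3.new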
The following example restores /file1, /file2, and /file3 from the source Snapshot copy snap3 on the source
volume vs2.example.com:dept_eng_dp_mirror2 respectively to /file1.new, /file2, and /file3.new in the active
file system of the restore destination volume vs1.example.com:dept_eng:
vs1.example.com::> snapmirror restore
    -source-path vs2.example.com:dept_eng_dp_mirror2
    -destination-path vs1.example.com:dept_eng
    -source-snapshot snap3
    -file-list /file1,@/file1.new,/file2,/file3,@/file3.new

Warning: This command will overwrite any file on destination
         "vs1.example.com:dept_eng" that has the same path as any of
         the files to be restored.
Do you want to continue? {y|n}: y
[Job 36] Job is queued: snapmirror restore from source
"vs2.example.com:dept_eng_dp_mirror2" for the snapshot snap3.
vs1.example.com::>
The following example deletes data from an incomplete file restore, captured in a Snapshot copy which was
later promoted to the active file system of a volume.
Related reference
22. SnapMirror Commands (page 747)
volume snapshot restore (page 1844)
volume clone commands (page 1687)
snapmirror break (page 750)
volume quota modify (page 1798)
volume snapshot create (page 1838)
snapmirror show (page 800)
snapmirror update (page 834)
lun delete (page 230)
vserver nvme namespace delete (page 2237)
volume show-space (page 1678)
snapmirror resume
Enable future transfers
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror resume command enables future transfers for a SnapMirror relationship that has been
quiesced.
If there is a scheduled transfer for the relationship, it will be triggered on the next schedule. If there is a
restart checkpoint, it will be re-used if possible.
If applied on a load-sharing (LS) SnapMirror relationship, it enables future transfers for all the relationships
in the load-sharing set.
If applied on a relationship with a policy of type strict-sync-mirror or sync-mirror, it enables future resync
operations and initiates an Auto Resync.
When a quiesced SnapMirror relationship is resumed, future transfers remain enabled across reboots and
fail-overs.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
The snapmirror resume command must be used from the destination Vserver or cluster.
The relationship must exist on the destination Vserver or cluster. When issuing snapmirror resume, you
must specify the destination endpoint. The specification of the source endpoint of the relationship is
optional.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This parameter is
not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified,
parameters -destination-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -
destination-cluster must also be specified. This parameter is not supported for relationships with
non-Data ONTAP destination endpoints.
● Examples
To re-enable future transfers for the SnapMirror relationship with the destination endpoint
vs2.example.com:dept_eng_dp_mirror2 that has been previously quiesced, type the following command:
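Assuming the documented -destination-path syntax:

snapmirror resume -destination-path vs2.example.com:dept_eng_dp_mirror2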
To re-enable future transfers for the SnapMirror relationship with the destination endpoint cluster2://
vs2.example.com/dept_eng_dp_mirror2 that has been previously quiesced, type the following command:
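Assuming the "Pre 8.2" cluster-qualified path format:

snapmirror resume -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2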
To re-enable future transfers for the Vserver SnapMirror relationship with the destination endpoint
dvs1.example.com: that has been previously quiesced, type the following command:
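Assuming the Vserver path format:

snapmirror resume -destination-path dvs1.example.com: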
Under PVR control to re-enable future transfers and initiate an Auto Resync of the synchronous SnapMirror
Consistency Group relationship with the destination Consistency Group cg_dst in Vserver vs2.example.com,
type the following command:
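Assuming the Consistency Group path format [vserver:]/cg/[cgname]:

snapmirror resume -destination-path vs2.example.com:/cg/cg_dst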
Related reference
snapmirror show (page 800)
snapmirror quiesce (page 779)
snapmirror resync
Start a resynchronize operation
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror resync command establishes or reestablishes a mirroring relationship between a source
and a destination endpoint. The endpoints can be Vservers, volumes or non-Data ONTAP endpoints that
support SnapMirror. snapmirror resync for a SnapMirror relationship with volumes as endpoints is
typically executed in the following cases:
• The destination mirror is broken (that is, the destination volume is a read-write volume and no longer a
data protection mirror). After the snapmirror resync command completes, the destination volume is
made a data protection mirror and the mirror can be manually updated or scheduled for updates.
• snapmirror update command failed because the required common Snapshot copy was deleted on the
source volume.
• The volumes are the first and third endpoints in a cascade chain of relationships and they have a common
Snapshot copy. In this case, snapmirror resync might implicitly create the SnapMirror relationship
between them.
The snapmirror resync command can cause data loss on the destination
volume because the command can remove the exported Snapshot copy on the
destination volume.
The default behavior of the snapmirror resync command for volume relationships is defined as follows:
• Finds the most recent common Snapshot copy between the source and destination volumes, removes
Snapshot copies on the destination volume that are newer than the common Snapshot copy and mounts
the destination volume as a DP volume with the common Snapshot copy as the exported Snapshot copy.
• For data protection (DP) relationships, takes a Snapshot copy of the source volume to capture the current
image and transfers Snapshot copies that are newer than the common Snapshot copy from the source
volume to the destination volume. For extended data protection (XDP) relationships, transfers Snapshot
copies newer than the common Snapshot copy according to the relationship policy, i.e., Snapshot copies
will match rules associated with the policy as defined by the snapmirror policy commands. For
relationships associated with snapmirror policy of type async-mirror and mirror-vault the snapmirror
resync first takes a Snapshot copy of the source volume and includes it in the Snapshot copies selected
for transfer.
• For a SnapLock Compliance volume in an XDP relationship with SnapMirror policy of type async-mirror, if
SnapMirror resync operation detects data divergence between the common Snapshot copy and the AFS on
the destination volume, the resync operation preserves the data changes in a locked Snapshot copy for
the duration of the current volume expiry time. If the volume expiry time is in the past or has not been
set, then the Snapshot copy is locked for a duration of 30 days. The common Snapshot copy is also locked
for the same duration.
For Vserver SnapMirror relationships, a resync operation is typically executed when the relationship is
broken-off, the subtype of the destination Vserver is default and the destination volumes are of type read-
write. Once the command is queued, the subtype of the destination Vserver changes from default to dp-
destination. A successful resync operation also makes the destination Vserver's volumes data protection
volumes.
If the resync command is executed on a Vserver SnapMirror relationship, and the corresponding source and
destination Vservers have volumes with volume level SnapMirror relationships, then the volume level
SnapMirror relationships will be converted to volumes under the Vserver SnapMirror relationship. This
conversion is supported only for source and destination Vservers which have been transitioned from a 7-
Mode vFiler into a C-Mode Vserver. Some basic pre-requisites for the conversion are that the destination
Vserver should be in a stopped state and all the destination Vserver volumes except the root volume should
be in a volume level SnapMirror relationship with volumes of the source Vserver. The state of these volume
level SnapMirror relationships should be Snapmirrored and status should be Idle.
snapmirror resync for a relationship with a policy of type strict-sync-mirror or sync-mirror is typically
executed in the following case:
• The destination mirror is broken (that is, the destination volume is read-write and no longer read-only).
After the snapmirror resync command completes, the destination volume changes to read-only and
the relationship to InSync.
Note
The snapmirror resync command is typically not required to return a relationship that has fallen
out of sync due to an error condition to InSync because SnapMirror has Auto Resync for synchronous
relationships. When SnapMirror detects that the relationship has fallen out of sync for any reason
other than a snapmirror quiesce, snapmirror break or snapmirror delete command was
executed on the relationship, it will automatically initiate a resync operation.
The default behavior of the snapmirror resync command for relationships with a policy of type strict-
sync-mirror or sync-mirror is defined as follows:
• Creates a Snapshot copy on the destination of the current image of the destination file system. This
Snapshot copy becomes the exported Snapshot copy for the volume during the resync operation.
• Finds the most recent common Snapshot copy between the source and destination volumes. Performs a
local rollback transfer to give the active file system the same data as the common Snapshot copy. It then
loops through a sequence, creating a Snapshot copy on the source volume, transferring the data captured
in that Snapshot copy, creating a Snapshot copy of the data on the destination, and repeating until the
relationship is close to InSync. After the last transfer, it enters cutover to bring the relationship to InSync.
• User-created Snapshot copies are not replicated by a resync operation.
• At the conclusion of the resync operation, the exported Snapshot copy on the destination is removed and
the client will then see the active file system on the destination volume. The relationship will be InSync
and periodic creation of common Snapshot copies will resume.
The snapmirror resync command supports an optional parameter "preserve". The parameter "preserve" is
only supported for extended data protection (XDP) relationships. It is not supported for relationships with a
non-Data ONTAP endpoint. It is not supported for relationships with a policy of type strict-sync-mirror and
sync-mirror. When used, the parameter "preserve" changes the behavior of the snapmirror resync
command. The changed behavior of the command can be described as follows:
• Finds the most recent common Snapshot copy between the source and destination volumes, preserves all
Snapshot copies on the destination volume that are newer than the common Snapshot copy, and mounts
the destination volume as a DP volume with the common Snapshot copy as the exported Snapshot copy.
• Performs a local rollback transfer to make a copy of the common Snapshot copy on the destination volume
and establish it as the latest Snapshot copy on the destination volume. The command then transfers all
Snapshot copies that are newer than the common Snapshot copy, from the source volume to the
destination volume. The command only transfers Snapshot copies that match the relationship's policy, i.e.,
Snapshot copies will match rules associated with the policy as defined by the snapmirror policy
commands.
If a SnapMirror relationship does not already exist, that is, the relationship was not created using the
snapmirror create command, the snapmirror resync command will implicitly create the SnapMirror
relationship, with the same behaviors as described for the snapmirror create command before resyncing
it.
For Vservers, you must create SnapMirror relationships between Vservers by using the snapmirror create
command before you run the snapmirror resync command. The snapmirror resync command does not
implicitly create the relationship.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", you can track the progress of the
operation using the snapmirror show command.
For relationships with "Relationship Capability" of "Pre 8.2", a job will be spawned to operate on the
SnapMirror relationship, and the job id will be shown in the command output. The progress of the job can
be tracked using the job show and job history show commands.
The snapmirror resync command fails if the destination volume does not have a Snapshot copy in
common with the source volume.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-destination-vserver <vserver name> - Destination Vserver
Specifies the destination Vserver of the SnapMirror relationship. For relationships with volumes as
endpoints, if this parameter is specified, parameters -destination-volume and for relationships with
"Relationship Capability" of "Pre 8.2", -destination-cluster must also be specified. This parameter is
not supported for relationships with non-Data ONTAP destination endpoints.
-destination-volume <volume name>} - Destination Volume
Specifies the destination volume of the SnapMirror relationship. If this parameter is specified,
parameters -destination-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -
destination-cluster must also be specified. This parameter is not supported for relationships with
non-Data ONTAP destination endpoints.
[-source-snapshot | -s <text>] - Source Snapshot
This optional parameter specifies a Snapshot copy to transfer. The default behavior for a data protection
relationship with a read-write source is that Data ONTAP creates a new Snapshot copy and uses it as the
basis for determining what data are replicated; with this option, the specified Snapshot copy will be
used instead. The default behavior for an extended data protection relationship depends on the
relationship's policy type. For a data protection relationship, the specified Snapshot copy must be newer
than the latest common Snapshot copy. For an extended data protection relationship, the specified
Snapshot copy can be newer or older than the common Snapshot copy. This parameter is not supported
for relationships with "Relationship Capability" of "Pre 8.2".
[-type <snapmirrorType>] - Snapmirror Relationship Type
Specifies the type of SnapMirror relationship if a relationship is implicitly created. This parameter is the
same as the one used in the snapmirror create command.
[-policy <sm_policy>] - SnapMirror Policy
This optional parameter designates the name of the SnapMirror policy which is associated with the
SnapMirror relationship. If you do not designate a policy, the current policy will be retained. This
parameter is not applicable to relationships with "Relationship Capability" of "Pre 8.2".
Note
You define and name a policy using the snapmirror policy create command.
[-quick-resync [true]] - Quick Resync
This parameter is only supported for extended data protection (XDP) relationships. It is not
supported for relationships with non-Data ONTAP endpoints, or for relationships with a policy of type
strict-sync-mirror or sync-mirror. Specifying this optional parameter reduces the resync
time because the resync does not incur storage efficiency overhead before the transfer of new data.
Specifying this parameter is recommended if the source of the resync does not have volume efficiency
enabled or if reducing resync time is more important than preserving all possible storage efficiency.
When this parameter is specified, resync does not preserve the storage efficiency of the new data with
existing data over the wire and on the destination.
[-cg-item-mappings {<source volume>:@<destination volume>|<source item path>:@<destination
item path>}, ...] - Consistency Group Item Mappings
This optional parameter specifies a list of the consistency group (CG) item mappings. It is considered
only if the supplied source and destination path values denote valid CG paths. For CG level relationships,
this parameter must be specified. The value for this parameter must contain mappings of one or more
pairs of constituent source and destination volumes of the form srcvol1:@dstvol1,....
[-is-auto-expand-enabled {true|false}] - Is Auto Expand Enabled
This optional parameter specifies whether or not a FlexGroup SnapMirror relationship and its destination
FlexGroup should be auto-expanded if the source FlexGroup is expanded. This parameter is supported
only for FlexGroup SnapMirror relationships. If this resync is creating a new SnapMirror relationship, the
default value is true. If it is not creating a new relationship, if a value is specified, it must match the
current value for the existing relationship. If the parameter is not specified, the existing value will be
retained.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the
default setting is true (the operation runs in the foreground). When set to true, the command will not
return until the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
● Examples
To reestablish mirroring for the destination endpoint vs2.example.com:dept_mkt_mirror that has been
previously broken off with the snapmirror break command, type the following command:
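A sketch of the command, using the vserver:volume destination path format described under Parameters (the cluster2::> prompt is illustrative):

```
cluster2::> snapmirror resync -destination-path vs2.example.com:dept_mkt_mirror
```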
For relationships with "Relationship Capability" of "Pre 8.2", to reestablish mirroring for the destination
endpoint cluster2://vs2.example.com/dept_mkt_mirror that has been previously broken off with the
snapmirror break command, type the following command:
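A sketch of the command in the "Pre 8.2" cluster://vserver/volume path format (prompt illustrative):

```
cluster2::> snapmirror resync -destination-path cluster2://vs2.example.com/dept_mkt_mirror
```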
To create a SnapMirror relationship and reestablish mirroring between the destination endpoint named
vs2.example.com:dept_eng_dp_mirror2 and the source endpoint named vs1.example.com:dept_eng, type
the following command:
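A sketch of the command; because no relationship exists yet, both source and destination paths are given so the relationship is implicitly created (prompt illustrative):

```
cluster2::> snapmirror resync -source-path vs1.example.com:dept_eng -destination-path vs2.example.com:dept_eng_dp_mirror2
```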
To create a SnapMirror relationship and reestablish mirroring between the destination endpoint named
cluster2://vs2.example.com/dept_eng_dp_mirror2 and the source endpoint named cluster1://
vs1.example.com/dept_eng when the source cluster is running Data ONTAP 8.1 software, type the following
command:
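A sketch of the command in the "Pre 8.2" path format, again naming both endpoints (prompt illustrative):

```
cluster2::> snapmirror resync -source-path cluster1://vs1.example.com/dept_eng -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror2
```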
To create and reestablish an extended data protection (XDP) relationship between the Data ONTAP source
endpoint vs1.example.com:data_ontap_vol, and the non-Data ONTAP (for example, AltaVault) destination
endpoint 10.0.0.11:/share/share1, and start the initial transfer, type the following command:
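A sketch of the command; -type XDP marks the implicitly created relationship as extended data protection, and the destination uses the hostip:/share/share-name format documented for AltaVault endpoints (this is a reconstruction, not verbatim from this manual):

```
cluster1::> snapmirror resync -source-path vs1.example.com:data_ontap_vol -destination-path 10.0.0.11:/share/share1 -type XDP
```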
To reestablish mirroring for the destination endpoint dvs1.example.com: of a Vserver relationship that has
been previously broken off with the snapmirror break command, type the following command:
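A sketch of the command; a Vserver relationship is addressed by the Vserver path dvs1.example.com: alone (prompt illustrative):

```
cluster2::> snapmirror resync -destination-path dvs1.example.com:
```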
Under PVR control, to create a SnapMirror synchronous Consistency Group relationship with the following
attributes:
• It is between the source Consistency Group cg_src in Vserver vs1.example.com, and the destination
Consistency Group cg_dst in Vserver vs2.example.com.
• It has item mappings between lun1 and lun2 on volume srcvol and lun1 and lun2 on volume dstvol.
• It uses a policy named Sync that has a policy type of smgr-mirror that the user has previously created.
and reestablish mirroring, type the following command:
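A sketch of the command, combining the /cg/ path format, the -cg-item-mappings volume pairs, and the previously created policy named Sync (prompt and exact syntax are a reconstruction):

```
cluster2::> snapmirror resync -source-path vs1.example.com:/cg/cg_src -destination-path vs2.example.com:/cg/cg_dst -cg-item-mappings srcvol:@dstvol -policy Sync
```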
Under PVR control, to reestablish mirroring to the destination Consistency Group cg_dst in Vserver
vs2.example.com that has been previously broken off with the snapmirror break command, type the
following command:
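A sketch of the command, addressing the destination Consistency Group by its /cg/ path (prompt illustrative):

```
cluster2::> snapmirror resync -destination-path vs2.example.com:/cg/cg_dst
```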
Related reference
snapmirror create (page 752)
snapmirror policy create (page 852)
snapmirror modify (page 772)
snapmirror update (page 834)
snapmirror policy commands (page 850)
snapmirror quiesce (page 779)
snapmirror break (page 750)
snapmirror delete (page 759)
snapmirror set-options
Display/Set SnapMirror options
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror set-options command can be used to display or set SnapMirror options.
● Parameters
[-dp-source-xfer-reserve-pct {0|25%|50%|75%|100%}] - Percentage Reserved for DP Source Transfers
Specifies the percentage of maximum allowed concurrent transfers reserved for source DP transfers.
[-xdp-source-xfer-reserve-pct {0|25%|50%|75%|100%}] - Percentage Reserved for XDP Source Transfers
Specifies the percentage of maximum allowed concurrent transfers reserved for source XDP transfers.
[-dp-destination-xfer-reserve-pct {0|25%|50%|75%|100%}] - Percentage Reserved for DP Destination Transfers
Specifies the percentage of maximum allowed concurrent transfers reserved for destination DP transfers.
[-xdp-destination-xfer-reserve-pct {0|25%|50%|75%|100%}] - Percentage Reserved for XDP Destination Transfers
Specifies the percentage of maximum allowed concurrent transfers reserved for destination XDP transfers.
● Examples
The following example displays SnapMirror options:
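A sketch of the invocation; issued with no parameters, the command displays the current values of the four reserve-percentage options described above, and a value can be set by naming the option (the cluster1::> prompt is illustrative):

```
cluster1::> snapmirror set-options
cluster1::> snapmirror set-options -dp-source-xfer-reserve-pct 25%
```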
snapmirror show
Display a list of SnapMirror relationships
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror show command displays information associated with SnapMirror relationships. By default,
the command displays the following information:
• Source path
• Relationship Type
• Destination Path
• Mirror State
• Relationship Status
• Total Progress
• Healthy
• Progress Last Updated
For backward compatibility with clustered Data ONTAP 8.1, and to accommodate load-sharing relationships
which are only supported in a Data ONTAP 8.1 compatible way, SnapMirror relationships, which match one
of the following conditions are managed as on clustered Data ONTAP 8.1: (1) The relationship is of type
load-sharing; (2) The source endpoint of the relationship is on a remote Data ONTAP 8.1 cluster; (3) The
local cluster was upgraded from clustered Data ONTAP 8.1, the relationship was created before the upgrade,
and the relationship has not yet been converted to one with Data ONTAP 8.2 capabilities. These
relationships have the same limitations as on clustered Data ONTAP 8.1. Especially, they support the same
set of information fields. The "Relationship Capability" field is set to "Pre 8.2" for these relationships.
The snapmirror show command displays information for SnapMirror relationships whose destination
endpoints are in the current Vserver if you are in a Vserver context, or in the current cluster if you are in a
cluster context, or on a non-Data ONTAP endpoint that supports SnapMirror (for example, AltaVault). For
backward compatibility with clustered Data ONTAP 8.1, the command also displays information for
SnapMirror relationships with the "Relationship Capability" of "Pre 8.2", and whose source endpoints are in
the current Vserver or cluster, and destination endpoints are in different Vservers or clusters. You must use
the snapmirror list-destinations command to display information for SnapMirror relationships
whose source endpoints are in the current Vserver or current cluster.
Some of the SnapMirror relationship information is cached. The snapmirror show command only returns
the cached information, therefore there is a delay after the information is changed before it is reflected in
the snapmirror show output. Other information, such as progress metrics during a transfer, is only
updated periodically and can be very delayed in the snapmirror show output.
The -instance and -fields parameters are mutually exclusive and select the information fields that are
displayed. The other parameters to the snapmirror show command select the SnapMirror relationships for
which information is displayed. The -instance displays detailed information fields including:
SnapMirror Policy: Name of the SnapMirror policy associated
with the relationship.
Only for relationships with
"Relationship Capability"
of "8.2 and above".
SnapMirror Schedule: Name of the schedule (empty if there is
no schedule) associated with the
relationship.
Tries Limit: Maximum number of times a transfer will be
tried.
Only for relationships with
"Relationship Capability"
of "Pre 8.2".
This parameter is not supported for Vserver
SnapMirror relationships.
Destination Volume Node: Node which owns the destination volume
of the relationship. For FlexGroup
relationships it is the node which owns
the root constituent destination volume.
Only for relationships with
"Relationship Capability"
of "8.2 and above".
This parameter is not supported for Vserver
SnapMirror relationships.
Identity Preserve Vserver DR: Whether or not the identity of the source
Vserver is replicated to the destination
Vserver. Can be:
- true: Source Vserver's configuration will
additionally be replicated to the
destination, along with the
Vserver's volumes and RBAC
configuration.
- false: Only volumes and RBAC configuration
of the source Vserver is replicated
to the destination.
This parameter is supported only for Vserver
SnapMirror relationships.
Volume MSIDs Preserved: Whether or not the MSIDs of the source
volumes are retained while creating
destination volumes.
Can be:
- true: MSIDs of source Vserver volumes and
destination Vserver volumes match.
- false: MSIDs of source Vserver volumes and
destination Vserver volumes do not
match.
This parameter is supported only for Vserver
SnapMirror relationships.
Is Auto Expand Enabled: Whether or not the auto expand is enabled.
Can be:
- true: Auto Expand is enabled.
- false: Auto Expand is disabled.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
Select SnapMirror relationships that have a matching source path name.
| [-source-cluster <Cluster name>] - Source Cluster
Select SnapMirror relationships that have a matching source cluster name. This parameter is not
supported for relationships with non-Data ONTAP source endpoints.
[-source-vserver <vserver name>] - Source Vserver
Select SnapMirror relationships that have a matching source Vserver name. This parameter is not
supported for relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Select SnapMirror relationships that have a matching source volume name. This parameter is not
supported for relationships with non-Data ONTAP source endpoints.
{ [-destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Destination Path
Select SnapMirror relationships that have a matching destination path name.
Note
Using wildcards with this parameter:
• To match all Vserver SnapMirror relationships, use: -destination-path *:
• To match all the SnapMirror relationships except Vserver SnapMirror relationships in the cluster,
use: -destination-path *
[-policy-type <snapmirror policy type>] - SnapMirror Policy Type
Select SnapMirror relationships that have a matching SnapMirror policy type. Possible values are:
• async-mirror
• vault
• mirror-vault
• automated-failover
• sync-mirror
• strict-sync-mirror
[-policy <sm_policy>] - SnapMirror Policy
Select SnapMirror relationships that have a matching SnapMirror policy.
[-tries <unsigned32_or_unlimited>] - Tries Limit
Select SnapMirror relationships that have a matching tries limit.
[-throttle | -k <throttleType>] - Throttle (KB/sec)
Select SnapMirror relationships that have a matching throttle.
[-cg-item-mappings {<source volume>:@<destination volume>|<source item path>:@<destination
item path>}, ...] - Consistency Group Item Mappings
Select SnapMirror relationships that have a matching cg-item-mappings.
[-current-throttle <throttleType>] - Current Transfer Throttle (KB/sec)
Select SnapMirror relationships that have a matching current throttle.
[-state <mirror state>] - Mirror State
Select SnapMirror relationships that have a matching mirror state. Possible values are:
• Uninitialized
• Snapmirrored
• Broken-off
[-status <mirror status>] - Relationship Status
Select SnapMirror relationships that have a matching relationship status. Possible values are:
• Idle
• Queued
• Transferring
• Preparing
• Finalizing
• Aborting
• Quiesced
• Quiescing
• Checking
Status values Finalizing, Checking, Waiting and Preparing are not supported for Vserver SnapMirror
relationships.
[-file-restore-file-count <integer>] - File Restore File Count
The number of files being restored by file restore.
[-file-restore-file-list <text>, ...] - File Restore File List
List of the destination file names of the files being restored by file restore.
[-transfer-snapshot <text>] - Transfer Snapshot
Select SnapMirror relationships that have a matching transfer Snapshot copy.
[-snapshot-progress {<integer>[KB|MB|GB|TB|PB]}] - Snapshot Progress
Select SnapMirror relationships that have a matching Snapshot progress.
[-last-transfer-end-timestamp <MM/DD HH:MM:SS>] - Last Transfer End Timestamp
Select SnapMirror relationships that have a matching last transfer end timestamp.
[-unhealthy-reason <text>] - Unhealthy Reason
Select SnapMirror relationships that have a matching unhealthy reason.
[-progress-last-updated <MM/DD HH:MM:SS>] - Progress Last Updated
Select SnapMirror relationships that have a matching progress last updated.
[-relationship-capability <text>] - Relationship Capability
Select SnapMirror relationships that have a matching relationship capability. This parameter is not
supported for Vserver SnapMirror relationships.
[-lag-time <[[<hours>:]<minutes>:]<seconds>>] - Lag Time
Select SnapMirror relationships that have a matching lag time.
[-current-transfer-priority {low|normal}] - Current Transfer Priority
Select SnapMirror relationships that have a matching current transfer priority.
[-is-smtape-op {true|false}] - SMTape Operation
Select SnapMirror relationships that have a matching smtape operation.
[-destination-volume-node <nodename>] - Destination Volume Node Name
Select SnapMirror relationships that have a matching destination volume node name. This parameter is
not supported for Vserver SnapMirror relationships.
[-identity-preserve {true|false}] - Identity Preserve Vserver DR
Select SnapMirror relationships that have a matching value for identity-preserve. This parameter is valid
only for Vserver SnapMirror relationships.
[-expand [true]] - Show Constituents of the Group
Specifies whether to display constituent relationships of Vserver, FlexGroup and Consistency Group
SnapMirror relationships. By default, the constituents are not displayed.
[-update-successful-count <integer>] - Number of Successful Updates
Select SnapMirror relationships that have a matching number of successful updates. This parameter is
not supported for Vserver SnapMirror relationships.
[-update-failed-count <integer>] - Number of Failed Updates
Select SnapMirror relationships that have a matching number of failed updates. This parameter is not
supported for Vserver SnapMirror relationships.
[-resync-successful-count <integer>] - Number of Successful Resyncs
Select SnapMirror relationships that have a matching number of successful resyncs. This parameter is
not supported for Vserver SnapMirror relationships.
[-resync-failed-count <integer>] - Number of Failed Resyncs
Select SnapMirror relationships that have a matching number of failed resyncs. This parameter is not
supported for Vserver SnapMirror relationships.
[-break-successful-count <integer>] - Number of Successful Breaks
Select SnapMirror relationships that have a matching number of successful breaks. This parameter is not
supported for Vserver SnapMirror relationships.
[-break-failed-count <integer>] - Number of Failed Breaks
Select SnapMirror relationships that have a matching number of failed breaks. This parameter is not
supported for Vserver SnapMirror relationships.
[-total-transfer-bytes <integer>] - Total Transfer Bytes
Select SnapMirror relationships that have a matching total transfer bytes. This parameter is not
supported for Vserver SnapMirror relationships.
[-total-transfer-time-secs <integer>] - Total Transfer Time in Seconds
Select SnapMirror relationships that have a matching total transfer time in seconds. This parameter is
not supported for Vserver SnapMirror relationships.
[-msid-preserve {true|false}] - Source Volume MSIDs Preserved
This parameter specifies whether the volume MSIDs are preserved at the destination. This parameter is
applicable only for Vserver SnapMirror relationships.
[-is-auto-expand-enabled {true|false}] - Is Auto Expand Enabled
Select SnapMirror relationships that have a matching value for auto expand. This parameter is supported
only for FlexGroup SnapMirror relationships. Possible values are:
• true
• false
● Examples
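A sketch of the default invocation and of a single-relationship query; the path and the cluster1::> prompt are illustrative, and the output columns follow the default field list above:

```
cluster1::> snapmirror show
cluster1::> snapmirror show -destination-path vs2.example.com:dept_eng_dp_mirror2 -instance
```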
The snapmirror show command displays information associated with SnapMirror relationships. By default,
the command displays the following information:
• Source path
• Relationship Type
• Destination Path
• Mirror State
• Relationship Status
• Total Progress
• Healthy
• Progress Last Updated
For backward compatibility with clustered Data ONTAP 8.1, and to accommodate load-sharing relationships
which are only supported in a Data ONTAP 8.1 compatible way, SnapMirror relationships, which match one
of the following conditions are managed as on clustered Data ONTAP 8.1: (1) The relationship is of type
load-sharing; (2) The source endpoint of the relationship is on a remote Data ONTAP 8.1 cluster; (3) The
local cluster was upgraded from clustered Data ONTAP 8.1, the relationship was created before the upgrade,
and the relationship has not yet been converted to one with Data ONTAP 8.2 capabilities. These
relationships have the same limitations as on clustered Data ONTAP 8.1. Especially, they support the same
set of information fields. The "Relationship Capability" field is set to "Pre 8.2" for these relationships.
The snapmirror show command displays information for SnapMirror relationships whose destination
endpoints are in the current Vserver if you are in a Vserver context, or in the current cluster if you are in a
cluster context, or on a non-Data ONTAP endpoint that supports SnapMirror (for example, AltaVault). For
backward compatibility with clustered Data ONTAP 8.1, the command also displays information for
SnapMirror relationships with the "Relationship Capability" of "Pre 8.2", and whose source endpoints are in
the current Vserver or cluster, and destination endpoints are in different Vservers or clusters. You must use
the snapmirror list-destinations command to display information for SnapMirror relationships
whose source endpoints are in the current Vserver or current cluster.
Some of the SnapMirror relationship information is cached. The snapmirror show command returns only
the cached information; as a result, there is a delay between a change to the information and its
appearance in the snapmirror show output. Other information, such as progress metrics during a transfer,
is updated only periodically and can be significantly delayed in the snapmirror show output.
The -instance and -fields parameters are mutually exclusive and select the information fields that are
displayed. The other parameters to the snapmirror show command select the SnapMirror relationships for
which information is displayed. The -instance parameter displays detailed information fields, including:
Unhealthy Reason: Reason the SnapMirror relationship is not
healthy. Only for relationships with
"Relationship Capability"
of "8.2 and above"
Newest Snapshot: Name of the newest Snapshot copy on the
destination volume.
Newest Snapshot Timestamp: Timestamp of the newest Snapshot copy.
Exported Snapshot: Name of the exported Snapshot copy on the
destination volume.
Exported Snapshot Timestamp: Timestamp of the exported Snapshot copy.
Lag Time: Time since the exported Snapshot copy
was created. It is displayed in the
format: hours:minutes:seconds.
Only for relationships with
"Relationship Capability"
of "8.2 and above".
Transfer Type: Type of the current transfer operation.
Can be one of the following:
- initialize
- update
- resync
- restore
Only for relationships with
"Relationship Capability"
of "8.2 and above".
Transfer Snapshot: Name of the Snapshot copy being transferred.
Snapshot Progress: Amount of data transferred for the transfer
snapshot. This parameter is not supported
for SnapMirror FlexGroup relationships, but
it is supported for FlexGroup constituent
relationships.
Total Progress: Total amount of data transferred for the
current transfer operation. This parameter
is not supported for SnapMirror FlexGroup
relationships, but it is supported for
FlexGroup constituent relationships.
Network Compression Ratio: The compression ratio achieved for the data
sent over the wire as a part of the current
transfer operation. The ratio is not
maintained across checkpoint restarts. If
network compression is disabled for the
transfer, the ratio will be set to 1:1.
Only for relationships with
"Relationship Capability"
of "8.2 and above".
This parameter is not supported for Vserver
or FlexGroup SnapMirror relationships, but
it is supported for FlexGroup constituent
relationships.
Snapshot Checkpoint: The amount of data transferred as recorded in
the restart checkpoint of the current or most
recent transfer snapshot. If a restart
checkpoint is present, the next transfer
resumes from it.
The following fields are available only under PVR control for relationships with policy type sync-mirror.
The following fields are available only at the diagnostic privilege level:
Last Transfer Error Codes: Set of Data ONTAP internal error codes
providing information on the context
of the previous transfer failure.
Only for relationships with
"Relationship Capability"
of "8.2 and above".
This parameter is not supported for Vserver
SnapMirror relationships.
Source Vserver UUID: The unique identifier of the source Vserver.
Only for relationships with
"Relationship Capability"
of "8.2 and above".
Destination Vserver UUID: The unique identifier of the destination
Vserver.
Only for relationships with
"Relationship Capability"
of "8.2 and above".
Source Endpoint UUID: The unique identifier of the non-Data ONTAP
source endpoint of a relationship. Only for
relationships with a non-Data ONTAP source
endpoint.
Destination Endpoint UUID: The unique identifier of the non-Data ONTAP
destination endpoint of a relationship. Only
for relationships with a non-Data ONTAP
destination endpoint.
Consistency Group Item Mappings: List of the consistency group item mappings.
Only for consistency group (CG) level relationships.
The example below displays summary information for all SnapMirror relationships with destination
endpoints in the current cluster:
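The summary view described above is produced by running the command with no selection parameters; a reconstructed form of the invocation (exact prompt and column layout omitted) is:

```
snapmirror show
```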
Idle - true -
cluster2-vs1.example.com:xdp_src1
XDP cluster2-vs2.example.com:xdp_dst1
Snapmirrored
Idle - true -
cluster2://cluster2-vs1.example.com/ls_src1
LS cluster2://cluster2-vs1.example.com/ls_mr1
Snapmirrored
Idle - true -
cluster2://cluster2-vs1.example.com/ls_mr2
Snapmirrored
Idle - true -
5 entries were displayed.
The example below displays detailed information for the SnapMirror relationship with the destination
endpoint cluster2-vs2.example.com:dp_dst1.
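A single relationship's detailed fields are selected with -destination-path together with -instance; for the endpoint named above, a representative invocation would be:

```
snapmirror show -destination-path cluster2-vs2.example.com:dp_dst1 -instance
```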
The example below displays detailed information for SnapMirror relationships with the "Relationship
Capability" of "Pre 8.2" and source or destination endpoints in the current cluster.
The example below displays detailed information for the Vserver SnapMirror relationship with the
destination endpoint cluster2-dvs2.example2.com:.
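One plausible invocation for the Vserver relationship named above (the trailing colon with no volume name denotes a Vserver endpoint):

```
snapmirror show -destination-path cluster2-dvs2.example2.com: -instance
```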
Unhealthy Reason: -
Destination Volume Node: -
Relationship ID: -
Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: resync
Last Transfer Error: -
Last Transfer Size: -
Last Transfer Network Compression Ratio: -
Last Transfer Duration: -
Last Transfer From: cluster1-vs2.example1.com:
Last Transfer End Timestamp: -
Progress Last Updated: -
Relationship Capability: -
Lag Time: 18:47:9
Identity Preserve Vserver DR: false
Volume MSIDs Preserved: true
Is Auto Expand Enabled: -
Is Adaptive: -
Number of Successful Updates: -
Number of Failed Updates: -
Number of Successful Resyncs: -
Number of Failed Resyncs: -
Number of Successful Breaks: -
Number of Failed Breaks: -
Total Transfer Bytes: -
Total Transfer Time in Seconds: -
The following example displays detailed information for the SnapMirror relationship with the AltaVault
destination endpoint 10.0.0.11:/share/share1:
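A representative invocation for this AltaVault example, using the hostip:/share/share-name path format described under the path parameters:

```
snapmirror show -destination-path 10.0.0.11:/share/share1 -instance
```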
The example shows the usage of the -expand parameter to additionally display the constituents of Vserver
SnapMirror relationships with destination endpoints in the current cluster. Note that in the following
example, since there is no volume level relationship for the root volume of a Vserver, it is not shown in the
output:
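A sketch of the invocation with the -expand parameter applied to the summary view:

```
snapmirror show -expand
```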
Idle - true -
cluster1-vs2.example1.com:
DP cluster2-dvs2.example2.com:
Snapmirrored
Idle - true -
cluster1-vs2.example1.com:vol1
DP cluster2-dvs2.example2.com:vol1
Snapmirrored
Idle - true -
4 entries were displayed.
Related reference
snapmirror list-destinations (page 768)
snapmirror policy create (page 852)
snapmirror show-history
Displays history of SnapMirror operations.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror show-history command displays the history of SnapMirror operations. This command is
not supported for relationships with non-Data ONTAP endpoints.
By default, the command displays the following information:
• Destination Path
• Source Path
• Operation
• Start Time
• End Time
• Result
The snapmirror show-history command displays, in reverse chronological order, the history of
completed SnapMirror operations whose destination endpoints are in the current Vserver for Vserver
administrators, or the current cluster for cluster administrators. This command does not return information
on the following operations and relationships:
• Operations that happened prior to installing Data ONTAP 8.3.
• Relationships with the "Relationship Capability" field, as shown in the output of the snapmirror show
command, set to "Pre 8.2".
• Operations on FlexGroup relationships that happened prior to installing Data ONTAP 9.5.
• Operations on FlexGroup constituent relationships.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
{ [-destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Destination Path
Select SnapMirror operations that have a matching destination path name.
| [-destination-vserver <vserver name>] - Destination Vserver
Select SnapMirror operations that have a matching destination Vserver name.
[-destination-volume <volume name>]} - Destination Volume
Select SnapMirror operations that have a matching destination volume name.
[-operation-id <UUID>] - Operation ID
Select SnapMirror operations that have a matching operation ID.
[-source-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|<hostip:/
share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
Select SnapMirror operations that have a matching source path name.
[-source-vserver <vserver name>] - Source Vserver
Select SnapMirror operations that have a matching source Vserver name.
[-source-volume <volume name>] - Source Volume
Select SnapMirror operations that have a matching source volume name.
[-operation-type {create|modify|quiesce|resume|delete|initialize|manual-update|scheduled-
update|break|resync|abort|restore|failover}] - Operation Type
Select SnapMirror operations that have a matching operation type. Possible values are:
• create
• modify
• quiesce
• resume
• delete
• initialize
• manual-update
• scheduled-update
• break
• resync
• abort
• restore
• failover
[-start-time <MM/DD/YYYY HH:MM:SS>] - Start Time
Select SnapMirror operations that have a matching start time.
[-end-time <MM/DD/YYYY HH:MM:SS>] - End Time
Select SnapMirror operations that have a matching end time.
[-relationship-id <UUID>] - Relationship ID
Select SnapMirror operations that have a matching relationship ID.
[-relationship-group-type {none|vserver|consistencygroup|flexgroup}] - Relationship Group Type
Select SnapMirror relationships that have a matching relationship group type. Possible values are:
• none
• vserver
• consistencygroup
• flexgroup
[-result {success|failure}] - Result of the Operation
Select SnapMirror operations that have a matching result. Possible values are:
• success
• failure
[-transfer-size {<integer>[KB|MB|GB|TB|PB]}] - Transfer Size
Select SnapMirror operations that have a matching transfer size.
[-additional-info <text>] - Additional Information
Select SnapMirror operations that have matching additional information.
[-max-rows-per-relationship <integer>] - Maximum Number of Rows per Relationship
Select a matching number of SnapMirror operations per relationship.
[-expand [true]] - Show Constituents of the Group.
Select SnapMirror operations on relationships that are constituents and non-constituents of a group.
● Examples
The example below displays summary information for all SnapMirror operations on relationships with
destination endpoints in the current cluster:
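The summary described above comes from running the command with no selection parameters; a reconstructed invocation is:

```
snapmirror show-history
```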
The example below displays detailed information for the SnapMirror operation with operation ID
dc158715-0583-11e3-89bd-123478563412.
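A representative invocation for this example, selecting the single operation by its ID and requesting all fields:

```
snapmirror show-history -operation-id dc158715-0583-11e3-89bd-123478563412 -instance
```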
The example below displays detailed information for all SnapMirror operations on relationships with the
Result of "success" and whose destination endpoints are in the current cluster.
The example below displays summary information for all SnapMirror operations on relationships with max-
rows-per-relationship of 1 and whose destination endpoints are in the current cluster.
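A sketch of the invocation limiting the output to the most recent operation per relationship:

```
snapmirror show-history -max-rows-per-relationship 1
```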
snapmirror update
Start an incremental transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror update command updates the destination volume or non-Data ONTAP endpoint of a
SnapMirror relationship. The snapmirror update command behaves differently for data protection (DP),
extended data protection (XDP) and load-sharing (LS) relationships. Refer to the -type parameter of the
snapmirror create command to understand different types of relationships supported by SnapMirror.
The snapmirror update command performs an incremental transfer.
Before using this command, the relationship must be initialized using the snapmirror initialize or
snapmirror initialize-ls-set commands.
For data protection SnapMirror relationships with volumes as endpoints, the snapmirror update
command makes the destination volume an up-to-date mirror of the source volume with the following
steps:
• If the source volume is read-write, takes a Snapshot copy on the source volume to capture the current
image of the source volume
• Finds the most recent Snapshot copy on the destination volume and validates that the corresponding
Snapshot copy is still present on the source
• Incrementally transfers Snapshot copies that are newer than the corresponding Snapshot copy to the
destination volume
You can use the snapmirror update command to update a specific load-sharing mirror that lags behind
up-to-date destination volumes in the set of load-sharing mirrors. An update to the lagging load-sharing
mirror should bring it up to date with the other up-to-date destination volumes in the set of load-sharing
mirrors.
Note
Using the snapmirror update command to update a set of load-sharing mirrors will not work. Use
the snapmirror update-ls-set command to update a set of load-sharing mirrors.
For extended data protection (XDP) relationships with a snapmirror policy of type async-mirror, a
snapmirror update always creates a new Snapshot copy on the source volume. Depending on the rules in
the policy, the command might transfer just the newly created Snapshot copy or all Snapshot copies that are
newer than the common Snapshot copy including the newly created Snapshot copy to the destination
volume.
For extended data protection (XDP) relationships with a snapmirror policy of type vault, a snapmirror
update does not create a new Snapshot copy on the source volume but transfers only selected Snapshot
copies that are newer than the common Snapshot copy to the destination volume. (Those older than the
common copy can be transferred by using the -source-snapshot parameter.) Snapshot copies are
selected by matching the value of -snapmirror-label of a Snapshot copy with the value of -
snapmirror-label of one of the rules from the corresponding SnapMirror policy associated with the
SnapMirror relationship. All matching Snapshot copies are incrementally transferred to the destination
volume.
For extended data protection (XDP) relationships with a snapmirror policy of type mirror-vault, a
snapmirror update always creates a new Snapshot copy on the source volume and transfers only selected
Snapshot copies that are newer than the common snapshot copy. The newly created Snapshot copy is
always selected.
For extended data protection (XDP) relationships with a snapmirror policy of type vault or mirror-vault,
the snapmirror update command also manages expiration of Snapshot copies on the destination
volume. It does so by deleting Snapshot copies that have exceeded the value of -keep for the matching rule
from the corresponding SnapMirror policy associated with the SnapMirror relationship. Snapshot copies that
match the same -snapmirror-label will be deleted in oldest-first order.
For relationships with a policy of type strict-sync-mirror or sync-mirror, this command creates a new common
Snapshot copy and designates it as the exported Snapshot copy on the destination volume. It updates the
destination read-only view by redirecting IO to the new exported Snapshot copy. Clients could
experience a brief latency spike during this process as the primary IO is temporarily fenced. This command is
allowed only when the relationship-status is InSync. The command retains two pairs of common Snapshot
copies and deletes the older ones.
For data protection relationships, the parameter -source-snapshot is optional and only allows for the
transfer of Snapshot copies newer than the common Snapshot copy up to the specified -source-snapshot.
For extended data protection (XDP) relationships the parameter -source-snapshot is optional.
For extended data protection (XDP) relationships with a snapmirror policy of type vault or mirror-vault,
the parameter -source-snapshot allows transfer of a Snapshot copy that is older than the common
Snapshot copy and/or might not be selected for transfer based on policy-based selection of a scheduled
update transfer.
For extended data protection (XDP) relationships with a snapmirror policy of type async-mirror, the
snapmirror update with parameter -source-snapshot does not create a new Snapshot copy on the
source volume. Depending on the rules in the policy, the command might transfer just the specified
Snapshot copy or Snapshot copies that are newer than the common Snapshot copy up to and including the
specified Snapshot copy to the destination volume.
After the snapmirror update command successfully completes, the last Snapshot copy transferred is
designated as the new exported Snapshot copy on the destination volume. If an update to an extended data
protection (XDP) relationship specifies a Snapshot copy using the -source-snapshot parameter that is
older than the common snapshot, after the snapmirror update successfully completes, the exported
Snapshot copy on the destination volume will remain unchanged.
If the snapmirror update does not finish successfully (for example, due to a network failure or because a
snapmirror abort command was issued), a restart checkpoint might be recorded on the destination
volume. If a restart checkpoint is recorded, the next update restarts and continues the transfer from the
restart checkpoint. For extended data protection (XDP) relationships, the next update will restart and
continue the old transfer regardless of whether the Snapshot copy being transferred is a matching Snapshot
copy or not.
This command is supported for SnapMirror relationships with the field "Relationship Capability" showing as
either "8.2 and above" or "Pre 8.2" in the output of the snapmirror show command.
For relationships with "Relationship Capability" of "8.2 and above", you can track the progress of the
operation using the snapmirror show command.
For relationships with "Relationship Capability" of "Pre 8.2", a job will be spawned to operate on the
SnapMirror relationship, and the job id will be shown in the command output. The progress of the job can
be tracked using the job show and job history show commands.
For Vserver SnapMirror relationships, the snapmirror update command makes the destination Vserver an
up-to-date mirror of the source Vserver.
The snapmirror update command must be used from the destination Vserver or cluster.
● Parameters
{ [-source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
[-source-vserver <vserver name>] - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
[-source-volume <volume name>]} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
{ -destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
This parameter specifies the destination endpoint of the SnapMirror relationship in one of four path
formats. The normal format includes the names of the Vserver (vserver) and/or volume (volume). To
support relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of
the cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context
on relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with
AltaVault destinations, the destination endpoint is specified in the form hostip:/share/share-name. For
relationships with SolidFire destinations, the destination endpoint is specified in the form hostip:/lun/
name.
| [-destination-cluster <Cluster name>] - Destination Cluster
Specifies the destination cluster of the SnapMirror relationship. If this parameter is specified, parameters
-destination-vserver and -destination-volume must also be specified. This parameter is only
applicable for relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified
when operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
● Examples
To update the mirror relationship between the destination endpoint vs2.example.com:dept_eng_dp_mirror3
and its source endpoint, type the following command:
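A reconstructed form of that command, identifying the relationship by its destination path:

```
snapmirror update -destination-path vs2.example.com:dept_eng_dp_mirror3
```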
For relationships with "Relationship Capability" of "Pre 8.2", to update the mirror relationship between the
destination endpoint cluster2://vs2.example.com/dept_eng_dp_mirror3 and its source endpoint, type the
following command:
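A reconstructed form of that command, using the "Pre 8.2" path format that includes the cluster name:

```
snapmirror update -destination-path cluster2://vs2.example.com/dept_eng_dp_mirror3
```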
To update the Vserver SnapMirror relationship between destination endpoint dvs1.example.com: and its
source endpoint, type the following command:
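A reconstructed form of that command; the trailing colon with no volume name denotes a Vserver endpoint:

```
snapmirror update -destination-path dvs1.example.com:
```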
Related reference
snapmirror create (page 752)
snapmirror modify (page 772)
snapmirror initialize (page 762)
snapmirror initialize-ls-set (page 767)
snapmirror update-ls-set (page 838)
snapmirror policy commands (page 850)
snapmirror abort (page 747)
snapmirror show (page 800)
job show (page 198)
job history show (page 207)
snapmirror update-ls-set
Start an incremental load-sharing set transfer
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror update-ls-set command updates a set of load-sharing mirrors. The command makes
destination volumes, in the group of load-sharing mirrors, up-to-date mirrors of the source volume.
The key parameter that identifies the set of load-sharing mirrors is the source volume. SnapMirror transfers
are performed from the source volume to each of the up-to-date destination volumes in the set of load-
sharing mirrors.
The snapmirror update-ls-set command performs an incremental transfer to each of the destination
volumes. During an incremental transfer, Data ONTAP takes a Snapshot copy on the source volume to
capture the current image of the source volume, finds the most recent common Snapshot copy between the
source and destination volumes, and incrementally transfers Snapshot copies that are newer than the
common Snapshot copy to the destination volume.
Note
You still need to use the snapmirror update-ls-set command to manually update the set of load-
sharing mirrors even if the set has only one destination mirror. The snapmirror update command
can only be used to bring up to date a specific destination mirror that is lagging behind the set.
After an update using the snapmirror update-ls-set command successfully completes, the last
Snapshot copy transferred is made the new exported Snapshot copy on the destination volumes.
This command is only supported for SnapMirror relationships with the field "Relationship Capability" showing
as "Pre 8.2" in the output of the snapmirror show command.
● Parameters
{ -source-path | -S {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|<hostip:/
share/share-name>|<[vserver:]/cg/[cgname]>} - Source Path
This parameter specifies the source endpoint of the SnapMirror relationship in one of four path formats.
The normal format includes the names of the Vserver (vserver) and/or the volume (volume). To support
relationships with "Relationship Capability" of "Pre 8.2", a format which also includes the name of the
cluster (cluster) is provided. The "Pre 8.2" format cannot be used when operating in a Vserver context on
relationships with "Relationship Capability" of "8.2 and above". For SnapMirror relationships with an
AltaVault source, the source endpoint is specified in the form hostip:/share/share-name. For SnapMirror
relationships with a SolidFire source, the source endpoint is specified in the form hostip:/lun/name.
| [-source-cluster <Cluster name>] - Source Cluster
Specifies the source cluster of the SnapMirror relationship. If this parameter is specified, the -source-
vserver and -source-volume parameters must also be specified. This parameter is only applicable for
relationships with "Relationship Capability" of "Pre 8.2". This parameter cannot be specified when
operating in a Vserver context on relationships with "Relationship Capability" of "8.2 and above".
-source-vserver <vserver name> - Source Vserver
Specifies the source Vserver of the SnapMirror relationship. For relationships with volumes as endpoints,
if this parameter is specified, parameters -source-volume and for relationships with "Relationship
Capability" of "Pre 8.2", -source-cluster must also be specified. This parameter is not supported for
relationships with non-Data ONTAP source endpoints.
-source-volume <volume name>} - Source Volume
Specifies the source volume of the SnapMirror relationship. If this parameter is specified, parameters -
source-vserver and for relationships with "Relationship Capability" of "Pre 8.2", -source-cluster
must also be specified. This parameter is not supported for relationships with non-Data ONTAP source
endpoints.
[-foreground | -w [true]] - Foreground Process
This specifies whether the operation runs as a foreground process. If this parameter is specified, the
default setting is true (the operation runs in the foreground). When set to true, the command will not
return until the process completes. This parameter is only applicable to relationships with "Relationship
Capability" of "Pre 8.2".
● Examples
To update the group of load-sharing mirrors for the source endpoint named //vs1.example.com/dept_eng,
type the following command:
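A reconstructed form of that command, identifying the load-sharing set by its source path:

```
snapmirror update-ls-set -source-path //vs1.example.com/dept_eng
```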
Related reference
snapmirror update (page 834)
snapmirror show (page 800)
snapmirror config-replication cluster-storage-configuration modify
Modify the storage configuration used for configuration replication
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The snapmirror config-replication cluster-storage-configuration modify command
modifies the configuration of storage used for configuration replication.
● Parameters
[-disallowed-aggregates <aggregate name>, ...] - Disallowed Aggregates
Use this parameter to set the list of storage aggregates that are not available to host storage for
configuration replication.
● Examples
The following example disallows two aggregates named aggr1 and aggr2:
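A sketch of that invocation, passing both aggregate names as a comma-separated list:

```
snapmirror config-replication cluster-storage-configuration modify -disallowed-aggregates aggr1,aggr2
```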
Related reference
snapmirror config-replication cluster-storage-configuration show (page 840)
snapmirror config-replication cluster-storage-configuration show
Display the storage configuration used for configuration replication
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The snapmirror config-replication cluster-storage-configuration show command shows
details of the configuration of the storage used for configuration replication.
The information displayed is the following:
• Disallowed Aggregates - The list of storage aggregates that are configured as not allowed to host storage
areas.
• Auto-Repair - Displays true if the automatic repair of storage areas used by configuration replication is
enabled.
• Auto-Recreate - Displays true if the automatic recreation of storage volumes used by configuration
replication is enabled.
• Use Mirrored Aggregate - Displays true if storage areas for configuration replication are to be hosted on a
mirrored aggregate.
● Examples
The following is an example of the snapmirror config-replication cluster-storage-
configuration show command:
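A representative invocation (advanced privilege level is required, as noted above):

```
snapmirror config-replication cluster-storage-configuration show
```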
Disallowed Aggregates: -
Auto-Repair: true
Auto-Recreate: true
Use Mirrored Aggregate: true
Related reference
snapmirror config-replication cluster-storage-configuration modify (page 840)
snapmirror config-replication status show
Display the SnapMirror configuration replication status
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror config-replication status show command displays the current SnapMirror
configuration replication status.
The command displays the following aspects of SnapMirror configuration replication:
• Enabled: Verifies that SnapMirror configuration replication is enabled on the cluster.
• Running: Verifies that SnapMirror configuration replication is running on the cluster.
• Storage Status: Verifies that SnapMirror configuration replication storage is healthy.
• Storage In Use: Prints the location of SnapMirror configuration replication storage.
• Storage Remarks: Prints the underlying root cause for non-healthy SnapMirror configuration storage.
• Vserver Streams: Verifies that SnapMirror configuration replication Vserver streams are healthy.
Additional information about the warnings (if any) and recovery steps can be viewed by running the
command with the -instance option.
● Parameters
[-instance ]
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example shows the execution of the command:
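A representative invocation:

```
snapmirror config-replication status show
```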
Storage Remarks: -
Vserver Streams: ok
Related reference
snapmirror config-replication status show-communication (page 843)
snapmirror config-replication status show-aggregate-eligibility
Display the SnapMirror configuration replication aggregate eligibility
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror config-replication status show-aggregate-eligibility command displays
the SnapMirror configuration replication aggregate eligibility.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
Display only rows that have a matching aggregate name.
[-hosted-configuration-replication-volumes <volume name>, ...] - Currently Hosted
Configuration Replication Volumes
Display only rows that have matching configuration replication volumes hosted on this aggregate.
[-is-eligible-to-host-additional-volumes {true|false}] - Eligibility to Host Another Configuration
Replication Volume
Display only rows that have a matching eligibility of the aggregate to host additional configuration
replication volumes.
[-comment <text>] - Comment for Eligibility Status
Display only rows that have a matching comment regarding the eligibility of the aggregate to host
configuration replication volumes.
● Examples
The following example shows the execution of the command in a SnapMirror configuration with thirteen
aggregates in the cluster:
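A sketch of that invocation (the cluster prompt is hypothetical, and the per-aggregate output table is omitted here):

  cluster1::> snapmirror config-replication status show-aggregate-eligibility

Use the -aggregate parameter to restrict the output to a single aggregate.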
Related reference
snapmirror config-replication status show (page 841)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror config-replication status show-communication command displays the current
SnapMirror configuration replication communication status.
The command displays the following aspects of SnapMirror configuration replication for each peer cluster:
• Remote Heartbeat: Verifies that the SnapMirror configuration replication heartbeat with the remote
cluster is healthy.
• Last Heartbeat Sent: Prints the timestamp of the last SnapMirror configuration replication heartbeat sent
to the remote cluster.
• Last Heartbeat Received: Prints the timestamp of the last SnapMirror configuration replication heartbeat
received from the remote cluster.
Additional information about the warnings (if any) and recovery steps can be viewed by running the
command with the -instance option.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-cluster-uuid <UUID>] - Remote Cluster
Display only rows that have a matching peer cluster UUID.
[-cluster <text>] - Peer Cluster Name
Display only rows that have a matching peer cluster name.
[-remote-heartbeat {ok|warning|not-run|not-applicable}] - Remote Heartbeat
Display only rows that have a matching remote heartbeat status.
[-last-heartbeat-sent <MM/DD/YYYY HH:MM:SS>] - Last Heartbeat Sent Time
Display only rows that have a matching timestamp of the last heartbeat sent.
[-last-heartbeat-received <MM/DD/YYYY HH:MM:SS>] - Last Heartbeat Received Time
Display only rows that have a matching timestamp of the last heartbeat received.
[-heartbeat-recovery-steps <text>] - Heartbeat Recovery Steps
Display only rows that have matching heartbeat recovery steps.
● Examples
The following example shows the execution of the command in a SnapMirror configuration with two peer
clusters:
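A sketch of that invocation (the cluster prompt is hypothetical; the per-cluster output is omitted here):

  cluster1::> snapmirror config-replication status show-communication

Running the command with -instance also shows recovery steps for any reported warnings.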
Related reference
snapmirror config-replication status show (page 841)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror failover show command displays the failover status of SnapMirror relationships with
'automated-failover' policy-type.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Start Time
Start operation time of the failover.
[-source-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|<hostip:/
share/share-name>|<[vserver:]/cg/[cgname]>}] - Source Path
Specifies the source endpoint of the SnapMirror relationship.
[-destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>}] - Destination Path
Specifies the destination endpoint of the SnapMirror relationship.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror failover start command starts a planned failover.
● Parameters
-destination-path {<[vserver:][volume]>|<[[cluster:]//vserver/]volume>|<hostip:/lun/name>|
<hostip:/share/share-name>|<[vserver:]/cg/[cgname]>} - Destination Path
Specifies the destination endpoint of the SnapMirror relationship.
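● Examples
The following illustrative invocation starts a planned failover; the cluster prompt and destination endpoint shown are hypothetical:

  cluster1::> snapmirror failover start -destination-path vs1_dst:vol1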
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror mediator add command configures the connection between a pair of clusters and a
mediator. This configuration is used only when a pair of clusters have SnapMirror relationships of type
AutomatedFailOver.
● Parameters
-mediator-address <IP Address> - Mediator IP Address
Specifies the IP address of the mediator.
-peer-cluster <text> - Peer cluster
Peer cluster with AutomatedFailOver SnapMirror relationships.
-username <text> - Username
Specifies the username used to connect to the mediator.
● Examples
The following example configures the connection between a mediator and a pair of clusters.
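An invocation of this form configures the connection; the mediator address, peer cluster name, and username shown are hypothetical:

  cluster1::> snapmirror mediator add -mediator-address 10.10.10.10 -peer-cluster cluster2 -username mediatoradmin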
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror mediator remove command deletes a mediator configuration entry.
● Parameters
-mediator-address <IP Address> - Mediator IP Address
Specifies the IP address of the mediator.
-peer-cluster <text> - Peer cluster
Peer cluster with AutomatedFailOver SnapMirror relationships.
● Examples
The following example removes a mediator configuration entry.
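An invocation of this form removes the entry; the mediator address and peer cluster name shown are hypothetical:

  cluster1::> snapmirror mediator remove -mediator-address 10.10.10.10 -peer-cluster cluster2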
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The snapmirror mediator show command retrieves the list of mediator configurations.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-mediator-address <IP Address>] - Mediator IP Address
IP address of the mediator.
[-peer-cluster <text>] - Peer Cluster Name
Peer cluster name.
[-connection-status {connected|unreachable|suspended|unusable|misconfigured|removing|not-
configured|unknown|adding}] - Connection Status
Connection status of the cluster with the mediator.
[-quorum-status {unknown|true|false}] - Mediator Quorum Status
Shows whether the SnapMirror Consistency Group relationships are synchronized with the mediator.
[-health-fetch-timeout <integer>] - Peer Health Fetch Timeout
Timeout value (in seconds) for fetching peer-cluster health via the mediator.
[-connection-timeout <integer>] - Connection Timeout
Timeout value (in seconds) for the mediator connection.
● Examples
The following example shows the list of mediator configurations.
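A sketch of that invocation (the cluster prompt is hypothetical; the output columns are omitted here):

  cluster1::> snapmirror mediator show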
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The snapmirror object-store profiler abort command will abort an ongoing object store profiler
run. This command requires two parameters - an object store configuration and a node on which the profiler
is currently running.
● Parameters
-node {<nodename>|local} - Node on Which the Profiler Should Run
This parameter specifies the node on which the object store profiler is running.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider
type.
● Examples
The following example aborts the object store profiler:
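A sketch of that invocation; the node and object store configuration names shown are hypothetical:

  cluster1::> snapmirror object-store profiler abort -node node1 -object-store-name my_objstore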
Related reference
storage aggregate object-store profiler abort (page 1041)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The snapmirror object-store profiler show command is used to monitor progress and results of
the snapmirror object-store profiler start command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
This parameter specifies the node on which the profiler was started.
[-object-store-name <text>] - ONTAP Name for this Object Store Configuration
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider
type.
[-profiler-status <text>] - Profiler Status
Current status of the profiler.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Profiler Start Time
Time at which profiler run started.
● Examples
The following example displays the results of snapmirror object-store profiler start:
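A sketch of that invocation (the cluster prompt is hypothetical; the status output is omitted here):

  cluster1::> snapmirror object-store profiler show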
Related reference
storage aggregate object-store profiler show (page 1041)
snapmirror object-store profiler start (page 849)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The snapmirror object-store profiler start command writes objects to an object store and reads
those objects to measure the latency and throughput of an object store. This command requires two parameters
- an object store configuration and a node from which to send the PUT/GET/DELETE operations. This command
verifies whether the object store is accessible through the intercluster LIF of the node on which it runs. The
command fails if the object store is not accessible. The command will create a 10GB dataset by doing 2500
PUTs for a maximum time period of 60 seconds. Then it will issue GET operations of different sizes - 4KB,
8KB, 32KB, and 256KB for a maximum time period of 180 seconds. Finally, it will delete the objects it created.
This command can result in additional charges to your object store account. This is a CPU-intensive
command. It is recommended to run this command when the system is under 50% CPU utilization.
● Parameters
-node {<nodename>|local} - Node on Which the Profiler Should Run
This parameter specifies the node from which PUT/GET/DELETE operations are sent.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store. The object store configuration has information about the object store server name, port, access credentials, and provider type.
● Examples
The following example starts the object store profiler:
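A sketch of that invocation; the node and object store configuration names shown are hypothetical:

  cluster1::> snapmirror object-store profiler start -node node1 -object-store-name my_objstore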
Related reference
storage aggregate object-store profiler start (page 1042)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror policy add-rule command adds a rule to a SnapMirror policy. Rules define which
Snapshot copies are protected by vault relationships or define the schedule at which Snapshot copies are
created on the SnapMirror destination. Rules which do not include a schedule are rules for protecting
Snapshot copies. Rules which include a schedule are rules for creating Snapshot copies on the SnapMirror
destination. A rule with a schedule can only be added to SnapMirror policies of type vault or mirror-vault. A
rule must not be added to a policy that will be associated with a SnapMirror data protection relationship. A
policy that will be associated with a SnapMirror vault relationship must have at least one rule and at most
ten rules. A SnapMirror policy with rules must have at least one rule without a schedule.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
-snapmirror-label <text> - Snapshot Copy Label
This parameter is primarily used for the purpose of Snapshot copy selection for extended data protection
(XDP) relationships. Only Snapshot copies that have a SnapMirror label that matches this parameter will
be transferred to the SnapMirror destination. However, when this parameter is associated with a rule
containing a schedule, Snapshot copies will be created on the SnapMirror destination using this
snapmirror-label parameter. The label can be 31 or fewer characters in length. SnapMirror policies of
type async-mirror and mirror-vault have a rule added for label sm_created at the time of policy creation.
This rule cannot be removed or modified by the user. This rule when coupled with create-snapshot field
set to true indicates that the SnapMirror relationship using this policy shall create a new Snapshot copy
and transfer it as part of a snapmirror update operation. SnapMirror policies of type async-mirror
support one additional rule with SnapMirror label all_source_snapshots. This rule along with the rule for
SnapMirror label sm_created indicates that all new Snapshot copies on the primary volume along with
the newly created Snapshot copy are transferred as a part of a snapmirror update or snapmirror
initialize operation. Rules with any other SnapMirror labels cannot be added to SnapMirror policies
of type async-mirror. The rule for label sm_created when added to a snapmirror policy of type vault
indicates that all SnapMirror created Snapshot copies of the primary volume are selected for transfer.
-keep <text> - Snapshot Copy Retention Count
Specifies the maximum number of Snapshot copies that are retained on the SnapMirror destination
volume for a rule. The total number of Snapshot copies retained for all the rules in a policy cannot
exceed 1019. For all the rules in SnapMirror policies of type async-mirror, this parameter must be set to
value 1.
[-preserve {true|false}] - Snapshot Copy Preserve Enabled
Specifies the behavior when the Snapshot copy retention count is reached on the SnapMirror vault
destination for the rule. The default value is false, which means that the oldest Snapshot copy will be
deleted to make room for new ones only if the number of Snapshot copies has exceeded the retention
count specified in the "keep" parameter. When set to true, and when the Snapshot copies have reached
the retention count, then an incremental SnapMirror vault update transfer will fail or if the rule has a
schedule, Snapshot copies will no longer be created on the SnapMirror destination. For all the rules in
SnapMirror policies of type async-mirror this parameter must be set to value false.
[-warn <integer>] - Warning Threshold Count
Specifies the warning threshold count for the rule. The default value is 0. When set to a value greater
than zero, an event is generated after the number of Snapshot copies (for the particular rule) retained
on a SnapMirror vault destination reaches the specified warn limit. The preserve parameter for the rule
must be true to set the warn parameter to a value greater than zero.
[-schedule <text>] - Snapshot Copy Creation Schedule
This optional parameter specifies the name of the schedule associated with a rule. This parameter is
allowed only for rules associated with SnapMirror policies of type vault or mirror-vault. When this
parameter is specified, Snapshot copies are directly created on the SnapMirror destination. The Snapshot
copies created will have the same content as the latest Snapshot copy already present on the SnapMirror
destination. Snapshot copies on the source that have a SnapMirror label matching this rule will not be
selected for transfer. The default value is -.
Note
You define and name a schedule using the job schedule cron create command.
● Examples
The following example adds a rule named nightly to the SnapMirror policy named TieredBackup on Vserver
vs0.example.com. The rule will retain a maximum of 5 nightly Snapshot copies.
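Based on the parameters described above, the invocation takes this form (the cluster prompt is hypothetical):

  cluster1::> snapmirror policy add-rule -vserver vs0.example.com -policy TieredBackup -snapmirror-label nightly -keep 5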
The following example adds a rule named SyncProtectMe to the SnapMirror policy named Smgr on Vserver
vs0.example.com. The rule will retain the same SyncProtectMe snapshots on the destination as are present
on the source when the relationship is InSync.
Related reference
snapmirror update (page 834)
snapmirror initialize (page 762)
snapmirror policy commands (page 850)
snapmirror resync (page 792)
job schedule cron create (page 219)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror policy create command creates a SnapMirror policy. When applied to a SnapMirror
relationship, the SnapMirror policy controls the behavior of the relationship and specifies the configuration
attributes for that relationship. The policies DPDefault, MirrorAllSnapshots, MirrorAndVault, MirrorLatest,
Unified7year and XDPDefault are created by the system for asynchronous replication. The policies Sync and
StrictSync are created by the system for synchronous replication.
Note
All SnapMirror policies have a field create-snapshot. This field specifies whether SnapMirror creates a
new Snapshot copy on the primary volume at the beginning of a snapmirror update or
snapmirror resync operation. Currently this field cannot be set or modified by the user. It is set to
true for SnapMirror policies of type async-mirror and mirror-vault at the time of creation. SnapMirror
policies of type vault have create-snapshot set to false at the time of creation.
Note
Use the snapmirror policy add-rule command to add a rule to a policy.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
This parameter specifies the SnapMirror policy name. A policy name can be made up of the characters A
to Z, a to z, 0 to 9, ".", "-", and "_". The name can be up to 256 characters in length.
[-type {vault|async-mirror|mirror-vault|strict-sync-mirror|sync-mirror|automated-failover}] -
Snapmirror Policy Type
This parameter specifies the SnapMirror policy type. The supported values are async-mirror, vault, mirror-
vault, sync-mirror and strict-sync-mirror. Data protection (DP) relationships support only async-mirror
policy type, while extended data protection (XDP) relationships support all policy types.
If the type is set to async-mirror then the policy is for Disaster Recovery. When the policy type is
associated with extended data protection (XDP) relationships, snapmirror update and snapmirror
resync operations transfer selected Snapshot copies from the primary volume to the secondary volume.
The selection of Snapshot copies is governed by the rules in the policy. However snapmirror
initialize and snapmirror update operations on data protection (DP) relationships ignore the
rules in the policy and transfer all Snapshot copies of the primary volume which are newer than the
common Snapshot copy on the destination. For both data protection (DP) and extended data protection
(XDP) relationships, the Snapshot copies are kept on the secondary volume as long as they exist on the
primary volume. Once a protected Snapshot copy is deleted from the primary volume, it is deleted from
the secondary volume as part of the next transfer. The policy type supports rules with certain pre-defined
label names only. Refer to the man page for the snapmirror policy add-rule command for the
details.
If the type is set to vault then the policy is used for Backup and Archive. The rules in this policy type
determine which Snapshot copies are protected and how long they are retained on the secondary
volume. This policy type is supported by extended data protection (XDP) relationships only.
If the type is set to mirror-vault then the policy is used for unified data protection which provides both
Disaster Recovery and Backup using the same secondary volume. This policy type is supported by
extended data protection (XDP) relationships only.
If the type is set to sync-mirror or strict-sync-mirror then the policy is used for synchronous Disaster
Recovery. These are supported only by extended data protection (XDP) relationships between FlexVol
volumes. Once the relationship is initialized with snapmirror initialize, the relationship will be
InSync such that all writes to the primary will be replicated to the secondary before the write is
acknowledged to the client. Upon a replication failure, the relationship falls OutOfSync. Upon an OutOfSync
event, the strict-sync-mirror variant restricts further client IO on the primary, whereas the sync-mirror
variant does not. SnapMirror will automatically trigger resync to bring the OutOfSync relationships back
InSync as soon as it can, unless the relationship is Quiesced or Broken-off. Once a relationship is
initialized, you normally use the snapmirror quiesce command to stop synchronous replication and
the snapmirror resume command to resume synchronous replication. These policy types do not
support replication of user Snapshot copies.
[-comment <text>] - Comment
Specifies a text comment for the SnapMirror policy. If the comment contains spaces, it must be enclosed
within quotes.
[-tries <unsigned32_or_unlimited>] - Tries Limit
Determines the maximum number of times to attempt each manual or scheduled transfer for a
SnapMirror relationship. The value of this parameter must be a positive integer or unlimited. The default
value is 8.
[-transfer-priority {low|normal}] - Transfer Scheduling Priority
Specifies the priority at which a transfer runs. The supported values are normal or low. The normal
transfers are scheduled before the low priority transfers. The default is normal.
[-ignore-atime {true|false}] - Ignore File Access Time
This parameter applies only to extended data protection (XDP) relationships. It specifies whether
incremental transfers will ignore files which have only their access time changed. The supported values
are true or false. The default is false.
[-restart {always|never|default}] - Restart Behavior
This parameter applies only to data protection relationships. It defines the behavior of SnapMirror if an
interrupted transfer exists. The supported values are always, never, or default. If the value is set to
always, an interrupted SnapMirror transfer always restarts provided it has a restart checkpoint and the
conditions are the same as they were before the transfer was interrupted. In addition, a new SnapMirror
Snapshot copy is created which will then be transferred. If the value is set to never, an interrupted
SnapMirror transfer will never restart, even if a restart checkpoint exists. A new SnapMirror Snapshot
copy will still be created and transferred. Data ONTAP version 8.2 will interpret a value of default as
being the same as always. Vault transfers will always resume based on a restart checkpoint, provided
the Snapshot copy still exists on the source volume.
[-is-network-compression-enabled {true|false}] - Is Network Compression Enabled
Specifies whether network compression is enabled for transfers. The supported values are true or false.
The default is false.
[-always-replicate-snapshots {true|false}] - This prioritizes replication of app-consistent snapshots
over synchronous replication
If this parameter is set to true, it specifies that SnapMirror Synchronous relationships will lose the zero
RPO protection upon failure in replicating application created snapshots. The default value is false.
● Examples
The following example creates a SnapMirror policy named TieredBackup on a Vserver named
vs0.example.com.
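A sketch of that invocation (the cluster prompt is hypothetical, and the -type value shown is an assumption; choose the policy type that matches your use case):

  cluster1::> snapmirror policy create -vserver vs0.example.com -policy TieredBackup -type vault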
The following example, executed under PVR control, creates a SnapMirror policy named Sync on a Vserver
named vs0.example.com with -always-replicate-snapshots set to true, to be used for a relationship
between items in Consistency Groups.
Related reference
snapmirror update (page 834)
snapmirror resync (page 792)
snapmirror initialize (page 762)
snapmirror policy add-rule (page 850)
snapmirror quiesce (page 779)
snapmirror resume (page 790)
snapmirror policy commands (page 850)
job schedule cron create (page 219)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror policy delete command deletes a SnapMirror policy. A policy that is to be deleted
must not be associated with any SnapMirror relationship. The built-in policies cannot be deleted.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
● Examples
The following example deletes a SnapMirror policy named TieredBackup on Vserver vs0.example.com:
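A sketch of that invocation (the cluster prompt is hypothetical):

  cluster1::> snapmirror policy delete -vserver vs0.example.com -policy TieredBackup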
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror policy modify command can be used to modify the policy attributes.
Note
Use the snapmirror policy modify-rule command to modify a rule in a SnapMirror policy.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
[-comment <text>] - Comment
Specifies a text comment for the SnapMirror policy. If the comment contains spaces, it must be enclosed
within quotes.
[-tries <unsigned32_or_unlimited>] - Tries Limit
Determines the maximum number of times to attempt each manual or scheduled transfer for a
SnapMirror relationship. The value of this parameter must be a positive integer or unlimited. The default
value is 8.
[-transfer-priority {low|normal}] - Transfer Scheduling Priority
Specifies the priority at which a transfer runs. The supported values are normal or low. The normal
transfers are scheduled before the low priority transfers. The default is normal.
[-ignore-atime {true|false}] - Ignore File Access Time
This parameter applies only to extended data protection (XDP) relationships. It specifies whether
incremental transfers will ignore files which have only their access time changed. The supported values
are true or false. The default is false.
[-restart {always|never|default}] - Restart Behavior
This parameter applies only to data protection relationships. It defines the behavior of SnapMirror if an
interrupted transfer exists. The supported values are always, never, or default. If the value is set to
always, an interrupted SnapMirror transfer always restarts provided it has a restart checkpoint and the
conditions are the same as they were before the transfer was interrupted. In addition, a new SnapMirror
Snapshot copy is created which will then be transferred. If the value is set to never, an interrupted
SnapMirror transfer will never restart, even if a restart checkpoint exists. A new SnapMirror Snapshot
copy will still be created and transferred. Data ONTAP version 8.2 will interpret a value of default as
being the same as always. Vault transfers will always resume based on a restart checkpoint, provided
the Snapshot copy still exists on the source volume.
[-is-network-compression-enabled {true|false}] - Is Network Compression Enabled
Specifies whether network compression is enabled for transfers. The supported values are true or false.
The default is false.
[-always-replicate-snapshots {true|false}] - This prioritizes replication of app-consistent snapshots
over synchronous replication
If this parameter is set to true, it specifies that SnapMirror Synchronous relationships will lose the zero
RPO protection upon failure in replicating application created snapshots. The default value is false.
[-common-snapshot-schedule <text>] - Common Snapshot Copy Creation Schedule for SnapMirror
Synchronous (privilege: advanced)
Specifies the common Snapshot creating schedule. This parameter is only supported for Snapmirror
Synchronous relationships.
[-are-data-ops-sequentially-split {true|false}] - Is Sequential Splitting of Data Operations
Enabled?
This parameter specifies whether I/O operations, such as write, copy-offload, and punch-holes, are split sequentially rather than being run in parallel on the source and destination. Splitting the I/O sequentially makes the system more robust and less prone to I/O errors, but it also makes I/O performance slower. The default value of the -sequential-split-data-ops parameter is false. The -are-data-ops-sequentially-split parameter should only be used if too-frequent I/O timeouts or OutOfSync events have occurred. Changes made by the snapmirror policy modify -sequential-split-data-ops command do not take effect until the next resync. Changes do not affect resync or initialize operations that have started and have not finished yet. The -are-data-ops-sequentially-split parameter requires an effective cluster version of Data ONTAP 9.6.0 or later on both the source and destination clusters.
[-discard-configs <network>, ...] - Configurations Not Replicated During Identity Preserve Vserver DR
Specifies the configuration to be dropped during replication. The supported values are:
• network - Drops network interfaces, routes, and kerberos configuration.
This parameter is supported only for policies of type async-mirror and applicable only for identity-
preserve Vserver SnapMirror relationships.
● Examples
The following example changes the "transfer-priority" and the "comment" text of a snapmirror policy named
TieredBackup on Vserver vs0.example.com:
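A sketch of that invocation; the cluster prompt, priority value, and comment text shown are hypothetical:

  cluster1::> snapmirror policy modify -vserver vs0.example.com -policy TieredBackup -transfer-priority low -comment "Use for tiered backups."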
Related reference
snapmirror update (page 834)
snapmirror initialize (page 762)
snapmirror policy commands (page 850)
snapmirror resync (page 792)
job schedule cron create (page 219)
snapmirror policy modify-rule (page 857)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror policy modify-rule command can be used to modify the retention count, preserve
setting, warning threshold count, schedule, and prefix for a rule in a SnapMirror policy. Reducing the
retention count or disabling the preserve setting for a rule in a SnapMirror policy might result in the deletion
of Snapshot copies on the vault destination when the next transfer by the snapmirror update command
occurs or when the next scheduled Snapshot copy creation on the destination for the rule occurs. Modifying
a rule to add a schedule will enable creation of Snapshot copies on the SnapMirror destination. Snapshot
copies on the source that have a SnapMirror label matching this rule will not be selected for transfer.
Schedule and prefix can only be modified for rules associated with SnapMirror policies of type vault or
mirror-vault. A SnapMirror policy with rules must have at least one rule without a schedule.
Note
The rules in SnapMirror policies of type async-mirror cannot be modified.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
-snapmirror-label <text> - Snapshot Copy Label
This parameter specifies the rule that is to be modified in a SnapMirror policy.
[-keep <text>] - Snapshot Copy Retention Count
Specifies the maximum number of Snapshot copies that are retained on the SnapMirror destination
volume for a rule. The total number of Snapshot copies retained for all the rules in a policy cannot
exceed 1019. For all the rules in SnapMirror policies of type async-mirror, this parameter must be set to
value 1.
[-preserve {true|false}] - Snapshot Copy Preserve Enabled
Specifies the behavior when the Snapshot copy retention count is reached on the SnapMirror vault
destination for the rule. The default value is false, which means that the oldest Snapshot copy will be
deleted to make room for new ones only if the number of Snapshot copies has exceeded the retention
count specified in the "keep" parameter. When set to true, and when the Snapshot copies have reached
the retention count, then an incremental SnapMirror vault update transfer will fail or if the rule has a
schedule, Snapshot copies will no longer be created on the SnapMirror destination. For all the rules in
SnapMirror policies of type async-mirror this parameter must be set to value false.
[-warn <integer>] - Warning Threshold Count
Specifies the warning threshold count for the rule. The default value is 0. When set to a value greater
than zero, an event is generated after the number of Snapshot copies (for the particular rule) retained
on a SnapMirror vault destination reaches the specified warn limit. The preserve parameter for the rule
must be true to set the warn parameter to a value greater than zero.
[-schedule <text>] - Snapshot Copy Creation Schedule
This optional parameter specifies the name of the schedule associated with a rule. This parameter is
allowed only for rules associated with SnapMirror policies of type vault or mirror-vault. When this
parameter is specified, Snapshot copies are directly created on the SnapMirror destination. The Snapshot
copies created will have the same content as the latest Snapshot copy already present on the SnapMirror
destination. Snapshot copies on the source that have a SnapMirror label matching this rule will not be
selected for transfer. The default value is -.
Note
You define and name a schedule using the job schedule cron create command.
● Examples
The following example changes the retention count for nightly Snapshot copies to 6 for a rule named nightly
on a SnapMirror policy named TieredBackup on Vserver vs0.example.com:
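The command itself did not survive in this copy; based on the parameters described above, the invocation would look like the following sketch (prompt omitted):

```
snapmirror policy modify-rule -vserver vs0.example.com -policy TieredBackup -snapmirror-label nightly -keep 6
```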
Related reference
job schedule cron create (page 219)
snapmirror update (page 834)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror policy remove-rule command removes a rule from a SnapMirror policy. On the
destination of a SnapMirror relationship with snapmirror policy of type vault or mirror-vault, all
Snapshot copies with a SnapMirror label matching the rule being removed are no longer processed by
SnapMirror and might need to be deleted manually. A snapmirror policy of type vault must have at
least one rule if that policy is associated with a SnapMirror relationship. A SnapMirror policy with rules must
have at least one rule without a schedule.
● Parameters
-vserver <vserver name> - Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy> - SnapMirror Policy Name
Specifies the SnapMirror policy name.
-snapmirror-label <text> - Snapshot Copy Label
This parameter specifies the rule that is removed from the SnapMirror policy.
The rule for SnapMirror label sm_created cannot be removed from SnapMirror policies of type async-mirror
or mirror-vault.
● Examples
The following example removes a rule named nightly from a SnapMirror policy named TieredBackup on
Vserver vs0.example.com:
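The example's command line was lost in this copy; reconstructed from the parameters above, it would be roughly:

```
snapmirror policy remove-rule -vserver vs0.example.com -policy TieredBackup -snapmirror-label nightly
```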
Related reference
snapmirror policy commands (page 850)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror policy show command displays the following information about SnapMirror policies:
• Vserver Name
• SnapMirror Policy Name
• SnapMirror Policy Type
• Number of Rules in the policy
• Tries
• Transfer Priority
• Comment for the policy
• Individual Rule Names
• Keep value for the Rule
• Total of Keep values across all Rules in the policy
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver Name
Selects the policies that match this parameter value.
[-policy <sm_policy>] - SnapMirror Policy Name
Selects the policies that match this parameter value.
[-type {vault|async-mirror|mirror-vault|strict-sync-mirror|sync-mirror|automated-failover}] -
Snapmirror Policy Type
Selects the policies that match this parameter value. A policy can be of type vault, async-mirror, mirror-
vault, strict-sync-mirror, sync-mirror, or automated-failover.
[-owner {cluster-admin|vserver-admin}] - Owner of the Policy
Selects the policies that match this parameter value. A policy can be owned by either the "Cluster Admin"
or a "Vserver Admin".
[-comment <text>] - Comment
Selects the policies that match this parameter value.
[-tries <unsigned32_or_unlimited>] - Tries Limit
Selects the policies that match this parameter value.
[-transfer-priority {low|normal}] - Transfer Scheduling Priority
Selects the policies that match this parameter value.
[-ignore-atime {true|false}] - Ignore File Access Time
Selects the policies that match this parameter value.
● Examples
The following example displays information about all SnapMirror policies:
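The command line for this example was lost in this copy; it would presumably be simply:

```
snapmirror policy show
```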
Vserver Policy             Policy Number         Transfer
Name    Name               Type   Of Rules Tries Priority Comment
------- ------------------ ------ -------- ----- -------- ----------
vs0.example.com
        TieredBackup       vault         0     8 normal   Use for tiered backups
  SnapMirror Label: -                                Keep:       -
                                               Total Keep:       0
cs      XDPDefault         vault         2     8 normal   Vault policy with daily and weekly rules.
  SnapMirror Label: daily                            Keep:       7
                    weekly                                      52
                                               Total Keep:      59
The following example shows all the policies with the following fields - vserver (default), policy (default)
and transfer-priority:
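The original command text was lost; reconstructed from the description, the invocation would be roughly:

```
snapmirror policy show -fields transfer-priority
```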
(Output fragment: the listing ends with a transfer-priority of normal and the line "7 entries were displayed.")
The snapmirror snapshot-owner command enables management of user-created owners for a Snapshot
copy.
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror snapshot-owner create command adds an owner on the specified Snapshot copy. A
Snapshot copy can have at most one owner. An owner can only be added to a Snapshot copy on a read-write
volume. The Snapshot copy must have a valid SnapMirror label.
Note
Refer to the ONTAP Data Protection Guide for valid use cases to add an owner on a Snapshot copy.
● Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume.
-snapshot <snapshot name> - Snapshot Copy Name
This parameter specifies the name of the Snapshot copy.
[-owner <owner name>] - Snapshot Copy Owner Name
This parameter specifies the name of the owner for the Snapshot copy. The owner name can be made up
of the characters A to Z, a to z, 0 to 9, and "_". The name can be up to 32 characters in length. When not
specified, an owner will be added with a system-generated default name.
● Examples
The following example adds owner app1 on Snapshot copy snap1 on volume vol1 in Vserver
vs0.example.com.
The following example adds a default owner on Snapshot copy snap2 on volume vol1 in Vserver
vs0.example.com.
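The example commands themselves are missing from this copy; from the parameters above they would take roughly this form (first with an explicit owner, then relying on the system-generated default name):

```
snapmirror snapshot-owner create -vserver vs0.example.com -volume vol1 -snapshot snap1 -owner app1
snapmirror snapshot-owner create -vserver vs0.example.com -volume vol1 -snapshot snap2
```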
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror snapshot-owner delete command removes an owner on the specified Snapshot copy,
which was added using the snapmirror snapshot-owner create command.
● Parameters
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume.
-snapshot <snapshot name> - Snapshot Copy Name
This parameter specifies the name of the Snapshot copy.
[-owner <owner name>] - Snapshot Copy Owner Name
This parameter specifies the name of the owner for the Snapshot copy. When not specified, the owner
with the system-generated default name will be removed.
● Examples
The following example removes owner app1 on Snapshot copy snap1 on volume vol1 in Vserver
vs0.example.com.
The following example removes the default owner on Snapshot copy snap2 on volume vol1 in Vserver
vs0.example.com.
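The example commands are missing from this copy; based on the parameters above, they would be roughly (first removing the named owner, then the system-generated default owner):

```
snapmirror snapshot-owner delete -vserver vs0.example.com -volume vol1 -snapshot snap1 -owner app1
snapmirror snapshot-owner delete -vserver vs0.example.com -volume vol1 -snapshot snap2
```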
Related reference
snapmirror snapshot-owner create (page 862)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The snapmirror snapshot-owner show command is used to list all Snapshot copies with owners that
were added using the snapmirror snapshot-owner create command.
● Parameters
{ [-fields <fieldname>, ...]
If this parameter is specified, the command displays information about the specified fields.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all fields.
-vserver <vserver name> - Vserver Name
This parameter specifies the Vserver on which the volume is located.
-volume <volume name> - Volume Name
This parameter specifies the name of the volume.
● Examples
The following example lists all Snapshot copies with owners on volume vol1 in Vserver vs0. The system-
generated default owner name is displayed as "-".
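A sketch of the missing command line for this first example, reconstructed from the parameters above:

```
snapmirror snapshot-owner show -vserver vs0 -volume vol1
```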
The following example displays the owner name for Snapshot copy snap1 on volume vol1 in Vserver
vs0.example.com.
Vserver: vs0.example.com
Volume: vol1
Snapshot: snap1
Owner Names: app1
Related reference
snapmirror snapshot-owner create (page 862)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The statistics-v1 nfs show-mount command displays the following statistics about the NFS mounts
on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of mount operations
• Total number of dump operations
• Total number of unmount operations
• Total number of unmountall operations
• Total number of export operations
• Total number of exportall operations
• Total number of pathconf operations
• Total number of all the above operations
This command is intended for analyzing performance characteristics and diagnosing issues.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-mount <Counter with Delta>] - Mount Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of mount operations.
[-dump <Counter with Delta>] - Dump Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of dump operations.
[-unmnt <Counter with Delta>] - UnMount Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmount operations.
[-unmntall <Counter with Delta>] - UnMountAll Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmountall operations.
[-export <Counter with Delta>] - Export Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of export operations.
[-exportall <Counter with Delta>] - ExportAll Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of exportall operations.
[-pathconf <Counter with Delta>] - PathConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of pathconf operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total operations.
● Examples
The following example displays statistics about the NFS mounts for a node named node1:
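The example command and its output were lost in this copy; the invocation would presumably be (run at the advanced privilege level, as noted above):

```
statistics-v1 nfs show-mount -node node1
```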
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The statistics-v1 nfs show-nlm command displays the following statistics about the Network Lock
Manager (NLM) on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of test operations
• Total number of lock operations
• Total number of cancel operations
• Total number of unlock operations
• Total number of granted operations
• Total number of share operations
• Total number of unshare operations
• Total number of nmlock operations
• Total number of freeall operations
• Total number of all the above operations
This command is intended for analyzing performance characteristics and diagnosing issues.
Note
This command requires an effective cluster version earlier than Data ONTAP 9.0. Data for nodes
running Data ONTAP 9.0 or later is not collected, and will not be displayed. Use the statistics
show -object nlm command instead.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-test <Counter with Delta>] - Test Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of test operations.
[-lock <Counter with Delta>] - Lock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lock operations.
[-cancel <Counter with Delta>] - Cancel Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of cancel operations.
[-unlock <Counter with Delta>] - Unlock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unlock operations.
[-granted <Counter with Delta>] - Granted Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of granted operations.
[-share <Counter with Delta>] - Share Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of share operations.
[-unshare <Counter with Delta>] - Unshare Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unshare operations.
[-nmlock <Counter with Delta>] - NmLock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of nmlock operations.
[-freeall <Counter with Delta>] - FreeAll Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of freeall operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total operations.
● Examples
The following example displays statistics about the NLM for a node named node1:
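The command line itself was lost in this copy; the truncated counter output that follows would have been produced by an invocation of roughly this form:

```
statistics-v1 nfs show-nlm -node node1
```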
Cancel: 0 -
Unlock: 0 -
Granted: 0 -
Share: 0 -
Unshare: 0 -
NmLock: 0 -
FreeAll: 0 -
Total: 0 -
Related reference
statistics show (page 899)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The statistics-v1 nfs show-statusmon command displays the following statistics about the Status
Monitor on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of stat operations
• Total number of monitor operations
• Total number of unmonitor operations
• Total number of unmonitor all operations
• Total number of simucrash operations
• Total number of notify operations
• Total number of all the above operations
This command is intended for analyzing performance characteristics and diagnosing issues.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-stat <Counter with Delta>] - Stat Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of stat operations.
[-monitor <Counter with Delta>] - Monitor Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of monitor operations.
[-unmonitor <Counter with Delta>] - Unmonitor Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmonitor operations.
[-unmonall <Counter with Delta>] - Unmonitor All Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmonitor all operations.
[-simucrash <Counter with Delta>] - SimuCrash Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of simucrash operations.
[-notify <Counter with Delta>] - Notify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of notify operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total operations.
● Examples
The following example displays statistics about the status monitor for a node named node1:
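The example command and its output were lost in this copy; the invocation would presumably be (at the advanced privilege level):

```
statistics-v1 nfs show-statusmon -node node1
```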
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The statistics-v1 nfs show-v3 command displays the following statistics about the NFSv3 operations
on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of getattr operations
• Total number of setattr operations
• Total number of lookup operations
• Total number of access operations
• Total number of readsymlink operations
• Total number of read operations
• Total number of write operations
• Total number of create operations
• Total number of mkdir operations
• Total number of symlink operations
• Total number of mknod operations
• Total number of remove operations
• Total number of rmdir operations
• Total number of rename operations
• Total number of link operations
• Total number of readdir operations
• Total number of readdirplus operations
• Total number of fsstat operations
• Total number of fsinfo operations
• Total number of pathconf operations
• Total number of commit operations
• Total number of all the above operations
• Percent of null operations
• Percent of getattr operations
• Percent of setattr operations
• Percent of lookup operations
• Percent of access operations
• Percent of readsymlink operations
• Percent of read operations
• Percent of write operations
• Percent of create operations
• Percent of mkdir operations
• Percent of symlink operations
• Percent of mknod operations
• Percent of remove operations
• Percent of rmdir operations
• Percent of rename operations
• Percent of link operations
• Percent of readdir operations
• Percent of readdirplus operations
• Percent of fsstat operations
• Percent of fsinfo operations
• Percent of pathconf operations
• Percent of commit operations
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv3 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-gattr <Counter with Delta>] - GetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of getattr operations.
[-sattr <Counter with Delta>] - SetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setattr operations.
[-lookup <Counter with Delta>] - LookUp Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lookup operations.
[-access <Counter with Delta>] - Access Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of access operations.
[-rsym <Counter with Delta>] - ReadSymlink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readsymlink operations.
[-read <Counter with Delta>] - Read Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of read operations.
[-write <Counter with Delta>] - Write Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of write operations.
[-create <Counter with Delta>] - Create Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of create operations.
[-mkdir <Counter with Delta>] - MkDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of mkdir operations.
[-symln <Counter with Delta>] - SymLink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of symlink operations.
[-mknod <Counter with Delta>] - MkNod Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of mknod operations.
[-remove <Counter with Delta>] - Remove Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of remove operations.
[-rmdir <Counter with Delta>] - RmDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rmdir operations.
[-rename <Counter with Delta>] - Rename Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rename operations.
[-link <Counter with Delta>] - Link Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of link operations.
[-rdir <Counter with Delta>] - ReadDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readdir operations.
[-rdirp <Counter with Delta>] - ReadDirPlus Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readdirplus operations.
[-fsstat <Counter with Delta>] - FsStat Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of fsstat operations.
[-fsinfo <Counter with Delta>] - FsInfo Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of fsinfo operations.
[-pconf <Counter with Delta>] - PathConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of pathconf operations.
[-commit <Counter with Delta>] - Commit Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of commit operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total NFSv3 operations.
[-null-pct <Counter with Delta>] - Percent Null Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of null operations.
[-gattr-pct <Counter with Delta>] - Percent GetAttr Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of getattr operations.
[-sattr-pct <Counter with Delta>] - Percent SetAttr Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setattr operations.
[-lookup-pct <Counter with Delta>] - Percent LookUp Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lookup operations.
[-access-pct <Counter with Delta>] - Percent Access Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of access operations.
[-rsym-pct <Counter with Delta>] - Percent ReadSymlink Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readsymlink operations.
[-read-pct <Counter with Delta>] - Percent Read Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of read operations.
[-write-pct <Counter with Delta>] - Percent Write Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of write operations.
[-create-pct <Counter with Delta>] - Percent Create Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of create operations.
[-mkdir-pct <Counter with Delta>] - Percent MkDir Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of mkdir operations.
[-symln-pct <Counter with Delta>] - Percent SymLink Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of symlink operations.
[-mknod-pct <Counter with Delta>] - Percent MkNod Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of mknod operations.
[-remove-pct <Counter with Delta>] - Percent Remove Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of remove operations.
[-rmdir-pct <Counter with Delta>] - Percent RmDir Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rmdir operations.
[-rename-pct <Counter with Delta>] - Percent Rename Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rename operations.
[-link-pct <Counter with Delta>] - Percent Link Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of link operations.
[-rdir-pct <Counter with Delta>] - Percent ReadDir Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readdir operations.
[-rdirp-pct <Counter with Delta>] - Percent ReadDirPlus Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readdirplus operations.
[-fsstat-pct <Counter with Delta>] - Percent FsStat Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of fsstat operations.
[-fsinfo-pct <Counter with Delta>] - Percent FsInfo Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of fsinfo operations.
[-pconf-pct <Counter with Delta>] - Percent PathConf Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of pathconf operations.
[-commit-pct <Counter with Delta>] - Percent Commit Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of commit operations.
● Examples
The following example displays statistics about the NFSv3 operations for a node named node1:
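The command line was lost in this copy; the truncated counter output that follows would come from an invocation of roughly this form:

```
statistics-v1 nfs show-v3 -node node1
```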
Access Ops: 0 - 0% -
ReadSymlink Ops: 0 - 0% -
Read Ops: 0 - 0% -
Write Ops: 0 - 0% -
Create Ops: 0 - 0% -
MkDir Ops: 0 - 0% -
Symlink Ops: 0 - 0% -
MkNod Ops: 0 - 0% -
Remove Ops: 0 - 0% -
RmDir Ops: 0 - 0% -
Rename Ops: 0 - 0% -
Link Ops: 0 - 0% -
ReadDir Ops: 0 - 0% -
ReadDirPlus Ops: 0 - 0% -
FsStat Ops: 0 - 0% -
FsInfo Ops: 0 - 0% -
PathConf Ops: 0 - 0% -
Commit Ops: 0 - 0% -
Total Ops: 2 -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The statistics-v1 nfs show-v4 command displays the following statistics about the NFSv4 operations
on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of compound operations
• Total number of access operations
• Total number of close operations
• Total number of commit operations
• Total number of create operations
• Total number of delegpurge operations
• Total number of delegret operations
• Total number of getattr operations
• Total number of getfh operations
• Total number of link operations
• Total number of lock operations
• Total number of lockt operations
• Total number of locku operations
• Total number of lookup operations
• Total number of lookupp operations
• Total number of nverify operations
• Total number of open operations
• Total number of openattr operations
• Total number of openconf operations
• Total number of opendowng operations
• Total number of putfh operations
• Total number of putpubfh operations
• Total number of putrootfh operations
• Total number of read operations
• Total number of readdir operations
• Total number of readlink operations
• Total number of remove operations
• Total number of rename operations
• Total number of renew operations
• Total number of restorefh operations
• Total number of savefh operations
• Total number of secinfo operations
• Total number of setattr operations
• Total number of setcliid operations
• Total number of setcliidconf operations
• Total number of verify operations
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv4 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-cmpnd <Counter with Delta>] - Compound Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of compound operations.
[-access <Counter with Delta>] - Access Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of access operations.
[-close <Counter with Delta>] - Close Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of close operations.
[-commit <Counter with Delta>] - Commit Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of commit operations.
[-create <Counter with Delta>] - Create Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of create operations.
[-delpur <Counter with Delta>] - Delegpurge Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of delegpurge operations.
[-delrtn <Counter with Delta>] - Delegret Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of delegret operations.
[-gattr <Counter with Delta>] - GetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of getattr operations.
[-getfh <Counter with Delta>] - GetFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of getfh operations.
[-link <Counter with Delta>] - Link Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of link operations.
[-lock <Counter with Delta>] - Lock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lock operations.
[-lockt <Counter with Delta>] - LockT Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lockt operations.
[-locku <Counter with Delta>] - LockU Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of locku operations.
[-lookup <Counter with Delta>] - Lookup Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lookup operations.
[-lookpp <Counter with Delta>] - LookupP Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lookupp operations.
[-nverfy <Counter with Delta>] - Nverify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of nverify operations.
[-open <Counter with Delta>] - Open Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of open operations.
[-opattr <Counter with Delta>] - OpenAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of openattr operations.
[-opconf <Counter with Delta>] - OpenConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of openconf operations.
[-opndg <Counter with Delta>] - OpenDowng Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of opendowng operations.
[-putfh <Counter with Delta>] - PutFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of putfh operations.
[-putpfh <Counter with Delta>] - PutPubFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of putpubfh operations.
[-putrfh <Counter with Delta>] - PutRootFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of putrootfh operations.
[-read <Counter with Delta>] - Read Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of read operations.
[-readdr <Counter with Delta>] - ReadDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readdir operations.
[-rlink <Counter with Delta>] - ReadLink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readlink operations.
[-remove <Counter with Delta>] - Remove Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of remove operations.
[-rename <Counter with Delta>] - Rename Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rename operations.
[-renew <Counter with Delta>] - Renew Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of renew operations.
[-restfh <Counter with Delta>] - RestoreFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of restorefh operations.
[-savefh <Counter with Delta>] - SaveFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of savefh operations.
[-secinf <Counter with Delta>] - SecInfo Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of secinfo operations.
[-sattr <Counter with Delta>] - SetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setattr operations.
[-sclid <Counter with Delta>] - SetCliId Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setcliid operations.
[-scidc <Counter with Delta>] - SetCliIdConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setcliidconf operations.
[-verify <Counter with Delta>] - Verify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of verify operations.
[-write <Counter with Delta>] - Write Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of write operations.
[-relown <Counter with Delta>] - RelLockOwn Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rellockown operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total NFSv4 operations.
[-null-pct <Counter with Delta>] - Percent Null Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of null operations.
[-cmpnd-pct <Counter with Delta>] - Percent Compound Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of compound operations.
[-access-pct <Counter with Delta>] - Percent Access Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of access operations.
[-close-pct <Counter with Delta>] - Percent Close Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of close operations.
[-commit-pct <Counter with Delta>] - Percent Commit Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of commit operations.
[-create-pct <Counter with Delta>] - Percent Create Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of create operations.
[-delpur-pct <Counter with Delta>] - Percent Delegpurge Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of delegpurge operations.
[-delrtn-pct <Counter with Delta>] - Percent Delegret Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of delegret operations.
[-gattr-pct <Counter with Delta>] - Percent GetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of getattr operations.
[-getfh-pct <Counter with Delta>] - Percent GetFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of getfh operations.
[-link-pct <Counter with Delta>] - Percent Link Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of link operations.
[-lock-pct <Counter with Delta>] - Percent Lock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lock operations.
[-lockt-pct <Counter with Delta>] - Percent LockT Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lockt operations.
[-locku-pct <Counter with Delta>] - Percent LockU Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of locku operations.
[-lookup-pct <Counter with Delta>] - Percent Lookup Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lookup operations.
[-lookpp-pct <Counter with Delta>] - Percent LookupP Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lookupp operations.
[-nverfy-pct <Counter with Delta>] - Percent Nverify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of nverify operations.
[-open-pct <Counter with Delta>] - Percent Open Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of open operations.
[-opattr-pct <Counter with Delta>] - Percent OpenAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of openattr operations.
[-opconf-pct <Counter with Delta>] - Percent OpenConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of openconf operations.
[-opndg-pct <Counter with Delta>] - Percent OpenDowng Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of opendowng operations.
[-putfh-pct <Counter with Delta>] - Percent PutFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of putfh operations.
[-putpfh-pct <Counter with Delta>] - Percent PutPubFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of putpubfh operations.
[-putrfh-pct <Counter with Delta>] - Percent PutRootFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of putrootfh operations.
[-read-pct <Counter with Delta>] - Percent Read Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of read operations.
[-readdr-pct <Counter with Delta>] - Percent ReadDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readdir operations.
[-rlink-pct <Counter with Delta>] - Percent ReadLink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readlink operations.
[-remove-pct <Counter with Delta>] - Percent Remove Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of remove operations.
[-rename-pct <Counter with Delta>] - Percent Rename Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rename operations.
[-renew-pct <Counter with Delta>] - Percent Renew Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of renew operations.
[-restfh-pct <Counter with Delta>] - Percent RestoreFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of restorefh operations.
[-savefh-pct <Counter with Delta>] - Percent SaveFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of savefh operations.
[-secinf-pct <Counter with Delta>] - Percent SecInfo Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of secinfo operations.
[-sattr-pct <Counter with Delta>] - Percent SetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setattr operations.
[-sclid-pct <Counter with Delta>] - Percent SetCliId Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setcliid operations.
[-scidc-pct <Counter with Delta>] - Percent SetCliIdConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setcliidconf operations.
[-verify-pct <Counter with Delta>] - Percent Verify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of verify operations.
[-write-pct <Counter with Delta>] - Percent Write Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of write operations.
[-relown-pct <Counter with Delta>] - Percent RelLockOwn Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rellockown operations.
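As a sketch of how these filters combine, the following invocation restricts the display to successful operations on a single node; the node name node1 and the filter values are illustrative, not requirements:

```
cluster1::> statistics-v1 nfs show-v4 -node node1 -result success
```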
● Examples
The following example displays statistics about the NFSv4 operations for a node named node1:
Setclid Ops: 1 - 0% -
Setclidconf Ops: 1 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 3 - 1% -
Rlockown Ops: 0 - 0% -
Total Ops: 286 -
Node Value Delta Percent Ops Delta
node1 ---------------------------failure-----------------------
Null Procs: 0 - 0% -
Cmpnd Procs: 0 - -
Access Ops: 0 - 0% -
Close Ops: 0 - 0% -
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 0 - 0% -
Getfh Ops: 0 - 0% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Lookup Ops: 5 - 63% -
Lookupp Ops: 0 - 0% -
Nverify Ops: 0 - 0% -
Open Ops: 2 - 25% -
Openattr Ops: 0 - 0% -
Openconf Ops: 0 - 0% -
Opendowng Ops: 0 - 0% -
Putfh Ops: 0 - 0% -
Putpubfh Ops: 0 - 0% -
Putrootfh Ops: 0 - 0% -
Read Ops: 0 - 0% -
Readdir Ops: 0 - 0% -
Readlink Ops: 0 - 0% -
Remove Ops: 0 - 0% -
Rename Ops: 0 - 0% -
Renew Ops: 0 - 0% -
Restorefh Ops: 0 - 0% -
Savefh Ops: 0 - 0% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 1 - 13% -
Setclid Ops: 0 - 0% -
Setclidconf Ops: 0 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 0 - 0% -
Rlockown Ops: 0 - 0% -
Total Ops: 8 -
Node Value Delta Percent Ops Delta
node1 --------------------------- all -----------------------
Null Procs: 2 - 1% -
Cmpnd Procs: 92 - -
Access Ops: 16 - 5% -
Close Ops: 8 - 3% -
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 76 - 26% -
Getfh Ops: 22 - 7% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Lookup Ops: 18 - 6% -
Lookupp Ops: 0 - 0% -
Nverify Ops: 0 - 0% -
Open Ops: 10 - 3% -
Openattr Ops: 0 - 0% -
Openconf Ops: 0 - 0% -
Opendowng Ops: 0 - 0% -
Putfh Ops: 92 - 31% -
Putpubfh Ops: 0 - 0% -
Putrootfh Ops: 2 - 1% -
Read Ops: 0 - 0% -
Readdir Ops: 2 - 1% -
Readlink Ops: 0 - 0% -
Remove Ops: 5 - 2% -
Rename Ops: 3 - 1% -
Renew Ops: 0 - 0% -
Restorefh Ops: 11 - 4% -
Savefh Ops: 13 - 4% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 9 - 3% -
Setclid Ops: 1 - 0% -
Setclidconf Ops: 1 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 3 - 1% -
Rlockown Ops: 0 - 0% -
Total Ops: 294 -
statistics-v1 nfs show-v41
Display NFSv4.1 statistics
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The statistics-v1 nfs show-v41 command displays the following statistics about the NFSv4.1
operations on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of compound operations
• Total number of access operations
• Total number of close operations
• Total number of commit operations
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv4.1 statistics only for the specified node.
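A minimal invocation restricting the display to a single node (node1 is an illustrative node name) might look like:

```
cluster1::> statistics-v1 nfs show-v41 -node node1
```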
● Examples
The following example displays statistics about the NFSv4.1 operations for a node named node1:
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 0 - 0% -
Getfh Ops: 0 - 0% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Lookup Ops: 5 - 63% -
Lookupp Ops: 0 - 0% -
Nverify Ops: 0 - 0% -
Open Ops: 2 - 25% -
Openattr Ops: 0 - 0% -
Openconf Ops: 0 - 0% -
Opendowng Ops: 0 - 0% -
Putfh Ops: 0 - 0% -
Putpubfh Ops: 0 - 0% -
Putrootfh Ops: 0 - 0% -
Read Ops: 0 - 0% -
Readdir Ops: 0 - 0% -
Readlink Ops: 0 - 0% -
Remove Ops: 0 - 0% -
Rename Ops: 0 - 0% -
Renew Ops: 0 - 0% -
Restorefh Ops: 0 - 0% -
Savefh Ops: 0 - 0% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 1 - 13% -
Setclid Ops: 0 - 0% -
Setclidconf Ops: 0 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 0 - 0% -
Rlockown Ops: 0 - 0% -
Total Ops: 8 -
Node Value Delta Percent Ops Delta
node1 --------------------------- all -----------------------
Null Procs: 2 - 1% -
Cmpnd Procs: 92 - -
Access Ops: 16 - 5% -
Close Ops: 8 - 3% -
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 76 - 26% -
Getfh Ops: 22 - 7% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Lookup Ops: 18 - 6% -
Lookupp Ops: 0 - 0% -
Nverify Ops: 0 - 0% -
Open Ops: 10 - 3% -
Openattr Ops: 0 - 0% -
Openconf Ops: 0 - 0% -
Opendowng Ops: 0 - 0% -
Putfh Ops: 92 - 31% -
Putpubfh Ops: 0 - 0% -
Putrootfh Ops: 2 - 1% -
Read Ops: 0 - 0% -
Readdir Ops: 2 - 1% -
Readlink Ops: 0 - 0% -
Remove Ops: 5 - 2% -
Rename Ops: 3 - 1% -
Renew Ops: 0 - 0% -
Restorefh Ops: 11 - 4% -
Savefh Ops: 13 - 4% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 9 - 3% -
Setclid Ops: 1 - 0% -
Setclidconf Ops: 1 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 3 - 1% -
Rlockown Ops: 0 - 0% -
Total Ops: 294 -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays size statistics for CIFS and NFS protocol read and write requests. The output of the
command includes the following information:
• Node name
• Statistic type
• Average size of request
• Total request count
• Current number of requests in each category of request size
• Number of requests since the command was last executed
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example displays the number of NFS v3 requests in each size range for only one node in the
cluster.
Node: node0
Stat Type: nfs3_write
Value Delta
-------------- -------- ----------
Average Size: 0 -
Total Request Count: 199294247 -
0-511: 36556 -
512-1023: 3683 -
1K-2047: 745 -
2K-4095: 1413 -
4K-8191: 28643 -
8K-16383: 199223207 -
16K-32767: 0 -
32K-65535: 0 -
64K-131071: 0 -
128K - : 0 -
statistics show
Display performance data for a time interval
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command displays performance data for a period of time.
To display data for a period of time, collect a sample using the statistics start and statistics stop
commands. The data that displays is calculated data based on the samples the cluster collects. To view the
sample, specify the -sample-id parameter.
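The collection workflow described above can be sketched as follows; the sample identifier smpl_1 and the object name cluster are illustrative values:

```
cluster1::*> statistics start -object cluster -sample-id smpl_1
(wait while the cluster collects data)
cluster1::*> statistics stop -sample-id smpl_1
cluster1::*> statistics show -sample-id smpl_1
```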
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-tab ]}
If this parameter is specified, the command displays performance data in tabular format.
[-object <text>] - Object
Selects the objects for which you want to display performance data. To view a list of valid object names,
type statistics show -object ? or statistics catalog object show. To specify multiple
objects, use "|" between each object.
You should limit the scope of this command to only a few objects at a time
to avoid a potentially significant impact on the performance of the system.
● Examples
The following example starts collecting statistics and displays statistics for the sample named smpl_1 for the counters avg_processor_busy and cpu_busy:
The following example starts and stops data collection and displays statistics for the sample named smpl_1 for the counters avg_processor_busy and cpu_busy:
Counter Value
-------------------------------- --------------------------------
avg_processor_busy 249876451
cifs_ops 0
cpu_busy 303355441
disk_data_read 51453952
disk_data_written 486117376
fcp_data_recv 0
fcp_data_sent 0
fcp_ops 0
hdd_data_read 51453952
hdd_data_written 486117376
hostname node-name1
http_ops 0
instance_name cluster
iscsi_ops 0
net_data_recv 35034112
net_data_sent 3177472
nfs_ops 0
node_name node-name1
node_uuid
[...]
The following example displays raw statistics for counters "avg_processor_busy" and "cpu_busy":
Counter Value
-------------------------------- --------------------------------
avg_processor_busy 249876451
cpu_busy 303355441
[...]
Related reference
statistics catalog object show (page 911)
statistics start (page 904)
statistics stop (page 906)
statistics show-periodic
Continuously display current performance data at regular interval
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command continuously displays specified performance data at a regular interval. The command output
displays data in the following columns:
Note
This command has been deprecated and may be removed in a future version of Data ONTAP. Use
the "statistics show" command with the tabular format instead.
• cpu avg: Average processor utilization across all processors in the system.
• cpu busy: Overall system utilization based on CPU utilization and subsystem utilization. Examples of
subsystems include the storage subsystem and RAID subsystem.
• total ops: Number of total operations per second.
• nfs-ops: Number of NFS operations per second.
• cifs-ops: Number of CIFS operations per second.
• fcache ops: Number of 7-Mode FlexCache operations per second.
• pkts recv: Number of packets received over physical ports per second.
• pkts sent: Number of packets sent over physical ports per second.
• total recv: Total network traffic received over physical ports per second (KBps).
• total sent: Total network traffic sent over physical ports per second (KBps).
• data busy: The percentage of time that data ports sent or received data.
• data recv: Network traffic received on data ports (KBps).
• data sent: Network traffic sent on data ports (KBps).
• cluster busy: The percentage of time that cluster ports sent or received data.
• cluster recv: Network traffic received on cluster ports (KBps).
• cluster sent: Network traffic sent on cluster ports (KBps).
• disk read: Data read from disk (KBps).
• disk write: Data written to disk (KBps).
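Because this command is deprecated, an equivalent tabular display can be produced with the statistics show command. As a sketch (the object name cluster is illustrative):

```
cluster1::*> statistics show -tab -object cluster
```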
● Parameters
[-object <text>] - Object
Selects the object for which you want to display performance data. The default object is "cluster".
● Examples
The following example displays the "cluster" statistics for a node named node1. Because no number of
iterations is specified, this command will continue to run until you interrupt it by pressing Ctrl-C.
The following example displays the "processor" statistics for an instance named processor1. This command
will display only five iterations.
The following example displays the processor statistics for an instance named processor1 and counters
"processor_busy" and "sk_switches". This command will display only five iterations.
cluster1::*> statistics show-periodic -object processor -instance processor1 -iteration 5 -counter processor_busy|sk_switches
processor sk
busy switches
--------- --------
5% 1267
4% 1163
7% 1512
5% 1245
4% 1128
[...]
statistics start
Start data collection for a sample
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command starts the collection of performance data. Use the statistics stop command to stop the
collection. You view the sample of performance data by using the statistics show command. You can
collect more than one sample at a time.
● Parameters
[-object <text>] - Object
Selects the objects for which you want to collect performance data. This parameter is required. To view a
list of valid object names, type statistics catalog object show at the command prompt. To
specify multiple objects, use "|" between each object.
You should limit the scope of this command to only a few objects at a time
to avoid a potentially significant impact on the performance of the system.
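As a sketch, starting a named sample for two objects at once (the object names and the sample identifier smpl_1 are illustrative) might look like:

```
cluster1::*> statistics start -object "processor|cluster" -sample-id smpl_1
```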
● Examples
The following example starts statistics collection for sample "smpl_1":
The following example starts collecting statistics for the sample named smpl_1 for the counters avg_processor_busy and cpu_busy:
Related reference
statistics catalog object show (page 911)
statistics show (page 899)
statistics stop (page 906)
statistics stop
Stop data collection for a sample
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command stops the collection of performance data. You view the sample of performance data by using
the statistics show command.
● Parameters
[-sample-id <text>] - Sample Identifier
Specifies the identifier of the sample for which you want to stop data collection. If you do not specify this
parameter, the command stops data collection for the last sample that you started by running the
statistics start command without the -sample-id parameter.
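As a sketch, the two forms described above might look like the following; smpl_1 is an illustrative sample identifier:

```
cluster1::*> statistics stop -sample-id smpl_1
cluster1::*> statistics stop
```

The first form stops the named sample; the second stops the most recent sample that was started without a -sample-id value.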
● Examples
The following example stops data collection for sample "smpl_1":
Related reference
statistics start (page 904)
statistics show (page 899)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continuously displays performance data for aggregates at a regular interval. The command
output displays data in the following columns:
• Aggregate - aggregate name.
• Node - node name.
• Total Ops - total number of operations per second.
• Read Ops - read operations per second.
• Write Ops - write operations per second.
● Parameters
[-aggregate <text>] - Aggregate
Selects the aggregate for which you want to display performance data.
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies maximum number of aggregates to display. The default setting is 25.
● Examples
The following example displays aggregate statistics:
[...]
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command continuously displays performance data for flash pool caches at a regular interval. The
command output displays data in the following columns:
• Aggregate - aggregate name.
• Vserver - vserver name.
• Volume - volume name.
• Read Hit - percent of IOs serviced from a cache level.
• Write Hit - percent of IOs serviced from a cache level.
• Cache Used - percent of cache used.
• Read Blocks - read blocks.
• Write Blocks - write blocks.
• Rejects - cache rejects.
● Parameters
[-aggregate <text>] - Aggregate
Selects the aggregate for which you want to display performance data.
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-volume <text>] - Volume
Selects the volume for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of flash pools to display. The default setting is 25.
● Examples
The following example displays flash pool statistics:
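As a sketch, assuming this section documents `statistics cache flash-pool show` (the heading is not shown, so the command name and prompt are assumptions):

```
cluster1::> statistics cache flash-pool show -aggregate aggr1 -sort-key read_hit
```

The -sort-key value read_hit is hypothetical; valid keys correspond to the output columns listed above.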
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command displays the names and descriptions of counters. The displayed data is either node-specific or
cluster-wide, depending on the objects specified.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-describe ]}
Displays detailed information about each counter, including privilege level, label, and whether the
counter is a key counter.
-object <text> - Object
Selects the object for which you want to display the list of counters. This parameter is required. To view a
list of valid object names, type statistics catalog counter show -object ? or statistics
catalog object show.
[-counter <text>] - Counter
Selects the counters that match this parameter value. If you do not specify this parameter, the command
displays details for all counters.
[-filter <text>] - Filter Data
Selects the counters that match this parameter value. For example, to display counters from node1,
specify -filter "node_name=node1".
[-label <text>, ...] - Labels for Array Counters
Selects the counters that match this parameter value. A label is the name of the bucket to which an
array counter belongs.
[-description <text>] - Description
Selects the counters that match this parameter value.
[-privilege <text>] - Privilege Level
Selects the counters that match this parameter value.
[-is-key-counter {true|false}] - Is Key Counter
Selects the counters that are key counters (true) or are not key counters (false). A key counter is a
counter that uniquely identifies an instance across the cluster. The default setting is false. For example,
"vserver_name" and "node_name" are key counters because they identify the specific Vserver or node to
which the instance belongs.
[-is-deprecated {true|false}] - Is Counter Deprecated
Selects the counters that are deprecated (true) or are not deprecated (false).
[-replaced-by <text>] - Replaced By Counter If Deprecated
Selects all deprecated counters that are replaced by the counter provided to this parameter.
● Examples
The following example displays the list of counters in the processor object.
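Based on the description above, a minimal sketch of listing the processor counters (the prompt form is assumed; the command runs at the advanced privilege level):

```
cluster1::*> statistics catalog counter show -object processor
```

Adding -filter "node_name=node1", as in the parameter description above, restricts the list to counters from node1.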
Related reference
statistics catalog object show (page 911)
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command displays the names of instances associated with the specified object. The displayed data is
either node-specific or cluster-wide, depending on the objects specified.
● Parameters
[-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
-object <text> - Object
Selects the object for which you want to display the list of instances. This parameter is required. To view
a list of valid object names, type statistics catalog instance show -object ? or statistics
catalog object show.
● Examples
The following example displays the list of instances associated with the processor object.
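A minimal sketch for the processor example above (the prompt form is assumed):

```
cluster1::*> statistics catalog instance show -object processor
```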
Related reference
statistics catalog object show (page 911)
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command displays the names and descriptions of objects from which you can obtain performance data.
The displayed data is either node-specific or cluster-wide, depending on the objects specified.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-describe ]}
Displays detailed information about each object, including privilege level.
[-object <text>] - Object
Selects the objects for which you want to display information. If you do not specify this parameter, the
command displays details for all of the objects.
[-privilege <text>] - Privilege Level
Selects the objects that match this parameter value.
● Examples
The following example displays descriptions of all objects in the cluster:
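A minimal sketch for the example above (the prompt form is assumed):

```
cluster1::*> statistics catalog object show
```

Add -describe to include per-object detail such as privilege level, or -object with a name to limit the output to a single object.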
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command continuously displays performance data for disks at a regular interval. The command output
displays data in the following columns:
• Disk - disk name.
• Node - node name.
• Busy (%) - percentage of time there was at least one outstanding request to the disk.
• Total Ops - total operations per second.
● Parameters
[-disk <text>] - Disk
Selects the disk for which you want to display performance data.
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of disks to display. The default setting is 25.
● Examples
The following example displays disk statistics:
[...]
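A sketch of a sorted, continuously refreshing invocation. The command name `statistics disk show` is an assumption (the heading is not shown), and the -sort-key value busy is hypothetical:

```
cluster1::> statistics disk show -sort-key busy -max 10 -iterations 0
```

With -iterations 0 the display refreshes every -interval seconds until interrupted with Ctrl-C.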
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command continuously displays performance data for LIFs at a regular interval. The command output
displays data in the following columns:
• LIF - logical interface name.
• Vserver - vserver name.
● Parameters
[-lif <text>] - LIF
Selects the LIF for which you want to display performance data.
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of LIFs to display. The default setting is 25.
● Examples
The following example displays LIF statistics:
[...]
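A sketch of limiting the display to one vserver. The command name `statistics lif show` is an assumption (the heading is not shown), and the vserver name vs1 is hypothetical:

```
cluster1::> statistics lif show -vserver vs1
```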
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command continuously displays performance data for LUNs at a regular interval. The command output
displays data in the following columns:
• Lun - LUN name.
• Vserver - vserver name.
• Total Ops - total number of operations per second.
• Read Ops - read operations per second.
• Write Ops - write operations per second.
• Other Ops - other operations per second.
• Read (Bps) - read throughput in bytes per second.
• Write (Bps) - write throughput in bytes per second.
• Latency (ms) - average latency for an operation in milliseconds.
● Parameters
[-lun <text>] - Lun
Selects the LUN for which you want to display performance data.
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of LUNs to display. The default setting is 25.
● Examples
The following example displays LUN statistics:
[...]
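A sketch of sorting LUN statistics by latency. The command name `statistics lun show` is an assumption (the heading is not shown), and the -sort-key value latency and vserver name vs1 are hypothetical:

```
cluster1::> statistics lun show -vserver vs1 -sort-key latency
```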
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command continuously displays performance data for Namespaces at a regular interval. The command
output displays data in the following columns:
• Namespace - Namespace name.
• Vserver - vserver name.
• Total Ops - total number of operations per second.
• Read Ops - read operations per second.
• Write Ops - write operations per second.
• Other Ops - other operations per second.
• Read (Bps) - read throughput in bytes per second.
• Write (Bps) - write throughput in bytes per second.
• Latency (ms) - average latency for an operation in milliseconds.
● Parameters
[-namespace <text>] - Namespace
Selects the Namespace for which you want to display performance data.
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of Namespaces to display. The default setting is 25.
● Examples
The following example displays Namespace statistics:
[...]
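A sketch of a slower refresh over a fixed run. The command name `statistics namespace show` is an assumption (the heading is not shown), and the vserver name vs1 is hypothetical:

```
cluster1::> statistics namespace show -vserver vs1 -interval 10 -iterations 6
```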
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The statistics nfs show-mount command displays the following statistics about the NFS mounts on
each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of mount operations
• Total number of dump operations
• Total number of unmount operations
• Total number of unmountall operations
• Total number of export operations
• Total number of exportall operations
• Total number of pathconf operations
• Total number of all the above operations
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-mount <Counter with Delta>] - Mount Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of mount operations.
[-dump <Counter with Delta>] - Dump Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of dump operations.
[-unmnt <Counter with Delta>] - UnMount Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmount operations.
[-unmntall <Counter with Delta>] - UnMountAll Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmountall operations.
[-export <Counter with Delta>] - Export Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of export operations.
[-exportall <Counter with Delta>] - ExportAll Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of exportall operations.
[-pathconf <Counter with Delta>] - PathConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of pathconf operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total operations.
● Examples
The following example displays statistics about the NFS mounts for a node named node1:
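The example above (output not reproduced here) corresponds to an invocation like the following; the prompt form is assumed:

```
cluster1::*> statistics nfs show-mount -node node1
```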
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The statistics nfs show-nlm command displays the following statistics about the Network Lock
Manager (NLM) on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of test operations
• Total number of lock operations
• Total number of cancel operations
• Total number of unlock operations
• Total number of granted operations
• Total number of share operations
• Total number of unshare operations
• Total number of nmlock operations
• Total number of freeall operations
• Total number of all the above operations
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
Note
This command requires an effective cluster version earlier than Data ONTAP 9.0. Data for nodes
running Data ONTAP 9.0 or later is not collected and will not be displayed. Use the statistics
show -object nlm command instead.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-test <Counter with Delta>] - Test Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of test operations.
[-lock <Counter with Delta>] - Lock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lock operations.
[-cancel <Counter with Delta>] - Cancel Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of cancel operations.
● Examples
The following example displays statistics about the NLM for a node named node1:
NmLock: 0 -
FreeAll: 0 -
Total: 0 -
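The truncated example output above corresponds to an invocation like the following; the prompt form is assumed:

```
cluster1::*> statistics nfs show-nlm -node node1
```

As noted above, this works only when the effective cluster version is earlier than Data ONTAP 9.0.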
Related reference
statistics show (page 899)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The statistics nfs show-statusmon command displays the following statistics about the Status
Monitor on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of stat operations
• Total number of monitor operations
• Total number of unmonitor operations
• Total number of unmonitor all operations
• Total number of simucrash operations
• Total number of notify operations
• Total number of all the above operations
This command is designed to be used to analyze performance characteristics and to help diagnose issues.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-stat <Counter with Delta>] - Stat Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of stat operations.
[-monitor <Counter with Delta>] - Monitor Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of monitor operations.
[-unmonitor <Counter with Delta>] - Unmonitor Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmonitor operations.
[-unmonall <Counter with Delta>] - Unmonitor All Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of unmonitor all operations.
[-simucrash <Counter with Delta>] - SimuCrash Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of simucrash operations.
[-notify <Counter with Delta>] - Notify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of notify operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total operations.
● Examples
The following example displays statistics about the status monitor for a node named node1:
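The example above corresponds to an invocation like the following (the prompt form and the -result value are illustrative; -result accepts success, failure, or all):

```
cluster1::*> statistics nfs show-statusmon -node node1 -result all
```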
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The statistics nfs show-v3 command displays the following statistics about the NFSv3 operations on
each node in the cluster:
• Result of the operations (success or failure)
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv3 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-gattr <Counter with Delta>] - GetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of getattr operations.
[-sattr <Counter with Delta>] - SetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setattr operations.
[-lookup <Counter with Delta>] - LookUp Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lookup operations.
[-access <Counter with Delta>] - Access Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of access operations.
[-rsym <Counter with Delta>] - ReadSymlink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readsymlink operations.
[-read <Counter with Delta>] - Read Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of read operations.
[-write <Counter with Delta>] - Write Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of write operations.
[-create <Counter with Delta>] - Create Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of create operations.
[-mkdir <Counter with Delta>] - MkDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of mkdir operations.
[-symln <Counter with Delta>] - SymLink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of symlink operations.
[-mknod <Counter with Delta>] - MkNod Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of mknod operations.
[-remove <Counter with Delta>] - Remove Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of remove operations.
[-rmdir <Counter with Delta>] - RmDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rmdir operations.
[-rename <Counter with Delta>] - Rename Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rename operations.
[-link <Counter with Delta>] - Link Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of link operations.
[-rdir <Counter with Delta>] - ReadDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readdir operations.
[-rdirp <Counter with Delta>] - ReadDirPlus Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readdirplus operations.
[-fsstat <Counter with Delta>] - FsStat Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of fsstat operations.
[-fsinfo <Counter with Delta>] - FsInfo Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of fsinfo operations.
[-pconf <Counter with Delta>] - PathConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of pathconf operations.
[-commit <Counter with Delta>] - Commit Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of commit operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total NFSv3 operations.
[-null-pct <Counter with Delta>] - Percent Null Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of null operations.
[-gattr-pct <Counter with Delta>] - Percent GetAttr Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of getattr operations.
[-sattr-pct <Counter with Delta>] - Percent SetAttr Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setattr operations.
[-lookup-pct <Counter with Delta>] - Percent LookUp Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lookup operations.
[-access-pct <Counter with Delta>] - Percent Access Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of access operations.
[-rsym-pct <Counter with Delta>] - Percent ReadSymlink Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readsymlink operations.
[-read-pct <Counter with Delta>] - Percent Read Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of read operations.
[-write-pct <Counter with Delta>] - Percent Write Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of write operations.
[-create-pct <Counter with Delta>] - Percent Create Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of create operations.
[-mkdir-pct <Counter with Delta>] - Percent MkDir Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of mkdir operations.
[-symln-pct <Counter with Delta>] - Percent SymLink Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of symlink operations.
[-mknod-pct <Counter with Delta>] - Percent MkNod Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of mknod operations.
[-remove-pct <Counter with Delta>] - Percent Remove Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of remove operations.
[-rmdir-pct <Counter with Delta>] - Percent RmDir Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rmdir operations.
[-rename-pct <Counter with Delta>] - Percent Rename Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rename operations.
[-link-pct <Counter with Delta>] - Percent Link Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of link operations.
[-rdir-pct <Counter with Delta>] - Percent ReadDir Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readdir operations.
[-rdirp-pct <Counter with Delta>] - Percent ReadDirPlus Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readdirplus operations.
[-fsstat-pct <Counter with Delta>] - Percent FsStat Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of fsstat operations.
[-fsinfo-pct <Counter with Delta>] - Percent FsInfo Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of fsinfo operations.
[-pconf-pct <Counter with Delta>] - Percent PathConf Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of pathconf operations.
[-commit-pct <Counter with Delta>] - Percent Commit Ops
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of commit operations.
● Examples
The following example displays statistics about the NFSv3 operations for a node named node1:
MkNod Ops: 0 - 0% -
Remove Ops: 0 - 0% -
RmDir Ops: 0 - 0% -
Rename Ops: 0 - 0% -
Link Ops: 0 - 0% -
ReadDir Ops: 0 - 0% -
ReadDirPlus Ops: 0 - 0% -
FsStat Ops: 0 - 0% -
FsInfo Ops: 0 - 0% -
PathConf Ops: 0 - 0% -
Commit Ops: 0 - 0% -
Total Ops: 2 -
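The truncated example output above corresponds to an invocation like the following; the prompt form is assumed:

```
cluster1::> statistics nfs show-v3 -node node1
```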
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The statistics nfs show-v4 command displays the following statistics about the NFSv4 operations on
each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of compound operations
• Total number of access operations
• Total number of close operations
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-lookup <Counter with Delta>] - LookUp Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lookup operations.
[-lookpp <Counter with Delta>] - LookupP Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of lookupp operations.
[-nverfy <Counter with Delta>] - Nverify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of nverify operations.
[-open <Counter with Delta>] - Open Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of open operations.
[-opattr <Counter with Delta>] - OpenAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of openattr operations.
[-opconf <Counter with Delta>] - OpenConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of openconf operations.
[-opndg <Counter with Delta>] - OpenDowng Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of opendowng operations.
[-putfh <Counter with Delta>] - PutFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of putfh operations.
[-putpfh <Counter with Delta>] - PutPubFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of putpubfh operations.
[-putrfh <Counter with Delta>] - PutRootFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of putrootfh operations.
[-read <Counter with Delta>] - Read Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of read operations.
[-readdr <Counter with Delta>] - ReadDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readdir operations.
[-rlink <Counter with Delta>] - ReadLink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of readlink operations.
[-remove <Counter with Delta>] - Remove Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of remove operations.
[-rename <Counter with Delta>] - Rename Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rename operations.
[-renew <Counter with Delta>] - Renew Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of renew operations.
[-restfh <Counter with Delta>] - RestoreFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of restorefh operations.
[-savefh <Counter with Delta>] - SaveFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of savefh operations.
[-secinf <Counter with Delta>] - SecInfo Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of secinfo operations.
[-sattr <Counter with Delta>] - SetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setattr operations.
[-sclid <Counter with Delta>] - SetCliId Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setcliid operations.
[-scidc <Counter with Delta>] - SetCliIdConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of setcliidconf operations.
[-verify <Counter with Delta>] - Verify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of verify operations.
[-write <Counter with Delta>] - Write Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of write operations.
[-relown <Counter with Delta>] - RelLockOwn Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of rellockown operations.
[-total <Counter64 with Delta>] - Total Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of total nfsv4 operations.
[-null-pct <Counter with Delta>] - Percent Null Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of null operations.
[-cmpnd-pct <Counter with Delta>] - Percent Compound Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of compound operations.
[-access-pct <Counter with Delta>] - Percent Access Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of access operations.
[-close-pct <Counter with Delta>] - Percent Close Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of close operations.
[-commit-pct <Counter with Delta>] - Percent Commit Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of commit operations.
[-create-pct <Counter with Delta>] - Percent Create Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of create operations.
[-delpur-pct <Counter with Delta>] - Percent Delegpurge Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of delegpurge operations.
[-delrtn-pct <Counter with Delta>] - Percent Delegret Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of delegret operations.
[-gattr-pct <Counter with Delta>] - Percent GetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of getattr operations.
[-getfh-pct <Counter with Delta>] - Percent GetFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of getfh operations.
[-link-pct <Counter with Delta>] - Percent Link Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of link operations.
[-lock-pct <Counter with Delta>] - Percent Lock Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lock operations.
[-lockt-pct <Counter with Delta>] - Percent LockT Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lockt operations.
[-locku-pct <Counter with Delta>] - Percent LockU Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of locku operations.
[-lookup-pct <Counter with Delta>] - Percent Lookup Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lookup operations.
[-lookpp-pct <Counter with Delta>] - Percent LookupP Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of lookupp operations.
[-nverfy-pct <Counter with Delta>] - Percent Nverify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of nverify operations.
[-open-pct <Counter with Delta>] - Percent Open Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of open operations.
[-opattr-pct <Counter with Delta>] - Percent OpenAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of openattr operations.
[-opconf-pct <Counter with Delta>] - Percent OpenConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of openconf operations.
[-opndg-pct <Counter with Delta>] - Percent OpenDowng Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of opendowng operations.
[-putfh-pct <Counter with Delta>] - Percent PutFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of putfh operations.
[-putpfh-pct <Counter with Delta>] - Percent PutPubFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of putpubfh operations.
[-putrfh-pct <Counter with Delta>] - Percent PutRootFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of putrootfh operations.
[-read-pct <Counter with Delta>] - Percent Read Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of read operations.
[-readdr-pct <Counter with Delta>] - Percent ReadDir Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readdir operations.
[-rlink-pct <Counter with Delta>] - Percent ReadLink Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of readlink operations.
[-remove-pct <Counter with Delta>] - Percent Remove Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of remove operations.
[-rename-pct <Counter with Delta>] - Percent Rename Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rename operations.
[-renew-pct <Counter with Delta>] - Percent Renew Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of renew operations.
[-restfh-pct <Counter with Delta>] - Percent RestoreFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of restorefh operations.
[-savefh-pct <Counter with Delta>] - Percent SaveFh Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of savefh operations.
[-secinf-pct <Counter with Delta>] - Percent SecInfo Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of secinfo operations.
[-sattr-pct <Counter with Delta>] - Percent SetAttr Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setattr operations.
[-sclid-pct <Counter with Delta>] - Percent SetCliId Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setcliid operations.
[-scidc-pct <Counter with Delta>] - Percent SetCliIdConf Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of setcliidconf operations.
[-verify-pct <Counter with Delta>] - Percent Verify Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of verify operations.
[-write-pct <Counter with Delta>] - Percent Write Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of write operations.
[-relown-pct <Counter with Delta>] - Percent RelLockOwn Operations
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified percentage of rellockown operations.
● Examples
The following example displays statistics about the NFSv4 operations for a node named node1:
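The invocation that produced this output is not shown in this excerpt; it would presumably be of the following form (the prompt and node name are illustrative, not captured from a live system):

```
cluster1::> statistics nfs show-v4 -node node1
```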
Access Ops: 0 - 0% -
Close Ops: 0 - 0% -
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 0 - 0% -
Getfh Ops: 0 - 0% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Lookup Ops: 5 - 63% -
Lookupp Ops: 0 - 0% -
Nverify Ops: 0 - 0% -
Open Ops: 2 - 25% -
Openattr Ops: 0 - 0% -
Openconf Ops: 0 - 0% -
Opendowng Ops: 0 - 0% -
Putfh Ops: 0 - 0% -
Putpubfh Ops: 0 - 0% -
Putrootfh Ops: 0 - 0% -
Read Ops: 0 - 0% -
Readdir Ops: 0 - 0% -
Readlink Ops: 0 - 0% -
Remove Ops: 0 - 0% -
Rename Ops: 0 - 0% -
Renew Ops: 0 - 0% -
Restorefh Ops: 0 - 0% -
Savefh Ops: 0 - 0% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 1 - 13% -
Setclid Ops: 0 - 0% -
Setclidconf Ops: 0 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 0 - 0% -
Rlockown Ops: 0 - 0% -
Total Ops: 8 -
Node Value Delta Percent Ops Delta
node1 --------------------------- all -----------------------
Null Procs: 2 - 1% -
Cmpnd Procs: 92 - -
Access Ops: 16 - 5% -
Close Ops: 8 - 3% -
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 76 - 26% -
Getfh Ops: 22 - 7% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Lookup Ops: 18 - 6% -
Lookupp Ops: 0 - 0% -
Nverify Ops: 0 - 0% -
Open Ops: 10 - 3% -
Openattr Ops: 0 - 0% -
Openconf Ops: 0 - 0% -
Opendowng Ops: 0 - 0% -
Putfh Ops: 92 - 31% -
Putpubfh Ops: 0 - 0% -
Putrootfh Ops: 2 - 1% -
Read Ops: 0 - 0% -
Readdir Ops: 2 - 1% -
Readlink Ops: 0 - 0% -
Remove Ops: 5 - 2% -
Rename Ops: 3 - 1% -
Renew Ops: 0 - 0% -
Restorefh Ops: 11 - 4% -
Savefh Ops: 13 - 4% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 9 - 3% -
Setclid Ops: 1 - 0% -
Setclidconf Ops: 1 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 3 - 1% -
Rlockown Ops: 0 - 0% -
Total Ops: 294 -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The statistics nfs show-v41 command displays the following statistics about the NFSv4.1 operations
on each node in the cluster:
• Result of the operations (success or failure)
• Total number of null operations
• Total number of compound operations
• Total number of access operations
• Total number of close operations
• Total number of commit operations
• Total number of create operations
• Total number of delegpurge operations
• Total number of delegret operations
• Total number of getattr operations
• Total number of getfh operations
• Total number of link operations
• Total number of lock operations
• Total number of lockt operations
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays NFSv4.1 statistics only for the specified node.
[-result {success|failure|all}] - Result
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified result (success/failure/all).
[-null <Counter with Delta>] - Null Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of null operations.
[-cmpnd <Counter with Delta>] - Compound Procedure
If you specify this parameter, the command displays statistics only about the node or nodes that have
the specified number of compound operations.
● Examples
The following example displays statistics about the NFSv4.1 operations for a node named node1:
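Going by the command name given in the description above, the output below corresponds to an invocation such as the following (prompt illustrative):

```
cluster1::> statistics nfs show-v41 -node node1
```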
Access Ops: 16 - 6% -
Close Ops: 8 - 3% -
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 76 - 27% -
Getfh Ops: 22 - 8% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Lookup Ops: 13 - 5% -
Lookupp Ops: 0 - 0% -
Nverify Ops: 0 - 0% -
Open Ops: 8 - 3% -
Openattr Ops: 0 - 0% -
Openconf Ops: 0 - 0% -
Opendowng Ops: 0 - 0% -
Putfh Ops: 92 - 32% -
Putpubfh Ops: 0 - 0% -
Putrootfh Ops: 2 - 1% -
Read Ops: 0 - 0% -
Readdir Ops: 2 - 1% -
Readlink Ops: 0 - 0% -
Remove Ops: 5 - 2% -
Rename Ops: 3 - 1% -
Renew Ops: 0 - 0% -
Restorefh Ops: 11 - 4% -
Savefh Ops: 13 - 5% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 8 - 3% -
Setclid Ops: 1 - 0% -
Setclidconf Ops: 1 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 3 - 1% -
Rlockown Ops: 0 - 0% -
Total Ops: 286 -
Node Value Delta Percent Ops Delta
node1 ---------------------------failure-----------------------
Null Procs: 0 - 0% -
Cmpnd Procs: 0 - -
Access Ops: 0 - 0% -
Close Ops: 0 - 0% -
Commit Ops: 0 - 0% -
Create Ops: 0 - 0% -
Delpur Ops: 0 - 0% -
Delrtn Ops: 0 - 0% -
Getattr Ops: 0 - 0% -
Getfh Ops: 0 - 0% -
Link Ops: 0 - 0% -
Lock Ops: 0 - 0% -
Lockt Ops: 0 - 0% -
Locku Ops: 0 - 0% -
Readlink Ops: 0 - 0% -
Remove Ops: 5 - 2% -
Rename Ops: 3 - 1% -
Renew Ops: 0 - 0% -
Restorefh Ops: 11 - 4% -
Savefh Ops: 13 - 4% -
Secinfo Ops: 0 - 0% -
Setattr Ops: 9 - 3% -
Setclid Ops: 1 - 0% -
Setclidconf Ops: 1 - 0% -
Verify Ops: 0 - 0% -
Write Ops: 3 - 1% -
Rlockown Ops: 0 - 0% -
Total Ops: 294 -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continuously displays performance data for nodes at a regular interval. The command output
displays data in the following columns:
• Node - node name.
• CPU (%) - CPU utilization.
• Total Ops - total number of operations per second.
• Latency(ms) - average latency for an operation in milliseconds.
● Parameters
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of nodes to display. The default setting is 25.
● Examples
The following example displays node statistics:
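A hedged invocation sketch, assuming the command is statistics node show (the name is not shown in this excerpt) and using illustrative values:

```
cluster1::> statistics node show -interval 5 -iterations 3 -max 10
```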
[...]
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The statistics oncrpc show-rpc-calls command displays information about the Open Network
Computing Remote Procedure Call (ONC RPC) calls performed by the nodes of a cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information only about the RPC calls performed by the node you specify.
[-protocol {TCP|UDP}] - Transport Protocol
Use this parameter to display information only about the RPC calls performed using the network protocol
you specify.
[-badproc <Counter with Delta>] - Bad Procedure Calls
Use this parameter to display information only about the RPC calls that have the number of bad
procedure calls you specify. Bad procedure calls are RPC requests that contain invalid procedure numbers
and cannot be completed.
[-badlen <Counter with Delta>] - Bad Length Calls
Use this parameter to display information only about the RPC calls that have the number of bad length
calls you specify.
● Examples
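A minimal invocation sketch at the advanced privilege level; the node name is illustrative:

```
cluster1::*> statistics oncrpc show-rpc-calls -node node1 -protocol TCP
```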
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continuously displays performance data for FCP ports at a regular interval. The command
output displays data in the following columns:
• Port - port name.
• Read Ops - read operations per second.
• Write Ops - write operations per second.
• Other Ops - other operations per second.
● Parameters
[-port <text>] - Port
Selects the port for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of ports to display. The default setting is 25.
● Examples
The following example displays port statistics:
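Assuming the command is statistics port fcp show (the name is not shown in this excerpt), a sketch with illustrative values:

```
cluster1::> statistics port fcp show -interval 5 -iterations 1
```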
[...]
This directory contains commands to delete, modify, import, and display Performance Preset
configurations and their details. A Performance Preset declares a list of one or more Performance Object
names and, for each Object, a list of Performance Counter names. Preset configurations intended for use by
the Performance Archive specify a sample period for each Counter, declaring how often that Counter is to be
archived. To create a new Performance Preset configuration, use the perf-preset-create API or the
statistics preset import command.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Deletes a performance preset configuration and all of its associated details.
● Parameters
-preset <text> - Preset Name
Specifies the name of the performance preset that you want to delete.
● Examples
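A minimal sketch, assuming a previously imported, non-read-only preset named my_preset (a hypothetical name):

```
cluster1::*> statistics preset delete -preset my_preset
```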
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Modifies an existing performance preset configuration. The command modifies the global properties of a
preset, but does not modify the details of the preset, such as specific objects and counters sampled.
● Parameters
-preset <text> - Preset Name
Name of the performance preset to be modified.
[-new-name <text>] - New Preset Name
Set preset name to the given new name.
[-comment <text>] - Preset Description
Set comment to the given value.
[-privilege <PrivilegeLevel>] - Preset Privilege Level
Set privilege level at which this preset can be viewed or modified to the given value. Possible values:
admin, advanced, diagnostic.
● Examples
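A sketch combining the global properties described above; the preset name and comment are hypothetical:

```
cluster1::*> statistics preset modify -preset my_preset -comment "Archive volume counters" -privilege advanced
```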
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Displays information about performance preset configurations.
● Parameters
{ [-fields <fieldname>, ...]
Selects which performance preset attributes to display.
| [-instance ]}
Shows details of all attributes of performance preset configuration.
[-preset <text>] - Preset Name
Selects the performance presets that match the specified preset name.
[-comment <text>] - Preset Description
Selects the performance presets that match the specified comment.
[-privilege <PrivilegeLevel>] - Preset Privilege Level
Selects the performance presets that are available with the specified privilege.
[-is-read-only {true|false}] - Is Preset Read-Only?
Selects the performance presets that are read-only (true) or are not read-only (false). Read-only presets
cannot be modified.
[-store <text>] - Name of Store Where Data is Saved
Selects the store where data is saved.
● Examples
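For instance, to list only the read-only presets visible at the admin privilege level (a sketch):

```
cluster1::*> statistics preset show -privilege admin -is-read-only true
```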
This directory contains commands to display the Performance Preset Details of Performance Preset configurations.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Displays the specific details of each preset, including the objects sampled, the counter sample periods, and
the counters sampled.
● Parameters
{ [-fields <fieldname>, ...]
Selects which performance preset detail attributes to display.
| [-instance ]}
Displays all of the performance preset detail attributes.
[-preset <text>] - Preset Name
Selects the performance preset details that match the specified preset name.
[-object <text>] - Performance Object
Selects the performance preset details that match the specified object name.
[-sample-period <sample_period>] - Archive Sample period
Selects the performance preset details that are collected at the specified sample period.
[-counter-set <text>, ...] - Performance Counter Name Set
Selects the performance preset details that match the specified counters in the counter set. Use "|" to
separate multiple counters.
[-instance-filters <text>, ...] - Performance Instance Filters
Selects the performance preset details that match the specified instance filters. Use "|" to separate
multiple instance filters. This field is reserved for future use.
● Examples
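A sketch restricting the output to a single object of a hypothetical preset:

```
cluster1::*> statistics preset detail show -preset my_preset -object volume
```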
user_writes, cp_reads, user_read_blocks, user_write_blocks, cp_read_blocks,
wv_fsid, wv_vol_type, wv_fsinfo_fs_version, wv_volinfo_fs_options, . . .
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command continuously displays performance data for qtrees at a regular interval. The command output
displays data in the following columns:
• Qtree - Qtree name.
• Vserver - Vserver name.
• Volume - Volume name.
• NFS Ops - NFS operations per second.
• CIFS Ops - CIFS operations per second.
• Internal Ops - Internal operations per second.
• Total Ops - Total number of operations per second.
● Parameters
[-qtree <text>] - Qtree
Selects the qtree for which you want to display performance data. Specify the qtree name in the
format "vol_name/qtree_name".
[-vserver <vserver name>] - Vserver
Selects the Vserver for which you want to display performance data.
[-volume <text>] - Volume
Selects the volume for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of qtrees to display. The default setting is 25.
● Examples
The following example displays qtree statistics:
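A sketch with a hypothetical qtree, illustrating the required "vol_name/qtree_name" format:

```
cluster1::> statistics qtree show -vserver vs1 -qtree vol01/qtree1 -iterations 1
```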
[...]
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command deletes samples that you created using the statistics start command.
● Parameters
-vserver <vserver name> - Vserver
Selects the Vserver for which you want to delete the sample. The default is the admin Vserver.
-sample-id <text> - Sample Identifier
Specifies the sample that you want to delete. This is a required parameter.
● Examples
The following example deletes the sample "smpl_1":
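The invocation would presumably be of the following form (prompt illustrative):

```
cluster1::*> statistics samples delete -vserver vs1 -sample-id smpl_1
```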
Related reference
statistics start (page 904)
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command displays information about the samples that you created using the statistics start
command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-describe ]}
Displays detailed information about each sample.
[-vserver <vserver name>] - Vserver
Selects the samples that match this parameter value. If you omit this parameter, the command displays
details for all samples.
[-sample-id <text>] - Sample Identifier
Selects the samples that match this parameter value. If you do not specify this parameter, the command
will display information about all the samples in the cluster.
● Examples
The following example displays information for sample "smpl_1":
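The output below would presumably be produced by an invocation such as:

```
cluster1::*> statistics samples show -sample-id smpl_1
```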
Vserver: vs1
Sample ID: smpl_1
Object: processor
Instance: -
Counter: -
Start Time: 09/13 18:06:46
Stop Time: -
Status: Ready - -
Privilege: admin
Related reference
statistics start (page 904)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command modifies the settings for all of the statistics commands.
● Parameters
[-display-rates {true|false}] - Display Rates
Specifies whether the statistics commands display rate counters in rates/second. The default is true.
[-client-stats {enabled|disabled}] - Collect Per-Client Statistics
Specifies whether statistics commands display per-client information. The default is disabled.
Note
If you enable this setting, you might significantly impact system performance.
● Examples
The following example sets the value of the -display-rates parameter to false:
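The invocation would presumably be (prompt illustrative):

```
cluster1::*> statistics settings modify -display-rates false
```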
Related reference
24. Statistics Commands (page 899)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command displays the current settings for all of the statistics commands.
● Examples
The following example displays the current settings for all statistics commands:
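The command takes no parameters, so the invocation is simply (prompt illustrative):

```
cluster1::*> statistics settings show
```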
Related reference
24. Statistics Commands (page 899)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continuously displays performance data for the cluster at a regular interval. The command output
displays data in the following columns:
• System - cluster name.
• CPU (%) - CPU utilization.
• Total Ops - total number of operations per second.
• Latency(ms) - average latency for an operation in milliseconds.
● Parameters
[-system <text>] - System
Selects the cluster for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of systems to display. The default setting is 25.
● Examples
The following example displays system statistics:
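Assuming the command is statistics system show (the name is not shown in this excerpt), a sketch:

```
cluster1::> statistics system show -interval 5 -iterations 1
```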
[...]
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continuously displays performance data for the top NFS and CIFS clients at a regular interval.
The command output displays data in the following columns:
• Client - client name.
• Vserver - vserver name.
• Node - node name.
• Total Ops - total number of operations per second.
• Total (Bps) - total throughput in bytes per second.
● Parameters
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of top clients to display. The default setting is 10.
● Examples
The following example displays top client statistics:
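An invocation consistent with the two-client output below might be the following, assuming the command is statistics top client show (a sketch; the sort key is omitted):

```
cluster1::> statistics top client show -max 2 -iterations 1
```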
*Total Total
Client Vserver Node Ops (Bps)
------------------ --------- ------------- ------ -----
172.17.236.53:938 vserver01 cluster-node2 9 80
172.17.236.160:898 vserver02 cluster-node1 6 50
[...]
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continuously displays performance data for top files at a regular interval. The command
output displays data in the following columns:
• File - file name.
• Volume - volume name.
• Vserver - vserver name.
• Aggregate - aggregate name.
• Node - node name.
• Total Ops - total number of operations per second.
• Total (Bps) - total throughput in bytes per second.
● Parameters
[-node {<nodename>|local}] - Node
Selects the node for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies maximum number of top files to display. The default setting is 10.
● Examples
The following example displays top files statistics:
*Total Total
File Volume Vserver Aggregate Node Ops (Bps)
--------------------- ------- --------- --------- ------------- ------ -----
/vol/vol01/clus/cache vol01 vserver01 aggr1 cluster-node2 9 80
/vol/vol02 vol02 vserver02 aggr2 cluster-node1 6 50
[...]
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command continuously displays performance data for volumes at a regular interval. The command
output displays data in the following columns:
• Volume - volume name.
• Vserver - vserver name.
• Aggregate - aggregate name.
• Total Ops - total number of operations per second.
• Read Ops - read operations per second.
• Write Ops - write operations per second.
• Other Ops - other operations per second.
• Read (Bps) - read throughput in bytes per second.
• Write (Bps) - write throughput in bytes per second.
• Latency(us) - average latency for an operation in microseconds.
● Parameters
[-volume <text>] - Volume
Selects the volume for which you want to display performance data.
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-aggregate <text>] - Aggregate
Selects the aggregate for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of volumes to display. The default setting is 25.
● Examples
The following example displays volume statistics:
                         *Total Read Write Other   Read Write Latency
Volume Vserver Aggregate    Ops  Ops   Ops   Ops  (Bps) (Bps)    (us)
------ ------- --------- ------ ---- ----- ----- ------ ----- -------
vol0 - aggr0 58 13 15 29 310585 3014 39
vol0 - aggr0_n0 56 0 11 45 8192 28826 47
[...]
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continuously displays performance data for Vservers at a regular interval. The command
output displays data in the following columns:
• Vserver - Vserver name.
• Total Ops - total number of operations per second.
• Read Ops - read operations per second.
• Write Ops - write operations per second.
• Other Ops - other operations per second.
• Read (Bps) - read throughput in bytes per second.
• Write (Bps) - write throughput in bytes per second.
• Latency(us) - average latency for an operation in microseconds.
● Parameters
[-vserver <vserver name>] - Vserver
Selects the vserver for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of Vservers to display. The default setting is 25.
● Examples
The following example displays Vserver statistics:
[...]
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command continuously displays performance data for workloads at a regular interval. The command
output displays data in the following columns:
• Workload - workload name.
• Total Ops - total number of operations per second.
• Read Ops - read operations per second.
• Write Ops - write operations per second.
• Other Ops - other operations per second.
• Read (Bps) - read throughput in bytes per second.
• Write (Bps) - write throughput in bytes per second.
• Latency(us) - average latency for an operation in microseconds.
● Parameters
[-workload <text>] - Workload
Selects the workload for which you want to display performance data.
[-sort-key <text>] - Column to Sort By
If this parameter is specified, the command displays statistics sorted by the specified column.
-interval <integer> - Interval
Specifies, in seconds, the interval between statistics updates. The default setting is 5 seconds.
-iterations <integer> - Iterations
Specifies the number of iterations the command runs before terminating. The default setting is 1. If the
number is 0 (zero), the command continues to run until you interrupt it by pressing Ctrl-C.
-max <integer> - Maximum Number of Instances
Specifies the maximum number of workloads to display. The default setting is 25.
● Examples
The following example displays workload statistics:
[...]
storage-service show
Display the available storage services
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
● Description
This command displays the available storage services.
Note
The available storage services are defined by the type of storage making up an aggregate.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <vserver name>] - Vserver
Selects the available storage services for Vservers that match the parameter value.
[-storage-service <text>] - Storage Service
Selects the available storage services whose name matches the parameter value.
[-description <text>] - Description
Selects the available storage services whose description matches the parameter value. This field is a text
description of the storage service.
[-expected-iops-per-tb <integer>] - Expected IOPS per TB
Selects the available storage services whose expected IOPS per TB matches the parameter value. When
multiplied by a number of TB, this field yields the number of IOPS nominally guaranteed by the storage
service. The multiplier is either the logical used space or the provisioned size of the storage object,
depending on the value of expected-iops-allocation.
[-expected-iops-allocation {used-space|allocated-space}] - Expected IOPS Allocation
Selects the available storage services whose expected IOPS allocation policy matches the parameter
value. The allocation policy is either allocated-space or used-space. When the expected-iops-allocation
policy is allocated-space, the expected IOPS is calculated based on the size of the storage object. When
the expected-iops-allocation policy is set to used-space, the expected IOPS is calculated based on the
amount of data logically stored in the storage object.
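The expected-IOPS calculation described above can be sketched as follows; this is an illustrative model of the documented arithmetic, with a hypothetical function name, not an ONTAP API:

```python
def expected_iops(expected_iops_per_tb, used_tb, provisioned_tb,
                  allocation="used-space"):
    """Expected IOPS nominally guaranteed by a storage service: the
    per-TB rate times either the logical used space ('used-space')
    or the provisioned size ('allocated-space')."""
    if allocation == "used-space":
        return expected_iops_per_tb * used_tb
    if allocation == "allocated-space":
        return expected_iops_per_tb * provisioned_tb
    raise ValueError("allocation must be 'used-space' or 'allocated-space'")
```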
[-peak-iops-per-tb <integer>] - Peak IOPS per TB
Selects the available storage services whose peak IOPS per TB matches the parameter value. When
multiplied by a number of TB, this field yields the number of IOPS for the maximum Quality of Service
(QoS) throttle. The multiplier is either the logical used space or the provisioned size of the storage
object, depending on the value of peak-iops-allocation.
[-peak-iops-allocation {used-space|allocated-space}] - Peak IOPS Allocation
Selects the available storage services whose peak IOPS allocation policy matches the parameter value.
The allocation policy is either allocated-space or used-space. When the peak-iops-allocation policy is
allocated-space, the peak IOPS is calculated based on the size of the storage object. When the peak-
iops-allocation policy is set to used-space, the peak IOPS is calculated based on the amount of data
logically stored in the storage object.
[-absolute-min-iops <integer>] - Absolute Minimum IOPS
Selects the available storage services whose absolute minimum IOPS matches the parameter value. This
field is the minimum number of IOPS used as the Quality of Service (QoS) throttle, if larger than the
values calculated using the IOPS per TB parameters.
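The interaction between the per-TB peak rate and the absolute minimum IOPS described above amounts to taking the larger of the two values. A minimal sketch (function name illustrative, not an ONTAP API):

```python
def qos_peak_throttle(peak_iops_per_tb, size_tb, absolute_min_iops=0):
    """Maximum QoS throttle: the per-TB peak rate times the size
    (used or provisioned space, per the peak-iops-allocation policy),
    floored by the absolute minimum IOPS when that is larger."""
    return max(peak_iops_per_tb * size_tb, absolute_min_iops)
```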
[-target-latency <integer>] - Target Latency (ms)
Selects the available storage service whose target latency matches the parameter value.
[-aggr-list <aggregate name>, ...] - Aggregate List
Selects the available storage services whose aggregate list matches the parameter value. The
aggregates shown are the only ones used for provisioning when the corresponding Vserver and storage
service are selected.
● Examples
The storage commands enable you to manage physical and logical storage, including disks and storage
aggregates. They also enable you to manage storage failover.
The storage aggregate command family manages aggregates. The storage aggregate commands can
create new aggregates, add more disks to an aggregate, delete the existing ones, change aggregate status
and apply options to an aggregate. Aggregate commands often affect the volumes contained within
aggregates.
An aggregate name can contain letters, numbers, and the underscore character (_), but the first character
must be a letter or an underscore. A maximum of 100 aggregates can be created on each node.
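The naming rule above can be expressed as a simple regular expression; a minimal sketch for validating candidate aggregate names:

```python
import re

# Letters, numbers, and underscores only; the first character must
# be a letter or an underscore.
_AGGR_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_aggr_name(name):
    return bool(_AGGR_NAME.match(name))
```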
An aggregate may be online, restricted, iron_restricted, or offline. When an aggregate is offline, no read or
write access is allowed. When an aggregate is restricted, certain operations are allowed (parity
recomputation or RAID reconstruction) but data access is not allowed.
Aggregates can be in combinations of the following RAID statuses:
normal
The aggregate is in a normal state.
copying
The aggregate is currently the target aggregate of an active aggr copy operation.
ironing
A WAFL consistency check is being performed on this aggregate.
degraded
The aggregate contains at least one degraded RAID group that is not being reconstructed.
mirror degraded
The aggregate is a mirrored aggregate, and one of its plexes is offline or resyncing.
growing
Disks are in the process of being added to the aggregate.
initializing
The aggregate is in the process of being initialized.
invalid
The aggregate does not contain any volume and no volume can be added to it. Typically, this happens after
an aborted aggregate copy operation.
needs check
A WAFL consistency check needs to be performed on the aggregate.
partial
The aggregate contains at least one disk; however, two or more disks are missing.
reconstruct
At least one RAID group in the aggregate is being reconstructed.
raid0
The aggregate consists of RAID-0 (no parity) RAID groups (V-Series and NetCache only).
raid4
The aggregate consists of RAID-4 RAID groups.
raid_dp
The aggregate consists of RAID-DP (Double Parity) RAID groups.
raid_tec
The aggregate consists of RAID-TEC (Triple Erasure Code) RAID groups.
redirect
A reallocation process optimized the layout of blocks in the aggregate. You cannot clear this status.
wafl inconsistent
The aggregate has been marked corrupted. Please contact technical support if you see an aggregate in this
state.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate add-disks command adds disks to an existing aggregate. You must specify the
number of disks or provide a list of disks to be added. If you specify the number of disks without providing a
list of disks, the system selects the disks.
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to which disks are to be added.
[-diskcount <integer>] - Disk Count
This parameter specifies the number of disks that are to be added to the aggregate.
{ [-disktype | -T {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM | SSD-CAP |
VMLUN | VMLUN-SSD}] - Disk Type
This parameter specifies the type of disk that is to be added. It must be specified with the -diskcount
parameter when adding disks to a Flash Pool.
Use this parameter when adding spare SSDs to an aggregate to convert it to a Flash Pool.
Note
Only the aggregates marked as hybrid-enabled can be converted to Flash Pools. Use storage
aggregate modify command to mark the aggregate as hybrid-enabled.
Note
When this parameter is used, disk selection is not influenced by RAID options
raid.mix.hdd.disktype.capacity, raid.mix.hdd.disktype.performance, or
raid.mix.disktype.solid_state. Only disks of the specified type are considered eligible for
selection.
Note
This parameter is applicable only when the -disklist or -mirror-disklist parameter is used.
[-allocation-units <integer>] - Allocation Units
This parameter specifies the number of allocation units to be added to a given aggregate from an SSD
storage pool. The number of available allocation units and the size of each unit can be found using the
storage pool show-available-capacity command. This parameter works only when you also use the
-storage-pool parameter.
[-simulate | -n [true]] - Simulate Addition of Disks
This parameter is used with the disktype and diskcount parameters to determine which disks would be
added without actually performing the addition of disks operation.
[-raidgroup | -g {new|all|<raidgroup>}] - RAID Group
This parameter enables the administrator to specify which RAID group will receive the added disks. If
this parameter is not used, the disks are added to the most recently created RAID group until it is full,
then new RAID groups are created and filled until all the disks are added. If a RAID group name rgX is
specified, the disks are added to that RAID group. If new is specified, the disks are added to a new RAID
group, even if the disks would fit into an existing RAID group. If all is specified, the disks are added to
existing RAID groups until all existing RAID groups are full. Then Data ONTAP creates one or more new
RAID groups and adds the remaining disks to the new groups. If the disk type or checksum style
parameters are specified with this parameter, the command operates only on the RAID groups with the
matching disk type or checksum style, even if all is specified.
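The placement rules above (fill the newest group by default, fill all existing groups for all, start fresh groups for new) can be sketched as follows. This is a simplified model: it ignores targeted placement into a named group rgX and the disk-type/checksum filtering, and the function name is invented for illustration:

```python
def place_disks(group_sizes, n_disks, max_group_size, raidgroup="default"):
    """Model of -raidgroup placement. `group_sizes` lists existing RAID
    group sizes, most recently created last. Returns the new sizes.
    - default: fill the most recently created group, then create new groups
    - "all":   fill every existing group, then create new groups
    - "new":   put all added disks into freshly created groups
    """
    groups = list(group_sizes)
    remaining = n_disks
    if raidgroup == "all":
        for i in range(len(groups)):          # top up every existing group
            take = min(remaining, max_group_size - groups[i])
            groups[i] += take
            remaining -= take
    elif raidgroup != "new":                  # default: newest group first
        take = min(remaining, max_group_size - groups[-1])
        groups[-1] += take
        remaining -= take
    while remaining > 0:                      # leftovers go into new groups
        take = min(remaining, max_group_size)
        groups.append(take)
        remaining -= take
    return groups
```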
[-cache-raid-group-size <integer>] - RAID Group Size for Cache Tier
This parameter specifies the maximum number of disks that can be included in an SSD RAID group for
this aggregate.
Note
This parameter is applicable only when adding SSDs for the first time to a hybrid-enabled
aggregate. If this parameter is not used when the first SSDs are added to the aggregate, the
maximum RAID group size for the SSD cache is set to the default SSD RAID group size for the RAID
type of the SSD cache.
● Examples
The following example adds 10 disks to an aggregate named aggr0. The disks are added to a RAID group
named rg1:
In this example, an aggregate is converted to a Flash Pool aggregate using SSD capacity from a storage
pool. The aggregate was created using RAID-DP for the hard disks and the SSDs are added using RAID4.
cluster1::> storage aggregate add-disks -aggregate FlashPool -storage-pool SP1 -allocation-units 1 -raidtype raid4
Related reference
storage aggregate modify (page 980)
storage pool show-available-capacity (page 1184)
Storage raid-options Commands (page 1194)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command analyzes the available spare disks in the cluster and provides a recommendation for how
spare disks should be used to create aggregates according to best practices. The command prints a
summary of the recommended aggregates, including their names and usable sizes, and then prompts the
user whether the aggregates should be created as recommended. On a positive response, ONTAP creates
the aggregates as described in the recommendation.
The command parameters allow you to restrict the command to specific nodes in the cluster, print more
details about the recommended aggregates, and skip the prompt.
● Parameters
[-nodes {<nodename>|local}, ...] - List of Nodes
Comma separated list of node names to which the command applies. If this parameter is not used, the
command applies to all nodes in the cluster.
[-verbose [true]] - Report More Details
Report additional details about recommended aggregates and spare disks. Per node summary shows
number and total size of aggregates to create, discovered spares, and also remaining spare disks and
partitions after aggregate creation. RAID group layout shows how spare disks and partitions will be used
in new data aggregates to be created. The last table shows spare disks and partitions remaining unused
after aggregate creation.
[-skip-confirmation [true]] - Skip the Confirmation and Create Recommended Aggregates
When this parameter is used, the command creates the recommended aggregates automatically. When
this parameter is not used, the command prompts for confirmation before creating the aggregates.
Note
The command is not affected by the CLI session setting: set -confirmations on/off.
● Examples
node2 1 3.66TB 6 48 1 0
------------------ ----- ------------ ------ ----------- ------ ----------
Total: 2 7.32TB 12 96 2 0
RAID group layout showing how spare disks and partitions will be used
in new data aggregates to be created:
Details about spare disks and partitions remaining after aggregate creation:
Info: Aggregate auto provision has started. Use the "storage aggregate
show-auto-provision-progress" command to track the progress.
Related reference
6. set (page 45)
storage aggregate mirror (page 979)
storage aggregate create (page 976)
storage aggregate add-disks (page 971)
storage disk assign (page 1072)
storage disk zerospares (page 1100)
storage aggregate modify (page 980)
storage pool create (page 1178)
storage aggregate show-auto-provision-progress (page 996)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate create command creates an aggregate. An aggregate consists of disks. You
must specify the number of disks or provide a list of disks to be added to the new aggregate. If you specify
the number of disks without providing a list of disks, the system selects the disks.
When creating an aggregate, you can optionally specify the aggregate's home node, the RAID type for RAID
groups on the aggregate, and the maximum number of disks that can be included in a RAID group.
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the name of the aggregate that is to be created.
[-chksumstyle <aggrChecksumStyle>] - Checksum Style
This parameter specifies the checksum style for the aggregate. The values are block for Block Checksum
and advanced_zoned for Advanced Zoned Checksum (AZCS).
-diskcount <integer> - Number Of Disks
This parameter specifies the number of disks that are to be included in the aggregate, including the
parity disks. The disks in this newly created aggregate come from the pool of spare disks. The smallest
disks in this pool are added to the aggregate first, unless you specify the -disksize parameter.
[-diskrpm | -R <integer>] - Disk RPM
This parameter specifies the RPM of the disks on which the aggregate is to be created. The possible
values include 5400, 7200, 10000, and 15000.
[-disksize <integer>] - Disk Size(GB)
This parameter specifies the size, in GB, of the disks on which the aggregate is to be created. Disks with
a usable size between 90% and 105% of the specified size are selected.
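The 90%-105% selection window above can be checked with a one-line predicate; a minimal sketch with an illustrative function name:

```python
def disksize_matches(usable_gb, requested_gb):
    """A spare disk is eligible when its usable size falls between
    90% and 105% of the requested -disksize value."""
    return 0.90 * requested_gb <= usable_gb <= 1.05 * requested_gb
```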
{ [-disktype | -T {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM | SSD-CAP |
VMLUN | VMLUN-SSD}] - Disk Type
This parameter specifies the type of disk on which the aggregate is to be created.
Note
When this parameter is used, disk selection is not influenced by RAID options
raid.mix.hdd.disktype.capacity, raid.mix.hdd.disktype.performance, or
raid.mix.disktype.solid_state. Only disks of the specified type are considered eligible for
selection.
[-raidtype | -t {raid4|raid_dp|raid_tec}] - RAID Type
This parameter specifies the type for RAID groups on the aggregate. The values are raid4 for RAID4,
raid_dp for RAID Double Parity, and raid_tec for RAID Triple-Erasure-Code. The default setting is raid_dp
unless the disks are HDDs with a capacity larger than 4 TB, in which case the default will be raid_tec. This
parameter is not needed for array LUNs because they are always created with the raid0 raidtype. raid4 is
not compatible with shared disks unless the shared disks belong to a storage pool.
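The default RAID-type selection described above can be sketched as a small decision function; this models only the stated defaults, with an invented function name:

```python
def default_raidtype(is_hdd, capacity_tb, is_array_lun=False):
    """Default -raidtype selection: array LUNs are always created with
    raid0; HDDs with capacity larger than 4 TB default to raid_tec;
    everything else defaults to raid_dp."""
    if is_array_lun:
        return "raid0"
    if is_hdd and capacity_tb > 4:
        return "raid_tec"
    return "raid_dp"
```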
[-simulate [true]] - Simulate Aggregate Provisioning Operation
This option simulates the aggregate creation and prints the layout of the new aggregate.
[-force-small-aggregate [true]] - Force the Creation of a Small Aggregate (privilege: advanced)
This parameter can be used to force the creation of a 2-disk RAID4 aggregate, or a 3-disk or 4-disk RAID-
DP aggregate.
[-is-autobalance-eligible {true|false}] - Is Eligible for Auto Balance Aggregate (privilege:
advanced)
This specifies whether the aggregate will be considered by the Auto Balance Aggregate feature. If the
Auto Balance Aggregate feature is not used, this field is not used. When this parameter is set to true the
Auto Balance Aggregate feature might recommend moving volumes to or from this aggregate in order
to balance system workload. When this parameter is set to false the aggregate will not be considered as
a destination for the Auto Balance Aggregate feature allowing for predictability in data placement. The
default value is false.
[-snaplock-type | -L {non-snaplock|compliance|enterprise}] - SnapLock Type
This parameter specifies the type of SnapLock aggregate to be created. In order to create a SnapLock
Compliance aggregate, specify compliance. To create a SnapLock Enterprise aggregate, specify
enterprise.
[-autobalance-unbalanced-threshold-percent <integer>] - Threshold When Aggregate Is
Considered Unbalanced (%) (privilege: advanced)
This parameter specifies the space used threshold percentage that will cause the Auto Balance
Aggregate feature to consider an aggregate as unbalanced.
[-autobalance-available-threshold-percent <integer>] - Threshold When Aggregate Is
Considered Balanced (%) (privilege: advanced)
This parameter specifies the threshold percentage which will determine if an aggregate is a target
destination for a move. The Auto Balance Aggregate feature will attempt to move volumes from an
unbalanced aggregate until it is under this percentage.
[-encrypt-with-aggr-key {true|false}] - Enable Aggregate level Encryption
This parameter specifies the data encryption policy for the contained volumes. If this parameter is set to
true, then by default, the volumes created in this aggregate will be encrypted, using the aggregate level
encryption keys.
● Examples
The following example creates an aggregate named aggr0 on a home node named node0. The aggregate
contains 20 disks and uses RAID-DP. The aggregate contains regular FlexVol volumes:
The following example creates an aggregate named aggr0 on a home node named node0. The aggregate
contains the disks specified and uses RAID-DP
The following example creates an aggregate named aggr0 on a home node named node0. The aggregate
contains 20 disks of size 6 TB and of type FSAS and uses RAID-TEC:
The following example creates a mirrored aggregate named aggr0 on the local node. The aggregate
contains 10 disks in each plex:
Related reference
Storage raid-options Commands (page 1194)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate delete command deletes a storage aggregate. The command fails if there are
volumes present on the aggregate. If the aggregate has an object store attached to it, then in addition to
deleting the aggregate the command deletes the objects in the object store as well. No changes are made
to the object store configuration as part of this command.
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate that is to be deleted.
[-preserve-config-data [true]] - Delete Physical Aggregate but Preserve Configuration Data
(privilege: advanced)
Deletes the physical aggregate, but preserves the aggregate configuration data. The aggregate must not
have any disks associated with it. If the parameter -preserve-config-data is specified without a
value, the default value is true; if this parameter is not specified, the default value is false.
● Examples
The following example deletes an aggregate named aggr1:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate mirror command adds a plex to an existing unmirrored aggregate. You can
specify a list of disks to be used for the mirrored plex. If you do not specify the disks, the system
automatically selects the disks based on the aggregate's existing plex.
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to be mirrored.
Note
This parameter is only applicable when the -mirror-disklist parameter is used.
Note
This parameter is accepted only when the -mirror-disklist parameter is used.
● Examples
The following example mirrors an unmirrored aggregate aggr1:
The following example mirrors an unmirrored aggregate aggr1. The specified disks are used for the new
plex.
cluster1::> storage aggregate mirror -aggregate aggr1 -mirror-disklist 1.2.12, 1.2.14, 1.2.16
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate modify command can be used to modify attributes of an aggregate such as
RAID type and maximum RAID group size.
Changing the RAID type immediately changes the RAID group type for all RAID groups in the aggregate.
Changing the maximum RAID size does not cause existing RAID groups to grow or to shrink; rather, it affects
the size of RAID groups created in the future, and determines whether more disks can be added to the RAID
group that was most recently created.
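The effect of changing the maximum RAID size, as described above, is only on future growth: existing groups keep their size, and the most recently created group can accept disks only up to the new limit. A minimal sketch of that headroom calculation (function name illustrative):

```python
def last_group_headroom(last_group_size, maxraidsize):
    """Changing -maxraidsize never grows or shrinks existing RAID groups;
    it only bounds future groups and determines how many more disks the
    most recently created group can still accept."""
    return max(0, maxraidsize - last_group_size)
```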
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the storage aggregate that is to be modified.
[-disktype | -T {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM | SSD-CAP |
VMLUN | VMLUN-SSD}] - Disk Type
This parameter specifies either the HDD tier or the SSD tier when changing the RAID type of a Flash Pool.
If the HDD tier is composed of more than one type of disk, specifying any of the disk types in use causes
that tier to be modified.
[-free-space-realloc {on|off}] - Free Space Reallocation
This parameter specifies whether free space reallocation is enabled on the aggregate.
Free space reallocation optimizes the free space in an aggregate immediately before Data ONTAP writes
data to the blocks in that aggregate.
The default setting is off.
[-ha-policy {sfo|cfo}] - HA Policy
This parameter specifies the high-availability policy to be used in the context of a root recovery
procedure. Do not modify this setting unless directed to do so by a customer support representative.
[-percent-snapshot-space <percent>] - Space Reserved for Snapshot Copies
This parameter is used to set the space reserved for Snapshot copies to the specified value. For example,
to set the snapshot reserve to 5%, you should enter -percent-snapshot-space 5.
[-space-nearly-full-threshold-percent <percent>] - Aggregate Nearly Full Threshold Percent
This optionally specifies the percentage at which the aggregate is considered nearly full, and above
which an EMS warning will be generated. The default value is 95%. The maximum value for this option
is 99%. Setting this threshold to 0 disables the aggregate nearly full space alerts.
[-space-full-threshold-percent <percent>] - Aggregate Full Threshold Percent
This optionally specifies the percentage at which the aggregate is considered full, and above which a
critical EMS error will be generated. The default value is 98%. The maximum value for this option is
100%. Setting this threshold to 0 disables the aggregate full space alerts.
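The two threshold parameters above can be modeled as a simple classifier; this sketch reflects the documented behavior (a threshold of 0 disables that alert), with an invented function name:

```python
def aggregate_space_alert(used_percent, nearly_full=95, full=98):
    """Threshold model: crossing the 'full' threshold generates a
    critical EMS error, crossing 'nearly full' a warning; a threshold
    of 0 disables that alert."""
    if full != 0 and used_percent > full:
        return "critical"
    if nearly_full != 0 and used_percent > nearly_full:
        return "warning"
    return None
```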
[-hybrid-enabled {true|false}] - Hybrid Enabled
If the hybrid-enabled option is set to "true", the aggregate is marked as hybrid_enabled, that is, the
aggregate can contain a mix of SSDs and HDDs (Hard Disk Drives, e.g., SAS, SATA, and/or FC). By default,
aggregates cannot be marked "hybrid_enabled" if the aggregate contains FlexVols that cannot be write
cached. A FlexVol cannot be write-cached if it is part of an aggregate created in Data ONTAP 7. Use -
force-hybrid-enabled to override this behavior.
[-force-hybrid-enabled | -f [true]] - Force Marking of Aggregate as Hybrid Enabled
By default, aggregates cannot be marked "hybrid_enabled" if they contain FlexVols that cannot be write
cached. A FlexVol cannot be write cached if it is part of an aggregate created in Data ONTAP 7. Setting
this parameter to true forces such an aggregate to be marked hybrid_enabled, meaning that the
aggregate can contain a mix of SSDs and HDDs (Hard Disk Drives, for example, SAS, SATA, and/or FC).
FlexVols in the aggregate that cannot participate in write caching will have only read caching enabled;
all other FlexVols in the aggregate can participate in both read and write caching.
[-maxraidsize | -s <integer>] - Max RAID Size
This parameter specifies the maximum number of disks that can be included in a RAID group for this
aggregate.
Note
For Flash Pools, this option controls the maximum size of the HDD RAID groups.
[-cache-raid-group-size <integer>] - Flash Pool SSD Tier Maximum RAID Group Size
This parameter specifies the maximum number of disks that can be included in a SSD RAID group for this
Flash Pool.
Note
This parameter is applicable only for Flash Pools.
[-resync-priority {high|medium|low}] - Resynchronization Priority
This parameter specifies the new resynchronization priority value for the specified aggregate. This field
cannot be modified for unmirrored or Data ONTAP system aggregates.
Possible values for this parameter are:
• high: Mirrored data aggregates with this priority value start resynchronization first.
• medium: Mirrored data aggregates with this priority value start resynchronization after all the system
aggregates and data aggregates with 'high' priority value have started resynchronization.
• low: Mirrored data aggregates with this priority value start resynchronization only after all the other
aggregates have started resynchronization.
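The start order implied by the three priority values can be sketched as a sort. The relative placement of system aggregates between 'high' and 'medium' follows the description above; the rank values themselves are hypothetical.

```python
# Hypothetical rank values derived from the documented ordering:
# 'high' data aggregates start first, system aggregates start before
# 'medium' data aggregates, and 'low' waits for everything else.
RESYNC_ORDER = {"high": 0, "system": 1, "medium": 2, "low": 3}

def resync_sequence(aggregates: dict) -> list:
    """Return aggregate names in the order resynchronization starts.

    `aggregates` maps name -> 'high' | 'medium' | 'low', or 'system'
    for Data ONTAP system aggregates.
    """
    return sorted(aggregates, key=lambda name: RESYNC_ORDER[aggregates[name]])

aggrs = {"aggr_data1": "medium", "aggr_root": "system",
         "aggr_data2": "high", "aggr_data3": "low"}
print(resync_sequence(aggrs))
# -> ['aggr_data2', 'aggr_root', 'aggr_data1', 'aggr_data3']
```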
[-single-instance-data-logging {off|on}] - Enable SIDL
This parameter specifies whether the Single Instance Data Logging feature is enabled on the aggregate
and its constituent volumes. This feature improves user write performance by optimizing the amount of
data nvlogged by user writes on platforms where NVRAM and secondary storage are of the same media
type.
[-is-inactive-data-reporting-enabled {true|false}] - Inactive Data Reporting Enabled
This parameter specifies whether reporting of how much user data is inactive should be enabled on
the aggregate and its volumes. This parameter is not allowed on FabricPools.
[-encrypt-with-aggr-key {true|false}] - Enable Aggregate level Encryption
This parameter specifies that the volumes within the new aggregate can be encrypted with aggregate
keys. If this parameter is set to true, the aggregate will support encryption with aggregate keys.
[-force-disable-encrypt-with-aggr-key [true]] - Force disable NAE. Skip aggregate snapshot
check.
This parameter allows disabling Aggregate Encryption (NAE) on an aggregate if the user is certain there
is no aggregate snapshot for that aggregate containing NAE volumes. If the parameter is set to true,
aggregate snapshot check is skipped and NAE is disabled.
● Examples
The following example changes all RAID groups on an aggregate named aggr0 to use RAID-DP:
cluster1::> storage aggregate modify -aggregate aggr0 -raidtype raid_dp
The following example changes all RAID groups with FSAS disks in an aggregate named aggr0 to use RAID-
TEC:
cluster1::> storage aggregate modify -aggregate aggr0 -disktype FSAS -raidtype raid_tec
Related reference
storage aggregate scrub (page 986)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate offline command takes an aggregate offline.
If you are taking a root aggregate offline, the node owning the aggregate must be in maintenance mode.
● Parameters
-aggregate <aggregate name> - Aggregate
The name of the aggregate to be taken offline.
● Examples
The following example takes an aggregate named aggr1 offline:
cluster1::> storage aggregate offline -aggregate aggr1
The following example takes an aggregate named aggr1 offline by unmounting its volumes:
Related reference
storage aggregate online (page 984)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate online command brings an aggregate online if the aggregate is in offline or
restricted state. If an aggregate is in an inconsistent state, it must be brought to a consistent state before it
can be brought online. If you have an aggregate that is in an inconsistent state, contact technical support.
● Parameters
-aggregate <aggregate name> - Aggregate
The name of the aggregate to be brought online.
● Examples
The following example brings an aggregate named aggr1 online:
cluster1::> storage aggregate online -aggregate aggr1
Related reference
storage aggregate offline (page 983)
storage aggregate restrict (page 985)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate remove-stale-record command removes a stale storage aggregate record on
disk. A stale aggregate record refers to an aggregate that has been removed from the storage system, but
whose information remains recorded on disk. Stale aggregate records are displayed in the nodeshell aggr
status -r command, but the storage aggregate show command does not show the aggregate as hosted on
that node.
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate that corresponds to the stale aggregate record that is to be
deleted.
-nodename {<nodename>|local} - Node Name
This parameter specifies the node that contains the aggregate.
● Examples
The following example removes a stale aggregate record that refers to aggregate "aggr1":
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate rename command renames an aggregate.
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to be renamed.
-newname <aggregate name> - New Name
This parameter specifies the new name for the aggregate.
● Examples
The following example renames an aggregate named aggr5 to sales-aggr:
cluster1::> storage aggregate rename -aggregate aggr5 -newname sales-aggr
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate restrict command puts an aggregate in the restricted state to make the data in the
aggregate's volumes unavailable to clients. When an aggregate is in the restricted state, data access is not
allowed; however, a few operations, such as aggregate copy, parity recomputation, scrub, and RAID
reconstruction, are allowed. You can also use this command if you want the aggregate to be the target of an
aggregate copy or SnapMirror replication operation.
● Parameters
-aggregate <aggregate name> - Aggregate
The name of the aggregate to be restricted.
● Examples
The following example restricts an aggregate named aggr1:
cluster1::> storage aggregate restrict -aggregate aggr1
The following example restricts an aggregate named aggr2 by unmounting all the volumes within the
aggregate:
Related reference
storage aggregate show (page 987)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate scrub command scrubs an aggregate for media and parity errors. Parity
scrubbing compares the data disks to the parity disks in their RAID group and corrects the parity disks'
contents as required. If no name is given, parity scrubbing is started on all online aggregates.
Note
By default, scrubs are scheduled to run for a specified time on a weekly basis. However, you can use
this command to run scrubs manually to check for errors and data inconsistencies.
● Parameters
{ -aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to be scrubbed for errors.
[-plex <text>] - Plex
This parameter specifies the name of the plex to scrub. If this parameter is not specified, the command
scrubs the entire aggregate.
[-raidgroup <text>] - RAID Group
This parameter specifies the RAID group to be scrubbed. If this parameter is not specified, the command
scrubs the entire aggregate.
Note
This parameter is only applicable when the -plex parameter is used.
● Examples
The following example starts a scrub on a RAID group named rg0 of a plex named plex0 on an aggregate
named aggr0:
cluster1::> storage aggregate scrub -aggregate aggr0 -raidgroup rg0 -plex plex0 -action start
The following example queries the status of the scrub of RAID group rg0:
cluster1::> storage aggregate scrub -aggregate aggr0 -raidgroup rg0 -plex plex0 -action status
The following example starts a scrub on a plex named plex1 of an aggregate named aggr1:
cluster1::> storage aggregate scrub -aggregate aggr1 -plex plex1 -action start
The following example queries the status of plex1 of an aggregate named aggr1:
cluster1::> storage aggregate scrub -aggregate aggr1 -plex plex1 -action status
The following example queries the status of all the plexes of an aggregate named aggr1:
cluster1::> storage aggregate scrub -aggregate aggr1 -action status
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show command displays information about aggregates. The command output
depends on the parameter or parameters specified with the command. If no parameters are specified, the
command displays the following information about all aggregates:
• Aggregate name
• Size
• Available size
• Percentage used
• State
• Number of volumes
• Node on which the aggregate is located
• RAID status
To display detailed information about a single aggregate, use the -aggregate parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-checksum ]
If this parameter is specified, the command displays information about the checksum for all aggregates
in the cluster:
• Aggregate name
• Checksum status (active, off, reverting, none, unknown, initializing, reinitializing, reinitialized,
upgrading_phase1, upgrading_phase2)
• Checksum style (none, advanced_zoned, block, mixed, WAFL, or unknown)
| [-disk ]
If this parameter is specified, the command displays disk names for all aggregates in the cluster:
• Aggregate name
• Number and names of disks in the aggregate
| [-raid-info ]
If this parameter is specified, the command displays information about RAID groups, RAID type,
maximum RAID size, checksum state, checksum style and whether the RAID status is inconsistent.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all aggregates in the
cluster.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays detailed information about the specified aggregate.
[-storage-type {hdd | hybrid | lun | ssd | vmdisk}] - Storage Type
If this parameter is specified, the command displays information only about the aggregates with the
specified storage type. The possible values are hdd, hybrid, lun, ssd and vmdisk.
[-chksumstyle <aggrChecksumStyle>] - Checksum Style
If this parameter is specified, the command displays information only about the aggregates that use the
specified checksum style.
[-diskcount <integer>] - Number Of Disks
If this parameter is specified, the command displays information only about the aggregates that have
the specified number of disks.
[-mirror | -m [true]] - Is Mirrored
If this parameter is specified, the command displays information only about the aggregates that have
the specified mirrored value.
[-disklist | -d <disk path name>, ...] - Disks for First Plex
If this parameter is specified, the command displays information only about the aggregates that have
the specified disk or disks.
[-mirror-disklist <disk path name>, ...] - Disks for Mirrored Plex
If this parameter is specified, the command displays information only about the aggregates that have
the specified disk or disks present in the mirrored plex.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information only about the aggregates that are
located on the specified node.
If this parameter is specified, the command displays information about only the aggregates that contain
the management root directory of their owning node's failover partner.
[-home-id <integer>] - Home ID
If this parameter is specified, the command displays information only about the aggregates whose
home node has the specified system ID.
[-home-name <text>] - Home Name
If this parameter is specified, the command displays information only about the aggregates whose
home node is the specified node.
[-hybrid-cache-size-total {<integer>[KB|MB|GB|TB|PB]}] - Total Hybrid Cache Size
If this parameter is specified, the command displays information only about the aggregates that have
the specified total cache size in a Flash Pool.
[-hybrid {true|false}] - Hybrid
If this parameter is specified, the command displays information only about the aggregates that
currently contain both SSD and non-SSD RAID groups.
[-inconsistent {true|false}] - Inconsistent
If this parameter is specified, the command displays information only about the aggregates that have
the specified consistency.
[-is-home {true|false}] - Is Aggregate Home
If this parameter is specified, the command displays information only about the aggregates whose
home node and owner node have the same system ID.
[-maxraidsize | -s <integer>] - Max RAID Size
If this parameter is specified, the command displays information only about the aggregates that have
the specified maximum number of disks for RAID groups.
Note
For Flash Pools, this option controls the maximum size of the HDD RAID groups.
[-cache-raid-group-size <integer>] - Flash Pool SSD Tier Maximum RAID Group Size
If this parameter is specified, the command displays information about the maximum RAID group size
for the SSD tier for Flash Pools.
Note
This parameter is applicable only for Flash Pools.
[-raidstatus <text>] - RAID Status
If this parameter is specified, the command displays information only about the aggregates that have
the specified RAID status. The possible values for RAID status are normal, copying, ironing, degraded,
mirror degraded, growing, initializing, invalid, needs check, partial, reconstruct, raid4, raid0, raid_dp,
raid_tec, redirect, and wafl inconsistent. You can specify multiple values (for example, reconstruct and
growing).
[-raidtype | -t {raid_tec|raid_dp|raid4}] - RAID Type
If this parameter is specified, the command displays information only about the aggregates that use the
specified RAID type. The possible values are raid0 for RAID 0, raid4 for RAID4, raid_dp for RAID-DP,
raid_tec for RAID-TEC, and mixed_raid_type for aggregates that include a mix of RAID types.
[-resyncsnaptime <integer>] - SyncMirror Resync Snapshot Frequency in Minutes
If this parameter is specified, the command displays information only about the aggregates whose
SyncMirror Resynchronization Snapshot Frequency is the specified value.
[-root {true|false}] - Is Root
If this parameter is specified, the command displays information about only the root aggregates in the
cluster.
[-sis-metadata-space-used {<integer>[KB|MB|GB|TB|PB]}] - Space Used by Metadata for Volume
Efficiency
If this parameter is specified, the command displays information about only the aggregates with the
specified space used by A-SIS metafiles for volume efficiency. This parameter is deprecated in Data
ONTAP 8.2 and later. Use the volume-footprint-list-info API for details related to space usage by
deduplication metadata.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Size
If this parameter is specified, the command displays information only about the aggregates that have
the specified size. The size of the aggregate is reported as the size available for use by WAFL, excluding
WAFL reserve and aggregate Snapshot reserve capacity. Use the storage aggregate show-space
command to see the details of space utilization within an aggregate.
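The size accounting described above can be sketched as follows. The reserve fractions are illustrative placeholders only; the manual does not state the actual WAFL reserve or Snapshot reserve values, so do not treat these numbers as ONTAP defaults.

```python
def reported_aggregate_size(raw_bytes: int,
                            wafl_reserve_frac: float,
                            snap_reserve_frac: float) -> float:
    """Approximate the size reported by `storage aggregate show`:
    capacity usable by WAFL, excluding the WAFL reserve and the
    aggregate Snapshot reserve."""
    after_wafl_reserve = raw_bytes * (1 - wafl_reserve_frac)
    return after_wafl_reserve * (1 - snap_reserve_frac)

# 1 TiB raw, with purely illustrative 10% WAFL and 5% Snapshot reserves
print(reported_aggregate_size(1024**4, 0.10, 0.05))
```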
[-state <aggregate state>] - State
If this parameter is specified, the command displays information only about the aggregates that have
the specified state.
[-usedsize {<integer>[KB|MB|GB|TB|PB]}] - Used Size
If this parameter is specified, the command displays information only about the aggregates that have
the specified used size.
[-uses-shared-disks {true|false}] - Uses Shared Disks
Selects the aggregates that match this parameter value. This parameter is used to list all the aggregates
that use shared HDDs or shared SSDs.
[-uuid <text>] - UUID String (privilege: advanced)
If this parameter is specified, the command displays information only about the aggregate that has the
specified UUID. This parameter is available only at the advanced privilege level and higher.
[-volcount <integer>] - Number Of Volumes
If this parameter is specified, the command displays information only about the aggregates that have
the specified number of volumes.
[-is-autobalance-eligible {true|false}] - Is Eligible for Auto Balance Aggregate (privilege:
advanced)
If this parameter is specified, the command displays information only about the aggregates that are
considered by the Auto Balance Aggregate feature.
[-autobalance-state <Auto Balance Aggregate state>] - State of the aggregate being balanced
(privilege: advanced)
If this parameter is specified, the command displays information only about the aggregates that have
the specified state.
[-data-compaction-space-saved {<integer>[KB|MB|GB|TB|PB]}] - Space Saved by Data Compaction
This parameter indicates the amount of space saved by Data Compaction in bytes.
[-data-compaction-space-saved-percent <percent>] - Percentage Saved by Data Compaction
This parameter indicates the percentage of space saved in the aggregate by Data Compaction.
[-data-compacted-count {<integer>[KB|MB|GB|TB|PB]}] - Amount of compacted data
This parameter indicates the number of bytes occupied by compacted data inside this aggregate.
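The relationship between the saved-bytes and saved-percent fields can be sketched under the usual "saved / (saved + used)" convention. That convention is an assumption for illustration; the manual does not spell out the exact formula.

```python
def compaction_saved_percent(space_saved_bytes: int,
                             physical_used_bytes: int) -> float:
    """Percentage saved, assuming savings are expressed relative to the
    logical size (physical used + saved)."""
    logical = space_saved_bytes + physical_used_bytes
    return 100.0 * space_saved_bytes / logical if logical else 0.0

# 100 MB saved against 900 MB physically used
print(compaction_saved_percent(100 * 2**20, 900 * 2**20))  # -> 10.0
```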
[-creation-timestamp <MM/DD/YYYY HH:MM:SS>] - Timestamp of Aggregate Creation
This parameter indicates the date and time the aggregate was created.
[-single-instance-data-logging {off|on}] - Enable SIDL
If this parameter is specified, the command displays whether Single Instance Data Logging feature is
enabled on the specified aggregate.
[-composite {true|false}] - Composite
If this parameter is specified, the command displays information only about aggregates whose
classification as a FabricPool matches the specified value. A FabricPool has an external capacity tier
attached to it.
[-is-fabricpool-mirrored {true|false}] - Is FabricPool Mirrored
If this parameter is specified, the command displays information only about FabricPools whose
classification as mirrored matches the specified value. A mirrored FabricPool has a second external
capacity tier attached to it.
[-composite-capacity-tier-used {<integer>[KB|MB|GB|TB|PB]}] - Capacity Tier Used Size
If this parameter is specified, the command displays the amount of space in use in the attached external
capacity tier.
[-sis-space-saved {<integer>[KB|MB|GB|TB|PB]}] - Space Saved by Storage Efficiency
This parameter indicates the total amount of space saved by storage efficiency in bytes.
[-sis-space-saved-percent <percent>] - Percentage of Space Saved by Storage Efficiency
This parameter indicates the percentage of space saved by storage efficiency.
[-sis-shared-count {<integer>[KB|MB|GB|TB|PB]}] - Amount of Shared bytes count by Storage
Efficiency
This parameter indicates the number of bytes shared by storage efficiency.
[-is-inactive-data-reporting-enabled {true|false}] - Inactive Data Reporting Enabled
If this parameter is specified, the command displays whether reporting of inactive user data is enabled.
This parameter is not allowed on FabricPools.
[-inactive-data-reporting-start-timestamp <MM/DD/YYYY HH:MM:SS>] - Timestamp when
Inactive Data Reporting was Enabled
If this parameter is specified, the command displays the timestamp at which inactive data reporting was
enabled on the aggregate. This parameter is not allowed on FabricPools.
[-azcs-read-optimization-enabled {true|false}] - azcs-read-optimization Enabled
If this parameter is specified, the command displays whether the azcs-with-compression feature is enabled.
[-encrypt-with-aggr-key {true|false}] - Enable Aggregate level Encryption
Selects the aggregates that are encrypted with aggregate keys.
[-drive-protection-enabled {true|false}] - Aggregate uses data protected SEDs
If this parameter is specified, the command displays whether this aggregate is entirely composed of self-
encrypting drives that have data protection enabled.
● Examples
The following example displays information about all aggregates that are owned by nodes in the local
cluster:
/aggr1/plex1/rg0 (block)
RAID Status: raid_dp, mirrored, normal
RAID Type: raid_dp
SyncMirror Resync Snapshot Frequency in Minutes: 60
Is Root: false
Space Used By metadata for Volume Efficiency: 0B
Size: 53.24MB
SnapLock Type of the Aggregate: -
State: online
Used Size: 144KB
Number Of Volumes: 0
Is Flash Pool Caching: -
Is Eligible for Auto Balance Aggregate: false
State of the aggregate being balanced: ineligible
State Change Counter for Auto Balancer: 0
Is Encrypted: true
Encryption Key ID: 40004FE3000000000303000000000000436F5DB53445FD60
3FB5A8A64937AA7B
Is in the precommit phase of Copy-Free Transition: false
Is a 7-Mode transitioning aggregate that is not yet committed in clustered Data ONTAP and is currently
out of space: false
Threshold When Aggregate Is Considered Unbalanced (%): 70
Threshold When Aggregate Is Considered Balanced (%): 40
Resynchronization Priority: -
Space Saved by Data Compaction: 99.24MB
Percentage Saved by Data Compaction: 7%
Amount of compacted data: 99.24MB
Timestamp of Aggregate Creation: 1/3/2017 23:38:06
Enable SIDL: off
Composite: false
Capacity Tier Used Size: 0B
Space Saved by Storage Efficiency: 99.24MB
Percentage of Space Saved by Storage Efficiency: 7%
Amount of Shared bytes count by Storage Efficiency: 99.24MB
The following example displays information about aggregates that are owned by nodes in cluster1:
cluster1:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0 6.04GB 3.13GB 48% online 2 cluster1-01 raid_dp,
mirrored,
normal
aggr1 53.24MB 12.59MB 76% online 2 cluster1-02 raid_dp,
mirrored,
normal
2 entries were displayed.
The following example displays information about aggregates that are owned by nodes in the remote
cluster named cluster2:
cluster2:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr2 - - - remote_cluster
- - -
aggr3 - - - remote_cluster
- - -
2 entries were displayed.
The following example displays information about aggregates that are owned by nodes in all the clusters:
cluster2:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr2 - - - remote_cluster
- - -
aggr3 - - - remote_cluster
- - -
cluster1:
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0 6.04GB 3.14GB 48% online 2 cluster1-01 raid_dp,
mirrored,
normal
aggr1 53.24MB 12.59MB 76% online 2 cluster1-02 raid_dp,
mirrored,
normal
4 entries were displayed.
Related reference
storage aggregate show-space (page 1007)
storage aggregate resynchronization modify (page 1052)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-auto-provision-progress command displays the status of the most
recent auto provision operation. The command output displays the progress for all the aggregates included
in the provisioning operation. The command displays the following information about each aggregate:
• Aggregate
• Current Usable Size
• Target Usable Size
• Provisioning Progress
● Examples
The following example displays the information about all aggregates that are provisioned during the
aggregate auto provision operation:
Info: Aggregate auto provision has started. Use the "storage aggregate
show-auto-provision-progress" command to track the progress.
Related reference
storage aggregate auto-provision (page 974)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-cumulated-efficiency command displays information about the
cumulated storage efficiency of all the aggregates. The storage efficiency is displayed at four different levels:
• Total
• Aggregate
• Volume
• Snapshot and FlexClone volume
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-details ]
Use this parameter to show additional Storage Efficiency Ratios.
| [-all-details ] (privilege: advanced)
Use this parameter to show additional Storage Efficiency Ratios and size values.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregates <aggregate name>, ...] - List of Aggregates to cumulate Storage Efficiency ratio
If this parameter is specified, the command calculates the cumulated storage efficiency of the specified
list of aggregates.
[-nodes {<nodename>|local}, ...] - List of Nodes to cumulate Storage Efficiency ratio
If this parameter is specified, the command calculates the cumulated storage efficiency of aggregates
that are located on the specified list of nodes.
[-total-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Size Used by volumes, clones,
Snapshot copies in the Aggregate (privilege: advanced)
Displays the total logical size used in all the specified aggregates. This includes Volumes, Clones and
Snapshots in all the specified aggregates. The logical size is computed based on physical usage and
savings obtained in all the specified aggregates.
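The derivation described above (logical size computed from physical usage and savings) can be sketched as follows; this is a simplified illustration, not the exact internal accounting.

```python
def cumulated_efficiency(physical_used_bytes: int,
                         space_saved_bytes: int) -> tuple:
    """Derive the logical size and the efficiency ratio from physical
    usage and savings across the selected aggregates."""
    logical = physical_used_bytes + space_saved_bytes
    ratio = logical / physical_used_bytes if physical_used_bytes else 0.0
    return logical, f"{ratio:.2f}:1"

logical, ratio = cumulated_efficiency(400 * 2**30, 600 * 2**30)
print(logical // 2**30, ratio)  # -> 1000 2.50:1
```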
[-total-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Physical Used (privilege: advanced)
Displays the physical size used by all the specified aggregates.
[-total-storage-efficiency-ratio <text>] - Total Storage Efficiency Ratio
Displays the total storage efficiency ratio of all the specified aggregates.
[-total-data-reduction-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Data Reduction
Logical Used (privilege: advanced)
Displays the total logical size used in all the specified aggregates excluding Snapshot copies.
[-total-data-reduction-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Data Reduction
Physical Used (privilege: advanced)
Displays the total physical size used by all the specified aggregates, excluding Snapshot copies.
[-total-data-reduction-efficiency-ratio <text>] - Total Data Reduction Efficiency Ratio
Displays the total storage efficiency ratio obtained by Deduplication, Compression, Data Compaction,
Pattern Detection and FlexClone data reduction technologies on the specified aggregates.
[-volume-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Space Used for All volumes
(privilege: advanced)
Displays the total logical size used by all the volumes in all the specified aggregates.
● Examples
The following example displays information about all aggregates that are owned by nodes in the local
cluster:
-------------Snapshot--------------------
Logical Physical Storage
Used Used Efficiency Ratio
----------- ----------- ----------------
0B 2.22MB 1.00:1
-------------FlexClone-------------------
Logical Physical Storage
Used Used Efficiency Ratio
----------- ----------- ----------------
0B 0B 1.00:1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-efficiency command displays information about the storage efficiency
of all the aggregates. The storage efficiency is displayed at four different levels:
• Total
• Aggregate
• Volume
• Snapshot and FlexClone volume
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-details ]
Use this parameter to show additional Storage Efficiency Ratios.
| [-advanced ] (privilege: advanced)
Use this parameter to show additional Storage Efficiency Ratios and size values.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Name of the Aggregate
Displays the aggregate name. If this parameter is specified, the command displays detailed information
about the storage efficiency of the specified aggregate.
[-node {<nodename>|local}] - Node where Aggregate Resides
Displays the node which owns the aggregate. If this parameter is specified, the command displays
storage efficiency information only about the aggregates that are located on the specified node.
[-total-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Size Used by Volumes, Clones,
Snapshot Copies in the Aggregate (privilege: advanced)
Displays the logical size used in the aggregate. This includes Volumes, Clones and Snapshots in the
aggregate. The logical size is computed based on physical usage and savings obtained in the
aggregate.
[-total-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Physical Used (privilege: advanced)
Displays the physical size used by the aggregate.
[-total-storage-efficiency-ratio <text>] - Total Storage Efficiency Ratio
Displays the total storage efficiency ratio of the aggregate.
[-total-data-reduction-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Data Reduction
Logical Used (privilege: advanced)
Displays the logical size used in the aggregate excluding Snapshot copies.
[-total-data-reduction-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Data Reduction
Physical Used (privilege: advanced)
Displays the physical size used by the aggregate excluding Snapshot copies.
[-total-data-reduction-efficiency-ratio <text>] - Total Data Reduction Efficiency Ratio
Displays the total storage efficiency ratio obtained by Deduplication, Compression, Data Compaction,
Pattern Detection and FlexClone data reduction technologies on the aggregate.
[-volume-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Space Used for All Volumes
Displays the logical size used by all the volumes in the aggregate.
[-volume-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Physical Space Used for All Volumes
Displays the physical size used by all volumes in the aggregate.
[-volume-efficiency-saved {<integer>[KB|MB|GB|TB|PB]}] - Space Saved by Volume Deduplication
(privilege: advanced)
Displays the total disk space that is saved by deduplication and FlexClone for files, LUNs or NVMe
namespaces by all volumes in the aggregate.
[-volume-dedupe-zero-pattern-saved {<integer>[KB|MB|GB|TB|PB]}] - Space Saved by Volume
Deduplication and pattern detection (privilege: advanced)
Displays the total disk space that is saved by deduplication, Zero pattern detection and FlexClone for
files, LUNs or NVMe namespaces by all volumes in the aggregate.
[-volume-efficiency-saved-ratio <text>] - Volume Deduplication Savings ratio
Displays the storage efficiency ratio for savings by deduplication and FlexClone for files, LUNs or NVMe
namespaces by all volumes in the aggregate.
[-volume-compression-saved {<integer>[KB|MB|GB|TB|PB]}] - Space Saved by Volume Compression
(privilege: advanced)
Displays the total disk space that is saved by compressing blocks by all volumes in the aggregate.
[-volume-compression-saved-ratio <text>] - Volume Compression Savings ratio
Displays the storage efficiency ratio for savings by compressing blocks on all volumes in the aggregate.
[-volume-vbn-zero-saved {<integer>[KB|MB|GB|TB|PB]}] - Space Saved by Inline Zero Pattern
Detection
Displays the total disk space that is saved by inline zero pattern detection by all the volumes in the
aggregate.
[-volume-data-reduction-storage-efficiency-ratio <text>] - Volume Data Reduction SE Ratio
Displays the storage efficiency ratio of all the volumes in the aggregate.
[-aggr-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Space Used by the Aggregate
(privilege: advanced)
Displays the logical size used by the aggregate.
[-aggr-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Physical Space Used by the Aggregate
(privilege: advanced)
Displays the physical size used by the aggregate.
[-aggr-compaction-saved {<integer>[KB|MB|GB|TB|PB]}] - Space Saved by Aggregate Data Reduction
(privilege: advanced)
Displays the total disk space that is saved by data compaction, cross volume sharing at the aggregate
level.
[-aggr-data-reduction-storage-efficiency-ratio <text>] - Aggregate Data Reduction SE Ratio
(privilege: advanced)
Displays the storage efficiency ratio of the aggregate.
[-snapshot-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Size Used by Snapshot Copies
(privilege: advanced)
Displays the logical size used by all Volume Snapshots residing in the aggregate.
[-snapshot-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Physical Size Used by Snapshot Copies
(privilege: advanced)
Displays the physical size used by all Volume Snapshots residing in the aggregate.
[-snapshot-volume-data-reduction-storage-efficiency-ratio <text>] - Snapshot Volume Data
Reduction Ratio
Displays the Snapshot volume storage efficiency ratio of the aggregate.
[-flexclone-volume-logical-used {<integer>[KB|MB|GB|TB|PB]}] - Logical Size Used by FlexClone
Volumes (privilege: advanced)
Displays the logical size used by all FlexClone volumes residing in the aggregate.
[-flexclone-volume-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Physical Size Used by
FlexClone Volumes (privilege: advanced)
Displays the physical size used by all FlexClone volumes in the aggregate.
[-flexclone-volume-data-reduction-storage-efficiency-ratio <text>] - FlexClone Volume
Data Reduction Ratio
Displays the FlexClone volume storage efficiency ratio of the aggregate.
[-snapshot-flexclone-volume-data-reduction-storage-efficiency-ratio <text>] - Snapshot
And FlexClone Volume Data Reduction SE Ratio
Displays the Snapshot and FlexClone volume storage efficiency ratio of the aggregate.
[-number-of-offline-volumes <integer>] - Number of Volumes Offline
Displays the number of volumes in the aggregate that are offline.
● Examples
The following example displays information about all aggregates that are owned by nodes in the local
cluster:
Aggregate: aggr1
Node: node1
Aggregate: aggr2
Node: node1
Aggregate: aggr1
Node: node1
Aggregate: aggr2
Node: node1
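The example output above appears truncated. A hedged sketch of the invocation that produces it; the command name, storage aggregate show-efficiency, is inferred from the parameters documented above, and the prompt is illustrative:

```
cluster1::> storage aggregate show-efficiency
```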
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-resync-status command displays resync status information for each
plex. The command output depends on the parameter or parameters specified with the command. If no
parameters are specified, the command displays the following information about all aggregates:
• Aggregate Name
• Resyncing Plex Name
• Resyncing Percentage
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
This parameter specifies the name of the aggregate.
[-plex <text>] - Plex Name
This parameter specifies the name of the plex.
[-status <text>] - Status
Displays plex status. Possible values are:
• normal
• failed
• empty
• invalid
• uninitialized
• failed assimilation
• limbo
• active
• inactive
• resyncing
These values may appear by themselves or in combination separated by commas; for example,
"normal,active".
[-is-online {true|false}] - Is Online
If this parameter is specified, the command displays information only about the plexes that have the
specified online state.
● Examples
The following example displays resynchronization status for all the aggregates:
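The example output is not shown here. A hedged sketch of the invocation and output shape, with illustrative aggregate and plex names:

```
cluster1::> storage aggregate show-resync-status
Aggregate    Resyncing Plex            Resyncing Percentage
----------   -----------------------   --------------------
aggr1        plex0                     42%
```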
Related reference
storage aggregate plex show (page 1044)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-scrub-status command displays the following information about the
scrub status of aggregates:
• Aggregate name
• RAID groups
• Whether the scrub is suspended
• Percentage of the scrub that is completed
• Last scrub time of the aggregate
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays detailed scrub-status information about the
specified aggregate.
[-raidgroup <text>] - RAID Group
If this parameter is specified, the command displays information only about the aggregate that contains
the specified RAID group.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information only about the aggregates on the
specified node. The value local specifies the current node.
[-suspended {true|false}] - Is Suspended
If this parameter is specified, the command displays information only about the aggregates that have
the specified scrub-suspension state (true or false).
[-complete-percentage <percent>] - Percentage Completed
If this parameter is specified, the command displays information only about the aggregates whose
scrubs have the specified completed percentage.
[-last-scrub-time <MM/DD/YYYY HH:MM:SS>] - Last Scrub Time
If this parameter is specified, the command displays information only about the aggregates that have
the specified last-scrub time, in the format MM/DD/YYYY HH:MM:SS.
● Examples
The following example displays scrub-status information for all the aggregates:
The following example displays detailed information about the aggregate named aggr1:
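The example outputs are not shown here. Hedged sketches of the two invocations, with illustrative values; the columns follow the fields listed in the Description:

```
cluster1::> storage aggregate show-scrub-status
Aggregate   RAID Groups        Suspended   Percentage   Last Scrub Time
---------   ----------------   ---------   ----------   -------------------
aggr1       /aggr1/plex0/rg0   false       14%          04/01/2021 02:10:00

cluster1::> storage aggregate show-scrub-status -aggregate aggr1
```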
Related reference
storage aggregate scrub (page 986)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-space command displays information about space utilization within
aggregates and any attached external capacity tier. The command output breaks down space usage in the
specified aggregate by feature. If no parameters are specified, the command displays this information about
all aggregates. Note that used percentage for an external capacity tier will be non-zero only if a size limit
was set for that aggregate's attached tier.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate-name <aggregate name>] - Aggregate
If this parameter is specified, the command displays information only about the specified aggregates.
[-bin-num <integer>] - Bin Number
If this parameter is specified, the command displays information only about the aggregates whose bin
number for the storage tier matches the specified value. Typically, bin 0 refers to the performance tier or
active file system and bin numbers greater than 0 refer to the external capacity tiers attached to the
aggregate.
[-tier-name <text>] - Tier Name For Show Command
If this parameter is specified, the command displays information only about the aggregates whose
attached storage tier name matches the specified value.
[-aggregate <aggregate name>] - Aggregate Display Name
If this parameter is specified, the command displays information only about space used in the specified
aggregate or aggregates.
[-aggregate-uuid <UUID>] - Uuid of the Aggregate
If this parameter is specified, the command displays information only about the aggregates whose UUID
matches the specified value.
[-volume-footprints {<integer>[KB|MB|GB|TB|PB]}] - Volume Footprints
If this parameter is specified, the command displays information only about the aggregate or
aggregates that have the specified amount of space in use by volume footprints. A volume's footprint is
the overall amount of space that a volume occupies in the aggregate, including the volume metadata
and data.
[-volume-footprints-percent <percent_no_limit>] - Volume Footprints Percent
If this parameter is specified, the command displays information only about the aggregate or
aggregates whose volume footprints occupy the specified percentage of space.
[-snap-size-total {<integer>[KB|MB|GB|TB|PB]}] - Total Space for Snapshot Copies in Bytes
If this parameter is specified, the command displays information only about the aggregate or
aggregates that have the specified amount of space in use by aggregate Snapshot copies. This field
includes the space that is reserved for Snapshot copies and is not available to volumes or aggregate
data and metadata. It is set to 0 by default.
[-percent-snapshot-space <percent>] - Space Reserved for Snapshot Copies
If this parameter is specified, the command displays information only about the aggregate or
aggregates that have the specified percentage of space in use by aggregate Snapshot copies.
[-aggregate-metadata {<integer>[KB|MB|GB|TB|PB]}] - Aggregate Metadata
If this parameter is specified, the command displays information only about the aggregate or
aggregates that have the specified amount of space in use by aggregate metadata.
[-cross-volume-dedupe-temp-metadata {<integer>[KB|MB|GB|TB|PB]}] - Aggregate Dedupe Temporary Metadata
If this parameter is specified, the command displays information only about the aggregate or
aggregates that have the specified amount of space in use by cross volume deduplication temporary
metadata.
[-cross-volume-dedupe-temp-metadata-percent <percent_no_limit>] - Aggregate Dedupe
Temporary Metadata Percent
If this parameter is specified, the command displays information only about the aggregate or
aggregates that have the specified percentage of space in use by cross volume deduplication temporary
metadata.
[-object-store-physical-used {<integer>[KB|MB|GB|TB|PB]}] - Total Physical Used Size
If this parameter is specified, the command displays information only about the aggregates whose
physical space use in the attached object store matches the specified value.
[-object-store-physical-used-percent <percent_no_limit>] - Physical Used Percentage
If this parameter is specified, the command displays information only about aggregates whose physical
space in use in the attached object store as a percentage of the license limit matches the specified
value.
[-object-store-referenced-capacity {<integer>[KB|MB|GB|TB|PB]}] - Total Object Store
Referenced Capacity
If this parameter is specified, the command displays information only about the aggregates whose
reference capacity space in use in the attached object store matches the specified value.
[-object-store-referenced-capacity-percent <percent_no_limit>] - Object Store Referenced
Capacity Percentage
If this parameter is specified, the command displays information only about aggregates whose reference
capacity space in use in the attached object store as a percentage of the license limit matches the
specified value.
[-object-store-metadata {<integer>[KB|MB|GB|TB|PB]}] - Object Store Metadata
If this parameter is specified, the command displays information only about the aggregates whose
metadata space in use in the attached object store matches the specified value.
[-object-store-metadata-percent <percent_no_limit>] - Object Store Metadata Percent
If this parameter is specified, the command displays information only about aggregates whose
metadata space in use in the attached object store as a percentage of the license limit matches the
specified value.
[-object-store-unreclaimed-space {<integer>[KB|MB|GB|TB|PB]}] - Total Unreclaimed Space
If this parameter is specified, the command displays information only about the aggregates whose
unreclaimed space in use in the attached object store matches the specified value.
[-object-store-unreclaimed-space-percent <percent_no_limit>] - Object Store Unreclaimed Space
Percentage
If this parameter is specified, the command displays information only about aggregates whose
unreclaimed space in use in the attached object store as a percentage of the license limit matches the
specified value.
[-object-store-size {<integer>[KB|MB|GB|TB|PB]}] - Object Store Size
If this parameter is specified, the command displays information only about the aggregates whose
attached object store size limit matches the specified value.
[-object-store-sis-space-saved {<integer>[KB|MB|GB|TB|PB]}] - Object Store Space Saved by
Storage Efficiency
If this parameter is specified, the command displays information only about the aggregates whose
amount of space saved by storage efficiency matches the specified value.
[-object-store-sis-space-saved-percent <percent_no_limit>] - Object Store Space Saved by
Storage Efficiency Percentage
If this parameter is specified, the command displays information only about the aggregates whose
percentage of space saved by storage efficiency matches the specified value.
● Examples
The following example displays information about all aggregates:
Aggregate : aggr0
Aggregate : aggr1
The following example displays information about all the aggregates in a system including the ones that
have an object store attached to them.
Aggregate : aggr0
Aggregate : aggr1
Performance Tier
Aggregate : aggr1
Object Store: my-store
Feature Used Used%
-------------------------------- ---------- ------
Referenced Capacity 811.2MB 0%
Metadata 0B 0%
Unreclaimed Space 0B 0%
Space Saved by Storage Efficiency 0B 0%
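The breakdowns above can be produced with an invocation along these lines; the prompt is illustrative:

```
cluster1::> storage aggregate show-space
```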
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-spare-disks command displays information about spare disks. The
command output depends on the parameter or parameters specified with the command. If no parameters
are specified, the command displays information about all spare disks in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-partition-info ] (privilege: advanced)
Displays the following information about root-data and root-data1-data2 partitioned spares.
• Disk
• Type
• Class
• RPM
• Checksum
• Local Data Usable
• Local Data1 Usable
• Local Data2 Usable
• Local Root Usable
• Physical Size
• Status
| [-instance ]}
If this parameter is specified, the command displays detailed information about each spare disk.
[-original-owner <text>] - Original Owner
Selects the spare disks that match this parameter value.
[-disk <disk path name>] - Disk Name
Selects the spare disks that match this parameter value.
[-checksum-style {advanced_zoned | block | none}] - Checksum Style
Selects the spare disks that match this parameter value. Possible values are:
• block -- Supports block checksum
• advanced_zoned -- Supports advanced zone checksum
• none -- No checksum support
[-disk-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM | SSD-CAP |
VMLUN | VMLUN-SSD}] - Disk Type
Selects the spare disks that match this parameter value.
[-effective-disk-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM |
SSD-CAP | VMLUN | VMLUN-SSD}] - Effective Disk Type
Selects the spare disks that match this parameter value.
Hard disk drives with the same effective-disk-type value may be mixed together in the same
aggregate depending upon the system's raid.mix.hdd.disktype.capacity and
raid.mix.hdd.disktype.performance option settings. Solid state drives with the same effective-disk-
type value may be mixed together in the same aggregate depending upon the system's
raid.mix.disktype.solid_state option setting.
[-standard-disk-type {SATA | FC | NL-SAS | LUN | SAS | SCSI | SSD | VM-DISK | NVMe-SSD | SSD-CAP |
VM-LUN | VM-LUN-SSDS}] - Standard Disk Type
Selects the spare disks that match this parameter value.
[-disk-class {capacity | performance | archive | solid-state | array | virtual}] - Disk Class
Selects the spare disks that match this parameter value. Possible values are:
• capacity -- Capacity-oriented, near-line disk types. Includes disk types FSAS, BSAS and ATA.
• performance -- Performance-oriented, enterprise class disk types. Includes disk types FCAL and SAS.
• archive -- Archive class SATA disks in multi-disk carrier storage shelves. Includes disk type MSATA.
• solid-state -- Solid-state drives. Includes disk type SSD, SSD-CAP and SSD-NVM.
• array -- Logical storage devices backed by storage arrays and used by Data ONTAP as disks. Includes
disk type LUN.
• virtual -- Virtual disks that are formatted and managed by the hypervisor. Includes disk type VMDISK.
Disks with the same disk-class value are compatible for use in the same aggregate.
[-disk-rpm <integer>] - Disk RPM
Selects the spare disks that match this parameter value.
[-effective-disk-rpm <integer>] - Effective Disk RPM
Selects the spare disks that match this parameter value.
Hard disk drives with the same effective-disk-rpm value may be mixed together in the same
aggregate depending upon the system's raid.mix.hdd.rpm.capacity and raid.mix.hdd.rpm.performance
option settings.
[-syncmirror-pool <text>] - Pool Number
Selects the spare disks that match this parameter value.
[-owner-name {<nodename>|local}] - Current Owner
Selects the spare disks that match this parameter value.
● Examples
Display spare disks owned by node node-b.
Spare Pool
Usable Physical
Disk Type Class RPM Checksum Size Size Status
---------------- ----- ----------- ------ -------------- -------- -------- --------
1.1.13 BSAS capacity 7200 block 827.7GB 828.0GB zeroed
1.1.15 BSAS capacity 7200 block 413.2GB 414.0GB zeroed
Usable Physical
Disk Type Class RPM Checksum Size Size Status
---------------- ----- ----------- ------ -------------- -------- -------- --------
1.1.13 BSAS capacity 7200 block 827.7GB 828.0GB zeroing, 17% done
1.1.15 BSAS capacity 7200 block 413.2GB 414.0GB zeroing, 28% done
2 entries were displayed.
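The listings above correspond to an invocation along these lines; the node name is taken from the example and the prompt is illustrative:

```
cluster1::> storage aggregate show-spare-disks -owner-name node-b
```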
Related reference
storage disk zerospares (page 1100)
Storage raid-options Commands (page 1194)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate show-status command displays the RAID layout and disk configuration of
aggregates. The command output depends on the parameter or parameters specified with the command. If
no parameters are specified, the command displays information about all aggregates in the cluster.
Note
This command does not use pagination. You can reduce the output by filtering with the parameters
below.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
This parameter currently has no effect.
[-aggregate <text>] - Aggregate Name
Selects the aggregates that match this parameter value.
[-node <nodename>] - Node
Selects the aggregates that match this parameter value.
[-aggregate-uuid <UUID>] - Aggregate UUID
Selects the aggregates that match this parameter value.
● Examples
Display the RAID layout of a Flash Pool aggregate.
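The example output is not shown here. A hedged sketch of the invocation, with an illustrative aggregate name:

```
cluster1::> storage aggregate show-status -aggregate flashpool_aggr
```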
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate verify command verifies the two plexes of an aggregate. It compares the data
in the two plexes to ensure that the plexes are identical. It can be used whenever the administrator needs
to ensure that the two plexes are completely synchronized with each other. To view any discrepancies, use
the following command:
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate to be verified. If no aggregate is specified then the action
specified by the parameter -action will be taken on all the aggregates.
-action {start|stop|resume|suspend|status} - Action
This parameter specifies the action to be taken. The possible actions are:
• start - Starts a verify.
• stop - Permanently stops a verify. A stopped verify cannot be resumed.
• resume - Resumes a suspended verify.
• suspend - Suspends a verify.
• status - Displays the current status of a verify.
[-plex-to-fix <text>] - Plex to be Corrected in Case of Mismatches
This parameter specifies the name of a plex to fix in case the two plexes of the aggregate do not match.
The default behavior is to log any discrepancies instead of fixing them.
Note
This parameter is only applicable when the command is used to start a verify.
● Examples
The following example queries the status of a verify on an aggregate named aggr1.
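The example output is not shown here. A hedged sketch of the query, using the parameters documented above:

```
cluster1::> storage aggregate verify -aggregate aggr1 -action status
```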
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate efficiency show command displays storage efficiency information for all the
aggregates. If no parameters are specified, the command displays the following information for all
aggregates:
• Aggregate
• Node
• Cross-vol-background-dedupe State (Enabled, Disabled)
• Cross-vol-inline-dedupe State (Enabled, Disabled)
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
Displays the aggregate name. If this parameter is specified, the command displays detailed information
about the storage efficiency of the specified aggregate.
[-node {<nodename>|local}] - Node
Displays the node which owns the aggregate. If this parameter is specified, the command displays
storage efficiency information only about the aggregates that are located on the specified node.
[-cross-volume-background-dedupe {true|false}] - Cross Volume Background Deduplication
Displays whether the cross volume background deduplication is enabled/disabled in the aggregate.
[-cross-volume-inline-dedupe {true|false}] - Cross Volume Inline Deduplication
Displays whether the cross volume inline deduplication is enabled/disabled in the aggregate.
[-cross-volume-dedupe-savings {true|false}] - Has Cross Volume Deduplication Savings
Displays whether the aggregate may have savings from cross volume deduplication.
[-auto-adaptive-compression-savings {true|false}] - Has Auto Adaptive Compression Savings
Displays whether the aggregate may have auto adaptive compression savings.
● Examples
The following example displays information about all aggregates that are owned by nodes in the local
cluster:
Aggregate: aggr0
Node: vivek6-vsim2
Aggregate: aggr1
Node: vivek6-vsim2
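The listing above can be produced with an invocation along these lines; the prompt is illustrative:

```
cluster1::> storage aggregate efficiency show
```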
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate efficiency stat command displays storage efficiency statistics.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays storage efficiency statistics for the specified
aggregate.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays storage efficiency statistics for the aggregates that
are located on the specified node.
[-total-blocks-recompressed <integer>] - Total Number of Blocks Recompressed
This field represents the total number of blocks recompressed. If this parameter is specified, the
command displays storage efficiency statistics that match the specified block count.
[-active-blocks-recompressed <integer>] - Number of Active Blocks Recompressed
This field represents the total number of hot blocks recompressed. If this parameter is specified, the
command displays storage efficiency statistics that match the specified block count.
[-inactive-blocks-recompressed <integer>] - Number of Inactive Blocks Recompressed
This field represents the total number of cold blocks recompressed. If this parameter is specified, the
command displays storage efficiency statistics that match the specified block count.
● Examples
The following example displays storage efficiency statistics for all the aggregates that are owned by nodes
in the local cluster:
Aggregate: aggr1
Node: node1
Total Number of Blocks Recompressed: 512
Number of Active Blocks Recompressed: 120
Number of Inactive Blocks Recompressed: 392
Aggregate: aggr2
Node: node1
Total Number of Blocks Recompressed: 0
Number of Active Blocks Recompressed: 0
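The statistics above correspond to an invocation along these lines; the prompt is illustrative:

```
cluster1::> storage aggregate efficiency stat
```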
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate efficiency cross-volume-dedupe revert-to command is used to revert cross
volume deduplication savings on an aggregate.
● Parameters
-aggregate <aggregate name> - Aggregate
This specifies the aggregate on which cross volume deduplication savings should be reverted. If no
aggregate is specified, the savings are reverted on all aggregates.
[-clean-up | -c {true|false}] - Delete Previously Downgraded Metafiles
This specifies whether the downgrade metafile should be removed so that other efficiency operations
can start on the aggregate.
● Examples
The following example displays information for reverting cross volume background deduplication on
aggregate "aggr1":
cluster:::> storage aggregate efficiency cross-volume-dedupe revert-to -aggregate aggr1 -clean-up true
The revert operation started on aggregate "aggr1".
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate efficiency cross-volume-dedupe show command displays detailed storage
efficiency information for all the aggregates. If no parameters are specified, the command displays the
following information for all aggregates:
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
Displays the aggregate name. If this parameter is specified, the command displays detailed information
about the storage efficiency of the specified aggregate.
[-node {<nodename>|local}] - Node
Displays the node which owns the aggregate. If this parameter is specified, the command displays
storage efficiency information only about the aggregates that are located on the specified node.
[-background-progress <text>] - Progress
Displays the information for the aggregates that match the specified progress.
[-background-op-status <text>] - Operation Status
Displays the information for the aggregates that match the specified operation status.
[-background-last-op-state <text>] - Last Operation State
Displays the information for the aggregates that match the specified last operation state.
[-background-last-success-op-begin <Date>] - Last Success Operation Begin Time
Displays the information for the aggregates that match the specified last successful operation begin
time.
[-background-last-success-op-end <Date>] - Last Success Operation End Time
Displays the information for the aggregates that match the specified last successful operation end time.
[-background-last-op-begin <Date>] - Last Operation Begin Time
Displays the information for the aggregates that match the specified last operation begin time.
[-background-last-op-end <Date>] - Last Operation End Time
Displays the information for the aggregates that match the specified last operation end time.
[-background-last-op-error <text>] - Last Operation Error
Displays the information for the aggregates that match the specified last operation error.
[-background-stage <text>] - Stage
Displays the information for the aggregates that match the specified stage.
[-background-checkpoint-time <Date>] - Checkpoint Time
Displays the information for the aggregates that match the specified checkpoint time.
[-background-checkpoint-op-type <text>] - Checkpoint Operation Type
Displays the information for the aggregates that match the specified checkpoint operation type.
[-background-checkpoint-stage <text>] - Checkpoint Stage
Displays the information for the aggregates that match the specified checkpoint stage.
[-background-dedupe {true|false}] - Background State
Displays the information for the aggregates that match the specified cross volume background dedupe
state.
[-inline-dedupe {true|false}] - Inline State
Displays the information for the aggregates that match the specified cross volume inline dedupe state.
[-dedupe-savings {true|false}] - Has Cross Volume Deduplication Savings
Displays the information for the aggregates that have savings from cross volume deduplication.
● Examples
The following example displays information about all aggregates that are owned by nodes in the local
cluster:
Aggregate: aggr0
Node: vivek6-vsim2
Aggregate: aggr1
Node: vivek6-vsim2
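The listing above can be produced with an invocation along these lines; the prompt is illustrative:

```
cluster1::> storage aggregate efficiency cross-volume-dedupe show
```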
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate efficiency cross-volume-dedupe start command is used to start cross volume
background deduplication on an aggregate.
● Parameters
-aggregate <aggregate name> - Aggregate
This specifies the aggregate on which cross volume background deduplication should be started. If no
aggregate is specified, the operation starts on all aggregates.
[-scan-old-data | -s [true]] - Scan Old Data
This option processes all the existing data on all volumes on the aggregate. It prompts for user
confirmation before proceeding. The default value is false.
● Examples
The following example displays information for starting cross volume background deduplication on
aggregate "aggr1":
cluster:::> storage aggregate efficiency cross-volume-dedupe start -aggregate aggr1 -scan-old-data true
The efficiency operation on aggregate "aggr1" has started.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate efficiency cross-volume-dedupe stop command is used to stop cross volume
background deduplication on an aggregate.
● Parameters
-aggregate <aggregate name> - Aggregate
This specifies the aggregate on which cross volume background deduplication should be stopped. If no
aggregate is specified, the operation stops on all aggregates.
● Examples
The following example displays information for stopping cross volume background deduplication on
aggregate "aggr1":
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate encryption show-key-id command displays the key IDs of all NAE
(Aggregate Encryption) aggregates.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <text>] - Aggregate
If this parameter is specified, the command displays information only about the specific NAE (Aggregate
Encryption) aggregate.
[-aggrID <UUID>] - Aggregate UUID
If this parameter is specified, the command displays the key ID of the specified NAE (Aggregate
Encryption) aggregate ID.
[-keyid-index-zero <text>, ...] - 0th Index Keyid
If this parameter is specified, the command displays the 0th index key ID of NAE (Aggregate Encryption)
aggregates.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate inode-upgrade resume command resumes a suspended inode upgrade
process. The inode upgrade process might have been suspended earlier for performance reasons.
● Parameters
-node {<nodename>|local} - Node Name
If this parameter is specified, the command resumes the upgrade process of an aggregate that is located
on the specified node.
-aggregate <aggregate name> - Aggregate Name
This specifies the aggregate for which the inode upgrade process is to be resumed.
● Examples
The following example resumes an aggregate upgrade process:
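The session transcript is not reproduced in this excerpt; a sketch of the invocation (the node and aggregate names are illustrative) is:

```
cluster1::*> storage aggregate inode-upgrade resume -node node1 -aggregate aggr1
```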
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate inode-upgrade show command displays information about aggregates
undergoing the inode upgrade process. The command output depends on the parameter or parameters
specified with the command. If no parameters are specified, the command displays the default fields about
all aggregates undergoing the inode upgrade process. The default fields are:
• aggregate
• status
• scan-percent
• remaining-time
• space-needed
• scanner-progress
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <text>] - Aggregate Name
If this parameter is specified with the -node parameter, the command displays detailed information
about the specified aggregate. If only this parameter is specified, the command displays information
about all aggregates that match the specified name.
[-node <nodename>] - Node Name
If this parameter is specified, the command displays information only about the aggregate or
aggregates that are located on the specified node.
[-status {pending|scanning|suspended-initalizing|suspended|cleanup-pending|cleanup|cleanup-
done|suspended-aborting|suspended-removing|suspended-while-removing|suspended-ironing}] -
Upgrade Status
If this parameter is specified, the command displays information only about the aggregate or
aggregates that match the specified inode upgrade status.
[-scan-percent <percent>] - Upgrade Scan Percent Complete
If this parameter is specified, the command displays information only about the aggregate or
aggregates that match the specified inode upgrade progress percentage.
[-space-needed {<integer>[KB|MB|GB|TB|PB]}] - Space Needed to Complete Upgrade
If this parameter is specified, the command displays information only about the aggregate or
aggregates where the space needed to complete the upgrade process matches the specified size.
[-remaining-time <[<integer>h][<integer>m][<integer>s]>] - Remaining Upgrade Time
If this parameter is specified, the command displays information only about the aggregate or
aggregates where the remaining time to complete the inode upgrade process matches the specified
time.
[-scanner-progress <text>] - Scanner Progress
If this parameter is specified, the command displays information only about the aggregate or
aggregates where the progress of the inode upgrade process matches the input.
● Examples
The following example displays information about all aggregates undergoing the inode upgrade process:
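The session transcript is not reproduced in this excerpt. A sketch with illustrative values for the default fields follows; the column layout and values are placeholders, not output from a real system:

```
cluster1::*> storage aggregate inode-upgrade show
                       Scan    Remaining Space   Scanner
Aggregate  Status      Percent Time      Needed  Progress
---------- ----------- ------- --------- ------- --------------------
aggr1      scanning    45%     1h30m     2.50GB  inofile scan running
```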
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store attach command attaches an object store to an aggregate to
create a FabricPool. This command requires two parameters to create a FabricPool: an aggregate, and an
object store configuration to attach to that aggregate. This command verifies whether the object store is
accessible through the intercluster LIF both from the node on which the aggregate is present as well as its
High Availability (HA) partner node. The command fails if the object store is not accessible. Once an object
store is attached to an aggregate, it cannot be detached.
● Parameters
-aggregate <text> - Name of the Aggregate
This parameter specifies the aggregate to which the object store must be attached to create a
FabricPool.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store to be attached.
The object store configuration has information about object store server name, port, access credentials,
and provider type.
[-allow-flexgroup {true|false}] - Allow Existing FlexGroup Constituents in the Aggregate on Attach
This optional parameter allows attaching an object store to an aggregate that already contains FlexGroup
constituents. The default value is false. Mixing FabricPools and non-FabricPools within a FlexGroup is
not recommended. All aggregates hosting constituents of a FlexGroup should be attached to the object
store.
● Examples
The following example attaches an object store to aggregate aggr1:
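The session transcript is not reproduced in this excerpt; a sketch of the invocation, using the illustrative configuration name my-store, is:

```
cluster1::> storage aggregate object-store attach -aggregate aggr1 -object-store-name my-store
```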
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store mirror command attaches a second object store to a
FabricPool aggregate to create a mirror. This command verifies whether the object store is accessible
through the intercluster LIF from both the node on which the aggregate is present and from its high-
availability (HA) partner node. The command fails if the object store is not accessible.
● Parameters
-aggregate <text> - Name of the Aggregate
This parameter specifies the aggregate to which the object store must be attached in order to create a
mirror.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the name of the new object store configuration to be attached to the
aggregate. The object store configuration has information about the object store server name, port,
access credentials, and provider type.
● Examples
The following example shows how to create a mirror to aggregate aggr1:
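The session transcript is not reproduced in this excerpt; a sketch of the invocation, with an illustrative mirror configuration name, is:

```
cluster1::> storage aggregate object-store mirror -aggregate aggr1 -object-store-name my-mirror-store
```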
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store modify command is used to update one or more object store
parameters.
● Parameters
-aggregate <text> - Aggregate Name
This parameter identifies the aggregate to which the object store to be modified is attached.
-object-store-name <text> - ONTAP Name for this Object Store Config
This parameter identifies the configuration name of the object store to be modified.
[-unreclaimed-space-threshold <percent>] - Threshold for Reclaiming Unreferenced Space
(privilege: advanced)
This optional parameter specifies the usage threshold below which Data ONTAP reclaims unused space
from objects in the object store. When Data ONTAP writes data to the object store, it packages multiple
file system blocks into one object. Over time, blocks stored in an object can be freed, leaving part of the
object unused. When the percentage of used blocks in an object falls below this threshold, a background
task moves the blocks which are still used to a new object. Afterwards, Data ONTAP frees the original
object to reclaim the unused space. Valid values are between 0% and 99%. The default value depends
on the object store's provider type. It is 20% for AWS_S3, 25% for Azure_Cloud, 40% for SGWS, 20% for
IBM_COS, 20% for AliCloud, 20% for GoogleCloud and 40% for ONTAP_S3. Consult the FabricPool best
practices guidelines for more information.
[-tiering-fullness-threshold <percent>] - Aggregate Fullness Threshold Required for Tiering
(privilege: advanced)
This optional parameter specifies the percentage of space in the performance tier which must be used
before data is tiered out to the capacity tier.
[-mirror-type {primary|mirror}] - Object Store Mirror Type
This parameter specifies the object store mirror type. Valid mirror types are primary or mirror.
● Examples
The following example modifies the unreclaimed space threshold of an object store attached to an
aggregate named aggr1:
cluster1::> storage aggregate object-store modify -aggregate aggr1 -object-store-name my-store -unreclaimed-space-threshold 20%
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store show command displays information about all the object stores
in the system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <text>] - Aggregate Name
If this parameter is specified, the command displays information only about the object stores that are
attached to the specified aggregates.
[-object-store-name <text>] - ONTAP Name for this Object Store Config
If this parameter is specified, the command displays information only about object stores whose
configuration name matches the specified names.
[-object-store-availability <object Store Availability>] - Availability of the Object Store
If this parameter is specified, the command displays information only about object stores whose
availability status matches the specified value. Supported values with this parameter are - available and
unavailable.
[-object-store-unavailable-reason <text>] - Reason why Object Store is Unavailable
If this parameter is specified, the command displays information only about object stores whose
unavailability reason matches the specified reason.
[-provider-type <providerType>] - Type of the Object Store Provider
If this parameter is specified, the command displays information only about object store configurations
whose provider type matches the specified value.
[-license-used-percent <percent_no_limit>] - License Space Used Percent
If this parameter is specified, the command displays information only about object stores whose space
used by the aggregate as a percentage of the license limit matches the specified value.
[-unreclaimed-space-threshold <percent>] - Threshold for Reclaiming Unreferenced Space
(privilege: advanced)
If this parameter is specified, the command displays information only about object stores whose
threshold for reclaiming unused space from objects in the object store matches the specified value.
● Examples
The following example displays all information about all object stores:
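The session transcript is not reproduced in this excerpt. A sketch with an illustrative layout and placeholder values follows:

```
cluster1::> storage aggregate object-store show
Aggregate  Object Store Name  Availability  Mirror Type
---------- ------------------ ------------- -----------
aggr1      my-store           available     primary
```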
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate object-store show-freeing-status command displays status information
about the background work that frees an aggregate's objects from an object store after a storage
aggregate delete.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-bin-uuid <UUID>] - UUID of the Bin
If this parameter is specified, the command displays information only about the aggregate attached to
the specified bin UUID.
[-config-id <integer>] - Object Store Config ID
If this parameter is specified, the command displays information only about the aggregate attached to
the object store with the specified config ID.
[-object-store-name <text>] - Object Store Configuration Name
If this parameter is specified, the command displays information only about object stores whose
configuration name matches the specified names.
[-aggregate-name <aggregate name>] - Aggregate
If this parameter is specified, the command displays information only about the specified aggregates
that were deleted.
[-request-state {queued|running|cleaning-up|finishing}] - Request State
If this parameter is specified, the command displays information only about the object stores that have
the specified object freeing request state.
[-num-objects-freed <integer>] - Num Objects Freed
If this parameter is specified, the command displays information only about the object stores that have
the specified number of objects that have been freed.
[-last-error <text>] - The Last Error Encountered
If this parameter is specified, the command displays information only about the object stores that have
the specified last error encountered.
Related reference
storage aggregate delete (page 979)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store show-resync-status command displays information about
the progress of resync for mirror object stores attached to a FabricPool.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays information only about the specified aggregates.
[-primary-object-store-name <text>] - Primary Object Store Name
If this parameter is specified, the command displays information only about aggregates whose primary
object store configuration name matches the specified names.
[-mirror-object-store-name <text>] - Mirror Object Store Name
If this parameter is specified, the command displays information only about aggregates whose mirror
object store configuration name matches the specified names.
[-percent-complete <percent>] - Resync Complete Percentage
If this parameter is specified, the command displays information only about aggregates whose mirror
object store resync progress percentage matches the specified value.
● Examples
The following example displays resync progress for all aggregates with mirror object stores which are not in
sync:
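The session transcript is not reproduced in this excerpt. A sketch with an illustrative layout and placeholder values follows:

```
cluster1::> storage aggregate object-store show-resync-status
Aggregate  Primary Object Store  Mirror Object Store  Complete Percentage
---------- --------------------- -------------------- -------------------
aggr1      my-store              my-mirror-store      40%
```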
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store show-space command displays information about the amount
of space used in the object store for each of the aggregates in FabricPool. The used space is displayed in
both absolute size as well as a percentage of the FabricPool license limit.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <text>] - Aggregate Name
If this parameter is specified, the command displays space information only about object stores that are
attached to the specified aggregates.
[-object-store-name <text>] - ONTAP Name for this Object Store Config
If this parameter is specified, the command displays space information only about object stores whose
configuration name matches the specified names.
[-object-store-availability <object Store Availability>] - Availability of the Object Store
If this parameter is specified, the command displays space information about the object stores whose
availability status matches the specified value. Supported values with this parameter are - available and
unavailable.
[-object-store-unavailable-reason <text>] - Reason why Object Store is Unavailable
If this parameter is specified, the command displays information only about object stores whose
unavailability reason matches the specified reason.
[-provider-type <providerType>] - Type of the Object Store Provider
If this parameter is specified, the command displays information only about object store configurations
whose provider type matches the specified value.
[-license-used-percent <percent_no_limit>] - License Space Used Percent
If this parameter is specified, the command displays space information only about object stores whose
space used by the associated aggregate as a percentage of the license limit matches the specified value.
If the object store does not require a license, then this field is not set.
[-unreclaimed-space-threshold <percent>] - Threshold for Reclaiming Unreferenced Space
(privilege: advanced)
If this parameter is specified, the command displays information only about object stores whose
threshold for reclaiming unused space from objects in the object store matches the specified value.
[-tiering-fullness-threshold <percent>] - Aggregate Fullness Threshold Required for Tiering
(privilege: advanced)
If this parameter is specified, the command displays information only about object stores whose
performance tier fullness threshold for tiering matches the specified value.
[-mirror-type {primary|mirror}] - Object Store Mirror Type
If this parameter is specified, the command displays information about object stores with a mirror-type
that matches the specified value.
[-is-mirror-degraded {true|false}] - This object store is in mirror degraded mode
If this parameter is specified, the command displays information only about mirror object stores which
have the specified mirror degraded state.
[-force-tiering-on-metrocluster {true|false}] - Force Tiering with no Mirror in a MetroCluster
Configuration
If this parameter is specified, the command displays information only about primary object stores for
which force tiering is toggled on for aggregates in a MetroCluster configuration.
[-cluster <Cluster name>] - The name of the Cluster to which the bin belongs
If this parameter is specified, the command displays information only about object stores for which
cluster matches the specified value.
● Examples
The following example displays space information about all object stores:
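The session transcript is not reproduced in this excerpt. A sketch with an illustrative layout and placeholder values follows:

```
cluster1::> storage aggregate object-store show-space
Aggregate  Object Store Name  Provider Type  Used Space  License Used %
---------- ------------------ -------------- ----------- --------------
aggr1      my-store           AWS_S3         1.50TB      15%
```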
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store unmirror command detaches a second object store from a
mirrored FabricPool. This command verifies whether the primary object store is accessible through the
intercluster LIF from both the node on which the aggregate is present and from its high-availability (HA)
partner node. The command fails if the primary object store is not available.
● Parameters
-aggregate <text> - Name of the Aggregate
This parameter specifies the aggregate from which the mirrored object store must be removed.
● Examples
The following example shows how to unmirror a mirrored FabricPool aggr1:
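The session transcript is not reproduced in this excerpt; a sketch of the invocation is:

```
cluster1::> storage aggregate object-store unmirror -aggregate aggr1
```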
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store config create command is used by a cluster administrator to
tell Data ONTAP how to connect to an object store. The following prerequisites must be met before creating
an object store configuration in Data ONTAP.
• A valid data bucket or container must be created with the object store provider. This assumes that the user
has valid account credentials with the object store provider to access the data bucket.
• The Data ONTAP node must be able to connect to the object store. This includes
- Fast, reliable connectivity to the object store.
- An inter-cluster LIF (Logical Interface) must be configured on the cluster. Data ONTAP will verify
connectivity prior to saving this configuration information.
- If SSL/TLS authentication is required, then valid certificates must be installed.
- FabricPool license (required for Amazon S3 and Azure object stores).
Once created, an object store configuration must not be reassociated with a different object store or
container. See the storage aggregate object-store config modify command for more information. If
neither the access-key nor the secret-password is provided while setting up a configuration for an
AWS_S3 object store in Cloud Volumes ONTAP, then the access key (access key ID), the secret password
(secret access key), and the session token will be retrieved from EC2 instance metadata for the AWS Identity
and Access Management (IAM) role associated with the EC2 instance. If Data ONTAP is unable to create an
object store configuration, the command fails and reports the reason for the failure.
● Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the name that will be used to identify the object store configuration. The name
can contain the following characters: "_", "-", A-Z, a-z, and 0-9. The first character must be one of the
following: "_", A-Z, or a-z.
-provider-type <providerType> - Type of the Object Store Provider
This parameter specifies the type of object store provider that will be attached to the aggregate. Valid
options are: AWS_S3 (Amazon S3 storage), Azure_Cloud (Microsoft Azure Cloud), SGWS (StorageGrid
WebScale), IBM_COS (IBM Cloud Object Storage), AliCloud (Alibaba Cloud Object Storage Service),
GoogleCloud (Google Cloud Storage) and ONTAP_S3.
[-auth-type <object_store_auth_type>] - Authentication Used to Access the Object Store
This parameter specifies where the system obtains credentials for authentication to an object store. The
available choices depend on the platform (Cloud Volumes ONTAP or not) and provider-type (AWS_S3 or
not). The keys value is always applicable, and if selected means that the access-key and secret-
password are provided by the system administrator. In Cloud Volumes ONTAP, the EC2-IAM value is also
applicable. It means that the IAM role is associated with the EC2 instance, and that the access-key,
secret-password and session token are retrieved from EC2 instance metadata for this IAM role. Note
that -use-iam-role and -auth-type are mutually exclusive, -auth-type EC2-IAM is an equivalent
of -use-iam-role true, and -auth-type key is an equivalent of -use-iam-role false. In Cloud
Volumes ONTAP, the GCP-SA value may also be applicable. It means that a session token is retrieved
from the GCP instance metadata for the Service Account associated with the GCP instance. Similarly,
Azure-MSI means that a session token is retrieved from the Azure instance metadata for the Managed
Service Identity (MSI) associated with the Azure instance. For the AWS_S3 provider, the CAP (C2S
Authentication Portal) value is also applicable. This should only be used when accessing C2S
(Commercial Cloud Services). If the CAP value is specified, then the -cap-url parameter must also be
specified. See -cap-url.
[-cap-url <text>] - URL to Request Temporary Credentials for C2S Account
This parameter is available only when -auth-type is CAP. It specifies a full URL of the request to a CAP
server for retrieving temporary credentials (access-key, secret-password and session token) for accessing
the object store server. The CAP URL may look like: https://123.45.67.89:1234/CAP/api/v1/
credentials?agency=myagency&mission=mymission&role=myrole
-server <Remote InetAddress> - Fully Qualified Domain Name of the Object Store Server
This parameter specifies the Fully Qualified Domain Name (FQDN) of the remote object store server. For
Amazon S3, server name must be an AWS regional endpoint in the format s3.amazonaws.com or s3-
<region>.amazonaws.com, for example, s3-us-west-2.amazonaws.com. The region of the server and the
bucket must match. For more information on AWS regions, refer to 'Amazon documentation on AWS
regions and endpoints'. For Azure, if the -server is a "blob.core.windows.net" or a
"blob.core.usgovcloudapi.net", then a value of -azure-account followed by a period will be added in
front of the server.
Note
This name cannot be modified once a configuration is created.
Note
The value of this field cannot be modified once a configuration is created.
[-ask-azure-private-key {true|false}] - Ask to Enter the Azure Access Key without Echoing
If this parameter is true then one will be asked to enter -azure-private-key without echoing the
input. Default value: true.
[-azure-private-key <text>]} - Azure Access Key
This parameter specifies the access key required to authenticate requests to the Azure object store. See
also ask-azure-private-key. For other object store providers see -secret-password.
[-ipspace <IPspace>] - IPspace to Use in Order to Reach the Object Store
This optional parameter specifies the IPspace to use to connect to the object store. Default value:
Default.
[-is-certificate-validation-enabled {true|false}] - Is SSL/TLS Certificate Validation Enabled
This parameter indicates whether an SSL/TLS certificate of an object store server is validated whenever
an SSL/TLS connection to an object store server is established. This parameter is only applicable when
is-ssl-enabled is true. The default value is true. It is recommended to use the default value to
make sure that Data ONTAP connects to a trusted object store server, otherwise identities of an object
store server are not verified.
[-use-http-proxy {true|false}] - Use HTTP Proxy
This optional parameter indicates whether an HTTP proxy will be used for connecting to an object store.
Note that an HTTP proxy is configured using the vserver http-proxy commands at the diagnostic
privilege level. Default value: false.
[-cluster <Cluster name>] - The Name of the Cluster to which the Configuration Belongs
This optional parameter should only be specified in MetroCluster switched-over mode and specifies the
name of the cluster for which the configuration must be created. By default the configuration is created
for the local cluster.
[-server-side-encryption {none | SSE-S3}] - Encryption of Data at Rest by the Object Store Server
(privilege: advanced)
This parameter specifies whether the AWS or other S3-compatible object store server must encrypt data at rest. The
available choices depend on provider-type. none encryption (no encryption required) is supported by all
types of S3 (non-Azure) object store servers. SSE-S3 encryption is supported by and is a default for all
types of S3 (non-Azure) object store servers except ONTAP_S3. This is an advanced property. In most
cases it is best not to change the default value of "sse_s3" for object store servers that support SSE-S3
encryption. The encryption is in addition to any encryption done by ONTAP at a volume or at an
aggregate level.
[-url-style {path-style | virtual-hosted-style}] - URL Style Used to Access S3 Bucket
This parameter specifies the URL style used to access S3 bucket. This option is only available for non-
Azure object store providers. The available choices and default value depend on provider-type.
● Examples
The following example creates an object store configuration in Data ONTAP:
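The session transcript is not reproduced in this excerpt. A sketch of the invocation follows; all values are placeholders, and the -container-name parameter (naming the data bucket created with the provider) is assumed from the full parameter list, which this excerpt does not show:

```
cluster1::> storage aggregate object-store config create -object-store-name my-store
            -provider-type AWS_S3 -server s3.amazonaws.com -container-name my-bucket
            -access-key <access-key-id>
```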
Related reference
vserver http-proxy commands (page 2138)
storage aggregate object-store config modify (page 1037)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store config delete command removes an existing object store
configuration in Data ONTAP. The configuration cannot be deleted if it is used by any aggregates or if the
system is still freeing objects from the object store from a previously executed storage aggregate
delete command. The command storage aggregate object-store show can be used to view the
aggregates attached to the object store before issuing the delete command.
Note
The storage aggregate object-store show command will not display aggregates that have
been previously deleted but still have objects in the object store.
● Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration to be deleted.
● Examples
The following example deletes an object store configuration named my-store:
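The session transcript is not reproduced in this excerpt; a sketch of the invocation is:

```
cluster1::> storage aggregate object-store config delete -object-store-name my-store
```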
Related reference
storage aggregate delete (page 979)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store config modify command is used to update one or more
object store configuration parameters. This command must not be used to reassociate an existing valid
object-store configuration to a new object-store or container. The modifiable parameters include object
store name, server name, port, access_key, secret-password, ipspace and is-ssl-enabled.
● Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter identifies the configuration to be modified.
[-new-object-store-name <text>] - Object Store Configuration New Name
This optional parameter specifies the new name for the object store configuration.
[-auth-type <object_store_auth_type>] - Authentication Used to Access the Object Store
This optional parameter specifies where the system obtains credentials for authentication to an object
store. The available choices depend on the platform (Cloud Volumes ONTAP or not) and provider-type
(AWS_S3 or not). The keys value is always applicable, and if selected means that the access-key and
secret-password are provided by the system administrator. In Cloud Volumes ONTAP, the EC2-IAM
value is also applicable. It means that the IAM role is associated with the EC2 instance, and that the
access-key, secret-password and session token are retrieved from EC2 instance metadata for
this IAM role. Note that -use-iam-role and -auth-type are mutually exclusive, -auth-type EC2-
IAM is an equivalent of -use-iam-role true, and -auth-type key is an equivalent of -use-iam-
role false. In Cloud Volumes ONTAP, the GCP-SA value may also be applicable. It means that a
session token is retrieved from the GCP instance metadata for the Service Account associated with the
GCP instance. Similarly, Azure-MSI means that a session token is retrieved from the Azure instance
metadata for the Managed Service Identity (MSI) associated with the Azure instance. For the AWS_S3
provider, the CAP (C2S Authentication Portal) value is also applicable. This should only be used when
accessing C2S (Commercial Cloud Services). If the CAP value is specified, then the -cap-url parameter
must also be specified. See -cap-url.
[-cap-url <text>] - URL to Request Temporary Credentials for C2S Account
This parameter is available only when -auth-type is CAP. It specifies a full URL of the request to a CAP
server for retrieving temporary credentials (access-key, secret-password and session token) for accessing
the object store server. The CAP URL may look like: https://123.45.67.89:1234/CAP/api/v1/
credentials?agency=myagency&mission=mymission&role=myrole
[-server <Remote InetAddress>] - Fully Qualified Domain Name of the Object Store Server
This optional parameter specifies the new Fully Qualified Domain Name (FQDN) of the same object store
server. For Amazon S3, server name must be an AWS regional endpoint in the format
s3.amazonaws.com or s3-<region>.amazonaws.com, for example, s3-us-west-2.amazonaws.com. The
region of the server and the bucket must match. For more information on AWS regions, refer to 'Amazon
documentation on AWS regions and endpoints'. For Azure, if the -server is a "blob.core.windows.net"
or a "blob.core.usgovcloudapi.net", then the value of azure-account in the configuration followed by a
period will be added in front of the server. Note that the value of azure-account cannot be modified.
[-is-ssl-enabled {true|false}] - Is SSL/TLS Enabled
This optional parameter indicates whether a secured SSL/TLS connection will be used during data access
to the object store.
[-port <integer>] - Port Number of the Object Store
This optional parameter specifies a new port number to connect to the object store server indicated in
the -server parameter.
[-access-key <text>] - Access Key ID for S3 Compatible Provider Types
This optional parameter specifies a new access key (access key ID) for the AWS S3, SGWS, IBM COS object
stores and ONTAP S3.
[-secret-password <text>] - Secret Access Key for S3 Compatible Provider Types
This optional parameter specifies a new password (secret access key) for the AWS S3, SGWS, IBM COS
object stores and ONTAP S3. For an Azure object store see -azure-private-key. If the -access-key
is specified but the -secret-password is not then one will be asked to enter the -secret-password
without echoing the input.
[-ask-azure-private-key {true|false}] - Ask to Enter the Azure Access Key without Echoing
If this optional parameter is true, you are prompted to enter the -azure-private-key without
echoing the input.
[-azure-private-key <text>] - Azure Access Key
This optional parameter specifies a new access key for the Azure object store. For other object store
providers, see -secret-password. See also -ask-azure-private-key.
[-ipspace <IPspace>] - IPspace to Use in Order to Reach the Object Store
This optional parameter specifies a new IPspace value for the configuration.
[-is-certificate-validation-enabled {true|false}] - Is SSL/TLS Certificate Validation Enabled
This optional parameter indicates whether an SSL/TLS certificate of an object store server is validated
whenever an SSL/TLS connection to an object store server is established. This parameter is only
applicable when is-ssl-enabled is true. It is recommended to keep the default value, which is true,
to ensure that Data ONTAP connects to a trusted object store server; otherwise, the identity of the
object store server is not verified.
[-use-http-proxy {true|false}] - Use HTTP Proxy
This optional parameter indicates whether an HTTP proxy will be used for connecting to an object store.
Note that an HTTP proxy is configured using the vserver http-proxy commands at the diagnostic
privilege level.
[-server-side-encryption {none | SSE-S3}] - Encryption of Data at Rest by the Object Store Server
(privilege: advanced)
This parameter specifies whether the AWS or other S3-compatible object store server must encrypt data at
rest. The available choices depend on provider-type. none (no encryption required) is supported by all
S3 (non-Azure) object store servers. SSE-S3 encryption is supported by all S3 (non-Azure) object store
servers except ONTAP_S3. This is an advanced property. In most cases it is best not to change the default
value of "sse_s3" for object store servers that support SSE-S3 encryption. This encryption is in addition
to any encryption done by ONTAP at the volume or aggregate level. Note that changing this option
does not change the encryption of data that already exists in the object store.
[-url-style {path-style | virtual-hosted-style}] - URL Style Used to Access S3 Bucket
This parameter specifies the URL style used to access the S3 bucket. This option is only available for
non-Azure object store providers. The available choices and default value depend on provider-type.
● Examples
The following example modifies two parameters (port number and is-ssl-enabled) of an object store
configuration named my-store:
cluster1::> storage aggregate object-store config modify -object-store-name my-store -port 1235 -is-ssl-enabled true
Related reference
vserver http-proxy commands (page 2138)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store config rename command is used to rename an existing object
store configuration.
● Parameters
-object-store-name <text> - Object Store Configuration Name
This parameter identifies an existing object store configuration.
-new-object-store-name <text> - Object Store Configuration New Name
This parameter specifies the new object store configuration name.
● Examples
The following example renames an object store configuration from my-store to ms1:
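cluster1::> storage aggregate object-store config rename -object-store-name my-store -new-object-store-name ms1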
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate object-store config show command displays information about all existing
object store configurations in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-object-store-name <text>] - Object Store Configuration Name
If this parameter is specified, the command displays information only about object store configurations
whose name matches the specified value.
[-object-store-uuid <UUID>] - UUID of the Object Store Configuration
If this parameter is specified, the command displays information only about object store configurations
whose UUID matches the specified value.
[-provider-type <providerType>] - Type of the Object Store Provider
If this parameter is specified, the command displays information only about object store configurations
whose provider type matches the specified value.
[-auth-type <object_store_auth_type>] - Authentication Used to Access the Object Store
If this parameter is specified, the command displays information only about object store configurations
whose authentication type matches the specified value.
[-cap-url <text>] - URL to Request Temporary Credentials for C2S Account
If this parameter is specified, the command displays information only about object store configurations
whose CAP URL matches the specified value.
[-server <Remote InetAddress>] - Fully Qualified Domain Name of the Object Store Server
If this parameter is specified, the command displays information only about object store configurations
whose server name matches the specified value. The server name is specified as a Fully Qualified
Domain Name (FQDN).
[-is-ssl-enabled {true|false}] - Is SSL/TLS Enabled
If this parameter is specified, the command displays information only about object store configurations
whose status about the use of secured communication over the network matches the specified value.
[-port <integer>] - Port Number of the Object Store
If this parameter is specified, the command displays information only about object store configurations
whose port number matches the specified value.
[-container-name <text>] - Data Bucket/Container Name
If this parameter is specified, the command displays information only about object store configurations
whose container name matches the specified value. Data ONTAP uses this container name or object
store data bucket while accessing data from the object store.
[-access-key <text>] - Access Key ID for S3 Compatible Provider Types
If this parameter is specified, the command displays information only about AWS S3, SGWS, IBM COS
object store configurations and ONTAP S3 whose access key matches the specified value. Data ONTAP
requires the access key for authorized access to the object store.
[-azure-account <text>] - Azure Account
If this parameter is specified, the command displays information only about Azure object store
configurations whose account matches the specified value. Data ONTAP requires the Azure account for
authorized access to the Azure object store.
[-ipspace <IPspace>] - IPspace to Use in Order to Reach the Object Store
If this parameter is specified, the command displays information only about object store configurations
whose IPspace matches the specified value. Data ONTAP uses the IPspace value to connect to the object
store.
[-iam-role <text>] - IAM Role for AWS Cloud Volumes ONTAP
If this parameter is specified, the command displays information only about object store configurations
whose IAM (Identity and Access Management) role matches the specified value.
[-is-certificate-validation-enabled {true|false}] - Is SSL/TLS Certificate Validation Enabled
If this parameter is specified, the command displays information only about object store configurations
whose status about the validation of SSL/TLS certificate matches the specified value.
[-use-http-proxy {true|false}] - Use HTTP Proxy
If this parameter is specified, the command displays information only about object store configurations
for which usage of HTTP proxy matches the specified value.
[-cluster <Cluster name>] - The Name of the Cluster to which the Configuration Belongs
If this parameter is specified, the command displays information only about object store configurations
for which cluster matches the specified value.
[-server-side-encryption {none | SSE-S3}] - Encryption of Data at Rest by the Object Store Server
(privilege: advanced)
If this parameter is specified, the command displays information only about object store configurations
whose server-side encryption matches the specified value.
[-url-style {path-style | virtual-hosted-style}] - URL Style Used to Access S3 Bucket
If this parameter is specified, the command displays information only about object store configurations
whose URL style matches the specified value.
● Examples
The following example displays all available object store configurations in the cluster:
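cluster1::> storage aggregate object-store config show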
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate object-store profiler abort command will abort an ongoing object store
profiler run. This command requires two parameters - an object store configuration and a node on which the
profiler is currently running.
● Parameters
-node {<nodename>|local} - Node on Which the Profiler Should Run
This parameter specifies the node on which the object store profiler is running.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider
type.
● Examples
The following example aborts the object store profiler:
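cluster1::> storage aggregate object-store profiler abort -node node1 -object-store-name my-store
(The node and object store configuration names are illustrative.)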
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate object-store profiler show command is used to monitor progress and
results of the storage aggregate object-store profiler start command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
This parameter specifies the node on which the profiler was started.
[-object-store-name <text>] - ONTAP Name for this Object Store Configuration
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider
type.
[-profiler-status <text>] - Profiler Status
Current status of the profiler.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Profiler Start Time
Time at which the profiler run started.
● Examples
The following example displays the results of storage aggregate object-store profiler start:
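cluster1::> storage aggregate object-store profiler show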
Related reference
storage aggregate object-store profiler start (page 1042)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage aggregate object-store profiler start command writes objects to an object store
and reads those objects to measure latency and throughput of an object store. This command requires two
parameters - an object store configuration and the node from which to send the PUT/GET/DELETE operations.
This command verifies whether the object store is accessible through the intercluster LIF of the node on
which it runs. The command fails if the object store is not accessible. The command will create a 10GB
dataset by doing 2500 PUTs for a maximum time period of 60 seconds. Then it will issue GET operations of
different sizes - 4KB, 8KB, 32KB, 256KB for a maximum time period of 180 seconds. Finally it will delete the
objects it created. This command can result in additional charges to your object store account. This is a CPU
intensive command. It is recommended to run this command when the system is under 50% CPU utilization.
● Parameters
-node {<nodename>|local} - Node on Which the Profiler Should Run
This parameter specifies the node from which PUT/GET/DELETE operations are sent.
-object-store-name <text> - Object Store Configuration Name
This parameter specifies the object store configuration that describes the object store. The object store
configuration has information about the object store server name, port, access credentials, and provider
type.
● Examples
The following example starts the object store profiler:
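cluster1::> storage aggregate object-store profiler start -node node1 -object-store-name my-store
(The node and object store configuration names are illustrative.)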
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate plex delete command deletes the specified plex. The aggregate specified
with the -aggregate parameter will be unmirrored and will contain the remaining plex. The disks in the
deleted plex become spare disks.
● Parameters
-aggregate <aggregate name> - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
-plex <text> - Plex
Name of a plex which belongs to the aggregate specified with the -aggregate parameter.
● Examples
The following example deletes plex0 of aggregate aggr1:
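cluster1::> storage aggregate plex delete -aggregate aggr1 -plex plex0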
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate plex offline command takes the specified plex offline. The aggregate
specified with the -aggregate parameter must be a mirrored aggregate and both plexes must be online.
Prior to taking a plex offline, the system will flush all internally-buffered data associated with the plex and
create a snapshot that is written out to both plexes. The snapshot allows for efficient resynchronization
when the plex is subsequently brought back online.
● Parameters
-aggregate <aggregate name> - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
-plex <text> - Plex
Name of a plex which belongs to the aggregate specified with the -aggregate parameter.
● Examples
The following example takes plex0 of aggregate aggr1 offline:
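cluster1::> storage aggregate plex offline -aggregate aggr1 -plex plex0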
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate plex online command brings the specified plex online. The aggregate
specified with the -aggregate parameter must be an online mirrored aggregate. The system will initiate
resynchronization of the plex as part of online processing.
● Parameters
-aggregate <aggregate name> - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
-plex <text> - Plex
Name of a plex which belongs to the aggregate specified with the -aggregate parameter.
● Examples
The following example brings plex0 of aggregate aggr1 online:
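cluster1::> storage aggregate plex online -aggregate aggr1 -plex plex0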
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate plex show command displays information for the specified plex. By default, the
command displays the following information about all plexes:
• Aggregate Name
• Plex Name
• Is Online
• Is Resyncing
• Resyncing Percentage
• Plex Status
To display detailed information about a single plex, use the -aggregate and -plex parameters.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
Name of an existing aggregate which contains the plex specified with the -plex parameter.
● Examples
The following example displays information about all the plexes for all the aggregates:
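cluster1::> storage aggregate plex show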
Aggregate: aggr1
Plex Name: plex1
Status: normal,active
Is Online: true
Resync is in Progress: false
Resyncing Percentage: -
Resync Level: -
Pool: 1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Temporarily stops any reallocation jobs that are in progress. When you use this command, the persistent
state is saved. You can use the storage aggregate reallocation restart command to restart a job
that is quiesced.
There is no limit to how long a job can remain in the quiesced (paused) state.
● Parameters
-aggregate <aggregate name> - Aggregate Name
Specifies the aggregate on which you want to temporarily pause the job.
● Examples
Related reference
storage aggregate reallocation restart (page 1046)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Starts a reallocation job. Use this command to restart a quiesced (temporarily stopped) job or a scheduled
scan that is idle for the aggregate.
● Parameters
-aggregate <aggregate name> - Aggregate Name
Specifies the aggregate on which you want to restart reallocation scans.
[-ignore-checkpoint | -i [true]] - Ignore Checkpoint
Restarts the job at the beginning when set to true. If you use this command without specifying this
parameter, its effective value is false and the job starts the scan at the point where it was stopped. If you
specify this parameter without a value, it is set to true and the scan restarts at the beginning.
● Examples
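The following example restarts a quiesced reallocation scan on an aggregate (the aggregate name aggr0 is illustrative):
cluster1::> storage aggregate reallocation restart -aggregate aggr0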
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Schedules a reallocation scan for an existing reallocation job. If the reallocation job does not exist, use the
storage aggregate reallocation start command to define a reallocation job.
You can delete an existing reallocation scan schedule. However, if you do this, the job's scan interval reverts
to the schedule that was defined for it when the job was created with the storage aggregate
reallocation start command.
● Parameters
-aggregate <aggregate name> - Aggregate Name
Specifies the aggregate on which you want to schedule reallocation jobs.
[-del | -d [true]] - Delete
Deletes an existing reallocation schedule when set to true. If you use this command without specifying
this parameter, its effective value is false and the reallocation schedule is not deleted. If you specify this
parameter without a value, it is set to true and the reallocation schedule is deleted.
[-cron | -s <text>] - Cron Schedule
Specifies the schedule with the following four fields in sequence. Use a space between field values.
Enclose the values in double quotes.
• minute is a value from 0 to 59.
• hour is a value from 0 (midnight) to 23 (11:00 p.m.).
• day of week is a value from 0 (Sunday) to 6 (Saturday).
• day of month is a value from 1 to 31.
Note
If you specify 31 as the value for the day of month, reallocation scans will not run in any months
with fewer than 31 days.
Use an asterisk "*" as a wildcard to indicate every value for that field. For example, an * in the day of
month field means every day of the month. You cannot use the wildcard in the minute field.
You can enter a number, a range, or a comma-separated list of values for a field.
● Examples
cluster1::> storage aggregate reallocation schedule -aggregate aggr0 -cron "0 23 6 *"
Related reference
storage aggregate reallocation start (page 1049)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays the status of a reallocation scan, including the state, schedule, aggregate and scan id. If you do not
specify the id for a particular reallocation scan, the command displays information about all the existing
reallocation scans.
● Parameters
{ [-fields <fieldname>, ...]
Displays the values of the fields that you specify for the reallocation scans that are present.
| [-v ]
Specify this parameter to display the output in a verbose format.
| [-instance ]}
Displays information about reallocation scans on aggregates in a list format.
[-id <integer>] - Job ID
Specify this parameter to display the reallocation scan that matches the reallocation job ID that you
specify.
[-aggregate <aggregate name>] - Aggregate Name
Specify this parameter to display the reallocation scan that matches the aggregate that you specify.
[-description <text>] - Job Description
Specify this parameter to display reallocation scans that match the text description that you specify.
[-state {Initial|Queued|Running|Waiting|Pausing|Paused|Quitting|Success|Failure|Reschedule|Error|
Quit|Dead|Unknown|Restart|Dormant}] - Job State
Specify this parameter to display reallocation jobs that match the state that you specify.
[-progress <text>] - Execution Progress
Specify this parameter to list the running reallocation jobs whose progress indicator matches the text
that you provide. For example, if you specify "Starting ..." as the text string for the progress option, then
the system lists all the jobs that are starting.
[-schedule <job_schedule>] - Schedule Name
Specify this parameter to display reallocation scans that match the schedule name that you specify. If
you want a list of all job schedules, use the job schedule show command.
[-global-status <text>] - Global State of Scans
Specify this parameter to indicate if reallocation scans are on or off globally. You must type either of the
following text strings:
• "Reallocation scans are on"
• "Reallocation scans are off"
● Examples
Related reference
job schedule show (page 218)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Begins a reallocation scan on a specified aggregate.
Before performing a reallocation scan, the reallocation job normally performs a check of the current layout
optimization. If the current layout optimization is less than the threshold, then the system does not perform
a reallocation on the aggregate.
You can define the reallocation scan job so that it runs at a specific interval, or you can use the storage
aggregate reallocation schedule command to schedule reallocation jobs.
● Parameters
-aggregate <aggregate name> - Aggregate Name
Specify this parameter to specify the target aggregate on which to start a reallocation scan.
{ [-interval | -i <text>] - Interval Schedule
Specifies the schedule in a single string with four fields in sequence:
• minute is a value from 0 to 59.
• hour is a value from 0 (midnight) to 23 (11:00 p.m.).
• day of week is a value from 0 (Sunday) to 6 (Saturday).
• day of month is a value from 1 to 31.
Note
If you specify 31 as the value for the day of the month, reallocation scans will not run in any of
the months with fewer than 31 days.
● Examples
cluster1::> storage aggregate reallocation start -aggregate aggr0 -interval "0 23 * 6"
Related reference
storage aggregate reallocation schedule (page 1047)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Stops and deletes any reallocation scan running on the specified aggregate. This command stops and
deletes in-progress, scheduled, and quiesced scans.
● Parameters
-aggregate <aggregate name> - Aggregate Name
Specify this parameter to specify the target aggregate on which to stop and delete a reallocation scan.
● Examples
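The following example stops and deletes the reallocation scan on an aggregate (the aggregate name aggr0 is illustrative):
cluster1::> storage aggregate reallocation stop -aggregate aggr0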
The storage aggregate relocation commands enable you to relocate aggregates from one node to
another node in the same cluster that shares the same storage.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate relocation show command displays the status of aggregates that were
relocated in the most recent relocation operation.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command only displays the fields that
you specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects aggregates from the specified source node.
[-aggregate <text>] - Aggregate Name
Selects the aggregates that match this parameter value.
[-relocation-status <text>] - Aggregates Relocation Status
Selects the aggregates whose relocation status matches this parameter value.
[-destination <text>] - Destination for Relocation
Selects the aggregates that are designated for relocation on the specified destination node.
● Examples
The following example displays the relocation status of aggregates on all nodes in the cluster:
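cluster1::> storage aggregate relocation show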
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate relocation start command initiates the relocation of the aggregates from
one node to the partner node in a high-availability (HA) pair.
● Parameters
-node {<nodename>|local} - Name of the Node that currently owns the aggregate
This specifies the source node where the aggregates to be relocated reside.
-destination {<nodename>|local} - Destination node
This specifies the destination node where aggregates are to be relocated.
-aggregate-list <aggregate name>, ... - List of Aggregates to be relocated
This specifies the list of aggregate names to be relocated from source node to destination node.
[-override-vetoes {true|false}] - Override Vetoes
This specifies whether to override the veto checks for relocation operation. Initiating aggregate
relocation with vetoes overridden will result in relocation proceeding even if the node detects
outstanding issues that would make aggregate relocation dangerous or disruptive. The default value is
false.
[-relocate-to-higher-version {true|false}] - Relocate To Higher Version
This specifies whether the aggregates are to be relocated to a node that is running a higher version of
Data ONTAP than the source node. If an aggregate is relocated to this destination, then that aggregate
cannot be relocated back to the source node until the source is also upgraded to the same or a higher
Data ONTAP version. This option is not required if the destination node is running a higher minor
version but the same major version. The default value is false.
[-override-destination-checks {true|false}] - Override Destination Checks
This specifies whether the relocation operation should override the checks done on the destination node.
This option can be used to force a relocation of aggregates even if the destination has outstanding
issues. Note that this can make the relocation dangerous or disruptive. The default value is false.
[-ndo-controller-upgrade {true|false}] - Relocate Aggregates for NDO Controller Upgrade (privilege:
advanced)
This specifies if the relocation operation is being done as a part of non-disruptive controller upgrade
process. Aggregate relocation will not change the home ownerships of the aggregates while relocating
as part of controller upgrade. The default value is false.
● Examples
The following example relocates aggregates name aggr1 and aggr2 from source node node0 to destination
node node1:
cluster1::> storage aggregate relocation start -node node0 -destination node1 -aggregate-list aggr1, aggr2
The storage aggregate resynchronization command family manages the number and the order of
aggregates that can start a resynchronization operation at any given time on a node. On SyncMirror enabled
nodes, plexes of mirrored aggregates can go offline because of a variety of issues. When the disks of an
offlined plex come back online, a background resynchronization operation is started on the aggregate to
make sure that the plex is up to date again. These commands allow the user to control the number of
resynchronization operations that can concurrently execute on a node, as well as the order in which the
aggregates are picked for the resynchronization operation. On a node containing multiple data aggregates,
the resynchronization of critical data aggregates can be prioritized by assigning a higher resync-priority
value than the rest of the data aggregates.
The storage aggregate resynchronization options commands can be used to control the volume
of background resynchronization I/O on a node after a storage or network outage by limiting the number of
aggregates that can resynchronize at the same time on a node.
Related reference
storage aggregate resynchronization options commands (page 1054)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate resynchronization modify command can be used to modify the
resynchronization priority of an aggregate.
When the number of aggregates pending resynchronization is higher than the maximum number of
concurrent resynchronization operations allowed on a node, the aggregates get resynchronized in the order
of their "resync-priority" values.
For example, let the max-concurrent-resync under the storage aggregate resynchronization
options directory for a node be set to two. If there are three aggregates waiting to be resynchronized,
where their respective resync-priority values are high, medium, and low, then the third aggregate is
not allowed to start resynchronization until one of the first two aggregates has completed resynchronizing.
● Parameters
-aggregate <aggregate name> - Aggregate
This parameter specifies the aggregate that is to be modified.
[-resync-priority {high(fixed)|high|medium|low}] - Resynchronization Priority
This parameter specifies the new resynchronization priority value for the specified aggregate. This field
cannot be modified for unmirrored or Data ONTAP system aggregates.
Possible values for this parameter are:
• high: Mirrored data aggregates with this priority value start resynchronization first.
• medium: Mirrored data aggregates with this priority value start resynchronization after all the system
aggregates and data aggregates with 'high' priority value have started resynchronization.
• low: Mirrored data aggregates with this priority value start resynchronization only after all the other
aggregates have started resynchronization.
● Examples
The following example changes the resync-priority of a specified aggregate to medium:
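cluster1::> storage aggregate resynchronization modify -aggregate aggr1 -resync-priority medium
(The aggregate name aggr1 is illustrative.)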
Related reference
storage aggregate resynchronization options commands (page 1054)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate resynchronization show command displays the relative resynchronization
priority for each aggregate in the cluster. When a particular node restricts how many resync operations can
be active concurrently, these priorities are used to prioritize the operations. The maximum concurrent resync
operations for a node is displayed in the storage aggregate resynchronization options show
command. If no parameters are specified, the command displays the following information about all the
aggregates in the cluster:
• Aggregate name
• Node that owns the aggregate
• Resync priority for the aggregate
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-aggregate <aggregate name>] - Aggregate
If this parameter is specified, the command displays the resynchronization priority only for the specified
aggregate.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays the resynchronization priority only for the
aggregates owned by the specified node.
[-resync-priority {high(fixed)|high|medium|low}] - Resynchronization Priority
If this parameter is specified, the command displays only the resynchronization priority that matches the
specified value. Possible values for this parameter are:
• high(fixed): This value is reserved for Data ONTAP system aggregates, which cannot have any other
value for this field. These aggregates always start their resynchronization operation at the first
available opportunity. This value cannot be assigned to a data aggregate.
• high: Mirrored data aggregates with this priority value start resynchronization first.
• medium: Mirrored data aggregates with this priority value start resynchronization after all the system
aggregates and data aggregates with 'high' priority value have started resynchronization.
• low: Mirrored data aggregates with this priority value start resynchronization only after all the other
aggregates have started resynchronization.
When the number of aggregates waiting for resynchronization is higher than the maximum number of
resynchronization operations allowed on a node, then the resync-priority field is used to determine
which aggregate starts resynchronization first. This field is not set for unmirrored aggregates.
● Examples
The following command displays the resynchronization priorities for all the aggregates in the cluster:
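The command text was not preserved in this copy; a representative invocation (the cluster prompt is illustrative) would be:

```
cluster1::> storage aggregate resynchronization show
```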
Related reference
storage aggregate resynchronization options show (page 1054)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate resynchronization options modify command can be used to modify the
options that govern the resynchronization of aggregates on a given cluster node.
Modifying the max-concurrent-resyncs option changes the number of aggregates that are allowed to
resynchronize concurrently. When the number of aggregates waiting for resynchronization is higher than
this value, the aggregates are resynchronized in the order of their "resync-priority". The priority can be
modified using the storage aggregate resynchronization modify command with the
-resync-priority parameter.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which the option is to be modified.
[-max-concurrent-resync <integer>] - Maximum Concurrent Resynchronizing Aggregates
This parameter specifies the new value for the maximum number of concurrent resync operations
allowed on a node. This option must be specified along with the -node parameter. When a node has
active resync operations, setting this parameter to a value that is lower than the number of currently
resyncing aggregates will trigger a user confirmation.
● Examples
The following example changes the maximum concurrent resync operations for the specified node to ten:
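The example itself was not preserved here; a representative invocation (the node name node1 is a placeholder) would be:

```
cluster1::> storage aggregate resynchronization options modify -node node1 -max-concurrent-resync 10
```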
Related reference
storage aggregate resynchronization modify (page 1052)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage aggregate resynchronization options show command displays all the options that
govern the resynchronization of aggregates on a given cluster node. If no parameters are specified, the
command displays the following information about all nodes:
• Node for which the information is being displayed
• Maximum number of concurrent resynchronizing aggregates allowed
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays resynchronization options only for the specified
node.
[-max-concurrent-resync <integer>] - Maximum Concurrent Resynchronizing Aggregates
If this parameter is specified, the command displays only the resynchronization option that matches the
specified value.
● Examples
The following example displays the maximum number of concurrent resyncs allowed for each node in the
cluster:
The following example displays the maximum number of concurrent resyncs allowed for a specified node:
The following example displays all the nodes that allow more than five concurrent resync operations:
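The three example invocations were not preserved in this copy; representative commands (node name node1 is a placeholder; ">5" uses the extended query syntax) would be:

```
cluster1::> storage aggregate resynchronization options show
cluster1::> storage aggregate resynchronization options show -node node1
cluster1::> storage aggregate resynchronization options show -max-concurrent-resync >5
```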
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array modify command lets the user change several array parameters.
● Parameters
-name <text> - Name
Storage array name, either generated by Data ONTAP or assigned by the user.
[-prefix <text>] - Prefix
Abbreviation for the named array.
[-vendor <text>] - Vendor
Array manufacturer.
[-model <text>] - Model
Array model number.
[-options <text>] - options
Vendor specific array settings.
[-max-queue-depth <integer>] - Target Port Queue Depth (privilege: advanced)
The target port queue depth for all target ports on this array.
[-lun-queue-depth <integer>] - LUN Queue Depth (privilege: advanced)
The queue depth assigned to array LUNs from this array.
{ [-is-upgrade-pending {true|false}] - Upgrade Pending (privilege: advanced)
Set this parameter to true if the array requires additional Data ONTAP resilience for a pending firmware
upgrade. Keep this parameter false during normal array operation. This value cannot be set to true if
-path-failover-time is greater than zero.
| [-path-failover-time <integer>] - Path Failover Time (sec)
The time delay (in secs) before switching the I/O path when the path is deleted. The maximum time
delay is 30 sec. The default is 0. This value cannot be greater than zero if -is-upgrade-pending is true.
[-all-path-fail-delay <integer>]} - Extend All Path Failure Event (secs)
Use this parameter to increase the delay before Data ONTAP declares an "all path failure" event for an
array. Delaying the "all path failure" event allows Data ONTAP to suspend I/O operations for a longer
period of time before declaring a data access disruption, allowing for I/O operations to resume if any
path comes back online within the specified duration. A valid delay is any value between 30 and 90
seconds. A value of 0 will reset the delay, resulting in default actions being taken whenever an "all path
failure" event is detected.
● Examples
This command changes the model to FastT.
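The command itself was not preserved here; a representative invocation (the array name IBM_1722_1 is a placeholder) would be:

```
cluster1::> storage array modify -name IBM_1722_1 -model FastT
```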
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array remove command discards array profile records for a particular storage array from the
cluster database. The command fails if a storage array is still connected to the cluster. Use the storage
array config show command to view the array connectivity status. The array target port can be removed
using the storage array port remove command.
● Parameters
-name <text> - Name
Name of the storage array you want to remove from the database.
● Examples
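The example was not preserved in this copy; a representative invocation (array name taken from examples elsewhere in this chapter) would be:

```
cluster1::> storage array remove -name HITACHI_DF600F_1
```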
Related reference
storage array config show (page 1059)
storage array port remove (page 1065)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array rename command replaces the array profile name that Data ONTAP assigned to the
storage array during device discovery. By default, this discovered name is shown in Data ONTAP displays
and command output.
● Parameters
-name <text> - Name
Storage array name either generated by Data ONTAP or assigned by the user.
-new-name <text> - The new name to assign to this array profile. (28 chars max)
New name to assign to the storage array.
● Examples
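The example was not preserved in this copy; a representative invocation (the new name array1 is a placeholder) would be:

```
cluster1::> storage array rename -name HITACHI_DF600F_1 -new-name array1
```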
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array show command displays information about arrays visible to the cluster. If no
parameters are specified, the command displays the following information about all storage arrays:
• Prefix
• Name
• Vendor
• Model
• Options
To display detailed information about a single array, use the -name parameter. The detailed view adds the
following information:
• Serial Number
• Optimization Policy
• Affinity
• Errors
• Path Failover Time
• Extend All Path Failure Event
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Name
Selects the arrays that match this parameter value.
[-prefix <text>] - Prefix
Abbreviation for the named array.
[-vendor <text>] - Vendor
Array manufacturer.
[-model <text>] - Model
Array model number.
[-options <text>] - options
Vendor specific array settings.
[-serial-number <text>] - Serial Number
Array product identifier.
[-max-queue-depth <integer>] - Target Port Queue Depth (privilege: advanced)
Selects the arrays that match this parameter value.
[-lun-queue-depth <integer>] - LUN Queue Depth (privilege: advanced)
Selects the arrays that match this parameter value.
[-optimization-policy {iALUA|eALUA|symmetric|proprietary|mixed|unknown}] - Optimization Policy
Selects the arrays that match this parameter value.
[-affinity {none|aaa|ap|mixed|unknown}] - Affinity
Selects the arrays that match this parameter value.
[-error-text <text>, ...] - Error Text
Selects the arrays that match this parameter value.
● Examples
The following example displays information about all arrays.
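The invocations were not preserved in this copy; representative commands for the summary list and for the detailed view shown below (array name taken from the sample output) would be:

```
cluster1::> storage array show
cluster1::> storage array show -name HITACHI_DF600F_1
```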
Name: HITACHI_DF600F_1
Prefix: abc
Vendor: HITACHI
Model: DF600F
options:
Serial Number: 4291000000000000
Optimization Policy: iALUA
Affinity: aaa
Error Text:
Path Failover Timeout (sec): 30
Extend All Path Failure Event (secs): 50
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array config show command displays information about how the storage arrays connect
to the cluster, LUN groups, number of LUNs, and more. Use this command to validate the configuration and
to assist in troubleshooting.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-switch ]
Displays the switch port information, in column style output.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
LUN   Array
Group LUN Count Array Name            Array Target Port  Initiator
----- --------- --------------------- ------------------ ---------
                                      5006016830229f13   0b
                                      5006016930229f13   0a
1     21        HITACHI_OPEN_1        50060e80034fe704   0c
                                                         0d
                                      50060e80034fe714   0a
                                                         0b
                                      50060e80034fe715   0b
                                      50060e80034fe716   0c
                                                         0d
2     8         EMC_SYMMETRIX_1       50060482cb1bce1d   0a
                                                         0b
                                      5006048acb1bce0c   0c
                                                         0d
3     10        IBM_UniversalXport_1  202600a0b8322d10   0c
                                                         0d
                                      204700a0b8322d10   0a
                                                         0b
38 entries were displayed.
Warning: Configuration errors were detected. Use 'storage errors show' for detailed information.
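A listing such as the truncated one above would typically be produced by the bare command (the prompt is illustrative):

```
cluster1::> storage array config show
```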
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array disk paths show command displays information about disks and array LUNs.
Where it appears in the remainder of this document, "disk" may refer to either a disk or an array LUN. By
default, the command displays the following information about all disks:
• Disk Unique Identifier
• Controller name
• Initiator Port
• LUN ID
• Failover optimization type
• The Use State of the LUN on this path
• Target Port
• Target IQN
• TPGN
• Port speeds
• Kbytes/sec on Disk (Rolling Average)
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all disks, in column style output.
| [-switch ]
Displays the switch port information for all disks, in column style output.
| [-instance ]}
Displays detailed disk information. If no disk path name is specified, this parameter displays the same
detailed information for all disks as does the -disk parameter. If a disk path name is specified, then this
parameter displays the same detailed information for the specified disks as does the -disk parameter.
[-uid <text>] - Disk Unique Identifier
Selects the disks whose unique id matches this parameter value. A disk unique identifier has the form:
20000000:875D4C32:00000000:00000000:00000000:00000000:00000000:00000000:0000000
0:00000000
[-disk <disk path name>] - Disk Name
Displays detailed information about the specified disks.
[-array-name <array name>] - Array Name
Selects information about the LUNs presented by the specified storage array.
[-diskpathnames <disk path name>, ...] - Path-Based Disk Names
Selects information about disks that have all of the specified path names.
[-nodelist {<nodename>|local}, ...] - Controller name
Selects information about disks that are visible to all of the specified nodes.
[-initiator <text>, ...] - Initiator Port
Selects information about disks that are visible to the initiator specified. Disks that are not currently in
use by that initiator are included.
[-lun <integer>, ...] - LUN ID
Selects information about the specified LUNs.
[-target-wwpn <text>, ...] - Target Port
Selects information about disks that are visible on target ports identified by their World Wide Port Name.
[-initiator-side-switch-port <text>, ...] - Initiator Side Switch Port
Selects information about disks visible to an initiator that is connected to the specified switch port.
[-lun-path-use-state <text>, ...] - The Use State of the LUN on this path
Selects information about LUNs reporting the specified in-use state.
[-tpgn <integer>, ...] - Target Port Group Number
Selects information about disks that belong to the specified Target Port Group Number.
[-port-speed <text>, ...] - Port Speed
Selects information about disks served by a Host Bus Adapter that is running at the specified port speed.
[-lun-io-kbps <integer>, ...] - Kbytes/sec on Disk (Rolling Average)
Selects information about the LUNs that have reached the specified I/O throughput.
[-lun-iops <integer>, ...] - Number IOPS per second on disk (Rolling Average)
Selects information about the LUNs that have reached the specified number of IOPs.
[-target-side-switch-port <text>, ...] - Target Side Switch Port
Selects information about disks that are visible on target ports identified by the switch port to which
they are connected.
[-target-port-access-state <text>, ...] - Failover optimization type
Selects information about disks visible on target ports that have the specified access state.
[-initiator-io-kbps <integer>, ...] - Kbytes of I/O per second on Initiator (Rolling Average)
Selects information about disks visible to an initiator that has executed I/O at the specified throughput.
[-initiator-iops <integer>, ...] - Number of IOPS on Initiator (Rolling Average)
Selects information about disks visible to an initiator that has executed the specified number of IOPs.
[-target-io-kbps <integer>, ...] - Kbytes of I/O per second to Target (Rolling Average)
Selects information about disks visible on target ports that have reached the specified I/O throughput.
[-target-iops <integer>, ...] - Number of IOPS to Target (Rolling Average)
Selects information about disks visible on target ports that have performed the specified number of
IOPs.
[-path-link-errors <integer>, ...] - Link Error count on path
Selects information about disks with paths that have incurred the specified number of FC link errors.
[-path-io-kbps <integer>, ...] - Kbytes of I/O per second on Path (Rolling Average)
Selects information about disk with paths that have reached the specified I/O throughput.
[-path-iops <integer>, ...] - Number of IOPS on Path (Rolling Average)
Selects information about disks on those paths that have reached the specified number of IOPs.
[-path-quality <integer>, ...] - Percentage of weighted error threshold
Selects information about disks on paths that have incurred the specified number of errors. The value
displayed is a measure of the health of a path expressed as a percentage of an error threshold. Once a
path has reached or surpassed the error threshold, another path will be selected for I/O transfer, if there
is one available.
[-path-lun-in-use-count <integer>, ...] - Number of LUNs in the in-use state on this path
Selects information about disks with paths that have the specified in-use-count.
[-initiator-lun-in-use-count <integer>, ...] - Number of LUNs in the in-use state on this initiator
Selects information about disks with a path through an initiator that has the specified in-use-count.
[-target-lun-in-use-count <integer>, ...] - Number of LUNs in the in-use state on this target
Selects information about disks with a path through a target port that has the specified in-use-count.
[-preferred-target-port {true|false}, ...] - Whether or not target port group is preferred
Selects information about disks that match the specified parameter value indicating whether the
backing storage is ALUA (Asymmetric Logical Unit Access) capable and has specified the array target port
on this path to be a preferred target port for I/O.
[-vmdisk-device-id <integer>, ...] - Virtual disk device ID
Selects information about disks that have the specified virtual disk device ID.
[-host-adapter <text>] - Primary Path Host Adapter
Selects information about disks that are currently using the specified Host Bus Adapter.
[-primary-port <text>] - Primary Path Disk Port
Selects information about disks that use the specified primary port.
[-secondary-name <disk path name>] - Secondary Path Name
Selects information about disks that use the specified secondary path name, for multipath configuration.
[-secondary-port <text>] - Secondary Path Disk Port
Selects information about disks that use the specified secondary port.
● Examples
The following example displays information about all disks:
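The example itself was not preserved here; a representative invocation (the prompt is illustrative) would be:

```
cluster1::> storage array disk paths show
```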
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array port modify command lets the user change array target port parameters.
● Parameters
-name <text> - Name
Selects the array ports that match this parameter value. The storage array name is either generated by
Data ONTAP or assigned by the user.
-wwnn <text> - WWNN
Selects the array ports that match this parameter value.
-wwpn <text> - WWPN
Selects the array ports that match this parameter value.
[-max-queue-depth <integer>] - Target Port Queue Depth
The target port queue depth for this target port.
[-utilization-policy {normal|defer}] - Utilization Policy
The policy used in automatically adjusting the queue depth of the target port based on its utilization.
● Examples
This command changes the maximum queue depth for this target port to 32.
cluster1::> storage array port modify -name HITACHI_DF600F_1 -wwnn 50060e80004291c0 -wwpn 50060e80004291c0 -max-queue-depth 32
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array port remove command removes a port from the array database. You might want to
remove ports that are no longer connected to the clustered node. Port information can change after
hardware replacement, rezoning, or similar configuration activities. The database retains the records about
previous ports unless you remove the information.
● Parameters
-name <text> - Name
Selects the array ports that match this parameter value. The storage array name is either generated by
Data ONTAP or assigned by the user.
{ [-wwnn <text>] - WWNN
Selects the array ports that match this parameter value.
[-wwpn <text>] - WWPN
Selects the array ports that match this parameter value.
| [-target-iqn <text>]} - Target IQN
Selects the array ports that match this parameter value.
● Examples
This command removes a port record from the array profiles database.
cluster1::> storage array port remove -name HITACHI_DF600F_1 -wwnn 50060e80004291c0 -wwpn 50060e80004291c0
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage array port show command displays all the target ports known to the cluster for a given
storage array (if an array name is specified) or for all storage arrays if no storage array name is specified.
Target ports remain in the database as part of an array profile unless you explicitly remove them from the
database.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-name <text>] - Name
Selects the array ports that match this parameter value. The storage array name is either generated by
Data ONTAP or assigned by the user.
[-wwnn <text>] - WWNN
Selects the array ports that match this parameter value.
[-wwpn <text>] - WWPN
Selects the array ports that match this parameter value.
[-max-queue-depth <integer>] - Target Port Queue Depth
Selects the array ports that match this parameter value.
[-node {<nodename>|local}, ...] - Controller Name
Selects the array ports that match this parameter value.
[-initiator-port <text>, ...] - Initiator Port
Selects the array ports that match this parameter value.
[-average-dynamic-queue-depth <integer>, ...] - Average Dynamic Queue Depth (privilege: advanced)
The average value of the dynamic target port queue depth.
[-average-latency-per-iop <integer>, ...] - Average Latency Per IOP
Selects the array ports that match this parameter value (average latency per I/O performed in
microseconds).
[-average-pending <integer>, ...] - Average Pending (privilege: advanced)
Selects the array ports that match this parameter value (average over time of how many commands are
on the outstanding queue).
[-average-waiting <integer>, ...] - Average Waiting (privilege: advanced)
Selects the array ports that match this parameter value (average over time of how many commands are
on the waiting queue).
[-connection-type {direct|fabric|ISCSI}] - Connection Type
Selects the array ports that match this parameter value (type of connection between the controller and
the back end storage).
[-dynamic-queue-depth <integer>, ...] - Dynamic Queue Depth (privilege: advanced)
Current dynamic target port queue depth, the maximum number of commands allowed outstanding.
[-max-pending <integer>, ...] - Max Pending (privilege: advanced)
Selects the array ports that match this parameter value (largest number of commands observed on the
outstanding queue).
[-max-waiting <integer>, ...] - Max Waiting (privilege: advanced)
Selects the array ports that match this parameter value (largest number of commands observed on the
waiting queue).
[-path-link-errors <integer>, ...] - Link Error count on path
Selects the array ports that match this parameter value.
[-percent-busy <integer>, ...] - Percent Busy
Selects the array ports that match this parameter value (percentage of time I/Os are outstanding on the
port).
[-percent-waiting <integer>, ...] - Percent Waiting
Selects the array ports that match this parameter value (percentage of time there are I/Os waiting on
the throttle list on the target port).
[-switch-port <text>] - Switch Port
Selects the array ports that match this parameter value (for fabric attached connections, the switch port
the array target port is connected to; N/A for direct attached).
[-target-io-kbps <integer>, ...] - Kbytes of I/O per second to Target (Rolling Average)
Selects the array ports that match this parameter value.
[-target-iops <integer>, ...] - Number of IOPS to Target (Rolling Average)
Selects the array ports that match this parameter value.
[-target-lun-in-use-count <integer>, ...] - Target LUN In Use Count
Selects the array ports that match this parameter value (number of IN-USE disks on this target port).
[-target-port-speed <text>] - Target Port Speed
Selects the array ports that match this parameter value (speed that the target port has negotiated with
its connected switch port, or initiator port if direct attached).
[-utilization-policy {normal|defer}] - Utilization Policy
The policy used when sending I/O to an array target port when it reaches maximum queue depth.
Possible values are:
• normal - This policy aggressively competes for target port resources, in effect competing with other
hosts. (default)
• defer - This policy does not aggressively compete for target port resources, in effect deferring to other
hosts.
● Examples
The example below displays the port information for a single port.
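The example output was not preserved in this copy; a representative invocation for a single port (WWNN/WWPN taken from the modify example earlier in this section) would be:

```
cluster1::> storage array port show -wwnn 50060e80004291c0 -wwpn 50060e80004291c0
```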
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage automated-working-set-analyzer show command displays the Automated Working-set
Analyzer (AWA) running instances.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
This parameter indicates the node name that the AWA instance runs on.
[-flash-cache {true|false}] - Flash Cache Node-wide Modeling
This parameter indicates whether the AWA is modeling flash-cache.
[-aggregate-uuid <UUID>] - Uuid of the Aggregate
This parameter indicates the aggregate uuid that the AWA instance runs on.
[-aggregate <aggregate name>] - Aggregate
This parameter indicates the aggregate name that the AWA instance runs on.
[-working-set-size {true|false}] - Working Set Size
This parameter indicates whether the AWA instance is configured to find the working set size.
[-start-time <Date>] - Starting Time
This parameter indicates the time when the AWA instance was started.
[-total-intervals <integer>] - Total Interval Count
This parameter indicates the total number of intervals that the AWA instance has covered.
[-read-throughput {<integer>[Bps|KBps|MBps|GBps]}] - Read Throughput
This parameter indicates the maximum read throughput over an interval that AWA has observed from
the storage disks.
[-write-throughput {<integer>[Bps|KBps|MBps|GBps]}] - Write Throughput
This parameter indicates the maximum write throughput over an interval that AWA has observed to the
storage disks.
[-cacheable-read <percent>] - Cacheable Read
This parameter indicates the maximum percent of cacheable read over an interval that AWA has
observed. Cacheable reads are non-sequential reads, i.e., the percentage of data reads that could have
been cached.
[-cacheable-write <percent>] - Cacheable Write
This parameter indicates the maximum percent of cacheable write over an interval that AWA has
observed. Cacheable writes are random overwrites, i.e., the percentage of disk writes that could have
been cached.
[-projected-cache-size {<integer>[KB|MB|GB|TB|PB]}] - Max Projected Cache Size
This parameter indicates the projected Flash Pool cache usage.
[-projected-read-hit <percent>] - Projected Read Hit
This parameter indicates the percentage of blocks that could be read from the Flash Pool cache instead
of HDDs.
[-projected-write-hit <percent>] - Projected Write Hit
This parameter indicates the percentage of block overwrites that could go to the Flash Pool cache
instead of HDDs.
[-referenced-interval-id <integer>] - Referenced Interval ID
This parameter indicates the interval from which the cache size effect information is derived.
[-referenced-interval-time <Date>] - Referenced Interval Time
This parameter indicates the time of the referenced interval from which the cache size effect
information is derived.
[-referenced-interval-cache-size {<integer>[KB|MB|GB|TB|PB]}] - Referenced Interval Cache Size
This parameter indicates the cache size at the end of the referenced interval on which the cache size
effect information is based.
[-read-hit-20 <percent>] - 20% Cache Read Hit
This parameter indicates the predicted read hit rate when the cache size is 20% of the referenced cache
size.
[-read-hit-40 <percent>] - 40% Cache Read Hit
This parameter indicates the predicted read hit rate when the cache size is 40% of the referenced cache
size.
[-read-hit-60 <percent>] - 60% Cache Read Hit
This parameter indicates the predicted read hit rate when the cache size is 60% of the referenced cache
size.
[-read-hit-80 <percent>] - 80% Cache Read Hit
This parameter indicates the predicted read hit rate when the cache size is 80% of the referenced cache
size.
[-read-hit-100 <percent>] - 100% Cache Read Hit
This parameter indicates the predicted read hit rate when the cache size is 100% of the referenced cache
size.
[-write-hit-20 <percent>] - 20% Cache Write Hit
This parameter indicates the predicted write hit rate when the cache size is 20% of the referenced cache
size.
[-write-hit-40 <percent>] - 40% Cache Write Hit
This parameter indicates the predicted write hit rate when the cache size is 40% of the referenced cache
size.
[-write-hit-60 <percent>] - 60% Cache Write Hit
This parameter indicates the predicted write hit rate when the cache size is 60% of the referenced cache
size.
[-write-hit-80 <percent>] - 80% Cache Write Hit
This parameter indicates the predicted write hit rate when the cache size is 80% of the referenced cache
size.
[-write-hit-100 <percent>] - 100% Cache Write Hit
This parameter indicates the predicted write hit rate when the cache size is 100% of the referenced
cache size.
[-num-intervals-show <integer>] - Number of intervals to show
This parameter indicates how many past intervals the command displays.
● Examples
The following example shows a running instance of automated-working-set-analyzer on node node1 for
aggregate aggr0.
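The example output was not preserved in this copy; a representative invocation for that instance would be:

```
cluster1::> storage automated-working-set-analyzer show -node node1 -aggregate aggr0
```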
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage automated-working-set-analyzer start command enables the Automated Workload Analyzer
(AWA), which is capable of the following:
• Flash Pool modeling for an aggregate
• Flash Cache modeling for a node (an aggregate cannot be specified)
• Working set size estimation
• Workload monitoring
● Parameters
-node <nodename> - Node Name
This parameter indicates the node name that the AWA instance runs on.
[-flash-cache {true|false}] - Flash Cache Node-wide Modeling
This parameter indicates whether the AWA is modeling flash-cache.
[-aggregate <aggregate name>] - Aggregate
This parameter indicates the aggregate name that the AWA instance runs on.
[-working-set-size {true|false}] - Working Set Size
This parameter indicates whether the AWA instance is configured to find the working set size.
● Examples
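The example was not preserved in this copy; a representative invocation (node and aggregate names are placeholders) would be:

```
cluster1::> storage automated-working-set-analyzer start -node node1 -aggregate aggr0
```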
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage automated-working-set-analyzer stop command terminates one or multiple Automated
Workload Analyzer running instances.
● Parameters
-node <nodename> - Node Name
This parameter indicates the node name that the AWA instance runs on.
[-flash-cache {true|false}] - Flash cache node-wide modeling
This parameter indicates whether the AWA is modeling flash-cache.
[-aggregate <aggregate name>] - Aggregate
This parameter indicates the aggregate name that the AWA instance runs on.
● Examples
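The example was not preserved in this copy; a representative invocation (node and aggregate names are placeholders) would be:

```
cluster1::> storage automated-working-set-analyzer stop -node node1 -aggregate aggr0
```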
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage automated-working-set-analyzer volume show command displays the volume statistics
reported by the corresponding Automated Working-set Analyzer running instances.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
This parameter indicates the node name that the AWA instance runs on.
[-flash-cache {true|false}] - Flash Cache Node-wide Modeling
This parameter indicates whether the AWA is modeling flash-cache.
[-vol-uuid <UUID>] - Uuid of the Volume
This parameter indicates the volume uuid that this command is issued on.
[-aggregate <aggregate name>] - Aggregate
This parameter indicates the aggregate name that the AWA instance runs on.
[-volume <volume name>] - Volume
This parameter indicates the volume name that this command is issued on.
● Examples
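The example was not preserved in this copy; a representative invocation (node and aggregate names are placeholders) would be:

```
cluster1::> storage automated-working-set-analyzer volume show -node node1 -aggregate aggr0
```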
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk assign command is used to assign ownership of an unowned disk or array LUN to a
specific node. You can also use this command to change the ownership of a disk or an array LUN to another
node. You can designate disk ownership by specifying disk names, array LUN names, wildcards, or all (for all
disks or array LUNs visible to the node). For disks, you can also set up disk ownership autoassignment. You
can also assign disks to a particular pool. You can also assign disks by copying ownership from another disk.
● Parameters
{ [-disk <disk path name>] - Disk Path
This specifies the disk or array LUN that is to be assigned. Disk names take one of the following forms:
• Disks are named in the form <stack-id>.<shelf>.<bay>
• Disks on multi-disk carriers are named in the form <stack-id>.<shelf>.<bay>.<lun>
• Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and
number is a unique ascending number.
Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP
8.3:
• Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For
disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number 16
on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is
named node0a:1a.16Llun0.
• Disks that are attached to a switch are named in the form
<node>:<switch_name>:<switch_port>.<loop_ID>. For disks with a LUN, the form is
<node>:<switch_name>:<switch_port>.<loop_ID>L<LUN>. For instance, disk number 08 on port 11 of
switch fc1 on a node named node0a is named node0a:fc1:11.08. The same disk on LUN lun1 is named
node0a:fc1:11.08Llun1.
Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names,
depending on how the disk is connected. For example, a disk known to a node named alpha as alpha:
1a.19 can be known to a node named beta as beta:0b.37. All names are listed in the output of queries
and are equally valid. To determine a disk's unique identity, run a detailed query and look for the disk's
universal unique identifier (UUID) or serial number.
A subset of disks or array LUNs can be assigned using the wildcard character (*) in the -disk
parameter. Either the -owner, the -sysid, or the -copy-ownership-from parameter must be
specified with the -disk parameter. Do not use the -node parameter with the -disk parameter.
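The legacy naming forms above can be sketched with a small Python helper. This is an illustrative function of my own, not part of the ONTAP CLI; it simply assembles names according to the rules described:

```python
def legacy_disk_name(node, loop_id, adapter=None, switch=None,
                     switch_port=None, lun=None):
    """Build a pre-Data ONTAP 8.3 disk name from its connection details.

    Hypothetical helper illustrating the naming forms described above.
    """
    if switch is not None:
        # Switch-attached: <node>:<switch_name>:<switch_port>.<loop_ID>[L<LUN>]
        name = f"{node}:{switch}:{switch_port}.{loop_id}"
    else:
        # Direct-attached: <node>:<host_adapter>.<loop_ID>[L<LUN>]
        name = f"{node}:{adapter}.{loop_id}"
    if lun is not None:
        name += f"L{lun}"
    return name

print(legacy_disk_name("node0a", "16", adapter="1a"))              # node0a:1a.16
print(legacy_disk_name("node0a", "16", adapter="1a", lun="lun0"))  # node0a:1a.16Llun0
print(legacy_disk_name("node0a", "08", switch="fc1", switch_port="11"))  # node0a:fc1:11.08
```

The outputs match the worked examples in the text (node0a:1a.16, node0a:1a.16Llun0, node0a:fc1:11.08).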
| -disklist <disk path name>, ... - Disk list
This specifies the list of disks to be assigned.
| -all [true] - Assign All Disks
This optional parameter causes assignment of all visible unowned disks or array LUNs to the node
specified in the -node parameter. The -node parameter must be specified with the -all parameter.
When the -copy-ownership-from parameter is specified with the -node parameter, it assigns disk
ownership based on the -copy-ownership-from parameter; otherwise it assigns ownership of the
disks based on the -node parameter. Do not use the -owner or the -sysid parameter with the -all
parameter.
| [-type | -T {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM | SSD-CAP |
VMLUN | VMLUN-SSD}] - Storage Type
This optional parameter assigns ownership of a specific type of disk or array LUN (or a set of disks/array
LUNs) to a node. The -count parameter must be specified with the -type parameter.
-count | -n <integer> - Disk Count
This optional parameter assigns ownership of the number of disks or array LUNs specified in the -count
parameter to a node.
| -auto [true]} - Auto Assign
This optional parameter causes all visible disks eligible for autoassignment to be immediately assigned
to the node specified in the -node parameter, regardless of the setting of the disk.auto_assign option.
Only unowned disks on loops or stacks wholly owned by that system, and which have the same pool
information, are assigned. The -node parameter must be specified with the -auto parameter. Do not
use the -owner, the -sysid, or the -copy-ownership-from parameter with the -auto parameter.
When possible, use the -auto parameter rather than the -all parameter to conform to disk ownership
best practices. The -auto parameter is ignored for array LUNs.
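The eligibility rule for -auto assignment can be condensed into a single predicate. The following is an illustrative sketch of the conditions stated above, not ONTAP's internal logic:

```python
def auto_assign_eligible(disk_owned, stack_wholly_owned, same_pool_info):
    """Whether a disk qualifies for -auto assignment, per the rules above:
    it must be unowned, sit on a loop or stack wholly owned by the system,
    and carry matching pool information. (Assumed model, not ONTAP code.)
    """
    return (not disk_owned) and stack_wholly_owned and same_pool_info

# An unowned disk on a wholly owned stack with matching pool info qualifies:
print(auto_assign_eligible(False, True, True))   # True
# An already-owned disk never qualifies:
print(auto_assign_eligible(True, True, True))    # False
```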
[-pool | -p <integer>] - Pool
This optional parameter specifies the pool to which a disk must be assigned. It can take values of Pool0
or Pool1.
{ [-owner | -o <nodename>] - Owner Name
This optional parameter specifies the node to which the disk or array LUN is to be assigned.
[-sysid | -s <nvramid>] - New Owner ID
This optional parameter specifies the serial number (NVRAM ID) of the node to which the disk or array
LUN is to be assigned.
| [-copy-ownership-from <disk path name>]} - Disk Name to Copy Ownership
This optional parameter specifies the disk from which ownership information is to be copied. Use this
parameter to give a disk the same ownership as the specified source disk.
[-checksum | -c {block|zoned|advanced_zoned}] - Checksum Compatibility
This optional parameter is used to set the checksum type for a disk or an array LUN. The possible values
are block, zoned, and advanced_zoned. This operation will fail if the specified disk is incompatible with
the specified checksum type. A newly created aggregate with zoned checksum array LUNs is assigned
advanced zoned checksum (AZCS) checksum type. AZCS checksum type provides more functionality than
the "version 1" zoned checksum type which has been supported in previous Data ONTAP releases. Zoned
checksum spare array LUNs added to an existing zoned checksum aggregate continue to be zoned
checksum. Zoned checksum spare array LUNs added to an AZCS checksum type aggregate use the AZCS
checksum scheme for managing checksums. For some disks (e.g. FCAL, SSD, SAS disks), the checksum
type cannot be modified. For more information on modifying the checksum type, refer to the "Physical
Storage Management Guide".
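The checksum behavior for zoned spare array LUNs described above can be sketched as a small decision function. This is an assumption-laden illustration of the stated rule, not ONTAP code:

```python
def effective_checksum(spare_checksum, aggregate_checksum):
    """Checksum scheme a spare array LUN effectively uses once added to an
    aggregate, per the behavior described above (illustrative sketch only).
    """
    if spare_checksum == "zoned" and aggregate_checksum == "advanced_zoned":
        # Zoned spares added to an AZCS aggregate use the AZCS scheme.
        return "advanced_zoned"
    # Otherwise the spare keeps its own checksum type, e.g. a zoned spare
    # added to a zoned aggregate continues to be zoned.
    return spare_checksum

print(effective_checksum("zoned", "advanced_zoned"))  # advanced_zoned
print(effective_checksum("zoned", "zoned"))           # zoned
```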
[-force | -f [true]] - Force Flag
This optional parameter forces the assignment of ownership of an already owned disk to a node. This
parameter could also be used to assign an array LUN with a redundancy error, for example, if the array
LUN is available on only one path. For a disk that is part of a live aggregate, even specifying the
-force parameter does not force the assignment, since doing so would be catastrophic.
[-node | -N <nodename>] - Node Name (For Auto Assign)
This optional parameter is used with either the -auto or the -all parameter. If used with the -auto
parameter, all disks which are visible to the node specified in the -node parameter and which are
eligible for autoassignment would be assigned to it. If used with the -all parameter, all unowned disks
or array LUNs visible to the node would be assigned to it.
{ [-root [true]] - Root Partition of Root-Data or Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter assigns the root partition of a root-data/root-data1-data2 partitioned disk. You
cannot use this parameter with disks that are part of a storage pool. The default value is false.
| [-data [true]] - Data Partition of Root-Data Partitioned Disk (privilege: advanced)
This optional parameter assigns the data partition of a root-data partitioned disk. You cannot use this
parameter with disks that are part of a storage pool. The default value is false.
| [-data1 [true]] - Data1 Partition of Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter assigns the data1 partition of a root-data1-data2 partitioned disk. You cannot
use this parameter with disks that are part of a storage pool. The default value is false.
| [-data2 [true]]} - Data2 Partition of Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter assigns the data2 partition of a root-data1-data2 partitioned disk. You cannot
use this parameter with disks that are part of a storage pool. The default value is false.
● Examples
The following example assigns ownership of an unowned disk named 1.1.16 to a node named node1:
The following example assigns all unowned disks or array LUNs visible to a node named node1 to itself:
The following example autoassigns all unowned disks (eligible for autoassignment) visible to a node named
node1 to itself:
The following two examples show the working of the -force parameter with a spare disk that is already
owned by another system:
The following example assigns ownership of the set of unowned disks on <stack> 1, to a node named
node1:
The following example assigns ownership of unowned disk 1.1.16 by copying ownership from disk 1.1.18:
The following example assigns all unowned disks visible to a node named node1 by copying ownership from
disk 1.1.18:
The following example assigns the root partition of disk 1.1.16 to node1.
cluster1::> storage disk assign -disk 1.1.16 -owner node1 -root true
-force true
The following example assigns the data partition of root-data partitioned disk 1.1.16 to node1.
cluster1::> storage disk assign -disk 1.1.16 -owner node1 -data true
-force true
The following example assigns the data1 partition of root-data1-data2 partitioned disk 1.1.24 to node1.
cluster1::> storage disk assign -disk 1.1.24 -owner node1 -data1 true
-force true
The following example assigns the data2 partition of root-data1-data2 partitioned disk 1.1.24 to node1.
cluster1::> storage disk assign -disk 1.1.24 -owner node1 -data2 true
-force true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk fail command can be used to manually force a file system disk to fail. It is used to
remove a file system disk that may be logging excessive errors and requires replacement. To unfail a disk,
use the storage disk unfail command.
● Parameters
-disk <disk path name> - Disk Name
This parameter specifies the disk to be failed.
[-immediate | -i [true]] - Fail immediately
This parameter optionally specifies whether the disk is to be failed immediately. It is used to avoid Rapid
RAID Recovery and remove the disk from the RAID configuration immediately. Note that when a file
system disk has been removed in this manner, the RAID group to which the disk belongs enters
degraded mode (meaning a disk is missing from the RAID group). If a suitable spare disk is available,
the contents of the disk being removed are reconstructed onto that spare disk.
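The immediate-versus-default failure paths described above can be modeled as follows. This is an illustrative sketch of the stated behavior, not actual ONTAP logic:

```python
def fail_disk(raid_group, disk, immediate, spare_available):
    """Model of what `storage disk fail` does to a RAID group, per the
    description above (assumed simplification, not ONTAP code)."""
    if not immediate:
        # Default path: Rapid RAID Recovery copies data to a spare first.
        return "rapid-raid-recovery"
    raid_group.remove(disk)  # -immediate: disk leaves the RAID config now
    if spare_available:
        # Contents of the removed disk are reconstructed onto the spare.
        return "degraded-reconstructing"
    return "degraded"  # RAID group runs with a disk missing

rg = ["1.1.15", "1.1.16", "1.1.17"]
print(fail_disk(rg, "1.1.16", immediate=True, spare_available=True))
# degraded-reconstructing; 1.1.16 is no longer in rg
```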
● Examples
The following example fails a disk named 1.1.16 immediately:
Related reference
storage disk unfail (page 1099)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage disk reassign command is deprecated and may be removed in a future release of Data ONTAP. Disk
reassignment is no longer required as part of a controller replacement procedure. For further information,
see the latest controller or NVRAM FRU replacement flyer for your system. This command changes the
ownership of all disks on a node to the ownership of another node. Use this command only when a node has
a complete failure (for instance, a motherboard failure) and is replaced by another node. If the node's disks
have already been taken over by its storage failover partner, use the -force parameter.
● Parameters
-homeid | -s <nvramid> - Current Home ID
This specifies the serial number of the failed node.
-newhomeid | -d <nvramid> - New Home ID
This specifies the serial number of the node that is to take ownership of the failed node's disks.
[-force | -f [true]] - Force
This optionally specifies whether to force the reassignment operation. The default setting is false.
● Examples
In the following example, a node named node0 and having serial number 12345678 has failed. Its disks
have not been taken over by its storage failover partner. A replacement node with serial number 23456789
was installed and connected to node0's disk shelves. To assign node0's disks to the new node, start the new
node and run the following command:
In the following example, a similar failure has occurred, except that node0's disks have been taken over by
its storage failover partner, node1. A new node with serial number 23456789 has been installed and
configured. To assign the disks that previously belonged to node0 to this new node, run the following
command:
cluster::*> storage disk reassign -homeid 12345678 -newhomeid 23456789 -force true
node0's disks 1.1.11, 1.1.12, 1.1.13, 1.1.14, 1.1.15, 1.1.16, 1.1.23 and 1.1.24
were reassigned to new owner with serial number 23456789.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command updates the disk ownership information for all the disks attached to a node to the latest view
for all the nodes in the cluster. During normal operations, disk ownership is kept up to date automatically. In
certain circumstances, however, disk ownership must be updated manually. If this is required, EMS
messages will indicate that this command should be run. If the -node parameter is provided, the disk
ownership information is updated only on the node specified.
● Parameters
[-node {<nodename>|local}] - Node
If this parameter is provided, the disk ownership information is updated only on the node specified.
● Examples
The following example refreshes the disk ownership information for all the nodes in the cluster:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk remove command removes the specified spare disk from the RAID configuration,
spinning the disk down when removal is complete.
This command does not remove disk ownership information from the disk. Therefore, if you plan to reuse
the disk in a different storage system, you should use the storage disk removeowner command instead.
See the "Physical Storage Management Guide" for the complete procedure.
NOTE: For systems with multi-disk carriers, it is important to ensure that none of the disks in the carrier are
filesystem disks before attempting removal. To convert a filesystem disk to a spare disk, see storage disk
replace .
● Parameters
-disk <disk path name> - Disk Name
This parameter specifies the disk to be removed.
● Examples
The following example removes a spare disk named 1.1.16:
Related reference
storage disk removeowner (page 1078)
storage disk replace (page 1079)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage disk remove-reservation command removes persistent reservation from a specified
foreign array LUN.
● Parameters
-disk <disk path name> - Disk Name
This specifies the disk from which persistent reservation is to be removed.
● Examples
The following example removes the persistent reservation from a disk named node1:switch01:port.126L1.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk removeowner command removes ownership from a specified disk. The disk can then
be reassigned to a new owner.
● Parameters
-disk <disk path name> - Disk Name
This specifies the disk whose ownership is to be removed.
{ [-root [true]] - Root Partition of Root-Data/Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the root partition of a root-data/root-data1-data2
partitioned disk. You cannot use this parameter with disks that are part of a storage pool. The default
value is false.
| [-data [true]] - Data Partition of Root-Data Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the data partition of a root-data partitioned disk. You
cannot use this parameter with a root-data1-data2 partitioned disk or disks that are part of a storage
pool. The default value is false.
| [-data1 [true]] - Data1 Partition of a Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the data1 partition of a root-data1-data2 partitioned
disk. You cannot use this parameter with a root-data partitioned disk or disks that are part of a storage
pool. The default value is false.
| [-data2 [true]]} - Data2 Partition of a Root-Data1-Data2 Partitioned Disk (privilege: advanced)
This optional parameter removes ownership of the data2 partition of a root-data1-data2 partitioned
disk. You cannot use this parameter with a root-data partitioned disk or disks that are part of a storage
pool. The default value is false.
● Examples
The following example removes the ownership from a disk named 1.1.27.
The following example removes ownership of the root partition on disk 1.1.16.
The following example removes ownership of the data partition on disk 1.1.16.
The following example removes ownership of the data1 partition on disk 1.1.23.
The following example removes ownership of the data2 partition on disk 1.1.23.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk replace command starts or stops the replacement of a file system disk with a spare
disk. When you start a replacement, Rapid RAID Recovery begins copying data from the specified file system
disk to a spare disk. When the process is complete, the spare disk becomes the active file system disk and
the file system disk becomes a spare disk. If you stop a replacement, the data copy is halted, and the file
system disk and spare disk retain their initial roles.
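The role swap described above can be sketched as a state change on a pair of disks. This is a simplified illustration (the copy is treated as completing instantly), not ONTAP code:

```python
def disk_replace(roles, fs_disk, spare_disk, action):
    """Sketch of role changes driven by `storage disk replace`.

    `roles` maps disk name -> "filesystem" or "spare". Assumed model of the
    behavior described above; the actual copy is asynchronous in ONTAP.
    """
    if action == "start":
        # On completion of the copy, the spare becomes the active file
        # system disk and the file system disk becomes a spare.
        roles[fs_disk], roles[spare_disk] = "spare", "filesystem"
    elif action == "stop":
        # Copy halts; both disks keep their initial roles.
        pass
    return roles

r = {"1.0.16": "filesystem", "1.1.14": "spare"}
disk_replace(r, "1.0.16", "1.1.14", "start")
print(r)  # {'1.0.16': 'spare', '1.1.14': 'filesystem'}
```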
● Parameters
-disk <disk path name> - Disk Name
This specifies the file system disk that is to be replaced. Disk names take one of the following forms:
• Disks are named in the form <stack-id>.<shelf>.<bay>
• Disks on multi-disk carriers are named in the form <stack-id>.<shelf>.<bay>.<lun>
• Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and
number is a unique ascending number.
Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP
8.3:
• Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For
disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number 16
on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is
named node0a:1a.16Llun0.
• Disks that are attached to a switch are named in the form
<node>:<switch_name>:<switch_port>.<loop_ID>. For disks with a LUN, the form is
<node>:<switch_name>:<switch_port>.<loop_ID>L<LUN>. For instance, disk number 08 on port 11 of
switch fc1 on a node named node0a is named node0a:fc1:11.08. The same disk on LUN lun1 is named
node0a:fc1:11.08Llun1.
Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names,
depending on how the disk is connected. For example, a disk known to a node named alpha as alpha:
1a.19 can be known to a node named beta as beta:0b.37. All names are listed in the output of queries
and are equally valid. To determine a disk's unique identity, run a detailed query and look for the disk's
universal unique identifier (UUID) or serial number.
-action {start | stop} - Action
This specifies whether to start or stop the replacement process.
[-replacement <disk path name>] - Replacement
This specifies the spare disk that is to replace the file system disk.
[-allow-same-carrier [true]] - Allow Same RAID Group Within Carrier
This parameter can be used to allow two disks housed in the same carrier to be in the same RAID group
when you replace a disk in an aggregate.
Having disks in the same carrier in the same RAID group is not desirable because a carrier failure can
cause a simultaneous outage for two disks in the same RAID group. You can replace a disk in an
aggregate with a disk that causes this situation, but when an alternate disk becomes available, Data
ONTAP automatically initiates a series of disk copy operations to put the disks into different RAID groups.
For this reason, you should use this parameter only when necessary. When possible, ensure that disks
housed in the same carrier are in different RAID groups.
This parameter affects only the disk replace operation. It is not a persistent attribute of the aggregate.
[-allow-mixing | -m [true]] - Allow Mixing of Disks of Different RPM or Pool
This optional parameter specifies whether the disk can be replaced with another disk of different RPM or
from different Pool. This parameter affects only the current disk replacement operation.
● Examples
The following example begins replacing a file system disk named 1.0.16 with a spare disk named 1.1.14.
cluster1::> storage disk replace -disk 1.0.16 -replacement 1.1.14 -action start
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk set-foreign-lun command sets or unsets a specified array LUN as foreign. This
command enables or disables importing data from the foreign LUN.
● Parameters
-disk <disk path name> - Disk Name
This parameter specifies the array LUN which is to be set or unset as foreign.
-is-foreign-lun [true] - Is Foreign LUN
If the parameter value is true, the array LUN is set as foreign. If the parameter value is false, the
array LUN's foreign designation is cleared.
● Examples
The following example shows how to set an array LUN as foreign:
The following example shows how to mark an array LUN as not foreign:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk set-led command controls the LED of a specified disk.
You can turn an LED on or off, cause it to blink or stop blinking, or test it.
This command is useful for locating a disk in its shelf.
● Parameters
-action {on|off|blink|blinkoff|testall|resetall} - Action
This parameter specifies the state to which the LED is to be set. Possible values include the following:
• on - The LED is lit steadily
• off - The LED is not lit
• blink - The LED blinks
• blinkoff - The LED stops blinking and is not lit
• testall - This tests the operation of every disk enclosure's hardware and drivers per node. Do not use
this value in normal operation.
• resetall - This resets the LED of every disk on the node and lights up the LED of disks with faults.
{ [-disk <disk path name>] - Disk Name
This specifies the disk whose LED is to be set. Disk names take one of the following forms:
• Disks are named in the form <stack-id>.<shelf>.<bay>
• Disks on multi-disk carriers are named in the form <stack-id>.<shelf>.<bay>.<lun>
• Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and
number is a unique ascending number.
Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP
8.3:
• Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For
disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number 16
on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is
named node0a:1a.16Llun0.
• Disks that are attached to a switch are named in the form
<node>:<switch_name>:<switch_port>.<loop_ID>. For disks with a LUN, the form is
<node>:<switch_name>:<switch_port>.<loop_ID>L<LUN>. For instance, disk number 08 on port 11 of
switch fc1 on a node named node0a is named node0a:fc1:11.08. The same disk on LUN lun1 is named
node0a:fc1:11.08Llun1.
Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names,
depending on how the disk is connected. For example, a disk known to a node named alpha as alpha:
1a.19 can be known to a node named beta as beta:0b.37. All names are listed in the output of queries
and are equally valid. To determine a disk's unique identity, run a detailed query and look for the disk's
universal unique identifier (UUID) or serial number.
| [-adapter <text>] - Adapter Name
The name of the adapter to which the shelves of the disks of interest are attached.
[-node {<nodename>|local}]} - Node Name
The node for which the action is to be taken.
[-duration <integer>] - Duration (minutes)
This specifies the duration, in minutes, that the LED is to remain in the specified state. This parameter
is supported only for the "on" and "blink" actions.
[-iteration <integer>] - Test iterations
This specifies the number of iterations for which to run the action. This parameter is supported only for the "testall" action.
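The option constraints above can be expressed as a small validation function. This is an illustrative sketch of the stated rules, not the CLI's own checks:

```python
def validate_set_led(action, duration=None, iteration=None):
    """Validate option combinations for `storage disk set-led`, per the
    constraints stated above (assumed model, not ONTAP code)."""
    valid_actions = {"on", "off", "blink", "blinkoff", "testall", "resetall"}
    if action not in valid_actions:
        raise ValueError(f"unknown action: {action}")
    if duration is not None and action not in ("on", "blink"):
        raise ValueError("-duration applies only to the on and blink actions")
    if iteration is not None and action != "testall":
        raise ValueError("-iteration applies only to the testall action")
    return True

print(validate_set_led("blink", duration=5))  # True
```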
● Examples
The following example causes the LEDs on all disks whose names match the pattern Cluster1* to turn on for
5 minutes:
The following example causes the LEDs on all disks attached to adapter 0b on Node2 to turn on for 1
minute:
The following example resets the LEDs on all disks on the local node and causes the LEDs of disks with faults
to turn on:
The following example causes the LEDs on all disks whose names match the pattern Cluster1* to turn on for
2 minutes:
The following example tests the LEDs on all disks owned by the local node for 3 iterations:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk show command displays information about disks and array LUNs. Where it appears in
the remainder of this document, "disk" may refer to either a disk or an array LUN. By default, the command
displays the following information about all disks in column style output:
• Disk name
• Usable space on the disk, in human readable units
• Shelf number
• Bay number
• Container type (aggregate, broken, foreign, labelmaint, maintenance, mediator, remote, shared, spare,
unassigned, unknown, volume, or unsupported)
• Position (copy, data, dparity, orphan, parity, pending, present, shared or tparity)
• Container name
• Owning node name
To display detailed information about a single disk, use the -disk parameter.
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all disks, in column style output.
| [-broken ]
Displays the following RAID-related information about broken disks:
• Original owning node name
• Checksum compatibility
• Disk name
• Outage reason
• Host bus adapter
• Shelf number
• Bay number
• Primary port / Channel
• Pool
• Disk type
• RPM (Revolutions per minute)
• Usable size in human readable units
• Physical size in human readable units
• Current owner node
| [-errors ]
Displays the following disk information about the disks which have errors.
• Disk Name
• Error Type
• Error Description and corresponding corrective action
| [-longop ]
Displays the following information about long-running disk operations, in column style output:
• Disk name
• Whether the disk is marked as prefailed
• Whether the disk is being replaced
• Whether the disk is zeroed
• Copy destination
• Percentage of copy complete
• Percentage of zeroing complete
• Percentage of reconstruction complete
| [-maintenance ]
Displays the following RAID-related information about disks in the maintenance center:
• Original owning node name
• Checksum compatibility
• Disk name
• Outage Reason
• Host bus adapter
• Shelf number
• Bay number
• Primary port / Channel
• Pool
• Disk type
• RPM (Revolutions per minute)
• Usable size in human readable units
• Physical size in human readable units
• Current owner node
| [-ownership ]
Displays the following ownership-related information:
• Disk name
• Aggregate name
• Home node name
• Owning node name
• Disaster recovery home node name
• Home node system id
• Owning node system id
• Disaster recovery home node system id
• Reservation node system id
• SyncMirror pool
| [-partition-ownership ]
Displays the following ownership-related information for partitioned disks:
• Disk name
• Aggregate name
• Owner of root partition on a partitioned disk
• Owner system id of root partition on a partitioned disk
• Owner of data or data1 partition on a root-data or a root-data1-data2 partitioned disk respectively
• Owner system id of data or data1 partition on a root-data or a root-data1-data2 partitioned disk
respectively
• Owner of data2 partition on a root-data1-data2 partitioned disk
• Owner system id of data2 partition on a root-data1-data2 partitioned disk
• Owner of the disk which is partitioned
• Owner system id of the disk which is partitioned
| [-physical ]
Displays the following information about the disk's physical attributes, in column style output:
• Disk name
• Disk type
• Disk vendor
• Disk model
• Firmware revision level
• RPM (Revolutions per minute)
• BPS (Bytes per sector)
| [-port ]
Displays the following path-related information:
• Disk name and disk port associated with disk primary path
• Disk name and disk port associated with the disk secondary path, for a multipath configuration
• Type, shelf, and bay information for the disks
| [-raid ]
Displays the following RAID-related information:
• Disk name
• Container type (aggregate, broken, labelmaint, maintenance, mediator, remote, shared, spare,
unassigned, unknown, or volume)
• Outage reason
• Position (copy, data, dparity, orphan, parity, pending, present, shared or tparity)
• RAID group name
• Aggregate name
| [-raid-info-for-aggregate ]
Displays the following RAID-related information about the disks used in an aggregate:
• Owning node name
• Aggregate name
• Plex name
• RAID group name
• Position (copy, data, dparity, orphan, parity, pending, present, shared or tparity)
• Disk name
• Host bus adapter
• Shelf number
• Bay number
• Primary port / Channel
• Pool
• Disk type
• RPM (Revolutions per minute)
• Usable size in human readable units
• Physical size in human readable units
When this parameter is specified, RAID groups that use shared disks are not included. Use storage
aggregate show-status to show information for all RAID groups and aggregates.
| [-spare ]
Displays the following RAID-related information about available spare disks:
• Original owning node name
• Checksum compatibility
• Disk name
• Host bus adapter
• Shelf number
• Bay number
• Primary port / Channel
• Pool
• Disk type
• Disk class
• RPM (Revolutions per minute)
• Usable size in human readable units
• Physical size in human readable units
• Current owner node
| [-ssd-wear ]
Displays the following wear life related information about solid state disks:
• Rated Life Used : An estimate of the percentage of device life that has been used, based on the actual
device usage and the manufacturer's prediction of device life. A value greater than 99 indicates that
the estimated endurance has been used, but this does not necessarily indicate a device failure.
Omitted if value is unknown.
• Spare Blocks Consumed Limit : Spare blocks consumed percentage limit reported by the device. When
the Spare Blocks Consumed percentage for the device reaches this read-only value, Data ONTAP
initiates a disk copy operation to prepare to remove the device from service. Omitted if value is
unknown.
• Spare Blocks Consumed : Percentage of device spare blocks that have been used. Each device has a
number of spare blocks that will be used when a data block can no longer be used to store data. This
value reports what percentage of the spares have already been consumed. Omitted if value is
unknown.
| [-virtual-machine-disk-info ]
Displays information about Data ONTAP virtual disks, their mapped datastores and their specific backing
device attributes, such as: disk or LUN, adapter and initiator details (if applicable).
• Disk name.
Selects information about array LUNs that have been declared to be foreign LUNs.
[-uid <text>] - Disk Unique ID
Selects the disks whose unique id matches this parameter value. A disk unique identifier has the form:
20000000:875D4C32:00000000:00000000:00000000:00000000:00000000:00000000:0000000
0:00000000
[-aggregate <aggregate name>] - Aggregate
Selects information about disks that belong to the specified aggregate.
[-array-name <array name>] - Array Name
Selects information about the LUNs presented by the specified storage array.
[-average-latency <integer>] - Average I/O Latency Across All Active Paths
Selects information about disks that have the specified average latency.
[-bay <integer>] - Bay
Selects information about disks that are located in the carrier within the specified shelf bay.
[-bps <integer>] - Bytes Per Sector
Selects information about disks that have the specified number of bytes per sector. Possible settings are
512, 520, 4096, and 4160.
[-carrier-id <text>] - Carrier ID
Selects information about disks that are located within the specified multi-disk carrier.
[-checksum-compatibility {advanced_zoned | block | none}] - Checksum Compatibility
Selects information about disks that have the specified checksum compatibility.
[-class {capacity | performance | archive | solid-state | array | virtual}] - Disk Class
Selects information about disks that have the specified disk class.
• capacity = Capacity-oriented, near-line disk types. Includes disk types FSAS, BSAS and ATA.
• performance = Performance-oriented, enterprise class disk types. Includes disk types FCAL and SAS.
• archive = Archive class SATA disks in multi-disk carrier storage shelves. Includes disk type MSATA.
• solid-state = Solid-state drives. Includes disk type SSD, SSD-CAP and SSD-NVM.
• array = Logical storage devices backed by storage arrays and used by Data ONTAP as disks. Includes
disk type LUN.
• virtual = Virtual disks that are formatted and managed by the hypervisor. Includes disk type VMDISK.
[-container-type {aggregate | broken | foreign | labelmaint | maintenance | mediator | remote |
shared | spare | unassigned | unknown | unsupported}] - Container Type
Selects information about disks that have the specified container type.
• Aggregate = Disk is used as a physical disk in an aggregate.
• Broken = Disk is in broken pool.
• Foreign = Array LUN has been marked foreign.
• Labelmaint = Disk is in online label maintenance list.
• Maintenance = Disk is in maintenance center.
• Mediator = A mediator disk is a disk on non-shared HA systems that is hosted by an external node and
is used to communicate the viability of storage failover between the non-shared HA nodes.
• Remote = Disk belongs to the remote cluster.
• Shared = Disk is partitioned or in a storage pool.
• Spare = Disk is a spare disk.
• Unassigned = Disk ownership has not been assigned.
• Unknown = Container is currently unknown. This is the default setting.
Selects information about disks that have the specified data1 partition home owner system ID. Used
with root-data1-data2 partitioned disks.
[-data2-home-id <nvramid>] - Home Owner System ID of Data2 Partition of Root-Data1-Data2
Partitioned Disk
Selects information about disks that have the specified data2 partition home owner system ID. Used
with root-data1-data2 partitioned disks.
[-disk-io-kbps-total <integer>] - Total Disk Throughput in KBPS Across All Active Paths
Selects information about disks that have attained the specified I/O throughput on all connected paths.
[-disk-iops-total <integer>] - Total Disk IOPs Across All Active Paths
Selects information about disks that have achieved the specified number of I/O operations per second on
all connected paths.
[-diskpathnames <disk path name>, ...] - List of Path-Based Disk Names
Selects information about disks that have all of the specified path names.
[-effective-rpm <integer>] - Effective RPM
Selects information about disks with the specified effective rotational speed.
[-dr-home {<nodename>|local}] - Disaster Recovery Home
Selects information about disks that have the specified Disaster home node.
[-dr-home-id <nvramid>] - Disaster Recovery Home System ID
Selects information about disks whose Disaster home node has the specified system id.
[-drawer <integer>] - Drawer
Selects information about disks that are located in the specified drawer.
[-error-type {onepath|onedomain|control|foreign|toobig|toosmall|invalidblocksize|targetasymmap|
deviceassymmap|failovermisconfig|unknown|netapp|fwdownrev|qualfail|diskfail|notallflashdisk}, ...] -
Error Type
Selects information about disks that have the specified error types.
• onepath = The array LUN is accessible only via a single path.
• onedomain = The array LUN is accessible only via a single fault domain.
• control = The array LUN cannot be used because it is a control device.
• foreign = The array LUN is marked as foreign and has some external SCSI reservations other than those
from Data ONTAP.
• toobig = The array LUN exceeds the maximum array LUN size that Data ONTAP supports.
• toosmall = The array LUN is less than the minimum array LUN size that Data ONTAP supports.
• invalidblocksize = The array LUN does not have a valid block size.
• targetasymmap = The array LUN is presented more than once on a single target port.
• deviceassymmap = The array LUN is presented with multiple IDs.
• failovermisconfig = The array LUN is configured with inconsistent failover methods.
• unknown = The array LUN is from a storage array that is not supported by this version of Data ONTAP.
• netapp = A SAN front-end LUN from one Data ONTAP system that is presented as external storage to
another Data ONTAP system.
• fwdownrev = The disk firmware is a down version.
• qualfail = The disk is not supported.
• diskfail = The disk is in a failed state.
• notallflashdisk = The disk does not match the All-Flash Optimized personality of the system.
[-firmware-revision <text>] - Firmware Revision
Selects information about disks that have the specified firmware revision level.
Selects information about disks that are not in service for the specified reason. Possible values are:
admin failed, admin removed, admin testing, evacuated, bad label, bypassed, failed, init failed, label
version, labeled broken, labelmaint, LUN resized, missing, not responding, predict failure, rawsize
shrank, recovering, sanitizing, sanitized, SnapLock Disk, testing, unassigned, unknown.
[-path-error-count <integer>] - Path Error Count
Selects information about disks that are visible on a path that has incurred the specified number of
errors.
[-path-iops <integer>, ...] - Number of IOPS on Path (Rolling Average)
Selects information about disks on those paths that have reached the specified number of IOPs.
[-path-io-kbps <integer>, ...] - Kbytes of I/O per second on Path (Rolling Average)
Selects information about disks with paths that have reached the specified I/O throughput.
[-path-link-errors <integer>, ...] - Link Error count on path
Selects information about disks with paths that have incurred the specified number of FC link errors.
[-path-lun-in-use-count <integer>, ...] - Number of LUNs in the in-use state on this path
Selects information about disks with paths that have the specified in-use-count.
[-path-quality <integer>, ...] - Percentage of weighted error threshold
Selects information about disks on paths that have incurred the specified number of errors. The value
displayed is a measure of the health of a path expressed as a percentage of an error threshold. Once a
path has reached or surpassed the error threshold, another path will be selected for I/O transfer, if there
is one available.
[-physical-size-mb <integer>] - Physical Size (MB)
Selects information about disks that have the specified physical capacity, in megabytes.
[-physical-size {<integer>[KB|MB|GB|TB|PB]}] - Physical Size
Selects information about disks that have the specified physical capacity, in human readable units.
[-physical-size-512b <integer>] - Physical Size in Units of 512 Bytes
Selects information about disks that have the specified physical capacity, in 512-byte chunks. This
parameter is present only for backwards compatibility with Data ONTAP 8.0.
[-plex <text>] - Plex Name
Selects information about disks that belong to the specified RAID plex.
[-pool <text>] - Assigned Pool
Selects information about disks that belong to the specified SyncMirror pool (pool0 or pool1).
[-port-speed <text>, ...] - Port Speed
Selects information about disks that are served by a Host Bus Adapter that is running at the specified
port speed.
[-position <diskPositionType>] - Disk Position
Selects information about disks that have the specified position within their disk container.
[-power-on-hours <integer>] - Hours Powered On
Selects information about disks that have been powered on for the specified number of hours.
[-prefailed {true|false}] - Marked for Rapid RAID Recovery?
Selects information about disks that match the specified parameter value indicating whether the disk is
either awaiting or is in process of Rapid RAID Recovery.
[-preferred-target-port {true|false}, ...] - Whether or not target port group is preferred (privilege:
advanced)
Selects information about disks that match the specified parameter value indicating whether the
backing storage is ALUA (Asymmetric Logical Unit Access) capable and has specified the array target
port on this path to be a preferred target port for I/O.
[-primary-port <text>] - Primary Path Disk Port
Selects information about disks that use the specified primary port.
Selects information about disks that are located in a drawer with the specified slot.
[-stack-id <integer>] - Stack ID
Selects information about disks on the stack with the specified stack ID, a cluster-unique ID for a
collection of one or more interconnected shelves.
[-target-iops <integer>, ...] - Number of IOPS to Target (Rolling Average)
Selects information about disks that are visible on target ports that have performed the specified
number of IOPs.
[-target-io-kbps <integer>, ...] - Kbytes of I/O per second to Target (Rolling Average)
Selects information about disks that are visible on target ports that have reached the specified I/O
throughput.
[-target-lun-in-use-count <integer>, ...] - Number of LUNs in the in-use state on this target
Selects information about disks with a path through a target port that has the specified in-use-count.
[-target-port-access-state <text>, ...] - Failover optimization type
Selects information about disks that are visible on target ports that have the specified access state.
[-target-side-switch-port <text>, ...] - Target Side Switch Port
Selects information about disks that are visible on target ports identified by the switch port to which
they are connected.
[-target-wwpn <text>, ...] - Target Port
Selects information about disks that are visible on target ports identified by their World Wide Port Name.
[-tpgn <integer>, ...] - Target Port Group Number
Selects information about disks that belong to the specified Target Port Group Number.
[-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM | SSD-CAP | VMLUN |
VMLUN-SSD}] - Disk Type
Selects information about disks that have the specified disk type.
[-usable-size-mb <integer>] - Usable Size (MB)
Selects information about disks that have the specified usable space, in megabytes.
[-usable-size {<integer>[KB|MB|GB|TB|PB]}] - Usable Size
Selects information about disks that have the specified usable space, in human readable units.
[-vendor <text>] - Vendor Name
Selects information about disks that have the specified vendor.
[-vmdisk-device-id <integer>, ...] - Virtual Disk Device ID
Selects information about disks that have the specified virtual disk device ID.
[-zeroed {true|false}] - Zeroed?
Selects information about disks that have (true) or have not (false) been fully pre-zeroed.
[-zeroing-percent <integer>] - Percentage of Zeroing Complete
Selects information about disks that are zeroing and have the specified percentage complete.
[-carrier-serialno <text>] - Carrier Serial Number
Selects information about disks that are located within the multi-disk carrier specified by the serial
number.
[-vmdisk-target-address <text>] - Target Address of the VM Disk
Displays the VM Disk's target address either in the form of bus target lun or bus unit.
[-path-count <integer>] - Number of Paths to Disk
Selects information about disks that have the specified number of paths.
● Examples
The following example displays information about all disks:
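A sketch of the corresponding invocation follows; the cluster prompt and the output columns are
illustrative, not captured from a real system:

```
cluster1::> storage disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
1.0.0                 1.9TB     0   0 SSD     aggregate   aggr0     node1
1.0.1                 1.9TB     0   1 SSD     spare       Pool0     node1
[...]
```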
The following example displays detailed information about a disk named 1.0.75
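The detailed single-disk view can be requested with the -disk parameter documented earlier in this
section (a sketch; the full detailed output is not reproduced here):

```
cluster1::> storage disk show -disk 1.0.75
```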
The following example displays RAID-related information about disks used in an aggregate:
Usable Physical
Disk HA Shelf Bay Chan Pool Type RPM Size Size Owner
--------------- ------------ ---- ------ ----- ------ -------- -------- --------
1.1.23 0b 1 23 A Pool0 FCAL 10000 132.8GB 134.2GB node1
1.1.25 0b 1 25 A Pool0 FCAL 10000 132.8GB 133.9GB node1
1.1.26 0b 1 26 A Pool1 FCAL 10000 132.8GB 133.9GB node1
1.1.27 0b 1 27 A Pool1 FCAL 10000 132.8GB 134.2GB node1
Home Owner: node2
Checksum Compatibility: block
Usable Physical
Disk HA Shelf Bay Chan Pool Type RPM Size Size Owner
--------------- ------------ ---- ------ ----- ------ -------- -------- --------
1.1.19 0a 1 19 B Pool1 FCAL 10000 132.8GB 133.9GB node2
1.1.20 0a 1 20 B Pool0 FCAL 10000 132.8GB 133.9GB node2
1.1.21 0a 1 21 B Pool0 FCAL 10000 132.8GB 133.9GB node2
[...]
The following example displays RAID-related information about disks in maintenance center:
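Because disks in the maintenance center are reported with the container type maintenance (see the
-container-type parameter above), such a report might be requested as in this sketch:

```
cluster1::> storage disk show -container-type maintenance
```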
Related reference
storage aggregate show-status (page 1016)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage disk unfail command can be used to unfail a broken disk.
If the attempt to unfail the disk is unsuccessful, the disk remains in the broken state.
The disk unfail command prompts for confirmation unless you specify the "-quiet" parameter.
● Parameters
-disk <disk path name> - Disk Name
This parameter specifies the disk to be unfailed.
[-spare | -s [true]] - Make the Disk Spare
This parameter specifies whether the unfailed disk will be made a spare disk. The disk is forced to
become a spare disk if this parameter is specified.
If this parameter is not specified, the disk is brought back into its parent aggregate. Setting this
parameter might result in the aggregate coming back online if it is not complete or online. The default
value is false.
[-quiet | -q [true]] - Confirmations off
You can set this parameter to true to suppress the confirmation messages. However, before proceeding
with the command, you should be aware that the confirmation message contains important information
about the effect of unfailing a disk. This command cannot be reversed once it is invoked. The default
value is false.
● Examples
The following example unfails a disk named 1.1.16 to become a spare disk:
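Using the parameters described above, the invocation might look like the following sketch (the
confirmation prompt that normally appears is omitted; it can be suppressed with -quiet):

```
cluster1::> storage disk unfail -disk 1.1.16 -spare true
```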
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release of Data ONTAP. Use the
storage disk firmware update command instead.
The storage disk updatefirmware command updates the firmware on one or more disks. You can
download the latest firmware by using the storage firmware download command. You can specify a list
of one or more disks whose firmware is to be updated by using the -disk parameter, or you can update the
firmware on all local disks by omitting the -disk parameter.
● Parameters
[-disk <disk path name>, ...] - Disk
This specifies the disk or disks whose firmware is to be updated.
If you do not specify this option, all local disks' firmware is updated.
● Examples
The following example updates the firmware on all disks:
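A sketch of the two invocation styles described above (the disk names are illustrative):

```
cluster1::> storage disk updatefirmware
cluster1::> storage disk updatefirmware -disk 1.1.20,1.1.21
```

Because this command is deprecated, the storage disk firmware update command is the preferred
alternative.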
Related reference
storage disk firmware update (page 1105)
storage firmware download (page 1158)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk zerospares command zeroes all non-zeroed spare disks in all nodes or a specified
node in the cluster. A node must be online to zero disks. This operation must be done before a disk can be
reused in another aggregate. This version of ONTAP uses fast zeroing, which converts a spare disk from non-
zeroed to zeroed without the long wait times required when physically zeroing a disk.
● Parameters
[-owner {<nodename>|local}] - Owner
If this parameter is specified, only non-zeroed spares assigned to the specified node will be zeroed.
Otherwise, all non-zeroed spares in the cluster will be zeroed.
● Examples
The following example zeroes all non-zeroed spares owned by a node named node4, using fast zeroing:
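A sketch of the invocation, using the -owner parameter described above:

```
cluster1::> storage disk zerospares -owner node4
```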
The following example zeroes all non-zeroed spares owned by a node named node2 by physically writing
zeros to the entire disk:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk error show command displays disk component and array LUN configuration errors.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-uid <text>] - UID
Displays the error information of the disk whose unique ID matches the value you specify. A disk unique
identifier has the form:
20000000:875D4C32:00000000:00000000:00000000:00000000:00000000:00000000:0000000
0:00000000
[-array-name <array name>] - Array Name
Displays the errors of the storage array whose name you specified.
[-node {<nodename>|local}] - Controller Name
Displays the error information for the disks on the clustered node whose name you specified.
[-disk <disk path name>] - Disk
Displays detailed error information about the disk you specified.
[-serial-number <text>] - Serial Number
Displays the error information for the disk whose serial number you specified.
[-error-id <integer>, ...] - Error ID
Displays the error information for the disks whose Error IDs match IDs you specified.
[-error-type {onepath|onedomain|control|foreign|toobig|toosmall|invalidblocksize|targetasymmap|
deviceassymmap|failovermisconfig|unknown|netapp|fwdownrev|qualfail|diskfail|notallflashdisk}, ...] -
Error Type
Displays all disk errors of the error types you specified, grouped by type.
• onepath = The array LUN is accessible only via a single path.
• onedomain = The array LUN is accessible only via a single fault domain.
• control = The array LUN cannot be used because it is a control device.
• foreign = The array LUN is marked as foreign and has some external SCSI reservations other than those
from Data ONTAP.
• toobig = The array LUN exceeds the maximum array LUN size that Data ONTAP supports.
• toosmall = The array LUN is less than the minimum array LUN size that Data ONTAP supports.
• invalidblocksize = The array LUN does not have a valid block size.
• targetasymmap = The array LUN is presented more than once on a single target port.
• deviceassymmap = The array LUN is presented with multiple IDs.
• failovermisconfig = The array LUN is configured with inconsistent failover methods.
• unknown = The array LUN is from a storage array that is not supported by this version of Data ONTAP.
• netapp = A SAN front-end LUN from one Data ONTAP system that is presented as external storage to
another Data ONTAP system.
• fwdownrev = The disk firmware is a down version.
• qualfail = The disk is not supported.
• diskfail = The disk is in a failed state.
• notallflashdisk = The disk does not match the All-Flash Optimized personality of the system.
● Examples
The following example displays configuration errors seen in the system:
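A sketch of how the errors might be listed, optionally filtered with the -error-type parameter
described above:

```
cluster1::> storage disk error show
cluster1::> storage disk error show -error-type diskfail
```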
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage disk firmware revert command reverts firmware on all disks or a specified list of disks on
a node.
You can specify a list of one or more disks whose firmware is to be reverted by using the -disk parameter.
You can revert the firmware on all the disks owned by a node by using the -node parameter.
This command can make the disks inaccessible for up to five minutes after the start of its execution.
Therefore, the network sessions that use the concerned node must be terminated before running the
storage disk firmware revert command. This is particularly true for CIFS sessions that might be
terminated when this command is executed.
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision
command. The following example displays partial output from the storage disk show -fields
firmware-revision command, where the firmware version for the disks is NA02:
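The partial output referred to above is not reproduced in this excerpt; it might look like the
following sketch (the disk names are illustrative, the NA02 revision is the value from the text):

```
cluster1::> storage disk show -fields firmware-revision
disk     firmware-revision
-------- -----------------
1.0.0    NA02
1.0.1    NA02
[...]
```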
The firmware files are stored in the /mroot/etc/disk_fw directory on the node. The firmware file name is in
the form of "product-ID.revision.LOD". For example, if the firmware file is for Seagate disks with product ID
X225_ST336704FC and the firmware version is NA01, the file name is X225_ST336704FC.NA01.LOD. If the
node in this example contains disks with firmware version NA02, the /mroot/etc/disk_fw/
X225_ST336704FC.NA01.LOD file is downloaded to every disk when you execute this command.
How to Revert the Firmware for an HA Pair in a Cluster
Use the following procedure to perform a revert on the disks in an HA environment:
• Make sure that the nodes are not in takeover or giveback mode.
• Download the latest firmware on both nodes by using the storage firmware download command.
• Revert the disk firmware on Node A's disks by entering the storage disk firmware revert -node
node-A command.
• Wait until the storage disk firmware revert command completes on Node A, and then revert the
firmware on Node B's disks by entering the storage disk firmware revert -node node-B command.
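The procedure above can be sketched as the following command sequence (the node names are from the
text; the storage firmware download arguments are elided because they depend on where the firmware
package is hosted):

```
cluster1::> storage firmware download ...               (run on both nodes)
cluster1::> storage disk firmware revert -node node-A
cluster1::> storage disk firmware revert -node node-B   (only after node-A completes)
```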
● Parameters
{ -disk <disk path name>, ... - Disk Name
Specifies the disk or disks whose firmware is to be reverted.
| -node {<nodename>|local}} - Node Name
Specifies the node name. The disk firmware will be reverted on all the disks owned by the node specified
by this parameter.
● Examples
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision
command. The following example displays partial output from the storage disk show -fields
firmware-revision command, where the firmware version for the disks is NA02:
The following example reverts the firmware on all disks owned by cluster-node-01:
Warning: Disk firmware reverts can be disruptive to the system. Reverts involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being reverted. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware revert process is complete.
Disk firmware reverts should only be done one node at a time. Disk
firmware reverts can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
The following example reverts the firmware on disk 1.5.0 which is owned by node cluster-node-04:
Warning: Disk firmware reverts can be disruptive to the system. Reverts involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being reverted. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware revert process is complete.
Disk firmware reverts should only be done one node at a time. Disk
firmware reverts can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
Related reference
storage disk show (page 1083)
storage firmware download (page 1158)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage disk firmware show-update-status command displays the state of the background disk
firmware update process.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Use the storage disk firmware update command to manually update firmware on all disks or a
specified list of disks on a node. However, the recommended way to update disk firmware in a cluster is to
enable automatic background firmware update by enabling the -bkg-firmware-update parameter for all
of the nodes in the cluster. You can do this by entering the storage disk option modify -node * -
bkg-firmware-update on command.
You can download the latest firmware on the node by using the storage firmware download command.
You can specify a list of one or more disks whose firmware is to be updated by using the -disk parameter.
You can update the firmware on all the disks owned by a node by using the -node parameter.
This command can make the disks inaccessible for up to five minutes after the start of its execution.
Therefore, the network sessions that use the concerned node must be terminated before running the
storage disk firmware update command. This is particularly true for CIFS sessions that might be
terminated when this command is executed.
The firmware is automatically downloaded to disks that report previous versions of the firmware. For
information on automatic firmware update downloads, see "Automatic versus Manual Firmware Download".
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision
command. The following example displays partial output from the storage disk show -fields
firmware-revision command, where the firmware version for the disks is NA01:
The firmware files are stored in the /mroot/etc/disk_fw directory on the node. The firmware file name is in
the form of "product-ID.revision.LOD". For example, if the firmware file is for Seagate disks with product ID
X225_ST336704FC and the firmware version is NA02, the filename is X225_ST336704FC.NA02.LOD. The
revision part of the file name is the number against which the node compares each disk's current firmware
version. If the node in this example contains disks with firmware version NA01, the /mroot/etc/disk_fw/
X225_ST336704FC.NA02.LOD file is used to update every eligible disk when you execute this command.
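The naming convention can be broken down as follows (the values shown are the example ones from the
text):

```
product-ID        . revision . suffix
X225_ST336704FC   . NA02     . LOD
-> /mroot/etc/disk_fw/X225_ST336704FC.NA02.LOD
```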
Automatic versus Manual Firmware Download
The firmware is automatically downloaded to those disks that report previous versions of firmware following
a system boot or disk insertion. Note that:
• A manual download is a disruptive operation that makes disks inaccessible for up to five minutes after the
download is started. Network sessions that use the node must be terminated before running the storage
disk firmware update command.
• The firmware is not automatically downloaded to the node's partner node in an HA pair.
• The firmware is not automatically downloaded to unowned disks on nodes configured to use software-
based disk ownership.
• The bkg-firmware-update parameter controls how the automatic firmware download feature works:
- If the bkg-firmware-update parameter is set to off, then the storage disk firmware update will
update the firmware on the drives in parallel.
- If the bkg-firmware-update parameter is set to on, then the storage disk firmware update will
update spares and filesystem disks in a nondisruptive manner in the background after boot. Firmware
downloads for these disks will be done sequentially by temporarily taking them offline one at a time for
the duration of the download. After the firmware is updated, the disk will be brought back online and
restored to its normal operation.
During an automatic download to an HA environment, the firmware is not downloaded to the disks owned
by the HA partner.
When you use the storage disk firmware update command:
• The firmware is updated on every disk, regardless of whether it is on the A-loop, the B-loop, or in an HA
environment.
• If the node is configured to use software-based disk ownership, only the disks owned by that node are
updated.
During an automatic firmware download in a MetroCluster(TM) environment, the firmware is not
downloaded to the disks owned by the partner cluster. During both manual and automatic firmware
download in a MetroCluster-over-IP environment, the firmware is not downloaded to any remote disks
located at the partner cluster while Disaster Recovery is in progress.
Follow the instructions in "How to Update the Firmware for an HA Pair in a Cluster" to ensure that the
updating process is successful. Data ONTAP supports redundant path configurations for disks in a non-HA
configuration. The firmware is automatically downloaded to disks on the A-loop or B-loop of redundant
configurations that are not configured in an HA pair and are not configured to use software-based disk
ownership.
Automatic Background Firmware Update
The firmware can be updated in the background so that the firmware update process does not impact the
clients. This functionality is controlled with the bkg-firmware-update parameter. You can modify the
parameter by using the CLI storage disk option modify -node node_name -bkg-firmware-update
on|off command. The default value for this parameter is "on".
When the parameter is disabled or set to "off", storage disk firmware update will update the firmware
in automated mode. This means that all disks that have an older firmware revision will be updated,
regardless of whether they are spare or filesystem disks.
When enabled or set to "on", the background storage disk firmware update will update firmware in
automated mode only on disks that can be successfully taken offline from active filesystem RAID groups and
from the spare pool. To ensure a faster boot process, the firmware is not downloaded to spares and
filesystem disks at boot time.
This provides the highest degree of safety available, without the cost of copying data from each disk in the
system twice. Disks are taken offline one at a time and then the firmware is updated on them. The disk is
brought online after the firmware update, and a mini/optimized reconstruct happens for any writes that
occurred while the disk was offline. Background disk firmware update will not occur for a disk if its
containing RAID group or the volume is not in a normal state (for example, if the volume/plex is offline or
the RAID group is degraded). However, due to the continuous polling nature of background disk firmware
update, firmware updates will resume after the RAID group/plex/volume is restored to a normal mode.
Similarly, background disk firmware updates are suspended for the duration of any reconstruction within the
system.
How to Update the Firmware for an HA Pair in a Cluster
The best way to update the firmware in a cluster with HA pairs is to use automatic background firmware
update by enabling the option bkg-firmware-update parameter for each node. Enable the -bkg-
firmware-update parameter on all the nodes by entering the storage disk option modify -node
node_name -bkg-firmware-update on command. Alternatively, use the following procedure to
successfully perform a manual update on the disks in an HA environment:
• Make sure that the nodes are not in takeover or giveback mode.
• Download the latest firmware on both the nodes by using the storage firmware download command.
• Install the new disk firmware on Node A's disks by entering the storage disk firmware update -
node node-A command.
• Wait until the storage disk firmware update command completes on Node A, and then install the
new disk firmware on Node B's disks by entering the storage disk firmware update -node node-B
command.
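The alternatives above can be sketched as the following command sequence (the node names are from the
text; the storage firmware download arguments are elided because they depend on where the firmware
package is hosted):

```
cluster1::> storage disk option modify -node * -bkg-firmware-update on   (preferred: background updates)
cluster1::> storage firmware download ...               (run on both nodes)
cluster1::> storage disk firmware update -node node-A
cluster1::> storage disk firmware update -node node-B   (only after node-A completes)
```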
● Parameters
{ -disk <disk path name>, ... - Disk
Specifies the disk or disks whose firmware is to be updated.
| -node {<nodename>|local}} - node
Specifies the node name. The disk firmware will be updated on all the disks owned by the node specified
by this parameter.
● Examples
If you need to view the current firmware versions, use the storage disk show -fields firmware-revision
command. The following example displays partial output from the storage disk show -fields
firmware-revision command, where the firmware version for the disks is NA01:
The following example updates the firmware on all disks owned by cluster-node-01:
Warning: Disk firmware updates can be disruptive to the system. Updates involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being updated. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware update process is complete.
Disk firmware updates should only be done one node at a time. Disk
firmware updates can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
The following example updates the firmware on disk 1.5.0 which is owned by node cluster-node-04:
Warning: Disk firmware updates can be disruptive to the system. Updates involve
power cycling all of the affected disks, as well as suspending disk
I/O to the disks being updated. This delay can cause client
disruption. Takeover/giveback operations on a high-availability (HA)
group will be delayed until the firmware update process is complete.
Disk firmware updates should only be done one node at a time. Disk
firmware updates can only be performed when the HA group is healthy;
they cannot be performed if the group is in takeover mode.
Related reference
storage disk option modify (page 1109)
storage firmware download (page 1158)
The storage disk option command displays or modifies the settings of disk options.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk option modify command modifies the background firmware update setting and the
automatic copy setting, controls automatic disk assignment of all disks assigned to a specified node, and
modifies the policy of automatic disk assignment of unowned disks.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node that owns the disks whose options are to be modified.
[-bkg-firmware-update {on|off}] - Background Firmware Update
This parameter specifies whether firmware updates run as a background process. The default setting is
on, which specifies that firmware updates to spare disks and file system disks are performed
nondisruptively via a background process. If the option is turned off, automatic firmware updates occur
at system startup or during disk insertion.
[-autocopy {on|off}] - Auto Copy
This parameter specifies whether data is to be automatically copied from a failing disk to a spare disk in
the event of a predictive failure. The default setting is on. It is sometimes possible to predict a disk
failure based on a pattern of recovered errors that have occurred. In such cases, the disk reports a
predictive failure. If this option is set to on, the system initiates Rapid RAID Recovery to copy data from
the failing disk to an available spare disk. After the data has been copied, the disk is marked as failed and
placed in the pool of broken disks. If a spare is not available, the node continues to use the disk until it fails. If
the option is set to off, the disk is immediately marked as failed and placed in the pool of broken disks.
A spare is selected and data from the missing disk is reconstructed from other disks in the RAID group.
The disk does not fail if the RAID group is already degraded or is being reconstructed. This ensures that a
disk failure does not lead to the failure of the entire RAID group.
[-autoassign {on|off}] - Auto Assign
This parameter specifies whether automatic assignment of unowned disks is enabled or disabled. The
default setting is on. This parameter is used to set both a node-specific and a cluster-wide disk option.
[-autoassign-policy {default|bay|shelf|stack}] - Auto Assignment Policy
This parameter defines the granularity at which auto assign should work. This option is ignored if the -
autoassign option is off. Auto assignment can be done at the stack/loop, shelf, or bay level. The possible
values for the option are default, stack, shelf, and bay. The default value is platform dependent. It is
stack for all non-entry platforms and single-node systems, whereas it is bay for entry-level platforms.
● Examples
The following example sets the background firmware update setting to on for all disks belonging to a node
named node0:
The following example shows how to enable auto assignment for the disks on node1:
The following example shows how to modify the auto assignment policy on node1:
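Sketched as CLI invocations, the three examples above would look like the following; the policy value shelf in the last command is illustrative:

```
cluster1::> storage disk option modify -node node0 -bkg-firmware-update on
cluster1::> storage disk option modify -node node1 -autoassign on
cluster1::> storage disk option modify -node node1 -autoassign-policy shelf
```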
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage disk option show command displays the settings of the following disk options:
• Background firmware update
• Automatic copying of data to a spare disk in the event of a predictive failure
• Automatic assignment of disks
• Policy that governs automatic assignment of unowned disks
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the node that owns the disks. If this parameter is not specified, the command displays
information about the disk options on all the nodes.
[-bkg-firmware-update {on|off}] - Background Firmware Update
Selects the disks that match this parameter value.
[-autocopy {on|off}] - Auto Copy
Selects the disks that match this parameter value.
[-autoassign {on|off}] - Auto Assign
Displays the auto assignment status of unowned disks. The default value is on.
[-autoassign-policy {default|bay|shelf|stack}] - Auto Assignment Policy
Selects the disks that match the automatic assignment policy value:
• Default
• Stack/loop
• Shelf
• Bay
● Examples
The following example displays disk-option settings for disks owned by all nodes in the cluster:
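As a sketch, the example above corresponds to an invocation such as the following; a -fields variant is also shown, and the output columns are omitted here:

```
cluster1::> storage disk option show
cluster1::> storage disk option show -fields bkg-firmware-update,autocopy,autoassign,autoassign-policy
```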
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage dqp show command displays information about the Disk Qualification Package (DQP). The
command displays the following information:
• Node Name
• Package Date
• File Version
• File Name
• Drive Record Count
• Drive Alias Record Count
• Device Class Record Count
• System Class Record Count
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays DQP information on the specified node.
[-package-date <text>] - Package Date
Selects DQP information with the specified package date.
[-file-version <text>] - File Version
● Examples
The following example displays the brief version of Disk Qualification Package details:
The following example displays the -instance version of Disk Qualification Package details:
Node: cluster1-01
Package Date: 20180907
Version: 3.17
File Name: qual_devices_v3
Drive Record Count: 626
Alias Record Count: 196
Device Record Count: 28
System Record Count: 3
Node: cluster1-02
Package Date: 20180907
Version: 3.17
File Name: qual_devices_v3
Drive Record Count: 626
Alias Record Count: 196
Device Record Count: 28
System Record Count: 3
2 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage encryption disk destroy command cryptographically destroys a self-encrypting disk
(SED), making it incapable of performing I/O operations. This command performs the following operations:
• Employs the inherent erase capability of SEDs to cryptographically sanitize the disk
• Permanently locks the disk to prevent further data access
• Changes the data and FIPS authentication keys to random values that are not recorded except within the
SED.
Use this command with extreme care. The only mechanism to restore the disk to usability (albeit without
the data) is the storage encryption disk revert-to-original-state operation that is available
only on disks that have the physical secure ID (PSID) printed on the disk label.
The destroy command requires you to enter a confirmation phrase before proceeding with the operation.
The command releases the cluster shell after launching the operation. Monitor the output of the storage
encryption disk show-status command for command completion.
Upon command completion, remove the destroyed SED from the system.
● Parameters
-disk <disk path name> - Disk Name
This parameter specifies the name of the disk you want to cryptographically destroy. See the man page
for the storage disk modify command for information about disk-naming conventions.
[-force-all-states [true]] - Destroy All Matching Disks
When this parameter is false or not specified, the operation defaults to spare and broken disks only, as
reported in the output of the storage disk show command. When you specify this parameter as true,
it allows you to cryptographically destroy all matching disk names regardless of their state, including
those in active use in aggregates. This allows a quick destroy of all system disks if you use the -disk
parameter with the asterisk wildcard (*). If you destroy active disks, the nodes might not be able to
continue operation, and might halt or panic.
● Examples
The following command cryptographically destroys the disk 1.10.20:
cluster1::>
If you do not enter the correct confirmation phrase, the operation is aborted:
cluster1::>
The following command quickly cryptographically destroys all system disks, including those in active use in
aggregates and shared devices:
cluster1::>
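The invocations for the examples above can be sketched as follows. Each destroy command prompts for a confirmation phrase before proceeding (the prompt text is not reproduced here), and the wildcard form destroys every matching disk regardless of state:

```
cluster1::> storage encryption disk destroy -disk 1.10.20
cluster1::> storage encryption disk destroy -disk * -force-all-states true
cluster1::> storage encryption disk show-status
```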
Related reference
storage disk show (page 1083)
storage encryption disk revert-to-original-state (page 1116)
storage encryption disk show-status (page 1121)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage encryption disk modify command changes the data protection parameters of self-
encrypting disks (SEDs) and FIPS-certified SEDs (FIPS SEDs); it also modifies the FIPS-compliance AK (FIPS
AK) of FIPS SEDs. The current data AK and FIPS AK of the device are required to effect changes to the
respective AKs and FIPS compliance. The current and new AKs must be available from the key servers or
onboard key management.
The command releases the cluster shell after launching the operation. Monitor the output of the storage
encryption disk show-status command for command completion.
Note
To properly protect data at rest on a FIPS SED and place it into compliance with its FIPS certification
requirements, set both the Data and FIPS AKs to a value other than the device's default key;
depending on the device type, the default may be manufacture secure ID (MSID), indicated by a key
ID with the special value 0x0, or a null key represented by a blank AK. Verify the key IDs by using the
storage encryption disk show and storage encryption disk show-fips commands.
● Parameters
-disk <disk path name> - Disk Name
This parameter specifies the name of the SED or FIPS SED that you want to modify.
{ [-data-key-id <text>] - Key ID of the New Data Authentication Key
This parameter specifies the key ID associated with the data AK that you want the SED to use for future
authentications. When the provided key ID is the MSID, data at rest on the SED is not protected from
unauthorized access. Setting this parameter to a non-MSID value automatically engages the power-on-
lock protections of the device, so that when the device is power-cycled, the system must authenticate
with the device using the AK to reenable I/O operations. You cannot specify the null default key; use
MSID instead.
| [-fips-key-id <text>]} - Key ID of the New Authentication Key for FIPS Compliance
This parameter specifies the key ID associated with the FIPS AK that you want the FIPS SED to apply to
SED credentials other than the one that protects the data. When the value is not the MSID, these
credentials are changed to the indicated AK, and other security-related items are set to conform to the
FIPS certification requirements ("FIPS compliance mode") of the device. You may set the -fips-key-id
to any one of the key IDs known to the system. The FIPS key ID may, but does not have to, be the same
as the data key ID parameter. Setting -fips-key-id to the MSID key ID value disables FIPS compliance
mode and restores the FIPS-related authorities and other components as required (other than data) to
their default settings. A non-MSID FIPS-compliance key may be applied only to a FIPS SED.
● Examples
The following command changes both the AK and the power-cycle protection to values that protect the data
at rest on the disk. Note that the -data-key-id and -fips-key-id parameters require one of the key IDs
that appear in the output of the security key-manager query command.
The following command changes the FIPS AK and sets the device into FIPS-compliance mode. Note that the
-fips-key-id parameter requires one of the key IDs that appear in the output of the security key-
manager query command.
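The two examples above can be sketched as the following session. The disk name 2.10.* and the <key_id> placeholder (a key ID taken from the security key-manager query output) are illustrative:

```
cluster1::> security key-manager query
cluster1::> storage encryption disk modify -disk 2.10.* -data-key-id <key_id>
cluster1::> storage encryption disk modify -disk 2.10.* -fips-key-id <key_id>
cluster1::> storage encryption disk show-status
```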
Related reference
storage encryption disk show-status (page 1121)
storage encryption disk show (page 1118)
security key-manager query (page 608)
security key-manager create-key (page 604)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Some self-encrypting disks (SEDs) are capable of an operation that restores them as much as possible to
their as-manufactured state. The storage encryption disk revert-to-original-state command
invokes this special operation that is available only in SEDs that have the physical secure ID (PSID) printed
on their labels.
The PSID is unique to each SED, meaning the command can revert only one SED at a time. The disk must be
in a "broken" or "spare" state as shown by the output of the storage disk show command.
The operation in the SED accomplishes the following changes:
• Sanitizes all data by changing the disk encryption key to a new random value
• Sets the data authentication key (AK) and FIPS AK to the default values
• Resets the data locking controls
• Resets the power-on lock state to false
• Initializes other vendor-unique encryption-related parameters
The command releases the cluster shell after launching the operation. Monitor the output of the storage
encryption disk show-status command for command completion.
When the operation is complete, it is possible to return the SED to service using the storage disk unfail
command in advanced privilege mode. To do so, you might also need to reestablish ownership of the SED
using the storage disk assign command.
● Parameters
-disk <disk path name> - Disk Name
The name of the SED to be reverted to its as-manufactured state. See the man page for the storage
disk modify command for information about disk-naming conventions.
-psid <text> - Physical Secure ID
The PSID printed on the SED label.
● Examples
The following command shows a SED being returned to its as-manufactured state:
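As a sketch, the revert operation above takes the disk name and the PSID printed on the disk label; the disk name 1.10.20 and the <PSID> placeholder are illustrative:

```
cluster1::> storage encryption disk revert-to-original-state -disk 1.10.20 -psid <PSID>
cluster1::> storage encryption disk show-status
```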
Related reference
storage disk show (page 1083)
storage encryption disk show-status (page 1121)
storage disk unfail (page 1099)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage encryption disk sanitize command cryptographically sanitizes one or more self-
encrypting disks (SEDs), making the existing data on the SED impossible to retrieve. This operation employs
the inherent erase capability of SEDs to perform all of the following changes:
• Sanitizes all data by changing the disk encryption key to a new random value
• Sets the data authentication key (AK) to the default AK (manufacture secure ID/MSID or null, depending
on the device type)
• Unlocks the data band
• Resets the power-on lock state to false
There is no method to restore the disk encryption key to its previous value, meaning that you cannot recover
the data on the SED. Use this command with extreme care.
The sanitize command requires you to enter a confirmation phrase before proceeding with the operation.
The command releases the cluster shell after launching the operation. Monitor the output of the storage
encryption disk show-status command for command completion. When the operation is complete, it
is possible to return the SED to service using the storage disk unfail command in advanced privilege
mode. To do so, you might also need to reestablish ownership of the SED using the storage disk assign
command.
● Parameters
-disk <disk path name> - Disk Name
This parameter specifies the name of the SEDs you want to cryptographically sanitize. See the man page
for the storage disk modify command for information about disk-naming conventions.
[-force-all-states [true]] - Sanitize All Matching Disks
When this parameter is false or not specified, the operation defaults to spare and broken disks only, as
reported in the output of the storage disk show command. When you specify this parameter as true,
it allows you to cryptographically sanitize all matching disk names regardless of their state, including
those in active use in aggregates. This allows a quick erasure of all system data if you use the -disk
parameter with the asterisk wildcard (*). If you sanitize active disks, the nodes might not be able to
continue operation, and might halt or panic.
● Examples
The following command sanitizes the disk 1.10.20:
cluster1::>
If you do not enter the correct confirmation phrase, the operation is aborted:
cluster1::>
The following command quickly cryptographically sanitizes all system disks, including those in active use in
aggregates and shared devices:
cluster1::>
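The sanitize examples above can be sketched as follows. Each sanitize command prompts for a confirmation phrase before proceeding (not reproduced here):

```
cluster1::> storage encryption disk sanitize -disk 1.10.20
cluster1::> storage encryption disk sanitize -disk * -force-all-states true
cluster1::> storage encryption disk show-status
```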
Related reference
storage disk show (page 1083)
storage encryption disk show-status (page 1121)
storage disk unfail (page 1099)
storage disk assign (page 1072)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage encryption disk show command displays information about encrypting drives. When no
parameters are specified, the command displays the following information about all encrypting drives:
• Disk name
• The protection mode of the device
• The key ID associated with the data authentication key ("data AK")
In MetroCluster systems, the information is valid from the cluster that owns the drive, or from the DR cluster
when in switchover mode. If information is not available, perform the show command from the cluster
partner.
You can use the following parameters together with the -disk parameter to narrow the selection of
displayed drives or the information displayed about them.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-fips ]
If you specify this parameter, the command displays the key ID associated with the FIPS-compliance
authentication key ("FIPS AK") instead of the data key ID.
| [-instance ]}
If you specify this parameter, the command displays detailed disk information about all disks, or only
those specified by a -disk parameter.
[-disk <disk path name>] - Disk Name
If you specify this parameter, the command displays information about the specified disks. If you specify
a single disk path name, the output is the same as when you use the -instance parameter. See the man
page for the storage disk modify command for information about disk-naming conventions. Default
is all self-encrypting disks.
[-container-name <text>] - Container Name
This parameter specifies the container name associated with an encrypting drive. If you specify an
aggregate name or other container name, only the encrypting drives in that container are displayed. See
the man page for the storage disk show command for a description of the container name. Use the
storage aggregate show-status and storage disk show commands to determine which
aggregates the drives are in.
[-container-type {aggregate | broken | foreign | labelmaint | maintenance | mediator | remote |
shared | spare | unassigned | unknown | unsupported}] - Container Type
This parameter specifies the container type associated with an encrypting drive. If you specify a
container type, only the drives with that container type are displayed. See the man page for the
storage disk show command for a description of the container type.
[-data-key-id <text>] - Key ID of the Current Data Authentication Key
This parameter specifies the key ID associated with the data AK that the encrypting drive requires for
authentication with its data-protection authorities. The special key ID 0x0 indicates that the current data
AK of the drive is the default manufacture secure ID (MSID) that is not secret. Some devices employ an
initial null default AK that appears as a blank data-key-id; you cannot specify a null data-key-id value.
To properly protect data at rest on the device, modify the data AK using a key ID that is not a default
value (MSID or null). When you modify the data AK with a non-MSID key ID, the system automatically
sets the device's power-on lock enable control so that authentication with the data AK is required after a
device power-cycle. Use storage encryption disk modify -data-key-id key-id to protect the
data. Use storage encryption disk modify -fips-key-id key-id to place the drives into FIPS-
compliance mode.
[-fips-key-id <text>] - Key ID of the Current FIPS Authentication Key
This parameter specifies the key ID associated with the FIPS authentication key ("FIPS AK") that the
system must use to authenticate with FIPS-compliance authorities in FIPS-certified drives. This
parameter may not be set to a non-MSID value in drives that are not FIPS-certified.
[-is-power-on-lock-enabled {true|false}] - Is Power-On Lock Protection Enabled?
This parameter specifies the state of the control that determines whether the encrypting drive requires
authentication with the data AK after a power-cycle. The system enables this control parameter
automatically when you use the storage encryption disk modify -data-key-id command to set
the data AK to a value other than the default AK. Data is protected only when this parameter is true and
the data AK is not a default. Compare with the values of the -protection-mode parameter below.
● Examples
The following command displays information about all encrypting drives:
Note in the example that only disk 1.10.2 is fully protected with FIPS mode, power-on-lock enable, and an
AK that is not the default MSID.
The following command displays information about the protection mode and FIPS key ID for all encrypting
drives:
Note again that only disk 1.10.2 is fully protected with FIPS-compliance mode set, power-on-lock enabled,
and a data AK that is not the default MSID.
The following command displays the individual fields for disk 1.10.2:
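The three examples above correspond to invocations such as the following; the output tables are omitted here. Specifying a single disk name, as in the last command, produces the same detailed output as the -instance parameter:

```
cluster1::> storage encryption disk show
cluster1::> storage encryption disk show -fips
cluster1::> storage encryption disk show -disk 1.10.2
```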
Related reference
storage disk show (page 1083)
storage aggregate show-status (page 1016)
storage encryption disk modify (page 1114)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage encryption disk show-status command displays the results of the latest destroy,
modify, or sanitize operation of the storage encryption disk command family. Use this command
to view the progress of these operations on self-encrypting disks (SEDs).
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node Name
If you specify this parameter, the command displays disk encryption status for the nodes that match this
parameter.
[-is-fips-support {true|false}] - Node Supports FIPS Disks
If you specify this parameter, the command displays disk encryption status for the nodes that match this
parameter (true means the node supports FIPS-certified self-encrypting drives).
[-latest-op <Storage Disk Encryption Operation>] - Latest Operation Requested
If you specify this parameter, the command displays disk encryption status for the nodes with a most
recent storage encryption disk operation that matches this parameter (one of destroy, modify,
revert-to-original-state, sanitize, or unknown).
[-op-start-time <MM/DD/YYYY HH:MM:SS>] - Operation Start Time
Selects the nodes with operation start times that match this parameter.
[-op-execute-time <integer>] - Execution Time in Seconds
If you specify this parameter, the command displays disk encryption status for the nodes with operation
execution time that matches this parameter. The operation may be partially or fully complete.
[-disk-start-count <integer>] - Number of Disks Started
If you specify this parameter, the command displays disk encryption status for the nodes that started
this number of SEDs in their latest operation.
[-disk-done-count <integer>] - Number of Disks Done
Selects the nodes that report this number of SEDs having completed the latest operation, successfully or
not.
[-disk-success-count <integer>] - Number of Disks Successful
If you specify this parameter, the command displays disk encryption status for the nodes that report this
number of SEDs that successfully completed the latest operation. When the operation is finished, if the
success count is not the same as the started count, some additional detail is available using the -
instance or -node parameters.
[-disk-no-key-id-count <integer>] - Number of Disks with Key ID Not Found
If you specify this parameter, the command displays disk encryption status for the nodes that report this
number of SEDs that failed the latest operation because Data ONTAP could not find the Key IDs
associated with the required authentication key of the SED.
[-disk-no-authent-count <integer>] - Number of Disks Not Authenticated
If you specify this parameter, the command displays disk encryption status for the nodes that report this
number of SEDs that failed the latest operation because the identified Authentication Key could not
authenticate with the SED.
[-op-sequence-count <integer>] - Sequence Count of Latest Operation
If you specify this parameter, the command displays disk encryption status for the nodes that match this
parameter value.
● Examples
When no operation has been requested since node boot, the status for that node is empty. If you enter a
node name, the output is in the same format as for the -instance parameter.
Once an operation begins, the status is dynamic until all devices have completed. When disks are modified,
sanitized, or destroyed, sequential executions of storage encryption disk show-status appear as in
this example that shows the progress of a modify operation on three SEDs on each node of a two-node
cluster:
Related reference
storage encryption disk commands (page 1113)
storage encryption disk destroy (page 1113)
storage encryption disk modify (page 1114)
storage encryption disk revert-to-original-state (page 1116)
storage encryption disk sanitize (page 1117)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage errors show command displays configuration errors with back end storage arrays.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-uid <text>] - UID
Selects the disks that match this parameter value.
[-array-name <array name>] - Array Name
Selects the disks that have the specified name for the storage array that is connected to the cluster.
[-node {<nodename>|local}] - Controller Name
Selects the disks that match this parameter value.
[-disk <disk path name>] - Disk
Selects the disks that match this parameter value.
[-serial-number <text>] - Serial Number
Selects the disks that match this parameter value.
[-error-id <integer>, ...] - Error ID
Selects the disks with error-id values that match this parameter value.
[-error-type {onepath|onedomain|control|foreign|toobig|toosmall|invalidblocksize|targetasymmap|
deviceassymmap|failovermisconfig|unknown|netapp|fwdownrev|qualfail|diskfail|notallflashdisk}, ...] -
Error Type
Selects the disks with error-type values that match this parameter value.
● Examples
The following example displays configuration errors seen in the system:
--------------------
vnci9124s54:1-24.126L23 (600a0b800019e999000036b24bac3983): This array LUN reports an invalid block size and is not usable. Only a block size of 512 is supported.
This directory contains commands that display and modify the storage-failover options of a node.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover giveback command returns storage that has failed over to a node's partner back
to the home node. This operation fails if other resource-intensive operations (for instance, system dumps)
are running and make the giveback operation potentially dangerous or disruptive. Some options are
available only at the advanced privilege level and higher. Run the storage failover show-giveback
command to check the status of giveback operations.
Note
• If the system ID of the partner has changed while the node is in takeover mode, the storage
failover giveback command updates the ownership of the partner's disks to the new system ID
while giving back.
• If the giveback operation fails due to the operation being vetoed by a subsystem, check the syslog
or EMS output for a subsystem-specific reason for the abort. The corrective action is subsystem-
specific and is detailed in the corrective action portion of the message. Follow the corrective action
specified by the subsystem and then reissue the storage failover giveback command. If you
cannot perform the corrective action, then use the override-vetoes option in the storage
failover giveback command to force the giveback.
• If the giveback operation fails because the node cannot communicate with its partner, check the
EMS output for the corrective action. Follow the corrective action and then reissue the storage
failover giveback command. If you cannot perform the corrective action, then use the -
require-partner-waiting false option in the storage failover giveback command to
force the giveback. This parameter is available only at the advanced privilege level and higher.
• If the node does not receive notification that the partner has brought online the given-back
aggregate and its volumes, the storage failover show-giveback command displays the
giveback status for the aggregate as failed. A possible reason for this failure is that the partner is
overloaded and slow in bringing the aggregate online. Run the storage aggregate show
command to verify that the aggregate and its volumes are online on the partner node. The node
will not attempt the giveback operation for remaining aggregates. To force the giveback, use the -
require-partner-waiting false option in the storage failover giveback command. This
parameter is available only at the advanced privilege level and higher.
● Parameters
{ -ofnode {<nodename>|local} - Node to which Control is Givenback
Specifies the node whose storage is currently taken over by its partner and will be given back by the
giveback operation.
| -fromnode {<nodename>|local}} - Node Initiating Giveback
Specifies the node that currently holds the storage that is to be returned to the partner node.
[-require-partner-waiting {true|false}] - Require Partner in Waiting (privilege: advanced)
If this optional parameter is used and set to false, the storage is given back regardless of whether the
partner node is available to take back the storage or not. If this parameter is used and set to true, the
storage will not be given back if the partner node is not available to take back the storage. If this
parameter is not used, the behavior defaults to the setting of the -check-partner option set with the
storage failover modify command.
[-override-vetoes [true]] - Override All Vetoes
If this optional parameter is used, the system overrides veto votes during a giveback operation. If this
parameter is not used, the system does not proceed with a giveback if it is vetoed. This parameter, if
used, can only be set to true.
[-only-cfo-aggregates [true]] - Giveback Only CFO Aggregates
If this optional parameter is used, giveback of only the CFO aggregates (root aggregate and CFO style
data aggregates) will be attempted. If this parameter is not used, giveback of all the aggregates (CFO
and SFO aggregates) will be attempted. This parameter, if used, can only be set to true.
● Examples
The following example gives back storage that is currently held by a node named node1. The partner must
be available for the giveback operation to occur.
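A minimal sketch of this example as a clustershell command (the `cluster1::>` prompt is illustrative):

```shell
# Give back storage currently taken over from node1. By default the
# partner must be ready to take the storage back for this to proceed.
cluster1::> storage failover giveback -ofnode node1
```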
The following example gives back only the CFO aggregates to a node named node2 (the aggregates are
currently held by a node named node1). The partner must be available for the giveback operation to occur,
and the veto-giveback process can be overridden.
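Combining the `-only-cfo-aggregates` and `-override-vetoes` parameters described above, this example can be sketched as (prompt illustrative):

```shell
# Give back only the CFO aggregates (root aggregate and CFO-style data
# aggregates) to node2, overriding any veto votes during the giveback.
cluster1::> storage failover giveback -ofnode node2 -only-cfo-aggregates true -override-vetoes true
```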
Related reference
storage failover modify (page 1125)
storage failover show-giveback (page 1142)
storage aggregate show (page 987)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover modify command changes the storage-failover options for a node. Some options
are available only at the advanced privilege level and higher.
● Parameters
-node {<nodename>|local} - Node
This specifies the node whose storage-failover options are to be modified.
{ [-enabled {true|false}] - Takeover Enabled
This optionally specifies whether storage failover is enabled. The default setting is true.
[-onreboot {true|false}] - Takeover on Reboot Enabled
This optionally specifies whether the node automatically takes over for its partner if the partner reboots.
The default setting is true. Takeover can occur if the partner exceeds the expected time to reboot even
when this option is set to false. The expected time to reboot differs between platforms; the minimum
expected time to reboot is 180 seconds. The -inhibit-takeover option of the system node reboot
command overrides this option: if a node is rebooted with -inhibit-takeover set to true, then
takeover does not occur, even if the takeover-on-reboot option is true. If a node does take over
because its partner rebooted, it automatically gives back after the reboot, even if the
-auto-giveback option is set to false. This behavior is not persistent: if the node takes over because
of a partner reboot and then itself reboots (prior to giveback), it does not automatically give back if the
-auto-giveback option is set to false.
[-delay-seconds <integer>] - Delay Before Auto Giveback (secs)
This optionally specifies the minimum time that a node will stay in takeover state prior to performing an
automatic giveback. If the taken over node recovers quickly (for example, if the takeover was due to a
reboot), by delaying the giveback for a few minutes the outage during the takeover and giveback can be
reduced to two short outages instead of one longer one. The allowed range is 0 to 600, inclusive. The
default setting is 600 seconds. This option affects all types of auto-giveback. This parameter is available
only at the advanced privilege level and higher.
Note
This delay does not affect manual giveback.
Note
This parameter is independent of the -auto-giveback parameter. If this parameter is enabled
and the takeover is due to disruption in the partner's operation, giveback will be initiated even if -
auto-giveback parameter is false.
This optionally specifies the amount of time, in seconds, the source node has to wait for the destination
node to complete the aggregate migration before declaring the migration as failed. The default setting
is 120 seconds.
● Examples
The following example enables the storage-failover service on a node named node0:
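Based on the -enabled parameter described above, this example can be sketched as (prompt illustrative):

```shell
# Enable the storage-failover service on node0.
cluster1::> storage failover modify -node node0 -enabled true
```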
The following examples enable storage-failover takeover on a short uptime of 30 seconds on a node named
node0:
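A sketch of the short-uptime example, assuming the -onshort-uptime and -short-uptime parameters (both advanced-privilege settings, as listed under the storage failover show parameters):

```shell
# Switch to the advanced privilege level, then enable takeover on a
# short uptime of 30 seconds for node0.
cluster1::> set -privilege advanced
cluster1::*> storage failover modify -node node0 -onshort-uptime true -short-uptime 30
```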
Related reference
system node reboot (page 1407)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover show command displays information about storage-failover configurations. By
default, the command displays the following information:
• Node name.
• Partner node name.
• Whether storage failover is possible.
• The current state of storage failover. If takeover is disabled, the reason is displayed.
To display detailed information about storage failover on a specific node, run the command with the -node
parameter. The detailed view adds the following information:
• Node NVRAM ID.
• Partner NVRAM ID.
• Whether storage failover is enabled.
• Whether the storage-failover interconnect is available.
• Status of individual storage-failover interconnect links.
• Type and vendor of the storage-failover interconnect.
• Partner state.
• Status codes from the takeover-by-partner process. Possible values include:
- NVRAM_DOWN
- OPERATOR_DISABLE_NVRAM
- PARTNER_RESET
- FM_TAKEOVER
- NVRAM_MISMATCH
- OPERATOR_DENY
- CLUSTER_DISABLE
- VERSION
- SHELF_HOT
- REVERT_IN_PROGRESS
- HALT_NOTKOVER
- TAKEOVER_ON_PANIC
• Reasons why takeover is not possible, if applicable. Possible values include:
- NOT_INIT
- DISABLED
- DEGRADED
- MBX_UNKNOWN
- FM_VERSION
- PARTNER_DISABLED
- OPERATOR_DENY
- NVRAM_MISMATCH
- VERSION
- IC_ERROR
- BOOTING
- SHELF_HOT
- PARTNER_REVERT_IN_PROGRESS
- LOCAL_REVERT_IN_PROGRESS
- PARTNER_TAKEOVER
- LOCAL_TAKEOVER
- HALT_NOTKOVER
- LOG_UNSYNC
- UNKNOWN
- WAITING_FOR_PARTNER
- LOW_MEMORY
- HALTING
- MBX_UNCERTAIN
- NO_AUTO_TKOVER
• Time until takeover, in seconds.
• Time until auto giveback, in seconds.
• Delay for auto giveback, in seconds.
• List of local mailbox disks.
• List of partner mailbox disks.
• Whether operator-initiated planned takeover will be optimized for performance by relocating SFO (non-
root) aggregates serially to the partner prior to takeover.
You can specify additional parameters to select the displayed information. For example, to display
information only about storage-failover configurations whose interconnect is down, run the command with
-interconnect-up false.
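The selection described above can be sketched as the following command (prompt illustrative):

```shell
# Show only the storage-failover configurations whose HA interconnect
# is currently down.
cluster1::> storage failover show -interconnect-up false
```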
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-options ]
Displays the following information:
• Node name
• Whether automatic giveback operations are enabled
• Whether long-running operations are terminated when an automatic giveback operation is initiated
• Whether the node checks its partner's readiness before initiating a giveback operation
• The time, in seconds, that the node remains unresponsive before its partner initiates a takeover
operation
• Whether the node automatically takes over for its partner if the partner fails
• Whether the node automatically takes over for its partner if the partner panics
• Whether the node automatically takes over for its partner if the partner reboots
• Whether hardware-assisted takeover is enabled
• IP address on which the partner node listens for hardware-assist alerts
• Port number on which the partner node listens for hardware-assist alerts
• Whether operator-initiated planned takeover will be optimized for performance by relocating SFO
(non-root) aggregates serially to the partner prior to takeover
If this parameter is specified when the privilege level is set to advanced or higher, the command
displays the information in the previous list and the following additional information:
• Whether the node takes over for its partner if its partner fails after a period of time, which is listed in
the following field
• The number of seconds before the node takes over for its partner
• The number of times the node attempts an automatic giveback operation within a period of time
• The number of minutes in which the automatic giveback attempts can occur
• Whether storage-failover status is communicated via mailbox disks
• The interval at which the node reads its partner node's status from the mailbox disks
• The interval at which the node writes its status to the mailbox disks
• The interval at which the hardware-assist hardware sends a heartbeat
• The number of times the hardware-assist alert is sent
| [-takeover-status ]
Displays the following information:
• Node name
• Partner name
• Takeover enabled
• Takeover possible
• Interconnect up
• State
• Node NVRAM ID
• Partner NVRAM ID
• VERSION
• IC_ERROR
• BOOTING
• SHELF_HOT
• PARTNER_REVERT_IN_PROGRESS
• LOCAL_REVERT_IN_PROGRESS
• PARTNER_TAKEOVER
• LOCAL_TAKEOVER
• HALT_NOTKOVER
• LOG_UNSYNC
• UNKNOWN
• WAITING_FOR_PARTNER
• LOW_MEMORY
• HALTING
• MBX_UNCERTAIN
• NO_AUTO_TKOVER
[-interconnect-up {true|false}] - Interconnect Up
Selects the nodes that have the specified interconnect-up setting.
[-interconnect-links <text>] - Interconnect Links
Selects the nodes that have the specified interconnect-links setting.
[-interconnect-type <text>] - Interconnect Type
Selects the nodes that have the specified interconnect-type setting.
[-state-description <text>] - State Description
Selects the nodes that have the specified state-description setting.
[-partner-state <text>] - Partner State
Selects the nodes that have the specified partner-state setting. Possible values include:
• OPERATOR COMPLETED
• DEBUGGER COMPLETED
• PROGRESS COUNTER
• I/O ERROR
• BAD CHECKSUM
• RESERVED
• UNKNOWN
• INITIALIZING
• IN POWER-ON SELF TEST
• BOOTING
• BOOT FAILED
• WAITING
• KERNEL LOADED
• UP
• IN DEBUGGER
• WAITING FOR OPERATOR INPUT
• DUMPING CORE
• HALTED
• REBOOTING
• WAITING FOR GIVEBACK (DISK RESERVATIONS)
• WAITING FOR GIVEBACK (HA MAILBOXES)
• DUMPING SPARECORE
• MULTI-DISK PANIC
• IN TAKEOVER
[-time-until-takeover <integer>] - Time Until Takeover
Selects the nodes that have the specified time-until-takeover setting.
[-partner-reason <text>, ...] - Reason Takeover not Possible by Partner
Selects the nodes that have the specified partner-reason text setting.
[-killpackets {true|false}] - Issue Kill Packets (privilege: advanced)
Selects the nodes that have the specified kill packets setting.
[-partner-aggregates {true|false}] - Control Partner Aggregates (privilege: advanced)
Selects the nodes that have the specified partner aggregates setting.
[-current-index <integer>] - Current Progress Index (privilege: advanced)
Selects the nodes that have the specified current-progress index setting.
[-current-entry <text>] - Current Progress Entry (privilege: advanced)
Selects the nodes that have the specified current-progress entry setting.
[-maximum-index <integer>] - Maximum Progress Index (privilege: advanced)
Selects the nodes that have the specified maximum-progress index setting.
[-pmbox-status <text>, ...] - Primary Mailbox Status (privilege: advanced)
Selects the nodes that have the specified primary mailbox status setting. Possible values include:
• MBX_STATUS_NODISKS
• MBX_STATUS_UNCERTAIN
• MBX_STATUS_STALE
• MBX_STATUS_CONFLICTED
• MBX_STATUS_OLD_VERSION
• MBX_STATUS_NOT_FOUND
• MBX_STATUS_WRONG_STATE
• MBX_STATUS_BACKUP
[-bmbox-status <text>, ...] - Backup Mailbox Status (privilege: advanced)
Selects the nodes that have the specified backup-mailbox status setting. See the description of the -
pmbox-status parameter for a list of possible values.
[-major-seq-num-local <integer>] - Local Major Sequence Number (privilege: advanced)
Selects the nodes that have the specified mailbox heartbeat major sequence number on the local node.
[-minor-seq-num-local <integer>] - Local Minor Sequence Number (privilege: advanced)
Selects the nodes that have the specified mailbox heartbeat minor sequence number on the local node.
[-major-seq-num-partner <integer>] - Partner Major Sequence Number (privilege: advanced)
Selects the nodes that have the specified mailbox heartbeat major sequence number on the partner
node.
[-minor-seq-num-partner <integer>] - Partner Minor Sequence Number (privilege: advanced)
Selects the nodes that have the specified mailbox heartbeat minor sequence number on the partner
node.
[-local-mbx-node-status <Mailbox Status>] - Local Mailbox Node Status (privilege: advanced)
Selects the nodes that have the specified local mailbox node status. Possible values include:
• MBX_UNKNOWN - Local node is up, mailbox uninitialized
• MBX_TAKEOVER_DISABLED - Local node is up but takeover is disallowed
• MBX_TAKEOVER_ENABLED - Local node is up and takeover is allowed
• MBX_TAKEOVER_ACTIVE - Partner node has taken over
• MBX_GIVEBACK_DONE - Giveback completed, but local node has not yet restarted
[-mbx-abs-time-local <integer>] - Local Mailbox Absolute Time (privilege: advanced)
Selects the nodes that have the specified local mailbox channel absolute time. This time is measured in
msecs since 1/1/1970 (epoch).
[-mbx-sk-time-local <integer>] - Local Mailbox Kernel Time (privilege: advanced)
Selects the nodes that have the specified local mailbox channel Kernel Time.
[-mbx-sk-cycles-local <integer>] - Local Mailbox CPU Cycles (privilege: advanced)
Selects the nodes that have the specified local mailbox channel CPU Cycle count.
[-ic-abs-time-local <integer>] - Local IC Absolute Time (privilege: advanced)
Selects the nodes that have the specified local Interconnect channel absolute time. This time is
measured in msecs since 1/1/1970 (epoch).
[-ic-sk-time-local <integer>] - Local IC Kernel Time (privilege: advanced)
Selects the nodes that have the specified local Interconnect channel Kernel Time.
[-ic-sk-cycles-local <integer>] - Local IC CPU Cycles (privilege: advanced)
Selects the nodes that have the specified local Interconnect channel CPU Cycle count.
[-partner-mbx-node-status <Mailbox Status>] - Partner Mailbox Node Status (privilege: advanced)
Selects the nodes that have the specified partner mailbox node status. Possible values include:
• MBX_UNKNOWN
• MBX_TAKEOVER_DISABLED
• MBX_TAKEOVER_ENABLED
• MBX_TAKEOVER_ACTIVE
• MBX_GIVEBACK_DONE
[-mbx-abs-time-partner <integer>] - Partner Mailbox Absolute Time (privilege: advanced)
Selects the nodes that have the specified partner mailbox channel absolute time. This time is measured
in msecs since 1/1/1970 (epoch).
[-mbx-sk-time-partner <integer>] - Partner Mailbox Kernel Time (privilege: advanced)
Selects the nodes that have the specified partner mailbox channel Kernel Time.
[-mbx-sk-cycles-partner <integer>] - Partner Mailbox CPU Cycles (privilege: advanced)
Selects the nodes that have the specified partner mailbox channel CPU Cycle count.
[-mbx-major-seq-num-partner <integer>] - Partner Mailbox Major Sequence Number (privilege:
advanced)
Selects the nodes that have the specified partner mailbox channel major sequence number.
[-mbx-minor-seq-num-partner <integer>] - Partner Mailbox Minor Sequence Number (privilege:
advanced)
Selects the nodes that have the specified partner mailbox channel minor sequence number.
[-ic-abs-time-partner <integer>] - Partner IC Absolute Time (privilege: advanced)
Selects the nodes that have the specified partner Interconnect channel absolute time. This time is
measured in msecs since 1/1/1970 (epoch).
Selects the nodes that have the specified firmware-reception time in CPU Cycles.
[-booting-received <integer>] - Booting Received (privilege: advanced)
Selects the nodes that have the specified booting-reception time.
[-transit-time <integer>] - Transit Event Time (privilege: advanced)
Selects the nodes whose last failover event occurred at the specified time.
[-pnormal <integer>] - Primary Normal IO Time (privilege: advanced)
Selects the nodes that have the specified normal primary-mailbox I/O time.
[-ptransition <integer>] - Primary Transition IO Time (privilege: advanced)
Selects the nodes that have the specified transitional primary-mailbox I/O time.
[-bnormal <integer>] - Backup Normal IO Time (privilege: advanced)
Selects the nodes that have the specified normal backup-mailbox I/O time.
[-btransition <integer>] - Backup Transition IO Time (privilege: advanced)
Selects the nodes that have the specified transitional backup-mailbox I/O time.
[-logs-unsynced <integer>] - Logs Unsynced Count (privilege: advanced)
Selects the nodes that have the specified count of unsynchronized logs.
[-auto-giveback {true|false}] - Auto Giveback Enabled
Selects the nodes that have the specified auto-giveback setting.
[-detection-time <integer>] - Takeover Detection Time (secs)
Selects the nodes that have the specified detection-time setting.
[-onfailure {true|false}] - Takeover on Failure Enabled (privilege: advanced)
Selects the nodes that have the specified takeover-on-failure setting.
[-onpanic {true|false}] - Takeover on Panic Enabled
Selects the nodes that have the specified takeover-on-panic setting.
[-onshort-uptime {true|false}] - Takeover on Short Uptime Enabled (privilege: advanced)
Selects the storage-failover configurations that match this parameter value.
[-short-uptime <integer>] - Short Uptime (secs) (privilege: advanced)
Selects the nodes that have the specified short-uptime value.
[-attempts <integer>] - Number of Giveback Attempts (privilege: advanced)
Selects the nodes that have the specified number of giveback attempts.
[-attempts-time <integer>] - Giveback Attempts Period (minutes) (privilege: advanced)
Selects the nodes that have the specified time setting for giveback attempts.
[-propagate {true|false}] - Propagate Status via Mailbox (privilege: advanced)
Selects the nodes that have the specified propagate-status-via-mailbox setting.
[-read-interval <integer>] - Node Status Read Interval (secs) (privilege: advanced)
Selects the nodes that have the specified read interval.
[-write-interval <integer>] - Node Status Write Interval (secs) (privilege: advanced)
Selects the nodes that have the specified write interval.
[-onreboot {true|false}] - Takeover on Reboot Enabled
Selects the nodes that have the specified takeover-on-reboot setting.
[-delay-seconds <integer>] - Delay Before Auto Giveback (secs)
Selects the nodes that have the specified delay (in seconds) for the auto giveback.
[-hwassist {true|false}] - Hardware Assist Enabled
Selects the nodes that have the specified hwassist setting.
[-hwassist-partner-ip <IP Address>] - Partner's Hwassist IP
Selects the nodes that have the specified hwassist-partner-ip setting.
[-hwassist-partner-port <integer>] - Partner's Hwassist Port
Selects the nodes that have the specified hwassist-partner-port setting.
● Examples
The following example displays information about all storage-failover configurations:
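A sketch of this example; the prompt, node names, and output layout are illustrative:

```shell
cluster1::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------
node1          node2          true     Connected to node2
node2          node1          true     Connected to node1
```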
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover show-giveback command displays information about the giveback status of
high-availability (HA) partner aggregates. The command displays the following information when no
parameters are specified:
• Node name
• Partner aggregate name
• Giveback Status
You can specify additional parameters to display only the information that matches those parameters. For
example, to display information only about a particular aggregate, run the command with the -aggregate
aggregate_name parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is used, the command displays information about the giveback status of the
aggregates belonging to the HA partner of the specified node.
[-aggregate <text>] - Aggregate
If this parameter is used, the command displays information about the giveback status of the specified
aggregate.
[-giveback-status <text>, ...] - Aggregates Giveback State
If this parameter is used, the command displays information about the aggregates with the specified
giveback status.
[-destination <text>] - Destination for Giveback
If this parameter is used, the command displays information about the giveback status of the
aggregates whose destination after the giveback is the specified node.
● Examples
The following example displays information about giveback status on all nodes:
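A minimal sketch of this example (prompt illustrative):

```shell
# Display the giveback status of HA partner aggregates on all nodes.
cluster1::> storage failover show-giveback
```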
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover show-takeover command displays information about the takeover status of
nodes in a cluster. The command also displays the takeover status of aggregates being taken over. During
each phase of takeover, the takeover node and the node being taken over display their takeover status and
the status of the aggregates being taken over. The command displays the following information when no
parameters are specified:
• Node name
• Node takeover status - This contains descriptive information about the phase of takeover.
• Aggregate
• Aggregate takeover status - This contains the following information:
- Takeover status of the aggregate, such as "Done", "Failed", "In progress" and "Not attempted yet".
- Reason for an aggregate takeover failure.
- Corrective action, in case of an aggregate takeover failure.
You can specify additional parameters to display only the information that matches those parameters. For
example, to display information only about a particular node, run the command with the -node node_name
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If this parameter is specified, the command displays the specified fields for all nodes, in column style
output.
| [-instance ]}
If this parameter is specified, the command displays the same detailed information as for the -node
parameter, but for all nodes.
[-node {<nodename>|local}] - Node Name
If this parameter is specified, the command displays information about the takeover status of the
specified node, and the takeover status of the aggregates being taken over.
● Examples
The following example shows the takeover status of two nodes, nodeA and nodeB, in a High Availability
(HA) pair, when both are in normal mode; neither node has taken over its HA partner. In this case, there is
no takeover status for the aggregates.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when
nodeA is in the SFO phase of an optimized takeover of nodeB. In this case, nodeA does not have information
about the takeover status of nodeB's aggregates.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when
nodeA has completed the SFO phase of an optimized takeover of nodeB (but has not completed the CFO
phase of the optimized takeover). In this case, nodeA has information about the takeover status of nodeB's
aggregates.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when
nodeA has completed the SFO and CFO phases of an optimized takeover of nodeB. In this case, nodeA has
information about the takeover status of nodeB's aggregates. Since nodeB is not operational, a Remote
Procedure Call (RPC) error is indicated in the command output.
The following example shows the takeover status of two nodes, nodeA and nodeB, in an HA pair, when
nodeA has aborted the SFO phase of an optimized takeover of nodeB. In this case, nodeA does not have
information about the takeover status of nodeB's aggregates.
                              relocation status of the partner.
                              -
nodeB          Optimized takeover
               by partner aborted.
               aggr1          Failed: Destination node did
                              not online the aggregate on
                              time. To takeover the
                              remaining aggregates, run the
                              "storage failover takeover
                              -ofnode nodeB
                              -bypass-optimization true"
                              command. To giveback the
                              relocated aggregates, run the
                              "storage failover giveback
                              -ofnode nodeB" command.
               aggr2          Not attempted yet
               CFO aggregates Not attempted yet.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover takeover command initiates a takeover of the partner node's storage.
● Parameters
{ -ofnode {<nodename>|local} - Node to Takeover
This specifies the node that is taken over. It is shut down and its partner takes over its storage.
| -bynode {<nodename>|local}} - Node Initiating Takeover
This specifies the node that is to take over its partner's storage.
[-option <takeover option>] - Takeover Option
This optionally specifies the style of takeover operation. Possible values include the following:
• normal - Specifies a normal takeover operation; that is, the partner is given the time to close its
storage resources gracefully before the takeover operation proceeds. This is the default value.
• immediate - Specifies an immediate takeover. In an immediate takeover, the takeover operation is
initiated before the partner is given the time to close its storage resources gracefully. The use of this
option results in an immediate takeover that does not perform a clean shutdown. In the case of a
nondisruptive upgrade (NDU), this can result in an NDU failure.
If this option is specified, migration of data LIFs from the partner will be
delayed even if the -skip-lif-migration-before-takeover option is
not specified. If possible, migrate the data LIFs to another node prior to
specifying this option.
• allow-version-mismatch - If this value is specified, the takeover operation is initiated even if the
partner is running a version of software that is incompatible with the version running on the node. In
this case, the partner is given the time to close its storage resources gracefully before the takeover
operation proceeds. However, the takeover operation will not be allowed if the partner has higher
WAFL or RAID label versions. Use this value as part of a nondisruptive upgrade or downgrade
procedure.
• force - If this value is specified, the takeover operation is initiated even if the node detects an error
that normally prevents a takeover operation from occurring. This value is available only at the
advanced privilege level and higher.
The use of this option can potentially result in data loss. If the HA
interconnect is detached or inactive, or the contents of the failover
partner's NVRAM cards are unsynchronized, takeover is normally disabled.
Using the -force option enables a node to take over its partner's storage
despite the unsynchronized NVRAM, which can contain client data that
can be lost upon storage takeover.
The use of this parameter might result in the takeover proceeding even if
the node detects issues that can potentially make the takeover dangerous or
disruptive.
● Examples
The following example causes a node named node0 to initiate a negotiated optimized takeover of its
partner's storage:
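A minimal sketch of this example (prompt illustrative):

```shell
# node0 initiates a negotiated optimized takeover of its partner's
# storage; the partner closes its storage resources gracefully first.
cluster1::> storage failover takeover -bynode node0
```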
The following example causes a node named node0 to initiate an immediate takeover of its partner's
storage:
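Using the -option parameter described above, the immediate variant can be sketched as (prompt illustrative):

```shell
# node0 takes over immediately, without a clean shutdown of the partner.
cluster1::> storage failover takeover -bynode node0 -option immediate
```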
In a high-availability configuration, a storage controller monitors its partner's health using heartbeats. On
a storage controller failure, takeover starts after the surviving controller misses several heartbeats, which by
default can take up to 15 seconds after the failure occurs. When the hardware-assisted takeover (hwassist)
feature is enabled, the service processor of the storage controller (SP, RLM, or BMC) can detect various
failures, such as power loss, power cycle, and POST error. In these cases, the failover detection time is less
than a second, allowing the takeover to start much sooner. Use the "storage failover modify" command to
configure the hwassist feature.
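A sketch of enabling hwassist with the -hwassist, -hwassist-partner-ip, and -hwassist-partner-port parameters; the node name, address, and port here are illustrative (the address and port values mirror the hwassist show output later in this section):

```shell
# Enable hardware-assisted takeover on node0; the partner listens for
# hwassist alerts on the given IP address and UDP port.
cluster1::> storage failover modify -node node0 -hwassist true -hwassist-partner-ip 10.225.248.21 -hwassist-partner-port 4444
```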
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover hwassist show command displays information about the hardware-assisted
storage failover status on each node. By default, the command displays the following information:
• Node name.
• Partner node name.
• Whether hardware-assisted failover is enabled.
• IP address on which the local node receives hardware-assisted failover alerts.
• Port on which the local node receives hardware-assisted failover alerts.
• Hardware-assisted failover status.
• If the monitor is inactive, the reason it is inactive.
• If the monitor is inactive, the corrective action to make it active.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays information only about hwassist on the specified
nodes.
[-partner-name {<nodename>|local}] - Name of the Partner Node
If you specify this parameter, the command displays information only about hwassist on nodes with the
specified partner node.
[-enabled {true|false}] - Local Hardware Assist Enabled
If you specify this parameter, the command displays information only about hwassist on nodes with the
specified enabled state.
[-local-status <text>] - Local Node's Hwassist Status
If you specify this parameter, the command displays information only about hwassist on nodes with the
specified local status (active or inactive).
[-local-ip <text>] - IP Address on Which Local Node is Listening
If you specify this parameter, the command displays information only about hwassist on nodes with the
specified local IP address.
[-local-port <integer>] - Port on Which Local Node is Listening
If you specify this parameter, the command displays information only about hwassist on nodes with the
specified local UDP port.
[-local-inactive <text>] - Local Node's Hwassist Inactive Status Reason
If you specify this parameter, the command displays information only about hwassist on nodes with the
specified inactive reason.
[-local-action <text>] - Corrective Action on Local Node
If you specify this parameter, the command displays information only about hwassist on nodes with the
specified corrective action.
● Examples
The following example displays the hardware-assisted failover information for node cluster1-01 and its HA
partner node cluster1-02:
Inactive Reason: -
Corrective Action: -
Keep-Alive Status: healthy
cluster1-02
Partner: cluster1-01
Hwassist Enabled: true
Hwassist IP: 10.225.248.21
Hwassist Port: 4444
Monitor Status: active
Inactive Reason: -
Corrective Action: -
Keep-Alive Status: healthy
Related reference
storage failover modify (page 1125)
storage failover show (page 1128)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover hwassist test command tests the hardware-assist connectivity between the
two nodes in an HA pair. The test result can be one of the following:
• Hardware Assist is not initialized.
• HW assist is not supported.
• Partner is throttling alerts.
• Resource is busy.
• Hardware Assist h/w returned an error.
• No response from partner. Timed out.
• Unexpected abort.
• Partner has taken over.
• Interconnect is down between nodes.
• Partner is not booted up yet.
● Parameters
-node {<nodename>|local} - Node
This specifies the node from which a test alert is initiated.
● Examples
The following command issues a test alert from the node cluster1-01:
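A minimal sketch of this example (prompt illustrative):

```shell
# Issue a test alert from node cluster1-01 to verify hwassist
# connectivity with its HA partner.
cluster1::> storage failover hwassist test -node cluster1-01
```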
In a hardware-assisted environment, a node periodically receives various alerts from the partner node.
Data ONTAP keeps track of these alerts and maintains statistics about them. You can display or clear
these statistics.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover hwassist stats clear command clears the statistics information maintained
by Hardware Assist functionality.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which the statistics are to be cleared.
● Examples
The following example clears the hwassist statistics on the node cluster1-01:
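A minimal sketch of this example (prompt illustrative):

```shell
# Clear the hardware-assist alert statistics maintained on cluster1-01.
cluster1::> storage failover hwassist stats clear -node cluster1-01
```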
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage failover hwassist stats show command displays statistics about the hardware assist
alerts processed by a node. The command displays the following information for each alert:
• Locally enabled.
• Partner Inactive Reason.
• Alert type.
• Event that triggered the alert.
• The number of times the alert has been received.
• Whether takeover was possible on receiving the alert.
• The last time at which the alert was received.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
● Examples
The following example displays the hwassist statistics for the node ha1:
Node: ha1
Local Enabled: true
Partner Inactive Reason: -
The following example displays the hwassist statistics for the node ha1 where hardware assist hardware is
not supported.
Node: ha1
Local Enabled: false
Partner Inactive Reason: HW assist is not supported on partner.
This directory contains commands related to displaying and modifying internal options for storage failover of a node.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage failover internal-options show command displays the following information about
the storage failover configuration:
• Node name
• Whether automatic giveback is enabled
• Whether partner checking is enabled
• Takeover detection time, in seconds
• Whether takeover on failover is enabled
• Whether takeover on panic is enabled
• Whether takeover on reboot is enabled
• Whether hardware-assisted takeover is enabled
• IP address on which the partner node listens to the hardware-assisted takeover alerts
• Port on which the partner node listens to the hardware-assisted takeover alerts
• Whether takeover on short uptime is enabled (detailed view only)
• Short uptime interval, in seconds (detailed view only)
• Number of giveback attempts (detailed view only)
• Giveback attempt interval, in minutes (detailed view only)
• Whether status is propagated through SFO mailboxes (detailed view only)
• Status read interval, in seconds (detailed view only)
• Status write interval, in seconds (detailed view only)
• Hardware-assisted takeover retry count (detailed view only)
• Hardware-assisted takeover heartbeat period (detailed view only)
• Whether operator-initiated planned takeover is optimized
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-more ]
This parameter displays the following additional information:
• Node name
• Whether takeover on short uptime is enabled
• Short uptime interval, in seconds
• Number of giveback attempts
• Giveback attempt interval, in minutes
• Whether status is propagated through SFO mailboxes
• Status read interval, in seconds
• Status write interval, in seconds
● Examples
The following example displays detailed information about the internal options for storage failover on a
node named node2:
Node: node2
Auto Giveback Enabled: false
Check Partner Enabled: true
Takeover Detection Time (secs): 15
Takeover On Failure Enabled: true
Takeover On Panic Enabled: false
Takeover On Short Uptime Enabled: true
Short Uptime (secs): -
Number of Giveback Attempts: 3
Giveback Attempts Minutes: 10
Propagate Status Via Mailbox: true
Node Status Read Interval (secs): 5
Node Status Write Interval (secs): 5
Failover the Storage when Cluster Ports Are Down: -
Failover Interval when Cluster Ports Are Down (secs): -
Takeover on Reboot Enabled: true
Delay Before Auto Giveback (secs): 300
Hardware Assist Enabled: true
Partner's Hw-assist IP:
Partner's Hw-assist Port: 4444
Hw-assist Health Check Interval (secs): 180
Hw-assist Retry count: 2
HA mode: ha
Bypass Takeover Optimization Enabled: true
Mailbox disks are part of the root aggregate. High-availability-related information is written persistently
to the mailbox disks. This directory contains commands that display information about local and partner
mailbox disks.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage failover mailbox-disk show command lists the mailbox disks that are used by storage
failover. The command displays the following information:
• Node name
• Whether the mailbox disk is owned by the local node or by its partner
• Disk name
• Disk universal unique identifier (UUID)
This command is available only at the advanced privilege level and higher.
● Parameters
{ [-fields <fieldname>, ...]
If -fields <fieldname>,... is used, the command displays only the specified fields.
| [-instance ]}
If this parameter is used, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects the mailbox disks that are associated with the specified node.
[-location {local|partner|tertiary}] - Mailbox Owner
Selects the mailbox disks that have the specified relationship to the node.
[-diskindex <integer>] - Mailbox Disk Index
Selects the mailbox disk that has the specified index number.
[-diskname <text>] - Mailbox Disk Name
Selects the mailbox disks that match the specified disk name.
[-diskuuid <text>] - Mailbox Disk UUID
Selects the mailbox disks that match the specified UUID.
[-physical-location {local|partner|mediator}] - Mailbox Disk Physical Location
Selects the mailbox disks that match the specified physical location.
[-location-id <nvramid>] - System ID of the Node where the Disk is Attached
Selects the mailbox disks that match the specified location-id.
[-location-name <text>] - Mailbox Disk Location
Selects the mailbox disks that match the specified location-name.
● Examples
The following example displays information about the mailbox disks on a node named node1:
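An illustrative invocation (the asterisk in the prompt denotes the advanced privilege level; output omitted):

cluster1::*> storage failover mailbox-disk show -node node1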
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage failover progress-table show command displays status information about storage-failover
operations. This information is organized in a resource table. The command displays the following
information:
• Node name
• Resource-entry index number
• Resource-entry name
• Resource-entry state
• Resource-entry failure code
• Resource-entry time delta
This command is available only at the advanced privilege level and higher.
● Parameters
{ [-fields <fieldname>, ...]
If -fields <fieldname>, ... is used, the command displays only the specified fields.
| [-instance ]}
If this parameter is used, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects the status information for the specified node.
[-index <integer>] - Resource Table Index
Selects the status information for the specified index number.
[-entryname <text>] - Resource Table Entry Name
Selects the status information for the specified entry name.
[-state <text>] - Resource Table Entry State
Selects the status information for the specified state. Possible values include UP, START_RUNNING,
START_DONE, START_FAILED, STOP_RUNNING, STOP_FAILED, TAKEOVER_BARRIER, and
ONLY_WHEN_INITD.
[-failurecode <text>] - Entry Failure Code
Selects the status information for the specified failure code. Possible values include OK, FAIL,
FAIL_ALWAYS, HANG, PANIC, and VETO.
[-timedelta <integer>] - Entry Time Delta
Selects the status information for the specified time delta.
● Examples
The following example displays the entire storage-failover resource table:
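An illustrative invocation at the advanced privilege level (output omitted):

cluster1::*> storage failover progress-table show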
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage firmware download command downloads the ACP processor, disk, and shelf firmware to a
specified node. This command can also be used to download the disk qualification package.
Use the storage disk firmware update command to install downloaded disk firmware.
Use the system node run local storage download shelf command to install downloaded disk shelf
module firmware.
Use the system node run local storage download acp command to install downloaded ACP
processor firmware.
● Parameters
-node {<nodename>|local} - Node
This specifies the node to which the firmware is to be downloaded.
-package-url <text> - Package URL
This specifies the path to the firmware package.
The packaged ACP processor, disk, and shelf firmware files need to have ".AFW", ".LOD", and ".SFW" file
extensions, respectively.
The following URL protocols are supported: ftp, http, tftp and file. The file URL scheme can be used to
specify the location of the package to be fetched from an external device connected to the storage
controller. Currently, only USB mass storage devices are supported. The USB device is specified as file://
usb0/<filename>. The package must be present in the root directory of the USB mass storage device.
● Examples
The following example downloads a disk firmware package with the path ftp://example.com/fw/disk-
fw-1.2.LOD.zip to a node named node1:
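An illustrative invocation, using the node name and package URL from this example (output omitted):

cluster1::*> storage firmware download -node node1 -package-url ftp://example.com/fw/disk-fw-1.2.LOD.zip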
Related reference
storage disk firmware update (page 1105)
system node run (page 1409)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage firmware acp delete command deletes the specified ACP processor firmware file from all
nodes that are currently part of the cluster.
● Parameters
-filename <text> - Firmware Filename
Specifies the firmware file to delete.
● Examples
The following example deletes the ACP processor firmware file with the name ACP-IOM3.0150.AFW.FVF on
each node:
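An illustrative invocation, using the filename from this example (output omitted):

cluster1::*> storage firmware acp delete -filename ACP-IOM3.0150.AFW.FVF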
Related reference
storage firmware acp show (page 1160)
storage firmware acp rename (page 1159)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage firmware acp rename command renames the specified ACP processor firmware file on each
node.
● Parameters
-oldname <text> - Old Filename
This parameter specifies the firmware file to rename.
● Examples
The following example renames the ACP processor firmware file with the name ACP-IOM3.0150.AFW.FVF to
ACP-IOM3.AFW.FVF on each node:
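An illustrative invocation, using the filenames from this example (output omitted):

cluster1::*> storage firmware acp rename -oldname ACP-IOM3.0150.AFW.FVF -newname ACP-IOM3.AFW.FVF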
Related reference
storage firmware acp show (page 1160)
storage firmware acp delete (page 1159)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage firmware acp show command displays the ACP processor firmware files present on each
node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the files that match the specified node name.
[-filename <text>] - Storage Firmware File
Selects the files that match the specified filename.
● Examples
The following example displays the ACP processor firmware files on each node:
Node: Node1
Node: Node2
ACP-IOM3.AFW
ACP-IOM6.0210.AFW
ACP-IOM6.0210.AFW.FVF
8 entries were displayed.
Related reference
storage firmware acp delete (page 1159)
storage firmware acp rename (page 1159)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage firmware disk delete command deletes the specified disk firmware file on each node.
● Parameters
-filename <text> - Storage Firmware Filename
Specifies the firmware file to delete.
● Examples
The following example deletes the disk firmware file with the name X262_SMOOST25SSX.NA06.LOD on each
node:
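An illustrative invocation, using the filename from this example (output omitted):

cluster1::*> storage firmware disk delete -filename X262_SMOOST25SSX.NA06.LOD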
Related reference
storage firmware disk show (page 1162)
storage firmware disk rename (page 1161)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage firmware disk rename command renames the specified disk firmware file on each node.
● Parameters
-oldname <text> - Old Filename
This parameter specifies the firmware file to rename.
-newname <text> - New Filename
This parameter specifies the new name of the firmware file.
● Examples
The following example renames the disk firmware file with the name X262_SMOOST25SSX.NA06.LOD to
X262_SMOOST25SSX.LOD on each node:
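An illustrative invocation, using the filenames from this example (output omitted):

cluster1::*> storage firmware disk rename -oldname X262_SMOOST25SSX.NA06.LOD -newname X262_SMOOST25SSX.LOD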
Related reference
storage firmware disk show (page 1162)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage firmware disk show command displays the disk firmware files present on each node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the files that match the specified node name.
[-filename <text>] - Storage Firmware File
Selects the files that match the specified filename.
● Examples
The following example displays the disk firmware files on each node:
Node: Node1
Node: Node2
Related reference
storage firmware disk delete (page 1161)
storage firmware disk rename (page 1161)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage firmware shelf delete command deletes the specified shelf firmware file from all nodes
that are currently part of the cluster.
● Parameters
-filename <text> - Storage Firmware Filename
Specifies the firmware file to delete.
● Examples
The following example deletes the shelf firmware file with the name IOM12.0210.SFW on each node:
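An illustrative invocation, using the filename from this example (output omitted):

cluster1::*> storage firmware shelf delete -filename IOM12.0210.SFW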
Related reference
storage firmware shelf show (page 1164)
storage firmware shelf rename (page 1163)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage firmware shelf rename command renames the specified shelf firmware file on each node.
● Parameters
-oldname <text> - Old Filename
This parameter specifies the firmware file to rename.
-newname <text> - New Filename
This parameter specifies the new name of the firmware file.
● Examples
The following example renames the shelf firmware file with the name IOM12.0210.SFW to IOM12.000.SFW
on each node:
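An illustrative invocation, using the filenames from this example (output omitted):

cluster1::*> storage firmware shelf rename -oldname IOM12.0210.SFW -newname IOM12.000.SFW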
Related reference
storage firmware shelf show (page 1164)
storage firmware shelf delete (page 1163)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage firmware shelf show command displays the shelf firmware files present on each node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the files that match the specified node name.
[-filename <text>] - Storage Firmware File
Selects the files that match the specified filename.
● Examples
The following example displays the shelf firmware files on each node:
Node: Node1
Node: Node2
Related reference
storage firmware shelf delete (page 1163)
storage firmware shelf rename (page 1163)
The storage iscsi-initiator commands configure the list of iSCSI targets. These commands are only
supported on high-availability shared-nothing virtualized platforms.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage iscsi-initiator add-target command adds an iSCSI target to a node's list of targets.
This command is only supported on high-availability shared-nothing virtualized platforms.
● Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node to which the iSCSI target will be added.
-label <text> - User Defined Identifier
Specifies a label for the target to be added.
-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner} - Target Type
Specifies the type of the target. It is used by the node to determine how to use the LUNs. The target
types are:
• partner - The partner target should belong to the node's HA partner. This allows the node to access its
partner's disks.
• mailbox - A mailbox target's LUNs are used exclusively as HA mailboxes.
• external - External targets' LUNs can be used by the node but do not play a role in HA.
• dr_auxiliary - The DR auxiliary target for MetroCluster over IP. Not a valid target type for the add-target
command.
• dr_partner - The DR partner target for MetroCluster over IP. Not a valid target type for the add-target
command.
-target-portal <text> - Target Portal
Specifies the target's IP address and listening TCP port. The port is not required if it is the default iSCSI
port (3260). Examples of correct target portals are 10.0.0.2 and 10.0.0.2:860.
-target-name <text> - iSCSI Name
Specifies the iSCSI target name such as an IQN (iSCSI qualified name).
[-status-admin {down|up}] - Administrative Status (default: up)
Specifies whether the initial administrative status of the connection is up or down. The default
setting is up.
● Examples
The following example adds and connects to an iSCSI target from the specified node.
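An illustrative invocation; the label and IQN below are hypothetical placeholders, and the portal address reuses the one documented above (output omitted):

cluster1::*> storage iscsi-initiator add-target -node node1 -label mailbox01 -target-type mailbox -target-portal 10.0.0.2 -target-name iqn.1992-08.com.example:target01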
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage iscsi-initiator connect command connects a node to the specified target. This
command is only supported on high-availability shared-nothing virtualized platforms.
● Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node to which the iSCSI target will be connected.
[-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner}] - Target Type
Selects targets with the specified target type.
-label <text> - User Defined Identifier
Specifies the label of the target to connect to.
● Examples
The following example connects the specified node to an iSCSI target.
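An illustrative invocation; the label below is a hypothetical placeholder (output omitted):

cluster1::*> storage iscsi-initiator connect -node node1 -label mailbox01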
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage iscsi-initiator disconnect command disconnects a node from the specified target.
This command is only supported on high-availability shared-nothing virtualized platforms.
● Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node from which the iSCSI target will be disconnected.
[-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner}] - Target Type
Selects targets with the specified target type.
-label <text> - User Defined Identifier
Specifies the label of the target to disconnect from.
● Examples
The following example disconnects the specified node from an iSCSI target.
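An illustrative invocation; the label below is a hypothetical placeholder (output omitted):

cluster1::*> storage iscsi-initiator disconnect -node node1 -label mailbox01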
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage iscsi-initiator remove-target command removes an iSCSI target from a node's list of
targets. This command is only supported on high-availability shared-nothing virtualized platforms.
● Parameters
-node {<nodename>|local} - Node
Specifies the name of the Data ONTAP node from which the iSCSI target will be removed.
[-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner}] - Target Type
Selects targets with the specified target type.
-label <text> - User Defined Identifier
Specifies the label of the target to be removed.
● Examples
The following example removes an iSCSI target from the specified node's list of targets.
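An illustrative invocation; the label below is a hypothetical placeholder (output omitted):

cluster1::*> storage iscsi-initiator remove-target -node node1 -label mailbox01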
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage iscsi-initiator show command displays the list of iSCSI targets configured for each Data ONTAP
node in the cluster. This command is only supported on high-availability shared-nothing virtualized
platforms.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Represents the name of the Data ONTAP node for which information is to be displayed. If this parameter
is not specified, the command displays information about all nodes in the cluster.
[-target-type {external|mailbox|partner|partner2|dr_auxiliary|dr_partner}] - Target Type
Selects targets with the specified target type.
[-label <text>] - User Defined Identifier
Selects targets with the specified label.
[-target-portal <text>] - Target Portal
Selects targets with the specified portal.
[-target-name <text>] - iSCSI Name
Selects targets with the specified target name.
[-status-admin {down|up}] - Administrative Status
Selects targets with the specified administrative status.
● Examples
The following example displays the list of iSCSI targets for each node in the cluster.
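An illustrative invocation (output omitted):

cluster1::*> storage iscsi-initiator show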
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage path quiesce command quiesces I/O on one path to a LUN. It can quiesce the given entire
path immediately, or monitor the given path against an error threshold before quiescing it. After the I/O
has been quiesced, no new I/O is sent on the path unless the storage path resume command is issued to
continue I/O.
● Parameters
-node {<nodename>|local} - Node name
The name of the clustered node for which information is being displayed.
-initiator <initiator name> - Initiator Port
Initiator port that the clustered node uses.
-target-wwpn <wwpn name> - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
[-lun-number <integer>] - LUN Number
Logical Unit number. The range is: [0...65535]. If this parameter is not specified, Data ONTAP quiesces
the entire path to an array.
● Examples
The following example suspends I/O between node vbv3170f1b, port 0a and the array port
50001fe1500a8669, LUN 1.
node::> storage path quiesce -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669 -lun-number 1
The following example suspends I/O immediately between node vbv3170f1b, port 0a and the array port
50001fe1500a8669.
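The command line for this example is missing from the source; based on the first example and the parameters above (omitting -lun-number quiesces the entire path), it would take the form:

node::> storage path quiesce -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669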
The following example suspends I/O between node vbv3170f1b, port 0a and the array port
50001fe1500a8669 after 10 or more errors occur within a duration of 5 minutes.
node::> storage path quiesce -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669 -path-failure-threshold 10 -wait-duration 5
Related reference
storage path resume (page 1169)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage path resume command continues I/O flow to an array LUN on a path or the entire path that
was previously quiesced. It also disables the path failures monitoring feature, if it was enabled using the
storage path quiesce -path-failure-threshold count command.
● Parameters
-node {<nodename>|local} - Node name
The name of the clustered node for which information is being displayed.
-initiator <initiator name> - Initiator Port
Initiator port that the clustered node uses.
-target-wwpn <wwpn name> - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
[-lun-number <integer>] - LUN Number
Logical Unit number. The range is: [0...65535]. If this parameter is not specified, Data ONTAP resumes
the entire path to an array.
● Examples
The following example resumes I/O between node vbv3170f1b, port 0a and the array port
50001fe1500a8669, LUN 1
node::> storage path resume -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669 -lun-number 1
The following example resumes I/O between node vbv3170f1b, port 0a and the array port
50001fe1500a8669
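The command line for this example is missing from the source; based on the first example (omitting -lun-number resumes the entire path), it would take the form:

node::> storage path resume -node vbv3170f1b -initiator 0a -target-wwpn 50001fe1500a8669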
Related reference
storage path quiesce (page 1168)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage path show command displays path based statistics. The default command shows:
• Node name
• Initiator port
• Target port
• Target IQN
• TPGN
• Port speeds
• Path I/O in Kbytes/sec
• IOPs
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-array ]
Using this option displays:
• Array name
• Target port
• Target IQN
• Target I/O in Kbytes/sec
• Target side switch port
• Path I/O in Kbytes/sec
• Initiator side switch port
• Initiator I/O in Kbytes/sec
• Initiator port
| [-by-target ]
Using this option displays the same information as the array option, but grouped by target port.
| [-detail ]
Using this option displays the same information as the array and by-target options, but adds the
following:
• Target IOPs
• Target LUNs
• Path IOPs
• Path errors
• Path quality
• Path LUNs
• Initiator IOPs
• Initiator LUNs
| [-switch ]
Using this option adds switch port information to the default display.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-array-name <array name>] - Array Name
Name of the storage array that is connected to the cluster.
[-target-wwpn <text>] - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
[-initiator <text>] - Initiator Port
Initiator port that the clustered node uses.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Switch port connected to the clustered node.
[-tpgn <integer>] - Target Port Group Number
TPGN refers to the target port group to which the target port belongs. A target port group is a set of
target ports which share the same LUN access characteristics and failover behaviors.
[-port-speed <text>] - Port Speed
Port Speed of the specified port.
[-path-io-kbps <integer>] - Kbytes of I/O per second on Path (Rolling Average)
Rolling average of Kbytes of I/O per second on the path.
[-path-iops <integer>] - Number of IOPS on Path (Rolling Average)
Rolling average of I/O operations per second on the path.
[-initiator-io-kbps <integer>] - Kbytes of I/O per second on Initiator (Rolling Average)
Rolling average of Kbytes of I/O per second on the initiator port.
[-initiator-iops <integer>] - Number of IOPS on Initiator (Rolling Average)
Rolling average of I/O operations per second on the initiator port.
[-target-io-kbps <integer>] - Kbytes of I/O per second to Target (Rolling Average)
Rolling average of Kbytes of I/O per second on the target port.
[-target-iops <integer>] - Number of IOPS to Target (Rolling Average)
Rolling average of I/O operations per second on the target port.
[-target-side-switch-port <text>] - Target Side Switch Port
Switch port connected to the array.
[-path-link-errors <integer>] - Link Error count on path
Fibre Channel link error count.
[-path-quality <integer>] - Percentage of weighted error threshold
A number representing the threshold of errors that is allowed on the path. Path quality is a weighted
error value. When the error weight of a path exceeds the threshold, I/O is routed to a different path.
● Examples
The following example shows the default display.
The following example shows how the information is displayed with the array option.
The following example shows how the information is displayed when grouped by target.
The following example shows how the information is displayed with the switch option.
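Illustrative invocations for the four displays above (output omitted):

node::> storage path show
node::> storage path show -array
node::> storage path show -by-target
node::> storage path show -switch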
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage path show-by-initiator command displays path based statistics. The output is similar to
the storage path show command but the output is listed by initiator.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-initiator <text>] - Initiator Port
Initiator port that the clustered node uses.
[-target-wwpn <text>] - Target Port
Target World Wide Port Name. Port on the storage array that is being used.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Switch port connected to the clustered node.
[-target-side-switch-port <text>] - Target Side Switch Port
Switch port connected to the array.
[-array-name <array name>] - Array Name
Name of the storage array that is connected to the cluster.
[-tpgn <integer>] - Target Port Group Number
TPGN refers to the target port group to which the target port belongs. A target port group is a set of
target ports which share the same LUN access characteristics and failover behaviors.
[-port-speed <text>] - Port Speed
Port Speed of the specified port.
[-path-io-kbps <integer>] - Kbytes of I/O per second on Path (Rolling Average)
Rolling average of Kbytes of I/O per second on the path.
[-path-iops <integer>] - Number of IOPS on Path (Rolling Average)
Rolling average of I/O operations per second on the path.
[-initiator-io-kbps <integer>] - Kbytes of I/O per second on Initiator (Rolling Average)
Rolling average of Kbytes of I/O per second on the initiator port.
[-initiator-iops <integer>] - Number of IOPS on Initiator (Rolling Average)
Rolling average of I/O operations per second on the initiator port.
[-target-io-kbps <integer>] - Kbytes of I/O per second to Target (Rolling Average)
Rolling average of Kbytes of I/O per second on the target port.
[-target-iops <integer>] - Number of IOPS to Target (Rolling Average)
Rolling average of I/O operations per second on the target port.
● Examples
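An illustrative invocation (output omitted):

node::> storage path show-by-initiator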
Related reference
storage path show (page 1170)
The storage pool command family provides the ability to create and manage SSD storage pools. Storage
pools are collections of solid-state disks (SSDs) that can be shared between multiple Flash Pool aggregates
and between two nodes of an HA pair.
Note
Storage pools are not supported in All-Flash aggregates.
The use of SSD storage pools is optional. Flash Pool aggregates can use whole SSDs, or they can use SSD
capacity from storage pools. When multiple aggregates share the SSD capacity from an SSD storage pool,
there is a reduction in parity overhead and you have the ability to share high SSD performance across
multiple aggregates and across both nodes of an HA pair. A storage pool contains a minimum of 3 and a
maximum of 29 SSDs.
When an SSD storage pool is created using the storage pool create command, the SSDs are divided into
four equal-sized partitions. The capacity of the group of disks is expressed in terms of allocation units. Each
allocation unit is 25% of the capacity. The storage pool initially contains unprovisioned allocation units
which can be displayed using the storage pool show-available-capacity command.
In an HA configuration, each node takes ownership of two allocation units representing 50% of the total
capacity. If desired, the ownership of the allocation units can be adjusted using the storage pool
reassign command before the capacity is used in an aggregate.
Storage pools do not have an associated RAID type. The RAID type is determined when an allocation unit is
added to an aggregate using the storage aggregate add-disks command. A storage pool contains four
allocation units, and they might be used in up to four aggregates. You can add multiple allocation units to a
Flash Pool aggregate to increase its cache capacity.
The space in an SSD storage pool can be expanded by adding SSDs to the storage pool using the storage
pool add command. The size of each of the four allocation units will expand by 25% of the capacity of the
disks being added. For example, if an SSD with a usable size of 745 GB is added to a storage pool that is part
of four aggregates, each aggregate will grow its cache or usable capacity by 186.25 GB. If a different
allocation is desired, create a new SSD storage pool using the storage pool create command.
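The 186.25 GB figure in the example above follows directly from the four-way split: each allocation unit grows by 25% of the added disk's usable capacity. A minimal sketch of that arithmetic (the helper name is hypothetical, not an ONTAP API):

```python
# Illustrative arithmetic for growing a storage pool, per the example
# above: adding one SSD grows each of the four allocation units by
# 25% of the added disk's usable capacity.

def au_growth_per_aggregate(added_disk_usable_gb):
    return added_disk_usable_gb / 4

growth = au_growth_per_aggregate(745)
# 745 GB / 4 = 186.25 GB of additional cache or usable capacity for
# each aggregate that uses one allocation unit from this pool.
```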
All storage pool available capacity can be provisioned into aggregates. Available capacity within a storage
pool is not used to protect against a disk failure. In the case of an SSD failure or predicted failure, Data
ONTAP moves a suitable whole spare SSD from outside the storage pool into the storage pool and begins the
recovery process (using either reconstruction or Rapid RAID Recovery, whichever is appropriate).
Related reference
storage pool create (page 1178)
storage pool show-available-capacity (page 1184)
storage pool reassign (page 1180)
storage aggregate add-disks (page 971)
storage pool add (page 1176)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool add command increases the total capacity of an existing storage pool by adding the
specified SSDs to the storage pool. The disks are split into four equal partitions and added to each of the
allocation units of the storage pool. If any allocation units from the storage pool have already been
allocated to an aggregate, the cache capacity of that aggregate is increased automatically.
For example, if an SSD with a usable size of 745 GB is added to a storage pool that is part of four
aggregates, each aggregate will grow its cache capacity by 186.2 GB. If a different allocation is desired,
create a new storage pool using the storage pool create command.
Note
Storage pools are not supported in all-flash aggregates.
● Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool to which disks are to be added.
{ [-disk-count <integer>] - Number of Disks to Add in Storage Pool
This parameter specifies the number of disks that are to be added to the storage pool. The disks to be
added come from the pool of spare disks.
[-nodes {<nodename>|local}, ...] - Nodes From Which Spares Should be Selected
This parameter specifies a list of nodes from which SSD disks are selected for addition to the storage
pool. If this parameter is not specified, disks to be added to the storage pool can be selected from both
the nodes sharing the storage pool. Use this parameter to restrict the selection of spare disks to one
particular node.
| [-disk-list <disk path name>, ...]} - List of Spare Disks
This parameter specifies a list of disks to be added to the storage pool. In an HA configuration, SSDs
being added to a storage pool can be owned by either node in the HA pair.
{ [-quiet [true]] - Confirmations off (privilege: advanced)
When set to true, this parameter specifies the operation should be executed without pausing for
confirmation.
| [-simulate [true]]} - Simulate Storage Pool Addition
When set to true, this parameter specifies the operation should be performed as a simulation. The
command reports which aggregates would grow automatically as a result of adding the disks to the
storage pool. The disks are not added to the storage pool.
● Examples
In this example, the user requests a report detailing the changes that would occur if a new disk is added to
the storage pool SP1. In this case, 186.2 GB of cache is added to the Flash Pool aggregates
nodeA_flashpool_1 and nodeB_flashpool_1. There are two unprovisioned allocation units in the storage pool
and therefore the storage pool available capacity also grows by 372.5 GB.
This operation will result in capacity being allocated in the following way:
Check via simulation whether there is available capacity within allocation units in storage pool SP1 for
allocation units that are provisioned into aggregates.
Info: This operation results in capacity being allocated in the following way:
The following example adds one disk to a storage pool named SP1. The spare disks are selected from the
local node, its partner, or both, based on spare availability.
Related reference
storage pool create (page 1178)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool create command creates an SSD storage pool using a given list of spare SSDs.
When a storage pool is created, Data ONTAP splits the capacity provided by the SSDs into four equally-sized
allocation units. In an HA configuration, two allocation units (containing 50% of the total capacity) are
assigned to each node in the HA pair. This assignment can be modified using the storage pool
reassign command.
After the storage pool is created, its allocation units can be provisioned into Flash Pool aggregates using the
storage aggregate add-disks command and the -storage-pool parameter.
Note
Storage pools are not supported in All-Flash aggregates.
● Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the name of the storage pool that is to be created. The SSDs are partitioned and
placed into the new storage pool.
{ [-nodes {<nodename>|local}, ...] - Nodes Sharing the Storage Pool
This parameter specifies a list of nodes from which SSD disks are selected to create the storage pool. If
two nodes are specified, they must be in an HA configuration. Spare disks are selected from either
node, its partner, or both. If this parameter is not specified, the storage pool is created by selecting
disks from the node where the command is run, its partner, or both.
-disk-count <integer> - Number of Disks in Storage Pool
This parameter specifies the number of disks that are to be included in the storage pool. The disks in this
newly created storage pool come from the pool of spare disks. The smallest disks in this pool are added
to the storage pool first, unless you specify the -disk-size parameter.
[-disk-size {<integer>[KB|MB|GB|TB|PB]}] - Disk Size
This parameter specifies the size of the disks on which the storage pool is to be created. Disks with a
usable size between 95% and 105% of the specified size are selected.
| -disk-list <disk path name>, ...} - Disk List for Storage Pool Creation
This parameter specifies a list of SSDs to be included in the new storage pool. The SSDs must be spare
disks and can be owned by either node in an HA pair.
[-simulate [true]] - Simulate Storage Pool Creation
This option simulates the storage pool creation and prints the allocation unit size that would be used for
the storage pool.
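The `-disk-size` selection rule above (disks with a usable size between 95% and 105% of the specified size qualify) can be sketched as a simple filter. The function name is illustrative, not an ONTAP API:

```python
# Sketch of the -disk-size selection rule described above: a disk
# qualifies if its usable size falls within 95% to 105% of the
# requested size. Illustrative helper, not ONTAP internals.

def matches_disk_size(usable_gb, requested_gb):
    return 0.95 * requested_gb <= usable_gb <= 1.05 * requested_gb

matches_disk_size(745, 745)  # an exact match qualifies
matches_disk_size(700, 745)  # 700 GB is below 707.75 GB (95% of 745),
                             # so it is not selected
```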
● Examples
The following example creates a storage pool named SP1. The storage pool contains three SSDs; the
spare disks are selected from the local node, its partner, or both, based on spare availability.
The following example creates a storage pool named SP2. The storage pool contains three SSDs; the
spare disks are selected from node0, its partner node1, or both, based on spare availability.
The following example creates a storage pool named SP3 from four SSDs using disk list.
cluster1::> storage pool create -storage-pool SP3 -disk-list 1.0.13, 1.0.15, 1.0.17, 1.0.19
Related reference
storage pool reassign (page 1180)
storage aggregate add-disks (page 971)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool delete command deletes an existing SSD storage pool. At the end of the operation,
the SSDs are converted back to spare disks.
● Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool that you want to delete. You can delete the storage pool only
if all of the allocation units in the storage pool are available.
● Examples
Verify that storage pool SP3 is ready for deletion by confirming it has four available allocation units and
then delete it.
Warning: Are you sure you want to delete storage pool "SP3"? {y|n}: y
[Job 313] Job succeeded: storage pool delete job for "SP3" completed successfully
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool reassign command changes the ownership of unprovisioned (available) storage pool
allocation units from one HA partner to the other for an existing storage pool.
● Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool within which available capacity is reassigned from one node to
another.
-from-node {<nodename>|local} - Reassign Available Capacity from This Node
This parameter specifies the name of the node that currently owns the allocation units.
-to-node {<nodename>|local} - Reassign Available Capacity to This Node
This parameter specifies the name of the node that will now own the allocation units.
-allocation-units <integer> - Allocation Units
This parameter specifies the number of allocation units to be reassigned.
● Examples
Move an available allocation unit from node-b to node-a in preparation for provisioning the allocation units
on node-a.
cluster1::*> storage pool reassign -storage-pool SP2 -from-node node-b -to-node node-a -allocation-units 1
[Job 310] Job succeeded: storage pool reassign job for "SP2" completed successfully
Related reference
storage pool show-available-capacity (page 1184)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool rename command changes the name of the storage pool.
● Parameters
-storage-pool <storage pool name> - Storage Pool Name
This parameter specifies the storage pool name.
-new-name <storage pool name> - New Name of the Storage Pool
This parameter specifies the new name of the storage pool.
● Examples
Renaming storage pool "sp1" to "sp2"
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool show command displays information about SSD storage pools in the cluster. By default,
the command displays information about all storage pools in the cluster. You can specify parameters to limit
the output to a specific set of storage pools.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Storage Pool Name
Selects the storage pools that match this parameter value.
[-storage-pool-uuid <UUID>] - UUID of Storage Pool
Selects the storage pools that match this parameter value.
[-nodes {<nodename>|local}, ...] - Nodes Sharing the Storage Pool
Selects the storage pools that match this parameter value.
In an HA pair, either node name may be specified.
[-disk-count <integer>] - Number of Disks in Storage Pool
Selects the storage pools that match this parameter value.
[-allocation-unit-size {<integer>[KB|MB|GB|TB|PB]}] - Allocation Unit Size
Selects the storage pools that match this parameter value.
Allocation units represent the unit of storage allocated to aggregates from this storage pool.
[-allocation-unit-data-size-raid4 {<integer>[KB|MB|GB|TB|PB]}] - Allocation Unit Data Size for
RAID4
This parameter shows the amount of additional data capacity provided if an allocation unit from this
storage pool was added to an aggregate with -raidtype as raid4.
[-allocation-unit-data-size-raid-dp {<integer>[KB|MB|GB|TB|PB]}] - Allocation Unit Data Size
for RAID-DP
This parameter shows the amount of additional data capacity provided if an allocation unit from this
storage pool was added to an aggregate with -raidtype as raid_dp.
[-allocation-unit-data-size-raid-tec {<integer>[KB|MB|GB|TB|PB]}] - Allocation Unit Data Size
for RAID-TEC
This parameter shows the amount of additional data capacity provided if an allocation unit from this
storage pool was added to an aggregate with -raidtype as raid_tec.
[-storage-type <SSD>] - Storage Type
Selects the storage pools that match this parameter value.
Only the SSD type is supported for this version of Data ONTAP.
[-pool-usable-size {<integer>[KB|MB|GB|TB|PB]}] - Storage Pool Usable Size
Selects the storage pools that match this parameter value.
The pool-usable-size is the sum of the capacities of the allocation units that are assigned to nodes
but not yet provisioned. The amount of pool-usable-size that is contributed to the cache or usable
capacity of an aggregate depends upon the RAID type used when provisioning the allocation units.
[-pool-total-size {<integer>[KB|MB|GB|TB|PB]}] - Storage Pool Total Size
Selects the storage pools that match this parameter value.
The pool-total-size is the sum of the capacities of allocation units belonging to this storage pool.
[-is-healthy {true|false}] - Is Pool Healthy?
Selects the storage pools that match this parameter value.
For storage pools with is-healthy false, the unhealthy-reason parameter provides more
information.
is-healthy must be true to provision allocation units from a storage pool into an aggregate.
[-pool-state <State of the Storage Pool>] - State of the Storage Pool
Selects the storage pools that match this parameter value. Possible states are:
• normal - the storage pool is operating normally.
● Examples
Display the storage pools in the cluster.
The following example displays the details of a storage pool named SmallSP. Only one of its four allocation
units has been provisioned, so 75% of its size is available (usable).
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool show-aggregate command displays allocation information for SSD storage pools in
the cluster. The command output depends upon the parameter or parameters specified with the command.
If no parameters are specified, the command displays information about allocations of all storage pools in
the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Name of Storage Pool
Selects the storage pools that match this parameter value.
[-aggregate <aggregate name>] - Aggregate
Selects the storage pools that match this parameter value.
[-capacity {<integer>[KB|MB|GB|TB|PB]}] - Capacity
Selects the storage pools that match this parameter value.
Capacity includes space provided by data and parity portions of each allocation unit. Only the data
portions of each allocation unit contribute to the cache or usable capacity of Flash Pool.
[-allocated-unit-count <integer>] - Number of AU's Assigned to This Aggregate
Selects the storage pools that match this parameter value.
[-original-owner <text>] - Original Owner Name
Selects the storage pools that match this parameter value.
[-node {<nodename>|local}] - Node
Selects the storage pools that match this parameter value.
● Examples
Display information about the aggregate or aggregates using a storage pool called SP2:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool show-available-capacity command displays information about available capacity
in SSD storage pools on each node in the cluster. The command output depends upon the parameter or
parameters specified with the command. If no parameters are specified, the command displays information
about available capacities in all shared pools in the cluster.
Storage pool available capacity is data storage space that has not yet been provisioned into a Flash Pool
aggregate. Allocation units can be provisioned into aggregates using the storage aggregate add-disks
command and the -storage-pool parameter.
Note
All storage pool available capacity can be provisioned into aggregates. Available capacity within a
storage pool is not used to protect against a disk failure. In the case of an SSD failure or predicted
failure, Data ONTAP moves a suitable whole SSD spare disk from outside the storage pool into the
storage pool and begins the recovery process (using either reconstruction or Rapid RAID Recovery,
whichever is appropriate).
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Name of Storage Pool
Selects the available capacities that match this parameter value.
[-node {<nodename>|local}] - Node
Selects the available capacities that match this parameter value.
[-allocation-unit-size {<integer>[KB|MB|GB|TB|PB]}] - Allocation Unit Size
Selects the available capacities that match this parameter value.
Allocation units are the units of storage capacity that are available to be provisioned into aggregates.
[-storage-type <SSD>] - Type of Storage Pool
Selects the available capacities that match this parameter value. Only the SSD type is supported for this
version of Data ONTAP.
[-allocation-unit-count <integer>] - Number of Allocation Units Available
Selects the available capacities that match this parameter value.
Allocation units are the units of storage capacity that are available to be provisioned into aggregates.
Each allocation unit is one minimum unit of allocation (MUA) and its capacity is given as
allocation-unit-size.
[-syncmirror-pool <text>] - SyncMirror Pool
Selects the available capacities that match this parameter value.
The SyncMirror pool of an allocation unit must match the SyncMirror pool of the disks of the aggregate
when adding allocation units into an aggregate.
Mirroring of aggregates that are provisioned from SSD storage pools is not supported.
[-available-size {<integer>[KB|MB|GB|TB|PB]}] - Total Usable Available Size
Selects the available capacities that match this parameter value.
The available-size is the sum of the capacities of the allocation units that are assigned but not yet
provisioned. The amount of available-size that is contributed to the cache or usable capacity of an
aggregate depends upon the RAID type used when provisioning the allocation units.
● Examples
In this example, two nodes of an HA pair share available capacity from two storage pools, SP1 and SP2.
There are a total of 5 allocation units that have not yet been provisioned.
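Following the definition above, available capacity is simply the sum of allocation units that are assigned to a node but not yet provisioned. A toy model of that computation (illustrative names only, not ONTAP internals):

```python
# Toy model of storage pool show-available-capacity: available-size is
# the sum of allocation units assigned to a node but not yet
# provisioned. Hypothetical data structure, not an ONTAP API.

def available_size(allocation_units, node):
    return sum(au["size_gb"] for au in allocation_units
               if au["owner"] == node and not au["provisioned"])

aus = [
    {"size_gb": 186.25, "owner": "node-a", "provisioned": True},
    {"size_gb": 186.25, "owner": "node-a", "provisioned": False},
    {"size_gb": 186.25, "owner": "node-b", "provisioned": False},
    {"size_gb": 186.25, "owner": "node-b", "provisioned": False},
]
available_size(aus, "node-b")  # 372.5 GB: two unprovisioned AUs
```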
Related reference
storage aggregate add-disks (page 971)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage pool show-disks command displays information about disks in storage pools in the cluster.
The command output depends on the parameter or parameters specified with the command. If no
parameters are specified, the command displays information about all disks in all storage pools in the
cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-storage-pool <storage pool name>] - Name of Storage Pool
Selects the storage pools that match this parameter value.
[-disk <disk path name>] - Name of the disk
Selects the storage pools with the disks that match this parameter value.
[-disk-type {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS | SSD | VMDISK | SSD-NVM | SSD-CAP |
VMLUN | VMLUN-SSD}] - Disk Type
Selects the storage pools with the disks that match this parameter value. Only the SSD type is supported
for this version of Data ONTAP.
[-usable-size {<integer>[KB|MB|GB|TB|PB]}] - Disk Usable Size
Selects the storage pools with the disks that match this parameter value.
In this command, usable-size refers to the sum of the capacities of all of the partitions on the disk.
[-total-size {<integer>[KB|MB|GB|TB|PB]}] - Total Size
Selects the storage pools with the disks that match this parameter value.
[-node-list <nodename>, ...] - List of Nodes
Selects the storage pools with the disks that are visible to all of the specified nodes.
● Examples
Show information about SSDs in a storage pool called SP2.
Storage
Disk Type Usable Size Total Size
-------- ------- ----------- -----------
1.0.16 SSD 745.0GB 745.2GB
1.0.18 SSD 745.0GB 745.2GB
1.0.20 SSD 745.0GB 745.2GB
1.0.22 SSD 745.0GB 745.2GB
The storage port command family manages the storage ports of the cluster. The command set allows you
to view the current status of all storage ports. You can also enable, disable, reset, or rescan a given port or
reset a device behind a port.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage port disable command disables a specified storage port.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be disabled.
[-force [true]] - Force (privilege: advanced)
Use this optional parameter to force the disabling of the storage port. The parameter can be used to
disable the specified port even if some devices can only be accessed using this port. Note that doing so
might cause multiple device failures.
● Examples
The following example disables port 0a on node node1:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage port enable command enables a specified storage port.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be enabled.
● Examples
The following example enables port 0a on node node1:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage port modify command modifies the attributes of a storage port.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the name of the node.
-port <text> - Port
This parameter specifies the name of the port.
[-mode {storage|network}] - Mode
This parameter specifies the usage mode for an Ethernet port - storage or network use. The mode
parameter only applies to Ethernet ports that are not dedicated to storage connectivity.
[-force [true]] - Force (privilege: advanced)
This parameter allows the operation to override warnings.
● Examples
The following example places a nondedicated Ethernet port in storage mode:
Speed VLAN
Node Port Type Mode (Gb/s) State Status ID
-------------- ---- ----- ------- ------ -------- --------------- ----
cluster1-01
e0b ENET network - - - -
e0c ENET - 100 enabled online 30
cluster1::> storage port modify -node cluster1-01 -port e0b -mode storage
Speed VLAN
Node Port Type Mode (Gb/s) State Status ID
-------------- ---- ----- ------- ------ -------- --------------- ----
cluster1-01
e0b ENET storage 0 enabled offline 30
e0c ENET - 100 enabled online 30
e0e ENET storage 0 enabled offline 30
e0f ENET - 100 enabled online 30
cluster1-02
e0b ENET storage 0 enabled offline 30
e0c ENET - 100 enabled online 30
e0e ENET storage 0 enabled offline 30
e0f ENET - 100 enabled online 30
8 entries were displayed.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage port rescan command rescans a specified storage port. This command is not supported on
Ethernet storage ports (type = ENET).
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be rescanned.
● Examples
The following example rescans port 0a on node node1:
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage port reset command resets a specified storage port. This command is not supported on
Ethernet storage ports (type = ENET).
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port that needs to be reset.
● Examples
The following example resets port 0a on node node1:
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage port reset-device command resets a device behind a port. If the device is behind a SAS
port, you need to specify the shelf name and bay ID where the device resides. If the device is behind a FC
port, you need to specify the loop ID of the device. This command is not supported on Ethernet storage ports
(type = ENET).
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the port resides.
-port <text> - Port
Use this parameter to specify the port used to reset the device.
{ -shelf-name <text> - Shelf Name
Use this parameter to specify the shelf where the device resides.
-bay-id <integer> - Bay ID
Use this parameter to specify the bay where the device resides.
| -loop-id <integer>} - Loop ID
Use this parameter to specify the loop ID of the device.
● Examples
The following example resets a device behind SAS port 0a on node node1:
cluster1::> storage port reset-device -node node1 -port 0a -shelf-name 1.0 -bay-id 10
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage port show command displays information about the storage ports in the cluster. If no
parameters are specified, the default command displays the following information about the storage ports:
• Node
• Port
• Type
• Speed
• State
• Status
To display detailed profile information about a single storage port, use the -node and -port parameters.
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the storage ports, in column style output.
| [-errors ]
Displays the following error status information about the storage ports which have errors:
• Error type
• Error severity
• Error description
| [-instance ]}
Displays expanded information about all the storage ports in the system. If a storage port is specified,
then this parameter displays detailed information for that port only.
[-node {<nodename>|local}] - Node
Displays detailed information about the storage ports on the specified node.
[-port <text>] - Port
Selects the ports with the specified port name.
[-port-type {Unknown|SAS|FC|ENET}] - Port Type
Selects the ports of the specified type.
[-port-speed {0|1|1.5|2|2.5|3|4|5|6|8|10|12|14|16|25|32|40|100}] - Port Speed
Selects the ports with the specified speed.
[-state {enabled|disabled|enable-pending|disable-pending}] - Port State
Selects the ports with the specified state.
[-status {unknown|online|online-degraded|offline|link-down}] - Port Status
Selects the ports with the specified operational status.
[-description <text>] - Description
Selects the ports with the specified description.
● Examples
The following example displays information about all storage ports in the cluster:
Speed VLAN
Node Port Type Mode (Gb/s) State Status ID
-------------- ---- ----- ------- ------ -------- --------- ----
cluster1-01
4a SAS - 0 enabled offline -
4b SAS - 0 enabled offline -
4c SAS - 0 enabled offline -
4d SAS - 0 enabled offline -
e3a ENET network - - - -
e3b ENET storage 100 enabled online 30
e5a ENET storage 100 enabled online 30
e5b ENET network - - - -
cluster1-02
4a SAS - 0 enabled offline -
4b SAS - 0 enabled offline -
4c SAS - 0 enabled offline -
4d SAS - 0 enabled offline -
e3a ENET network - - - -
e3b ENET storage 100 enabled online 30
The following example displays detailed information about port e5a on node node1:
Node: cluster1-01
Port: e5a
Port Type: ENET
Mode: storage
Description: 40G/100G Ethernet Controller CX5
Firmware Revision: 16.23.1020
MAC Address: ec:0d:9a:65:e4:44
Is Dedicated: false
Serial Number: MT1730X00227
Connector Vendor: Molex Inc.
Connector Part Number: 112-00322
Connector Serial Number: 532120266
Port Speed: 100 Gb/s
Port State: enabled
Port Status: online
The sub commands storage raid-options modify and storage raid-options show are used to
change and view the configurable node RAID options. Following are the RAID options that can be
configured:
raid.background_disk_fw_update.enable
This option determines the behavior of automatic disk firmware update. Valid values are on or off. The
default value is on. If the option is set to on, firmware updates to spares and file-system disks are performed
in a non-disruptive manner in the background. If the option is set to off, automatic firmware updates occur
when the system is booted or when a disk is inserted.
raid.disk.copy.auto.enable
This option determines the action taken when a disk reports a predictive failure. Valid values for this option
are on or off. The default value for this option is on except for capacity-optimized platforms, where the
default value is off.
Sometimes, it is possible to predict that a disk will fail soon based on a pattern of recovered errors that have
occurred on the disk. In such cases, the disk reports a predictive failure to Data ONTAP. If this option is set to
on, Data ONTAP initiates Rapid RAID Recovery to copy data from the failing disk to a spare disk. When data
is copied, the disk is marked failed and placed in the pool of broken disks. If a spare is not available, the
node will continue to use the prefailed disk until the disk fails.
If the option is set to off, the disk is immediately marked as failed and placed in the pool of broken disks. A
spare disk is selected and data from the missing disk is reconstructed from other disks in the RAID group.
The disk does not fail if the RAID group is already degraded or is being reconstructed. This ensures that a
disk failure does not lead to the failure of the entire RAID group.
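The predictive-failure handling described above can be summarized as a small decision function. This is illustrative pseudologic only, not ONTAP internals, and it omits the exception noted above for RAID groups that are already degraded or reconstructing:

```python
# Sketch of the raid.disk.copy.auto.enable policy described above
# (illustrative only; simplified, not ONTAP internals).

def handle_predictive_failure(copy_auto_enabled, spare_available):
    if copy_auto_enabled:
        if spare_available:
            # Rapid RAID Recovery: copy data to a spare, then mark the
            # disk failed and place it in the pool of broken disks.
            return "rapid-raid-recovery-copy"
        # No spare: keep using the prefailed disk until it fails.
        return "keep-using-prefailed-disk"
    # Option off: fail immediately and reconstruct onto a spare.
    return "fail-immediately-and-reconstruct"
```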
raid.disktype.enable
This option is obsolete. Use options raid.mix.hdd.disktype.capacity and
raid.mix.hdd.disktype.performance instead.
raid.media_scrub.rate
This option sets the rate of media scrub on an aggregate. Valid values for this option range from 300 to
3000, where a rate of 300 represents a media scrub of approximately 512 MB per hour and 3000 represents
a media scrub of approximately 5 GB per hour. The default value for this option is 600, which is a rate of
approximately 1 GB per hour.
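The documented rates scale linearly with the option value (300 → ~512 MB/hour, 600 → ~1 GB/hour, 3000 → ~5 GB/hour). A small helper that interpolates those documented points (an assumption based on the stated figures, not an ONTAP formula):

```python
# Interpolates the documented raid.media_scrub.rate figures above:
# 300 -> ~512 MB/hour, 600 -> ~1 GB/hour, 3000 -> ~5 GB/hour.
# Assumes linear scaling, which matches all three documented points.

def media_scrub_mb_per_hour(rate):
    return rate / 300 * 512   # 300 corresponds to roughly 512 MB/hour

media_scrub_mb_per_hour(600)   # ~1024 MB/hour (about 1 GB/hour)
media_scrub_mb_per_hour(3000)  # ~5120 MB/hour (about 5 GB/hour)
```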
raid.min_spare_count
This option specifies the minimum number of spare drives required to avoid warnings about low spare
drives. If every file-system drive has at least the number of appropriate replacement spares specified by
raid.min_spare_count, then no warning is displayed for low spares. This option can be set from
0 to 4. The default setting is 0 for capacity-optimized platforms and 1 for others. Setting this option to 0
means that no warnings are displayed for low spares, even if no spares are available. This option can be
set to 0 only on capacity-optimized platforms and on standalone or HA systems that have 16 or fewer
attached drives and use RAID-DP aggregates. A setting of 0 is not allowed on systems with RAID4
aggregates.
raid.mirror_read_plex_pref
This option specifies the plex preference when reading from a mirrored aggregate on a MetroCluster-
configured system. There are three possible values:
• local indicates that all reads are handled by the local plex (plex consisting of disks from Pool0).
• remote indicates that all reads are handled by the remote plex (plex consisting of disks from Pool1).
• alternate indicates that the handling of read requests is shared between the two plexes.
This option is ignored if the system is not in a MetroCluster configuration. The option setting applies to all
aggregates on the node.
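For example, to direct all mirror reads on a MetroCluster node to its local plex (the node name node1 is illustrative):

```
cluster1::> storage raid-options modify -node node1 -name raid.mirror_read_plex_pref -value local
```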
raid.mix.hdd.disktype.capacity
Controls mixing of SATA, BSAS, FSAS and ATA disk types. The default value is on, which allows mixing.
When this option is set to on, SATA, BSAS, FSAS and ATA disk types are considered interchangeable for all
aggregate operations, including aggregate creation, adding disks to an aggregate, and replacing disks
within an existing aggregate, whether this is done by the administrator or automatically by Data ONTAP.
If you set this option to off, SATA, BSAS, FSAS and ATA disks cannot be combined within the same aggregate.
If you have existing aggregates that combine those disk types, those aggregates will continue to function
normally and accept any of those disk types.
Note
This option is ignored in storage aggregate create and storage aggregate add-disks
commands when either the -disktype or -diskclass parameter is used. It is better to use the
-disktype or -diskclass parameter instead of relying on this option.
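For example, to prevent SATA, BSAS, FSAS, and ATA disks from being combined in future aggregate operations on a node (the node name node1 is illustrative):

```
cluster1::> storage raid-options modify -node node1 -name raid.mix.hdd.disktype.capacity -value off
```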
raid.mix.hdd.disktype.performance
Controls mixing of FCAL and SAS disk types. The default value is off, which prevents mixing.
If you set this option to on, FCAL and SAS disk types are considered interchangeable for all aggregate
operations, including aggregate creation, adding disks to an aggregate, and replacing disks within an
existing aggregate, whether this is done by the administrator or automatically by Data ONTAP.
When this option is set to off, FCAL and SAS disks cannot be combined within the same aggregate. If you
have existing aggregates that combine those disk types, those aggregates will continue to function
normally and accept either disk type.
Note
This option is ignored in storage aggregate create and storage aggregate add-disks
commands, when either of -disktype or -diskclass parameter is used. It is better to use the -
disktype or -diskclass parameter instead of relying on this option.
raid.mix.disktype.solid_state
Controls mixing of SSD and SSD-NVM disk types. The default value is on, which allows mixing.
If you set this option to on, SSD and SSD-NVM disk types are considered interchangeable for all aggregate
operations, including aggregate creation, adding disks to an aggregate, and replacing disks within an
existing aggregate, whether this is done by the administrator or automatically by Data ONTAP.
When this option is set to off, SSD and SSD-NVM disks cannot be combined within the same aggregate. If
you have existing aggregates that combine those disk types, those aggregates will continue to function
normally and accept either disk type.
Note
This option is ignored in storage aggregate create and storage aggregate add-disks
commands, when either of -disktype or -diskclass parameter is used. It is better to use the -
disktype or -diskclass parameter instead of relying on this option.
raid.mix.hdd.rpm.capacity
This option controls separation of capacity-based hard disk drives (ATA, SATA, BSAS, FSAS, MSATA) by
uniform rotational speed (RPM). If you set this option to off, Data ONTAP always selects disks with the same
RPM when creating new aggregates or when adding disks to existing aggregates using these disk types. If
you set this option to on, Data ONTAP does not differentiate between these disk types based on rotational
speed. For example, Data ONTAP might use both 5400 RPM and 7200 RPM disks in the same aggregate. The
default value is on.
raid.mix.hdd.rpm.performance
This option controls separation of performance-based hard disk drives (SAS, FCAL) by uniform rotational
speed (RPM). If you set this option to off, Data ONTAP always selects disks with the same RPM when creating
new aggregates or when adding disks to existing aggregates using these disk types. If you set this option to
on, Data ONTAP does not differentiate between these disk types based on rotational speed. For example,
Data ONTAP might use both 10K RPM and 15K RPM disks in the same aggregate. The default value is off.
raid.reconstruct.perf_impact
This option sets the overall performance impact of RAID reconstruction. When the CPU and disk bandwidth
are not consumed by serving clients, RAID reconstruction consumes as much bandwidth as it needs. If the
serving of clients is already consuming most or all of the CPU and disk bandwidth, this option allows control
over the CPU and disk bandwidth that can be taken away for reconstruction, and thereby enables control
over the negative performance impact on the serving of clients. As the value of this option is increased, the
speed of reconstruction also increases. The possible values are low, medium, and high. The default value is
medium. When mirror resync and reconstruction are running at the same time, the system does not
distinguish between their separate resource consumption on shared resources (like CPU or a shared disk). In
this case, the combined resource utilization of these operations is limited to the maximum resource
entitlement for individual operations.
raid.resync.num_concurrent_ios_per_rg
This option changes the duration of a resync by modifying the number of concurrent resync I/Os in progress
for each resyncing RAID group. The legacy, and default, value is 1. As the value of this option is increased,
the speed of resync also increases. This has a negative performance impact on the serving of clients.
raid.resync.perf_impact
This option sets the overall performance impact of RAID mirror resync (whether started automatically by the
system or implicitly by an operator-issued command). When the CPU and disk bandwidth are not consumed
by serving clients, a resync operation consumes as much bandwidth as it needs. If the serving of clients is
already consuming most or all of the CPU and disk bandwidth, this option allows control over the CPU and
disk bandwidth that can be taken away for resync operations, and thereby enables control over the negative
performance impact on the serving of clients. As the value of this option is increased, the speed of resync
also increases. The possible values are low, medium, and high. The default value is medium. When RAID
mirror resync and reconstruction are running at the same time, the system does not distinguish between
their separate resource consumption on shared resources (like CPU or a shared disk). In this case, the
combined resource utilization of these operations is limited to the maximum resource entitlement for
individual operations.
raid.rpm.ata.enable
This option is obsolete. Use option raid.mix.hdd.rpm.capacity instead.
raid.rpm.fcal.enable
This option is obsolete. Use option raid.mix.hdd.rpm.performance instead.
raid.scrub.perf_impact
This option sets the overall performance impact of RAID scrubbing (whether started automatically or
manually). When the CPU and disk bandwidth are not consumed by serving clients, scrubbing consumes as
much bandwidth as it needs. If the serving of clients is already consuming most or all of the CPU and disk
bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away for
scrubbing, and thereby enables control over the negative performance impact on the serving of clients. As
the value of this option is increased, the speed of scrubbing also increases. The possible values for this
option are low, medium, and high. The default value is low. When scrub and mirror verify are running at the
same time, the system does not distinguish between their separate resource consumption on shared
resources (like CPU or a shared disk). In this case, the combined resource utilization of these operations is
limited to the maximum resource entitlement for individual operations.
raid.scrub.schedule
This option specifies the weekly schedule (day, time and duration) for scrubs started automatically. On a
non-AFF system, the default schedule is daily at 1 a.m. for the duration of 4 hours except on Sunday when it
is 12 hours. On an AFF system, the default schedule is weekly at 1 a.m. on Sunday for the duration of 6
hours. If an empty string ("") is specified as an argument, it will delete the previous scrub schedule and add
the default schedule. One or more schedules can be specified using this option. The syntax is duration[h|
m]@weekday@start_time,[duration[h|m]@weekday@start_time,...] where duration is the time period for
which scrub operation is allowed to run, in hours or minutes ('h' or 'm' respectively).
weekday is the day on which the scrub is scheduled to start. The valid values are sun, mon, tue, wed, thu,
fri, sat.
start_time is the time when the scrub is scheduled to start. It is specified in 24-hour format. Only the hour (0-23)
needs to be specified.
For example, options raid.scrub.schedule 240m@tue@2,8h@sat@22 will cause scrub to start on every
Tuesday at 2 a.m. for 240 minutes, and on every Saturday at 10 p.m. for 480 minutes.
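The same schedule can be set through the storage raid-options modify command described later in this section (the node name node1 is illustrative):

```
cluster1::> storage raid-options modify -node node1 -name raid.scrub.schedule -value 240m@tue@2,8h@sat@22
```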
raid.timeout
This option sets the time, in hours, that the system will run after a "single-disk failure in a RAID4 group",
"two-disk failure in a RAID-DP group" or "three-disk failure in a RAID-TEC group" condition has caused the
system to go into "degraded mode", "double-degraded mode" or "triple-degraded mode" respectively. This
option also controls the shutdown timer after an NVRAM/NVMEM battery failure has occurred. The default is
24, the minimum acceptable value is 0 and the largest acceptable value is 4,294,967,295. If the
raid.timeout option is specified when the system is in "degraded mode", "double-degraded mode" or "triple-
degraded mode", the timeout is set to the value specified and the timeout is restarted. If the value specified
is 0, automatic system shutdown is disabled.
Note
For capacity-optimized platforms, this option only applies to non-RAID failures (e.g. NVRAM battery).
Automatic system shutdown is disabled for degraded RAID groups, irrespective of the value of this
option.
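For example, to extend the degraded-mode shutdown timeout to 48 hours (the node name node1 is illustrative):

```
cluster1::> storage raid-options modify -node node1 -name raid.timeout -value 48
```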
raid.verify.perf_impact
This option sets the overall performance impact of RAID mirror verify. When the CPU and disk bandwidth are
not consumed by serving clients, a verify operation consumes as much bandwidth as it needs. If the serving
of clients is already consuming most or all of the CPU and disk bandwidth, this option allows control over the
CPU and disk bandwidth that can be taken away for verify, and thereby enables control over the negative
performance impact on the serving of clients. As you increase the value of this option, the verify speed also
increases. The possible values are low, medium, and high. The default value is low. When scrub and mirror
verify are running at the same time, the system does not distinguish between their separate resource
consumption on shared resources (like CPU or a shared disk). In this case, the combined resource utilization
of these operations is limited to the maximum resource entitlement for individual operations.
Related reference
storage raid-options modify (page 1198)
storage raid-options show (page 1198)
storage aggregate create (page 976)
storage aggregate add-disks (page 971)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage raid-options modify command is used to modify the available RAID options for each
node in a cluster. The options are described in the storage raid-options manual page.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which the RAID option is to be modified.
-name <text> - Option Name
This parameter specifies the RAID option to be modified. To see the list of RAID options that can be
modified, use the storage raid-options show command.
[-value <text>] - Option Value
This parameter specifies the value of the selected RAID option.
Related reference
storage raid-options show (page 1198)
Storage raid-options Commands (page 1194)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage raid-options show command displays information about all the RAID options in a cluster.
The options are described in the storage raid-options manual page.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about all the RAID options on the specified node.
● Examples
The following example shows the RAID scrub settings for a node named node1:
Related reference
Storage raid-options Commands (page 1194)
storage raid-options modify (page 1198)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage shelf show command displays information about all the storage shelves in the storage
system. If no parameters are specified, the command displays the following information about the
storage shelves:
• Shelf Name
• Shelf ID
• Serial Number
• Model
• Module Type
• Status
To display detailed profile information about a single storage shelf, use the -shelf parameter.
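For example, to display the detailed profile of a single shelf (the shelf name 1.2 is taken from the examples below and is illustrative):

```
cluster1::> storage shelf show -shelf 1.2
```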
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the storage shelves, in column style output.
| [-bay ]
Displays the following details about the disk bays in the storage shelf:
• The unique positional identifier of the disk bay
• Whether a disk drive is installed in the bay
• Bay type
• Operational status of the disk bay
| [-connectivity ]
Displays the following details about the connectivity from the node to the storage shelf:
• Node name
• Initiator side switch port
• Target side switch port
• World-wide port name
• Target Port Group Number (TPGN)
| [-cooling ]
Displays the following details about the cooling elements and temperature sensors of the storage shelf:
• Element ID of the cooling fan
• The current speed of the cooling fan in revolutions per minute (rpm)
• Operational status of the cooling fan
• Sensor ID of the temperature sensor element
• Temperature at the sensor in degrees Celsius
• Whether the current temperature at the sensor is the ambient temperature
• Low critical threshold value for the temperature sensor
• Low warning threshold value for the temperature sensor
• High critical threshold value for the temperature sensor
• High warning threshold value for the temperature sensor
• Operational status for the temperature sensor
| [-errors ]
Displays the following error status information about the storage shelves that have errors:
• Error type
• Error description
| [-module ]
Displays the following details about the I/O modules attached to the storage shelf:
• Module ID
• Module part number
• Serial number of the Enclosure Services Controller Electronics element
• Whether monitoring is enabled on this module
• Whether this module is the SAS expander master module
• Whether this module is the reporting element
• Version of the firmware installed on the module
• Latest firmware revision
• Number of times, since the last boot, that this module has been swapped
• Operational status of the module
| [-port ]
Displays the following details about the storage shelf ports:
• Expander phy element identifier
• SAS shelf port type
• World-wide Port Name of the SAS port
• Operational physical link rate of the SAS port in Gb/s
• Negotiated physical link rate of the SAS port in Gb/s
• Power status of the SAS port
• Status of the SAS port
• Fibre Channel shelf port ID
• Fibre Channel shelf port type
• Fibre Channel shelf port status
| [-power ]
Displays the following details about the power supplies, voltage sensors, and current sensors of the
storage shelf:
• Power Supply Unit (PSU) number
• PSU type
• PSU part number
• PSU serial number
• PSU power rating in watts
• PSU crest factor
• Power drawn from the PSU in watts
• Whether the PSU can be reset via software control
• Whether the auto power reset of the PSU is enabled
• PSU firmware revision
• Operational status of the PSU
Displays information only about the storage shelves that have bays that match the type of bay you
specify.
[-bay-has-disk {true|false}, ...] - Bay Has Disk
Displays information only about the storage shelves that have bays with disk drives inserted in them
(true) or empty bays (false).
[-bay-op-status {unknown|normal|error}, ...] - Bay Operational Status
Displays information only about the storage shelves that have bays that match the operational state you
specify.
[-controller {<nodename>|local}, ...] - Controller Name
Displays information only about the storage shelves that are connected to the node you specify.
[-controller-uuid <text>, ...] - Controller UUID
Displays information only about the storage shelves that are connected to the node UUID you specify.
[-initiator <text>, ...] - Initiator
Displays information only about the storage shelves that are visible to the initiator you specify.
[-initiator-wwpn <text>, ...] - Initiator WWPN
Displays information only about the storage shelves that are visible to the initiator WWPN you specify.
[-initiator-side-switch-port <text>, ...] - Initiator Side Switch Port
Displays information only about the storage shelves that are visible to an initiator connected to the
switch port you specify.
[-target-side-switch-port <text>, ...] - Target Side Switch Port
Displays information only about the storage shelves visible on target ports identified by the switch port
to which they are connected.
[-target-port <text>, ...] - Target Port
Displays information only about the storage shelves visible on the specified target ports identified by
their World-Wide Port Name (WWPN).
[-tpgn {<integer>|-}, ...] - Target Port Group Number
Displays information only about the storage shelves that belong to the Target Port Group Number (TPGN)
you specify.
[-port-speed {<integer>|-}, ...] - Port Speed
Displays information only about the storage shelves with ports that match the port speed you specify.
[-io-kbps {<integer>|-}, ...] - Kbytes/sec on Storage Shelf
Displays information only about the storage shelves visible to an initiator that has executed I/O at the
throughput you specify.
[-iops {<integer>|-}, ...] - Number IOPS per Second on Storage Shelf
Displays information only about the storage shelves visible to an initiator that has executed the number
of IOPs you specify.
[-current-sensor-id {<integer>|-}, ...] - Current Sensor ID
Displays information only about the storage shelves with current sensor that matches the current sensor
ID you specify.
[-current-sensor-location <text>, ...] - Current Sensor Location
Displays information only about the storage shelves with current sensors installed at the location you
specify.
[-current-sensor-reading {<integer>|-}, ...] - Current Sensor Reading
Displays information only about the storage shelves with current sensors that match the current reading
you specify.
[-current-op-status {unknown|normal|over-current-critical|under-current-critical|not-supported|
not-installed}, ...] - Operational Status
Displays information only about the storage shelves with current sensors that match the operational
status you specify.
[-fan-id {<integer>|-}, ...] - Fan ID
Displays information only about the storage shelves with cooling fans that match the fan IDs you
specify.
[-fan-location <text>, ...] - Fan Location
Displays information only about the storage shelves with cooling fans installed at the locations you specify.
[-fan-rpm {<integer>|-}, ...] - Fan Rotation Per Minute
Displays information only about the storage shelves with cooling fans that match the rpm rate you
specify.
[-fan-op-status {unknown|normal|off|error|not-supported|not-installed}, ...] - Fan Operational
Status
Displays information only about the storage shelves with cooling fans that match the operational status
you specify.
[-module-id <text>, ...] - Module ID
Displays information only about the storage shelves with an I/O module that matches the module ID you
specify.
[-module-location <text>, ...] - Module Location
Displays information only about the storage shelves with I/O modules in the specified shelf module
slots.
[-module-part-number <text>, ...] - Module Part Number
Displays information only about the storage shelves with I/O modules that match the module part
numbers you specify.
[-is-sas-master-module {true|false}, ...] - Is SAS Expander Master Module?
Displays information only about the storage shelves with a SAS master I/O module (true) or an I/O
module that is not a SAS master (false). This parameter applies only to SAS shelves.
[-is-monitor-active {true|false}, ...] - Is Monitor Active?
Displays information only about the storage shelves whose monitoring is enabled (true) or disabled
(false).
[-enclosure-type <text>, ...] - Module Enclosure Type
Displays information only about the storage shelves that match the enclosure types you specify.
[-es-serial-number <text>, ...] - ES Electronics Element Serial Number
Displays information only about the storage shelves with I/O modules that match the electronics serial
numbers you specify.
[-module-fru-id <text>, ...] - Field Replaceable Unit ID
Displays information only about the storage shelves with I/O modules that match the field replaceable
unit (FRU) IDs you specify.
[-module-is-reporting-element {true|false}, ...] - Is Reporting Element?
Displays information only about the storage shelves with element reporting I/O modules (true) or not
(false).
[-module-fw-revision <text>, ...] - Firmware Revision
Displays information only about the storage shelves with I/O modules that match the firmware revision
you specify.
[-module-latest-fw-revision <text>, ...] - Latest Firmware Revision
Displays information only about the storage shelves with I/O modules that match the latest firmware
revision you specify.
[-module-fw-progress {not-available|ready|in-progress|failed}, ...] - Module Firmware Progress
Displays information only about the storage shelves with I/O modules that match the specified firmware
update progress.
[-module-swap-count {<integer>|-}, ...] - Module Swap Count
Displays information only about the storage shelves whose I/O modules have been swapped the
specified number of times.
[-module-op-status {unknown|normal|warning|error}, ...] - Module Operational Status
Displays information only about the storage shelves with I/O modules that match the operational status
you specify.
[-sas-port-id <text>, ...] - Port ID
Displays information only about the storage shelves with SAS Ports that match the port IDs you specify.
[-sas-port-type {unknown|circle|square|sil|disk|in|out|unused|host|dcm|aux1|aux2|hi_ho|a_to_b|
b_to_a}, ...] - Port Type
Displays information only about the storage shelves with SAS Ports that match the SAS port type you
specify.
[-sas-port-wwpn <text>, ...] - Port World Wide Port Name
Displays information only about the storage shelves with SAS Ports that match the World-Wide Port
Names you specify.
[-sas-port-speed <text>, ...] - Port Speed
Displays information only about the storage shelves with SAS Ports that match the port speed you
specify.
[-sas-negotiated-port-speed <text>, ...] - Negotiated Port Speed
Displays information only about the storage shelves with SAS Ports that match the negotiated port
speed you specify.
[-sas-port-power-status <text>, ...] - Port Power Status
Displays information only about the storage shelves with SAS Ports that match the power status you
specify.
[-sas-port-op-status {error|normal|off|unknown|byp-bad-term|bad-zone-recovery|byp_clk_thr|
byp_comma_los|byp_crc_brst_thr|byp_data_timeout|byp_drv_fault|byp_drv_pcycle|byp_drv_pwr|
byp_drv_self|byp_gen|byp_init|byp_lip_brst_thr|byp_lip_f8|byp_lip_rate_thr|byp_lipf7|byp_ltbi|
byp_man|byp_no_drive|byp_osc|byp_other_thr|byp_rec_los|byp_rport|byp_stall_thr|byp_wrd_brst_thr|
byp_wrd_rate_thr|byp_xmit_fault|diag_transmit|inserted|loopback|status_unknown|
warn_high_clk_delta|warn_high_crc_rate|warn_high_lip|warn_high_wrd_rate|term|phy_dis_clk_fault|
phy_dis_crc_err|phy_dis_crc_err_burst|phy_dis_disparity|phy_dis_disparity_burst|
phy_dis_emulate_reserve|phy_dis_inval_dword|phy_dis_inval_dword_burst|phy_dis_loss_dword|
phy_dis_loss_dword_burst|phy_dis_man_smp|phy_dis_manual|phy_dis_mirrored|empty|
phy_dis_phy_change|phy_dis_phy_change_burst|phy_dis_phy_reset|phy_dis_phy_reset_burst|
phy_dis_phy_unused|phy_ena|phy_ena_not_attach|phy_ena_unknown|phy_unknown|
phy_dis_illegal}, ...] - Port Operational Status
Displays information only about the storage shelves with SAS Ports that match the operational status
you specify.
[-sas-port-module-id {A|B}, ...] - Port Module ID
Displays information only about the storage shelves with SAS Ports that match the module ID you
specify.
[-fc-port-id <text>, ...] - Fibre Channel Port ID
Displays information only about the storage shelves with FC Ports that match the port IDs you specify.
[-fc-port-mode {unknown|circle|square|sil|disk|in|out|unused|host|dcm|aux1|aux2|hi_ho|a_to_b|
b_to_a}, ...] - Fibre Channel Port Mode
Displays information only about the storage shelves with FC Ports that match the port modes you
specify.
[-fc-port-op-status {error|normal|off|unknown|byp-bad-term|bad-zone-recovery|byp_clk_thr|
byp_comma_los|byp_crc_brst_thr|byp_data_timeout|byp_drv_fault|byp_drv_pcycle|byp_drv_pwr|
byp_drv_self|byp_gen|byp_init|byp_lip_brst_thr|byp_lip_f8|byp_lip_rate_thr|byp_lipf7|byp_ltbi|
byp_man|byp_no_drive|byp_osc|byp_other_thr|byp_rec_los|byp_rport|byp_stall_thr|byp_wrd_brst_thr|
byp_wrd_rate_thr|byp_xmit_fault|diag_transmit|inserted|loopback|status_unknown|
warn_high_clk_delta|warn_high_crc_rate|warn_high_lip|warn_high_wrd_rate|term|phy_dis_clk_fault|
phy_dis_crc_err|phy_dis_crc_err_burst|phy_dis_disparity|phy_dis_disparity_burst|
phy_dis_emulate_reserve|phy_dis_inval_dword|phy_dis_inval_dword_burst|phy_dis_loss_dword|
phy_dis_loss_dword_burst|phy_dis_man_smp|phy_dis_manual|phy_dis_mirrored|empty|
phy_dis_phy_change|phy_dis_phy_change_burst|phy_dis_phy_reset|phy_dis_phy_reset_burst|
phy_dis_phy_unused|phy_ena|phy_ena_not_attach|phy_ena_unknown|phy_unknown|
phy_dis_illegal}, ...] - Fibre Channel Port Operational Status
Displays information only about the storage shelves with FC Ports that match the operational status you
specify.
[-psu-id {<integer>|-}, ...] - Power Supply Unit ID
Displays information only about the storage shelves with power supply units (PSU) that match the unit
IDs you specify.
[-psu-location <text>, ...] - Power Supply Unit Location
Displays information only about the storage shelves with PSUs that are located at the specified location
inside the shelf.
[-psu-type <text>, ...] - Power Supply Unit Type
Displays information only about the storage shelves with PSUs that match the PSU types you specify.
[-psu-part-number <text>, ...] - Power Supply Unit Part Number
Displays information only about the storage shelves with PSUs that match the PSU part number you
specify.
[-psu-serial-number <text>, ...] - Power Supply Unit Serial Number
Displays information only about the storage shelves with PSUs that match the PSU serial numbers you
specify.
[-psu-reset-capable {true|false}, ...] - Power Supply Unit Reset Capability
Displays information only about the storage shelves with reset capable PSUs (true) or reset incapable
PSUs (false).
[-psu-is-enabled {true|false}, ...] - Power Supply Unit Enable/Disable Status
Displays information only about the storage shelves with PSUs that are enabled (true) or disabled
(false).
[-psu-fw-version <text>, ...] - Power Supply Unit Firmware Version
Displays information only about the storage shelves with PSUs that have the firmware version you
specify.
[-psu-op-status {unknown|normal|error|dc-over-voltage|dc-under-voltage|dc-over-current|over-
temperature-error|failed|off|not-supported|not-installed}, ...] - Operational Status
Displays information only about the storage shelves with PSUs that match the operational status you
specify.
[-psu-power-rating {<integer>|-}, ...] - Power Supply Power Ratings In Watts
Displays information only about the storage shelves with PSUs that match the power rating you specify.
[-psu-crest-factor {<integer>|-}, ...] - Power Supply Crest Factor
Displays information only about the storage shelves with PSUs that match the crest factor value you
specify.
[-psu-power-drawn {<integer>|-}, ...] - Power Drawn From PSU In Watts
Displays information only about the storage shelves with PSUs that match the drawn power you specify.
[-temp-sensor-id {<integer>|-}, ...] - Sensor Name
Displays information only about the storage shelves with temperature sensors that match the sensor IDs
you specify.
[-temp-sensor-location <text>, ...] - Sensor Location
Displays information only about the storage shelves with temperature sensors that match the specified
sensor locations inside the shelf.
[-temp-sensor-reading {<integer>|-}, ...] - Temperature Reading
Displays information only about the storage shelves with temperature sensors that match the
temperature reading you specify.
[-temp-is-ambient {true|false}, ...] - Temperature Reading at Ambient Value
Displays information only about the storage shelves with temperature sensors whose current
temperature reading is ambient (true) or not (false).
[-temp-high-critical-threshold {<integer>|-}, ...] - High Critical Threshold
Displays information only about the storage shelves with temperature sensors that match the high
critical threshold you specify.
[-temp-high-warning-threshold {<integer>|-}, ...] - High Warning Threshold
Displays information only about the storage shelves with temperature sensors that match the high
warning threshold you specify.
[-temp-low-warning-threshold {<integer>|-}, ...] - Low Warning Threshold
Displays information only about the storage shelves with temperature sensors that match the low
warning threshold you specify.
[-temp-low-critical-threshold {<integer>|-}, ...] - Low Critical Threshold
Displays information only about the storage shelves with temperature sensors that match the low
critical threshold you specify.
[-temp-op-status {unknown|normal|under-temperature|over-temperature|error|not-supported|not-
installed}, ...] - Operational Status
Displays information only about the storage shelves with temperature sensors that match the
operational status you specify.
[-voltage-sensor-id {<integer>|-}, ...] - Voltage Sensor ID
Displays information only about the storage shelves with voltage sensors that match the sensor IDs
you specify.
[-voltage-sensor-location <text>, ...] - Voltage Sensor Location
Displays information only about the storage shelves with voltage sensors that match the specified
sensor locations inside the shelf.
[-voltage-sensor-reading <text>, ...] - Voltage Current Reading
Displays information only about the storage shelves with voltage sensors that match the voltage
reading you specify.
[-voltage-op-status {unknown|normal|over-voltage-critical|under-voltage-critical|not-supported|
not-installed|not-recoverable}, ...] - Operational Status
Displays information only about the storage shelves with voltage sensors that match the operational
status you specify.
[-nsm-port-module-id {A|B}, ...] - Port Module ID
Displays information only about the storage shelves with PCIe Ports from the specified module.
[-nsm-port-id <integer>, ...] - Port ID
Displays information only about the storage shelves with PCIe Ports that match the specified ID.
[-nsm-port-type {cpu|disk|cx5|ethernet}, ...] - Port Type
Displays information only about the storage shelves with PCIe Ports that match the specified type.
[-nsm-port-state {ok|off-link-disabled|off-dll-link|link-down|no-drive}, ...] - Port State
Displays information only about the storage shelves with PCIe Ports that match the specified state.
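The critical thresholds above bracket each temperature reading. As an illustrative sketch only (plain Python, not an ONTAP interface), a reading might map to the under-temperature, over-temperature, and normal values listed for -temp-op-status like this:

```python
# Illustrative sketch: map a sensor reading in degrees C to an operational
# status using the low/high critical thresholds shown in the shelf output.
# The exact boundary behavior (< vs <=) is an assumption, not documented here.
def classify_temp(reading, low_crit, high_crit):
    """Return an operational status for a temperature reading."""
    if reading < low_crit:
        return "under-temperature"
    if reading > high_crit:
        return "over-temperature"
    return "normal"

# Sensor 1 in the examples has a low critical threshold of 0 and a high
# critical threshold of 42:
print(classify_temp(23, 0, 42))   # normal
print(classify_temp(75, 0, 42))   # over-temperature
```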
● Examples
The following example displays information about all storage shelves:
Module Operational
Shelf Name Shelf ID Serial Number Model Type Status
----------------- -------- --------------- ----------- ------ -----------
1.1 1 6000832415 DS2246 IOM6 Critical
1.2 2 6000647652 DS2246 IOM6 Normal
1.3 3 6000003844 DS2246 IOM6 Normal
1.4 4 SHJ000000013A9E DS4246 IOM6 Normal
1.5 5 SHJ000000013A84 DS4246 IOM6 Normal
1.6 6 6000005555 DS2246 IOM6 Normal
6 entries were displayed.
cluster1::>
The following example displays expanded information about a storage shelf named 1.2:
Modules:
                                  Monitor   Is     Module is  FW Update
                                  is        Master Reporting  Progress      FW      Latest  Swap  Operational
ID  Part No.      ES Serial No.   Active           Element                  Rev.    FW Rev. Count Status       Module Location
--- ------------- --------------- --------- ------ ---------- ------------- ------- ------- ----- ------------ ----------------------------------
a   111-00190+A0  8006437891      true      false  false      not-available 0191    -       0     normal       rear of the shelf at the top left
b   111-00190+A0  8006435180      true      true   true       not-available 0191    -       0     normal       rear of the shelf at the top right
Paths:
                                                                                                    Speed
Controller         Initiator Initiator Side Switch Port  Target Side Switch Port    Target Port       TPGN  Gb/s  I/O KB/s  IOPS
------------------ --------- -------------------------- -------------------------- ---------------- ------ ----- -------- -------
stsw-8020-01       0a        -                          -                          -                 -      -     -        -
stsw-8020-01       2b        -                          -                          -                 -      -     -        -
stsw-8020-02       0a        -                          -                          -                 -      -     -        -
stsw-8020-02       2b        -                          -                          -                 -      -     -        -
Voltage Sensors:
Voltage Operational
ID (V) Status Sensor Location
--- ------- ---------------------- ------------------------
1 5.70 normal rear of the shelf on the lower left power supply
2 12.300 normal rear of the shelf on the lower left power supply
3 5.70 normal rear of the shelf on the lower right power supply
4 12.180 normal rear of the shelf on the lower right power supply
Current Sensors:
Current Operational
ID (mA) Status Sensor Location
--- ------- ---------------------- ------------------------
1 0 normal rear of the shelf on the lower left power supply
2 0 normal rear of the shelf on the lower left power supply
3 0 normal rear of the shelf on the lower right power supply
4 0 normal rear of the shelf on the lower right power supply
Fans:
Speed Operational
ID (RPM) Status Fan Location
--- ------ ----------------------- ----------------------
1 3000 normal rear of the shelf on the lower left power supply
2 2970 normal rear of the shelf on the lower left power supply
3 3000 normal rear of the shelf on the lower right power supply
4 2970 normal rear of the shelf on the lower right power supply
Temperature:
-- Thresholds °C --
Temp Is Low Low High High Operational
ID °C Ambient Crit Warn Crit Warn Status Sensor Location
--- ---- ------- ---- ---- ---- ---- ------------------ --------------------------------
1    23   true     0    5    42   40   normal             front of the shelf on the left, on the OPS panel
2    26   false    5    10   55   50   normal             inside of the shelf on the midplane
3    24   false    5    10   55   50   normal             rear of the shelf on the lower left power supply
4    39   false    5    10   70   65   normal             rear of the shelf on the lower left power supply
5    25   false    5    10   55   50   normal             rear of the shelf on the lower right power supply
6    36   false    5    10   70   65   normal             rear of the shelf on the lower right power supply
7    25   false    5    10   60   55   normal             rear of the shelf at the top left, on shelf module A
8    26   false    5    10   60   55   normal             rear of the shelf at the top right, on shelf module B
SAS Ports:
-- Port Speeds Gb/s -- Power Port
FC Ports:
Port
ID Port Type Status
--- --------- -----------
- - -
Bays:
Has Operational
ID Disk Bay Type Status
--- ----- ----------- -----------
0 true single-disk normal
1 true single-disk normal
2 true single-disk normal
3 true single-disk normal
4 true single-disk normal
5 true single-disk normal
6 true single-disk normal
7 true single-disk normal
8 true single-disk normal
9 true single-disk normal
10 true single-disk normal
11 true single-disk normal
12 false single-disk normal
13 false single-disk normal
14 false single-disk normal
15 false single-disk normal
16 false single-disk normal
17 false single-disk normal
18 false single-disk normal
19 false single-disk normal
20 false single-disk normal
21 false single-disk normal
cluster1::>
The following example displays information about the power supplies, voltage sensors and current sensors
of the storage shelf 1.1:
Voltage Sensors:
Voltage Operational
ID (V) Status
--- ------- ----------------------
1 5.70 normal
2 12.180 normal
3 5.70 normal
4 12.300 normal
Current Sensors:
Current Operational
ID (mA) Status
--- ------- ----------------------
1 0 normal
2 0 normal
3 3900 normal
4 0 normal
Errors:
------
Critical condition is detected in storage shelf power supply unit "1". The unit might fail.
Critical over temperature failure for temperature sensor "1". Current temperature: "75" C ("167" F).
cluster1::>
The following example displays information about the cooling elements and temperature sensors inside the
storage shelf 1.2:
Fans:
Speed Operational
ID (RPM) Status
-- ----- -------------
1 3000 normal
2 3000 normal
3 3000 normal
4 2970 normal
Temperature:
-- Thresholds °C --
Temp Is Low Low High High Operational
ID °C Ambient Crit Warn Crit Warn Status
--- ---- ------- ---- ---- ---- ---- ------------------
1 23 true 0 5 42 40 normal
2 26 false 5 10 55 50 normal
3 24 false 5 10 55 50 normal
4 39 false 5 10 70 65 normal
5 25 false 5 10 55 50 normal
6 36 false 5 10 70 65 normal
7 25 false 5 10 60 55 normal
8 27 false 5 10 60 55 normal
Errors:
------
-
cluster1::>
The following example displays information about the connectivity from the node to the storage shelf 1.2:
Paths:
Controller         Initiator Initiator Side Switch Port  Target Side Switch Port    Target Port       TPGN
------------------ --------- -------------------------- -------------------------- ---------------- ------
stsw-8020-01       0a        -                          -                          -                 -
stsw-8020-01       2b        -                          -                          -                 -
stsw-8020-02       0a        -                          -                          -                 -
stsw-8020-02       2b        -                          -                          -                 -
Errors:
------
-
cluster1::>
The following example displays information about the disk bays of the storage shelf 1.2:
Bays:
Has Operational
ID Disk Bay Type Status
--- ----- ----------- -----------
0 true single-disk normal
1 true single-disk normal
2 true single-disk normal
3 true single-disk normal
4 true single-disk normal
5 true single-disk normal
6 true single-disk normal
7 true single-disk normal
8 true single-disk normal
9 true single-disk normal
10 true single-disk normal
11 true single-disk normal
12 false single-disk normal
13 false single-disk normal
14 false single-disk normal
15 false single-disk normal
16 false single-disk normal
17 false single-disk normal
18 false single-disk normal
19 false single-disk normal
20 false single-disk normal
21 false single-disk normal
22 false single-disk normal
23 false single-disk normal
Errors:
------
-
cluster1::>
The following example displays information about the ports of the storage shelf 1.2:
SAS Ports:
-- Port Speeds Gb/s -- Power Port
Phy # IOM Port Type WWPN Operational Negotiated Status Status
----- --- --------- ----------------- ----------- ---------- ------ -----------
0 A Square 500a098004b063b0 6.0 - - Enabled
1 A Square 500a098004b063b0 6.0 - - Enabled
2 A Square 500a098004b063b0 6.0 - - Enabled
3 A Square 500a098004b063b0 6.0 - - Enabled
4 A Circle 500a09800569f03f 6.0 - - Enabled
5 A Circle 500a09800569f03f 6.0 - - Enabled
6 A Circle 500a09800569f03f 6.0 - - Enabled
7 A Circle 500a09800569f03f 6.0 - - Enabled
8 A Disk 500605ba00c1cb8d 6.0 6.0 on Enabled
9 A Disk 500605ba00c1ea8d 6.0 6.0 on Enabled
10 A Disk 500605ba00c1d111 6.0 6.0 on Enabled
11 A Disk 500605ba00c1bc49 6.0 6.0 on Enabled
12 A Disk 500605ba00c1cdfd 6.0 6.0 on Enabled
13 A Disk 500605ba00c1c531 6.0 6.0 on Enabled
14 A Disk 500605ba00c1eb05 6.0 6.0 on Enabled
15 A Disk 500605ba00c1ec29 6.0 6.0 on Enabled
16 A Disk 500605ba00c1bc29 6.0 6.0 on Enabled
17 A Disk 500605ba00c1c471 6.0 6.0 on Enabled
18 A Disk 500605ba00c039a9 6.0 6.0 on Enabled
19 A Disk 500605ba00c1c4dd 6.0 6.0 on Enabled
20 A Disk - - - - Empty
21 A Disk - - - - Empty
22 A Disk - - - - Empty
23 A Disk - - - - Empty
24 A Disk - - - - Empty
25 A Disk - - - - Empty
26 A Disk - - - - Empty
27 A Disk - - - - Empty
28 A Disk - - - - Empty
29 A Disk - - - - Empty
30 A Disk - - - - Empty
31 A Disk - - - - Empty
32 A SIL - - - - Disabled
33 A SIL - - - - Disabled
34 A SIL - - - - Disabled
35 A SIL - - - - Disabled
0 B Square 500a098004af9e30 6.0 - - Enabled
1 B Square 500a098004af9e30 6.0 - - Enabled
2 B Square 500a098004af9e30 6.0 - - Enabled
3 B Square 500a098004af9e30 6.0 - - Enabled
FC Ports:
Port
ID Port Type Status
--- --------- -----------
- - -
Errors:
------
-
cluster1::>
The following example displays error information about the storage shelves that have errors:
The storage shelf acp command family manages the alternate control path (ACP) connectivity on the system.
The command set allows you to view the current connectivity and the ACP modules. You can also configure
or disable the connection across the cluster.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Configure the ACP connectivity on the cluster.
● Parameters
-is-enabled {true|false} - Is Enabled?
Configures the connectivity to the specified state.
[-subnet <IP Address>] - Subnet
Configures the connectivity to the specified subnet.
[-netmask <IP Address>] - Netmask
Configures the connectivity to the specified netmask.
[-channel {out-of-band|in-band}] - Channel
Configures the connectivity to the specified channel.
● Examples
The following example configures out-of-band ACP connectivity on each node:
cluster1::> storage shelf acp configure -is-enabled true -channel out-of-band -subnet 192.168.0.1 -netmask 255.255.255.0
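The subnet and netmask given to storage shelf acp configure define the address range the ACP modules use. A minimal sketch with Python's standard ipaddress module shows the arithmetic; the module address 192.168.0.50 is a made-up value for illustration:

```python
import ipaddress

# The subnet/netmask pair from the configure example above; strict=False lets
# ipaddress accept a host address (192.168.0.1) in place of the network address.
network = ipaddress.ip_network("192.168.0.1/255.255.255.0", strict=False)

module_ip = ipaddress.ip_address("192.168.0.50")   # hypothetical ACP module address
print(module_ip in network)   # True: the address falls inside the configured subnet
```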
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays information about the ACP connectivity on each node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-errors ]
If you specify the -errors parameter, the command displays detailed information about all modules with
errors.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-is-enabled {true|false}] - Is Enabled?
Selects the nodes that are enabled or disabled.
[-port <text>] - Port
Selects the nodes that match the specified port on which ACP is configured.
[-address <IP Address>] - IP Address
Selects the nodes with the specified IP address.
[-subnet <IP Address>] - Subnet
Selects the nodes with the specified subnet.
[-netmask <IP Address>] - Netmask
Selects the nodes with the specified netmask.
[-connection-status {no-connectivity|partial-connectivity|full-connectivity|additional-connectivity|
unknown-connectivity|not-available|connection-disabled}] - Connection Status
Selects the nodes with the specified connection status.
[-error-id <integer>] - Error ID
Selects the node with the specified error ID.
[-error-type {No-Error|Connection-Issue|Connection-Activity|Module-Error|Shelf-Error}] - Error Type
The error type, in case of a connection error.
[-error-severity {unknown|notice|warning|error|critical}] - Error Severity
The error severity, in case of a connection error.
[-error-text <text>] - Error Text
Selects the node with the specified error text.
[-corrective-action <text>] - Corrective Action
Selects the node with the specified corrective action.
[-channel {unknown|out-of-band|in-band}] - Channel
Selects the nodes that have the channel configured as out-of-band or in-band.
● Examples
The following example displays ACP connectivity on each node (in-band):
The following example displays ACP connectivity on each node (out of band):
The following example displays the -instance output of the storage acp show (in-band) command. Use this
command to display details on connectivity and configuration.
Node: fas2750-rtp-1b
Channel: in-band
Enable Status: true
Connection Status: active
2 entries were displayed.
The following example displays the -instance output of the storage acp show (out-of-band) command. Use
this command to display details on connectivity and configuration.
Node: fas2750-rtp-1b
Channel: out-of-band
Enable Status: true
Port: e0P
IP Address: 192.168.1.75
Subnet: 192.168.0.1
Netmask: 255.255.252.0
Connection Status: full-connectivity
2 entries were displayed.
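Each -instance block above is a flat "Label: value" listing, one field per line. A generic parsing sketch (hypothetical helper, not an ONTAP utility) that turns one such block into a dictionary:

```python
def parse_instance_block(text):
    """Parse 'Label: value' lines, as in the -instance output above,
    splitting on the first colon of each line."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Fields copied from the out-of-band example output above:
block = """\
Node: fas2750-rtp-1b
Channel: out-of-band
Port: e0P
IP Address: 192.168.1.75
Netmask: 255.255.252.0"""
print(parse_instance_block(block)["Port"])   # e0P
```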
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays information about the modules connected to each node
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the specified
field or fields. You can use '-fields ?' to display the fields to specify.
| [-errors ]
If you specify the -errors parameter, the command displays detailed information about all modules with
errors.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the modules that match this parameter value.
[-mac-address <text>] - MAC Address
Selects the modules that match the specified MAC address.
[-module-name <text>] - Module name
Selects the modules that match the specified module name.
[-module-address <IP Address>] - IP Address
Selects the modules that match the specified IP address.
[-protocol-version <text>] - Protocol Version
Selects the modules that match the specified protocol version.
[-firmware-version <text>] - Firmware Version
Selects the modules that match the specified firmware version.
[-acpa-id <integer>] - ACPA assigner ID
Selects the modules that match the specified ACPA ID.
[-shelf-serial-number <text>] - Shelf Serial Number
Selects the modules that match the specified shelf serial number.
[-iom-type {Unknown|iom3|iom6|iom6e|iom12|iom12e|iom12f}] - IOM Type
Selects the modules that match the specified IOM type.
[-last-contact <integer>] - Last Contact (secs)
Selects the modules that match the specified last contact.
[-state {unknown|initializing|discovery-complete|awaiting-inband|no-inband|active|awaiting-bootp|
updating-firmware|connection-error|firmware-update-required|rebooting|fail|unsupported|degraded|
shelf-off}] - Local Node State
Selects the modules that match the specified state.
[-stack-id {<integer>|-}] - Stack ID
Selects the modules that match the specified stack ID.
[-shelf-id <text>] - Shelf ID
Selects the modules that match the specified shelf ID.
[-adapter-name <text>] - Adapter Name
Selects the modules that match the specified adapter name.
[-error-id <integer>, ...] - Error ID
Selects the modules that match the specified error ID.
[-error-text <text>, ...] - Error Text
● Examples
The following example displays the ACP modules connected to each node:
The following example displays the -instance output of the storage shelf acp module show command,
which provides more detail about each module:
Node: stor-v4-1a-1b-01
Module Name: 1.10.A
Mac Address: 00:a0:98:19:53:ee
IOM Type: IOM6E
Shelf Serial Number: SHJMS000000001A
IP Address: 192.168.3.239
Protocol Version: 2.1.1.21
Assigner ID: 2.1.1.21
State: Active
Last Contact: 203
Power Cycle Count: 0
Node: stor-v4-1a-1b-01
Module Name: 1.10.B
Mac Address: 00:a0:98:19:55:16
IOM Type: IOM6E
Shelf Serial Number: SHJMS000000001A
IP Address: 192.168.1.23
Protocol Version: 2.1.1.21
Assigner ID: 2.1.1.21
State: Active
Last Contact: 206
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-01
Module Name: 1.254.B
Mac Address: 00:a0:98:32:d6:ac
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.173
Protocol Version: 1.2.2.8
Assigner ID: 1.2.2.8
State: Active
Last Contact: 215
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-01
Module Name: 1.254.A
Mac Address: 00:a0:98:32:d6:dc
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.221
Protocol Version: 1.2.2.8
Assigner ID: 1.2.2.8
State: Active
Last Contact: 218
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-02
Module Name: 1.106.B
Mac Address: 00:a0:98:19:55:16
IOM Type: IOM6E
Shelf Serial Number: SHJMS000000001A
IP Address: 192.168.1.23
Protocol Version: 2.1.1.21
Assigner ID: 2.1.1.21
State: Initializing
Last Contact: 209
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-02
Module Name: 1.10.B
Mac Address: 00:a0:98:32:d6:ac
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.173
Protocol Version: 1.2.2.8
Assigner ID: 1.2.2.8
State: Initializing
Last Contact: 217
Power Cycle Count: 0
Power Off Count: 0
Power On Count: 0
Expander Reset Count: 0
Expander Power Cycle Count: 0
Node: stor-v4-1a-1b-02
Module Name: 1.10.A
Mac Address: 00:a0:98:32:d6:dc
IOM Type: IOM6
Shelf Serial Number: 6000368103
IP Address: 192.168.2.221
Protocol Version: 1.2.2.8
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage shelf drawer show command displays information for storage shelf drawers in the storage
system. If no parameters are specified, the default command displays the following information for the
drawers:
• Shelf Name
• Drawer Number
• Status
• Closed/Open
• Disk Count
• Firmware
To display detailed information for a single drawer, use the -shelf and -drawer parameters.
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all drawers, in column style output.
| [-errors ]
Displays the following error status information about the drawers that have errors:
• Status
• Error Description
| [-instance ]
Displays expanded information for all drawers in the system. If a shelf and drawer are specified, then
this parameter displays the same detailed information for the specified drawer as do the -shelf and
-drawer parameters.
[-shelf <text>] - Shelf Name
Displays the drawers in the storage shelf that matches the specified shelf name.
[-drawer <integer>] - Drawer Number
Displays the drawers that match the specified drawer number.
[-node {<nodename>|local}] - Node Name
Displays the drawers that are present for the specified node.
● Examples
The following example displays information about all drawers:
Drawer Disk
Shelf Drawer Status A/B Closed? Count Firmware A/B
----- ------ ----------------- ------- ----- -----------------
2.5
1 normal/normal closed 4 00000634/00000634
2 normal/normal closed 4 00000634/00000634
3 normal/normal closed 4 00000634/00000634
4 normal/normal closed 5 00000634/00000634
5 normal/normal closed 4 00000634/00000634
5 entries were displayed.
cluster1::>
The following example displays expanded information about drawer 1 in shelf 2.5:
Shelf: 2.5
Drawer ID: 1
Part Number: 111-03071
Serial Number: 021604008153
Drawer is Closed?: closed
Disk Count: 4
Firmware A: 00000634
Firmware B: 00000634
Path A: ok
Path B: ok
Status A: normal
Status B: normal
Drawer is Supported?: yes
Vendor Name: NETAPP
Mfg. Date: 02/2016
FRU Type: SASDRWR
Error Description: -
cluster1::>
The following example displays error information about the drawers that have errors:
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage shelf drawer show-phy command displays information for drawer PHYs in the storage
system. If no parameters are specified, the default command displays the following information about PHYs:
• Shelf Name
• Drawer Number
• PHY Number
• Type
• SAS Address
• State
To display detailed information about a single PHY, use the -shelf, -drawer, and -phy parameters.
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all drawer PHYs, in column style output.
| [-instance ]
Displays expanded information for all drawer PHYs in the system. If a shelf, drawer, and PHY are
specified, then this parameter displays the same detailed information for the PHY you specify as do
the -shelf, -drawer, and -phy parameters.
[-shelf <text>] - Shelf Name
Displays the PHYs in the storage shelf that matches the specified shelf name.
[-drawer <integer>] - Drawer Number
Displays the PHYs in the drawers that match the specified drawer number.
[-phy <integer>] - PHY Number
Displays the PHYs that match the specified PHY number.
[-node {<nodename>|local}] - Node Name
Displays the PHYs that are present for the specified node.
[-type {unknown|disk|virtual|input}] - Type
Displays the PHYs with the specified type.
[-physical-id <integer>] - Physical ID
Displays the PHYs that match the specified physical-id.
[-sas-address <text>] - Attached SAS Address
Displays the PHYs with the specified attached sas address.
[-state-a {unknown|enabled|disabled}] - State Module A
Displays the PHYs for which module A has the specified state.
[-state-b {unknown|enabled|disabled}] - State Module B
Displays the PHYs for which module B has the specified state.
[-status-a <Drawer PHY Status>] - Status Module A
Displays the PHYs with module A currently operating under the specified status.
[-status-b <Drawer PHY Status>] - Status Module B
Displays the PHYs with module B currently operating under the specified status.
● Examples
The following example displays information about all drawer PHYs:
The following example displays expanded information for PHY 0 of drawer 1 in shelf 2.5:
Shelf: 2.5
Drawer ID: 1
PHY Number: 0
Type: disk
Physical ID: 1
SAS Address: 00c5005079183f85
State A: enabled
State B: enabled
Status A: enabled-12gbs
Status B: enabled-12gbs
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage shelf drawer show-slot command maps each drawer and slot number to the
corresponding bay number.
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields in column style output.
| [-instance ]
Displays all slot information.
[-shelf <text>] - Shelf Name
Displays the slots in the shelf that matches the specified shelf name.
[-bay <integer>] - Bay Number
Displays the slots that have the specified bay number.
[-node {<nodename>|local}] - Node Name
Displays the slots that are present for the specified node.
[-drawer <integer>] - Drawer Number
Displays the slots in the drawers that match the specified drawer number.
[-slot <integer>] - Slot Number
Displays the slots that match the specified slot number.
[-is-installed {yes|no}] - Is Disk Installed
Displays the slots that have a disk installed.
● Examples
The following example displays the mapping from drawer and slot number to bay number:
5 17 no
6 18 yes
7 19 no
8 20 no
9 21 yes
10 22 no
11 23 no
3
0 24 yes
1 25 no
2 26 no
3 27 yes
4 28 no
5 29 no
6 30 yes
7 31 no
8 32 no
9 33 yes
10 34 no
11 35 no
4
0 36 yes
1 37 no
2 38 no
3 39 yes
4 40 yes
5 41 no
6 42 yes
7 43 no
8 44 no
9 45 yes
10 46 no
11 47 no
5
0 48 yes
1 49 no
2 50 no
3 51 yes
4 52 no
5 53 no
6 54 yes
7 55 no
8 56 no
9 57 yes
10 58 no
11 59 no
60 entries were displayed.
cluster1::>
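The mapping shown above is regular: each drawer in this example holds 12 slots, numbered 0 through 11, and bay numbers run consecutively across drawers. A sketch of the arithmetic (assuming 12 slots per drawer, which holds for this output but may differ on other shelf models):

```python
def bay_number(drawer, slot, slots_per_drawer=12):
    """Map a 1-based drawer number and a 0-based slot number to the bay
    number, matching the storage shelf drawer show-slot output above."""
    return (drawer - 1) * slots_per_drawer + slot

print(bay_number(3, 0))   # 24, as in the example (drawer 3, slot 0 -> bay 24)
print(bay_number(5, 9))   # 57
```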
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage shelf firmware show-update-status command displays the state of the Shelf Firmware
Update process.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the node that matches this parameter value.
[-update-status {running|idle}] - Disk Shelf Firmware Update Status
Selects the nodes whose SFU process status matches this parameter value. Possible values are:
• running - Disk shelf firmware update is in progress.
• idle - Disk shelf firmware update is not in progress.
[-in-progress-count <integer>] - Number of Shelves with Earlier Revisions Being Updated
Selects the nodes for which the number of shelves being updated by the SFU process matches this
parameter value. This is the number of shelves with earlier firmware revisions that are being updated.
● Examples
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage shelf firmware update command updates the firmware on one or more shelves. You can
download the latest firmware by using the storage firmware download command. You can specify a
shelf whose firmware is to be updated by using the -shelf parameter. You can update the firmware on all
the shelves by not providing the -shelf parameter. All the shelves of a specific module type can be
updated by providing a value to the -module-type parameter.
● Parameters
{ [-shelf <text>] - Shelf Name
This specifies the name of the shelf whose firmware is to be updated.
● Examples
The following example updates the firmware on all the shelves in the cluster:
The following example updates the firmware on all shelves with the IOM6 module type:
The following example refreshes the firmware on all shelves with the IOM6 module type:
Related reference
storage firmware download (page 1158)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage shelf location-led modify command modifies the on/off state of the shelf location LED.
● Parameters
-shelf-name <text> - Shelf Name
This parameter specifies the shelf whose LED is to be turned on or turned off.
[-led-status {on|off}] - Location LED
This parameter specifies whether the shelf location LED needs to be turned on or turned off.
● Examples
The following example turns on the shelf location LED of the specified shelf.
cluster1::> storage shelf location-led modify -node node1 -shelf-name 1.0 -led-status on
The following example turns off the shelf location LED of the specified shelf.
cluster1::> storage shelf location-led modify -node node1 -shelf-name 1.0 -led-status off
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage shelf location-led show command displays the state of shelf location LED.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]
If you specify the -instance parameter, the command displays detailed information about all fields.
[-shelf-name <text>] - Shelf Name
Selects the shelves whose shelf-name matches this parameter value.
[-node {<nodename>|local}] - Node Name
Selects the nodes that match this parameter value.
[-stack-id <integer>] - Stack ID
Selects the shelves whose stack-id matches this parameter value.
[-shelf-id <integer>] - Shelf ID
Selects the shelves whose shelf-id matches this parameter value.
[-led-status {on|off}] - Location LED
Shows the state of the shelf location LED.
● Examples
The following example shows the state of the shelf location LED for each shelf.
8.1 8 1 off
4 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage shelf port show command displays information for storage shelf ports in the storage
system. If no parameters are specified, the default command displays the following information for the
ports:
• Shelf Name
• ID
• Module
• State
• Internal?
To display detailed information for a single port, use the -shelf and -id parameters.
● Parameters
{ [-fields <fieldname>, ...]
Displays output in column style about the specified fields for all shelf ports.
| [-cables ]
Displays information about all cables connected to the shelf ports.
| [-instance ]
Displays expanded information for all shelf ports in the system. If a shelf and ID are specified, then this
parameter displays the same detailed information for the specified port as does the -shelf and -id
parameters.
[-shelf <text>] - Shelf Name
Displays the ports in the storage shelf that matches the specified shelf name.
[-id <integer>] - Port ID
Displays the ports that match the specified ID.
[-node {<nodename>|local}] - Node Name
Displays the ports that are present for the specified node.
[-module-id {A|B}] - Module ID
Displays the ports from the specified shelf module ID.
[-is-internal {true|false}] - Is Port Internal?
Displays the ports that are internal.
[-location <text>] - Location
Displays the ports with the specified location.
[-is-cable-connected {true|false}] - Is Cable Connected?
Displays the ports that have cables connected to them.
[-is-error {true|false}] - Any Errors?
Displays the ports for which errors have been logged.
● Examples
The following example displays information about all shelf ports:
The following example displays expanded information about port 0 in shelf 1.4:
Shelf: 1.4
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command takes the specified tape drive offline.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
{ -name <text> - Tape Drive Device Name
Use this parameter to specify the device name of the tape drive that needs to be taken offline. The
format of the device -name includes a prefix that specifies how the tape cartridge is handled and a
suffix that describes the density of the tape. The prefix is 'r', 'nr' or 'ur' for rewind, no rewind, or
unload/reload, and the suffix shows a density of 'l', 'm', 'h' or 'a'. For example, a tape device name for this
operation might have the form "nrst8m", where 'nr' is the 'no rewind' prefix, 'st8' is the alias name, and 'm'
is the tape density. You can use the 'storage tape show -device-names' command to find more
information about device names of tape drives attached to a node.
| -device-id <text>} - Tape Drive Device ID
Use this parameter to specify the device ID of the tape drive that needs to be taken offline.
● Examples
The following example takes the tape drive with device name 'nrst8m' offline. This tape drive is attached to
cluster1-01.
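The device-name convention described above (handling prefix, alias name, density suffix) can be split mechanically. A hypothetical helper, not part of ONTAP, using the 'nrst8m' name from the examples:

```python
import re

def parse_tape_name(name):
    """Split a tape device name such as 'nrst8m' into its cartridge-handling
    prefix ('r', 'nr', or 'ur'), alias name, and density suffix ('l/m/h/a')."""
    m = re.fullmatch(r"(nr|ur|r)(\w+?)([lmha])", name)
    if not m:
        raise ValueError("not a recognized tape device name: " + name)
    prefix, alias, density = m.groups()
    handling = {"r": "rewind", "nr": "no rewind", "ur": "unload/reload"}[prefix]
    return handling, alias, density

print(parse_tape_name("nrst8m"))   # ('no rewind', 'st8', 'm')
```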
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command brings a specified tape drive online.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
{ -device-id <text> - Tape Drive Device ID
Use this parameter to specify the device ID of the tape drive that needs to be brought online.
| -name <text>} - Tape Drive Device Name
Use this parameter to specify the device name of the tape drive that needs to be brought online. The
format of the device -name includes a prefix that specifies how the tape cartridge is handled and a
suffix that describes the density of the tape. The prefix is 'r', 'nr' or 'ur' for rewind, no rewind, or
unload/reload, and the suffix shows a density of 'l', 'm', 'h' or 'a'. For example, a tape device name for this
operation might have the form "nrst8m", where 'nr' is the 'no rewind' prefix, 'st8' is the alias name, and 'm'
is the tape density. You can use the 'storage tape show -device-names' command to find more
information about device names of tape drives attached to a node.
● Examples
The following example brings the tape drive with device id sw4:2.126L4 attached to the node, cluster1-01,
online.
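Based on the parameters documented above, the invocation would look like the following (the command form storage tape online is inferred from ONTAP's storage tape command family):

```
cluster1::> storage tape online -node cluster1-01 -device-id sw4:2.126L4
```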
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command changes the tape drive cartridge position.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
-name <text> - Tape Drive Device Name
Use this parameter to specify the device name of the tape drive whose cartridge position needs to be
changed. The format of the device name includes a prefix to specify how the tape cartridge is handled
and a suffix to describe the density of the tape. The prefix can be 'r', 'nr' or 'ur' for rewind, no rewind, or
unload/reload, and the suffix indicates a density of 'l', 'm', 'h' or 'a'. For example, a tape device name for this
operation might have the form "nrst8m", where 'nr' is the 'no rewind' prefix, 'st8' is the alias-name and 'm'
is the tape density. You can use the 'storage tape show -device-names' command to find more
information about device names of tape drives attached to a node.
-operation {weof|fsf|bsf|fsr|bsr|rewind|erase|eom} - Tape Position Operation
Use this parameter to specify the tape positioning operation. The possible values for -operation are:
• weof - Write end-of-file marks
• fsf - Forward space end-of-file marks
• bsf - Backward space end-of-file marks
• fsr - Forward space records
• bsr - Backward space records
• rewind - Rewind the tape
• erase - Erase the entire tape media from the current position
• eom - Position the tape at end of data (end of media if full)
[-count <integer>] - Count for Positioning
Use this parameter to specify the count for a tape positioning operation. You can specify this parameter
only with the following operations: weof, fsf, bsf, fsr, and bsr. The default value of this parameter is one.
● Examples
The following example specifies a rewind operation on a tape device. Note the -count parameter does not
need to be specified for this type of operation.
cluster1::> storage tape position -node cluster1-01 -name nrst8m -operation rewind
The following example specifies an fsf (forward space filemark) operation on a tape device. Note the -count
parameter specifies 5 forward space filemarks for this operation.
cluster1::> storage tape position -node cluster1-01 -name nrst1a -operation fsf -count 5
The following example specifies an eom (end-of-media) operation on a tape device. The 'eom' positions a
tape at end of data (end of media if full). Note the -count parameter does not need to be specified for this
type of operation.
cluster1::> storage tape position -node cluster1-01 -name rst0h -operation eom
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command resets a specified tape drive.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
-device-id <text> - Tape Drive Device ID
Use this parameter to specify the device ID of the tape drive that is to be reset.
● Examples
The following example resets the tape drive with device ID sw4:2.126L3 attached to the node, cluster1-01.
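Based on the parameters documented above, the invocation would look like the following (the command form storage tape reset is inferred from ONTAP's storage tape command family; the advanced-privilege prompt is shown):

```
cluster1::*> storage tape reset -node cluster1-01 -device-id sw4:2.126L3
```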
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape show command displays information about tape drives and media changers attached
to the cluster. Where it appears in the remainder of this document, "device" may refer to either a tape drive
or a media changer. By default, this command displays the following information about all tape drives and
media changers:
• Node to which the tape drive/media changer is attached
• Device ID of the tape drive/media changer
• Description of the tape drive/media changer
• Type of device: tape drive or media changer
• Functional status of the tape drive/media changer
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-alias ]
Displays the tape drive/media changer alias with the following details:
• Node to which tape drive/media changer is attached
• Device ID of the tape drive/media changer
• Alias name of the tape drive/media changer
• Alias mapping for tape drive/media changer
| [-connectivity ]
Displays the connectivity from the node to the tape drive/media changer with the following details:
• Node to which tape drive/media changer is attached
• Device ID of the tape drive/media changer
• Tape drive/media changer description
• Type of device: tape drive or media changer
• Interface type for the tape drive/media changer
• World Wide Node Name of tape drive/media changer
• World Wide Port Name of tape drive/media changer
• Serial Number of tape drive/media changer
• Tape drive/media changer errors
• Initiator port which hosts the tape drive/media changer
• Alias name of the tape drive/media changer
• Operational state of tape drive/media changer
• Functional status of tape drive/media changer
| [-device-names ]
Displays the tape drive device names used for tape positioning (rewind, no rewind, unload/reload, and
density) with the following details:
• Node to which tape drive/media changer is attached
• Device ID of the tape drive/media changer
• Tape drive/media changer description
• Device Names that include Rewind, no Rewind, Unload/Reload
| [-status ]
Displays the status of tape drive/media changer with the following details:
• Device ID of the tape drive/media changer
● Examples
The following example displays information about all tape drives and media changers attached to the
cluster:
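The output below corresponds to the bare command with no parameters:

```
cluster1::> storage tape show
```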
Node: cluster1-01
Device ID Device Type Description Status
---------------------- -------------- ------------------------------ --------
sw4:10.11 tape drive HP LTO-3 error
Node: cluster1-01
Device ID Device Type Description Status
---------------------- -------------- ------------------------------ --------
sw4:10.11L1 media changer PX70-TL normal
The following example displays detailed information about a tape drive named sw4:10.11
Node: cluster1-01
Device ID Device Type Description Status
---------------------- -------------- ------------------------------ --------
sw4:10.11 tape drive HP LTO-3 error
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape show-errors command displays error information about tape drives attached to the
cluster. By default, this command displays the following information about all tape drives:
• Node to which the tape drive is attached
• Device ID of the tape drive
• Type of device (tape drive)
• Description of the tape drive
• Alias name of the tape drive
• Tape drive errors
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays detailed information about tape drives on the specified node.
[-device-id <text>] - Device ID
Selects the tape drive with the specified device ID.
[-device-type <text>] - Device Type
Selects the devices with the specified type of tape drive.
[-description <text>] - Description
Selects the tape drives with the specified description.
[-alias-name <text>] - Alias Name
Selects the tape drive with the specified alias name.
[-wwnn <text>] - World Wide Node Name
Selects the tape drives with the specified World Wide Node Name.
[-wwpn <text>] - World Wide Port Name
Selects the tape drive with the specified World Wide Port Name.
[-serial-number <text>] - Serial Number
Selects the tape drive with the specified serial number.
[-error <text>] - Tape Drive Error Description
Selects the tape drives with the specified error string.
[-initiator <text>] - Initiator Port
Selects the tape drives with the specified initiator port.
● Examples
The following example shows error information for all tape drives attached to cluster1.
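The output below corresponds to the bare command with no parameters:

```
cluster1::> storage tape show-errors
```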
Node: node1
Device ID: 2d.0
Device Type: tape drive
Description: IBM LTO-6 ULT3580
Alias: st2
Errors: -
The following example shows error information for tape drive sw4:2.126L1 attached to the node, node1.
Node: node1
Device ID: sw4:2.126L1
Device Type: tape drive
Description: Hewlett-Packard LTO-3
Alias: st3
Errors: -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape show-media-changer command displays information about media changers
attached to the cluster. By default, this command displays the following information about all media
changers:
• Device ID of media changer
• Description of media changer
• World Wide Node Name of media changer
• World Wide Port Name of media changer
• Serial number of media changer
• Media changer errors
• Node to which the media changer is attached
• Initiator port which hosts the media changer
• Alias name of media changer
• Operational state of media changer
• Functional status of media changer
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-device-id <text>] - Device ID
Selects the media changer with the specified device ID.
[-node {<nodename>|local}] - Node
Displays detailed information about media changers on the specified node.
[-description <text>] - Description
Selects the media changers with the specified description.
[-alias-name <text>] - Alias Name
Selects the media changer with the specified alias name.
[-wwnn <text>] - World Wide Node Name
Selects the media changers with the specified World Wide Node Name.
[-wwpn <text>] - World Wide Port Name
Selects the media changer with the specified World Wide Port Name.
[-serial-number <text>] - Serial Number
Selects the media changer with the specified serial number.
[-device-if-type {unknown|fibre-channel|SAS|pSCSI}] - Device If Type
Selects the media changers with the specified interface type.
[-device-state {unknown|available|ready-write-enabled|ready-write-protected|offline|in-use|error|
reserved-by-another-host|normal}] - Operational State of Device
Selects the media changers with the specified operational state.
[-error <text>] - Media Changer Error Description
Selects the media changers with the specified error string.
[-initiator <text>] - Initiator Port
Selects the media changers with the specified initiator port.
● Examples
The following example displays information about all media changers attached to the cluster:
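The output below corresponds to the bare command with no parameters:

```
cluster1::> storage tape show-media-changer
```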
Errors: -
Paths:
Node Initiator Alias Device State Status
------------------------ --------- ------- ------------------------ --------
cluster1-01 2b mc0 in-use normal
Description: NEO-TL
WWNN: 2:001:000e11:10b919
WWPN: 2:002:000e11:10b919
Serial Number: 00FRU7800000_LL0
Errors: -
Paths:
Node Initiator Alias Device State Status
------------------------ --------- ------- ------------------------ --------
cluster1-01 5a mc1 available normal
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the supported and qualification status of all tape drives recognized by Data ONTAP
that are attached to a node in the cluster. This includes nonqualified tape drives, which do not have a Tape
Configuration File (TCF) on the storage system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the tape drives that match this parameter value.
[-tape-drive <text>] - Tape Drive Name
Selects the tape drives that match this parameter value.
[-is-supported {true|false}] - Tape Drive Supported
Selects the tape drives that match this parameter value.
[-status <text>] - Supported Status
Selects the tape drives that match this parameter value.
● Examples
The following example displays support and qualification status of tape drives recognized by Data ONTAP.
The command also identifies tape drives attached to the node that are nonqualified (not supported).
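The output below corresponds to an invocation of the following form (the command name storage tape show-supported-status is inferred from ONTAP's storage tape command family):

```
cluster1::> storage tape show-supported-status -node Node1
```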
Node: Node1
Is
Tape Drive Supported Support Status
------------------------- --------- ---------------------------------
sw4:2.126L6 false Nonqualified tape drive
Hewlett-Packard C1533A true Qualified
Hewlett-Packard C1553A true Qualified
Hewlett-Packard Ultrium 1 true Qualified
Node: Node1
Is
Tape Drives Supported Support Status
------------------------------ --------- -------------------------------
Sony SDX-300C true Qualified
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape show-tape-drive command displays information about tape drives attached to the
cluster. By default, this command displays the following information about all tape drives:
• Device ID of tape drive
• Description of tape drive
• World Wide Node Name of tape drive
• World Wide Port Name of tape drive
• Serial Number of tape drive
• Tape drive errors
• Node to which the tape drive is attached
• Initiator port which hosts the tape drive
• Alias name of tape drive
• Operational state of tape drive
• Functional status of tape drive
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-device-id <text>] - Device ID
Selects the tape drive with the specified device ID.
[-node {<nodename>|local}] - Node
Selects the tape drives that are attached to the specified node.
● Examples
The following example displays information about all tape drives attached to the cluster:
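The output below corresponds to the bare command with no parameters:

```
cluster1::> storage tape show-tape-drive
```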
Errors: -
Paths:
Node Initiator Alias Device State Status
------------------------ --------- ------- ------------------------ --------
cluster1-01 2a st0 ready-write-enabled normal
Errors: -
Paths:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command enables or disables diagnostic tape trace operations for all tape drives attached to the node
you have specified.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which the tape trace feature is enabled or disabled.
[-is-trace-enabled {true|false}] - Tape Trace Enabled or Disabled
Use this parameter to enable or disable the tape trace feature. By default, the tape trace feature is
enabled.
● Examples
The following example enables tape trace operation on the node, cluster1-01.
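Based on the parameters documented above, the invocation would look like the following (the command form storage tape trace is inferred from ONTAP's storage tape command family):

```
cluster1::> storage tape trace -node cluster1-01 -is-trace-enabled true
```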
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command clears alias names for a tape drive or media changer.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
{ -name <text> - Alias Name That Is to Be Cleared
Use this parameter to specify the alias name that is to be cleared. You can use the 'storage tape show -
alias' command to find more information about alias names of tape drives and media changers attached
to a node. The -clear-scope and -name parameters are mutually exclusive. If you specify the -name
parameter, a single alias name is cleared.
| -clear-scope {tape|media-changer|all}} - Scope of Alias Clear Operation
Use this parameter to specify the scope of the alias clear operation. The -clear-scope and -name
parameters are mutually exclusive. If you specify the -clear-scope parameter, multiple aliases are
cleared depending upon the value of the parameter.
● Examples
The following example clears an alias name 'st3' attached to the node, cluster1-01.
The following example clears all tape drive alias names attached to the node, cluster1-01.
The following example clears all media changer alias names attached to the node, cluster1-01.
The following example clears both tape and media changer alias names attached to the node, cluster1-01.
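Based on the parameters documented above, the four examples correspond to invocations of the following form (the command form storage tape alias clear is inferred from the storage tape alias command directory):

```
cluster1::> storage tape alias clear -node cluster1-01 -name st3
cluster1::> storage tape alias clear -node cluster1-01 -clear-scope tape
cluster1::> storage tape alias clear -node cluster1-01 -clear-scope media-changer
cluster1::> storage tape alias clear -node cluster1-01 -clear-scope all
```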
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command sets an alias name for a tape drive or media changer.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node to which the tape drive is attached.
-name <text> - Alias Name for Tape Drive or Media Changer
Use this parameter to specify the alias name for tape drive or media changer. For a tape drive alias
name, the format is 'st' followed by one or more digits. For a media changer alias name, the format is
'mc' followed by one or more digits.
-mapping <text> - Mapping for Alias Name
Use this parameter to specify the mapping for an alias name, in the format 'SN[<serial-number>]',
where <serial-number> is from 2 to 90 characters long and can include the following characters: 0-9,
a-z, and A-Z.
● Examples
The following example sets an alias name 'st3' for a tape drive with serial number SN[123456]L4 attached
to the node, node1.
cluster1::storage tape alias> set -node node1 -name st3 -mapping SN[123456]L4
The following example sets an alias name 'mc1' for a media changer with serial number SN[65432] attached
to the node, node1.
cluster1::storage tape alias> set -node node1 -name mc1 -mapping SN[65432]
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays aliases of all tape drives and media changers attached to every node in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example shows the aliases of all tape drives and media changers attached to every node in
the cluster:
Node: node1
Alias Mapping
-------------------------------------- ----------------------------------------
mc0 SN[00FRU7800000_LL0]L1
mc1 SN[00FRU7800000_LL1]L1
mc2 SN[aa6a64c69360a0980248c8]
mc3 SN[c940abe8b0c3a0980248c8]
mc4 SN[fba082e6b335a0980248c8]L5
st0 SN[HU19487T7N]
st1 SN[1068000230]
st10 SN[fba0c508b335a0980248c8]L7
Node: node2
Alias Mapping
-------------------------------------- ----------------------------------------
mc1 SN[c940982fc48c8]
st3 SN[ST456HT8N]L3
st2 SN[HG68000230]L2
st11 SN[aba673980248c8]L7
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The storage tape config-file delete command deletes the specified tape drive configuration file
from all nodes that are currently part of the cluster.
● Parameters
-filename <text> - Config File Filename
This parameter specifies the name of the tape configuration file that will be deleted from all nodes that
are currently part of the cluster.
● Examples
The following example deletes the specified tape drive configuration files on every node that is currently
part of the cluster:
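A representative invocation follows (the configuration file name is illustrative):

```
cluster1::> storage tape config-file delete -filename tape_config.TCF
```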
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape config-file get command uploads a specified tape drive configuration file to each
node that is currently part of the cluster.
● Parameters
-url <text> - Config File URL
This parameter specifies the URL that provides the location of the package to be fetched. Standard URL
schemes, including HTTP and TFTP, are accepted.
● Examples
The following example uploads the specified tape drive configuration file to each node that is currently part
of the cluster:
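A representative invocation follows (the URL and file name are illustrative):

```
cluster1::> storage tape config-file get -url http://webserver.example.com/tape_config.TCF
```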
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape config-file show command lists the tape drive configuration files loaded onto each
node in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about tape drive configuration files for the specified node.
[-config-file <text>] - Tape Config File
Selects information about the tape drive configuration file specified.
● Examples
The following example lists the tape drive config files loaded onto each node in the cluster:
Node: node1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays information such as how the storage tape libraries connect to the cluster, LUN
groups, number of LUNs, WWPN, and switch port information. Use this command to verify the cluster's
storage tape library configuration or to assist in troubleshooting.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-switch ]
If you specify this parameter, switch port information is shown.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller Name
The name of the clustered node for which information is being displayed.
[-group <integer>] - LUN Group
A LUN group is a set of LUNs that shares the same path set.
[-target-wwpn <text>] - Library Target Ports
The World Wide Port Name of a storage tape library port.
[-initiator <text>] - Initiator
The host bus adapter that the clustered node uses to connect to storage tape libraries.
[-array-name <array name>] - Library Name
Name of the storage tape library that is connected to the clustered node.
[-target-side-switch-port <text>] - Target Side Switch Port
This identifies the switch port that connects to the tape library's target port.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
This identifies the switch port that connects to the node's initiator port.
[-lun-count <integer>] - Number of LUNS
Use this parameter to select for display only the LUN groups with the specified number of LUNs.
● Examples
The following example displays the storage tape library configuration information.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays path information for a tape library and has the following parameters by default:
• Node name
• Initiator port
• Target port
• TPGN (Target Port Group Number)
• Port speeds
• Path I/O in Kbytes/sec
• IOPs
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-detail ]
Using this option displays the following:
• Target IOPs
• Target LUNs
• Path IOPs
• Path errors
• Path quality
• Path LUNs
• Initiator IOPs
• Initiator LUNs
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-array-name <array name>] - Library Name
Name of the storage tape library that is connected to the cluster.
[-target-wwpn <text>] - Target Port
Target World Wide Port Name. Port on the storage tape library that is being used.
[-initiator <text>] - Initiator Port
Initiator port that the clustered node uses.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Switch port connected to the clustered node.
[-tpgn <integer>] - Target Port Group Number
TPGN refers to the target port group to which the target port belongs. A target port group is a set of
target ports which share the same LUN access characteristics and failover behaviors.
[-port-speed <text>] - Port Speed
Port Speed of the specified port.
● Examples
The following example displays the path information for a storage tape library
cluster1-01
3d 50050763124b4d6f 61 4 Gb/S 0 0
cluster1-01
0b 510a09800000412d 35 4 Gb/S 0 0
cluster1-01
0b 510a09820000412d 1 4 Gb/S 0 0
3 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays path information for every initiator port connected to a tape library. The output is
similar to the storage library path show command but the output is listed by initiator.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Controller name
The name of the clustered node for which information is being displayed.
[-initiator <text>] - Initiator Port
Initiator port that the clustered node uses.
[-target-wwpn <text>] - Target Port
Target World Wide Port Name. Port on the storage tape library that is being used.
[-initiator-side-switch-port <text>] - Initiator Side Switch Port
Switch port connected to the clustered node.
[-target-side-switch-port <text>] - Target Side Switch Port
Switch port connected to the tape library.
[-array-name <array name>] - Library Name
Name of the storage tape library that is connected to the cluster.
[-tpgn <integer>] - Target Port Group Number
TPGN refers to the target port group to which the target port belongs. A target port group is a set of
target ports which share the same LUN access characteristics and failover behaviors.
[-port-speed <text>] - Port Speed
Port Speed of the specified port.
[-path-io-kbps <integer>] - Kbytes of I/O per second on Path (Rolling Average)
Rolling average of Kbytes of I/O per second on the library path.
[-path-iops <integer>] - Number of I/O per second on Path (Rolling Average)
Rolling average of I/O per second on the library path.
[-initiator-io-kbps <integer>] - Kbytes of I/O per second on Initiator (Rolling Average)
Rolling average of Kbytes of I/O per second on the initiator port.
[-initiator-iops <integer>] - Number of I/O per second on Initiator (Rolling Average)
Rolling average of I/O per second on the initiator port.
[-target-io-kbps <integer>] - Kbytes of I/O per second to Target (Rolling Average)
Rolling average of Kbytes of I/O per second on the target port.
[-target-iops <integer>] - Number of I/O per second to Target (Rolling Average)
Rolling average of I/O per second on the target port.
● Examples
The following example displays the path information by initiator for a storage tape library.
sw_tape:1 0 510a0982000
0412d TAPE_LIB_1
3d 0 N/A 0 N/A 0 50050763124
b4d6f TAPE_LIB_2
3 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape load-balance modify command modifies the tape load balance setting for a
specified node in the cluster.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which the tape load balance setting is to be modified.
[-is-enabled {true|false}] - Is Tape Load Balance Enabled
This parameter specifies whether tape load balancing is enabled on the node. The default setting is
false.
● Examples
The following example modifies the tape load balance setting on node1 in the cluster:
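Based on the parameters documented above, the invocation would look like the following:

```
cluster1::> storage tape load-balance modify -node node1 -is-enabled true
```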
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage tape load-balance show command displays tape load balance settings for each node in
the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about tape load balancing for the specified node.
[-is-enabled {true|false}] - Is Tape Load Balance Enabled
Selects information about load balance configuration as specified by enabled or disabled setting.
● Examples
The following example shows the load balance setting for each node in the cluster:
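The output below corresponds to the bare command with no parameters:

```
cluster1::> storage tape load-balance show
```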
Node Enabled
--------------------------- ---------
node1 false
node2 false
2 entries were displayed.
The system commands enable you to monitor and control cluster nodes.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system bridge add command enables you to add FC-to-SAS bridges for SNMP monitoring in a
MetroCluster configuration.
● Parameters
-name <text> - Bridge Name
This parameter identifies the bridge being added. It is required only when the managed-by parameter is
set to in-band.
[-managed-by {SNMP|in-band}] - Bridge Management Method
This parameter specifies whether the bridge uses the SNMP or in-band management method.
FibreBridge 6500N uses SNMP only; FibreBridge 7500N may use either.
[-address <IP Address>] - Bridge Management Port IP Address
This parameter specifies the IP address of the bridge that is being added for monitoring.
[-snmp-community <text>] - SNMP Community
This parameter specifies the SNMP community set on the bridge that is being added for monitoring.
[-veto-backend-fabric-check {true|false}] - Veto Backend Fabric Check (privilege: advanced)
If specified, the system bridge add command will not check if the bridge is present in the
MetroCluster's backend fabric. By default, it does not let you add bridges that are not present.
● Examples
The following command adds a bridge with IP address '10.226.197.16' for monitoring:
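Based on the parameters documented above, the invocation would look like the following (the SNMP community value is illustrative; -name is omitted because it is required only for in-band management):

```
cluster1::> system bridge add -address 10.226.197.16 -snmp-community public
```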
ATTO_FibreBridge6500N_3
Not Set Atto FibreBridge 6500N
2000001086609e0e false SNMP -
ATTO_FibreBridge6500N_4
Not Set Atto FibreBridge 6500N
2000001086609c06 false SNMP -
4 entries were displayed.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system bridge modify command enables you to modify certain parameters for identifying and accessing the
FC-to-SAS bridges added for monitoring in a MetroCluster configuration.
● Parameters
-name <text> - Bridge Name
This parameter specifies the name of the bridge.
[-address <IP Address>] - Bridge IP Address
This parameter specifies the IP address of the bridge.
[-snmp-community <text>] - SNMP Community Set on the Bridge
This parameter specifies the SNMP community set on the bridge.
[-managed-by {SNMP|in-band}] - Bridge Management Method
This parameter specifies whether the bridge uses the SNMP or in-band management method.
FibreBridge 6500N uses SNMP only; FibreBridge 7500N may use either.
[-node-visible-list <text>, ...] - Nodes Bridge Is Visible To
This parameter specifies the nodes to which the bridge is visible.
● Examples
The following command modifies 'ATTO_10.226.197.16' bridge SNMP community to 'public':
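Based on the parameters documented above, the invocation would look like the following:

```
cluster1::> system bridge modify -name ATTO_10.226.197.16 -snmp-community public
```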
cluster1::>
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system bridge refresh command triggers a refresh of the SNMP data for the MetroCluster FC
switches and FC-to-SAS bridges. It has no effect if a refresh is already in progress. The FC switches
and FC-to-SAS bridges must have been previously added for monitoring by using the system switch
fibre-channel add and system bridge add commands, respectively.
● Examples
The following command triggers a refresh for the SNMP data:
cluster1::*>
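No parameters are documented for this command, so the invocation is simply the command name; a sketch at the advanced privilege level:
cluster1::*> system bridge refresh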
Related reference
system bridge add (page 1264)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system bridge remove command enables you to remove FC-to-SAS bridges that were previously added
for SNMP monitoring.
● Parameters
-name <text> - Bridge Name
This parameter specifies the name of the bridge added for monitoring.
● Examples
The following command removes 'ATTO_10.226.197.16' bridge from monitoring:
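A sketch of that invocation, using the bridge name given in the example description:
cluster1::> system bridge remove -name ATTO_10.226.197.16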
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The storage bridge run-cli command enables you to execute an ATTO bridge command.
● Parameters
-name <text> - Bridge Name
This parameter specifies the name of the bridge that the command is to be executed on.
-command <text> - CLI command to execute
This parameter specifies the command to be executed on the named bridge.
● Examples
The following example executes a command on a bridge:
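A sketch of such an invocation; the bridge name and the ATTO CLI command 'DumpConfiguration' are illustrative assumptions, not taken from this manual's example:
cluster1::> storage bridge run-cli -name ATTO_10.226.197.16 -command "DumpConfiguration"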
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system bridge show command displays information about all the storage bridges in the MetroCluster
configuration. The bridges must have been previously added for monitoring using the system bridge add
command. If no parameters are specified, the default command displays the following information about
the bridges:
• Bridge
• Symbolic Name
• Vendor
• Model
• Bridge WWN
• Is Monitored
• Is Bridge Secure
• Managed By
• Monitor Status
To display detailed profile information about a single bridge, use the -name parameter.
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the bridges, in column style output.
| [-connectivity ]
Displays the following details about the connectivity from different entities to the bridge:
• Node
• Initiator
• Initiator Side Switch Port
• Target Side Switch Port
• Target Port WWN
• Target Port Number
| [-cooling ]
Displays the following details about the chassis temperature sensor(s) on the bridge:
• Sensor Name
• Reading in degree Celsius (C)
• Fan operational status
• Minimum Safe Operating Temperature in degree Celsius (C)
• Maximum Safe Operating Temperature in degree Celsius (C)
• Sensor Status
| [-error ]
Displays the errors related to the bridge.
| [-ports ]
Displays the following details about the bridge FC ports:
• Port number
• Port administrative status
• Port operational status
• Port operating mode
• Port negotiated speed
• Peer world wide name
Displays the following details about the bridge SAS ports:
• Port number
• Port negotiated data rate
• Port data rate capability
• Port PHY1 operational status
• Port PHY2 operational status
• Port PHY3 operational status
• Port PHY4 operational status
• Port administrative status
• Port operational status
• Peer world wide name
| [-power ]
Displays the status of the replaceable power supplies for the FibreBridge 7500N only:
• Power supply name
• Power supply status
| [-sfp ]
Displays the following details about the bridge FC ports Small Form-factor Pluggable (SFP):
• Port number
• SFP vendor
Displays information only about the bridges that have the SAS ports with the PHY reset count you
specify.
[-sas-port-sync-losses-list <integer>, ...] - Bridge SAS Port Sync Loss Count List
Displays information only about the bridges that have the SAS ports with the synchronization losses you
specify.
[-sas-port-disparity-count-list <integer>, ...] - Bridge SAS Port Disparity Count List
Displays information only about the bridges that have the SAS ports with the disparity count you specify.
[-sas-port-invalid-dword-list <integer>, ...] - Bridge SAS Port Invalid DWORD Count List
Displays information only about the bridges that have the SAS ports with the invalid DWORD count you
specify.
[-sas-port-stats-index-list <integer>, ...] - Bridge SAS Port Index List
Displays information only about the bridges that have the SAS ports with the indexes you specify.
[-sas-port-stats-data-rate-capability-list {unknown|1.5Gbps|3Gbps|6Gbps|12Gbps}, ...] -
Bridge SAS Port Data Rate Capability List
Displays information only about the bridges that have the SAS ports with the data rate capabilities you
specify.
[-sas-port-stats-negotiated-data-rate-list {unknown|1.5Gbps|3Gbps|6Gbps|12Gbps}, ...] -
Bridge SAS Port Negotiated Data Rate List
Displays information only about the bridges that have the SAS ports with the negotiated data rates you
specify.
[-sas-port-stats-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS
[{+|-}hh:mm]}] - Bridge SAS Port Statistics Data Last Successful Refresh Timestamp
Displays information only about the bridges that match the SAS port stats data last successful refresh
timestamp you specify.
[-fc-sfp-port-index-list <integer>, ...] - Bridge FC Port Index List
Displays information only about the bridges that have the FC ports with the indexes you specify.
[-fc-port-sfp-vendor-list <text>, ...] - Bridge FC Port SFP Vendor List
Displays information only about the bridges that have the FC ports with the SFP vendors you specify.
[-fc-port-sfp-serial-number-list <text>, ...] - Bridge FC Port SFP Serial Number List
Displays information only about the bridges that have the FC ports with the SFP serial numbers you
specify.
[-fc-port-sfp-part-number-list <text>, ...] - Bridge FC Port SFP Part Number List
Displays information only about the bridges that have the FC ports with the SFP part numbers you
specify.
[-fc-port-sfp-data-rate-capability-list {2Gb|4Gb|8Gb|16Gb|32Gb}, ...] - Bridge FC Port SFP
Data Rate Capability List
Displays information only about the bridges that have the FC ports with the SFP data rate capabilities
you specify.
[-fc-port-sfp-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS
[{+|-}hh:mm]}] - Bridge FC Port SFP Data Last Successful Refresh Timestamp
Displays information only about the bridges that match the FC ports SFP data last successful refresh
timestamp you specify.
[-sas-qsfp-port-index-list <integer>, ...] - Bridge SAS Port Index List
Displays information only about the bridges that have the SAS ports with the indexes you specify.
[-sas-port-qsfp-vendor-list <text>, ...] - Bridge SAS Port QSFP Vendor List
Displays information only about the bridges that have the SAS ports with the QSFP vendors you specify.
[-sas-port-qsfp-serial-number-list <text>, ...] - Bridge SAS Port QSFP Serial Number List
Displays information only about the bridges that have the SAS ports with the QSFP serial numbers you
specify.
[-sas-port-qsfp-type-list {unknown|optical|active-copper|passive-copper}, ...] - Bridge SAS Port
QSFP Type List
Displays information only about the bridges that have the SAS ports with the QSFP types you specify.
[-sas-port-qsfp-part-number-list <text>, ...] - Bridge SAS Port QSFP Part Number List
Displays information only about the bridges that have the SAS ports with the QSFP part numbers you
specify.
[-sas-port-qsfp-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS
[{+|-}hh:mm]}] - Bridge SAS Port QSFP Data Last Successful Refresh Timestamp
Displays information only about the bridges that match the SAS ports QSFP data last successful refresh
timestamp you specify.
[-mini-sas-hd-index-list <integer>, ...] - Bridge Mini-SAS HD Index List
Displays information only about the bridges that have SAS ports with the Mini-SAS HD indexes that you
specify.
[-mini-sas-hd-vendor-list <text>, ...] - Bridge Mini-SAS HD Vendor List
Displays information only about the bridges that have SAS ports with the Mini-SAS HD vendors that you
specify.
[-mini-sas-hd-serial-number-list <text>, ...] - Bridge Mini-SAS HD Serial Number List
Displays information only about the bridges that have SAS ports with the Mini-SAS HD serial numbers
that you specify.
[-mini-sas-hd-type-list <text>, ...] - Bridge Mini-SAS HD Type List
Displays information only about the bridges that have SAS ports with the Mini-SAS HD types that you
specify.
[-mini-sas-hd-part-number-list <text>, ...] - Bridge Mini-SAS HD Part Number List
Displays information only about the bridges that have SAS ports with the Mini-SAS HD part numbers that
you specify.
[-mini-sas-hd-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS
[{+|-}hh:mm]}] - Bridge Mini-SAS HD Data Last Successful Refresh Timestamp
Displays information only about the bridges that match the SAS port Mini-SAS HD data last successful
refresh timestamp that you specify.
[-power-supply-index-list <integer>, ...] - Bridge Power Supply Index List
Displays information only about the bridges that have power supplies with the indexes that you specify.
[-power-supply-name-list <text>, ...] - Bridge Power Supply Name List
Displays information only about the bridges that have power supplies with the name that you specify.
[-power-supply-status-list {unknown|down|up}, ...] - Bridge Power Supply Status List
Displays information only about the bridges that have power supplies with the status that you specify.
[-power-supply-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS
[{+|-}hh:mm]}] - Bridge Power Supply Data Last Successful Refresh Timestamp
Displays information only about the bridges that match the power supply data last successful refresh
timestamp that you specify.
[-node-list {<nodename>|local}, ...] - Node Name List
Displays information only about the bridges that are connected to the nodes you specify.
[-initiator-list <text>, ...] - Initiator List
Displays information only about the bridges that are connected to the nodes hosting the initiators you
specify.
[-initiator-side-switch-port-name-list <text>, ...] - Initiator Side Switch Port Name List
Displays information only about the bridges that are connected to the initiator-side switch ports you
specify.
[-target-side-switch-port-name-list <text>, ...] - Target Side Switch Port Name List
Displays information only about the bridges that are connected to the target-side switch ports you
specify.
[-target-port-wwn-list <text>, ...] - Target Port WWN List
Displays information only about the bridges that match the target ports with world-wide names you
specify.
[-target-port-index-list <integer>, ...] - Target Port Index List
Displays information only about the bridges that match the target ports with indexes you specify.
● Examples
The following example displays information about all bridges:
cluster1::>
The following example displays connectivity (node to bridge) information about all bridges:
The following command displays cooling (temperature sensors) information about all bridges:
The following command displays the error information about all bridges:
The following command displays the detailed information about all the bridges:
The following command displays power supply information about all bridges:
FC Ports:
Admin Oper Neg
Ports Status Status Port Mode Speed WWPN
----- -------- ------- ------------- ------- ----------------
1 enabled online n-port 8gb 2100001086603824
2 enabled offline unknown unknown 2200001086603824
SAS Ports:
Neg Data
Data Rate PHY1 PHY2 PHY3 PHY4 Admin Oper
Ports Rate Cap Status Status Status Status Status Status WWPN
----- ---- ---- ------- ------- ------- ------- -------- ------- ------------
1     3Gbps 6Gbps online  online  online  online  enabled  online  5001086000603824
2     6Gbps 6Gbps offline offline offline offline disabled offline 0000000000000000
The following command displays port SFP information about all bridges:
Errors: -
Last Update Time: 10/22/2015 13:27:37 -04:00
FC SFP:
Speed
Ports Vendor Serial Number Part Number Capability
----- ----------------- ------------------ ------------------------ ----------
1 AVAGO AD1020A01FC AFBR-57D7APZ 8Gbps
2 AVAGO AD1020A01F7 AFBR-57D7APZ 8Gbps
SAS QSFP:
Mini-SAS HD:
FC SFP:
Speed
Ports Vendor Serial Number Part Number Capability
----- ----------------- ------------------ ------------------------ ----------
1 AVAGO AC1442J00L5 AFBR-57F5MZ 16Gbps
2 AVAGO AC1442J00L0 AFBR-57F5MZ 16Gbps
SAS QSFP:
Mini-SAS HD:
The following command displays port statistics information about all bridges:
FC Ports:
Oper Neg Link Sync CRC Rx Tx
Ports Status Port Mode Speed Failure Losses Error Words Words
----- ------- ------------- ------- ------- ------ ----- ---------- ----------
1 online n-port 8gb 0 0 0 2721271731 3049186605
2 offline unknown unknown 1 1 0 0 0
SAS Ports:
Invalid Disparity Sync PHY Link CRC
SAS PHY Neg Speed Dword Error Loss Reset Changed Error
Port Port Speed Capability Count Count Count Count Count Count
---- ---- ------- ---------- ------- --------- ----- ----- ------- -----
1 0 3Gbps 6Gbps 28262 26665 2 0 1 0
1 1 3Gbps 6Gbps 2110 1794 20 0 1 0
1 2 3Gbps 6Gbps 20435 18857 13 0 1 0
1 3 3Gbps 6Gbps 4573 3353 16 0 1 0
2 0 6Gbps 6Gbps 66 53 0 0 0 0
2 1 6Gbps 6Gbps 27478 25137 2 0 0 0
2 2 6Gbps 6Gbps 20537 17322 9 0 0 0
2 3 6Gbps 6Gbps 23629 21767 10 0 0 0
Related reference
system bridge add (page 1264)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system bridge config-dump collect command retrieves a dumpconfiguration file from a system
bridge.
● Parameters
-bridge <text> - Bridge Name
Use this parameter to retrieve a dumpconfiguration file from the specified bridge.
● Examples
The following example retrieves a dumpconfiguration file from bridge ATTO_FibreBridge7500N_1:
cluster1::*>
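A sketch of the invocation described above, using the bridge name from the example:
cluster1::*> system bridge config-dump collect -bridge ATTO_FibreBridge7500N_1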
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system bridge config-dump delete command deletes dumpconfiguration files previously
retrieved with the system bridge config-dump collect command.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to delete a dumpconfiguration file stored on the specified node.
-file <text> - Config File
Use this parameter to delete the dumpconfiguration file with the specified file name.
● Examples
The following example deletes dsbridge_config.FB7500N100001.2017-04-28_14_49_30.txt from node1:
cluster1::*>
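A sketch of the invocation, using the node and file name given in the example description:
cluster1::*> system bridge config-dump delete -node node1 -file dsbridge_config.FB7500N100001.2017-04-28_14_49_30.txt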
Related reference
system bridge config-dump collect (page 1281)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system bridge config-dump show command displays information about all the dumpconfiguration
files previously retrieved with the system bridge config-dump collect command. If no parameters
are specified, the default command displays the following information about the dumpconfiguration files:
• Node
• File Name
• Timestamp
• Bridge
• Bridge Serial Number
To display detailed information about a single dumpconfiguration file, use the -node and -file
parameters.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays information about the dumpconfiguration files stored on the node that matches the specified
node name.
[-file <text>] - Config File
Displays information about the dumpconfiguration files that match the specified file name.
[-bridge <text>] - Bridge Name
Displays information about the dumpconfiguration files from the bridge that matches the specified
bridge name.
[-serial-number <text>] - Serial Number of Bridge
Displays information about the dumpconfiguration files from the bridge that matches the specified
serial number.
[-timestamp <MM/DD/YYYY HH:MM:SS>] - Time of Collection
Displays information about the dumpconfiguration files that were collected at the specified time.
● Examples
The following example displays information about all dumpconfiguration files:
Bridge: ATTO_FibreBridge7500N_1
cluster1::*>
The following example displays detailed information about all dumpconfiguration files:
Node: node1
Bridge Name: ATTO_FibreBridge7500N_1
Filename: dsbridge_config.FB7500N100001.2017-05-01_09_53_53.txt
Timestamp: 5/1/2017 09:53:53
Bridge Serial Number: FB7500N100001
Node: node2
Bridge Name: ATTO_FibreBridge7500N_1
Filename: dsbridge_config.FB7500N100001.2017-04-28_14_48_35.txt
Timestamp: 4/28/2017 14:48:35
Bridge Serial Number: FB7500N100001
Node: node2
Bridge Name: ATTO_FibreBridge7500N_1
Filename: dsbridge_config.FB7500N100001.2017-04-28_15_50_20.txt
Timestamp: 4/28/2017 15:50:20
Bridge Serial Number: FB7500N100001
3 entries were displayed.
cluster1::*>
Related reference
system bridge config-dump collect (page 1281)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system bridge coredump collect command retrieves a core file from a bridge.
● Parameters
-name <text> - Bridge Name
This parameter specifies the bridge name from which the coredump file is to be collected.
● Examples
The following example retrieves a coredump from bridge ATTO_FibreBridge7500N_1:
cluster1::>
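A sketch of the invocation described above, using the bridge name from the example:
cluster1::> system bridge coredump collect -name ATTO_FibreBridge7500N_1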
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system bridge coredump delete command deletes a coredump file previously retrieved with the
system bridge coredump collect command.
● Parameters
-name <text> - Bridge Name
This parameter specifies the name of the bridge that the coredump file belongs to.
-corename <text> - Coredump Filename
This parameter specifies the name of the coredump file to be deleted.
● Examples
The following example deletes coredump file core.FB7500N100018.1970-01-05.17_50_30.mem collected
from bridge ATTO_FibreBridge7500N_1:
cluster1::>
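A sketch of the invocation, using the bridge and coredump file names given in the example description:
cluster1::> system bridge coredump delete -name ATTO_FibreBridge7500N_1 -corename core.FB7500N100018.1970-01-05.17_50_30.mem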
Related reference
system bridge coredump collect (page 1283)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system bridge coredump show command displays information about all the coredump files
previously retrieved with the system bridge coredump collect command. If no parameters are
specified, the default command displays the following information about the coredump files:
• Bridge Name
• Bridge Serial Number
• Coredump Filename
• Located on Node
• Panic Timestamp
• Panic String
To display detailed information about a single coredump file, use the -node and -corename parameters.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-serial-number <text>] - Bridge Serial Number
Use this parameter to select the coredump files from the bridge that matches the specified bridge serial
number.
[-corename <text>] - Coredump Filename
Use this parameter to select the coredump files that match the specified file name.
[-name <text>] - Bridge Name
Use this parameter to select the coredump files from the bridge that matches the specified bridge name.
[-node <nodename>] - Located on Node
Use this parameter to select the coredump files that are located on the specified node.
[-panic-time <MM/DD/YYYY HH:MM:SS>] - Panic Timestamp
Use this parameter to select the coredump files that were collected at the specified time.
[-panic-string <text>] - Panic String
Use this parameter to select the coredump files that match the specified panic string.
● Examples
The following example displays information about all coredump files:
cluster1::>
Related reference
system bridge coredump collect (page 1283)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system bridge firmware update command downloads the firmware onto the bridge. The bridge
needs to be rebooted for the firmware update to occur. The firmware file to be used is specified by the -uri
parameter.
● Parameters
-bridge <text> - Bridge Name
This specifies the bridge whose firmware needs to be updated.
-uri <text> - URI
This parameter specifies the URI from which the firmware file is downloaded onto the bridge.
[-skip [true]] - Skip Checking for Port Path = 2
Use this optional parameter to skip the bridge path check, allowing a firmware file to be downloaded
onto the bridge. Note that doing so might cause multiple device failures.
● Examples
The following example updates the firmware on bridge ATTO_FibreBridge7500N_1.
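A sketch of such an invocation; the firmware URI is a hypothetical placeholder, not a value from this manual:
cluster1::*> system bridge firmware update -bridge ATTO_FibreBridge7500N_1 -uri http://www.example.com/firmware/bridge_firmware.zbd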
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system bridge options modify command alters the value of configurable options which affect all
the ATTO FibreBridges in the cluster configuration.
● Parameters
-option-name <text> - Option Name
The options supported are:
• enable.automatic.addition
Allowable values: {true, false}
The enable.automatic.addition option controls whether or not ATTO FibreBridge 7500N and
FibreBridge 7600N bridges are automatically added for in-band monitoring by the Fabric Health
Monitor.
If set to true, the feature is enabled and all ATTO FibreBridge 7500N and FibreBridge 7600N bridges in
the cluster will automatically be added for in-band monitoring by the Fabric Health Monitor. This is
the preferred and default value.
If set to false, the feature is disabled and ATTO bridges will not automatically be added for monitoring.
This value should only be used if you do not want to monitor the bridges at all, or if you want them to
be monitored via SNMP.
The ATTO FibreBridge 6500N does not have the capability to be monitored by in-band management,
so this option does not apply to the 6500N.
Note that this command is cluster-specific. To affect both clusters of a MetroCluster system, the
command must be executed once on each cluster of the MetroCluster system.
[-option-value <text>] - Option Value
This parameter provides the value for each option. Allowable values for each option are specified in the
option description above.
● Examples
The following example sets the enable.automatic.addition option to true:
siteA::*>
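A sketch of the invocation described above:
siteA::*> system bridge options modify -option-name enable.automatic.addition -option-value true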
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system bridge options show command displays the value of configurable options which apply to all
ATTO FibreBridges in the cluster configuration. If the user specifies the command without parameters, the
output displays the current value of all the configurable options supported by the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-option-name <text>] - Option Name
The options supported are:
• enable.automatic.addition
Allowable values {true, false}.
This option controls whether or not bridges are automatically added for in-band monitoring by the
Fabric Health Monitor.
If set to true, the feature is enabled and all ATTO 7500N and 7600N bridges will automatically be
added for in-band monitoring by the Fabric Health Monitor.
If set to false, the feature is disabled and ATTO FibreBridges will not be automatically added for
monitoring. They can be left unmonitored, or manually added for SNMP monitoring.
Note that this command is cluster-specific. To determine the option setting on both clusters of a
MetroCluster system, the command must be executed once on each cluster of the MetroCluster system.
[-option-value <text>] - Option Value
The option-value parameter specifies the value of the option. Allowable values are described for each
option supported in the list above.
● Examples
The following example displays the current setting of the bridge options:
siteA::*>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system chassis show command displays information about all the chassis in the cluster. By default,
the command displays the following information about all the chassis in the cluster:
• Chassis ID
• Status
• List of nodes in the chassis
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about all the chassis in the cluster.
[-chassis-id <text>] - Chassis ID
Selects information about the specified chassis.
[-member-nodes {<nodename>|local}, ...] - List of Nodes in the Chassis
Selects information about the chassis with the specified member node list.
[-num-nodes <integer>] - Number of Nodes in the Chassis
Selects information about the chassis with the specified number of nodes.
● Examples
The following example displays information about all the chassis in the cluster:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system chassis fru show command displays information about all the major chassis specific FRUs
in the cluster. By default, the command displays the following information about all the FRUs in the cluster:
• Chassis ID
• FRU name
• FRU type
• FRU state
• Nodes sharing the FRU
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about FRUs.
[-node {<nodename>|local}] - Node
Specifies the primary node name in the cluster on which Chassis health monitor is running.
[-serial-number <text>] - FRU Serial Number
Selects information about the FRU with the specified serial number.
[-fru-name <text>] - FRU Name
Selects information about the FRU with the specified FRU name.
[-type {controller|psu|fan|dimm|bootmedia|ioxm|nvram|nvdimm}] - FRU Type
Selects information about all the FRUs with the specified FRU type.
[-name <text>] - FRU ID
Selects information about the FRU with the specified FRU unique name.
[-state <text>] - FRU State
Selects information about all the FRUs with the specified state.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about all the FRUs with the specified status.
[-display-name <text>] - Display Name for the FRU
Selects information about all the FRUs with the specified FRU display name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Monitor Name
Selects information about all the FRUs with the specified monitor name.
[-model <text>] - Model Type
Selects information about all the FRUs with the specified FRU model.
[-shared {shared|not_shared}] - Shared Resource
Selects information about all the FRUs with the specified sharing type.
[-chassis-id <text>] - Chassis ID
Selects information about all the FRUs in the specified chassis.
[-additional-info <text>] - Additional Information About the FRU
Selects information about all the FRUs with the specified additional information.
[-connected-nodes {<nodename>|local}, ...] - List of Nodes Sharing the FRU
Selects information about all the FRUs with the specified node list.
[-num-nodes <integer>] - Number of Nodes Sharing the FRU
Selects information about all the FRUs with the specified number of connected nodes.
● Examples
The following example displays information about all major chassis specific FRUs in the cluster:
Node: node1
FRU Serial Number: XXT122737891
FRU Name: PSU1 FRU
FRU Type: psu
FRU ID: XXT122737891
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup copy command copies a configuration backup from one node in
the cluster to another node in the cluster.
Use the system configuration backup show command to display configuration backups to copy.
● Parameters
-from-node {<nodename>|local} - Source Node
Use this parameter to specify the name of the source node where the configuration backup currently
exists.
-backup <text> - Backup Name
Use this parameter to specify the name of the configuration backup file to copy.
-to-node {<nodename>|local} - Destination Node
Use this parameter to specify the name of the destination node where the configuration backup copy is
created.
● Examples
The following example copies the configuration backup file node1.special.7z from the node node1 to
the node node2.
cluster1::*> system configuration backup copy -from-node node1 -backup node1.special.7z -to-node node2
[Job 295] Job is queued: Copy backup job.
Related reference
system configuration backup show (page 1294)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup create command creates a new configuration backup file.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which to create the backup file.
[-backup-name <text>] - Backup Name
Use this parameter to specify the name of the backup file to create. The backup name cannot contain a
space or any of the following characters: * ? /
[-backup-type {node|cluster}] - Backup Type
Use this parameter to specify the type of backup file to create.
● Examples
The following example creates a new cluster configuration backup file called node1.special.7z on the
node node1.
cluster1::*> system configuration backup create -node node1 -backup-name node1.special.7z -backup-type
cluster
[Job 194] Job is queued: Cluster Backup OnDemand Job.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup delete command deletes a saved configuration backup.
Use the system configuration backup show command to display saved configuration backups.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the source node where the configuration backup currently
exists.
-backup <text> - Backup Name
Use this parameter to specify the name of the configuration backup file to delete.
● Examples
The following example shows how to delete the configuration backup file node1.special.7z from the
node node1.
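A sketch of that invocation, using the node and backup file names from the example description:
cluster1::*> system configuration backup delete -node node1 -backup node1.special.7z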
Related reference
system configuration backup show (page 1294)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup download command copies a configuration backup from a source
URL to a node in the cluster.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node to which the configuration backup is downloaded.
-source <text> - Source URL
Use this parameter to specify the source URL of the configuration backup to download.
[-backup-name <text>] - Backup Name
Use this parameter to specify a new local file name for the downloaded configuration backup.
[-validate-certificate {true|false}] - Validate Digital Certificate
Use this parameter with the value true to validate the digital certificate of the remote server.
● Examples
The following example shows how to download a configuration backup file from a URL to a file named
exampleconfig.download.7z on the node node2.
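A sketch of such an invocation; the source URL is a hypothetical placeholder, not a value from this manual:
cluster1::*> system configuration backup download -node node2 -source http://www.example.com/config/backup.7z -backup-name exampleconfig.download.7z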
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup rename command changes the file name of a configuration backup
file.
Use the system configuration backup show command to display configuration backups to rename.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the source node where the configuration backup currently
exists.
-backup <text> - Backup Name
Use this parameter to specify the name of the configuration backup file to rename.
-new-name <text> - New Name
Use this parameter to specify a new name for the configuration backup file.
● Examples
The following example renames the saved configuration file download.config.7z on the node node1 to
test.config.7z.
cluster1::*> system configuration backup rename -node node1 -backup download.config.7z -new-name test.config.7z
Related reference
system configuration backup show (page 1294)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup show command displays information about saved configuration
backups.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects configuration backups that are saved on the node you specify.
[-backup <text>] - Backup Name
Selects configuration backups that have the backup name you specify.
[-backup-type {node|cluster}] - Backup Type
Selects configuration backups of the type you specify.
[-time <MM/DD HH:MM:SS>] - Backup Creation Time
Selects configuration backups that were saved on the date and time you specify.
[-cluster-name <text>] - Cluster Name
Selects configuration backups that were saved in the cluster that has the name you specify.
[-cluster-uuid <UUID>] - Cluster UUID
Selects configuration backups that were saved in the cluster that has the UUID you specify.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Backup
Selects configuration backups that have the file size you specify.
[-nodes-in-backup {<nodename>|local}, ...] - Nodes In Backup
Selects configuration backups that include the configuration of the nodes you specify.
[-version <text>] - Software Version
Selects configuration backups that have the software version you specify.
[-is-auto {true|false}] - Backup Created from Schedule (true or false)
A value of true selects configuration backups that were created from a schedule. A value of false selects
configuration backups that were created manually.
[-schedule <text>] - Name of Backup Schedule
Selects configuration backups that were created by the schedule you specify.
● Examples
The following example shows typical output for this command.
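A sketch of such output is shown below; the backup names, times, and sizes are illustrative values only, and the exact columns may differ by release:

```
cluster1::*> system configuration backup show
Node       Backup Name                              Time                 Size
---------  ---------------------------------------  -------------------  ------
node1      cluster1.8hour.2021-04-01.10_15_00.7z    04/01 10:15:00       7.78MB
node2      cluster1.daily.2021-04-01.00_10_00.7z    04/01 00:10:01       7.98MB
2 entries were displayed.
```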
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup upload command copies a configuration backup from a node in
the cluster to a remote URL.
NOTE: The web server to which you are uploading the configuration backup file must have PUT operations
enabled for HTTP and POST operations enabled for HTTPS. Some web servers may require the installation of
an additional module. For more information, see your web server's documentation.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node from which the configuration backup is uploaded.
-backup <text> - Backup Name
Use this parameter to specify the file name of the configuration backup to upload.
-destination <text> - Destination URL
Use this parameter to specify the destination URL of the configuration backup.
[-validate-certificate {true|false}] - Validate Digital Certificate
Use this parameter with the value true to validate the digital certificate of the remote server.
● Examples
The following example shows how to upload the configuration backup file testconfig.7z from the node
node2 to a remote URL.
cluster1::*> system configuration backup upload -node node2 -backup testconfig.7z -destination ftp://www.example.com/config/uploads/testconfig.7z
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup settings modify command changes settings for configuration
backup.
● Parameters
[-destination <text>] - Backup Destination URL
Use this parameter to specify the destination URL for uploads of configuration backups. Use the value ""
to remove the destination URL.
[-username <text>] - Username for Destination
Use this parameter to specify the user name to use to log in to the destination system and perform the
upload. Use the system configuration backup settings set-password command to change
the password used with this user name.
[-validate-certificate {true|false}] - Validate Digital Certificate
Use this parameter with the value true to validate the digital certificate of the remote server. Digital
certificate validation is disabled by default.
[-numbackups1 <integer>] - Number of Backups to Keep for Schedule 1
Use this parameter to specify the number of backups created by backup schedule 1 to keep on the
destination system. If the number of backups exceeds this number, the oldest backup is removed.
[-numbackups2 <integer>] - Number of Backups to Keep for Schedule 2
Use this parameter to specify the number of backups created by backup schedule 2 to keep on the
destination system. If the number of backups exceeds this number, the oldest backup is removed.
[-numbackups3 <integer>] - Number of Backups to Keep for Schedule 3
Use this parameter to specify the number of backups created by backup schedule 3 to keep on the
destination system. If the number of backups exceeds this number, the oldest backup is removed.
● Examples
The following example shows how to set the destination URL and user name used for uploads of
configuration backups.
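A command of the following form matches this example; the destination URL and user name are placeholders:

```
cluster1::*> system configuration backup settings modify -destination ftp://www.example.com/config/uploads -username admin
```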
Related reference
system configuration backup settings set-password (page 1296)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup settings set-password command sets the password used for
uploads of configuration backups. This password is used along with the username you specify using the
system configuration backup settings modify command to log in to the system and perform the
upload. Enter the command without parameters. The command prompts you for a password, and for a
confirmation of that password. Enter the same password at both prompts. The password is not displayed.
Use the system configuration backup settings show command to display the destination URL for
configuration backups. Use the system configuration backup settings modify command to
change the destination URL and remote username for configuration backups.
● Examples
The following example shows successful execution of this command.
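A sketch of such a session is shown below; the exact prompt wording may differ:

```
cluster1::*> system configuration backup settings set-password

Enter the password:
Enter the password again:
```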
Related reference
system configuration backup settings modify (page 1295)
system configuration backup settings show (page 1297)
system configuration backup upload (page 1295)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration backup settings show command displays current settings for
configuration backup. These settings apply to backups created automatically by schedules. By default, the
command displays the URL to which configuration backups are uploaded, and the user name on the remote
system used to perform the upload.
Use the system configuration backup settings set-password command to change the password
used with the user name on the destination. Use the system configuration backup settings
modify command to change the destination URL and username for uploads of configuration backups, and
to change the number of backups to keep for each schedule.
● Parameters
[-instance ]
Use this parameter to display detailed information about configuration backup settings, including the
number of backups to keep for each backup schedule.
● Examples
The following example displays basic backup settings information.
The following example shows detailed output using the -instance parameter.
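Sketches of both forms of output are shown below; the URL, user name, and backup counts are illustrative values only:

```
cluster1::*> system configuration backup settings show
Backup Destination URL                          Username for Destination
----------------------------------------------  ------------------------
ftp://www.example.com/config/uploads            admin

cluster1::*> system configuration backup settings show -instance

                      Backup Destination URL: ftp://www.example.com/config/uploads
                    Username for Destination: admin
                Validate Digital Certificate: false
    Number of Backups to Keep for Schedule 1: 2
    Number of Backups to Keep for Schedule 2: 2
    Number of Backups to Keep for Schedule 3: 2
```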
Related reference
system configuration backup settings set-password (page 1296)
system configuration backup settings modify (page 1295)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration recovery cluster modify command modifies the cluster recovery status.
This command should be used to end the cluster recovery after all recovery procedures are applied.
● Parameters
[-recovery-status {complete|in-progress|not-in-recovery}] - Cluster Recovery Status
Use this parameter with the value complete to set the cluster recovery status after the cluster has been
recreated and all of the nodes have been rejoined to it. This enables each node to resume normal
system operations. The in-progress and not-in-recovery values are not applicable to this
command.
● Examples
The following example modifies the cluster recovery status.
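Based on the parameter description above, the command would take the following form:

```
cluster1::*> system configuration recovery cluster modify -recovery-status complete
```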
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration recovery cluster recreate command re-creates a cluster, using either
the current node or a configuration backup as a configuration template. After you re-create the cluster,
rejoin nodes to the cluster using the system configuration recovery cluster rejoin command.
● Parameters
-from {node|backup} - From Node or Backup
Use this parameter with the value node to re-create the cluster using the current node as a
configuration template. Use this parameter with the value backup to re-create the cluster using a
configuration backup as a configuration template.
[-backup <text>] - Backup Name
Use this parameter to specify the name of a configuration backup file to use as a configuration template.
If you specified the -from parameter with the value backup, you must use this parameter and specify a
backup name. Use the system configuration backup show command to view available
configuration backup files.
● Examples
The following example shows how to re-create a cluster using the node node1 as a configuration template.
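For this first example, the command is run on node1 itself, so no backup name is needed:

```
cluster1::*> system configuration recovery cluster recreate -from node
```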
The following example shows how to re-create a cluster using the configuration backup
siteconfig.backup.7z as a configuration template.
cluster1::*> system configuration recovery cluster recreate -from backup -backup siteconfig.backup.7z
Related reference
system configuration backup show (page 1294)
system configuration recovery cluster rejoin (page 1299)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration recovery cluster rejoin command rejoins a node to a new cluster
created earlier using the system configuration recovery cluster recreate command. Only use
this command to recover a node from a disaster. Because this synchronization can overwrite critical cluster
information, and will restart the node you specify, you are required to confirm this command before it
executes.
● Parameters
-node {<nodename>|local} - Node to Rejoin
Use this parameter to specify the node to rejoin to the cluster.
● Examples
This example shows how to rejoin the node node2 to the cluster.
Warning: This command will rejoin node "node2" into the local cluster,
potentially overwriting critical cluster configuration files. This
command should only be used to recover from a disaster. Do not perform
any other recovery operations while this operation is in progress.
This command will cause node "node2" to reboot.
Do you want to continue? {y|n}: y
Related reference
system configuration recovery cluster recreate (page 1298)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration recovery cluster show command displays the cluster recovery status.
Cluster recovery status is "not-in-recovery" under normal operations, and it becomes "in-progress" if a new
cluster is created using the system configuration recovery cluster recreate command with the -
from backup parameter. When cluster recovery status is "in-progress", wait until the output of the "Is
Recovery Status Persisted" field is true before using the system configuration recovery cluster
rejoin command to recover other nodes in the cluster.
● Examples
The following example displays the cluster recovery status.
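A sketch of such output, based on the fields named in the description, is shown below:

```
cluster1::*> system configuration recovery cluster show
 Recovery Status: in-progress
 Is Recovery Status Persisted: false
```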
Related reference
system configuration recovery cluster recreate (page 1298)
system configuration recovery cluster rejoin (page 1299)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration recovery cluster sync command synchronizes a node with the cluster
configuration. Only use this command to recover a node from a disaster. Because this synchronization can
overwrite critical cluster information, and will restart the node you specify, you are required to confirm this
command before it executes.
● Parameters
-node {<nodename>|local} - Node to Synchronize
Use this parameter to specify the name of the node to synchronize with the cluster.
● Examples
The following example shows the synchronization of the node node2 to the cluster configuration.
Warning: This command will synchronize node "node2" with the cluster
configuration, potentially overwriting critical cluster configuration
files on the node. This feature should only be used to recover from a
disaster. Do not perform any other recovery operations while this
operation is in progress. This command will cause all the cluster
applications on node "node2" to restart, interrupting administrative
CLI and Web interface on that node.
Do you want to continue? {y|n}: y
All cluster applications on node "node2" will be restarted. Verify that the cluster applications go online.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system configuration recovery node restore command restores the configuration of the local
node from a configuration backup file.
Use the system configuration backup show command to view available configuration backup files.
● Parameters
-backup <text> - Backup Name
Use this parameter to specify the name of a configuration backup file to use as the configuration
template.
[-nodename-in-backup <text>] - Use Backup Identified by this Nodename
Use this parameter to specify a node within the configuration backup file to use as a configuration
template. Only specify this parameter if you are specifying a name other than the name of the local
node.
[-force [true]] - Force Restore Operation
Use this parameter with the value true to force the restore operation and overwrite the current
configuration of the local node. This overrides all compatibility checks between the node and the
configuration backup. The configuration in the backup is installed even if it is not compatible with the
node's software and hardware.
Use this parameter with the value false to be warned of the specific dangers of restoring and be
prompted for confirmation before executing the command. This value also assures that the command
performs compatibility checks between configuration stored in the backup and the software and
hardware of the node. The default is false.
● Examples
The following example shows how to restore the configuration of the local node from the configuration
backup of node3 that is stored in the configuration backup file example.backup.7z.
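Based on the description, the command would take the following form; the confirmation prompt and output are omitted:

```
cluster1::*> system configuration recovery node restore -backup example.backup.7z -nodename-in-backup node3
```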
Related reference
system configuration backup show (page 1294)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller show command displays information about all the controllers in the cluster.
These commands are available for 80xx, 25xx and later systems. Earlier models are not supported. By
default, the command displays the following information about all the controllers in the cluster:
• Controller name
• System ID
• System serial number
• Controller model name
• Health monitor status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about all the controllers in the cluster.
[-node {<nodename>|local}] - Node
Selects information about the specified controller.
[-system-id <text>] - System ID
Selects information about the controller with the specified System ID.
[-model <text>] - Model Name
Selects information about the controllers with the specified model name.
[-part-number <text>] - Part Number
Selects information about the controllers with the specified part number.
[-revision <text>] - Revision
Selects information about the controllers with the specified revision.
[-serial-number <text>] - Serial Number
Selects information about the controller with the specified system serial number.
[-controller-type <text>] - Controller Type
Selects information about the controllers with the specified controller type.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about the controllers with the specified health monitor status.
[-chassis-id <text>] - Chassis ID
Selects information about the controllers with the specified chassis ID.
● Examples
The following example displays information about all controllers in the cluster.
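A sketch of such output, based on the default fields listed above, is shown below; the names, IDs, and serial numbers are illustrative values only:

```
cluster1::> system controller show
Controller Name  System ID   Serial Number  Model     Status
---------------  ----------  -------------  --------  ------
node1            0536900098  700000123456   FAS8040   ok
node2            0536900101  700000123457   FAS8040   ok
2 entries were displayed.
```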
The following example displays detailed information about the specified controller in the cluster.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller bootmedia show command displays details of the bootmedia devices present
in all the nodes in a cluster. These commands are available for 80xx, 25xx and later systems. Earlier models
are not supported. By default, the command displays the following information about the bootmedia:
• Node name
• Display name
• Vendor ID
• Device ID
• Memory size
• Bootmedia state
• Health monitor status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the bootmedia devices.
[-node {<nodename>|local}] - Node
Selects the bootmedia device that is present on the specified node.
[-serial-num <text>] - Serial Number
Selects the bootmedia devices with the specified serial number.
[-vendor-id <Hex Integer>] - Vendor ID
Selects the bootmedia devices with the specified vendor ID.
[-device-id <Hex Integer>] - Device ID
Selects the bootmedia devices with the specified device ID.
[-display-name <text>] - Display Name
Selects the bootmedia devices with the specified display name.
● Examples
The following example displays the information of the bootmedia devices present in all the nodes in a
cluster:
Size Bootmedia
Node Display Name Vendor ID Device ID (MB) State Status
------------- -------------------- --------- --------- ------- --------- ------
node1 Micron Technology 634 655 1929 good ok
0x655
node2 Micron Technology 634 655 1929 good ok
0x655
The example below displays the detailed information about the bootmedia present in a node.
Node: node1
Vendor ID: 634
Device ID: 655
Display Name: Micron Technology 0x655
Unique Name: Micron Technology 0x655 (ad.0)
Health Monitor Name: controller
USBMON Health Monitor: present
Bootmedia State: good
Max memory size(in MB): 1929
Status: ok
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller bootmedia show-serial-number command displays the Boot Media Device
serial number. These commands are available for 80xx, 25xx and later systems. Earlier models are not
supported. By default, the command displays the following information about the bootmedia:
• Node name
• Display name
• Serial Number
• Size
• Bootmedia state
• Status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the bootmedia devices.
[-node {<nodename>|local}] - Node
Selects the bootmedia device that is present on the specified node.
[-serial-num <text>] - Serial Number
Selects the bootmedia devices with the specified serial number.
[-vendor-id <Hex Integer>] - Vendor ID
Selects the bootmedia devices with the specified vendor ID.
[-device-id <Hex Integer>] - Device ID
Selects the bootmedia devices with the specified device ID.
[-display-name <text>] - Display Name
Selects the bootmedia devices with the specified display name.
[-unique-name <text>] - Unique Name
Selects the bootmedia device with the specified unique name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Health Monitor Name
Selects the bootmedia devices with the specified health monitor.
[-usbmon-status {present|not-present}] - Bootmedia Health Monitor
Selects the bootmedia devices with the specified USBMON status.
[-device-state {good|warn|bad}] - Bootmedia State
Selects the bootmedia devices with the specified device state.
[-size <integer>] - Max Memory Size (MB)
Selects the bootmedia devices with the specified memory size.
[-health {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects the bootmedia devices with the specified health monitor status.
● Examples
The following example displays the information of the bootmedia devices present in all the nodes in a
cluster:
The following example displays the detailed information about the bootmedia present in a node:
Node: node1
Vendor ID: 8086
Device ID: 8d02
Display Name: TOSHIBA THNSNJ060GMCU
Unique Name: /dev/ad4s1 (TOSHIBA THNSNJ060GMCU)
Health Monitor Name: controller
Bootmedia Health Monitor: present
Bootmedia State: good
Max memory size(in MB): 16367
Status: ok
Serial number: Y4IS104FTNEW
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller clus-flap-threshold show command displays the threshold for cluster port
link flapping for all nodes. This threshold is the number of times the cluster port links on a given node
can flap (go down) within a polling period before an alert is triggered.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller config show command displays system configuration information for the
devices present in the controller. To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value.
[-device <text>] - Device
Selects the configuration information that matches the specified device.
[-subslot <integer>] - Subslot Number
Selects the configuration information that matches the specified subslot.
[-info <text>] - Device Info
Selects the configuration information that matches the specified device information.
[-slot <text>] - Slot
Selects the configuration information that matches the specified slot.
● Examples
The following example displays configuration information for slot 1 of the controller:
Node: node1
Sub- Device/
Slot slot Information
---- ---- --------------------------------------------------------------------
1 - NVRAM10 HSL
Device Name: Interconnect HBA: Generic OFED Provider
Port Name: ib1a
Default GID: fe80:0000:0000:0000:0000:0000:0000:0104
Base LID: 0x104
Active MTU: 8192
Data Rate: 0 Gb/s (8X)
Link State: DOWN
QSFP Vendor: Amphenol
QSFP Part Number: 112-00436+A0
QSFP Type: Passive Copper 1m ID:00
QSFP Serial Number: APF16130066875
QSFP Vendor: Amphenol
QSFP Part Number: 112-00436+A0
QSFP Type: Passive Copper 1m ID:00
QSFP Serial Number: APF16130066857
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller config show-errors command displays configuration errors. By default, the
command displays the following information:
• Node
• Description
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the PCI devices.
[-node {<nodename>|local}] - Node
Displays configuration errors on the specified node.
[-verbose [true]] - Verbose Output?
The -verbose parameter enables verbose mode, resulting in the display of more detailed output.
[-description <text>] - Error Description
Displays the node with the specified configuration error.
● Examples
The example below displays configuration errors on all the nodes in the cluster.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller config pci show-add-on-devices command displays information about
the PCIe devices in I/O expansion slots. The command displays the following information about the PCIe
devices:
• Node
• Model
• Type
• Slot
• Device
• Vendor
• Sub-device ID
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about PCI devices.
[-node {<nodename>|local}] - Node
Selects the PCI devices that are present in the specified node.
[-model <text>] - Model String
Selects the PCI devices that are present on the system with the specified model name.
[-type <integer>] - Device Type
Selects the PCI devices with the specified device type.
[-slot-and-sub <text>] - PCI Slot Number
Selects the PCI devices present in the specified slot or slot-subslot combination.
[-device <text>] - Device
Selects the PCI devices with the specified device ID.
[-vendor <text>] - Vendor Number
Selects the PCI devices with the specified vendor ID.
[-sub-device-id <integer>] - Sub Device ID
Selects the PCI devices with the specified sub-device ID.
● Examples
The example below displays information about PCI devices found in I/O expansion slots of all the nodes in
the cluster.
15 1 0x10FB 0x8086 0
13 1 0x150E 0x8086 1
7 1 0x1528 0x8086 0
cluster1-02 FAS6240
6 7 0x2532 0x1077 10
5 1 0x1527 0x8086 0
2 7 0x6732 0x15B3 0
3 1 0x8030 0x1077 0
1 2 0x8001 0x11F8 0
15 1 0x10FB 0x8086 0
13 1 0x150E 0x8086 1
7 1 0x1528 0x8086 0
16 entries were displayed.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller config pci show-hierarchy command displays the PCI hierarchy of all PCI
devices found in a controller. The command displays the following information about the PCI devices:
• Node
• Level
• Device
• Link Capability
• Link Status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for PCI devices.
[-node {<nodename>|local}] - Node
Displays the PCI hierarchy of the specified node.
[-level <integer>] - PCI Device Level
Displays the PCI devices that match the specified level within the PCI hierarchy.
[-pci-device <text>] - PCI Device
Displays the PCI devices that match the specified device description.
[-link-cap <text>] - Link Capability
Displays the PCI devices that match the specified link capability.
[-link-status <text>] - Link Status
Displays the PCI devices that match the specified link status.
● Examples
The example below displays the PCI hierarchy for all of the nodes in the cluster.
Node: cluster1-01
Node: cluster1-02
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller coredump-device show command displays details of the coredump devices
present in all the nodes in a cluster. This command is only applicable to AFF A700 systems. By default, the
command displays the following information about the coredump device:
• Node name
• Display name
• Vendor ID
• Device ID
• Memory size
• Device state
• Health monitor status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for the coredump device.
[-node {<nodename>|local}] - Node
Selects the coredump device that is present on the specified node.
[-serial-num <text>] - Serial Number
Selects the coredump device with the specified serial number.
[-vendor-id <Hex Integer>] - Vendor ID
Selects the coredump device with the specified vendor ID.
[-device-id <Hex Integer>] - Device ID
Selects the coredump device with the specified device ID.
[-display-name <text>] - Display Name
Selects the coredump device with the specified display name.
[-unique-name <text>] - Unique Name
Selects the coredump device with the specified unique name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Health Monitor Name
Selects the coredump device with the specified health monitor.
[-health-status {present|not-present}] - Health Status
Selects the coredump device with the specified health status.
[-device-state {good|warn|bad}] - Device State
Selects the coredump device with the specified device state.
[-size <integer>] - Max Memory Size (GB)
Selects the coredump device with the specified memory size.
[-health {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects the coredump device with the specified health monitor status.
● Examples
The following example displays the information of the coredump devices present in all the nodes in a
cluster:
Size Device
Node FRU/Display Name Vendor ID Device ID (GB) State Status
------------- -------------------- --------- --------- ------- --------- ------
cluster1-01
Coredump/X9170A 1b85 4018 1024 good ok
cluster1-02
Coredump/X9170A 1b85 4018 1024 good ok
2 entries were displayed.
The example below displays the detailed information about the coredump device present in a node.
Node: cluster1-01
Serial Number: A22P5061550000644
Vendor ID: 1b85
Device ID: 4018
Display Name: Coredump/X9170A
Unique Name: 0X91701190020741OCZ000Z63000001T0003500
Health Monitor Name: controller
Health Status: present
Device State: good
Max Memory Size (GB): 1024
Status: ok
Shelf Connected Status: present
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller coredump-device show-serial-number command displays the serial
number of coredump devices. This command is only applicable to AFF A700 systems. By default, the
command displays the following information about the coredump device:
• Node name
• Display name
• Serial number
• Size
• Device state
• Status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for the coredump device.
[-node {<nodename>|local}] - Node
Selects the coredump device that is present on the specified node.
[-serial-num <text>] - Serial Number
Selects the coredump device with the specified serial number.
[-vendor-id <Hex Integer>] - Vendor ID
Selects the coredump device with the specified vendor ID.
[-device-id <Hex Integer>] - Device ID
Selects the coredump device with the specified device ID.
[-display-name <text>] - Display Name
Selects the coredump device with the specified display name.
[-unique-name <text>] - Unique Name
Selects the coredump device with the specified unique name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Health Monitor Name
Selects the coredump device with the specified health monitor.
[-health-status {present|not-present}] - Health Status
Selects the coredump device with the specified health status.
[-device-state {good|warn|bad}] - Device State
Selects the coredump device with the specified device state.
[-size <integer>] - Max Memory Size (GB)
Selects the coredump device with the specified memory size.
[-health {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects the coredump device with the specified health monitor status.
[-shelf-status {present|not-present}] - Shelf Connected Status
Selects the coredump device with the specified shelf connected status.
● Examples
The following example displays information about the coredump devices present on all the nodes in a cluster:
Size Device
Node FRU/Display Name Serial Number (GB) State Status
------------- -------------------- ------------------- ------- --------- ------
cluster1-01
Coredump/X9170A A22P5061550000644 1024 good ok
cluster1-02
Coredump/X9170A A22P5061550000137 1024 good ok
2 entries were displayed.
The following example displays the detailed information about the coredump device present in a node:
Node: cluster1-01
Serial Number: A22P5061550000644
Vendor ID: 1b85
Device ID: 4018
Display Name: Coredump/X9170A
Unique Name: 0X91701190020741OCZ000Z63000001T0003500
Health Monitor Name: controller
Health Status: present
Device State: good
Max Memory Size (GB): 1024
Status: ok
Shelf Connected Status: present
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller environment show command displays information about all environment FRUs in the
cluster. These commands are available for 80xx, 25xx and later systems. Earlier models are not supported.
By default, the command displays the following information about the environment FRUs in the cluster:
• Node
• FRU name
• FRU state
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about the environment FRUs.
[-node {<nodename>|local}] - Node
Selects information about all the environment FRUs that the specified node owns.
[-serial-number <text>] - FRU Serial Number
Selects information about all the environment FRUs with the specified serial number.
[-fru-name <text>] - FRU Name
Selects information about the environment FRU with the specified FRU name.
[-type {controller|psu|fan|dimm|bootmedia|ioxm|nvram|nvdimm}] - FRU Type
Selects information about all the environment FRUs with the specified FRU type.
[-name <text>] - Name
Selects information about all the environment FRUs with the specified unique name.
[-state <text>] - FRU State
Selects information about all the environment FRUs with the specified FRU state.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about all the environment FRUs with the specified health monitor status.
● Examples
The following example displays information about all major environment FRUs in the cluster:
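A representative invocation (the cluster prompt is illustrative) is:
cluster1::> system controller environment show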
The following example displays detailed information about a specific environment FRU:
cluster1::> system controller environment show -node node1 -fru-name "PSU1 FRU" -instance
Node: node1
FRU Serial Number: XXT122737891
FRU Name: PSU1 FRU
FRU Type: psu
Name: XXT122737891
FRU State: GOOD
Status: ok
Display Name for the FRU: PSU1 FRU
Monitor Name: controller
Model Type: none
Shared Resource: shared
Chassis ID: 4591227214
Additional Information About the FRU: Part Number: 114-00065+A0
Revision: 020F
Manufacturer: NetApp
FRU Name: PSU
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller flash-cache show command displays the current Flash Cache device
information.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, only status information for the matching node is displayed.
[-device-id <integer>] - Device ID
If this parameter is specified, only status information for the matching device-id is displayed.
[-slot <integer>] - Slot
If this parameter is specified, only status information for the matching slot is displayed. Slot can have a
format of slot, slot-subslot, or domain:bus:slot.
[-subslot <integer>] - Subslot
If this parameter is specified, only status information for the matching subslot is displayed.
[-domain <integer>] - Domain
If this parameter is specified, only status information for the matching PCI domain is displayed.
[-bus <integer>] - Bus
If this parameter is specified, only status information for the matching PCI bus is displayed.
[-slot-string <text>] - Slot String
If this parameter is specified, only status information for the matching slot is displayed. Format can be
slot or slot-subslot.
[-device-state {ok|erasing|erased|failed|removed|online|offline_failed|degraded|
offline_threshold}] - Device State
If this parameter is specified, only status information for the matching device-state is displayed.
[-model-number <text>] - Model Number
If this parameter is specified, only status information for the matching model-number is displayed.
[-part-number <text>] - Part Number
If this parameter is specified, only status information for the matching part-number is displayed.
[-serial-number <text>] - Serial Number
If this parameter is specified, only status information for the matching serial-number is displayed.
[-firmware-version <text>] - Firmware Version
If this parameter is specified, only status information for the matching firmware-version is displayed.
[-firmware-file <text>] - Firmware Filename
If this parameter is specified, firmware-file is the base name of the firmware file located in disk_fw to
update the device firmware.
[-hardware-revision <text>] - Hardware Revision
If this parameter is specified, only status information for the matching hardware-revision is displayed.
● Examples
The following example displays the current state of all Flash Cache devices:
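A representative invocation (the cluster prompt is illustrative) is:
cluster1::> system controller flash-cache show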
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller flash-cache secure-erase run command securely erases the given Flash
Cache device.
● Parameters
-node {<nodename>|local} - Node
Selects the node of the specified Flash Cache devices.
-device-id <integer> - Device Id
Selects the device-id of the specified Flash Cache devices.
● Examples
The following example securely erases the selected Flash Cache device:
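As a sketch (the node name and device ID shown are illustrative values):
cluster1::> system controller flash-cache secure-erase run -node node1 -device-id 0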
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller flash-cache secure-erase show command displays the current Flash Cache
device secure-erase status.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, only status information for the matching node is displayed.
[-device-id <integer>] - Device Id
If this parameter is specified, only status information for the matching device-id is displayed.
[-device-state {ok|erasing|erased|failed|removed}] - Device State
If this parameter is specified, only status information for the matching device-state is displayed.
● Examples
The following example displays the current state of all the Flash Cache devices:
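A representative invocation (prompt illustrative) is:
cluster1::> system controller flash-cache secure-erase show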
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller fru show command displays information about all the controller-specific Field
Replaceable Units (FRUs) in the cluster. These commands are available for 80xx, 25xx and later systems.
Earlier models are not supported. By default, the command displays the following information about all the
FRUs in the cluster:
• Node
• FRU name
• Health monitor subsystem
• Health monitor status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about the controller specific FRUs in the cluster.
[-node {<nodename>|local}] - Node
Selects information about the FRUs in the specified node.
[-subsystem <Subsystem>] - Subsystem
Selects information about the FRUs of the specified subsystem.
[-serial-number <text>] - FRU Serial Number
Selects information about the FRU with the specified serial number.
[-fru-name <text>] - Name of the FRU
Selects information about the FRU with the specified FRU name.
[-type {controller|psu|fan|dimm|bootmedia|ioxm|nvram|nvdimm}] - FRU Type
Selects information about the FRU with the specified FRU type.
[-name <text>] - FRU Name
Selects information about the FRU with the specified unique name.
[-state <text>] - FRU State
Selects information about the FRU with the specified state.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about the FRU with the specified health monitor status.
[-display-name <text>] - Display Name for the Fru
Selects information about the FRU with the specified display name.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Monitor Name
Selects information about the FRU with the specified health monitor type.
[-model <text>] - Model Type
Selects information about the FRU with the specified model.
[-chassis-id <text>] - Chassis ID
Selects information about the FRU with the specified chassis ID.
[-location <text>] - Location of the FRU
Selects information about the FRU with the specified FRU location.
[-additional-info <text>] - Additional Information About the FRU
Selects information about the FRU with the specified additional information.
● Examples
The example below displays information about all controller-specific FRUs in the cluster.
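A representative invocation (the cluster prompt is illustrative) is:
cluster1::> system controller fru show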
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller fru show-manufacturing-info command displays manufacturing
information for field replaceable units (FRUs) installed in the system. The information includes FRU-
description, serial number, part number, and revision number. To display more details, use the -instance
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays detailed information about the installed FRUs in the system.
[-node {<nodename>|local}] - Node
Selects a specific node's installed FRUs.
[-system-sn <text>] - System Serial Number
Selects information about installed FRUs with the specified system serial number.
[-model-name <text>] - Model Name
Selects information about installed FRUs with the specified model name.
[-system-id <text>] - System ID
Selects information about installed FRUs with the specified system ID.
[-kernel-version <text>] - Kernel Version
Selects information about installed FRUs with the specified kernel version.
[-firmware-release <text>] - Firmware Release
Selects information about installed FRUs with the specified firmware release.
[-description <text>] - FRU Description
Selects information about installed FRUs with the specified FRU description.
[-fru-subtype <text>] - FRU Sub-type
Selects information about the FRU with the specified FRU subtype.
[-serial-number <text>] - FRU Serial Number
Selects information about the FRU with the specified serial number.
[-part-number <text>] - FRU Part Number
Selects information about the FRU with the specified part number.
[-revision <text>] - FRU Revision of Part Number
Selects information about the FRU with the specified revision.
[-manufacturer <text>] - FRU Manufacturer
Selects information about the FRU with the specified manufacturer.
[-manufacture-date <text>] - FRU Manufacturing Date
Selects information about the FRU with the specified manufacture date.
[-product-id <text>] - FRU Product Identifier
Selects information about the FRU with the specified product ID.
[-firmware-version <text>] - FRU Firmware Version
Selects information about the FRU with the specified firmware version.
● Examples
The following example displays all installed FRUs in the system:
Node: platsw-lodi-1-01
System Serial Number: 791541000047
Model Name: FAS9040
System ID: 0537024373
Firmware release: 10.0X18
Kernel Version: NetApp Release sysMman_3887886_1608151712: Mon Aug 15
15:54:00 PDT 2016
FRU Description FRU Serial Number FRU Part Number FRU Rev.
------------------------ ------------------------ ------------------ ----------
Mother Board 031537000390 111-02419 40
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller fru led disable-all command turns off all the controller and IOXM FRU
fault LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by
a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault
LEDs. In addition, there is a summary FRU fault LED on the external face-plate of both the controller and
IOXM; labeled with a "!". A summary fault LED will be on when any of the internal FRU fault LEDs are on.
Only the controller and IOXM internal FRU fault LEDs can be controlled by the end-user. The summary fault
LEDs are turned on and off based on the simple policy described above. If you want to turn off the summary
fault LED, you must turn off all internal FRU fault LEDs.
All FRU fault LEDs are amber in color. However, not all amber LEDs in the system are FRU fault LEDs.
Externally visible fault LEDs are labeled with a "!" and internal FRU fault LEDs remain on, even when the
controller or IOXM is removed from the chassis. In addition, internal FRU fault LEDs will remain on until
explicitly turned off by the end-user, even after a FRU has been replaced.
FRUs are identified by a FRU ID and slot tuple. FRU IDs include: DIMMs, cards in PCI slots, boot media
devices, NV batteries and coin cell batteries. For each FRU ID, the FRUs are numbered 1 through N, where N
is the number of FRUs of that particular type that exist in the controller or IOXM. Both controller and IOXM
have a FRU map label for use in physically locating internal FRUs. The FRU ID/slot tuple used by the system
controller fru led show command matches that specified on the FRU map label.
● Examples
Turn off all FRU fault LEDs.
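As a sketch (the advanced-privilege prompt shown is illustrative):
cluster1::*> system controller fru led disable-all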
Related reference
system controller fru led modify (page 1326)
system controller fru led show (page 1327)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller fru led enable-all command turns on all the controller and IOXM FRU fault
LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by
a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault
LEDs. In addition, there is a summary FRU fault LED on the external face-plate of both the controller and
IOXM; labeled with a "!". A summary fault LED will be on when any of the internal FRU fault LEDs are on.
Only the controller and IOXM internal FRU fault LEDs can be controlled by the end-user. The summary fault
LEDs are turned on and off based on the simple policy described above. If you want to turn off the summary
fault LED, you must turn off all internal FRU fault LEDs.
All FRU fault LEDs are amber in color. However, not all amber LEDs in the system are FRU fault LEDs.
Externally visible fault LEDs are labeled with a "!" and internal FRU fault LEDs remain on, even when the
controller or IOXM is removed from the chassis. In addition, internal FRU fault LEDs will remain on until
explicitly turned off by the end-user, even after a FRU has been replaced.
FRUs are identified by a FRU ID and slot tuple. FRU IDs include: DIMMs, cards in PCI slots, boot media
devices, NV batteries and coin cell batteries. For each FRU ID, the FRUs are numbered 1 through N, where N
is the number of FRUs of that particular type that exist in the controller or IOXM. Both controller and IOXM
have a FRU map label for use in physically locating internal FRUs. The FRU ID/slot tuple used by the system
controller fru led show command matches that specified on the FRU map label.
● Examples
Turn on all FRU fault LEDs.
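As a sketch (the advanced-privilege prompt shown is illustrative):
cluster1::*> system controller fru led enable-all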
Related reference
system controller fru led modify (page 1326)
system controller fru led show (page 1327)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller fru led modify command modifies the current state of the controller and
IOXM FRU fault LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by
a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault
LEDs. In addition, there is a summary FRU fault LED on the external face-plate of both the controller and
IOXM; labeled with a "!". A summary fault LED will be on when any of the internal FRU fault LEDs are on.
Only the controller and IOXM internal FRU fault LEDs can be controlled by the end-user. The summary fault
LEDs are turned on and off based on the simple policy described above. If you want to turn off the summary
fault LED, you must turn off all internal FRU fault LEDs.
All FRU fault LEDs are amber in color. However, not all amber LEDs in the system are FRU fault LEDs.
Externally visible fault LEDs are labeled with a "!" and internal FRU fault LEDs remain on, even when the
controller or IOXM is removed from the chassis. In addition, internal FRU fault LEDs will remain on until
explicitly turned off by the end-user, even after a FRU has been replaced.
FRUs are identified by a FRU ID and slot tuple. FRU IDs include: DIMMs, cards in PCI slots, boot media
devices, NV batteries and coin cell batteries. For each FRU ID, the FRUs are numbered 1 through N, where N
is the number of FRUs of that particular type that exist in the controller or IOXM. Both controller and IOXM
have a FRU map label for use in physically locating internal FRUs. The FRU ID/slot tuple used by the system
controller fru led show command matches that specified on the FRU map label.
● Parameters
-node {<nodename>|local} - Node
Selects FRU fault LEDs on the specified nodes.
● Examples
Turn off DIMM 3's FRU fault LED.
cluster1::*> system controller fru led modify -node node1 -fru-id dimm -fru-slot 3 -fru-state off
cluster1::*> system controller fru led modify -node node1 -fru-id pci -fru-slot * -fru-state on
Related reference
system controller fru led show (page 1327)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller fru led show command displays information about the current state of the
controller and IOXM FRU fault LEDs.
A FRU (Field Replaceable Unit) is any piece of the system that is designed to be easily and safely replaced by
a field technician.
Both the controller and IOXM FRUs have a number of internal FRUs for which there are corresponding fault
LEDs. In addition, there is a summary FRU fault LED on the external face-plate of both the controller and
IOXM; labeled with a "!". A summary fault LED will be on when any of the internal FRU fault LEDs are on.
All FRU fault LEDs are amber in color. However, not all amber LEDs in the system are FRU fault LEDs.
Externally visible fault LEDs are labeled with a "!" and internal FRU fault LEDs remain on, even when the
controller or IOXM is removed from the chassis.
FRUs are identified by a FRU ID and slot tuple. FRU IDs include: DIMMs, cards in PCI slots, boot media
devices, NV batteries and coin cell batteries. For each FRU ID, the FRUs are numbered 1 through N, where N
is the number of FRUs of that particular type that exist in the controller or IOXM. Both controller and IOXM
have a FRU map label for use in physically locating internal FRUs. The FRU ID/slot tuple used by the system
controller fru led show command matches that specified on the FRU map label.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects FRU fault LEDs on the specified nodes.
[-fru-id <FRU LED key>] - FRU ID
Selects the FRU fault LEDs that match the specified FRU type.
[-fru-slot <integer>] - FRU Slot
Selects the FRU fault LEDs that match the specified slot.
[-fru-bay <text>] - FRU Bay
Selects the FRU fault LEDs that match the specified bay.
[-fru-state {on|off|unknown}] - FRU State
Selects the FRU fault LEDs that match the specified status.
[-lit-by <text>] - Lit By
Selects the FRU fault LEDs that were lit by the specified source.
● Examples
List the current state of all FRU fault LEDs.
cluster1::*> system controller fru led show -node host1 -fru-id controller -fru-slot 1
Node FRU Type Bay Slot State Lit By
--------------------- ----------- --- ---- ------- -------
host1
controller A 1 off -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller hardware-license show command displays information about the controller's
hardware license. The command displays the following information:
• Status
• Is Valid?
• Is Permanent?
• Expiration Date
● Examples
The following example displays hardware license status:
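A representative invocation (prompt illustrative) is:
cluster1::> system controller hardware-license show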
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller hardware-license update command downloads the hardware license from a
specified location and updates the license file. By default, the license is authenticated and verified
against the system serial number and model string of the controller before the hardware license is
updated. The update does not proceed if authentication fails, the system serial number does not
match, or the model string does not match.
● Parameters
-package <text> - Package URL
This specifies the location from which the license file is to be downloaded. The location can be
specified in any of the following ways:
• As an HTTP URL in the form http://host_name[:port]/path_to_file. For instance, http://example.com/
downloads/license.json. The management utility prompts you for a user name and password before
beginning the download.
Note
If you use HTTP to transfer the license file, be aware that the management utility does not
check whether the Web server is password protected; if it is not, press Enter at the prompt for
user name and password.
Note
The HTTPS URL scheme requires that you install the HTTPS server certificate on the system by
using the command "security certificate install -type server-ca".
● Examples
The following example downloads the license file and installs it into the FRU ROM:
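A representative invocation, reusing the URL form described above (the host and file names are illustrative), is:
cluster1::*> system controller hardware-license update -package http://example.com/downloads/license.json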
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller ioxm show command displays the details of the IO expansion modules (IOXMs)
that are connected to the nodes in a cluster. These commands are available for 80xx, 25xx and later
systems. Earlier models are not supported. By default, the command displays the following information
about the IOXMs:
• Node name
• Display name
• Is IOXM present?
• Power status
• Health monitor status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all the IOXMs.
[-node {<nodename>|local}] - Node
Selects the IOXM that is connected to the specified node.
[-chassis-config {c-i|c-c|c-b}] - Controller-IOXM or Controller-Controller or Controller-Blank
Selects the IOXMs with the specified chassis configuration.
[-is-present {present|not-present}] - IOXM Presence
Selects the IOXMs that are connected and detected (present) or connected but not detected (not-
present).
[-power {good|bad}] - Power to IOXM
Selects the IOXMs with the specified power state.
[-display-name <text>] - Display Name
Selects the IOXMs with the specified display name.
● Examples
The example below displays information about all the IOXMs that are connected to the nodes in a cluster.
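A representative invocation (prompt illustrative) is:
cluster1::> system controller ioxm show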
The example below displays detailed information about an IOXM that is connected to a node.
Node: node1
Controller-IOXM or Controller-Controller or Controller-Blank: c-i
IOXM Presence: present
Power to IOXM: good
Display Name: node1/IOXM
Unique Name: 8006459930
Health Monitor Name: controller
IOXM Health: ok
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller location-led modify command modifies the current state of the location
LED. When lit, the location LED can help you find the controller in the data center.
There is a blue location LED on every controller and on the front of the chassis. When you turn on the
location LED for either controller, the chassis location LED automatically turns on. When both controller
location LEDs are off, the chassis location LED automatically turns off.
After the location LED is turned on, it stays illuminated for 30 minutes and then automatically shuts off.
● Parameters
-node {<nodename>|local} - Node
Selects the location LED on the specified node.
[-state {on|off}] - LED State
Modifies the state of the location LED on the node.
● Examples
The following example turns on the location LED:
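As a sketch (the node name is an illustrative value):
cluster1::> system controller location-led modify -node node1 -state on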
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller location-led show command shows the current state of the location LED.
When lit, the location LED can help you find the controller in the data center.
There is a blue location LED on every controller and on the front of the chassis. When you turn on the
location LED for either controller, the chassis location LED automatically turns on. When both controller
location LEDs are off, the chassis location LED automatically turns off.
After the location LED is turned on, it stays illuminated for 30 minutes and then automatically shuts off.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the location LED on the specified node.
[-state {on|off}] - LED State
Displays the location LED's status.
● Examples
The following example lists the current state of the location LED:
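A representative invocation (prompt illustrative) is:
cluster1::> system controller location-led show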
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller memory dimm show command displays information about the DIMMs in all the
nodes in the cluster. These commands are available for 80xx, 25xx and later systems. Earlier models are not
supported. By default, the command displays the following information about all the DIMMs in the cluster:
• Node
• DIMM name
• Uncorrectable ECC error count
• Correctable ECC error count
• CECC Alert Method
• CPU socket
• Channel
• Slot number
• Health monitor status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information about the DIMMs in all the controllers in the cluster.
[-node {<nodename>|local}] - Node
Selects information about the DIMMs in the specified node.
[-pds-id <integer>] - DIMM ID
Selects information about the DIMMs with the specified DIMM ID.
[-slotname <text>] - Slot Name
Selects information about the DIMMs with the specified slot name.
[-socket <integer>] - CPU Socket
Selects information about the DIMMs with the specified socket ID.
[-channel <integer>] - Channel
Selects information about the DIMMs with the specified channel number.
[-slot-no <integer>] - Slot Number on a Channel
Selects information about the DIMMs with the specified slot number.
[-serial <text>] - Serial Number
Selects information about the DIMMs with the specified serial number.
[-part-no <text>] - Part Number
Selects information about the DIMMs with the specified part number.
[-cecc-count <integer>] - Correctable ECC Error Count
Selects information about the DIMMs with the specified correctable ECC error count.
[-uecc-count <integer>] - Uncorrectable ECC Error Count
Selects information about the DIMMs with the specified uncorrectable ECC error count.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Health Monitor Name
Selects information about the DIMMs with the specified health monitor.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects information about the DIMMs with the specified health monitor status.
[-name <text>] - Unique Name of DIMM
Selects information about the DIMMs with the specified unique name.
[-display-name <text>] - Display Name for the DIMM
Selects information about the DIMMs with the specified display name.
[-alt-cecc-method <text>] - CECC Alert Method
Selects information about the DIMMs with the specified CECC error alert method.
[-alt-cecc-dimm {true|false}] - Replace DIMM
Selects information about the DIMMs with the specified replace DIMM value.
● Examples
The example below displays information about the DIMMs in all the nodes in the cluster.
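A representative invocation (the cluster prompt is illustrative) is:
cluster1::> system controller memory dimm show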
The example below displays detailed information about a specific DIMM in a specific controller.
cluster1::> system controller memory dimm show -instance -node node1 -pds-id 1
Node: node1
DIMM ID: 1
Slot Name: DIMM-1
CPU Socket: 0
Channel: 0
Slot Number on a Channel: 0
Serial Number: AD-01-1306-2EA01E9A
Part Number: HMT82GV7MMR4A-H9
Correctable ECC Error Count: 0
Uncorrectable ECC Error Count: 0
Health Monitor Name: controller
Status: ok
Unique Name of DIMM: DIMM-1
Display Name for the DIMM: DIMM-1
CECC Alert Method: bucket
Replace DIMM: false
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller nvram-bb-threshold show command displays the threshold for the NVRAM
bad block counts for a node.
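● Examples
A representative invocation (the cluster prompt is illustrative) is:
cluster1::> system controller nvram-bb-threshold show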
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller pci show command displays details of the PCI devices present in all of the
nodes in a cluster. These commands are available for 80xx, 25xx and later systems. Earlier models are not
supported. By default, the command displays the following information about the PCI devices:
• Node name
• Display name
• Correctable error count
• Functional link width
• Functional link speed
• Health monitor status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed information for all of the PCI devices.
[-node {<nodename>|local}] - Node
Selects the PCI devices that are present in the specified node.
[-bus-number <integer>] - Bus Number
Selects the PCI devices with the specified bus number.
[-device-number <integer>] - Device Number
Selects the PCI devices with the specified device number.
[-function-number <integer>] - Function Number
Selects the PCI devices with the specified function number.
[-slot-number <integer>] - Slot Info
Selects the PCI devices with the specified slot number.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Health Monitor Name
Selects the PCI devices monitored by the specified health monitor.
[-vendor-id <Hex Integer>] - Vendor ID
Selects the PCI devices with the specified vendor ID.
[-device-id <Hex Integer>] - Device ID
Selects the PCI devices with the specified device ID.
● Examples
The example below displays the information about the PCIe devices present in all of the nodes in the cluster.
The example below displays detailed information about a PCIe device in a node.
Node: cluster1-01
Bus Number: 1
Device Number: 0
Function Number: 0
Slot Info: 0
Health Monitor Name: controller
Vendor ID: 11f8
Device ID: 8001
Physical Link Width: 4
Functional Link Width: 4
Physical Link Speed(GT/s): 5GT/s
Functional Link Speed(GT/s): 5GT/s
Unique Name: ontap0@pci0:1:0:0
Correctable Error Count: 0
Status: ok
Display Name: Ontap PCI Device 0
Correctable Error Difference: 0
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller pcicerr threshold modify command modifies node-wide PCIe correctable
error threshold counts in the cluster.
● Parameters
[-pcie-cerr-threshold <integer>] - Corr. Error Limit
The PCIe error threshold count that would trigger an alert if exceeded.
[-nvram-bb-threshold <integer>] - NVRAM Bad Block limit
The NVRAM bad block threshold count that would trigger an alert if exceeded.
● Examples
The example below displays the information about setting node-wide PCIe error threshold count in the
cluster:
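The original console output was not preserved; the following invocation is an illustrative sketch, with the node name and threshold value as placeholders:
cluster1::*> system controller pcicerr threshold modify -node node1 -pcie-cerr-threshold 1000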
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller pcicerr threshold show command displays information about node-wide
PCIe correctable error threshold counts in the cluster.
● Examples
The example below displays the information about node-wide PCIe error threshold count in the cluster:
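The original transcript was not preserved; the sketch below shows a hypothetical invocation, with the column layout and values as illustrative placeholders based on the parameter labels above:
cluster1::> system controller pcicerr threshold show
Node        Corr. Error Limit
----------- -----------------
node1                    1000
node2                    1000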
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller platform-capability show command displays information about all
platform capabilities for each controller in the cluster. By default, the command displays the following
information about all controllers in the cluster:
• Controller Name
• Capability ID
• Capability Supported?
• Capability Name
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays detailed information about all controllers in the cluster.
[-node {<nodename>|local}] - Node
Selects information about the specified controller.
[-capability-id <integer>] - Capability ID
Selects the desired capability ID.
[-supported <text>] - Supported?
Selects the desired capability support state (true or false).
[-name <text>] - Capability Name
Selects the desired capability name.
● Examples
The following example displays platform capability information for the controller:
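The original output was not preserved; an illustrative invocation might look like the following (node name is a placeholder):
cluster1::> system controller platform-capability show -node node1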
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller replace cancel command is used to cancel a controller replacement that is in
a paused state (paused-on-request, paused-on-error, or paused-for-intervention). The replacement cannot
be canceled if it is not in a paused state.
● Examples
The following example displays a cancel operation:
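The original transcript was not preserved; a minimal sketch of the invocation:
cluster1::*> system controller replace cancel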
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller replace pause command is used to pause a currently running replacement.
The operation pauses at the next predefined update stage (for example, after the current task finishes, it
pauses before the next restartable task), which might take some time to reach. When the update reaches
the pause point, it transitions into the paused-on-request state.
● Examples
The following example displays a pause operation:
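The original transcript was not preserved; a minimal sketch of the invocation:
cluster1::*> system controller replace pause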
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller replace resume command is used to resume an update that is currently in one
of paused-on-request, paused-on-error or paused-for-intervention states. If the update is not paused then
an error is returned.
● Examples
The following example shows a resume operation:
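The original transcript was not preserved; a minimal sketch of the invocation:
cluster1::*> system controller replace resume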
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller replace show command displays overall information about the currently
running, or previously run controller replacement operation. The command displays the following
information:
• Operation Status
• Error message
• Recommended action
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example displays information about automated nondisruptive operation:
Step Details:
--------------------------------------------
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller replace show-details command displays detailed information about the
currently running and previously run non-disruptive controller replacement operations. The command
displays the following information:
• Phase
• Node
• Task name
• Task status
• Error message
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-operation-identifier {None|Controller-replacement}] - Operation Identifier
Specifies the NDO operation identifier.
[-task-identifier <integer>] - Task Identifier
Specifies the identification number of the task.
[-node <nodename>] - Node That Performs Operation
Specifies the node that is to be replaced.
[-task-phase {None|Initialization|Precheck|Commit|Collect-info|Preop|Resource-release|Verification|
Resource-regain|Postop|Uncommit|Postcheck|Cleanup}] - Task Phase
Specifies the phase of the operation.
[-task-name <text>] - Name of the Task
Specifies the name of the task.
[-task-status {None|Ready-to-start|In-progress|Waiting|Paused-on-error|Paused-for-intervention|
Paused-on-request|Completed-on-first-node|Completed|Failed|Pause_req|Canceled}] - Status of the
Task
Specifies the status of the task.
[-task-error <text>] - Error During the Task Execution
Specifies the error that occurred.
[-task-recovery-action <text>] - Action to Recover from Error
Specifies the action to be taken in case of an error.
● Examples
The following example displays detailed information about the non-disruptive replacement operation:
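The original transcript was not preserved; a minimal sketch of the invocation:
cluster1::*> system controller replace show-details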
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system controller replace start command is used to initiate a controller-replacement
operation. The update is preceded by a validation of the HA pair to ensure that any issues that might affect
the update are identified.
There are predefined points in the update when the update can be paused (either requested by the user or
by the operation in case of an error or for manual intervention).
● Parameters
-nodes <nodename>, ... - Nodes for Controller Replacement
Specifies the nodes that are to be replaced.
[-simulate [true]] - Simulate Controller Replacement
Performs a dry run of the operation, running all validation checks.
[-skip-metrocluster-check {true|false}] - Skip Metrocluster Check before Replacement
Skips the DR mirror offline check when performing Metrocluster validation. In a 4-node Metrocluster
configuration, if controller replacement is already complete on one site, the partner site should
replace its controllers by setting this parameter to true. The default value is false.
● Examples
The following example shows the replacement operation:
2. Verify that the NVMEM or NVRAM batteries of the new nodes are charged, and charge them if they
are not. You need to physically check the new nodes to see if the NVMEM or NVRAM batteries are
charged. You can check the battery status either by connecting to
Attention: Do not try to clear the NVRAM contents. If there is a need to clear the contents of
NVRAM, contact NetApp technical support.
3. If you are replacing the controllers with a used one, ensure that you run wipeconfig before
controller replacement.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller service-event delete command removes the service event from the list and
extinguishes all related FRU attention LEDs.
In some cases, where the underlying fault condition remains, the service event might be reported again,
causing it to reappear in the list. In such cases, it is necessary to remedy the underlying fault condition in
order to clear the service event.
● Parameters
-node {<nodename>|local} - Node
Selects service events on the specified nodes.
-event-id <integer> - Service Event ID
Selects the service events that match the specified event identifier. Together with the node, this field
uniquely identifies the row to delete. Use the system controller service-event show command
to find the event identifier for the service event to delete.
● Examples
The following example lists the currently active service events. Then, using the listed Service Event ID, the
service event is deleted:
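The original transcript was not preserved; an illustrative sketch follows, with the node name and event ID as placeholders:
cluster1::> system controller service-event show
cluster1::> system controller service-event delete -node node1 -event-id 1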
Related reference
system controller service-event show (page 1344)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller service-event show command displays one or more events that have been
detected by the system for which a physical service action might be required. Physical service actions
sometimes involve replacing or re-seating misbehaving FRUs. In such cases FRU attention LEDs will be
illuminated to assist in physically locating the FRU in need of attention. When the FRU in question is
contained within another FRU, both the inner and outer FRU attention LEDs will be lit. It creates a path of
LEDs that starts at the chassis level and leads to the FRU in question. For example, if a DIMM is missing from
the controller motherboard, the storage OS will detect this and log a service event whose location is the
DIMM slot on the controller. The DIMM slot LED, controller LED and chassis LED will all be lit to create a path
of LEDs to follow.
FRU attention LEDs that are not visible from outside the system (for example, those on the controller
motherboard, such as DIMMs and the boot device) remain on for a few minutes, even after power is
removed from the containing FRU. As such, when the controller is removed from the chassis, a DIMM slot
FRU attention LED will remain on, helping to locate the FRU in need of attention.
Generally, service events are cleared automatically when the issue is resolved. The corresponding FRU
attention LEDs are extinguished accordingly. In cases where the service event request is caused by an
environmental issue, it might be necessary to manually remove the service event from the list. This can be
done using the system controller service-event delete command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects service events on the specified node.
● Examples
The following example lists the currently active service events.
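The original transcript was not preserved; a minimal sketch of the invocation:
cluster1::> system controller service-event show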
Related reference
system controller service-event delete (page 1343)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller slot module insert command adds a module on the controller.
● Parameters
-node {<nodename>|local} - Node
Selects the PCIe modules that are present in the specified node.
-slot <text> - Slot Number
Selects the PCIe modules present in the specified slot or slot-subslot combination.
● Examples
The following example adds a module in the local node:
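The original transcript survives only partially; an illustrative invocation (the slot number is a placeholder):
p2i030::> system controller slot module insert -node local -slot 1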
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller slot module remove command removes a module on the controller.
● Parameters
-node {<nodename>|local} - Node
Selects the PCIe modules that are present in the specified node.
-slot <text> - Slot Number
Selects the PCIe modules present in the specified slot or slot-subslot combination.
● Examples
The following example removes a module in the local node:
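The original transcript survives only partially; an illustrative invocation (the slot number is a placeholder):
p2i030::> system controller slot module remove -node local -slot 1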
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller slot module replace command powers off a module on the controller for
replacement.
● Parameters
-node {<nodename>|local} - Node
Selects the PCIe modules that are present in the specified node.
-slot <text> - Slot Number
Selects the PCIe modules present in the specified slot or slot-subslot combination.
● Examples
The following example powers off a module in the local node:
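The original transcript survives only partially; an illustrative invocation (the slot number is a placeholder):
p2i030::> system controller slot module replace -node local -slot 1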
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller slot module show command displays hotplug status of a module on the
controller. The command displays the following information about the PCIe modules:
• Node
• Slot
• Module
• Status
To display more details, use the -instance parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the PCIe modules that are present in the specified node.
[-slot <text>] - Slot Number
Selects the PCIe modules present in the specified slot or slot-subslot combination.
[-status <text>] - Module Status
Selects hotplug status for PCIe modules.
[-card <text>] - Module Name
Selects module name for PCIe modules.
● Examples
The following example displays hotplug status of PCI modules found in the local node:
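The original transcript survives only partially; a minimal sketch of the invocation:
cluster1::> system controller slot module show -node local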
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller sp config show command displays the following configuration information of
the service processor for all nodes in the cluster:
• Node name
• Service processor status
• Service processor firmware version
• Booted firmware version
• Service processor configuration status
• Physical Ethernet link status of service processor
• Health monitor status
To display more details, use the -instance parameter. This command is available on 80xx, 25xx, and
later systems; earlier models are not supported.
● Parameters
{ [-fields <fieldname>, ...]
Selects the field that you specify.
| [-instance ]}
Displays detailed configuration information of the service processor.
[-node {<nodename>|local}] - Node
Use this parameter to list the service processor configuration of the specific node.
[-version <text>] - Firmware Version
Selects the service processor configuration with the specified firmware version.
[-boot-version {primary|backup}] - Booted Version
Selects the service processor configuration with the specified version of the currently booted partition.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Health Monitor Name
Selects the service processor configuration with the specified monitor name.
[-sp-status {online|offline|sp-daemon-offline|node-offline|degraded|rebooting|unknown|updating}]
- SP Status
Selects the service processor configuration with the specified status of service processor.
[-sp-config {true|false}] - Auto Update Configured
Selects information about the service processor with the specified configuration status of the service
processor.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects the service processor configuration information with the specified service processor status.
[-link-status {up|down|disabled|unknown}] - Public Link Status
Selects the service processor configuration with the specified physical ethernet link status.
[-name <text>] - Display Name
Selects the service processor configuration with the specified unique name.
● Examples
The example below displays configuration of the service processor in all the nodes in the cluster:
The example below displays configuration of the service processor of a particular node in detail:
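The original transcripts were not preserved; illustrative invocations for the two cases (node name is a placeholder):
cluster1::> system controller sp config show
cluster1::> system controller sp config show -node node1 -instance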
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system controller sp upgrade show command displays the following information about the
service processor firmware of all the nodes in the cluster:
• Node name
• Is new firmware available?
• Is autoupdate enabled?
• Status of autoupdate
• Health monitor status
To display more details, use the -instance parameter. This command is available on 80xx, 25xx, and
later systems; earlier models are not supported.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays detailed upgrade information of the service processor.
[-node {<nodename>|local}] - Node
Use this parameter to list the upgrade information of the service processor on the specified node.
[-new-fw-avail {true|false}] - New Firmware Available
Selects the information of the service processors which have new firmware available.
[-new-fw-version <text>] - New Firmware Version
Selects the information about service processors with the specified firmware version.
[-auto-update {true|false}] - Auto Update
Selects the information about service processors with the specified state.
[-auto-update-stat {installed|corrupt|updating|auto-updating|none}] - Auto Update Status
Selects the information about service processors with the specified auto update status.
[-auto-update-sttime <MM/DD/YYYY HH:MM:SS>] - Auto Update Start Time
Selects the information about service processors with the specified start time.
[-auto-update-entime <MM/DD/YYYY HH:MM:SS>] - Auto Update End Time
Selects the information about service processors with the specified end time.
[-auto-update-per <integer>] - Auto Update Percent Done
Selects the information about service processors with the specified auto update percentage completed.
[-auto-update-maxret <integer>] - Auto Update Maximum Retries
Selects the information about service processors with the specified maximum number of retries.
[-auto-update-curret <integer>] - Auto Update Current Retries
Selects the information about service processors with the specified number of current retries.
[-auto-update-prevstat {failed|passed}] - Previous AutoUpdate Status
Selects the information about service processors with the specified automatic update status.
[-monitor {node-connect|system-connect|system|controller|chassis|cluster-switch|example|ethernet-
switch}] - Health Monitor Name
Selects the information about service processors with the specified monitor name.
[-status {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Status
Selects the information about service processors with the specified health monitor status.
[-name <text>] - Display Name
Selects the information about service processors with the specified display name.
● Examples
The example below displays service processor upgrade information for all nodes in the cluster:
The example below displays the detailed service processor upgrade information for a specific node:
Node: node1
New Firmware Available: false
New Firmware Version: Not Applicable
Auto Update: true
Auto Update Status: installed
Auto Update Start Time: Thu Oct 20 20:06:03 2012 Etc/UTC
Auto Update End Time: Thu Oct 20 20:09:19 2012 Etc/UTC
Auto Update Percent Done: 0
Auto Update Maximum Retries: 5
Auto Update Current Retries: 0
Previous AutoUpdate Status: passed
Health Monitor Name: controller
Status: ok
Display Name: SP Upgrade
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display feature usage information in the cluster on a per-node and per-week basis.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
● Examples
The following example displays a usage output filtered by the serial number and feature name:
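The command name does not survive in this excerpt; assuming it is the ONTAP system feature-usage show-history command, an illustrative invocation (serial number and feature name are placeholders) might be:
cluster1::> system feature-usage show-history -serial-number 1-81-0000000000000001122334455 -feature-name NFS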
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Display usage summary information about features in the cluster on a per-node basis. The summary
information includes counter information such as the number of weeks the feature was in use and the last
date and time the feature was used. Additional information can also be displayed by using the -instance
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example displays a usage summary output for a cluster of two nodes:
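The command name does not survive in this excerpt; assuming it is the ONTAP system feature-usage show-summary command, a minimal sketch of the invocation:
cluster1::> system feature-usage show-summary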
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system fru-check show command checks and displays the results of quick diagnostic tests performed
on certain FRUs of each controller in the cluster. The tests are not intended to be exhaustive, but to
provide a quick check of certain FRUs, especially after replacement.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that have the specified name.
| [-instance ]}
Selects detailed information (if available) for all the FRUs.
[-node {<nodename>|local}] - Node
Selects the FRUs that belong to the node that has the specified name.
[-serial-number <text>] - FRU Serial Number
Selects the FRU matching the specified serial number.
[-fru-name <text>] - FRU Name
Selects the FRU matching the specified fru-name.
[-fru-type {controller|dimm|bootmedia|nvram|nvdimm}] - FRU Type
Selects the FRUs of the specified type.
[-fru-status {pass|fail|unknown}] - Status
Selects the FRUs whose FRU check status matches that specified. "pass" indicates the FRU is operational.
"fail" indicates the FRU is not operating correctly. "unknown" indicates a failure to obtain FRU
information during the check.
[-display-name <text>] - Display Name
Selects the FRU matching the specified display name.
[-location <text>] - Location
Selects the FRUs whose location matches that specified. Example: Memory Slot: 1
[-additional-info <text>] - Additional Info
Selects the FRUs whose additional information matches that specified. Example: Part No: 69003140-I00-NTA-T
[-reason <text>] - Details
Selects the FRUs whose failure reason matches that specified.
system ha commands
The system ha interconnect status directory displays the interconnect device NIC reset configuration
information.
Related reference
High-Availability interconnect device status information (page 1371)
The system ha interconnect config directory displays the interconnect device basic configuration
information.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect config show command displays the high-availability interconnect device
basic configuration information.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-transport <text>] - Interconnect Type
Selects the nodes that match this HA interconnect transport type.
[-local-sysid <integer>] - Local System ID
Selects the nodes that match this local system unique identifier.
[-partner-sysid <integer>] - Partner System ID
Selects the nodes that match this partner system unique identifier.
[-initiator {local|partner}] - Connection Initiator
Selects the nodes that match this parameter value. The value is the initiator of the connection request.
[-port-name <text>, ...] - Port
Selects the nodes that match this port name.
[-ipaddress <text>, ...] - IP Address
Selects the nodes that match this IP address.
[-interface {backplane|external}] - Interface
Selects the nodes that match this parameter value. external means the HA interconnect links between
partner nodes are connected externally. backplane means the HA interconnect links between partner
nodes are connected over the backplane.
● Examples
The following example displays the HA interconnect configuration information on FAS8000 series nodes in
the cluster:
Node: ic-f8040-01
Interconnect Type: Infiniband (Mellanox ConnectX)
Local System ID: 536875713
Partner System ID: 536875678
Connection Initiator: local
Interface: backplane
Node: ic-f8040-02
Interconnect Type: Infiniband (Mellanox ConnectX)
Local System ID: 536875678
Partner System ID: 536875713
Connection Initiator: partner
Interface: backplane
The following example displays the HA interconnect configuration information on FAS2500 series nodes in
the cluster:
Node: ic-f2554-03
Interconnect Type: Infiniband (Mellanox Sinai)
Local System ID: 1781036608
Partner System ID: 1780360209
Connection Initiator: local
Interface: backplane
Node: ic-f2554-04
Interconnect Type: Infiniband (Mellanox Sinai)
Local System ID: 1780360209
Partner System ID: 1781036608
Connection Initiator: partner
Interface: backplane
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect link off command turns off the specified link on the high-availability
interconnect device. For the nodes in the cluster with two external high-availability interconnect links, you
must specify the link number (0-based) to turn off the specified link. For the nodes in the cluster with
interconnect links over the backplane, you must specify the link number 1 to turn off the link.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies the node on which the interconnect link is to be turned off. The
value "local" specifies the current node.
-link {0|1} - Link
This mandatory parameter specifies the interconnect link number (0-based) to turn off.
● Examples
The following example displays output of the command on the nodes with a single interconnect link or
nodes with interconnect links over the backplane:
The following example displays output of the command on the nodes with two interconnect links connected
externally:
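The original transcripts were not preserved; illustrative invocations for the two cases, using the node names from the configuration example above (link numbers are placeholders):
cluster1::*> system ha interconnect link off -node ic-f8040-01 -link 1
cluster1::*> system ha interconnect link off -node ic-f8040-01 -link 0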
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect link on command turns on the specified link on the high-availability
interconnect device. For the nodes in the cluster with two external high-availability interconnect links, you
must specify the link number (0-based) to turn on the specified link. For the nodes in the cluster with
interconnect links over the backplane, you must specify the link number 1 to turn on the link.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies the node on which the interconnect link is to be turned on. The
value "local" specifies the current node.
-link {0|1} - Link
This mandatory parameter specifies the interconnect link number (0-based) to turn on.
● Examples
The following example displays output of the command on the nodes with a single interconnect link or
nodes with interconnect links over the backplane:
The following example displays output of the command on the nodes with two interconnect links connected
externally:
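The original transcripts were not preserved; illustrative invocations for the two cases, using the node names from the configuration example above (link numbers are placeholders):
cluster1::*> system ha interconnect link on -node ic-f8040-01 -link 1
cluster1::*> system ha interconnect link on -node ic-f8040-01 -link 0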
The system ha interconnect ood directory manages the interconnect device out-of-order delivery
capability configuration. These commands are only supported on FAS2500 series nodes in the cluster.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood clear-error-statistics command enables you to clear all the
error statistics collected for the out-of-order delivery-capable high-availability interconnect device. This
command is only supported on FAS2500 series nodes in the cluster.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the error statistics cleared. The value "local"
specifies the current node.
● Examples
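No example survives in this excerpt; an illustrative invocation (node name is a placeholder):
cluster1::*> system ha interconnect ood clear-error-statistics -node ic-f2554-03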
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood clear-performance-statistics command enables you to clear
all the performance statistics collected for the out-of-order delivery-capable high-availability interconnect
device. This command is only supported on FAS2500 series nodes in the cluster.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the performance statistics cleared. The value
"local" specifies the current node.
● Examples
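No example survives in this excerpt; an illustrative invocation (node name is a placeholder):
cluster1::*> system ha interconnect ood clear-performance-statistics -node ic-f2554-03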
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood disable-optimization command disables the optimization
capability on the high-availability interconnect device. The command is only supported on FAS2500 series
nodes in the cluster.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the optimization disabled. The value "local"
specifies the current node.
● Examples
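No example survives in this excerpt; an illustrative invocation (node name is a placeholder):
cluster1::*> system ha interconnect ood disable-optimization -node ic-f2554-03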
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood disable-statistics command disables collection of the statistics
on the out-of-order delivery-capable high-availability interconnect device. This command is only supported
on FAS2500 series nodes in the cluster.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the statistics collection disabled. The value
"local" specifies the current node.
● Examples
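No example survives in this excerpt; an illustrative invocation (node name is a placeholder):
cluster1::*> system ha interconnect ood disable-statistics -node ic-f2554-03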
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood enable-optimization command enables you to turn on
optimization (coalescing out-of-order delivery requests) on the high-availability interconnect device. This
command is only supported on FAS2500 series nodes in the cluster.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the optimization enabled. The value "local"
specifies the current node.
● Examples
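No example survives in this excerpt; an illustrative invocation (node name is a placeholder):
cluster1::*> system ha interconnect ood enable-optimization -node ic-f2554-03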
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood enable-statistics command enables collection of the statistics
on the out-of-order delivery-capable high-availability interconnect device. This command is only supported
on FAS2500 series nodes in the cluster.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will have the statistics collection enabled. The value
"local" specifies the current node.
● Examples
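An illustrative invocation (the parameter value is an example):
cluster1::*> system ha interconnect ood enable-statistics -node local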
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood send-diagnostic-buffer command enables you to run a short
out-of-order delivery diagnostic test. The command sends a buffer to the partner controller over the high-
availability interconnect. This command is only supported on FAS2500 series nodes in the cluster.
● Parameters
-node <nodename> - Node
This mandatory parameter specifies which node will send the diagnostic buffer to its partner. The value
"local" specifies the current node.
● Examples
The following example demonstrates how to use this command to send a diagnostic buffer to the partner:
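The invocation would be similar to the following (illustrative; the parameter value is an example):
cluster1::*> system ha interconnect ood send-diagnostic-buffer -node local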
Displays the high-availability interconnect device out-of-order delivery configuration information. This
command is supported only on FAS2500 series nodes in the cluster.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect ood status show command displays configuration information of the
out-of-order delivery-capable high-availability interconnect devices. This command is supported only on
FAS2500 series nodes in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-is-ood-enabled {true|false}] - Is OOD Enabled
Selects the nodes that match this parameter value.
[-is-coalescing-enabled {true|false}] - Is Coalescing Enabled
Selects the nodes that match this parameter value.
● Examples
The following example displays the HA interconnect device out-of-order delivery configuration information
on FAS2500 series nodes in the cluster:
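The invocation would be similar to the following (illustrative):
cluster1::*> system ha interconnect ood status show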
Node: ic-f2554-03
NIC Used: 0
Is OOD Enabled: true
Is Coalescing Enabled: true
Node: ic-f2554-04
NIC Used: 0
Is OOD Enabled: true
Is Coalescing Enabled: true
2 entries were displayed.
The system ha interconnect port directory displays the interconnect device port information.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect port show command displays the high-availability interconnect device
port physical layer and link layer status information.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-link-monitor {on|off}] - Link Monitor Detection
Selects the nodes that match this parameter value.
[-port <integer>, ...] - Port Number
Selects the nodes that match this parameter value.
[-phy-layer-state {invalid|sleep|polling|disabled|port-configuration-testing|linkup|link-error-
recovery|phytest|reserved}, ...] - Physical Layer State
Selects the nodes that match this parameter value.
[-link-layer-state {invalid|down|initialize|armed|active|reserved}, ...] - Link Layer State
Selects the nodes that match this parameter value.
[-phy-link-up-count <integer>, ...] - Physical Link Up Count
Selects the nodes that match this parameter value. The value is the total number of times the link on a
given port has transitioned up.
[-phy-link-down-count <integer>, ...] - Physical Link Down Count
Selects the nodes that match this parameter value. The value is the total number of times the link on a
given port has transitioned down.
[-is-active-link {true|false}, ...] - Is the Link Active
Selects the nodes that match this parameter value. The value true means the interconnect data channels
are established on this link.
● Examples
The following example displays the HA interconnect device port information on FAS8000 series nodes in the
cluster:
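An illustrative invocation (the port listing itself varies by platform and is not reproduced here):
cluster1::*> system ha interconnect port show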
Related reference
system ha interconnect statistics show-scatter-gather-list (page 1365)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect statistics clear-port command clears the high-availability
interconnect device port statistics. This command is supported only on FAS2500 series and FAS8000 series
nodes in the cluster.
Note
To display the high-availability interconnect device port statistics, use the statistics show -
object ic_hw_port_stats command.
● Parameters
-node <nodename> - Node
Selects the nodes that match this parameter value.
● Examples
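An illustrative invocation (the parameter value is an example):
cluster1::*> system ha interconnect statistics clear-port -node local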
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect statistics clear-port-symbol-error command clears the high-
availability interconnect device port symbol errors. This command is supported only on FAS2500 series
nodes in the cluster.
Note
To display the high-availability interconnect device port statistics, use the statistics show -
object ic_hw_port_stats command.
● Parameters
-node <nodename> - Node
Selects the nodes that match this parameter value.
● Examples
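An illustrative invocation (the parameter value is an example):
cluster1::*> system ha interconnect statistics clear-port-symbol-error -node local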
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect statistics show-scatter-gather-list command displays the
high-availability interconnect device scatter-gather list entry statistics. Of the 32 possible entries in a
scatter-gather list, the command displays only the entries that contain valid data.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-sge <integer>, ...] - Scatter-Gather Entry
Selects the nodes that match this scatter-gather element index value.
[-total-count <integer>, ...] - Total Count
Selects the nodes that match this parameter value. The value is the total number of times a particular
scatter-gather list element is used.
[-total-size <integer>, ...] - Total Size
Selects the nodes that match this parameter value. The value is the total number of bytes written by the
high-availability interconnect device using a particular scatter-gather list element.
● Examples
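The output below would come from an invocation similar to the following (illustrative):
cluster1::*> system ha interconnect statistics show-scatter-gather-list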
Node: ic-f8040-02
Entry Count Size
----- ----------------- ----------------
1 1544405 310004390
2 6217 16779908
3 1222 12003411
4 338606 5543436659
6 2 41980
7 2 46136
8 18 589824
9 18 589824
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect statistics performance show command displays the high-
availability interconnect device performance statistics.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
Use this parameter to display all the fields from all nodes in the cluster.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields from the specified node in the cluster.
[-elapsed <integer>] - Elapsed Time (secs)
Selects the nodes that match this parameter value. Displays the total elapsed time between statistics
collection start time to end time. During the initialization stage, statistics collection starts when the
partner node is up and ready. After the initialization stage, the statistics collection start time is reset
after every execution of this command. This means that after the initialization stage, elapsed time
represents the time between current command execution and previous command execution.
[-qmax-wait <integer>] - Maximum Queue Wait Count
Selects the nodes that match this wait value. The queue maximum wait value is the total number of
times the interconnect device waited to post requests on the send queue.
[-qmax-wait-time <integer>] - Average Queue Wait Time (usecs)
Selects the nodes that match this average wait time value. The value is the average amount of time the
interconnect device waited to post requests on the send queue.
[-qmax-timeout <integer>] - Maximum Queue Timeouts
Selects the nodes that match this parameter value. The queue maximum timeout value is the total
number of times the interconnect device timed out waiting to post requests on the send queue.
[-preempt-timeout <integer>] - Preempt Timeouts
Selects the nodes that match this parameter value. The timeout value is the total number of times
polling on the given transfer ID is preempted.
● Examples
The following example displays the HA interconnect device performance statistics for FAS8000 series nodes
in the cluster:
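The invocation would be similar to the following (illustrative):
cluster1::*> system ha interconnect statistics performance show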
Node: ic-f8040-02
Elapsed Time (secs): 12
Maximum Queue Wait Count: 29
Average Queue Wait Time (usecs): 68
Remote NV Messages Average Time (usecs): 1386
Total Remote NV Transfers: 19190
Remote NV Average Transfer Size: 375
Remote NV Transfers Average Time (usecs): 670
Total IC waits for Given ID: 304
Average IC Waitdone Time (usecs): 5
Total IC isdone Checks: 1409
Total IC isdone Checks Success: 1409
Total IC isdone Checks Failed: 0
IC Small Writes: 20964
IC 4K Writes: 5
IC 8K Writes: 99
IC 16K+ Writes: 229
IC XORDER Writes: 10261
IC XORDER Reads: 0
RDMA Read Count: 337
Average IC Waitdone RDMA-READ Time (usecs): 0
Average MB/s: 0.57080
Average Bytes per Transfer: 187
Total Transfers: 42883
Average Time for NVLOG Sync (msecs): 1009
Maximum Time for NVLOG Sync (msecs): 1009
Maximum Scatter-Gather Elements in a List: 32
Total Receive Queue Waits to Post Buffer: 0
The following example displays the HA interconnect device performance statistics for FAS2500 series nodes
in the cluster:
Node: ic-f2554-03
Elapsed Time (secs): 253
Maximum Queue Wait Count: 11
Average Queue Wait Time (usecs): 6837
Maximum Queue Timeouts: 0
Preempt Timeouts: 0
Non-Preempt Timeouts: 0
Notify Timeouts: 0
Remote NV Messages Average Time (usecs): 3343
Total Remote NV Transfers: 59643
Remote NV Average Transfer Size: 8715
Remote NV Transfers Average Time (usecs): 4258
Total IC waits for Given ID: 180
Average IC Waitdone Time (usecs): 3187
Total IC isdone Checks: 499981
Total IC isdone Checks Success: 59922
Total IC isdone Checks Failed: 440059
IC Small Writes: 98722
IC 4K Writes: 5747
IC 8K Writes: 7719
IC 16K+ Writes: 25793
IC XORDER Writes: 66735
IC XORDER Reads: 0
RDMA Read Count: 574
Average IC Waitdone RDMA-READ Time (usecs): 229
Average MB/s: 2.1207
Average Bytes per Transfer: 4680
Total Transfers: 138302
Average Time for NVLOG Sync (msecs): 1236
Maximum Time for NVLOG Sync (msecs): 1236
Maximum Scatter-Gather Elements in a List: 27
Node: ic-f2554-04
Elapsed Time (secs): 257
Maximum Queue Wait Count: 7
Average Queue Wait Time (usecs): 10172
Maximum Queue Timeouts: 0
Preempt Timeouts: 0
Non-Preempt Timeouts: 0
Notify Timeouts: 0
Remote NV Messages Average Time (usecs): 4237
Total Remote NV Transfers: 47134
Remote NV Average Transfer Size: 9559
Remote NV Transfers Average Time (usecs): 5463
Total IC waits for Given ID: 178
Average IC Waitdone Time (usecs): 1890
Total IC isdone Checks: 393191
Total IC isdone Checks Success: 47382
Total IC isdone Checks Failed: 345809
IC Small Writes: 78369
IC 4K Writes: 3815
IC 8K Writes: 6005
IC 16K+ Writes: 22993
IC XORDER Writes: 53529
IC XORDER Reads: 0
RDMA Read Count: 524
Average IC Waitdone RDMA-READ Time (usecs): 62
Average MB/s: 2.3682
Average Bytes per Transfer: 5143
Total Transfers: 111501
Average Time for NVLOG Sync (msecs): 822
Maximum Time for NVLOG Sync (msecs): 822
Maximum Scatter-Gather Elements in a List: 27
The system ha interconnect status directory displays the interconnect device status information. At
the instance level, these commands display detailed information about the interconnect device.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system ha interconnect status show command displays the high-availability interconnect
connection status. Connection status information displayed by this command varies by controller model. For
nodes with two HA interconnect links over the backplane or connected externally, this command displays
the following information:
• Node
• Link status on the first port
• Link status on the second port
• Is the link on the first port active?
• Is the link on the second port active?
• Interconnect RDMA status
For nodes with a single HA interconnect link, this command displays the following information:
• Node
• Link status
• Interconnect RDMA status
Running the command with the -instance or -node parameter displays detailed information about the
interconnect device and its ports.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
Use this parameter to display all the fields for the specified node or all the nodes.
[-node {<nodename>|local}] - Node
Use this parameter to display all the fields for the specified node.
● Examples
The following example displays status information about the HA interconnect connection on FAS8000 series
nodes with two HA interconnect links in the cluster:
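The invocation would be similar to the following (illustrative):
cluster1::*> system ha interconnect status show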
Node: ic-f8040-01
Link 0 Status: up
Link 1 Status: up
Is Link 0 Active: true
Is Link 1 Active: false
IC RDMA Connection: up
Node: ic-f8040-02
Link 0 Status: up
Link 1 Status: up
Is Link 0 Active: true
Is Link 1 Active: false
IC RDMA Connection: up
2 entries were displayed.
The following example displays status information about the HA interconnect connection on FAS2500 series
nodes with a single HA interconnect link in the cluster:
Node: ic-f2554-01
Link Status: up
IC RDMA Connection: up
Node: ic-f2554-02
Link Status: up
IC RDMA Connection: up
2 entries were displayed.
The following example displays detailed information about the HA interconnect link when parameters such
as -instance or -node are used with the system ha interconnect status show command:
Node: ic-f8040-01
Link 0 Status: up
Link 1 Status: up
Is Link 0 Active: true
Is Link 1 Active: false
IC RDMA Connection: up
Slot: 0
Driver Name: IB Host Adapter i0 (Mellanox ConnectX MT27518 rev. 0)
Firmware: 2.11.534
Debug Firmware: no
Interconnect Port 0 :
Port Name: ib0a
GID: fe80:0000:0000:0000:00a0:9800:0030:33ec
Base LID: 0x3ec
MTU: 4096
Data Rate: 40 Gb/s (4X) QDR
Link Information: ACTIVE
Interconnect Port 1 :
Port Name: ib0b
GID: fe80:0000:0000:0000:00a0:9800:0030:33ed
Base LID: 0x3ed
MTU: 4096
Data Rate: 40 Gb/s (4X) QDR
Link Information: ACTIVE
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health alert delete command deletes all the alerts on the cluster with the specified input
parameters.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to delete alerts generated on a cluster only on the node you specify.
-monitor <hm_type> - Monitor
Use this parameter to delete alerts generated on a cluster only on the monitor you specify.
-alert-id <text> - Alert ID
Use this parameter to delete alerts generated on a cluster only on the alert ID you specify.
-alerting-resource <text> - Alerting Resource
Use this parameter to delete alerts generated on a cluster on the alerting resource you specify.
● Examples
This example shows how to delete an alert with the specified alert-id:
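An illustrative invocation (the parameter values are examples drawn from the alerts shown later in this section):
cluster1::> system health alert delete -node node1 -monitor node-connect -alert-id DualPathToDiskShelf_Alert -alerting-resource 50:05:0c:c1:02:00:0f:02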
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health alert modify command suppresses alerts generated on the cluster and sets the
acknowledgement state for an alert.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which you want to change the state.
-monitor <hm_type> - Monitor
Use this parameter to specify the monitor name on which you want to change the state.
-alert-id <text> - Alert ID
Use this parameter to specify the alert ID on which you want to change the state.
-alerting-resource <text> - Alerting Resource
Use this parameter to specify the alerting resource name on which you want to change the state.
[-acknowledge {true|false}] - Acknowledge
Use this parameter to set the acknowledgement state to true or false.
[-suppress {true|false}] - Suppress
Use this parameter to set the suppress state to true or false.
[-acknowledger <text>] - Acknowledger
Use this parameter to set the acknowledger as the filter for setting state.
[-suppressor <text>] - Suppressor
Use this parameter to set the suppressor as the filter for setting state.
● Examples
This example modifies the alert field states on the cluster:
cluster1::> system health alert modify -node * -alert-id DualPathToDiskShelf_Alert -suppress true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health alert show command displays information about all the alerts generated on the
system. Using -instance will add detailed information.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Displays the following additional information about each alert:
• Node name
• Resource name
• Severity of the alert
• Time of alert generation
• Suppress state of the alert
• Acknowledge state of the alert
• Probable cause for the alert
• Possible effect due to the alert
• Recommended corrective actions to follow
[-node {<nodename>|local}] - Node
Selects the alerts generated for the specified node.
[-monitor <hm_type>] - Monitor
● Examples
The example below displays information about all the alerts generated in the cluster:
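The invocation would be similar to the following (illustrative):
cluster1::> system health alert show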
Node: node1
Resource: Shelf ID 2
Severity: Major
Suppress: false
Acknowledge: false
Tags: quality-of-service, nondisruptive-upgrade
Probable Cause: Disk shelf 2 does not have two paths to controller
node1.
Possible Effect: Access to disk shelf 2 via controller node1 will be
lost with a single hardware component failure (e.g.
cable, HBA, or IOM failure).
Corrective Actions: 1. Halt controller node1 and all controllers attached to disk shelf 2.
2. Connect disk shelf 2 to controller node1 via two paths following the rules in the Universal SAS and ACP Cabling Guide.
3. Reboot the halted controllers.
4. Contact support personnel if the alert persists.
The example below displays additional information about a specific alert generated in the cluster:
cluster1::> system health alert show -monitor node-connect -alert-id DualPathToDiskShelf_Alert -instance
Node: node1
Monitor: node-connect
Alert ID: DualPathToDiskShelf_Alert
Alerting Resource: 50:05:0c:c1:02:00:0f:02
Subsystem: SAS-connect
Indication Time: Mon Mar 21 10:26:38 2011
Perceived Severity: Major
Probable Cause: Connection_establishment_error
Description: Disk shelf 2 does not have two paths to controller node1.
Corrective Actions: 1. Halt controller node1 and all controllers attached to disk shelf 2.
2. Connect disk shelf 2 to controller node1 via two paths following the rules in the Universal SAS and ACP Cabling Guide.
3. Reboot the halted controllers.
4. Contact support personnel if the alert persists.
Possible Effect: Access to disk shelf 2 via controller node1 will be lost with a single hardware
component failure (e.g. cable, HBA, or IOM failure).
Acknowledge: false
Suppress: false
Policy: DualPathToDiskShelf_Policy
Acknowledger: -
Suppressor: -
Additional Information: Shelf uuid: 50:05:0c:c1:02:00:0f:02
Shelf id: 2
Shelf Name: 4d.shelf2
Number of Paths: 1
Number of Disks: 6
Adapter connected to IOMA:
Adapter connected to IOMB: 4d
Alerting Resource Name: Shelf ID 2
Additional Alert Tags: quality-of-service, nondisruptive-upgrade
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health alert definition show command displays information about the various alerts
defined in the system health monitor policy file. Using -instance will display additional details.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that you specify.
| [-instance ]}
Use this parameter to display additional information on each alert definition.
• Node name
• Monitor name
• Subsystem identifier
• Alert ID
• Severity of the alert
• Probable cause
• Probable cause description
• Possible effect due to the error state
• Recommended corrective actions to be followed
• Any additional information
• Additional alert tags
[-node {<nodename>|local}] - Node
Selects the alert definitions for the specified node.
[-monitor <hm_type>] - Monitor
Selects the alert definitions with the specified monitor name.
[-alert-id <text>] - Class of Alert
Selects the alert definitions with the specified alert identifier.
[-perceived-severity <hm_perceived_sev>] - Severity of Alert
Selects the alert definitions with the specified perceived severity.
[-probable-cause <hm_probable_cause>] - Probable Cause
Selects the alert definitions with the specified probable cause of the alert.
[-probable-cause-description <text>] - Probable Cause Description
Selects the alert definitions with the specified probable cause description.
[-possible-effect <text>] - Possible Effect
Selects the alert definitions with the specified possible effect.
[-corrective-actions <text>] - Corrective Actions
Selects the alert definitions with the specified corrective action.
[-subsystem <hm_subsystem>] - Subsystem Name
Selects the alert definitions with the specified subsystem.
[-additional-information <text>] - Additional Relevant Data
Selects the alert definitions with the specified additional information.
[-tags <hm_alert_type>, ...] - Additional Alert Tags
Selects the alert definitions with the specified keywords.
● Examples
The example below displays information about all the definitions in the alert definition file:
The example below displays detailed information about the definitions in the alert definition file:
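The invocation would be similar to the following (illustrative):
cluster1::> system health alert definition show -instance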
Node: krivC-01
Monitor: system-connect
Class of Alert: DualControllerNonHa_Alert
Severity of Alert: Major
Probable Cause: Configuration_error
Probable Cause Description: Disk shelf $(sschm_shelf_info.id) is connected to two controllers
($(sschm_shelf_info.connected-nodes)) that are not an HA pair.
Possible Effect: Access to disk shelf $(sschm_shelf_info.id) may be lost with a single controller failure.
Corrective Actions: 1. Halt all controllers that are connected to disk shelf $(sschm_shelf_info.id).
2. Connect disk shelf $(sschm_shelf_info.id) to both HA controllers following the rules in the Universal SAS and ACP Cabling Guide.
3. Reboot the halted controllers.
4. Contact support personnel if the alert persists.
Subsystem Name: SAS-connect
Additional Relevant Data: -
Additional Alert Tags: quality_of_service, nondisruptive-upgrade
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health autosupport trigger history show command displays all the alert triggers in
the cluster that generated the AutoSupport messages. The following fields are displayed in the output:
• Node name
• Monitor name
• Subsystem
• Alert identifier
• Alerting resource
• Severity
• If an AutoSupport has been sent due to this alert
● Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display only the fields you specify.
| [-instance ]}
Use this parameter to display additional information about all of the alerts that were generated.
[-node {<nodename>|local}] - Node
Use this parameter to display AutoSupport trigger history on the specified node.
[-monitor <hm_type>] - Monitor
Use this parameter to display AutoSupport trigger history with the specified monitor name.
[-alert-id <text>] - Alert ID
Use this parameter to display the AutoSupport message that was triggered by the specified alert ID.
[-alerting-resource <text>] - Alerting Resource
Use this parameter to display the AutoSupport message that was triggered by the specified alerting
resource.
[-subsystem <hm_subsystem>] - Subsystem
Use this parameter to display the AutoSupport message that was triggered by the specified subsystem.
[-indication-time <Date>] - Indication Time
Use this parameter to display the AutoSupport message that was triggered at the indicated time.
[-perceived-severity <hm_perceived_sev>] - Perceived Severity
Use this parameter to display the AutoSupport message that was triggered by alerts with the specified
perceived severity.
[-autosupport-triggered {true|false}] - AutoSupport Triggered
Use this parameter to display the alerts that generated AutoSupport messages.
[-probable-cause <hm_probable_cause>] - Probable Cause
Use this parameter to display the alerts that were generated with the specified probable cause.
● Examples
This example displays information about the AutoSupport trigger history:
This example displays detailed information about the AutoSupport trigger history:
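Both views use the same base command (illustrative invocations):
cluster1::> system health autosupport trigger history show
cluster1::> system health autosupport trigger history show -instance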
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health config show command displays the configuration and status of each health monitor
in the cluster. The command shows a health status for each health monitor. The health status is an
aggregation of the subsystem health for each subsystem that the health monitor monitors. For example, if a
health monitor monitors two subsystems and the health status of one subsystem is "ok" and the other is
"degraded", the health status for the health monitor is "degraded".
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to list the health monitors present on the specified node.
[-monitor <hm_type>] - Monitor
Use this parameter to display the health monitors with the specified monitor name.
[-subsystem <hm_subsystem>, ...] - Subsystem
Selects the health monitors with the specified subsystems.
[-health {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Health
Selects the health monitors with the specified health status.
[-mon-version <text>] - Monitor Version
Selects the health monitors with the specified monitor version.
[-pol-version <text>] - Policy File Version
Selects the health monitors with the specified health monitor policy version.
[-context {Node|Cluster}] - Context
Selects the health monitors with the specified running context.
[-aggregator <hm_type>] - Aggregator
Selects the health monitors with the specified aggregator.
[-resources <text>, ...] - Resource
Selects the health monitors with the specified resource name.
[-init-state {Invalid|Initailizing|Initialized|Starting_Discovery|Starting_Re-Discovery|
Discovery_Done_Partially|Discovery_Done}] - Subsystem Initialization Status
Selects the health monitors with the specified subsystem initialization state.
[-sub-pol-versions <text>] - Subordinate Policy Versions
Selects the health monitors with the specified subordinate policy version.
● Examples
The example below displays information about health monitor configuration:
The example below displays detailed information about health monitor configuration:
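The invocation would be similar to the following (illustrative):
cluster1::> system health config show -instance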
Node: node1
Monitor: node-connect
Subsystem: SAS-connect
Health: degraded
Monitor Version: 1.0
Policy File Version: 1.0
Context: node_context
Aggregator: system-connect
Resource: SasAdapter, SasDisk, SasShelf
Subsystem Initialization Status: initialized
Subordinate Policy Versions: 1.0 SAS, 1.0 SAS multiple adapters
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health policy definition modify command enables or disables health monitoring policies
based on the input parameters the user provides.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which you want to enable or disable the policy.
-monitor <hm_type> - Monitor
Use this parameter to specify the monitor name for which you want to enable or disable the policy.
-policy-id <text> - Policy
Use this parameter to specify the policy identifier that you want to enable or disable.
[-enable {true|false}] - Policy Status
Use this parameter with the value "true" to enable the policy. Set the value to "false" to disable the
policy.
[-asup-enable {true|false}] - Enable AutoSupport for This Alert
Use this parameter to enable or disable an AutoSupport message for this alert.
● Examples
This example modifies policy state on the cluster:
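An illustrative invocation (the node, monitor, and policy names are examples taken from elsewhere in this section):
cluster1::> system health policy definition modify -node node1 -monitor node-connect -policy-id ControllerToShelfIomA_Policy -enable false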
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health policy definition show command lists the health monitor policy definitions as
described by the health monitor policy file. The command displays the following fields:
• Node name
• Monitor name
• Policy name
• Policy rule expression
• Expression for joining two tables
• Policy status
• Alert identifier
• Responsible resource name
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects policy definitions for the specified node.
[-monitor <hm_type>] - Monitor
Selects policy definitions with the specified monitor name.
[-policy-id <text>] - Policy
Selects policy definitions with the specified policy identifier.
[-rule-expression <ArithExpr>] - Rule Expression
Selects policy definitions with the specified rule expression.
[-where <ArithExpr>] - Variable Equivalence
Selects rules that match the provided expression. This expression is part of the alert definition. It is
shown for reference only and cannot be changed.
[-enable {true|false}] - Policy Status
Use this parameter with the value set to "true" to select policy definitions that are enabled. Set the value
to "false" to select policy definitions that are disabled.
[-alert-id <text>] - Alert ID
Selects all policy definitions of the specified alert identifier.
[-responsible-resource-info <text>] - Table and ID of Resource at Fault
Selects all policy definitions with the specified responsible resource.
[-asup-enable {true|false}] - Enable AutoSupport for This Alert
Selects policy definitions for which AutoSupport messages are either enabled or disabled.
● Examples
The example below displays information about all the policy definitions present in the cluster:
The example below displays detailed information about all the policy definitions present in the cluster:
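The invocation would be similar to the following (illustrative):
cluster1::> system health policy definition show -instance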
Node: node1
Monitor: node-connect
Policy: ControllerToShelfIomA_Policy
Rule Expression: nschm_shelf_info.num-paths == 2 && nschm_shelf_info.iomb-adapter ==
NULL
Variable Equivalence: -
Policy Status: true
Alert ID: ControllerToShelfIomA_Alert
Table and ID of Resource at Fault: nschm_shelf_info.name
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health status show command displays the health monitor status. The possible states are:
• ok
• ok-with-suppressed
• degraded
• unreachable
● Examples
This example displays information about health monitoring status:
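An illustrative invocation:
cluster1::> system health status show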
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system health subsystem show command displays the health status of each subsystem for which
health monitoring is available. This command aggregates subsystem health status from each node in the
cluster. A subsystem's health status changes to "degraded" when a health monitor raises an alert. You can
use the system health alert show command to display information about generated alerts.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-subsystem <hm_subsystem>] - Subsystem
Selects the specified subsystem.
[-health {ok|ok-with-suppressed|degraded|unreachable|unknown}] - Health
Selects subsystems that have the specified health status.
[-init-state {Invalid|Initializing|Initialized|Starting_Discovery|Starting_Re-Discovery|
Discovery_Done_Partially|Discovery_Done}] - Initialization State
Selects subsystems that have the specified initialization state.
[-outstanding-alert-count <integer>] - Number of Outstanding Alerts
Selects subsystems that have the specified number of outstanding alerts.
[-suppressed-alert-count <integer>] - Number of Suppressed Alerts
Selects subsystems that have the specified number of suppressed alerts.
[-node {<nodename>|local}, ...] - Node
Selects subsystems for the specified node.
[-refresh-interval <[<integer>h][<integer>m][<integer>s]>, ...] - Subsystem Refresh Interval
The refresh interval is in minutes. A value of zero disables the subsystem refresh until a reboot or
restart of the subsystem process.
● Examples
The example below displays the health status of each subsystem:
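A sketch of the command for this example (the prompt, column layout, and statuses shown here are illustrative; the subsystem names are taken from the detailed example below):

cluster1::> system health subsystem show
Subsystem          Health
------------------ ------------------
SAS-connect        degraded
Switch-Health      ok
CIFS-NDO           ok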
The example below displays detailed information about the health status of each subsystem:
Subsystem: SAS-connect
Health: degraded
Initialization State: initialized
Number of Outstanding Alerts: 0
Number of Suppressed Alerts: 0
Node: node1,node2
Subsystem Refresh Interval: 30m, 30m
Subsystem: Switch-Health
Health: ok
Initialization State: initialized
Number of Outstanding Alerts: 0
Number of Suppressed Alerts: 0
Node: node1
Subsystem Refresh Interval: 5m
Subsystem: CIFS-NDO
Health: OK
Initialization State: initialized
Number of Outstanding Alerts: 0
Number of Suppressed Alerts: 0
Node: node1
Subsystem Refresh Interval: 5m
Related reference
system health alert show (page 1376)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command adds a license to a cluster. To add a license you must specify a valid license key, which you
can obtain from your sales representative.
● Parameters
-license-code <License Code V2>, ... - License Code V2
This parameter specifies the key of the license that is to be added to the cluster. The parameter accepts
a list of 28-digit uppercase alphanumeric keys.
● Examples
The following example adds a list of licenses with the keys AAAAAAAAAAAAAAAAAAAAAAAAAAAA and
BBBBBBBBBBBBBBBBBBBBBBBBBBBB to the cluster:
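The command might be entered as follows (the prompt "cluster1::>" is illustrative):

cluster1::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA,BBBBBBBBBBBBBBBBBBBBBBBBBBBB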
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command manages licenses in the cluster that have no effect, and so can be removed. Licenses that
have expired or are not affiliated with any controller in the cluster are deleted by this command. Licenses
that cannot be deleted are displayed with reasons for the non-deletion.
● Parameters
[-unused [true]] - Remove unused licenses
If you use this parameter, the command removes licenses in the cluster that are not affiliated with any
controller in the cluster.
[-expired [true]] - Remove expired licenses
If you use this parameter, the command removes licenses in the cluster that have expired.
[-simulate | -n [true]] - Simulate Only
If you use this parameter, the command does not remove the licenses. Instead, it displays the licenses
that would be removed if this parameter were not provided.
● Examples
The following example simulates and displays the licenses that can be cleaned up:
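Assuming the command is invoked as "system license clean-up" (the command name and prompt are illustrative here; the parameters are those documented above), the simulation might be run as:

cluster1::> system license clean-up -unused -expired -simulate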
The following licenses are either expired or unused but cannot be safely deleted:
------------------------- -----------------------------------------------------
Cloud System generated license
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command deletes a license from a cluster.
● Parameters
-serial-number <text> - Serial Number
This parameter specifies the serial number of the license that is to be deleted from the cluster. If this
parameter is not provided, the default value is the serial number of the cluster.
-package <Licensable Package> - Package
This parameter specifies the name of the package that is to be deleted from the cluster.
● Examples
The following example deletes the CIFS license with serial number
1-81-0000000000000000000123456 from the cluster:
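The command might be entered as follows (the prompt is illustrative):

cluster1::> system license delete -serial-number 1-81-0000000000000000000123456 -package CIFS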
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system license show command displays the information about licenses in the system.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-serial-number <text>] - Serial Number
If you use this parameter, the command displays information only about the licenses that match the
serial number you specify.
[-package <Licensable Package>] - Package
If you use this parameter, the command displays information only about the specified package.
[-owner <text>] - Owner
If you use this parameter, the command displays information only about the packages that match the
owner name you specify.
[-expiration <MM/DD/YYYY HH:MM:SS>] - Expiration
If you use this parameter, the command displays information only about the licenses that have the
expiration date you specify.
● Examples
The following example displays default information about all licensed packages in the cluster:
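A minimal invocation (the prompt is illustrative):

cluster1::> system license show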
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the status of all ONTAP aggregates.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you use this parameter, the command displays information only about aggregates that match the
given node.
[-aggr-name <text>] - Aggregate Name
If you use this parameter, the command displays information only about the aggregate that matches the
given aggregate name.
[-aggr-size {<integer>[KB|MB|GB|TB|PB]}] - Aggregate Size
If you use this parameter, the command displays information only about aggregates that match the
given physical size of an aggregate.
[-licensed-size {<integer>[KB|MB|GB|TB|PB]}] - Licensed Size
If you use this parameter, the command displays information only about aggregates that match the
given licensed-size.
[-expiration <MM/DD/YYYY HH:MM:SS>] - Lease Expiration
If you use this parameter, the command displays information only about aggregates that match the
given lease expiration.
[-status <AggrLicStatus>] - Aggregate Status
If you use this parameter, the command displays information only about aggregates that match the
given status.
[-compliant {true|false}] - Is Aggregate Compliant
If you use this parameter, the command displays information only about aggregates that match the
given state of compliance.
[-aggr-uuid <UUID>] - Aggregate UUID
If you use this parameter, the command displays information only about the aggregate that matches the
given aggregate UUID.
● Examples
The following example displays the license status of the cluster:
Licensed Physical
Node Aggregate Size Size Lease Expiration Status
--------- ------------------- -------- -------- ------------------ ------------
node1
root1 0B 2GB - lease-not-required
root2 (mirror) 0B 2GB - lease-not-required
aggr1 20GB 20GB 6/21/2018 18:10:00 lease-up-to-date
aggr2 (mirror) 10GB 10GB 6/21/2018 20:00:00 lease-up-to-date
node2
root1 (mirror) 0B 2GB - lease-not-required
root2 0B 2GB - lease-not-required
aggr1 (mirror) 20GB 20GB 6/21/2018 18:10:00 lease-up-to-date
aggr2 10GB 10GB 6/21/2018 20:00:00 lease-up-to-date
node3
root3 0B 2GB - lease-not-required
root4 (mirror) 0B 2GB - lease-not-required
aggr3 15GB 0B 6/21/2018 20:00:00 aggregate-deleted
aggr4 (mirror) 15GB 15GB 6/21/2018 12:00:00 lease-expired
aggr5 (mirror) 15GB 15GB 6/21/2018 21:00:00 lease-up-to-date
aggr6 15GB 15GB 6/21/2018 21:00:00 plex-deleted
aggr7 15GB 14GB 6/21/2018 21:00:00 aggregate-license-size-decreased
aggr8 (mirror) 0B 14GB - lease-missing
node4
root3 (mirror) 0B 2GB - lease-not-required
root4 0B 2GB - lease-not-required
aggr3 (mirror) 15GB 0B 6/21/2018 20:00:00 aggregate-deleted
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the history of changes to the support and node serial numbers. The node serial
number and the support serial number of an ONTAP system are generally the same and do not change over
time. However, when capacity pools licensing is used, the support serial number is the serial number of
the capacity pool license, and the node serial number is generated by the license manager. Also, when a
cluster is upgraded or converted from capacity tiers licensing to capacity pools licensing, its support
serial numbers as well as its node serial numbers change.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-seqnum <Sequence Number>] - Sequence number
Event sequence number
[-node {<nodename>|local}] - Node
Selects the node names that match this parameter value.
[-date <MM/DD/YYYY HH:MM:SS>] - Date
Selects the dates of serial number changes that match this parameter value.
[-reason <text>] - Reason for change
Reasons for serial number changes.
[-support-serial-number <text>] - Support Serial Number
Selects the support serial numbers that match this parameter value.
[-node-serial-number <text>] - Node Serial Number
Selects the node serial numbers that match this parameter value.
● Examples
The following example displays the serial number change history of a four node capacity pools cluster. Its
two HA pairs were originally assigned to capacity pools 390000101 and 390000102, and then both were
reassigned to another capacity pool 390000103:
The following example displays the history of a four node cluster converted from capacity tiers licensing to
capacity pools licensing:
The following example displays the history of an evaluation cluster that was upgraded to capacity pools
licensing:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the status of all Data ONTAP licenses.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-status {not-compliant|eval|partially-installed|valid|not-installed|not-applicable|not-known}] -
Current State
If you use this parameter, the command displays information only about licenses that match the given
status.
[-license <Licensable Package>] - License
If you use this parameter, the command displays information only about licenses that match the given
license.
[-scope {site|cluster|node|pool}] - License Scope
If you use this parameter, the command displays information only about licenses that match the given
scope.
[-detailed-status <text>, ...] - Detailed Status
If you use this parameter, the command displays information only about licenses that match the given
detailed-status.
● Examples
The following example displays the license status of the cluster:
SnapMirror - -
FlexClone - -
SnapVault - -
SnapLock - -
SnapManagerSuite - -
SnapProtectApps - -
V_StorageAttach - -
Insight_Balance - -
OCShift - -
TPM - -
VE - -
DP_Optimized - -
not-applicable
Cloud - -
Select - -
20 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system license update-leases command attempts to update (that is, renew) any capacity pool
leases that have expired.
● Parameters
[-node {<nodename>|local}] - Nodes to Attempt Renewal
This optional parameter directs the system to update leases for only the specified nodes.
[-force {true|false}] - Force Renewal of Valid Leases
This optional parameter, if set with a value of "true", directs the system to update all leases for a node,
not just those that have expired.
● Examples
The following example updates all leases on a node:
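The command might be entered as follows (the node name node1 and the prompt are illustrative):

cluster1::> system license update-leases -node node1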
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release of Data ONTAP. Use the "system
license show-status" command instead.
The system license capacity show command displays the information about the licenses in the
system that are specifically related to storage capacity limits.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-serial-number <Node Serial Number>] - Serial Number
If you use this parameter, the command displays information only about the capacity-related licenses
that match the serial number you specify.
[-package <Licensable Package>] - Package
If you use this parameter, the command displays information only about the package you specify.
[-owner <text>] - Owner
If you use this parameter, the command displays information only about the capacity-related licenses
that have the owner you specify.
[-max-capacity {<integer>[KB|MB|GB|TB|PB]}] - Maximum Capacity
If you use this parameter, the command displays information only about the capacity-related licenses
that have the maximum amount of attached storage capacity you specify.
[-current-capacity {<integer>[KB|MB|GB|TB|PB]}] - Current Capacity
If you use this parameter, the command displays information only about the capacity-related licenses
that apply to the node with the current attached capacity you specify.
[-expiration <MM/DD/YYYY HH:MM:SS>] - Expiration Date
If you use this parameter, the command displays information only about the capacity-related licenses
that have the expiration date you specify.
[-reported-state {evaluation|warning|missing|enforcement|installed}] - Reported State
If you use this parameter, the command displays information only about the capacity-related licenses
that have the reported state you specify.
[-node {<nodename>|local}] - Node Name
If you use this parameter, the command displays information only about the capacity-related licenses
that apply to the node you specify.
● Examples
The following example displays information about all capacity-related licensed packages in the cluster, for a
hypothetical cluster of four nodes:
Note that for some nodes below, the maximum capacity is displayed as "-" (meaning "unlimited"). This
happens when there is no capacity license for the node; the node is operating with a limited-time
temporary capacity license.
Node: node1
Serial Number: 1-81-0000000000001234567890123456
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select 2TB 15.81GB 4/11/2016 00:00:00
Node: node2
Serial Number: 1-81-0000000000000000000123456788
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select - 10.40TB 4/11/2016 00:00:00
Node: node3
Serial Number: 1-81-0000000000000000000123456789
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select - 10.40TB 4/11/2016 00:00:00
Node: node4
Serial Number: 1-81-0000000000001234567890123456
Max Current
Package Capacity Capacity Expiration
------------------------ -------- -------- -------------------
Select 2TB 15.81GB 4/11/2016 00:00:00
Related reference
system license show-status (page 1395)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays information about license entitlement risk of the cluster for each license package.
The command displays license package name, entitlement risk, corrective action to reduce the entitlement
risk for each package, and the names and serial numbers for the nodes that do not have a node-locked
license for a given package. If the command is used with the "-detail" parameter, the output displays the
names and serial numbers for all nodes in the cluster instead of only the nodes missing a node-locked
license. It also displays whether each node has a license and whether the features corresponding to the
package were used in the past week.
License entitlement risk does not apply to the base license. If a node has a site license or a valid demo
license for the given package, the entitlement risk is shown as "medium" and the nodes missing a
node-locked license are displayed. If the cluster has a site license for the given package, the corrective
action is "Verify all controllers are entitled". If the entitlement risk is high, the corrective action is
"Acquire node-locked license". If the entitlement risk is low, or the cluster is unlicensed for a given
package, the corrective action is "None". If the license entitlement risk cannot be computed because of
infrastructure issues, the entitlement risk is shown as "unknown" and the corrective action is displayed
as "Verify system health".
● Parameters
{ [-fields <fieldname>, ...]
With this parameter, you can specify which fields should be displayed by the command. License package
names and node serial numbers are always displayed.
| [-detail ]
If you use this parameter, the command displays the license package name, entitlement risk, corrective
action, all nodes' names, their serial numbers, whether a node-locked license is present and whether a
given license package has been in use in the past week for each node in the cluster.
| [-instance ]}
If this parameter is used, the command displays values for all fields for each license package and each
node in the cluster individually.
[-package <Licensable Package>] - Package Name
If you use this parameter, the command displays information only for the specified license package.
[-serial-number <text>] - Node Serial Number
If you use this parameter, the command displays information only for the node with the specified serial
number. The displayed entitlement risk and corrective action apply to the entire cluster.
[-node-name <text>] - Node Name
If you use this parameter, the command displays information only for the node with the specified name.
The displayed entitlement risk and corrective action apply to the entire cluster.
[-risk {high|medium|low|unlicensed|unknown}] - Entitlement Risk
If you use this parameter, the command displays information only for the license packages that have the
specified license entitlement risk.
[-action <text>] - Corrective Action
If you use this parameter, the command displays information only for the license packages which need
the specified corrective action to reduce entitlement risk.
[-is-licensed {true|false}] - Is Node-Locked License Present
If you use this parameter, the command displays information only for the license packages for which at
least one node in the cluster has a node-locked license. It also displays the nodes in the cluster which do
not have a node-locked license.
[-in-use {true|false}] - Usage Status
If you use this parameter, the command displays information only for the license packages with
corresponding features in use.
[-missing-serial-numbers <text>, ...] - Serial Numbers Missing a Node-Locked License
If you use this parameter, the command displays the packages for which the node with the specified
serial number does not have a node-locked license.
[-missing-node-names <text>, ...] - Node Names Missing a Node-Locked License
If you use this parameter, the command displays all the packages for which the node with the specified
name does not have a node-locked license.
[-action-code {acquire-license|adjust-capacity|verify-entitlement|verify-system-health|none}] -
Corrective Action Code
If you use this parameter, the command displays information only for the license packages that need the
specified corrective action code to reduce entitlement risk. This parameter is the same as the "action"
parameter.
● Examples
The following example displays the information for license package NFS. NFS is unlicensed in the cluster and
no action is necessary to reduce the entitlement risk. The nodes cluster1-01 and cluster1-02 are missing
a node-locked license. The serial numbers for both nodes are also displayed.
The following example displays the information for license package CIFS. The cluster has high entitlement
risk for CIFS. The command displays serial numbers for all nodes in the cluster. Both nodes are missing a
node-locked CIFS license. The node with serial number 1-81-0000000000000004073806282 has used the
CIFS feature in the past week, and the node with serial number 1-81-0000000000000004073806283 has
not used this feature in the past week.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system license license-manager check command checks the connectivity status of a node to the License
Manager that the node was configured to use. The status of a node might indicate that the License Manager
is inaccessible. If so, the status message contains additional text in parentheses. The text options and
descriptions are as follows:
• license_expired : The License Manager has a license, but it is expired.
• network_error : The node is unable to establish basic network connectivity.
• no_valid_license : The License Manager does not have a valid capacity pool license.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter directs the system to display results for the License Manager configured for the specified
node.
[-status <text>] - Status
This parameter directs the system to display results for the given status message.
● Examples
The following examples check the status of the configured License Manager, before and after its license has
expired:
Node: node1
LM status: License Manager (1.2.3.4:5678) is accessible.
Node Status
--------------- ------------------------------------------------------
node1 License Manager (1.2.3.4:5678) is accessible.
node2 License Manager (1.2.3.4:5678) is accessible.
2 entries were displayed.
Node: node1
LM status: License Manager (1.2.3.4:5678) is inaccessible (license_expired).
Node Status
--------------- ------------------------------------------------------
node1 License Manager (1.2.3.4:5678) is inaccessible (license_expired).
node2 License Manager (1.2.3.4:5678) is inaccessible (license_expired).
2 entries were displayed.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system license license-manager modify command modifies the configuration information for
the License Manager the system is using.
● Parameters
[-host <text>] - License Manager Host
Sets the specified host, which can either be a fully qualified domain name (FQDN) or an IP address.
● Examples
The following example modifies information about the License Manager configuration:
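The command might be entered as follows (the prompt and the host address 10.1.2.3 are illustrative; per the parameter description above, an FQDN may be used instead):

cluster1::> system license license-manager modify -host 10.1.2.3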
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system license license-manager show command displays the information about the current
License Manager configuration.
● Examples
The following example displays information about current License Manager configuration:
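A minimal invocation (the prompt is illustrative):

cluster1::> system license license-manager show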
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This command is deprecated and may be removed in a future release of Data ONTAP. Use the "system
license show-status" command instead.
This command displays the list of licensable packages in the system and their current licensing status.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-package <Licensable Package>] - Package Name
If you use this parameter, the command displays information only about the specified package.
[-method {none|license|site|demo|subscr|capacity}] - Licensed Method
If you use this parameter, the command displays information only about the packages with the specified
licensed method.
[-expiration <MM/DD/YYYY HH:MM:SS>] - Expiration Date
If you use this parameter, the command displays information only about the licenses that have the
expiration date you specify.
[-description <text>] - Description
If you use this parameter, the command displays information only about the licenses that match the
description you specify.
[-status-details <text>] - Additional Information About Status
This option displays additional information regarding the cluster-level license status for license
methods.
● Examples
The following example displays the license status of the cluster:
Related reference
system license show-status (page 1395)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node halt command stops all activity on a node. You may supply a reason for the shutdown,
which will be stored in the audit log. You may also keep partner nodes from performing storage takeover
during the shutdown.
● Parameters
-node {<nodename>|local} - Node
Use this mandatory parameter to specify the node that you want to shut down. The value local
specifies the current node.
[-reason <text>] - Reason for Shutdown
Use this parameter to enter a brief note to indicate the reason for the shutdown, which will be stored in
the audit log. Providing this information assists support personnel with troubleshooting efforts.
[-inhibit-takeover | -f [true]] - Disallow Storage Takeover by Partner
This parameter optionally forces the shutdown and prevents storage failover. LIFs will migrate prior to
shutdown even when inhibit-takeover is set to true. To prevent LIF migration, skip-lif-migration-before-
shutdown should be set to true. In a two-node MetroCluster configuration, this parameter prevents
automatic unplanned switchover.
Note
If -inhibit-takeover is set to true, the default behavior of the storage failover show -fields
onreboot command is ignored.
If you enter this command without using this parameter, its effective value is false and storage takeover
is allowed. If you enter this parameter without a value, it is automatically set to true and storage
takeover is disabled during reboot.
[-dump | -d [true]] - Create a Core Dump
If this parameter is set to true, it forces a dump of the kernel core when halting the node.
[-skip-lif-migration-before-shutdown [true]] - Skip Migrating LIFs Away from Node Prior to
Shutdown
If this parameter is specified, LIF migration prior to the shutdown will be skipped. However if LIFs on this
node are configured for failover, those LIFs may still failover after the shutdown has occurred. The
default is to migrate LIFs prior to the shutdown. In the default case, the command attempts to
synchronously migrate data and cluster management LIFs away from the node prior to shutdown. If the
migration fails or times out, the shutdown will be aborted.
[-ignore-quorum-warnings [true]] - Skip Quorum Check Before Shutdown
If this parameter is specified, quorum checks will be skipped prior to the shutdown. The operation will
continue even if there is a possible data outage due to a quorum issue.
[-ignore-strict-sync-warnings [true]] - Skip SnapMirror Synchronous Strict Sync Check Before
Reboot
If this parameter is specified, the check for volumes that are in SnapMirror Synchronous relationships
with policy of type strict-sync-mirror will be skipped. The operation will continue even if there is a
possible data outage due to not being able to fully sync data.
[-power-off {true|false}] - Power off the node after shutdown
If this parameter is specified, the node will be powered off at the end of the halt operation.
● Examples
The following example shuts down the node named cluster1 for hardware maintenance:
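The command might be entered as follows (the prompt is illustrative; the node name and reason match the example description above):

cluster1::> system node halt -node cluster1 -reason "hardware maintenance"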
Related reference
storage failover show (page 1128)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node migrate-root command migrates the root aggregate of a node to a different set of
disks. You need to specify the node name and the list of disks on which the new root aggregate will be
created. The command starts a job that backs up the node configuration, creates a new aggregate, sets it
as the new root aggregate, restores the node configuration, and restores the names of the original
aggregate and volume. The job might take as long as a few hours, depending on the time it takes to zero
the disks, reboot the node, and restore the node configuration.
● Parameters
-node {<nodename>|local} - Node
Specifies the node that owns the root aggregate that you wish to migrate. The value local specifies the
current node.
{ -disklist <disk path name>, ... - List of Disks for New Root Aggregate
Specifies the list of disks on which the new root aggregate will be created. All disks must be spares and
owned by the same node. The minimum number of disks required depends on the RAID type.
-raid-type {raid_tec|raid_dp|raid4} - RAID Type for the New Root Aggregate
Specifies the RAID type of the root aggregate. The default value is raid_dp.
| -resume [true]} - Resume a Failed Migrate Operation
Resumes a failed migrate-root operation if the new root aggregate has been created and the old root
aggregate is in the restricted state.
● Examples
The command in the following example starts the root aggregate migration on node node1:
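The command might be entered as follows (the prompt and the disk names are illustrative; per the parameter description above, the disks must be spares owned by node1):

cluster1::> system node migrate-root -node node1 -disklist 1.1.10,1.1.11,1.1.12,1.1.13,1.1.14 -raid-type raid_dp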
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node modify command sets the attributes of a node.
The owner, location, and asset tag attributes are informational only, and do not affect the operation of the
node or the cluster. The cluster eligibility attribute marks a node as eligible to participate in a cluster. The
epsilon attribute marks a node as the tie-breaker vote if the cluster has an even number of nodes.
Any field of type <text> may be set to any text value. However, if the value contains spaces or other special
characters, you must enter it using double-quotes as shown in the example below.
Use the system node show command to display the field values that this command modifies.
● Parameters
-node {<nodename>|local} - Node
This mandatory parameter specifies which node will have its attributes modified. The value "local"
specifies the current node.
[-owner <text>] - Owner
This optional text string identifies the node's owner. Fill it in as needed for your organization.
[-location <text>] - Location
Use this text string to identify the physical location of the node. This text string is optional; fill it in as
needed for your organization.
[-assettag <text>] - Asset Tag
If your organization uses asset tags to track equipment, you can use this text string to store that tag's
value.
[-eligibility {true|false}] - Eligibility (privilege: advanced)
This parameter specifies whether the node is eligible to participate in a cluster. If you modify another
node's eligibility to false, it will no longer be visible from other nodes in the cluster. If you modify the
local node's eligibility to false, the node will no longer be active in the cluster and you will not be able to
see any cluster nodes from it.
[-epsilon {true|false}] - Epsilon (privilege: advanced)
If specified as true for a node, this value designates the specified node as epsilon for this cluster. In a
cluster, only one node can be designated as epsilon at any given time. A node can be designated as
epsilon to add weight to its voting in a cluster with an even number of nodes.
[-skip-quorum-check-before-eligible [true]] - Skip Quorum Check Before Setting Node Eligible
(privilege: advanced)
If this parameter is specified, quorum checks will be skipped prior to setting a node eligible. When
setting a node to eligible, the operation will continue even if there is a possible data outage due to a
quorum issue.
[-skip-quorum-check-before-ineligible [true]] - Skip Quorum Check Before Setting Node
Ineligible (privilege: advanced)
If this parameter is specified, quorum checks will be skipped prior to setting a node ineligible. When
setting a node to ineligible, the operation will continue even if there is a possible data outage due to a
quorum issue.
[-is-diff-svcs {true|false}] - Differentiated Services
If set to true, the specified node and its HA partner are part of a differentiated services storage
infrastructure. The default value for this setting is false.
● Examples
The following example modifies the attributes of a node named node0. The node's owner is set to "IT" and
its location to "Data Center 2."
cluster1::> system node modify -node node0 -owner "IT" -location "Data Center 2"
Related reference
system node show (page 1411)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node reboot command restarts a node. You can supply a reason for the reboot, which is
stored in the audit log. You can also keep partner nodes from performing storage takeover during the reboot
and instruct the rebooted node to create a core dump.
● Parameters
-node {<nodename>|local} - Node
Specifies the node that is to be restarted. The value "local" specifies the current node.
[-inhibit-takeover | -f [true]] - Disallow Storage Takeover by Partner
If set to true, this parameter specifies that the node's failover partner is not allowed to take over for the
node when the node is rebooted. LIFs will migrate prior to reboot even when -inhibit-takeover is set to
true. To prevent LIF migration, set -skip-lif-migration-before-reboot to true. In a two-node
MetroCluster configuration, this parameter prevents automatic unplanned switchover. If you enter this
command without using this parameter, its effective value is false and storage takeover is allowed. If
you enter this parameter without a value, it is automatically set to true and storage takeover is disabled
during reboot.
[-reason <text>] - Reason for Reboot
Use this parameter to enter a brief note to indicate the reason for the restart, which will be stored in the
audit log. Providing this information assists support personnel with troubleshooting efforts.
[-dump | -d [true]] - Create a Core Dump
If you would like the node to create a core dump before restarting, specify the true value with this
parameter. If you enter this command without using this parameter, its effective value is false and the
node doesn't create a core dump. If you enter this parameter without a value, it is automatically set to
true and the node creates a core dump.
[-skip-lif-migration-before-reboot [true]] - Skip Migrating LIFs Away from Node Prior to Reboot
If this parameter is specified, LIF migration prior to the reboot will be skipped. However, if LIFs on this
node are configured for failover, those LIFs may still fail over after the reboot has occurred. The default is
to migrate LIFs prior to the reboot. In the default case, the command attempts to synchronously migrate
data and cluster management LIFs away from the node prior to reboot. If the migration fails or times
out, the reboot will be aborted.
[-ignore-quorum-warnings [true]] - Skip Quorum Check Before Reboot
If this parameter is specified, quorum checks will be skipped prior to the reboot. The operation will
continue even if there is a possible data outage due to a quorum issue.
[-ignore-strict-sync-warnings [true]] - Skip SnapMirror Synchronous Strict Sync Check Before
Reboot
If this parameter is specified, the check for volumes that are in SnapMirror Synchronous relationships
with policy of type strict-sync-mirror will be skipped. The operation will continue even if there is a
possible data outage due to not being able to fully sync data.
● Examples
The command in the following example restarts the node named cluster1 for a software upgrade:
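Based on the parameters described above, the command might be:
cluster1::> system node reboot -node cluster1 -reason "software upgrade"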
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node rename command changes a node's name. Both the node to be modified and the new
name of that node must be specified with the following parameters. This command is best executed from
the node that is being renamed, using the -node local parameter.
Use the system node show command to display the names of all the nodes in the current cluster.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies which node you are renaming. The value local specifies the current node.
-newname <text> - New Name
Use this parameter to specify the new name of the node.
• The name must contain only the following characters: A-Z, a-z, 0-9, "-" or "_".
• The first character must be one of the following characters: A-Z or a-z.
• The last character must be one of the following characters: A-Z, a-z or 0-9.
• The maximum supported length is 47 characters.
• The system reserves the following names: "all", "cluster", "local" and "localhost".
● Examples
The following example changes the name of the node named node3 to node4.
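Based on the parameters described above, the command is:
cluster1::> system node rename -node node3 -newname node4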
Related reference
system node show (page 1411)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node restore-backup command restores the backup configuration file that is stored on the
partner node to the specified target node in an HA pair. The backup configuration file is restored after Data
ONTAP has been installed on the target node.
The backup configuration file is stored on the HA partner node while the target node is down. After the
target node has been installed, the partner node sends this backup configuration file to the target node
through the management network by using the system node restore-backup command to restore the
original configuration. This procedure is commonly used when replacing the target node's boot device.
The target IP address should be the address of the target node used for netboot installation.
● Parameters
-node {<nodename>|local} - Node
Specifies the partner node that sends the backup configuration file to the target node. The value "local"
specifies the current node.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node revert-to command reverts a node's cluster configuration to the given version. After
the system node revert-to command has finished, the revert_to command must be run from the
nodeshell. The revert_to command reverts the filesystem on individual nodes to the target release. Before
running revert-to in the clustershell, the target release must be installed on the node.
● Parameters
-node {<nodename>|local} - Node
Specifies the node that is to be reverted. The value local specifies the current node.
-version <revert version> - Data ONTAP Version
Specifies the version of Data ONTAP to which the node is to be reverted.
[-check-only [true]] - Capability Check
If set to true, this parameter specifies that the cluster configuration revert should perform checks to
verify all of the preconditions necessary for revert-to to complete successfully. Setting the parameter to
true does not run through the actual revert process. By default this option is set to false.
● Examples
The command in the following example reverts the cluster configuration of a node named node1 to Data
ONTAP version 9.7:
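A sketch of the invocation, using the parameters described above:
cluster1::> system node revert-to -node node1 -version 9.7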
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Use the system node run command to run certain commands from the nodeshell CLI on a specific node in
the cluster. You can run a single nodeshell command from the clustershell that returns immediately, or you
can start an interactive nodeshell session from which you can run multiple nodeshell commands.
Nodeshell commands are useful for root volume management and system troubleshooting. Commands that
are available through the nodeshell are scoped to a single node in the cluster. That is, they affect only the
node specified by the value of the -node parameter and do not operate on other nodes in the cluster. To see
a list of available nodeshell commands, type '?' at the interactive nodeshell prompt. For more information
on the meanings and usage of the available commands, use the man command in the nodeshell.
Only one interactive nodeshell session at a time can be run on a single node. Up to 24 concurrent, non-
interactive sessions can be run at a time on a node.
When running the nodeshell interactively, exit the nodeshell and return to the clustershell by using the
exit command. If the nodeshell does not respond to commands, terminate the nodeshell process and
return to the clustershell by pressing Ctrl-D.
The system node run command is not available from the GUI.
Note
An alternate way to invoke the system node run command is to type run as a single word.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the name of the node on which you want to run the nodeshell command.
If you specify only this parameter, the command starts an interactive nodeshell session that lasts
indefinitely. You can exit the nodeshell to the clustershell by pressing Ctrl-D or by typing the exit
command.
{ [-command <text>, ...] - Command to Run
This optional parameter specifies the name of a single nodeshell command to run on the specified node. To see a
list of available nodeshell commands, type '?' at an interactive nodeshell prompt.
| [-reset [true]]} - Reset Existing Connection
If this parameter is specified with the true value, it terminates any existing interactive nodeshell
session on the specified node. The default value is false.
● Examples
The following example runs the nodeshell command sysconfig -V on a node named node1:
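Such an invocation might look like:
cluster1::> system node run -node node1 -command sysconfig -V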
The following example starts a nodeshell session on a node named node2 and then runs the nodeshell
sysconfig -V command. The system remains in the nodeshell after running the sysconfig -V
command.
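A sketch of such a session (the nodeshell prompt text is illustrative):
cluster1::> system node run -node node2
node2> sysconfig -V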
The following example starts a nodeshell session on a node named node1 and then runs two nodeshell
commands, aggr status first and vol status second. Use quotation marks and semicolons when
executing multiple nodeshell commands with a single run command.
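Following that description, the invocation might be:
cluster1::> system node run -node node1 -command "aggr status; vol status"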
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command allows you to access the console of any remote node on the same cluster. The remote access
is helpful in situations where the node cannot be booted up or has network issues. This command
establishes an SSH session with the Service Processor of a remote node and accesses that node's console
over the serial channel. This command works even if Data ONTAP is not booted up on the remote node. You
can get back to the original node by pressing Ctrl+D. This command works only on SSH sessions and not on
physical console sessions.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose physical console you want to access.
● Examples
The following example accesses the console of node2 in the same cluster.
cluster1::> system node run-console -node node2
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node show command displays information about the nodes in a cluster. You can limit output
to specific types of information and specific nodes in the cluster, or filter output by specific field values.
To see a list of values that are in use for a particular field, use the -fields parameter of this command with
the list of field names you wish to view. Use the system node modify command to change some of the
field values that this command displays.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-inventory ]
Use this parameter to display inventory information such as serial numbers, asset tags, system
identifiers, and model numbers.
| [-messages ]
Use this parameter to display system messages for each node.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information for node names that match this parameter value.
[-owner <text>] - Owner
Selects nodes that have owner values that match this parameter value.
[-location <text>] - Location
Selects nodes at specific physical locations that match this parameter value.
[-model <text>] - Model
Selects nodes that have model numbers that match this parameter value.
[-serialnumber <text>] - Serial Number
Selects nodes that have serial numbers that match this parameter value.
[-assettag <text>] - Asset Tag
Selects nodes that have asset tags that match this parameter value.
[-uptime {<seconds>|[<d> days] <hh>:<mm>[:<ss>]}] - Uptime
Selects nodes that have uptimes that match this parameter value. This parameter is most useful when
used with a range indicator such as less than or greater than, as in:
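For instance, -uptime >24:00 selects nodes that have been up longer than 24 hours (the value shown is illustrative).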
● Examples
This example displays the locations and model numbers of all nodes that are in physical locations that have
names beginning with "Lab":
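Given that description, the command might be (the wildcard pattern is illustrative):
cluster1::> system node show -fields location,model -location Lab*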
Related reference
system node modify (page 1405)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node show-discovered command displays information about all the detectable nodes on
the local cluster network. This includes both nodes in a cluster and nodes that do not belong to a cluster.
You can filter the output to show only nodes that do not belong to a cluster or nodes that are in a cluster.
To see a list of values that are in use for a particular field, use the -fields parameter of this command with
the list of field names you wish to view.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command only displays the fields that you
specify.
| [-instance ]}
If the -instance parameter is specified, the command displays detailed information about each node.
[-node <text>] - Node Name
This parameter specifies the name of a node for which information is to be displayed. If this parameter is
not specified, the command displays information about all discovered nodes.
[-is-in-cluster {true|false}] - Is in a Cluster
If this parameter is set to false, the command lists only nodes that do not belong to a cluster.
[-cluster-uuid <UUID>] - Cluster UUID
Displays information about nodes belonging to the cluster that has the UUID you specify.
[-cluster-name <text>] - Cluster Name
Displays information about nodes belonging to the cluster that has the name you specify.
[-serial-number <text>] - Node Serial Number
Displays information about the node that has the serial number you specify.
[-addresses <IP Address>, ...] - Cluster IP Addresses
Displays information about the node that has the cluster IP addresses you specify.
[-netmask <IP Address>] - Cluster Address Mask
Displays information about the nodes that have the netmask address you specify.
[-nvramid <nvramid>] - Node NVRAM ID
Displays information about the node that has the NVRAM ID you specify.
[-partner-nvramid <nvramid>] - Partner NVRAM ID
Displays information about the node that has an HA partner with the NVRAM ID you specify.
[-model <text>] - Model
Displays the nodes that have the specified model number.
[-version <text>] - Software Version
Displays the nodes that have the specified version of Data ONTAP.
● Examples
The following example displays information about all discovered nodes in the cluster network:
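Per the description above, the basic form of the command is:
cluster1::> system node show-discovered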
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport invoke command sends an AutoSupport message from a node.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent.
[-message <text>] - Message Included in the AutoSupport Subject
Use this parameter to specify text sent in the subject line of the AutoSupport message. This parameter is
not available when the -type parameter is set to performance.
-type {test|performance|all} - Type of AutoSupport Collection to Issue
Use this parameter to specify the type of AutoSupport collection to issue. There is no default; you must
specify a -type.
• test - The message contains basic information about the node. When the AutoSupport message is
received by technical support, an e-mail confirmation is sent to the system owner of record. This
enables you to confirm that the message is being received by technical support.
• all - The message contains all collected information about the node.
• performance - The message contains only performance information about the node. This parameter
has effect only if performance AutoSupport messages are enabled, which is controlled by the -perf
parameter of the system node autosupport modify command.
[-uri <text>] - Alternate Destination for This AutoSupport
Use this parameter to send the AutoSupport message to the destination you specify instead of the
configured destination. Only "file", "mailto", "http", and "https" protocols are supported. If this parameter
is omitted, the message is sent to all of the recipients defined by the system node autosupport
modify command.
[-force [true]] - Generate and Send Even if Disabled
Use this parameter to generate and send the message even if AutoSupport is disabled on the node.
● Examples
The following example sends a test AutoSupport message from a node named node0 with the text "Testing
ASUP":
cluster1::> system node autosupport invoke -node node0 -type test -message "Testing ASUP"
Related reference
system node autosupport modify (page 1418)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport invoke-core-upload command sends two AutoSupport messages. The
first AutoSupport message contains the content relevant to this core upload; its subject line has the prefix
"CORE INFO:". The second AutoSupport message contains the core file specified by the -core-filename
parameter; its subject line has the prefix "CORE UPLOAD:". The
command requires that the specified file be present while the AutoSupport message is being transmitted.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent. Defaults to
localhost.
[-message <text>] - Message Included in the AutoSupport Subject
Use this parameter to specify the text in the subject line of the AutoSupport message.
[-uri <text>] - Alternate Destination for This AutoSupport
Use this parameter to send the AutoSupport message to an alternate destination. Only "http" and "https"
protocols are supported. If this parameter is omitted, the message is sent to all the recipients defined by
the system node autosupport modify command.
[-force [true]] - Generate and Send Even if Disabled
Use this parameter to generate and send the AutoSupport message even if AutoSupport is disabled on
the node.
[-case-number <text>] - Case Number for This Core Upload
Use this parameter to specify the optional case number to be associated with this AutoSupport message.
-core-filename <text> - The Existing Core Filename to Upload
Use this parameter to specify the core file to be included in the AutoSupport message. Use the system
node coredump show command to list the core files by name.
● Examples
Use this command to list the core files from a node:
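For example (the -node value is illustrative):
cluster1::> system node coredump show -node local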
Use this command to invoke an AutoSupport message with the corefile core.4073000068.2013-09-11.15_05_01.nz:
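Using the core filename above, the invocation might be (the -node value is illustrative):
cluster1::> system node autosupport invoke-core-upload -node local -core-filename core.4073000068.2013-09-11.15_05_01.nz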
Related reference
system node autosupport modify (page 1418)
system node coredump show (page 1443)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport invoke-performance-archive command sends an AutoSupport
message with the performance archives from a node. The command requires that the performance archives
in the specified date range be present while the AutoSupport message is being transmitted.
Note
Performance archive upload is not supported when the -remove-private-data parameter is
enabled. To disable this parameter, use the system node autosupport modify -remove-private-data
false command.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent. The default setting
is localhost.
[-message <text>] - Message Included in the AutoSupport Subject
Use this parameter to specify the text in the subject line of the AutoSupport message.
[-uri <text>] - Alternate Destination for This AutoSupport
Use this parameter to send the AutoSupport message to an alternate destination. Only "file," "http," and
"https" protocols are supported. If this parameter is omitted, the message is sent to all of the
recipients defined by the system node autosupport modify command.
[-force [true]] - Generate and Send Even if Disabled
Use this parameter to generate and send the AutoSupport message even if AutoSupport is disabled on
the node.
[-case-number <text>] - Case Number for This Performance Archive Upload
Use this parameter to specify the optional case number to be associated with this AutoSupport message.
-start-date <MM/DD/YYYY HH:MM:SS> - Start Date for Performance Archive Dataset
Use this parameter to specify the start date for the files in the performance archive dataset to be
included in the AutoSupport message.
{ -end-date <MM/DD/YYYY HH:MM:SS> - End Date for Performance Archive Dataset
Use this parameter to specify the end date for the files in the performance archive dataset to be
included in the AutoSupport message. The end date should be within six hours of the start date.
| -duration <[<integer>h][<integer>m][<integer>s]>} - Duration of Performance Archive Dataset
Use this parameter with start-date to specify the duration of the performance archive dataset to be
included in the AutoSupport message. The maximum duration limit is six hours from the start date.
● Examples
Use this command to invoke an AutoSupport message to include the performance archives in the given date
range:
cluster1::> system node autosupport invoke-performance-archive -node local -start-date 9/12/2013 19:37:
24 -end-date 9/12/2013 22:37:24
cluster1::> system node autosupport invoke-performance-archive -node local -start-date 11/21/2013 13:4
2:09 -duration 6h
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport invoke-splog command sends an AutoSupport message with the
Service Processor log files from a specified node in the cluster.
● Parameters
-remote-node {<nodename>|local} - Node
Use this parameter to specify the node from which Service Processor log files are to be collected.
[-log-sequence <integer>] - Log File Sequence Number
Use this parameter to specify the sequence number of the Service Processor log files to be collected. If
this parameter is omitted, the latest Service Processor log files are collected.
[-uri <text>] - Alternate Destination for This AutoSupport
Use this parameter to send the AutoSupport message to an alternate destination. Only "file," "http," and
"https" protocols are supported. If this parameter is omitted, the message is sent to all of the
recipients defined by the system node autosupport modify command.
[-force [true]] - Generate and Send Even if Disabled
Use this parameter to generate and send the AutoSupport message even if AutoSupport is disabled on
the node.
● Examples
Use this command to invoke an AutoSupport message to include the Service Processor log files collected
from node cluster1-02.
cluster1::> system node autosupport invoke-splog -remote-node cluster1-02
Related reference
system node autosupport modify (page 1418)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport modify command modifies the AutoSupport configuration of all the
nodes in the cluster.
● Parameters
-node {<nodename>|local} - Node
Note
The AutoSupport configuration will be modified on all nodes in the cluster, even if this parameter
is specified.
[-noteto <mail address>, ...] - (DEPRECATED) List of Noteto Addresses
Note
This parameter has been deprecated and might be removed in a future version of ONTAP. Instead,
use the event notification destination create command to create an email destination
and the event notification create command to map the important-events system filter
to the newly created event destination. This will send an email notification for all events with a
severity of ERROR, ALERT, or EMERGENCY. For events with other severity values, a new event filter
needs to be created using the event filter create command. Filter rules can be added to the
filter using the event filter rule add command. This newly created filter has to be mapped
to the event destination.
Use this parameter to specify up to five addresses to receive a short-note version of AutoSupport
messages that are most relevant for your internal organization. Short-note e-mails contain only the
subject line of the AutoSupport message, which is easier to view on a mobile device. For this parameter
to have effect, the -mail-hosts parameter must be configured correctly. Individual trigger events can
change the default usage of this parameter using the -noteto parameter of the system node
autosupport trigger modify command. By default, no list is defined.
[-max-http-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum HTTP Size
Use this parameter to specify the maximum file size for HTTP and HTTPS transfers. If the size of the
AutoSupport message exceeds this value, AutoSupport will deliver as much of the message as possible.
You can use the "system node autosupport manifest show" command to identify
the sections of the message that AutoSupport sent. AutoSupport collects and sends the content in order
of priority. The priority is predefined for each AutoSupport message. To identify the collection order for
an AutoSupport trigger, use the "system node autosupport trigger show" command with the -instance
parameter.
[-max-smtp-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum SMTP Size
Use this parameter to specify the maximum file size (in bytes by default, but it can also be specified in KB,
MB, GB, TB, or PB) for SMTP (e-mail) transfers. This parameter applies to messages sent to the addresses
specified in the -to, -noteto, and -partner-address parameters. If the -transport parameter is set to smtp,
this parameter also applies to messages sent to the vendor's technical support. Setting the value to 0
disables the delivery size budget. The default is 5 MB and the minimum supported size is 2 MB.
If the size of the AutoSupport message exceeds this value, AutoSupport will deliver as much of the
message as possible. You can use the "system node autosupport manifest show" command to identify
the sections of the message that AutoSupport sent. AutoSupport collects and sends the content in order
of priority. The priority is predefined for each AutoSupport message. To identify the collection order for
an AutoSupport trigger, use the "system node autosupport trigger show" command with the -instance
parameter.
[-remove-private-data {true|false}] - (DEPRECATED) Remove Sensitive Data
Note
This parameter has been deprecated and might be removed in a future version of ONTAP.
Use this parameter with the value true to remove, encode, or mask sensitive data from AutoSupport
attachments and headers. Use this feature to eliminate private data from all AutoSupport messages.
Eliminated data might include: IP addresses, MAC addresses, URIs, DNS names, e-mail addresses, port
numbers, node names, Vserver names, cluster names, aggregate names, volume names, junction paths,
policy names, user IDs, group IDs, LUNs, NVMe namespaces and qtree names.
The default is false.
Note
Changing this value from false to true deletes the AutoSupport history and all files associated with
it.
[-node-specific-from {enable|disable}] - Node-Specific From Address
Use this parameter to specify whether the "from" e-mail address needs to be node-specific. If enabled,
the node-specific "from" e-mail address is generated by prepending the node name and "-" to the
configured "from" address. For example, if the node name is "cluster-01" and the "from" parameter is
"prod@company.com", the generated "from" e-mail address is "cluster-01-prod@company.com".
● Examples
The following example enables AutoSupport on all nodes in the cluster with the following settings:
• SMTP mail host named smtp.example.com.
• E-mail "from" address of alerts@example.com
• E-mail "to" address of support@example.com
• AutoSupport messages sent to support personnel
• HTTPS set as transport protocol
• If sending fails, the system will wait 23 minutes before retrying.
cluster1::> system node autosupport modify -state enable -mail-hosts smtp.example.com -from alerts@exam
ple.com -to support@example.com -support enable -transport https -retry-interval 23m
The following examples show how to modify AutoSupport URLs when using IPv6 address literals:
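For example, setting the support proxy URL with an IPv6 address literal (the credentials and address are illustrative):
cluster1::> system node autosupport modify -support-proxy-url support.user:pass123@[2620:10a:4002:6004::bbbb]:8080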
Related reference
system node autosupport trigger modify (page 1438)
event notification destination create (page 179)
event notification create (page 176)
event filter create (page 158)
event filter rule add (page 166)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport show command displays the AutoSupport configuration of one or more
nodes.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-config ]
Use this parameter to display the retry interval, retry count, throttle, and reminder settings of all nodes
in the cluster.
| [-nht-performance ]
Use this parameter to display NHT and performance information about all nodes in the cluster.
| [-recent ]
Use this parameter to display the subject and time of the last AutoSupport message generated by each
node in the cluster.
| [-support-http ]
Use this parameter to display whether HTTP support is enabled in the cluster, and identify the transport
protocol and the support proxy URL used.
| [-support-smtp ]
Use this parameter to display whether SMTP (e-mail) support is enabled in the cluster, and identify the
transport protocol and the "to" e-mail address used.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Note
The AutoSupport configuration is identical for all nodes in the cluster.
Use this parameter to display detailed information about the node you specify.
[-state {enable|disable}] - State
Use this parameter to display information only about nodes that have the AutoSupport state you specify.
[-mail-hosts <text>, ...] - SMTP Mail Hosts
Use this parameter to display information only about nodes that use the mail hosts you specify.
[-from <mail address>] - From Address
Use this parameter to display information only about nodes that have the "from" e-mail address you
specify.
[-to <mail address>, ...] - List of To Addresses
Use this parameter to display information only about nodes that have the "to" e-mail addresses you
specify.
[-noteto <mail address>, ...] - (DEPRECATED) List of Noteto Addresses
Note
This parameter has been deprecated and might be removed in a future version of Data ONTAP.
Use this parameter to display information only about nodes that send short-note e-mail messages to
the e-mail addresses you specify. Short-note e-mails contain only the subject line of the AutoSupport
message, which is easier to view on a mobile device.
[-partner-address <mail address>, ...] - List of Partner Addresses
Use this parameter to display information only about nodes that have the "partner-address" e-mail
addresses you specify. These addresses are not subject to the delivery limitations configured for the "-to"
addresses of AutoSupport triggers.
[-support {enable|disable}] - Send AutoSupport Messages to Vendor Support
Use this parameter with the value "enable" to display information only about nodes that send
AutoSupport messages to your vendor's technical support. Use this parameter with the value "disable" to
display information only about nodes that do not send AutoSupport messages to your vendor's technical
support.
[-transport {smtp|http|https}] - Protocol to Contact Support
Use this parameter to display information only about nodes that use the protocol you specify to send
AutoSupport messages.
[-url <text>] - Support URL for HTTP/HTTPS
Use this parameter to display information only about nodes that use the URL you specify to send
messages through HTTP and HTTPS POST operations.
[-put-url <text>] - Support URL for HTTP/S PUT
Use this parameter to display information only about nodes that use the URL you specify to send
messages through HTTP PUT operations.
[-proxy-url <text>] - Support Proxy URL
Use this parameter to display information only about nodes that use the proxy URL you specify.
[-support-address <mail address>, ...] - Support Address
Use this parameter to display information only about nodes that use the external support address you
specify.
[-hostname-subj {true|false}] - Hostname Subject
Use this parameter to display information only about nodes that include their hostname in the "Subject:"
line of AutoSupport messages. If the parameter "remove-private-data" is false, this parameter has no
effect.
[-nht {true|false}] - NHT Enable
Use this parameter with the value "true" to display information only about nodes that send NHT disk
drive data. Use this parameter with the value "false" to display information only about nodes that do not
send NHT data.
[-perf {true|false}] - Performance Data Enable
Use this parameter with the value "true" to display information only about nodes that send periodic
performance AutoSupport messages. Use this parameter with the value "false" to display information
only about nodes that do not send periodic performance messages.
[-retry-interval <[<integer>h][<integer>m][<integer>s]>] - Retry Interval
Use this parameter to display information only about nodes that use the retry interval you specify.
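Retry intervals use the compact [<integer>h][<integer>m][<integer>s] duration syntax (for example, "23m" or "1h30m"). As an illustration only (this helper is not part of ONTAP), such a value can be converted to seconds like this:

```python
import re

def parse_duration(value: str) -> int:
    """Convert a duration such as "23m", "4h", or "1h30m10s" into seconds."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", value)
    if match is None or not any(match.groups()):
        raise ValueError("not a valid duration: " + repr(value))
    hours, minutes, seconds = (int(g or 0) for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds
```

For example, the retry interval "23m" used earlier in this section corresponds to 1380 seconds.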
[-retry-count <integer>] - Retry Count
Use this parameter to display information only about nodes that use the retry count you specify.
[-reminder {true|false}] - Reminder Enable
Use this parameter with the value "true" to display information only about nodes that send messages
reminding administrators to enable AutoSupport if AutoSupport is not enabled. Use this parameter with
the value "false" to display information only about nodes that do not send reminder messages.
[-last-subject <text>] - (DEPRECATED) Last Subject Sent
Note
This parameter has been deprecated and might be removed in a future version of Data ONTAP.
Use this parameter to display information only about nodes whose last AutoSupport message had the
"Subject:" line you specify.
[-last-time <MM/DD/YYYY HH:MM:SS>] - (DEPRECATED) Last Time Sent
Note
This parameter has been deprecated and might be removed in a future version of Data ONTAP.
Use this parameter to display information only about nodes whose last AutoSupport message was sent
at the date and time you specify. Specify the date and time in the format "MM/DD/YYYY HH:MM:SS".
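The timestamp format named here can be parsed with standard tools; a minimal Python illustration (not part of ONTAP):

```python
from datetime import datetime

def parse_asup_time(text: str) -> datetime:
    """Parse a -last-time value in the "MM/DD/YYYY HH:MM:SS" format."""
    return datetime.strptime(text, "%m/%d/%Y %H:%M:%S")
```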
[-max-http-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum HTTP Size
Use this parameter to display information only about nodes that limit the maximum size of HTTP
transfers to the file size you specify.
[-max-smtp-size {<integer>[KB|MB|GB|TB|PB]}] - Maximum SMTP Size
Use this parameter to display information only about nodes that limit the maximum size of SMTP (e-
mail) transfers to the file size you specify.
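Size limits such as these use the <integer>[KB|MB|GB|TB|PB] syntax. A hypothetical helper (not part of ONTAP; the binary 1024-based multiples are an assumption here) could convert such a literal to a byte count:

```python
_UNITS = {"KB": 1 << 10, "MB": 1 << 20, "GB": 1 << 30, "TB": 1 << 40, "PB": 1 << 50}

def parse_size(value: str) -> int:
    """Convert a size literal such as "10MB" or "50KB" to bytes.
    A bare integer is interpreted as bytes."""
    for unit, factor in _UNITS.items():
        if value.upper().endswith(unit):
            return int(value[: -len(unit)]) * factor
    return int(value)
```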
[-remove-private-data {true|false}] - (DEPRECATED) Remove Sensitive Data
Note
This parameter has been deprecated and might be removed in a future version of ONTAP.
Use this parameter with the value "true" to display information only about nodes that remove sensitive
data from AutoSupport messages. Use this parameter with the value "false" to display information only
about nodes that do not remove sensitive data.
[-validate-digital-certificate {true|false}] - Validate Digital Certificate Received
Use this parameter with the value "true" to display information only about nodes that validate digital
certificates they receive. Use this parameter with the value "false" to display information only about
nodes that do not validate digital certificates.
[-ondemand-state {enable|disable}] - AutoSupport OnDemand State (privilege: advanced)
Use this parameter to display information only about nodes that have the AutoSupport OnDemand state
you specify.
[-ondemand-remote-diagnostics-state {enable|disable}] - AutoSupport OnDemand Remote
Diagnostics State (privilege: advanced)
Use this parameter to display information only about nodes that have the AutoSupport OnDemand
Remote Diagnostics state you specify.
[-ondemand-server-url <text>] - AutoSupport OnDemand Server URL
Use this parameter to display information only about nodes that have the AutoSupport OnDemand
Server URL you specify.
[-node-specific-from {enable|disable}] - Node-Specific From Address (privilege: advanced)
Use this parameter to display information that matches the specified value (enabled or disabled).
● Examples
The following example displays the AutoSupport configuration for a node named node3:
Related reference
system node autosupport trigger show (page 1439)
system node autosupport history show (page 1430)
system node autosupport manifest show (page 1434)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport check show command displays the overall status of the
AutoSupport subsystem.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the nodes that match this parameter value. This parameter specifies the node whose status is
being displayed.
[-http-status {ok|warning|failed|not-run}] - Overall Status of AutoSupport HTTP/HTTPS Destinations
Selects the nodes that match this parameter value. This parameter specifies whether connectivity to the
AutoSupport HTTP destination was established.
[-aod-status {ok|warning|failed|not-run}] - Overall Status of AutoSupport OnDemand Server
Selects the nodes that match this parameter value. This parameter specifies the detailed description of
the connectivity status to the AutoSupport OnDemand Server.
[-smtp-status {ok|warning|failed|not-run}] - Overall Status of AutoSupport SMTP Destinations
Selects the nodes that match this parameter value. This parameter specifies whether connectivity to the
AutoSupport mailhost was established.
[-config-status {ok|warning|failed|not-run}] - Overall Status of AutoSupport Configuration
Selects the nodes that match this parameter value. This parameter specifies whether the AutoSupport
configuration check succeeded or not.
[-warning-text <text>] - Conditional Warning Message
Selects the nodes that match this parameter value. This parameter specifies how to get more details
regarding the status of the AutoSupport subsystem, in case of any errors.
● Examples
The following example displays the overall status of the AutoSupport subsystem on a node named node2:
Node    HTTP/HTTPS    On Demand Server    SMTP    Configuration
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport check show-details command displays the detailed status of the
AutoSupport subsystem. This includes verifying connectivity to your vendor's AutoSupport destinations by
sending test messages and providing a list of possible errors in your AutoSupport configuration settings.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Selects the check results that match this parameter value. This parameter specifies the node whose
status is being displayed.
[-check-type <Type of AutoSupport Check>] - AutoSupport Check Type
Selects the check results that match this parameter value. This parameter specifies the type of
AutoSupport check being performed.
[-status {ok|warning|failed|not-run}] - Status of the Check
Selects the check results that match this parameter value. This parameter specifies the result of this
AutoSupport check.
[-error-detail <text>] - Detailed Description of Error
Selects the check results that match this parameter value. This parameter specifies the detailed error
message for this AutoSupport check.
[-corrective-action <text>] - Corrective Action
Selects the check results that match this parameter value. This parameter specifies a description of how
to correct any errors seen as part of this AutoSupport check.
● Examples
The following example displays the detailed status of the AutoSupport subsystem for a node named node2:
Node: node2
Category: http-https
Component: http-put-destination
Status: ok
Detail: Successfully connected to "support.netapp.com/put".
Component: http-post-destination
Status: ok
Detail: Successfully connected to "support.netapp.com/post".
Category: smtp
Component: mail-server
Status: ok
Detail: Successfully connected to "mailhost.netapp.com".
Component: mail-server
Status: ok
Detail: Successfully connected to "sendmail.domain.com".
Component: mail-server
Status: ok
Detail: Successfully connected to "qmail.domain.com".
Category: on-demand
Component: ondemand-server
Status: ok
Detail: Successfully connected to "support.netapp.com/aods".
Category: configuration
Component: configuration
Status: ok
Detail: No configuration issues found.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport destinations show command displays a list of all message
destinations used by AutoSupport. The command provides you with a quick summary of all addresses and
URLs that receive AutoSupport messages from all nodes in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display only destinations that receive AutoSupport messages from the node you
specify.
[-destinations <text>, ...] - Destinations
Use this parameter to display only destination lists for nodes that send AutoSupport messages to the
destinations you specify.
● Examples
This example displays all of the destinations in use by the current cluster. Each node uses the same
destination for HTTP POST, HTTP PUT, and e-mail notifications.
node2
https://asuppost.example.com/cgi-bin/asup.cgi
https://asupput.example.com/cgi-bin/asup.cgi
support@example.com
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport history cancel command cancels an active AutoSupport
transmission. This command is used to pause or abandon a long-running delivery of an AutoSupport
message. The cancelled AutoSupport message remains available for retransmission using the system node
autosupport history retransmit command.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which to cancel the AutoSupport message. The default setting
is localhost.
-seq-num <Sequence Number> - AutoSupport Sequence Number
Use this parameter to specify the sequence number of the AutoSupport message you want to cancel.
[-destination {smtp|http|noteto|retransmit}] - Destination for This AutoSupport
Use this parameter to specify the destination type for the AutoSupport message you want to cancel.
● Examples
Use this command to cancel the AutoSupport message delivery with seq-num 10 to all destinations.
cluster1::> system node autosupport history cancel -node local -seq-num 10
Use this command to cancel the AutoSupport message delivery with seq-num 10 via HTTP only.
cluster1::> system node autosupport history cancel -node local -seq-num 10 -destination http
Related reference
system node autosupport history retransmit (page 1430)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport history retransmit command retransmits a locally stored
AutoSupport message.
Support personnel might ask you to run this command to retransmit an AutoSupport message. You might
also retransmit an AutoSupport message if you run the system node autosupport history show
command and notice that a message was not delivered.
If you retransmit an AutoSupport message, and if support already received that message, the support
system will not create a duplicate case. If, on the other hand, support did not receive that message, then the
AutoSupport system will analyze the message and create a case, if necessary.
Use the system node autosupport history show command to display the 50 most recent AutoSupport
messages, which are available for retransmission.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node from which the AutoSupport message is sent.
-seq-num <Sequence Number> - AutoSupport Sequence Number
Use this parameter to specify the sequence number of the AutoSupport message to retransmit.
-uri <text> - Destination to Send this AutoSupport
Use this parameter to specify the HTTP, HTTPS, FILE, or MAILTO uniform resource identifier (URI) to which
the AutoSupport message is sent.
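The -uri parameter accepts several schemes. A small sketch (hypothetical validation, not part of ONTAP) of checking a destination before use:

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https", "file", "mailto"}

def is_valid_asup_uri(uri: str) -> bool:
    """Return True if the URI uses a scheme the -uri parameter accepts."""
    return urlparse(uri).scheme.lower() in ALLOWED_SCHEMES
```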
[-size-limit {<integer>[KB|MB|GB|TB|PB]}] - Transmit Size Limit for this AutoSupport.
Use this parameter to specify a size limit for the retransmitted AutoSupport message. If the message
information exceeds this limit, it will be trimmed to fit the limit you specify. Omit the size limit or set it
to 0 to disable it, which is useful to retransmit an AutoSupport message that was truncated by a mail or
Web server due to the default size limits.
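The limit rule described above can be summarized as: a limit of 0 disables trimming, otherwise the message is cut to fit. This hypothetical sketch shows only the limit rule; the real system decides which message content to drop, not a raw byte cut:

```python
def apply_size_limit(payload: bytes, limit_bytes: int) -> bytes:
    """Illustrative only: trim a payload to a size limit, where a limit of 0
    means "no limit", matching the -size-limit description above."""
    if limit_bytes == 0 or len(payload) <= limit_bytes:
        return payload
    return payload[:limit_bytes]
```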
● Examples
The following example retransmits the AutoSupport message with sequence number 45 on the node
"node1" to a support address by e-mail.
cluster1::> system node autosupport history retransmit -node node1 -seq-num 45 -uri mailto:support@example.com
Related reference
system node autosupport history show (page 1430)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport history show command displays information about the 50 most recent
AutoSupport messages sent by nodes in the cluster. By default, it displays the following information:
• AutoSupport sequence number
• Destination type, such as smtp
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-delivery ]
Use this parameter to display destination information about each AutoSupport message.
| [-detail ]
Use this parameter to display trigger and subject information about each AutoSupport message.
| [-instance ]}
Use this parameter to display the following detailed information about all entries:
• Trigger event
• Subject of the message
• Delivery URI
• Last error
• Compressed Size
• Decompressed Size
• Total Collection Time (in ms)
[-node {<nodename>|local}] - Node
Use this parameter to display only AutoSupport messages sent from the node you specify.
[-seq-num <Sequence Number>] - AutoSupport Sequence Number
Use this parameter to display only AutoSupport messages with the sequence number you specify.
Sequence numbers are unique to a node. Use this parameter with the -node parameter to display
information about an individual message.
[-destination {smtp|http|noteto|retransmit}] - Destination for This AutoSupport
Use this parameter to display only AutoSupport messages that were sent to the destination type you
specify.
[-trigger <Message Name>] - Trigger Event
Use this parameter to display only AutoSupport messages that match the trigger event you specify.
[-last-update <MM/DD/YYYY HH:MM:SS>] - Time of Last Update
Use this parameter to display only AutoSupport messages that were updated most recently at the time
you specify. Specify time in "MM/DD/YYYY HH:MM:SS" format.
[-status <AutoSupport general status>] - Status of Delivery
Use this parameter to display only AutoSupport messages with the status you specify. Possible statuses
are:
• initializing - The AutoSupport message request is being processed.
• collection-failed - The AutoSupport collection failed. View the 'Last Error' field of this message for more
information.
• collection-in-progress - The AutoSupport collection is in progress.
• queued - The AutoSupport message is queued for delivery.
• transmitting - The AutoSupport message transmission is in progress.
• sent-successful - The AutoSupport message was sent successfully.
• ignore - The AutoSupport message was processed successfully, but the trigger event is not configured
for delivery to the current destination type.
• re-queued - The AutoSupport message transmission failed, has been re-queued, and will be retried.
• transmission-failed - The AutoSupport message transmission failed, and the retry limit was exceeded.
• ondemand-ignore - The AutoSupport message was processed successfully, but the AutoSupport On
Demand server chose to ignore it.
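The statuses above fall into in-progress, terminal, and retryable groups. The sketch below is an illustration: the status strings come from the list above, but the grouping into terminal states is an assumption based on the descriptions.

```python
from enum import Enum

class DeliveryStatus(Enum):
    INITIALIZING = "initializing"
    COLLECTION_FAILED = "collection-failed"
    COLLECTION_IN_PROGRESS = "collection-in-progress"
    QUEUED = "queued"
    TRANSMITTING = "transmitting"
    SENT_SUCCESSFUL = "sent-successful"
    IGNORE = "ignore"
    RE_QUEUED = "re-queued"
    TRANSMISSION_FAILED = "transmission-failed"
    ONDEMAND_IGNORE = "ondemand-ignore"

# Assumed grouping: states after which no further automatic delivery attempt
# occurs. "transmission-failed" messages can still be resent manually with
# "system node autosupport history retransmit".
TERMINAL = {
    DeliveryStatus.SENT_SUCCESSFUL,
    DeliveryStatus.TRANSMISSION_FAILED,
    DeliveryStatus.COLLECTION_FAILED,
    DeliveryStatus.IGNORE,
    DeliveryStatus.ONDEMAND_IGNORE,
}
```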
[-attempt-count <integer>] - Delivery Attempts
Use this parameter to display only AutoSupport messages that the system has attempted to send the
number of times you specify. This parameter is most useful when given a range, such as ">5".
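Range expressions such as ">5" follow the generic CLI query syntax for numeric fields. A sketch of evaluating one against a field value (hypothetical helper, not part of ONTAP):

```python
import operator
import re

_OPS = {">": operator.gt, ">=": operator.ge,
        "<": operator.lt, "<=": operator.le, "": operator.eq}

def matches_range(query: str, value: int) -> bool:
    """Evaluate a numeric query such as ">5", "<=3", or "7" against a value."""
    m = re.fullmatch(r"(>=|<=|>|<|)(\d+)", query)
    if m is None:
        raise ValueError("not a valid numeric query: " + repr(query))
    return _OPS[m.group(1)](value, int(m.group(2)))
```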
[-subject <text>] - AutoSupport Subject
Use this parameter to display only AutoSupport messages with the subject you specify.
[-uri <text>] - Delivery URI
Use this parameter to display only AutoSupport messages sent to the destination URI you specify.
[-error <text>] - Last Error
Use this parameter to display only AutoSupport messages that failed with the "Last Error" description
you specify.
[-generated-on <MM/DD/YYYY HH:MM:SS>] - Time of Generation
Use this parameter to display only AutoSupport messages that were generated (collected) at a particular
time.
[-size {<integer>[KB|MB|GB|TB|PB]}] - AutoSupport Compressed Size
Use this parameter to display only AutoSupport messages of the compressed size you specify.
[-percent-complete <integer>] - Percent Complete
Use this parameter to display the percentage completed for any active (incomplete) AutoSupport
message.
[-upload-rate {<integer>[Bps|KBps|MBps|GBps]|unlimited}] - Rate of Upload
Use this parameter to display the current upload rate, in bytes per second. The rate is zero when no
upload is active.
[-time-remaining <[<integer>h][<integer>m][<integer>s]>] - Time Remaining for Upload
Use this parameter to display the estimated time for the transmission of the AutoSupport message to
complete.
[-decompressed-size {<integer>[KB|MB|GB|TB|PB]}] - AutoSupport Decompressed Size
Use this parameter to display only AutoSupport messages of the decompressed size you specify.
[-total-time <integer>] - Total Collection Time (ms)
Use this parameter to display only AutoSupport messages with the total collection time you specify. A
value is only shown when the collection has completed.
● Examples
The following example shows the first three results output by the history command. Note that "q" was
pressed at the prompt.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport history show-upload-details command displays upload details of
the 50 most recent AutoSupport messages sent by nodes in the cluster. By default, it displays the following
information:
• AutoSupport Sequence Number
• Destination
• Compressed Size
• Percentage Complete
• Rate of upload
• Time Remaining
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Use this parameter to display the following detailed information about all entries:
• AutoSupport Sequence Number
• Destination
• Compressed Size
• Percentage Complete
• Rate of Upload
• Time Remaining
[-node {<nodename>|local}] - Node
Use this parameter to display details of AutoSupport messages sent from the node you specify.
[-seq-num <Sequence Number>] - AutoSupport Sequence Number
Use this parameter to display details of AutoSupport messages with the sequence number you specify.
Sequence numbers are unique to a node. Use this parameter with the -node parameter to display
information about an individual message.
[-destination {smtp|http|noteto|retransmit}] - Destination for this AutoSupport
Use this parameter to display details of AutoSupport messages that were sent to the destination type
you specify.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Autosupport Compressed Size
Use this parameter to display details of AutoSupport messages of the compressed size you specify.
● Examples
The following example shows the first three results output by the history show-upload-details command.
Note that "q" was pressed at the prompt.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport manifest show command reports what information is contained in
AutoSupport messages. The name and size of each file collected for the message is reported, along with any
errors.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-content ]
Use this parameter to display detailed information about the content of the files contained in the report.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display information about only AutoSupport messages sent from the node you
specify.
[-seq-num <Sequence Number>] - AutoSupport Sequence Number
Use this parameter to display information about only AutoSupport message content with the sequence
number you specify. Sequence numbers are unique to a node. Use this parameter with the -node
parameter to display information about an individual message.
[-prio-num <integer>] - Priority Order of Collection
Use this parameter to display information about only AutoSupport message content with the collection
priority you specify. Content is collected in order, by priority number.
[-subsys <subsys1,subsys2,...>] - Subsystem
Use this parameter to display information about only AutoSupport message content collected by the
AutoSupport subsystem you specify.
[-cmd-tgt <Execution domain of AutoSupport content>] - Execution Domain for Command
Use this parameter to display information about only AutoSupport message content produced in the
execution domain you specify.
[-body-file <text>] - The AutoSupport Content Filename for this Data
Use this parameter to display information about only AutoSupport message content stored in the body
file with the file name you specify.
[-cmd <text>] - Actual Data Being Collected
Use this parameter to display information about only AutoSupport message content produced by the D-
Blade command, BSD command, file, or XML table you specify.
[-query <text>] - Table Query for XML Collection
Use this parameter to display information about only AutoSupport message content produced by the
table query you specify.
[-size-collected {<integer>[KB|MB|GB|TB|PB]}] - Number of Bytes Collected
Use this parameter to display information about only AutoSupport message content collected in files
with the file size you specify.
[-time-collected <integer>] - Collection Time for this Data Item (ms)
Use this parameter to display information about only AutoSupport message content collected in the
amount of time you specify, in milliseconds.
[-status <AutoSupport manifest collection status>] - Status of this Data Item
Use this parameter to display information about only AutoSupport message content with the collection
status you specify. Possible statuses are:
• requested - The AutoSupport request has been added to the queue and is awaiting processing by the
collector.
• working - The AutoSupport collector is actively gathering the needed data.
• file-not-found - AutoSupport data collection failed because a necessary file is missing.
• no-such-table - The AutoSupport collector was unable to find the requested SMF table.
• collection-truncated-size-limit - AutoSupport data was truncated due to size limits, but
partial data is available.
• collection-truncated-file-size-limit - AutoSupport data for a particular data item or file
was truncated due to file size limits, but partial data is available.
• collection-skipped-size-limit - AutoSupport data was skipped due to size limits, and no data
is available.
• collection-truncated-time-limit - AutoSupport data was truncated due to time limits, but
partial data is available.
• collection-skipped-time-limit - AutoSupport data was skipped due to time limits, and no data
is available.
• delivery-skipped-size-limit - AutoSupport data was skipped at delivery time due to size limits.
• collection-truncated-age - AutoSupport data for the particular data item or file was truncated
due to age, but partial data is available.
• general-error - AutoSupport data collection failed. Additional information (if any) is in the Error
String field.
• completed - AutoSupport data collection is complete, and the AutoSupport message is ready for
delivery.
• content-not-collected-precheck - AutoSupport content was not collected due to pre-check
function violation.
• content-not-collected-privacy - AutoSupport content was not collected because the operation
is disabled in privacy mode.
• content-empty - AutoSupport content was collected successfully, but the output was empty.
• collection-aborted - AutoSupport data collection was aborted.
[-error <text>] - Textual Description of Error
Use this parameter to display information about only AutoSupport message content with the error text
you specify. If data collection has failed, the error text contains a description of the failure. If data
collection completes successfully, this field is empty.
[-content-type <Type of AutoSupport content>] - AutoSupport Content Type for this Data
Use this parameter to display information about only AutoSupport message content of the type you
specify. Types supported are:
• basic - Configuration data about this subsystem
• troubleshooting - Detailed diagnostic data about this subsystem
[-orig-size-collected {<integer>[KB|MB|GB|TB|PB]}] - Initial Number of Bytes Collected
Use this parameter to display information about only AutoSupport message content collected in files
with the original file size you specify.
[-size-compressed {<integer>[KB|MB|GB|TB|PB]}] - Compressed Size
Use this parameter to display information about only AutoSupport message content collected in files
with the compressed file size you specify.
● Examples
This example displays the content of AutoSupport message number 372 on the node "node1".
cluster1::> system node autosupport manifest show -node node1 -seq-num 372
AutoSupport Collected
Node Sequence Body Filename Size Status Error
----------------- --------- -------------- ----------- --------- -----------
node1 372
SYSCONFIG-A.txt 1.73KB completed
OPTIONS.txt 29.44KB completed
software_image.xml 7.56KB completed
CLUSTER-INFO.xml 3.64KB completed
autosupport.xml 12.29KB completed
autosupport_budget.xml 7.01KB completed
autosupport_history.xml 46.52KB completed
X-HEADER-DATA.TXT 717.00B completed
SYSTEM-SERIAL-NUMBER.TXT 35.00B completed
cluster_licenses.xml 3.29KB completed
cm_hourly_stats.gz 151.4KB completed
boottimes.xml 56.86KB completed
rdb_txn_latency_stats_hrly.xml 39.31KB completed
rdb_voting_latency_stats_hrly.xml 3.43KB completed
14 entries were displayed.
This example shows how you can use parameters to limit output to specific fields of a specific AutoSupport
message. This is helpful when troubleshooting.
cluster1::> system node autosupport manifest show -node node5 -seq-num 842 -fields body-file,status,size-collected,time-collected,cmd,cmd-tgt,subsys
node  seq-num prio-num subsys    cmd-tgt     body-file                cmd                 size-collected time-collected status
----- ------- -------- --------- ----------- ------------------------ ------------------- -------------- -------------- ---------
node5 842     0        mandatory dblade      SYSCONFIG-A.txt          "sysconfig -a"      16.44KB        256            completed
node5 842     1        mandatory dblade      OPTIONS.txt              options             29.67KB        3542           completed
node5 842     2        mandatory smf_table   software_image.xml       software_image      8.68KB         33             completed
node5 842     3        mandatory smf_table   CLUSTER-INFO.xml         asup_cluster_info   4.75KB         7              completed
node5 842     4        mandatory smf_table   autosupport.xml          autosupport         12.32KB        10             completed
node5 842     5        mandatory smf_table   autosupport_budget.xml   autosupport_budget  7.03KB         29             completed
node5 842     6        mandatory smf_table   autosupport_history.xml  autosupport_history 62.77KB        329            completed
node5 842     7        mandatory custom_fx   X-HEADER-DATA.TXT        "Custom function"   720.00B        3              completed
node5 842     8        mandatory custom_fx   SYSTEM-SERIAL-NUMBER.TXT "Custom function"   31.00B         2              completed
node5 842     9        mandatory smf_table   cluster_licenses.xml     cluster_licenses    5.62KB         9              completed
node5 842     10       log_files custom_fx   log_files.xml            "Custom function"   13.07KB        4              completed
node5 842     11       log_files custom_fx   EMS-LOG-FILE.gz          "Custom function"   25.33KB        24             completed
node5 842     12       log_files dblade_file EMS-LOG-FILE-PARTNER.gz  /etc/log/ems        -              -              content-not-collected-precheck
node5 842     13       log_files dblade_file MESSAGES.gz              /etc/log/messages   35.40KB        42             completed
node5 842     14       log_files dblade_file MESSAGES-PARTNER.gz      /etc/log/messages   -              -              content-not-collected-precheck
15 entries were displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Use the system node autosupport trigger modify command to enable and disable AutoSupport
messages for individual triggers, and to specify additional subsystem reports to include if an individual
trigger sends an AutoSupport message.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node whose AutoSupport trigger configuration is modified.
-autosupport-message <Autosupport Message> - EMS Message
Use this parameter to specify the AutoSupport trigger to modify. AutoSupport triggers are EMS messages
whose names begin with "callhome.". However, for the purposes of this command, "callhome." is
implied, does not need to be entered, and will not be displayed in command output.
[-to {enabled|disabled}] - Deliver to AutoSupport -to Addresses
Use this parameter with the value "enabled" to enable sending AutoSupport messages to the configured
"to" addresses.
[-noteto {enabled|disabled}] - (DEPRECATED) Deliver to AutoSupport -noteto Addresses
Note
This parameter has been deprecated and might be removed in a future version of ONTAP. Instead,
use the event notification destination create command to create an email destination
and the event notification create command to map the important-events system filter
to the newly created event destination. This will send an email notification for all events with a
severity of ERROR, ALERT, or EMERGENCY. For events with other severity values, a new event filter
needs to be created using the event filter create command. Filter rules can be added to the
filter using the event filter rule add command. This newly created filter has to be mapped
to the event destination.
Use this parameter with the value "enabled" to enable sending short notes to the configured "noteto"
addresses.
[-basic-additional <subsys1,subsys2,...>, ...] - Additional Subsystems Reporting Basic Info
Use this parameter to include basic content from the additional subsystems you specify. Content is
collected from these subsystems in addition to the default list of subsystems.
[-troubleshooting-additional <subsys1,subsys2,...>, ...] - Additional Subsystems Reporting
Troubleshooting Info
Use this parameter to include troubleshooting content from the additional subsystems you specify.
Content is collected from these subsystems in addition to the default list of subsystems.
[-suppress {true|false}] - Suppress all occurrences of this trigger
Use this parameter with the value "true" to suppress the collection when the AutoSupport message is
triggered.
● Examples
The following example enables messages to the configured "to" addresses from the battery.low trigger on
the node node1.
cluster1::> system node autosupport trigger modify -node node1 -autosupport-message battery.low -to enabled
Related reference
event notification destination create (page 179)
event notification create (page 176)
event filter create (page 158)
event filter rule add (page 166)
system node autosupport manifest show (page 1434)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node autosupport trigger show command displays what system events trigger
AutoSupport messages. When a trigger event occurs, the node may send an AutoSupport message to a
predefined destination, and a short note to another destination. The full AutoSupport message contains
detail for troubleshooting. The short message is meant for short pager or SMS text messages.
Use the system node autosupport destinations show command to view available destinations.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-basic ]
Use this parameter to display which subsystem information is included as basic content when the
AutoSupport message is triggered.
| [-troubleshooting ]
Use this parameter to display which subsystem information is included as troubleshooting content when
the AutoSupport message is triggered.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display AutoSupport triggers only on the node you specify.
[-autosupport-message <Autosupport Message>] - EMS Message
Use this parameter to display only AutoSupport triggers with the name you specify. AutoSupport triggers
are EMS messages whose names begin with "callhome.". However, for the purposes of this command,
"callhome." is implied, does not need to be entered, and will not be displayed in command output.
[-to {enabled|disabled}] - Deliver to AutoSupport -to Addresses
Use this parameter with the value "enabled" to display only AutoSupport messages that send full
messages to the "to" address when triggered. Use this parameter with the value "disabled" to display
only AutoSupport messages that do not send full messages.
[-noteto {enabled|disabled}] - (DEPRECATED) Deliver to AutoSupport -noteto Addresses
Note
This parameter has been deprecated and might be removed in a future version of Data ONTAP.
Use this parameter with the value "enabled" to display only AutoSupport messages that send short notes
to the "noteto" address when triggered. Use this parameter with the value "disabled" to display only
AutoSupport messages that do not send short notes.
[-basic-default <subsys1,subsys2,...>, ...] - Default Subsystems Reporting Basic Info
Use this parameter to display only AutoSupport triggers that include in their messages, by default, basic
content from the subsystems you specify.
[-troubleshooting-default <subsys1,subsys2,...>, ...] - Default Subsystems Reporting
Troubleshooting Info
Use this parameter to display only AutoSupport triggers that include in their messages, by default,
troubleshooting content from the subsystems you specify.
[-additional-content <Type of AutoSupport content>, ...] - Additional Content Flag
Use this parameter to display only AutoSupport triggers that have been configured to include additional
basic or troubleshooting content.
[-basic-additional <subsys1,subsys2,...>, ...] - Additional Subsystems Reporting Basic Info
Use this parameter to display only AutoSupport triggers that have been configured to include additional
basic content from the subsystems you specify.
[-troubleshooting-additional <subsys1,subsys2,...>, ...] - Additional Subsystems Reporting
Troubleshooting Info
Use this parameter to display only AutoSupport triggers that have been configured to include additional
troubleshooting content from the subsystems you specify.
[-suppress {true|false}] - Suppress all occurrences of this trigger
Use this parameter with the value "true" to display only AutoSupport messages that have been
suppressed.
● Examples
This example shows the first page of output from the command. Note that "q" was pressed at the prompt to
quit.
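The command line itself is short; a minimal invocation (the paged subsystem listing it produces is not reproduced here) would be:
cluster1::> system node autosupport trigger show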
Related reference
system node autosupport destinations show (page 1428)
system node autosupport manifest show (page 1434)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump delete command deletes a specified core dump. If the command is issued
while the specified core dump is being saved, the command prompts you before stopping the save
operation and deleting the core dump.
● Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node from which core files are to be deleted.
[-type {kernel|ancillary-kernel-segment|application}] - Coredump Type
This specifies the type of core file to be deleted. If the type is kernel, the specified kernel core file will be
deleted. If the type is application, the specified application core file will be deleted.
-corename <text> - Coredump Name
This specifies the core file that is to be deleted.
● Examples
The following example deletes a core dump named core.101268397.2010-05-30.19_37_31.nz from a node
named node0:
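Based on the parameter syntax above, such a deletion would take the following form (node and core names illustrative):
cluster1::> system node coredump delete -node node0 -corename core.101268397.2010-05-30.19_37_31.nz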
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump delete-all command deletes either all unsaved core dumps or all saved
core files on a node. You can specify whether saved core files or unsaved core dumps are deleted by using
the optional -saved parameter. If the command is issued while a core dump is being saved, the command
prompts you before stopping the save operation and deleting the core dump.
● Parameters
-node <nodename> - Node That Owns the Coredump
This specifies the node from which core files or core dumps are to be deleted.
[-type {unsaved-kernel|saved-kernel|kernel|application|all}] - Type of Core to delete
This parameter specifies the type of core file to be deleted. If the type is unsaved-kernel, all unsaved
kernel core dumps will be deleted. If the type is saved-kernel, all saved kernel core files will be deleted.
If the type is kernel, all kernel core files and kernel core dumps will be deleted. If the type is
application, all application core files will be deleted. If the type is all, all core files will be deleted.
The default setting is to delete only unsaved kernel core dumps and core files.
● Examples
The following example deletes all unsaved kernel core dumps on a node named node0:
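Because unsaved kernel core dumps are the default deletion type, no -type parameter is needed; a sketch of the invocation (node name illustrative):
cluster1::> system node coredump delete-all -node node0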
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump save command saves a specified core dump. If the node has already
attempted to save the core dump by the value specified by the -save-attempts parameter, the command
prompts you before continuing. The -save-attempts parameter is set by invoking the command system
node coredump config modify. A saved core dump can be uploaded to a remote site for support
analysis; see the system node coredump upload command man page for more information.
● Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the core dump is located.
-corename <text> - Coredump Name
This specifies the core dump that is to be saved.
● Examples
The following example saves a core dump named core.101268397.2010-05-30.19_37_31.nz on a node
named node0:
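Per the syntax above, the invocation would resemble the following (names illustrative):
cluster1::> system node coredump save -node node0 -corename core.101268397.2010-05-30.19_37_31.nz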
Related reference
system node coredump config modify (page 1450)
system node coredump upload (page 1449)
system node coredump save-all (page 1442)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump save-all saves all unsaved core dumps on a specified node. If the node has
already attempted to save the core dump by the value set by the -save-attempts parameter, the command
prompts you before continuing. The save-attempts parameter is set by invoking the command system node
coredump config modify.
● Parameters
-node <nodename> - Node That Owns the Coredump
This specifies the node on which unsaved core dumps are to be saved.
● Examples
The following example saves all unsaved core dumps on a node named node0:
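Only the node needs to be specified; a representative invocation (node name illustrative):
cluster1::> system node coredump save-all -node node0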
Related reference
system node coredump save (page 1442)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump show command displays basic information about core dumps, such as the
core dump name, time of panic that triggered the core dump and whether the core file is saved. You can
specify optional parameters to display information that matches only those parameters. For example, to
display the list of kernel core files, run the command with -type kernel.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-system ]
If you specify this parameter, the command displays the following information:
• Node name
• Core dump name
• Core dump ID
• Node that panicked and generated the core
• System ID of the node that panicked and generated the core
• Version of the core
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node That Owns the Coredump
If you specify both this parameter and the -corename parameter, the command displays detailed
information about the specified core. If you specify this parameter by itself, the command displays
information about the core files on the specified node.
[-type {kernel|ancillary-kernel-segment|application}] - Coredump Type
This parameter specifies the type of core files to be displayed. If the type is kernel and the system
supports segmented core files, the command displays information about primary kernel core segment
files. If the type is kernel and the system does not support segmented core files, the command displays
information about full core files. If the type is ancillary-kernel-segment, the command displays
information about ancillary kernel core segment files. If the type is application, the command displays
information about application core files. If no type is specified, the command displays information about
core files of type kernel or application.
[-corename <text>] - Coredump Name
If you specify both this parameter and the -node parameter, the command displays detailed
information about the specified core. If you specify this parameter by itself, the command displays
information about the core files that match the specified name.
[-panic-node <text>] - Node That Generated Core
If you specify this parameter with a node name, the command displays information only about the core
files that were generated when the specified node panicked.
[-panic-systemid <integer>] - System ID of Node That Generated Core
If you specify this parameter, the command displays information only about the core files that were
generated when the node with the specified system ID panicked.
[-version <text>] - Data ONTAP Version of Core
If you specify this parameter, the command displays information only about the core files that match the
specified version.
[-panic-time <MM/DD/YYYY HH:MM:SS>] - Time of Panic That Generated Core
If you specify this parameter, the command displays information only about the core files that were
generated by a panic at the specified time. Specify time in the format of MM/DD/YYYY HH:MM:SS [+- HH:MM].
You can use [+- HH:MM] to specify the time range within which all core files triggered by a panic are
displayed. [+- HH:MM] is relative to UTC.
[-panic-string <text>] - Panic String
If you specify this parameter, the command displays information only about the core files that match the
specified panic string.
[-is-saved {true|false}] - Saved Core
If you specify this parameter, the command displays information only about the core dumps that are or
are not saved yet to a core file.
[-is-partial {true|false}] - Partial Core
If you specify this parameter, the command displays information only about the core dumps that are or
are not partially saved.
[-save-attempts <integer>] - Number of Attempts to Save Core
If you specify this parameter, the command displays information only about the core dumps that have
the specified number of successful or failed save attempts.
[-space-needed {<integer>[KB|MB|GB|TB|PB]}] - Space Needed To Save Core
If you specify this parameter, the command displays information only about the core dumps that need
the specified amount of disk space to save into a core file.
[-size <text>] - Size of Core (bytes)
If you specify this parameter, the command displays information only about the saved core files that are
of the specified size.
[-md5-data-chksum <text>] - MD5 Checksum of the Compressed Data of Core
If you specify this parameter, the command displays information only about the saved core files that
have the specified MD5 checksum for compressed data of the core.
[-ancillary-segment-directory <text>] - Directory Holding Ancillary Kernel Core Segments
If you specify this parameter, the command displays information only about the saved core files that
have the specified ancillary segment directory.
● Examples
The following examples display information about the core files:
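With no filtering parameters the command lists all kernel and application core files; restricting to one node is a common variant (node name illustrative):
cluster1::> system node coredump show -node node0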
Related reference
system node coredump status (page 1445)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump status command displays status information about core dumps. The
command output depends on the parameters specified with the command. If a core dump is in the process
of being saved into a core file, the command also displays its name, the total number of blocks that are to
be saved, and the current number of blocks that are already saved.
You can specify additional parameters to display only information that matches those parameters. For
example, to display coredump status information about the local node, run the command with the
parameter -node local.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-disks ]
If you specify this parameter, the command displays the following information:
• Node name
● Examples
The following example displays core dump information about the node named node0:
Node: node0
State: idle
Space Available On Internal Filesystem: 132.1GB
Name of Core Being Saved: -
Total Number of Blocks in Core Being Saved: -
Number of Blocks saved: -
Type of core dump: spray
Number of Unsaved Complete Cores: 0
Number of Unsaved Partial Cores: 1
Space Needed To Save All Unsaved Cores: 4.81GB
Minimum Free Bytes On Internal Filesystem: 250MB
Related reference
system node coredump show (page 1443)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command triggers a Non-maskable Interrupt (NMI) on the specified node via the Service Processor of
that node, causing a dirty shutdown of the node. This operation forces a dump of the kernel core when
halting the node. LIF migration or storage takeover occurs as normal in a dirty shutdown. This command is
different from the -dump parameter of the system node shutdown, system node halt, or system
node reboot command in that this command uses a control flow through the Service Processor of the
remote node, whereas the -dump parameter uses a communication channel between Data ONTAP running
on the nodes. This command is helpful in cases where Data ONTAP on the remote node is hung or does not
respond for some reason. If the panic node reboots back up, then the generated coredump can be seen by
using the system node coredump show command. This command works for a single node only and the
full name of the node must be entered exactly.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node for which you want to trigger a coredump.
● Examples
The following example triggers a NMI via the Service Processor and causes node2 to panic and generate a
coredump. Once node2 reboots back up, the command system node coredump show can be used to
display the generated coredump.
Warning: These advanced commands are potentially dangerous; use them only when
Warning: The Service Processor is about to perform an operation that will cause
a dirty shutdown of node "node2". This operation can
cause data loss. Before using this command, ensure that the cluster
will have enough remaining nodes to stay in quorum. To reboot or halt
a node gracefully, use the "system node reboot" or "system node halt"
command instead. Do you want to continue? {yes|no}: yes
Warning: This operation will reboot the current node. You will lose this login
session. Do you want to continue? {y|n}: y
cluster1::*>
cluster1::> system coredump show
Node:Type Core Name Saved Panic Time
--------- ------------------------------------------- ----- -----------------
node2:kernel
core.1786429481.2013-10-04.11_18_37.nz false 10/4/2013 11:18:37
Partial Core: false
Number of Attempts to Save Core: 0
Space Needed To Save Core: 3.60GB
1 entries were displayed.
cluster1::>
Related reference
system node halt (page 1404)
system node reboot (page 1407)
system node coredump show (page 1443)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump upload command uploads a saved core file to a specified URL. You should
use this command only at the direction of technical support.
● Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the core file is located.
[-type {kernel|ancillary-kernel-segment|application}] - Coredump Type
This specifies the type of core files to be uploaded. If the type is kernel, kernel core files will be
uploaded. If the type is application, application core file will be uploaded.
-corename <text> - Coredump Name
This specifies the name of the core file that is to be uploaded.
[-location <text>] - URL for Coredump Upload Directory
This specifies the URL to which the core file is to be uploaded. If this parameter is not specified, the
command uploads the core file to the location specified by the -upload-location parameter of the
system node coredump config modify command. The following protocols are supported: ftp and
http.
[-casenum <integer>] - Case Number
This specifies the support case number that will be prefixed to the core file name at the destination. The
case number is critical information for quick and automated processing of the received core file.
● Examples
The following example uploads a core file named core.07142005145732.2010-10-05.19_03_41.nz on a
node named node0 to the default location. The support case number is 2001234567.
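Since no -location is given, the upload goes to the configured default; a sketch of the invocation based on the parameters above:
cluster1::> system node coredump upload -node node0 -corename core.07142005145732.2010-10-05.19_03_41.nz -casenum 2001234567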
Related reference
system node coredump config modify (page 1450)
system node autosupport invoke-core-upload (page 1416)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump config modify command modifies the cluster's core dump configuration.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose coredump configuration you want to modify.
[-sparsecore-enabled {true|false}] - Enable Sparse Cores
If you set this parameter to true, the command enables sparse cores. A sparse core omits all memory
buffers that contain only user data.
[-min-free {<integer>[KB|MB|GB|TB|PB]}] - Minimum Free Bytes On Root Filesystem
If you specify this parameter, the command sets the number of bytes that must remain available in the
root file system after saving the core dump. If the minimum number of bytes cannot be guaranteed, core
dumps are not generated. The default setting is 250 MB.
[-coredump-attempts <integer>] - Maximum Number Of Attempts to Dump Core
If you specify this parameter, the command sets the maximum number of times the system will attempt
to generate a core dump when encountering repeated disk failures. The default setting is 2.
[-save-attempts <integer>] - Maximum Number Attempts to Save Core
If you specify this parameter, the command sets the maximum number of times the system will attempt
to save a core dump. The default setting is 2.
[-save-onstartup {true|false}] - Enable Auto Save of Coredumps on Startup
If you set this parameter to true, the system will automatically start saving the core dump after reboot.
[-upload-location <text>] - URL for Coredump Upload Directory
If you specify this parameter, the system uploads the core dumps to the specified URL. The following
protocols are supported: ftp and http.
● Examples
The following example sets the maximum number of core dump attempts to 5 and the maximum number of
save attempts to 5:
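A sketch of that invocation, with an illustrative node name supplied for the required -node parameter:
cluster1::> system node coredump config modify -node node0 -coredump-attempts 5 -save-attempts 5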
Related reference
system node autosupport invoke-core-upload (page 1416)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump config show command displays basic information about a cluster's core
dump configuration, such as whether sparse cores are enabled, minimum number of free bytes on the root
volume file system that need to be available after saving the core files, maximum number of times the
process attempts to generate a core dump when encountering repeated disk failures, maximum number of
times the process attempts to save a core dump, the URL to which core dumps are uploaded, and whether
core dumps are automatically saved when a node restarts.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays the coredump configuration information of the
specified node.
[-sparsecore-enabled {true|false}] - Enable Sparse Cores
If you specify this parameter, the command displays only the coredump information that matches the
specified sparse core setting. A sparse core omits all memory buffers that contain only user data.
[-min-free {<integer>[KB|MB|GB|TB|PB]}] - Minimum Free Bytes On Root Filesystem
If you specify this parameter, the command displays only the core dump information that matches the
specified number of bytes that need to be made available in the root file system after saving the core
dump.
[-upload-location <text>] - URL for Coredump Upload Directory
If you specify this parameter, the command displays only the core dump information that matches the
specified URL where core dumps are uploaded.
● Examples
The following example displays information about the cluster's core dump configuration:
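The command takes no required parameters, so the invocation is simply:
cluster1::> system node coredump config show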
Related reference
system node autosupport invoke-core-upload (page 1416)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node coredump external-device save command saves a specified core dump to an
external USB device plugged into the port specified by the -device parameter.
● Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the core dump is located.
-device {usb0|usb1} - Device
This specifies which external USB device connected to the node to save the core dump, for example:
usb0. Currently, only usb0 is supported. usb0 refers to the first external USB device connected to the
node (regardless of port).
-corename <text> - Coredump Name
This specifies the core dump that is to be saved.
● Examples
The following example saves a core dump named core.101268397.2010-05-30.19_37_31.nz on node1 to
external USB device usb0:
cluster1::> system node coredump external-device save -node node1 -device usb0 -corename core.101268397.2010-05-30.19_37_31.nz
Related reference
system node coredump show (page 1443)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node coredump external-device show command displays basic information about files
on an external USB device, such as the filename and size.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
-node {<nodename>|local} - Node That Owns the Coredump
This parameter selects the node that has files that are to be displayed on the external USB device.
[-device {usb0|usb1}] - Device
This parameter specifies the name of the external USB device, for example: usb0. Currently, only usb0 is
supported. usb0 refers to the first external USB device connected to the node (regardless of port).
[-corename <text>] - Coredump Name
This parameter specifies the core dump file for which the information is displayed.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Size of Core
If specified, the command displays information only about the core files that are of the specified size.
● Examples
The following example displays information about core files stored on external USB devices:
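Because -node is a required parameter for this command, the invocation would resemble (node name illustrative):
cluster1::> system node coredump external-device show -node node1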
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump reports delete command deletes the specified application core report.
● Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node from which reports are to be deleted.
-reportname <text> - Report Name
This specifies the report that is to be deleted.
● Examples
The following example shows how a report named notifyd.1894.80335005.2011-03-25.09_59_43.ucore.report is deleted from a node named node0:
cluster1::> system node coredump reports delete -node node0 -reportname notifyd.1894.80335005.2011-03-25.09_59_43.ucore.report
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump reports show command displays basic information about application core
reports, such as the report name and time of the panic that triggered the application core dump. You can
specify optional parameters to display information that matches only those parameters. For example, to
display the list of reports in the local node, run the command with -node local.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node That Owns the Coredump
Selects information about all the reports on the specified node. If you specify both this parameter and
the -reportname parameter, the command displays detailed information about the specified report.
[-reportname <text>] - Report Name
Selects information about the reports that match the specified name. If you specify both this parameter
and the -node parameter, the command displays detailed information about the specified report.
[-panic-node <text>] - Node That Generated Core
Selects information about the reports that were generated by the specified node.
[-panic-systemid <integer>] - System ID of Node That Generated Core
Selects information about the reports that were generated by the node with the specified system ID.
[-version <text>] - Data ONTAP Version of Core
Selects information about the reports that match the specified version.
[-panic-time <MM/DD/YYYY HH:MM:SS>] - Time of Panic That Generated Core
Selects information about the reports that were generated by a panic at the specified time. Specify time
in the format of MM/DD/YYYY HH:MM:SS [+- HH:MM]. You can use [+- HH:MM] to specify the time range
within which all core files triggered by a panic are displayed. [+- HH:MM] is relative to UTC.
[-panic-string <text>] - Panic String
Selects information about the reports that match the specified panic string.
● Examples
The following example displays information about the reports:
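All parameters are optional filters, so the simplest invocation lists every report:
cluster1::> system node coredump reports show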
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node coredump reports upload command uploads an application report to a specified
URL. You should use this command only at the direction of technical support.
● Parameters
-node {<nodename>|local} - Node That Owns the Coredump
This specifies the node on which the report is located.
-reportname <text> - Report Name
This specifies the name of the report that is to be uploaded.
[-location <text>] - URL for Coredump Upload Directory
This specifies the URL to which the report is to be uploaded. The following protocols are supported: ftp
and http.
[-casenum <integer>] - Case Number
This specifies the support case number that is to be prefixed to the core file name at the destination. The
case number is critical information for quick and automated processing of the received core file.
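Conceptually, the prefixing works like this hypothetical helper; the exact separator used at the destination is an assumption for illustration:

```python
def destination_name(reportname, casenum):
    # Prefix the support case number to the uploaded file name; the '.'
    # separator is an assumption, not documented behavior.
    return "{}.{}".format(casenum, reportname)
```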
● Examples
The following example shows how a report named notifyd.1894.80335005.2011-03-25.09_59_43.ucore.bz2
is uploaded on a node named node0 to the default location. The support case number is 2001234567.
cluster1::> system node coredump reports upload -node node0 -reportname notifyd.1894.80335005.2011-03-25.09_59_43.ucore.bz2 -casenum 2001234567
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command deletes a core segment.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which to delete the core segments.
-segment <text> - Core Segment
This specifies the core segment to delete. The pathname is relative to the coredump directory. If a
directory is specified, all core segment files within it are deleted. If the directory is empty, it is deleted.
[-owner-node <text>] - Node That Owns the Core Segment File
This specifies the node that owns the core segment. Use this parameter only in takeover mode to delete
a partner's coredump segment.
● Examples
This deletes all core segments in the directory, core.151708240.2012-01-11.05_56_52.
cluster1::> system node coredump segment delete -node node1 -segment core.151708240.2012-01-11.05_56_52
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command deletes all the core segments on a node.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which to delete the core segments.
● Examples
This deletes all the core segments for node1.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the following information about core segments:
• name of the core segment directory
• time of the panic that generated the core segment
• total number of core segment files
• core segment file name
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Displays the following details:
• Core segment file name
• Node that owns the core segment file
• System ID of the node that generated the core
• MD5 checksum of the compressed data of the core segment file
• Name of the core segment
• Total number of core segments for the core file
• Timestamp of the panic that triggered the core segment
[-node {<nodename>|local}] - Node
Selects information about the core segments on the specified node.
[-segment <text>] - Core Segment
Selects information about the specified core segment. If segment is a directory, the command displays
the information for the first core segment file. If segment is a file, the command displays the file
information.
[-owner-node <text>] - Node That Owns the Core Segment File
Selects information about the core segments owned by the specified node. This parameter should only
be used in takeover mode to display information about the partner's core segments.
[-panic-system-id <integer>] - System ID of Node That Generated Core
Selects information about the core segments that were generated when the node with the specified
system ID panicked.
[-md5-data-chksum <text>] - Md5 Checksum of the Compressed Data of the Core Segment
Selects information about the core segments whose data segment's MD5 checksum matches the
specified checksum.
[-segment-name <text>] - Name of the Core Segment
Selects information about the core segments with the specified name.
[-total-segment-count <integer>] - Number of Segments Generated
Selects information about the core segments that have the specified total segment count.
[-panic-time <MM/DD/YYYY HH:MM:SS>] - Time of Panic That Generated Core
Selects information about the core segments that were generated by a panic at the specified time.
[-size <text>] - Size of Core Segment (bytes)
Selects information about the core segments that are of the specified size.
[-panic-string <text>] - Panic String of Panic That Generated Core
Selects information about the core segments that match the specified panic string.
● Examples
The example below displays the core segments on node1.
The example below displays detailed information about a specific core segment file on node1.
cluster1::> system node coredump segment show -node node1 -segment core.118049106.2012-01-05.17_11_11.ontap.nz -instance
Node: node1
Core Segment: core.118049106.2012-01-05.17_11_11.ontap.nz
Node That Owns the Core Segment File: node1
System ID of Node That Generated Core: 118049106
Md5 Checksum of the Compressed Data of the Core Segment: 1a936d805dcd4fd5f1180fa6464fdee4
Name of the Core Segment: ontap
Number of Segments Generated: 2
Time of Panic That Generated Core: 1/5/2012 12:11:11
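The Md5 Checksum field shown above is a digest of the segment's compressed payload. The idea can be illustrated in Python, using zlib purely as a stand-in for whatever compression the segment file actually uses:

```python
import hashlib
import zlib

# zlib here is only a stand-in compressor; the point is that the MD5
# digest covers the compressed bytes of the segment, not the raw core.
compressed = zlib.compress(b"sample core segment payload")
checksum = hashlib.md5(compressed).hexdigest()
```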
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node environment sensors show command displays the following information:
• Node name
• Sensor name
• Sensor state
• Sensor value
• Sensor units
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the sensors on the specified node. If this parameter is specified with the -name parameter, the command displays information only about the specified sensor.
[-name <text>] - Sensor Name
Selects information about the sensors that have the specified name. If this parameter is specified with the -node parameter, the command displays information only about the specified sensor.
[-fru <text>] - FRU
Selects information about the sensors associated with the specified Field Replaceable Unit (FRU).
[-type {fan|thermal|voltage|current|battery-life|discrete|fru|nvmem|counter|minutes|percent|agent|
unknown}] - Sensor Type
Selects information about the sensors that have the specified sensor type. Possible values vary among
platforms but may include fan, temperature, thermal and voltage.
[-units <text>] - Value Units
Selects information about the sensors that have readings displayed in the specified units of measure.
Possible values vary among platforms but may include RPM, C and mV.
[-state {normal|warn-low|warn-high|crit-low|crit-high|disabled|uninitialized|init-failed|not-
available|invalid|retry|bad|not-present|failed|ignored|fault|unknown}] - Sensor State
Selects information about the sensors that have the specified state. Possible values vary among
platforms but may include normal, warn_lo, warn_hi, crit_lo, crit_hi and failed.
[-discrete-state {normal|warn-low|warn-high|crit-low|crit-high|disabled|uninitialized|init-failed|
not-available|invalid|retry|bad|not-present|failed|ignored|fault|unknown}] - Discrete Sensor State
Selects information about the discrete-valued sensors that are in the specified state. A discrete-valued
sensor has a set of possible discrete values rather than a range of possible values. For example, a
presence sensor which has the discrete values PRESENT and NOT_PRESENT is a discrete-valued sensor.
Possible values vary among platforms but may include normal and failed.
[-value <integer>] - Last Sensor Value
Selects information about the sensors that have the specified sensor value.
[-discrete-value <text>] - Discrete Sensor Value
Selects information about the discrete-valued sensors that have the specified discrete value. Possible
values vary among sensors but may include PRESENT, NOT_PRESENT, ON, OFF, OK and FAULT.
[-crit-low <integer>] - Critical Low Threshold
Selects information about the sensors that have the specified critically low threshold.
[-warn-low <integer>] - Warning Low Threshold
Selects information about the sensors that have the specified warning-low threshold.
[-warn-hi <integer>] - Warning Hi Threshold
Selects information about the sensors that have the specified warning-high threshold.
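How a reading relates to the thresholds above can be sketched with a hypothetical classifier; only the three documented threshold parameters (-crit-low, -warn-low, -warn-hi) are used, and the helper name is illustrative:

```python
def classify(value, crit_low, warn_low, warn_hi):
    # Hypothetical helper: map a sensor reading onto states named after
    # the documented thresholds. Real platforms may differ.
    if value < crit_low:
        return "crit-low"
    if value < warn_low:
        return "warn-low"
    if value > warn_hi:
        return "warn-high"
    return "normal"
```

For example, a fan reading of 5000 RPM against thresholds of 1000/2000/9000 would classify as normal.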
● Examples
The following example displays information about all sensors on a cluster named cluster1:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node external-cache modify command can be used to modify the following attributes of
external-cache for a node:
• is-enabled
• is-rewarm-enabled
• is-mbuf-inserts-enabled
• pcs-size
• is-hya-enabled
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which the modifications need to be made.
[-is-enabled {true|false}] - Is Enabled?
Enables external-cache module (Flash Cache Family) on the storage system. Valid values for this option
are true and false. If external-cache hardware is present, then this option will enable external-cache
functionality in WAFL. If no hardware is present, this option will enable external-cache pcs (Predictive
Cache Statistics). The default value for this option is false.
[-is-rewarm-enabled {true|false}] - Is Rewarm On?
Specifies whether an external-cache module should attempt to preserve data across reboots. Valid
values for this option are true and false. This option applies only to cache hardware with persistent
media. It does not apply to Predictive Cache Statistics (PCS). Enabling this option will marginally
increase the duration of system boot and shutdown, but it will reduce or eliminate the time required for
cache warming. The default value for this option is determined by the cache hardware type. The option
is disabled by default.
[-is-mbuf-inserts-enabled {true|false}] - Is Mbuf Inserts On?
Specifies whether the external-cache module allows insert of mbuf data as part of a network write. In
rare cases, inserting mbuf data may cause excessive CPU usage. We provide this workaround to disable
the behavior, if necessary. Do not change the value of this option unless directed to do so by technical
support. The data from the mbuf network writes can still be stored in the external cache, but only after a
subsequent disk read of that data.
[-pcs-size <integer>] - PCS Size
Controls the size of the cache emulated by external-cache PCS. Valid values for this option are integers
between 16 and 16383. This option is only used when PCS is enabled. The default value for this option is
chosen automatically based on the amount of memory in the controller, and the upper limit is further
restricted on controllers with smaller amounts of memory.
[-is-hya-enabled {true|false}] - Is HyA Caching Enabled?
Specifies whether the external-cache module allows caching of blocks targeted for hybrid aggregates.
This option is set to true by default when the external-cache is enabled.
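The documented -pcs-size constraint (integers 16 through 16383) can be expressed as a small validator; validate_pcs_size is a hypothetical helper, not a CLI facility:

```python
def validate_pcs_size(size):
    # Enforce the documented -pcs-size range: integers 16 through 16383
    # inclusive. The actual default is chosen automatically from the
    # controller's memory, which this sketch does not model.
    if not isinstance(size, int) or not 16 <= size <= 16383:
        raise ValueError("pcs-size must be an integer between 16 and 16383")
    return size
```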
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node external-cache show command displays external-cache information for each of the
nodes available.
● Parameters
{ [-fields <fieldname>, ...]
Valid values for this option are {node|is-enabled|is-rewarm-enabled|is-mbuf-inserts-
enabled|pcs-size|is-hya-enabled}. Specifying the value will display all entries that correspond to
it.
| [-instance ]}
This option does not need an input value. Specifying this option will display the information about all
the entries.
[-node {<nodename>|local}] - Node
Specify this parameter to display external-cache parameters that match the specified node.
[-is-enabled {true|false}] - Is Enabled?
Valid values for this option are true and false. Specifying the value will display all entries that
correspond to it.
[-is-rewarm-enabled {true|false}] - Is Rewarm On?
Valid values for this option are true and false. Specifying the value will display all entries that
correspond to it.
[-is-mbuf-inserts-enabled {true|false}] - Is Mbuf Inserts On?
Valid values for this option are true and false. Specifying the value will display all entries that
correspond to it.
[-pcs-size <integer>] - PCS Size
Valid values for this option are integers between 16 and 16383. Specifying the value will display all
entries that correspond to it.
[-is-hya-enabled {true|false}] - Is HyA Caching Enabled?
Valid values for this option are true and false. Specifying the value will display all entries that
correspond to it.
● Examples
Node: node1
Is Enabled: false
Is rewarm on: false
Is Mbuf inserts on: true
PCS size: 256
Is hya caching enabled: true
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node firmware download command downloads new system firmware to the boot device. A
reboot followed by the 'update_flash' command at the firmware prompt is required for the firmware to take
effect.
● Parameters
-node {<nodename>|local} - Node
This specifies the node or nodes on which the firmware is to be updated.
-package <text> - Package URL
This parameter specifies the URL that provides the location of the package to be fetched. Standard URL
schemes, including HTTP, HTTPS, FTP and FILE, are accepted. The FILE URL scheme can be used to specify
location of the package to be fetched from an external device connected to the storage controller.
Currently, only USB mass storage devices are supported. The USB device is specified as file://usb0/
<filename>. Typically, the file name is image.tgz. The package must be present in the root directory of
the USB mass storage device. The HTTPS URL scheme requires that you install the HTTPS server
certificate on the system by using the command "security certificate install -type server-ca".
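The URL schemes accepted by -package can be checked with a sketch like the following; check_package_url is a hypothetical validator, and for FILE URLs the device name (for example usb0) sits in the host position of the URL:

```python
from urllib.parse import urlparse

# Schemes accepted by -package, per the description above.
SUPPORTED = {"http", "https", "ftp", "file"}

def check_package_url(url):
    # Hypothetical validator: accept only the documented schemes.
    parsed = urlparse(url)
    if parsed.scheme.lower() not in SUPPORTED:
        raise ValueError("unsupported scheme: " + parsed.scheme)
    return parsed

usb = check_package_url("file://usb0/image.tgz")
```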
● Examples
The following example downloads firmware to node-01 from a web server:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node hardware nvram-encryption modify command configures the encryption feature
for the NVRAM or NVMEM data that is destaged to non-volatile flash storage.
Note
This feature might be restricted in some countries due to local regulations concerning encrypted data.
● Parameters
-node {<nodename>|local} - Node
Specifies the node containing the NVRAM or NVMEM subsystem.
● Examples
The following commands enable or disable the NVRAM encryption:
cluster1::> system node hardware nvram-encryption modify -node node1 -is-enabled false
cluster1::> system node hardware nvram-encryption modify -node node1 -is-enabled true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node hardware nvram-encryption show command displays the configuration of the
encryption feature for the NVRAM or NVMEM data that is destaged to non-volatile flash storage.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information about the NVRAM encryption
configuration on the specified node.
[-nvram-device-name <text>] - NVRAM Device Name
If this parameter is specified, the command displays information about the NVRAM encryption
configuration for the specified NVRAM device. Current platforms only support one device - NVRAM.0.
[-is-supported {true|false}] - Is Encryption Support
If this parameter is specified, the command displays information about the NVRAM encryption
configuration for platforms that support it.
[-is-enabled {true|false}] - Is Encryption Enabled
If this parameter is specified, the command displays information about the NVRAM encryption
configuration for the NVRAM or NVMEM devices where the device has the specified enabled value.
[-key-id <text>] - Key ID of the Encryption Key
If this parameter is specified, the command displays information about the NVRAM encryption
configuration with the specified encryption Key ID used to encrypt the NVRAM or NVMEM data on flash
storage.
● Examples
The following example displays information about the NVRAM encryption configuration on all nodes of the
cluster:
Enabled: true
Key-ID: 0000000000000000000000000000000000000000000000000000000000000000
Node: node2
NVRAM-Device: NVRAM.0
Supported: true
Enabled: true
Key-ID: 0000000000000000000000000000000000000000000000000000000000000000
2 entries are displayed.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the following information about tape drives:
• Node to which the tape drive is attached
• Device ID of the tape drive
• Description of the tape drive
• NDMP path of the tape drive
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays detailed information about tape drives on the specified node.
[-device-id <text>] - Device ID
Selects information about the tape drive that has the specified device ID.
[-description <text>] - Description
Selects information about the tape drive or drives that have the specified description.
[-wwn <text>] - World Wide Name
Selects information about the tape drive that has the specified worldwide name.
[-serial-number <text>] - Serial Number
Selects information about the tape drive that has the specified serial number.
[-ndmp-path <text>, ...] - NDMP Path
Selects information about the tape drive or drives that have the specified NDMP path.
● Examples
The following example displays information about all tape drives in the cluster:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the following information about tape libraries:
• Node to which the tape library is attached
• Device ID of the tape library
• Description of the tape library
• NDMP path of the tape library
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Displays detailed information about tape libraries on the specified node.
[-device-id <text>] - Device ID
Selects information about the tape library that has the specified device ID.
[-description <text>] - Description
Selects information about the tape library or libraries that have the specified description.
[-wwn <text>] - World Wide Name
Selects information about the tape library that has the specified worldwide name.
[-serial-number <text>] - Serial Number
Selects information about the tape library that has the specified serial number.
[-ndmp-path <text>] - NDMP Path
Selects information about the tape library or libraries that have the specified NDMP path.
● Examples
The following example displays information about all tape libraries attached to the cluster:
Commands used for managing Fibre Channel and converged networking adapters.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node hardware unified-connect modify command changes the adapter configuration.
Any changes to the adapter mode or type will require a reboot for the changes to take effect. The adapter
must also be offline before you can make any changes.
The adapter argument is in the form Xy where X is an integer and y is a letter. For example: 4a
For a target adapter, use the network fcp adapter modify command to bring the adapter offline.
For an initiator adapter, use the system node run local storage disable adapter command to
take the adapter offline.
The -mode parameter refers to the mode of the adapter and can be either fc or cna.
The -type parameter refers to the FC-4 type of the adapter and can be initiator, target, or fcvi.
The -force parameter suppresses confirmation prompts.
Note
The adapter type fcvi is supported only on platforms with FCVI adapters.
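The Xy adapter form described above can be checked with a simple pattern; lower-case letters are assumed here to match the examples such as 4a and 0d:

```python
import re

# Adapter names have the form Xy: an integer X followed by a single
# letter y, e.g. "4a" or "0d". Lower case is an assumption.
ADAPTER_RE = re.compile(r"^\d+[a-z]$")

def is_valid_adapter(name):
    return bool(ADAPTER_RE.match(name))
```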
● Parameters
-node {<nodename>|local} - Node
Specifies the node of the adapter.
-adapter <text> - Adapter
Specifies the adapter.
● Examples
cluster1::> system node hardware unified-connect modify -node node1 -adapter 0d -mode cna
Related reference
network fcp adapter modify (page 400)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command manages Fibre Channel and converged networking adapters used by the storage subsystem.
Use the command to show the current mode and FC-4 type of adapters or the capabilities of adapters.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-capability ]
If this parameter is specified, the command displays the capabilities of the adapters.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information about Fibre Channel and converged
networking adapters on the specified node.
[-adapter <text>] - Adapter
If this parameter is specified, the command displays information about the specified adapter.
[-current-mode {fc|cna}] - Current Mode
If this parameter is specified, the command displays adapters configured to the specified mode.
[-current-type {initiator|target|fcvi}] - Current Type
If this parameter is specified, the command displays adapters configured to the specified FC-4 type.
[-pending-mode {fc|cna}] - Pending Mode
If this parameter is specified, the command displays adapters configured to the specified mode on the
next reboot.
[-pending-type {initiator|target|fcvi}] - Pending Type
If this parameter is specified, the command displays adapters configured to the specified FC-4 type on the
next reboot.
[-status-admin <text>] - Administrative Status
If this parameter is specified, the command displays adapters with the specified status.
● Examples
The following example displays information about all Fibre Channel and converged networking adapters in
the cluster:
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node image abort-operation command aborts software installation ("update") or
download ("get") operation on the specified node.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which to abort the operation.
● Examples
The following example aborts the software installation operation on a node named node1.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command fetches a file from the specified URL and stores it in the /mroot/etc/software directory.
● Parameters
[-node {<nodename>|local}] - Node
This parameter specifies the node that fetches and stores the package.
-package <text> - Package URL
This parameter specifies the URL that provides the location of the package to be fetched. Standard URL
schemes, including HTTP, HTTPS, FTP and FILE, are accepted. The FILE URL scheme can be used to specify
the location of the package to be fetched from an external device connected to the storage controller.
Currently, only USB mass storage devices are supported. The USB device is specified as file://usb0/
<filename>. Typically, the file name is image.tgz. The package must be present in the root directory of
the USB mass storage device. The HTTPS URL scheme requires that you install the HTTPS server
certificate on the system by using the command "security certificate install -type server-ca".
[-replace-package [true]] - Replace the Local File
Specifies whether an existing package is deleted and replaced with a new package. If you enter this
command without using this parameter, its effective value is false and an existing package is not
replaced with the new one. If you enter this parameter without a value, it is set to true and an existing
package is replaced with the new one.
[-rename-package <text>] - Rename the File
Use this parameter to enter a package name that is different than the file name in the URL.
[-background [true]] - Run in the background
This parameter allows the operation to run in the background. The progress of the operation can be
checked with the command system image show-update-progress. If this command is entered without
using this parameter, its effective value is false and the operation runs in the foreground. If this
parameter is used without a value, it is set to true.
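The omitted/bare/explicit semantics shared by -replace-package and -background can be mimicked with argparse; the double-dash option names are an artifact of the sketch, since the CLI itself uses single-dash parameters:

```python
import argparse

def to_bool(value):
    return str(value).lower() == "true"

parser = argparse.ArgumentParser()
# Omitted -> effective value false; bare flag -> true; an explicit
# value, when given, wins.
parser.add_argument("--replace-package", nargs="?", const="true", default="false")
parser.add_argument("--background", nargs="?", const="true", default="false")

omitted = parser.parse_args([])
bare = parser.parse_args(["--replace-package", "--background"])
```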
● Examples
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node image modify command sets the default software image on a specified node. The
default software image is the image that is run when the node is started. A node holds two software images;
when you set one as the default image, the other image is automatically unset as the default. Conversely, if
you unset a software image as the default, the other image is automatically set as the default.
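The mutual-exclusion rule described above can be sketched in Python, with a dict standing in for the node's two-image store; set_default_image is an illustrative helper, not the actual mechanism:

```python
def set_default_image(images, image):
    # Marking one image as the default automatically unsets the other,
    # mirroring the documented behavior of system node image modify.
    if image not in images:
        raise KeyError(image)
    for name in images:
        images[name] = (name == image)
    return images

state = set_default_image({"image1": True, "image2": False}, "image2")
```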
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which the software image is located.
-image {image1|image2|remote} - Image Name
This specifies the software image that is to be set or unset as the default.
[-isdefault {true|false}] - Is Default Image
This optionally specifies whether the specified image is the default.
● Examples
The following example sets the software image named image2 as the default image on a node named
node0.
node::> system node image modify -node node0 -image image2 -isdefault true
Default Image Changed.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node image show command displays information about software images. By default, the
command displays the following information:
• Node name
• Image name
• Whether the image is the default image
• Whether the image is the current image
• Software version
• Installation date
To display detailed information about a specific software image, run the command with the -node and -
image parameters. The detailed view adds information about the kernel image path, and the root file
system image path.
You can specify additional parameters to select specific information. For example, to display information
only about software images that are currently running, run the command with the -iscurrent true
parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the software images on the specified node. If this parameter and the -image
parameter are both used, the command displays detailed information about the specified software
image.
[-image {image1|image2|remote}] - Image Name
Selects information about the software images that match the specified name. If this parameter and the
-node parameter are specified, the command displays detailed information about the specified software
image.
[-isdefault {true|false}] - Is Default Image
Selects information about the software images with the specified default setting.
[-iscurrent {true|false}] - Is Current Image
Selects information about the software images that have the specified currency value.
[-kernel-path <text>] - Kernel Image Path
Selects information about the software images that have the specified kernel image path.
[-rootfs-path <text>] - Root Filesystem Image Path
Selects information about the software images that have the specified root file system image path.
[-version <text>] - Software Version
Selects information about the software images that have the specified software version.
[-installdate <MM/DD/YYYY HH:MM:SS>] - Install Date
Selects information about the software images that have the specified installation date. Specify the date
in the format MM/DD/YYYY HH:MM:SS [+- HH:MM].
● Examples
The following example displays information about the software images on a node named node1:
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node image show-update-progress command displays the progress of a software-image
update initiated by using the system node image update command. The command displays progress
until the update completes; you can also interrupt it by pressing Ctrl-C.
● Parameters
-node {<nodename>|local} - Node
This optionally specifies the name of a node whose image-update progress is to be displayed.
[-follow [true]] - Follow the Progress in the Foreground
Do not use background processing for this command. If you specify this parameter without a value, it is set to true.
● Examples
The following example displays image-update progress:
Related reference
system node image update (page 1474)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node image update command downloads the software image from a specified location and
updates the alternate software image (that is, the image that is not currently running on the node). By
default, validation of the software image is not performed. Use the "-validate-only" parameter to validate
the software image first, before performing the update on the cluster nodes.
At the advanced privilege level, you can specify whether to disable version-compatibility checking.
● Parameters
-node {<nodename>|local} - Node
This specifies the node on which the software image is located.
-package <text> - Package URL
This specifies the location from which the software image is to be downloaded. The location can be
specified in any of the following ways:
• As an HTTP URL in the form http://host_name[:port]/path_to_file. For instance, http://example.com/
downloads/image.tgz. The management utility prompts you for a user name and password before
beginning the download.
Note
If you use HTTP to transfer software images, be aware that the management utility does not
check whether the Web server is password protected; if it is not, press Enter at the prompt for
user name and password.
Note
The HTTPS URL scheme requires that you install the HTTPS server certificate on the system by
using the command "security certificate install -type server-ca".
• As a path to a package in a mounted file system in the form file://localhost/path_to_file. For example,
file://localhost/mroot/etc/software/image.tgz.
• The FILE URL scheme can be used to specify the location of the package to be fetched from an external
device connected to the storage controller. Currently, only USB mass storage devices are supported.
The USB device is specified as file://usb0/<filename>. Typically, the file name is image.tgz. The
package must be present in the root directory of the USB mass storage device.
[-replace {image1|image2}] - Image to Replace
This optionally specifies the image that is to be replaced when the node is booted from the network.
● Examples
The following example updates the software image on a node named node0 from a software package
located at ftp://ftp.example.com/downloads/image.tgz:
node::> system node image update -node node0 -package ftp://ftp.example.com/downloads/image.tgz -setdefault true
Related reference
system node image get (page 1471)
system node image package show (page 1476)
system node image show-update-progress (page 1473)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The delete command deletes the specified software package.
● Parameters
-node {<nodename>|local} - Node
The package will be deleted from the repository belonging to the node specified with this parameter.
The local node is used as the default if this parameter is omitted.
-package <text> - Package File Name
This parameter specifies the package to be deleted.
● Examples
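A representative invocation of this command, using an illustrative node name and package file name, might look as follows:
cluster1::*> system node image package delete -node node1 -package image.tgz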
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The package show command displays details of the software packages residing on the storage controller.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects which node's packages are displayed. The local node is the default if this parameter is omitted.
[-package <text>] - Package File Name
This parameter specifies which package's information will be displayed.
● Examples
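A representative invocation, listing the packages on an illustrative node named node1, might look as follows:
cluster1::*> system node image package show -node node1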
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The delete command deletes the specified file on the external device.
● Parameters
-node {<nodename>|local} - Node
The file is deleted from the external device of the node specified with this parameter. If this parameter is
omitted, then the local node is used as the default node.
-package <text> - File Name
This parameter specifies the file to be deleted.
-device {usb0|usb1} - Device
This parameter specifies the name of the external device. Currently, only usb0 is supported. usb0 refers
to the first external USB device connected to the node (regardless of port).
● Examples
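A representative invocation, deleting an illustrative file named image.tgz from the USB device attached to node1, might look as follows:
cluster1::*> system node image package external-device delete -node node1 -package image.tgz -device usb0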
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The external-device show command displays files residing on the external storage device.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter selects the node that has files that are to be displayed on the external storage device. If
this parameter is omitted, then the local node is the default node.
[-package <text>] - File Name
This parameter specifies the file for which the information is displayed.
[-device {usb0|usb1}] - Device
This parameter specifies the name of the external device. Currently, only usb0 is supported. usb0 refers
to the first external USB device connected to the node (regardless of port).
● Examples
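A representative invocation, listing the files on the USB device attached to an illustrative node named node1, might look as follows:
cluster1::*> system node image package external-device show -node node1 -device usb0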
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node internal-switch show command is used to display the internal switch state
information and the link status.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to specify the node the switch resides on.
[-switch-id <integer>] - Switch
Use this parameter to specify the switch id. For example, 1.
[-port-id <integer>] - Port
Use this parameter to specify the port id. For example, 0.
[-port-name <text>] - Port Name
Use this parameter to specify the port name. For example, e0M.
[-auto-admin <Auto-negotiation setting>] - Auto-Negotiation Administrative
Use this parameter to show the auto-negotiation administrative setting. 'enable' or 'disable'.
[-auto-op <Auto-negotiation setting>] - Auto-Negotiation Operational
Use this parameter to show the auto-negotiation operational setting. 'unknown', 'complete',
'incomplete', 'failed' or 'disabled'.
[-duplex-admin <Duplex>] - Duplex Mode Administrative
Use this parameter to show the duplex mode administrative setting. 'half' or 'full'.
[-duplex-op <Duplex>] - Duplex Mode Operational
Use this parameter to show the duplex mode operational setting. 'half' or 'full'.
[-speed-admin <Link speed>] - Speed Administrative
Use this parameter to show the speed administrative setting. '10', '100' or '1000'.
[-speed-op <Link speed>] - Speed Operational
Use this parameter to show the speed operational setting. '10', '100' or '1000'.
[-link <Link Status>] - Link State
Use this parameter to show the link state, 'up' or 'down'.
[-up-admin <Link Status>] - Up Administrative
Use this parameter to show the up administrative setting, 'up' or 'down'.
[-fc-op <Flow control>] - Flow Control Operational
Use this parameter to show the flow control operational setting, 'full', 'send', 'receive' or 'none'.
● Examples
The example shows the attributes of the internal switch 0 on the node Node1.
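Such an invocation might look as follows (the node and switch names are illustrative):
cluster1::> system node internal-switch show -node Node1 -switch-id 0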
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node internal-switch dump stat command is used to display the counter information of
the internal switch ports.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Use this parameter to specify the node the switch resides on.
[-switch-id <integer>] - Switch
Use this parameter to specify the switch id. For example, 1.
[-port-id <integer>] - Port
Use this parameter to specify the port id. For example, 0.
[-stat-id <text>] - Counter Name
Use this parameter to specify the counter name.
[-valued <integer>] - Counter Value
Use this parameter to show the value of the specified counter.
[-port-name <text>] - Port Name
Use this parameter to specify the port name. For example, e0M.
● Examples
The following example shows partial counter information for internal switch 0 on Node1.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node nfs usage show command displays the NFS usage information in the local node. The
display output shows the number of RPC calls received per protocol on the local node. Usage is collected
whenever there is any NFS traffic. These values are not persistent and will reset when the node reboots. This
command is used in conjunction with the system node nfs usage reset command in the NFS licensing
callback.
● Examples
The following example displays the NFS usage information that does not have any NFS usage.
The following example displays the NFS usage information with v3 usage.
The following example displays the NFS usage information with v4 usage.
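A representative invocation might look as follows (the output varies with the NFS traffic the node has received):
cluster1::*> system node nfs usage show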
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
This command switches on the power of the main controller of the specified node. This command works for a
single node only, and the full name of the node must be entered exactly.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose power you want to switch on.
● Examples
The following example switches on the power of node2.
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by Fujitsu personnel.
Do you want to continue? {y|n}: y
cluster1::*>
cluster1::*> system node power on -node node2
cluster1::*>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the power status of the main controller in each node across the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This optional parameter specifies the name of a node for which information is to be displayed. If this
parameter is not specified, the command displays information about all nodes in the cluster.
[-status {on|off}] - Current Power Status
If the -status parameter is specified, the command only lists information about the node with the power
status value you enter.
● Examples
The following example displays the power status of all the nodes in cluster1.
Node Status
---------------- -----------
node1 on
node2 on
2 entries were displayed.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node root-mount create command produces a root-mount from one node in the cluster to
another node's root volume. The root-mount is marked for immediate creation by a background task. Use
the system node root-mount show command to view the current status of root-mount or verify task
completion.
● Parameters
-node <nodename> - Owner of the Root-mount
The node name where the root-mount will be created.
-root-node <nodename> - Root-mount Destination Node
The node name that the root-mount will access.
● Examples
The following example shows the creation of a root-mount from cluster1::nodeA to cluster1::nodeB and the
verification of the successful completion.
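A representative command sequence for creating a root-mount and then checking its state, using illustrative node names, might look as follows:
cluster1::> system node root-mount create -node nodeA -root-node nodeB
cluster1::> system node root-mount show -node nodeA -root-node nodeB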
Related reference
system node root-mount show (page 1483)
system node root-mount delete (page 1483)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node root-mount delete command removes a root-mount from one node in the cluster to
another node's root volume. The root-mount is marked for immediate deletion by a background task. Use
the system node root-mount show command to view the current status of root-mount or verify task
completion.
● Parameters
-node <nodename> - Owner of the Root-mount
The node which has the mount.
-root-node <nodename> - Root-mount Destination Node
The node accessed by the mount.
● Examples
This example shows the deletion of a root-mount from cluster1::nodeA to cluster1::nodeB and the
verification of the command's successful completion.
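A representative command sequence for deleting a root-mount and then verifying that it is gone, using illustrative node names, might look as follows:
cluster1::> system node root-mount delete -node nodeA -root-node nodeB
cluster1::> system node root-mount show -node nodeA -root-node nodeB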
Related reference
system node root-mount show (page 1483)
system node root-mount create (page 1482)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node root-mount show command displays the status of current root-mounts from any node
to another node's root volume. These root-mounts are used by cluster services to access data on other nodes
in the cluster. These root-mounts are not pre-created, but are created as they are needed. They can also be
manually created or deleted.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Owner of the Root-mount
Selects information about root-mounts that exist on the specified node.
[-root-node <nodename>] - Root-mount Destination Node
Selects information about root-mounts that connect to the specified node.
[-create-time <MM/DD/YYYY HH:MM:SS>] - Mount Creation Time
Selects information about root-mounts that were created at the specified time.
[-state <Mount State>] - State of the Root-Mount
Selects information about root-mounts that have the specified state. The states are:
• unknown: The state of the root-mount is being determined.
• initializing: A root-mount was found and needs testing to determine the correct state.
• mount-requested: The root-mount has been requested, but is not ready.
• mounting: The root-mount is being created, but is not ready.
• ready: The root-mount is ready to be used.
• not-responding: The root-mount exists but is not responding.
• does-not-exist: No root-mount is possible to this node's root volume.
• ha-busy: The root-mount is busy pending completion of an HA event.
• clean-up-requested: The root-mount is being deleted.
• cleaning-up: The root-mount is being deleted.
• create-error: The root-mount could not be created.
[-last-error <text>] - Last Error
Selects information about root-mounts that have the specified last-error value.
● Examples
The following example shows the default state of the root-mounts on a cluster that is not using root-node
services:
The following example displays the root-mounts that exist for a cluster that has nodeA mounted to nodeB,
and nodeB mounted to nodeA:
Related reference
system node root-mount create (page 1482)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node upgrade-revert show command displays information about the status of upgrades or
reversions. If an upgrade has failed, this command enables you to determine which phase of the upgrade
contains the failed upgrade task and the reason for the failure.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Use this parameter to display status information only about upgrades or reversions that are slated to
occur on the nodes you specify.
[-upgrade-version <integer>] - Cluster Upgrade Version
Selects status information about upgrades or reversions to the version number you specify.
[-startup-phase {pre-root|pre-apps|post-apps}] - Startup Phase
Selects status information about upgrades or reversions that are slated to occur during the startup phase
you specify. Startup phases are:
• pre-root - Upgrade is applied before mroot is mounted
• pre-apps - Upgrade is applied before other cluster apps are started
• post-apps - Upgrade is applied after all RDB apps are online
[-status <Upgrade/Revert Execution Status>] - Execution Status
Selects status information about upgrades or reversions that have the execution status you specify.
Execution statuses are:
• prepared - Ready to upgrade
• applied - Successful upgrade
• reverted - Successful reversion
• failed - Unsuccessful upgrade or reversion
• aborted - Unsuccessful upgrade or reversion
• skipped - Upgrade or reversion was skipped for that phase
• locked - Upgrading or reverting
[-status-msg <text>] - Status Message
Selects status information about upgrades or reversions that have the status message you specify. The
status message displays the current status of the phase with which it appears.
[-direction {upgrade|revert}] - Upgrade/Revert Direction
Use this parameter with the value upgrade to select status information about upgrades. Use this
parameter with the value revert to select status information about reversions.
[-node-status {reverting|complete|not-needed|aborted|failed|waiting|in-progress|stopped}] - Node
Status
Selects status information about upgrades or reversions that have the status you specify on nodes where
they are slated to occur. Node statuses are:
• aborted - Upgrade process aborted. Contact support personnel.
• failed - Upgrade process failed. Contact support personnel.
• stopped - Upgrade process stopped due to node or management application restart. Use the system
node upgrade-revert upgrade command to complete the upgrade manually.
● Examples
The following example shows typical output for a cluster with two nodes. Status messages for each phase
display information about the tasks in that phase.
Related reference
system node upgrade-revert upgrade (page 1487)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node upgrade-revert upgrade command manually executes an upgrade. Use this
command to execute an upgrade after issues that caused an upgrade failure are resolved. If the upgrade is
successful, no messages display.
Before the command executes upgrades, it checks the configuration of the nodes in the cluster. If no
upgrades are needed, the command displays a message and does not execute any upgrades.
● Parameters
-node {<nodename>|local} - Node
Specifies the node that is to be upgraded. The value local specifies the current node.
● Examples
This example shows command output of a node named node0 if node configuration is current.
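Such an invocation might look as follows (the node name is illustrative; if the configuration is already current, the command reports that no upgrades are needed):
cluster1::*> system node upgrade-revert upgrade -node node0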
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node usb-ports modify command allows the user to change the state of the external USB
ports on the next boot.
● Parameters
-node {<nodename>|local} - Node That Owns the External USB Ports
This parameter specifies the node on which the state of the external USB ports will be modified.
[-is-disabled-next-boot {true|false}] - State of the External USB Ports on the Next Boot
If set to true, the external USB ports on the specified node will be disabled on the next boot.
● Examples
The following example disables the external USB ports for node1 on the next boot:
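Such an invocation might look as follows:
cluster1::*> system node usb-ports modify -node node1 -is-disabled-next-boot true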
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node usb-ports show command displays the status of the external USB ports on each node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node That Owns the External USB Ports
If this parameter is specified, the command displays information about the status of the external USB
ports on the specified node.
[-is-disabled {true|false}] - Current State of the External USB Ports
This parameter specifies the current state of the external USB ports.
[-is-disabled-next-boot {true|false}] - State of the External USB Ports on the Next Boot
This parameter specifies the state of the external USB ports after the next boot.
[-is-feature-supported {true|false}] - External USB Ports Supported?
This parameter specifies whether or not the node supports external USB ports.
[-connected-ports {true|false}, ...] - Connected USB Devices
This parameter specifies whether or not the USB ports have a device connected to them.
● Examples
The following example displays information about the state of the external USB ports:
The system node virtual-machine commands enable virtual machine and hypervisor management.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node virtual-machine show-network-load-balancer command displays the list of network
load balancer probe ports for each ONTAP node in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - Node
Represents the name of the ONTAP node for which information is to be displayed. If this parameter is
not specified, the command displays information about all nodes in the cluster.
[-vserver-name <text>] - Vserver Name
Vserver name.
[-lif-name <text>] - ONTAP LIF Name
ONTAP logical interface name.
[-probe-port <integer>] - Probe Port
A TCP port which is regularly probed by the network load balancer. When the TCP port is healthy and
open, the network load balancer will continue sending traffic to an associated network route. When
unhealthy, the network load balancer will redirect all traffic intended for this route to an alternate route.
[-last-probe-time <MM/DD/YYYY HH:MM:SS>] - Last Probe Time
The timestamp of the most recent health probe request on this TCP port.
[-remove-listener {true|false}] - Remove listener for This LIF
Whether or not ONTAP has programmatically told the network load balancer to stop listening on the
health probe associated with this LIF.
[-active {true|false}] - Actively receiving Health Probes
Whether or not the network load balancer has received a health probe request on this TCP port within
the expected timeframe.
● Examples
The following example displays probe ports for each node in the cluster.
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node virtual-machine disk-object-store create command adds an object store
container to a node's configuration. All objects within the container will be added as disks to the specified
node.
● Parameters
-node <nodename> - ONTAP Node Name
Specifies the name of the ONTAP node to which the object store container will be added.
-object-store-name <object store name> - ONTAP Name for this Object Store Config
Specifies the name that will be used to identify the object store configuration.
-server <text> - Fully Qualified Domain Name of the Object Store Server
Specifies the object store server where the container is hosted.
-port <integer> - Port Number of the Object Store
Specifies the port number to connect to the object store server.
-container-name <text> - Container Name
Specifies the name of the container to be added.
-azure-account <text> - Azure Storage Account
Specifies the Azure storage account.
-azure-private-key <text> - Azure Storage Account Access Key
Specifies the access key required to authenticate requests to the Azure object store.
[-update-partner [true]] - Update HA Partner
Specify this parameter when the system is running in an HA configuration.
● Examples
The following example adds a container to the specified node.
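A representative invocation, using illustrative names for the node, object store configuration, object store server, Azure account, and container, might look as follows:
cluster1::*> system node virtual-machine disk-object-store create -node node1 -object-store-name conf1 -server myaccount.blob.core.windows.net -port 443 -container-name container1 -azure-account myaccount -azure-private-key <access-key> -update-partner true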
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node virtual-machine disk-object-store delete command removes an object store
container from a node's configuration.
● Parameters
-node <nodename> - ONTAP Node Name
Specifies the name of the ONTAP node from which the object store container will be removed.
-object-store-name <object store name> - ONTAP Name for this Object Store Config
Specifies the name that will be used to identify the object store configuration.
[-update-partner {true|false}] - Update HA Partner
Specify this parameter when the system is running in an HA configuration.
● Examples
The following example removes a container from the specified node.
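A representative invocation, using illustrative node and configuration names, might look as follows:
cluster1::*> system node virtual-machine disk-object-store delete -node node1 -object-store-name conf1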
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node virtual-machine disk-object-store modify command updates one or more
object store configuration parameters.
● Parameters
-node <nodename> - ONTAP Node Name
Specifies the name of the ONTAP node for which the object store configuration will be modified.
-object-store-name <object store name> - ONTAP Name for this Object Store Config
Specifies the name that will be used to identify the object store configuration.
[-server <text>] - Fully Qualified Domain Name of the Object Store Server
This optional parameter specifies the new Fully Qualified Domain Name (FQDN) of the same object store
server.
[-azure-private-key <text>] - Azure Storage Account Access Key
This optional parameter specifies a new access key for the storage account.
[-update-partner [true]] - Update HA Partner
Specify this parameter when the system is running in an HA configuration.
● Examples
The following example updates the stored private key for an Azure container on the specified node.
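A representative invocation, replacing the stored access key for an illustrative configuration named conf1, might look as follows:
cluster1::*> system node virtual-machine disk-object-store modify -node node1 -object-store-name conf1 -azure-private-key <new-access-key>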
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system node virtual-machine disk-object-store show command displays the list of object
store containers that contain each node's disks.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node <nodename>] - ONTAP Node Name
Represents the name of the ONTAP node for which information is to be displayed. If this parameter is
not specified, the command displays information about all nodes in the cluster.
[-object-store-name <object store name>] - ONTAP Name for this Object Store Config
Selects object store configurations with the specified name.
[-server <text>] - Fully Qualified Domain Name of the Object Store Server
Selects containers on the specified server.
[-port <integer>] - Port Number of the Object Store
Selects containers attached on the specified port.
[-container-name <text>] - Container Name
Selects containers with the specified name.
[-azure-account <text>] - Azure Storage Account
Selects containers in the specified Azure storage account.
[-alive {true|false}] - Is Server Alive
Selects containers based on their aliveness state, as seen from the ONTAP node.
● Examples
The following example displays the list of containers for each node in the cluster.
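Such an invocation might look as follows:
cluster1::*> system node virtual-machine disk-object-store show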
The system node virtual-machine hypervisor commands enable you to view and manage
information about the hypervisor on which Data ONTAP resides.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node virtual-machine hypervisor show command displays information for each
hypervisor that is hosting a Data ONTAP virtual machine. The output contains the hypervisor-specific
information, such as host name and IP address, as well as network configuration details. The command only
scans hypervisors on which Data ONTAP virtual machines are installed. To filter command output, specify
any number of optional fields listed below.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
The name of the Data ONTAP node running in a virtual machine for which information is to be displayed.
If this optional parameter is not specified, the command displays information about all nodes in the
cluster.
[-vm-uuid <UUID>] - UUID of the Virtual Machine
The hypervisor-supplied unique ID for this virtual machine. This optional parameter selects information
about the hypervisor on which the Data ONTAP virtual machine with the specified UUID is running.
Because the UUID is unique per host, using the -node parameter is an easier way to select the same
information.
[-vmhost-bios-release-date <text>] - Release Date for the Hypervisor BIOS
The release date for the currently running hypervisor BIOS. This optional parameter selects information
about the hypervisors that have the specified BIOS release date.
[-vmhost-bios-version <text>] - Current BIOS Version of the Hypervisor Physical Chassis
The current BIOS version of the hypervisor physical chassis. This optional parameter selects information
about the hypervisors that are running with the specified BIOS version.
[-vmhost-boot-time <text>] - Time When Hypervisor was Last Booted
The time when the hypervisor was last booted. This optional parameter selects information about the
hypervisors which were last booted at the specified boot time.
[-vmhost-cpu-clock-rate <integer>] - Speed of the Hypervisor CPU Cores (MHz)
The speed of the hypervisor CPU cores. This optional parameter selects information about the
hypervisors that are running with the specified CPU clock rate.
[-vmhost-cpu-core-count <integer>] - Number of Physical CPU Cores on the Hypervisor
The number of physical CPU cores on the hypervisor. Physical CPU cores are the processors contained by
a CPU package. This optional parameter selects information about the hypervisors that are running with
the specified CPU cores.
[-vmhost-cpu-socket-count <integer>] - Number of Physical CPU Packages on the Hypervisor
The number of physical CPU packages on the hypervisor. Physical CPU packages are chips that contain
one or more processors. Processors contained by a package are also known as CPU cores. For example,
one dual-core package comprises one chip that contains two CPU cores. This optional parameter
selects information about the hypervisors that are running with the specified number of CPU sockets.
[-vmhost-cpu-thread-count <integer>] - Number of Physical CPU Threads on the Hypervisor
The number of physical CPU threads on the hypervisor. This optional parameter selects information
about the hypervisors that are running with the specified CPU threads.
[-vmhost-gateway <text>] - Default Gateway for the Hypervisor
The default gateway for the hypervisor. This optional parameter selects information about the
hypervisors with the specified gateway address.
[-vmhost-hardware-vendor <text>] - Hardware Vendor of the Hypervisor
The name of hypervisor hardware manufacturer. This optional parameter selects information about the
hypervisors with the specified hardware vendor.
[-vmhost-hypervisor <text>] - Complete Product Name, including the Version Information for the
Hypervisor
The complete product name, including the version information for the hypervisor. This optional
parameter selects information about the hypervisors that are running with the specified hypervisor
version.
[-vmhost-ip-address <text>] - Primary IP Address Assigned to the Hypervisor
The primary IP address assigned to the hypervisor. This optional parameter selects information about
the hypervisors with the specified IP address.
[-vmhost-memory <integer>] - Physical Memory Size of the Hypervisor (Bytes)
The physical memory size of the hypervisor in bytes. This optional parameter selects information about
the hypervisors that are running with the specified physical memory.
[-vmhost-model <text>] - Hypervisor Manufacturer-Supplied Hardware Model Name
The hypervisor manufacturer-supplied hardware model name. This optional parameter selects
information about the hypervisors with the specified hardware model.
[-vmhost-name <text>] - Hostname of the Hypervisor
The host name assigned to the hypervisor. This optional parameter selects information about the
hypervisor with the specified host name.
[-vmhost-netmask <text>] - Subnet Mask Address for the Hypervisor
The subnet mask address for the hypervisor. This optional parameter selects information about the
hypervisors with the specified netmask address.
[-vmhost-processor-id <text>] - Processor ID of the Hypervisor
The processor ID of the hypervisor. This optional parameter selects information about the hypervisors
with the specified processor ID.
[-vmhost-processor-type <text>] - CPU Model of the Hypervisor
The CPU model of the hypervisor. This optional parameter selects information about the hypervisors that
are running with the specified processor type.
[-vmhost-software-vendor <text>] - Name of the Virtual Machine Software Manufacturer
The name of the virtual machine software manufacturer. This optional parameter selects information
about the hypervisors with the specified software vendor.
[-vmhost-uuid <UUID>] - UUID of the Hypervisor
A unique ID for the hypervisor. This optional parameter selects information about the hypervisor with
the specified UUID.
[-vmhost-error <text>] - Error in case Hypervisor Info Retrieval Fails
Displays a list of nodes on which the hypervisor has received the specified error. This parameter is most
useful when entered with wildcards.
[-vm-custom-max-capacity <integer>] - Maximum Storage Capacity of the Virtual Machine (in TB)
The maximum system capacity (in TB) that can be configured on the VM. This optional parameter selects
information about the node's storage capacity.
● Examples
The following example shows typical output from the system node virtual-machine hypervisor
show command for the Data ONTAP virtual machines running in the cluster.
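The command line itself would resemble the following (the cluster prompt shown is illustrative):
cluster1::> system node virtual-machine hypervisor show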
Hypervisor Info
--------------------
Hardware Vendor: VMware, Inc.
Model: VMware Virtual Platform
Software Vendor: Unknown
Hypervisor: VMware ESX 4.1.0 build-12345
Host Name: myesx.example.com
Last Boot Time: 2014-01-01T01:23:45.678901-23:45
Host UUID: 00000000-0000-0000-0000-0012a3456789
BIOS Version: S1234.5.6.7.8.901234567890
BIOS Release Date: 2013-01-01T00:00:00Z
CPU Packages: 2
CPU Cores: 12
CPU Threads: 24
Processor ID: 0000:0000:0000:0010:0010:0100:1100:0010
Processor Type: Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
CPU MHz: 2925
Memory Size: 4227858432
IPv4 Configuration: IP Address: 192.168.0.1
Netmask: 255.255.255.0
Gateway: 192.168.0.1
Hypervisor Info
--------------------
Hardware Vendor: VMware, Inc.
Model: VMware Virtual Platform
Software Vendor: Unknown
Error: ServerFaultCode:
InvalidLoginFault type='InvalidLogin'
Related reference
system node virtual-machine instance show-system-disks (page 1498)
The system node virtual-machine instance commands enable virtual machine instance
management.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node virtual-machine instance show command displays virtual machine information.
With this information you can determine the relationship between a Data ONTAP node and its associated
virtual machine instance running within a cloud provider. Several other details about the virtual machine
can be extracted as well, such as the cloud provider account ID to which it belongs. To filter command
output, specify any number of optional fields listed below.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This optional parameter represents the name of the Data ONTAP node running in a virtual machine for
which information is to be displayed. If this parameter is not specified, the command displays
information about all nodes in the cluster.
[-instance-id <text>] - ID of This Instance
Selects the nodes that match this parameter value. A cloud provider-supplied unique instance ID for this
virtual machine, for example "i-a9d42f89" or "db00a7755a5e4e8a8fe4b19bc3b330c3.1".
[-account-id <text>] - ID of This Account
Selects the nodes that match this parameter value. The cloud provider-associated account ID for this
virtual machine. This parameter is usually associated with a cloud provider login ID and password.
[-image-id <text>] - ID Of the Image in Use on This Instance
Selects the nodes that match this parameter value. The image ID installed on this virtual machine
instance. It identifies a pre-defined template of a computing device's software environment. It contains
the operating system and can also include application software, such as database servers, middleware,
and web servers. In this case, the ID refers to an image that contains everything required to run Data
ONTAP in the cloud.
[-instance-type <text>] - Specifies System Attributes and Use Cost
Selects the nodes that match this parameter value. A specification (as defined by the cloud provider)
that defines the memory, CPU, storage capacity and usage cost for a virtual machine instance. Some
instance types are designed for standard applications, whereas others are designed for CPU-intensive or
memory-intensive applications and so on.
[-region <text>] - Set of Resources in the Same Geographic Area
Selects the nodes that match this parameter value. A named set of resources in the same geographical
area. For example "us-east-1" might be the name for a collection of compute and storage resources on
the eastern coast of the United States. Typically, a region contains multiple availability zones.
[-version <text>] - Instance Metadata Version of This VM
Selects the nodes that match this parameter value. The instance metadata version of this virtual
machine instance.
[-availability-zone <text>] - Distinct Location within a Region
Selects the nodes that match this parameter value. A distinct location within a region that is insulated
from failures in other availability zones. It provides low-latency network connectivity to other availability
zones in the same region.
[-primary-ip <text>] - Primary IP Address Assigned to this Instance
Selects the nodes that match this parameter value. The primary IP address assigned to this virtual
machine instance.
[-fault-domain <integer>] - Fault Domain of This Instance
Selects the nodes that match this parameter value. A cloud provider-assigned numerical fault domain ID
for this virtual machine within an Availability Set.
[-update-domain <integer>] - Update Domain of This Instance
Selects the nodes that match this parameter value. A cloud provider-assigned numerical update domain
ID for this virtual machine within an Availability Set.
[-provider <text>] - Provider on Which This Instance Is Running
Selects the nodes that match this parameter value. The provider on which this instance is running.
[-offer <text>] - Marketplace Offer for This Image
Selects the nodes that match this parameter value. A Marketplace is an online store that offers
applications and services either built on or designed to integrate with a particular cloud provider. A
virtual machine offer corresponds to a class of product offering from a publisher. An offer is a collection
of SKUs. Every offer appears as its own entity in the provider Marketplace.
[-sku <text>] - Marketplace SKU for This Image
Selects the nodes that match this parameter value. A SKU is the smallest purchasable unit of an offer.
While within the same product class (offer), SKUs allow you to differentiate between different features
supported, VM image types and billing models supported.
[-sku-version <text>] - Marketplace Version of a SKU
Selects the nodes that match this parameter value. The version for this virtual machine SKU.
[-resource-group-name <text>] - Resource Group Name of This Instance
Selects the nodes that match this parameter value. Resource group for the instance.
[-cpu-platform <text>] - CPU Platform of the Instance
Selects the nodes that match this parameter value. GCP only. CPU platform of the hypervisor host.
Example: Intel Broadwell.
● Examples
The following examples illustrate typical output from the system node virtual-machine instance
show command for a virtual machine running in a cloud provider environment.
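Based on the description above, the invocation would resemble the following (output omitted; the prompt is illustrative):
cluster1::> system node virtual-machine instance show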
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system node virtual-machine instance show-system-disks command displays information
about the system disks (non-data disks) attached to the virtual machine. Data disk information is available
using the command storage disk show -virtual-machine-disk-info.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node Name
Selects disk information for nodes that match this parameter.
[-vmdisk-purpose <text>] - Purpose of the System Disk
Selects disk information for disks that match this parameter. Values include: Boot, NVRAM, and Core.
[-vmdisk-type <text>] - Type of the System Disk
Selects disk information for disks that match this parameter. Possible values are: VMDISK, SSD.
[-vmdisk-name <text>] - System Disk Name
Selects disk information for disks that match this parameter. The virtual machine ID of the system disk.
[-vmdisk-capacity <text>] - Size of the System Disk (GB)
Selects disk information for disks that match this parameter. The size of the system disk.
[-vmdisk-file-name <text>] - File Name of the System Disk Used By the Hypervisor
Selects disk information for disks that match this parameter. The virtual machine file name of the disk.
Each system disk is mapped to a unique VM disk file.
● Examples
The following example shows typical output from the command.
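A sketch of the invocation, matching the command described above (output omitted):
cluster1::> system node virtual-machine instance show-system-disks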
Related reference
storage disk show (page 1083)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system script delete command deletes files that contain CLI session records. Use the system
script show command to display saved CLI sessions.
● Parameters
-username <text> - Log Owner Username
Use this parameter to specify the name of the user whose CLI session record files are deleted. The
default username is that of the logged-in user.
-filename <text> - Log Filename
Use this parameter to specify the names of CLI session record files to delete.
● Examples
The following example shows how to delete the files named sessionlog2 and sessionlog3.
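Based on that description, the command line would resemble the following (file names taken from the example text; the prompt is illustrative):
cluster1::> system script delete -filename sessionlog2,sessionlog3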
Related reference
system script show (page 1500)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system script show command displays information about files that contain records of CLI sessions.
For security reasons, the command normally displays only the script files created by the logged in user.
Administrative users can display all log files using the -user parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-user ]
Use this parameter to display all script files created by all users, along with the username associated
with each file.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-username <text>] - Log Owner Username
Use this parameter to display information only about files saved by the user you specify. The default
username is that of the logged in user.
[-filename <text>] - Log Filename
Use this parameter to display information only about files that have the file name you specify.
[-size-limit {<integer>[KB|MB|GB|TB|PB]}] - Logfile Size Limit
Use this parameter to display information only about files that have the size limit you specify.
[-state <State of CLI session log>] - Current State
Use this parameter to display information only about files that have the state you specify. Valid values
for this parameter are open-and-logging, file-full, and file-closed.
[-size {<integer>[KB|MB|GB|TB|PB]}] - Current Logfile Size
Use this parameter to display information only about files that are the size you specify.
[-mtime <MM/DD/YYYY HH:MM:SS>] - Last Modification Time
Use this parameter to display information only about files that were last modified at the date and time
you specify.
[-this-session {yes|no}] - Session is Logging
Use this parameter with the value yes to display information only about files that are recording the
current CLI session. Use this parameter with the value no to display information only about files that are
not recording the current CLI session.
● Examples
The following example displays typical system script information.
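The invocation would resemble the following (output omitted):
cluster1::> system script show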
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system script start command starts creating a record of your CLI session. The record is stored in a
file. Use the system script show -this-session yes command to display files that are recording the
current CLI session. Use the system script stop command to stop recording the current CLI session.
● Parameters
-filename <text> - Filename to Log To
Use this parameter to specify the file name to which the CLI session record is saved.
-size-limit {<integer>[KB|MB|GB|TB|PB]} - Logfile Size Limit Max:2GB
Use this parameter to specify the maximum size of the file that contains the CLI session record. When
the file size reaches this limit, recording stops. The default file size limit is 1 MB. The maximum file size
limit is 2 GB.
● Examples
The following example shows how to start creating a record of the CLI session in a file named sessionlog3.
The size limit of this file is 20 MB.
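Based on that description, the command line would resemble the following (file name and size taken from the example text):
cluster1::> system script start -filename sessionlog3 -size-limit 20MB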
Related reference
system script show (page 1500)
system script stop (page 1501)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system script stop command stops creating a record of your CLI session, if you started creating the
record by using the system script start command. Use the system script show -this-session yes
command to display files that are recording the current CLI session.
● Examples
The following example shows how to stop creating a record of your CLI session.
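The invocation takes no parameters and would resemble:
cluster1::> system script stop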
Related reference
system script start (page 1501)
system script show (page 1500)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system script upload command uploads a CLI session record file to a remote location. Specify the
remote location using an FTP or HTTP URI. Use the system script show command to display saved CLI
sessions. Use the system script start command to record a CLI session and save it to a file.
● Parameters
-username <text> - Username If Not Your Own
Use this parameter to specify the name of the user who owns the file to upload. By default, this is the
user who is logged in.
-filename <text> - Filename to Log To
Use this parameter to specify the name of a file to be uploaded.
-destination {(ftp|http)://(hostname|IPv4 Address|'['IPv6 Address']')...} - URI to Send File To
Use this parameter to specify the FTP or HTTP destination of the file.
● Examples
The following example shows how to upload the file named sessionlog3 to the destination ftp://
now.example.com/cli_sessions.
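Based on that description, the command line would resemble the following (file name and destination taken from the example text):
cluster1::> system script upload -filename sessionlog3 -destination ftp://now.example.com/cli_sessions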
Related reference
system script show (page 1500)
system script start (page 1501)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor reboot-sp command reboots the Service Processor of the specified
node.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose Service Processor is to be rebooted.
[-image {primary|backup}] - Image to Boot with After Reboot
This parameter specifies the image that the Service Processor uses after the reboot. By default, the
primary image is used. Avoid booting the SP from the backup image. Booting from the backup image is
reserved for troubleshooting and recovery purposes only. It might require that the SP automatic
firmware update be disabled, which is not a recommended setting. You should contact technical support
before attempting to boot the SP from the backup image.
● Examples
The following command reboots the Service Processor of node "node1" into the primary image.
cluster1::>
The following command reboots the Service Processors of all nodes. Since -image is not specified, the
Service Processors will boot into the primary image.
cluster1::>
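The command lines for the two examples above would resemble the following (node name taken from the description; output omitted):
cluster1::> system service-processor reboot-sp -node node1 -image primary
cluster1::> system service-processor reboot-sp -node *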
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor show command displays information about the Service Processor of
each node in a cluster. You can limit output to specific types of information and specific nodes in the cluster,
or filter output by specific field values.
If a node is offline or its Service Processor management daemon is down, the command displays the last
known IP address of its Service Processor. Only the IP address is displayed in such cases.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information for the Service Processor of the specified node.
[-type {SP|NONE|BMC}] - Type of Device
Selects information for the Service Processors of the specified type.
[-status {online|offline|sp-daemon-offline|node-offline|degraded|rebooting|unknown|updating}] -
Status
Selects information for the Service Processors whose status matches the specified value.
[-ip-configured {true|false}] - Is Network Configured
Selects information for the Service Processors whose network is configured (true) or not configured
(false).
[-address <IP Address>, ...] - Public IP Address
Selects information for the Service Processors that use the specified IP address or addresses.
[-mac <MAC Address>] - MAC Address
Selects information for the Service Processors that use the specified MAC address.
[-fw-version <text>] - Firmware Version
Selects information for the Service Processors that are running the specified firmware version.
[-part-num <text>] - Part Number
Selects information for the Service Processors that have the specified part number.
[-serial-num <text>] - Serial Number
Selects information for the Service Processors that have the specified serial number.
[-dev-rev <text>] - Device Revision
Selects information for the Service Processors that have the specified device revision.
[-autoupdate-enabled {true|false}] - Is Firmware Autoupdate Enabled
Selects information for the Service Processors that have the specified status for firmware automatic
update.
● Examples
The following example displays basic information for the Service Processors of all the nodes.
cluster1::>
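The basic invocation would resemble the following (output omitted):
cluster1::> system service-processor show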
The following example displays all available information for the Service Processors of all the nodes.
Node: node1
Type of Device: SP
Status: online
Is Network Configured: true
Public IP Address: 192.168.1.201
MAC Address: ab:cd:ef:fe:ed:01
Firmware Version: 2.2X5
Part Number: Not Applicable
Serial Number: Not Applicable
Device Revision: Not Applicable
Is Firmware Autoupdate Enabled: true
Node: node2
Type of Device: SP
Status: online
cluster1::>
The following example displays only the type, status and firmware version for the Service Processors of all
the nodes.
cluster1::>
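A sketch of that invocation, assuming the field names match the parameter names listed above:
cluster1::> system service-processor show -fields type,status,fw-version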
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command disables user-installed certificates for secure communication with the service processor API
service. Default certificates are then auto-generated.
● Examples
The following example disables user-installed certificates for the service processor API service.
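A minimal sketch of the invocation, assuming the command name system service-processor api-service disable-installed-certificates suggested by the description:
cluster1::> system service-processor api-service disable-installed-certificates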
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command enables user-installed certificates for secure communication with the service processor. Use
the security certificate install command to install client, server and CA certificates.
● Parameters
-vserver <Vserver Name> - Vserver
Use this parameter to specify the Vserver on which certificates are installed.
-server-cert <text> - Name of the Server Certificate
Use this parameter to specify the unique name of the server certificate.
-client-cert <text> - Name of the Client Certificate
Use this parameter to specify the unique name of the client certificate.
-rootca-cert <text> - Names of the Root CA Certificates
Use this parameter to specify the unique names of the server-ca or client-ca certificates.
● Examples
The following example installs server, client and rootca certificates and then enables those certificates for
secure communication with the service processor.
cluster1 1533F273AA311FDB
xxx-client client
Certificate Authority: xxx-ca
Expiration Date: Fri May 31 05:34:37 2019
cluster1 1533F1B321E55242
xxx-server server
Certificate Authority: xxx-ca
Expiration Date: Fri May 31 05:20:50 2019
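A sketch of the enable step, assuming the command name system service-processor api-service enable-installed-certificates and using the certificate names shown in the output above:
cluster1::> system service-processor api-service enable-installed-certificates -vserver cluster1 -server-cert xxx-server -client-cert xxx-client -rootca-cert xxx-ca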
Related reference
security certificate install (page 560)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system service-processor api-service modify command modifies SP API service
configuration. The SP API is a secure network API that enables Data ONTAP to communicate with the Service
Processor over the network.
● Parameters
[-is-enabled {true|false}] - Is SP API Service Enabled
This parameter enables or disables the API service of the Service Processor (SP) or baseboard
management controller (BMC). When the API service is disabled, features like network-based firmware
updates and network-based down node log collection will not be available, and the slower serial
interface will be used for firmware updates and down node log collection.
[-port <integer>] - SP API Service Port
This parameter specifies the port number on the Service Processor used for the API service. By default,
port 50000 is used.
[-limit-access {true|false}] - Only Cluster Members Can Access the Service
This parameter restricts API service access to cluster members only. By default, it is enabled.
● Examples
The following example modifies the port number used for the SP API service and then disables the SP API
service.
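A sketch of those two steps (the port number 50001 is an assumed illustrative value):
cluster1::> system service-processor api-service modify -port 50001
cluster1::> system service-processor api-service modify -is-enabled false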
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor api-service regenerate-ssh-auth-service command
regenerates public and private keys for SSH public key authentication between ONTAP nodes and their
service processor (SP) or baseboard management controller (BMC).
● Parameters
-key-type {ecdsa-256|ecdsa-384|rsa-2048|rsa-3076} - SSH Key Type
This parameter specifies the SSH authentication key type and key size that must be used when
regenerating the keys. Examples: ecdsa-256, rsa-2048.
● Examples
The following example regenerates keys based on key type ECDSA and key size 256.
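Based on that description, the command line would resemble:
cluster1::> system service-processor api-service regenerate-ssh-auth-service -key-type ecdsa-256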
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The system service-processor api-service renew-internal-certificates command
generates the certificates used for secure communication with the service processor API service. This
command is not allowed if user-installed certificates are enabled.
● Examples
The following example generates new default host and root-ca certificates.
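The invocation takes no parameters and would resemble:
cluster1::> system service-processor api-service renew-internal-certificates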
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system service-processor api-service show command displays the service processor (SP) or
baseboard management controller (BMC) API service configuration.
● Examples
The following example displays the service processor API service configuration:
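The invocation would resemble the following (output omitted):
cluster1::> system service-processor api-service show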
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor image modify command enables or disables automatic firmware
update on the Service Processor of the specified node or nodes.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node on which automatic Service Processor firmware update is to be
enabled or disabled.
[-autoupdate {true|false}] - Firmware Autoupdate
Setting this parameter to true enables automatic firmware update. Setting this parameter to false
disables automatic firmware update. This is a mandatory parameter.
● Examples
The following command enables automatic firmware update for the Service Processor on the local node.
The following command enables automatic firmware update for the Service Processors on all the nodes.
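The command lines for those two examples would resemble the following (output omitted):
cluster1::> system service-processor image modify -node local -autoupdate true
cluster1::> system service-processor image modify -node * -autoupdate true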
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor image show command displays information about the currently
installed firmware images on the Service Processor of each node in a cluster. You can limit output to specific
types of information and specific nodes in the cluster, or filter output by specific field values.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects firmware image information for the Service Processor of the specified node.
[-image {primary|backup}] - Image
Selects firmware image information for the Service Processors that are running the primary or backup
image as specified.
[-type {SP|NONE|BMC}] - Type
Selects firmware image information for the Service Processors of the specified type.
[-status {installed|corrupt|updating|auto-updating|none}] - Image Status
Selects firmware image information for the Service Processors whose image status matches the specified
value.
[-is-current {true|false}] - Is Image Current
Selects firmware image information for the SP whose current image matches the specified status. This
parameter indicates the partition (primary or backup) that the SP is currently booted from, not whether
the installed firmware version is most current.
[-version <text>] - Firmware Version
Selects firmware image information for the Service Processors running the specified firmware version.
[-autoupdate {true|false}] - Firmware Autoupdate
Selects firmware image information for the Service Processors whose automatic update matches the
specified configuration.
[-last-update-status {failed|passed}] - Last Update Status
Selects firmware image information for the Service Processors whose last update is of the specified
status.
● Examples
The following command displays basic firmware information for the Service Processors of all the nodes.
cluster1::>
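The basic invocation would resemble the following (output omitted):
cluster1::> system service-processor image show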
The following command displays all available firmware information for the Service Processors of all the
nodes.
Node: node1
Image: primary
Type: SP
Image Status: installed
Is Image Current: true
Firmware Version: 2.2X8
Firmware Autoupdate: true
Last Update Status: passed
Node: node1
Image: backup
Type: SP
Image Status: installed
Is Image Current: false
Firmware Version: 2.2X5
Firmware Autoupdate: true
Last Update Status: passed
Node: node2
Image: primary
Type: SP
Image Status: installed
Is Image Current: true
Firmware Version: 2.2X8
Firmware Autoupdate: true
Last Update Status: passed
Node: node2
Image: backup
Type: SP
Image Status: installed
Is Image Current: false
Firmware Version: 2.2X5
Firmware Autoupdate: true
Last Update Status: passed
4 entries were displayed.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor image update command installs a new firmware version on the
Service Processor of the specified node in a cluster. If this command fails, it will display and log an appropriate
error message and abort. No automatic command retries will be performed. This command also specifies
which firmware image is to be installed on the Service Processor and how.
You can use the command system service-processor image update-progress show to check the
progress of the update.
The following parameter combinations are not supported for this command:
• -update-type differential with -clear-logs true
• -baseline true with -package <text>
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose Service Processor's firmware is to be updated.
[-package <text>] - Firmware Package
This parameter specifies the package that will be installed. You can find the package file in the SP
Update Repository field of the system node image package show command. If you do not specify
this parameter, the Service Processor is updated to the most recent version of the firmware that is
available in the update repository. You must specify this parameter if baseline is false or omitted.
[-baseline {true|false}] - Install Baseline
If you set this parameter to true, the command installs the Service Processor firmware version that is
bundled with the currently running release of Data ONTAP. This is a safety mechanism that allows you to
revert the SP firmware to the version that was qualified and bundled with the currently running version
of Data ONTAP on your system. If not specified, this parameter defaults to false.
[-clear-logs {true|false}] - Clear Logs After Update
If you set this parameter to true, the command resets log settings to factory default and clears contents
of all logs maintained by the Service Processor, including:
• Event logs
• IPMI logs
• Forensics logs
● Examples
The following command reverts the firmware on the Service Processor of the local node to the version that
was packaged with the currently running release of Data ONTAP. A complete install will be performed,
clearing all logs maintained by the Service Processor. The second command displays the status of the in-
progress firmware install.
cluster1::> system service-processor image update -node local -update-type full -baseline true -clear-logs true
cluster1::>
cluster1::>
Related reference
system node image package show (page 1476)
system service-processor image update-progress show (page 1512)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor image update-progress show command displays the progress
information of firmware updates on the Service Processor of the specified nodes. The "in-progress" field
displays "no" if no update is in progress. This command does not display the progress of an SP firmware
update that is triggered from the SP CLI.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
This parameter displays the status of Service Processor firmware update for the specified node.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Latest SP Firmware Update Start Timestamp
This parameter displays the status of the Service Processor whose firmware update start time matches
the specified value.
[-percent-done <integer>] - Latest SP Firmware Update Percentage Done
This parameter displays the status of the Service Processor whose update completion percentage
matches the specified value.
[-end-time <MM/DD/YYYY HH:MM:SS>] - Latest SP Firmware Update End Timestamp
This parameter displays the status of the Service Processor whose firmware update end time matches
the specified value.
[-in-progress {yes|no}] - Is Update in Progress
This parameter displays the update status of the Service Processor that matches the specified in-progress
status.
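The filter fields above follow fixed formats: MM/DD/YYYY HH:MM:SS timestamps, an integer percentage, and a yes/no in-progress flag. A minimal Python sketch of parsing one such record client-side; the `parse_progress` helper and the dict shape are invented for illustration and are not part of ONTAP:

```python
from datetime import datetime

def parse_progress(record):
    """Parse one update-progress record using the documented field formats."""
    ts = "%m/%d/%Y %H:%M:%S"  # MM/DD/YYYY HH:MM:SS, as documented
    return {
        "node": record["node"],
        "percent_done": int(record["percent-done"]),
        "in_progress": record["in-progress"] == "yes",
        "start_time": datetime.strptime(record["start-time"], ts),
    }

# Hypothetical record, shaped after the fields this command displays.
record = {
    "node": "node1",
    "percent-done": "42",
    "in-progress": "yes",
    "start-time": "06/13/2014 16:29:55",
}
parsed = parse_progress(record)
```

A monitoring script would repeat such a query until `in_progress` becomes false, since the command itself only reports a snapshot.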
● Examples
The following example starts a firmware update on the local node and then uses the command system
service-processor image update-progress show to display progress of firmware updates on Service
Processors of all nodes in the system.
cluster1::> system service-processor image update -node local -update-type full -baseline true -clear-logs true
cluster1::>
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor log show-allocations command displays the allocation map of the
Service Processor logs collected in the cluster. The Service Processor logs of a node are archived in the mroot
directory of the collecting node. This command displays the sequence numbers for the Service Processor log
files that reside in each collecting node.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
If you specify this parameter, the command displays the sequence numbers of Service Processor log files
that the specified node has collected.
[-remote-node <text>] - Remote Node
If you specify this parameter, the command displays the sequence numbers of Service Processor log files
that have been collected from the specified remote node.
[-seqList <integer>, ...] - Log File Sequence Numbers
If you specify this parameter, the command displays information about Service Processor log files with
the specified sequence number.
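The allocation map described above relates a collecting node, a remote node, and the sequence numbers of the archived log files. A toy in-memory sketch of that relationship; the `alloc_map` structure and `sequences` helper are invented for illustration, not ONTAP data structures:

```python
# Each collecting node archives SP log files from remote nodes in its mroot
# directory; files are identified by sequence numbers (illustrative data).
alloc_map = {
    "node1": {"node1": [1, 2], "node2": [1, 2, 3]},
    "node2": {"node1": [4]},
}

def sequences(collecting_node, remote_node):
    """Sequence numbers of log files that collecting_node holds for remote_node."""
    return alloc_map.get(collecting_node, {}).get(remote_node, [])
```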
● Examples
The following example displays the allocation map of the Service Processor log files in the cluster.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor network modify command modifies the network configuration of the
Service Processor of the specified node or nodes in a cluster.
If the SP automatic network configuration has been enabled, the system service-processor network
modify command allows you only to enable or disable the SP IPv4 or IPv6 network interface.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node whose Service Processor's network configuration is to be modified.
-address-family {IPv4|IPv6} - Address Family
This parameter specifies whether the IPv4 or the IPv6 configuration is to be modified.
[-enable {true|false}] - Interface Enabled
This parameter enables or disables the underlying network interface for the specified address-family.
This is a mandatory parameter.
[-dhcp {v4|none}] - DHCP Status
If this parameter is set to v4, the Service Processor uses network configuration from the DHCP server.
Otherwise, the Service Processor uses the network address you specify. If this parameter is not set to v4
or is not specified, you must specify the IP address, netmask, prefix-length, and gateway in the
command. DHCP is not supported for IPv6 configuration.
[-ip-address <IP Address>] - IP Address
This parameter specifies the public IP address for the Service Processor. You must specify this parameter
when the -dhcp parameter is not set to v4.
{ [-netmask <IP Address>] - Netmask
This parameter specifies the netmask for a Service Processor that uses an IPv4 address. This parameter
has no effect if the IP address family is set to IPv6. You must specify this parameter when DHCP is not v4
and the address family is IPv4.
| [-prefix-length <integer>]} - Prefix Length of Subnet Mask
This parameter specifies the network prefix-length of the Service Processor if the address family is set to
IPv6. The parameter has no effect when the address family is set to IPv4. You must specify this
parameter when DHCP is not set to v4 and when the address family is set to IPv6.
[-gateway <IP Address>] - Gateway IP Address
This parameter specifies the network gateway of the Service Processor. You must specify this parameter
when DHCP is not set to v4.
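The static-versus-DHCP rules in the parameter descriptions above can be summarized in a small validator. This is an illustrative Python sketch of the documented constraints only, not ONTAP code; the function name and argument shapes are invented for the example:

```python
def validate_network_modify(family, dhcp, ip=None, netmask=None,
                            prefix_length=None, gateway=None):
    """Check the documented parameter rules for
    'system service-processor network modify' (illustrative only)."""
    errors = []
    # DHCP is not supported for IPv6 configuration.
    if family == "IPv6" and dhcp == "v4":
        errors.append("DHCP is not supported for IPv6 configuration")
    if dhcp != "v4":
        # Static configuration: address and gateway are required, plus a
        # netmask (IPv4) or prefix length (IPv6).
        if ip is None:
            errors.append("-ip-address is required when -dhcp is not v4")
        if gateway is None:
            errors.append("-gateway is required when -dhcp is not v4")
        if family == "IPv4" and netmask is None:
            errors.append("-netmask is required for IPv4 static configuration")
        if family == "IPv6" and prefix_length is None:
            errors.append("-prefix-length is required for IPv6 static configuration")
    return errors
```

For instance, a static IPv4 setup with address, netmask, and gateway passes cleanly, while omitting the netmask produces an error.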
● Examples
The following example enables the network interface for IPv4 on the Service Processor of the local node. It
first displays the current network configuration of the local node to show that the network interface is
initially disabled, and then enables it with IP address 192.168.1.202, netmask 255.255.255.0, and gateway
192.168.1.1. It displays the interim state with the SP Network Setup Status field showing "in-progress", and
finally displays the network configuration again to confirm that the specified values took effect.
Node: node2
Address Family: IPv4
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: not-setup
SP Network Setup Failure Reason: -
2 entries were displayed.
cluster1::>
cluster1::> system service-processor network modify -node local -address-family IPv4 -enable true -ip-address 192.168.1.202 -netmask 255.255.255.0 -gateway 192.168.1.1
cluster1::>
cluster1::> system service-processor network show -instance -node local
Node: node2
Address Family: IPv4
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: -
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: in-progress
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Node: node2
Address Family: IPv4
Interface Enabled: true
Type of Device: SP
Status: online
Link Status: up
DHCP Status: none
IP Address: 192.168.1.202
MAC Address: ab:cd:ef:fe:ed:02
Netmask: 255.255.255.0
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: 192.168.1.1
Time Last Updated: Fri Jun 13 16:29:55 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor network show command displays the network configuration of the
Service Processor of each node in a cluster. You can limit output to specific types of information and specific
nodes in the cluster, or filter output by specific field values.
If a node is offline or its Service Processor management daemon is down, the command displays only the
last known IP address of its Service Processor.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects network configuration information for the Service Processor of the specified node.
[-address-family {IPv4|IPv6}] - Address Family
Selects network configuration information for the Service Processors that have the specified IP address
family.
[-enable {true|false}] - Interface Enabled
Selects network configuration information for the Service Processors whose network interface for the
given address-family is enabled or disabled as specified.
[-type {SP|NONE|BMC}] - Type of Device
Selects network configuration information for the Service Processors of the specified type.
[-status {online|offline|sp-daemon-offline|node-offline|degraded|rebooting|unknown|updating}] -
Status
Selects network configuration information for the Service Processors whose status matches the specified
value.
● Examples
The following example displays basic network configuration information for the Service Processors of all the
nodes.
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
DHCP: none
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: -
Network Mask (IPv4 only): -
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: not-setup
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:02
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
DHCP: none
MAC Address: ab:cd:ef:fe:ed:02
Network Gateway: -
Network Mask (IPv4 only): -
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: not-setup
cluster1::>
The following example displays all available network configuration information for the Service Processors of
all the nodes.
Node: node1
Address Family: IPv4
Interface Enabled: true
Type of Device: SP
Status: online
Link Status: up
DHCP Status: v4
IP Address: 192.168.1.201
MAC Address: ab:cd:ef:fe:ed:01
Netmask: 255.255.255.0
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: 192.168.1.1
Time Last Updated: Fri Jun 13 17:03:59 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
Node: node1
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:01
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 17:03:59 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: not-setup
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv4
Interface Enabled: true
Type of Device: SP
Status: online
Link Status: up
DHCP Status: v4
IP Address: 192.168.1.202
MAC Address: ab:cd:ef:fe:ed:02
Netmask: 255.255.255.0
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: 192.168.1.1
Time Last Updated: Fri Jun 13 17:03:59 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: succeeded
SP Network Setup Failure Reason: -
Node: node2
Address Family: IPv6
Interface Enabled: false
Type of Device: SP
Status: online
Link Status: disabled
DHCP Status: none
IP Address: -
MAC Address: ab:cd:ef:fe:ed:02
Netmask: -
Prefix Length of Subnet Mask: -
Router Assigned IP Address: -
Link Local IP Address: -
Gateway IP Address: -
Time Last Updated: Fri Jun 13 17:03:59 GMT 2014
Subnet Name: -
Enable IPv6 Router Assigned Address: -
SP Network Setup Status: not-setup
SP Network Setup Failure Reason: -
4 entries were displayed.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor network auto-configuration disable command disables the SP's
use of subnet resource for the automatic configuration of its networking port. This command is a cluster-
wide configuration. When you disable the SP automatic network configuration, all SPs in the cluster will be
configured to use IPv4 DHCP. Any addresses previously allocated from the subnet to the SP will be released.
If the SP fails to obtain an IPv4 IP address from the DHCP server, an EMS message warns you about the
failure. The IPv6 interface will be disabled.
● Parameters
-address-family {IPv4|IPv6} - Subnet Address Family
This parameter specifies whether the IPv4 or the IPv6 automatic configuration is to be disabled for the
SP.
● Examples
The following example disables the automatic configuration for IPv4 on the SP. It first displays the current
network configuration and then disables the SP IPv4 automatic network configuration.
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor network auto-configuration enable command enables the
automatic network configuration for the SP. This is a cluster-wide configuration. Every node in the cluster
uses the specified subnet to allocate the IP address, subnet mask, and gateway address for its SP
configuration. When the SP automatic network configuration is enabled, you do not need to manually
manage the SP network of individual nodes. A node that subsequently joins the cluster uses the specified
subnet to configure its SP network automatically.
Prior to running this command, the subnet you want to use for the SP automatic network configuration must
already be defined in the cluster and must have no resource conflicts with the SP network interface.
● Parameters
-address-family {IPv4|IPv6} - Subnet Address Family
This parameter specifies whether the IPv4 or the IPv6 automatic configuration is to be enabled for the
SP.
-subnet-name <text> - Subnet Name
This parameter specifies the network subnet that the SP will use for automatic network configuration.
● Examples
The following example enables the automatic network configuration for IPv4 on the SP. It first displays the
current SP network configuration, displays the available network subnets in the cluster, and then enables
the SP to use the subnet for IPv4 automatic configuration.
DHCP: v4
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: -
SP Network Setup Status: succeeded
IPspace: Default
Subnet Broadcast Avail/
Name Subnet Domain Gateway Total Ranges
--------- ---------------- --------- --------------- --------- ---------------
ipv4_test 192.168.1.0/24 Default 192.168.1.1 3/5 192.168.1.2-192.168.1.6
DHCP: none
MAC Address: ab:cd:ef:fe:ed:01
Network Gateway: 192.168.1.1
Network Mask (IPv4 only): 255.255.255.0
Prefix Length (IPv6 only): -
IPv6 RA Enabled: -
Subnet Name: ipv4_test
SP Network Setup Status: succeeded
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor network auto-configuration show command displays the names
of the IPv4 and IPv6 network subnet objects configured in the cluster that the SP uses for automatic
configuration.
● Examples
The following example shows that the SP is configured to use the "ipv4_test" IPv4 subnet in the cluster for
the SP automatic network configuration.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor ssh add-allowed-addresses command grants IP addresses access
to the Service Processor.
● Parameters
-allowed-addresses <IP Address/Mask>, ... - Public IP Addresses
Use this parameter to specify one or more IP addresses with corresponding netmasks. The value should
be specified in the format of address/netmask, for example, 10.98.150.10/24, fd20:8b1e:b255:c09b::/64.
Use commas to separate multiple address/netmask pairs. If "0.0.0.0/0, ::/0" is specified in the parameter,
any IP address is allowed to access the Service Processor.
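The address/netmask format described above maps directly onto Python's standard `ipaddress` module, which can be used to normalize such a list on the client side. The helper name and the allow-all check below are illustrative assumptions, not part of the ONTAP CLI:

```python
import ipaddress

# The documented "allow everything" pair: all of IPv4 plus all of IPv6.
ALLOW_ALL = {ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("::/0")}

def parse_allowed_addresses(spec):
    """Parse a comma-separated address/netmask list (e.g. the value passed to
    -allowed-addresses) into network objects. strict=False accepts entries
    like 10.98.150.10/24 whose host bits are set."""
    return [ipaddress.ip_network(tok.strip(), strict=False)
            for tok in spec.split(",") if tok.strip()]

nets = parse_allowed_addresses("10.98.150.10/24, fd20:8b1e:b255:c09b::/64")
allows_all = set(parse_allowed_addresses("0.0.0.0/0, ::/0")) == ALLOW_ALL
```

Note that `10.98.150.10/24` normalizes to the network `10.98.150.0/24`; the SP access check operates on the network, not the literal host address.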
● Examples
The following examples grant the specified IP addresses access to the Service Processor and display the list
of public IP addresses that are allowed to access the Service Processor.
The following example enables all IP addresses to access the Service Processor.
cluster1::> system service-processor ssh add-allowed-addresses -allowed-addresses 0.0.0.0/0, ::/0
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor ssh remove-allowed-addresses command blocks the specified IP
address from accessing the Service Processor. If all IP addresses are removed from the access list, then the
Service Processor is not accessible from any IP address.
● Parameters
-allowed-addresses <IP Address/Mask>, ... - Public IP Addresses
Use this parameter to specify one or more IP addresses with corresponding netmasks. The value should
be specified in the format of address/netmask, for example, 10.98.150.10/24, fd20:8b1e:b255:c09b::/64.
Use commas to separate multiple address/netmask pairs.
● Examples
The following example prevents the specified IP addresses from accessing the Service Processor. It also
displays the list of public IP addresses that are allowed to access the Service Processor.
Warning: If all IP addresses are removed from the allowed address list, all IP
addresses will be denied access. To restore the "allow all" default,
use the "system service-processor ssh add-allowed-addresses
-allowed-addresses 0.0.0.0/0, ::/0" command. Do you want to continue?
{y|n}: y
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system service-processor ssh show command displays the IP addresses that are allowed to
access the Service Processor by using SSH.
● Examples
The following example displays SSH security information about the Service Processor.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services firewall modify command modifies a node's firewall configuration.
● Parameters
-node {<nodename>|local} - Node
Use this parameter to specify the node on which to modify firewall configuration.
[-enabled {true|false}] - Service Enabled
Use this parameter to specify whether firewall protection is enabled ("true") or disabled ("false") for the
node's network ports. The default setting is true.
[-logging {true|false}] - (DEPRECATED)-Enable Logging
Note
This parameter is deprecated and may be removed in a future version of Data ONTAP.
Use this parameter to specify whether logging is enabled ("true") or disabled ("false") for the firewall
service. The default setting is false.
● Examples
The following example enables firewall protection and logging for a node named node1:
cluster1::> system services firewall modify -node node1 -enabled true -logging true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services firewall show command displays firewall configuration and logging
information. If the command is issued without any parameters, it displays information about all nodes in
the cluster. You can also query specific nodes for their firewall information by running the command with
the -node parameter.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the firewall settings on the node you specify.
[-enabled {true|false}] - Service Enabled
Selects information about the nodes with the firewall enabled ("true") or disabled ("false").
[-logging {true|false}] - (DEPRECATED)-Enable Logging
Note
This parameter is deprecated and may be removed in a future version of Data ONTAP.
Selects information about the nodes with firewall logging enabled ("true") or disabled ("false").
● Examples
The following example displays information about firewall configuration for all nodes in the cluster:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services firewall policy clone command creates a new firewall policy that is an exact
copy of an existing policy, but has a new name.
● Parameters
-vserver <text> - Vserver owning the Policy
Use this parameter to specify the name of the Vserver owning the existing policy to copy.
-policy <text> - Firewall Policy to be Cloned
Use this parameter to specify the name of the existing policy to copy.
[-destination-vserver <text>] - Vserver owning the New Firewall Policy
Use this parameter to specify the name of the Vserver that will own the new policy to create.
-destination-policy <text> - Name of New Firewall Policy
Use this parameter to specify the name of the new policy to create.
● Examples
This example creates a new firewall policy named "data2" on Vserver "vs1" from an existing firewall policy
named "data" on Vserver "vs0".
cluster1::> system services firewall policy clone -vserver vs0 -policy data -destination-vserver vs1 -destination-policy data2
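The clone operation amounts to a deep copy of an existing policy under a new (Vserver, policy) name. A toy in-memory sketch of that semantics, with an invented `policies` dict standing in for the cluster configuration:

```python
import copy

# Toy model: policies keyed by (vserver, policy); each policy maps a
# network service to its allow-list (purely illustrative data).
policies = {
    ("vs0", "data"): {"ndmp": ["192.0.2.128/25"], "dns": ["192.0.2.128/25"]},
}

def clone_policy(vserver, policy, dest_vserver, dest_policy):
    """Create an exact, independent copy of an existing policy under a new name."""
    src = policies[(vserver, policy)]
    if (dest_vserver, dest_policy) in policies:
        raise ValueError("destination policy already exists")
    policies[(dest_vserver, dest_policy)] = copy.deepcopy(src)

clone_policy("vs0", "data", "vs1", "data2")
```

The deep copy matters: later edits to "data2" must not leak back into "data", mirroring the fact that the cloned firewall policy is independent of its source.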
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services firewall policy create command creates a firewall policy entry with the
specified name and network service. This command is used both to create the first network service
associated with a new firewall policy, and to add to an existing firewall policy by associating another
network service with an existing policy. You can optionally specify one or more IP addresses with
corresponding netmasks that are allowed to use the firewall policy entry.
You can use the network interface modify command with the -firewall-policy parameter to put a
firewall policy into effect for a given logical interface by modifying that logical interface to use the specified
firewall policy.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the name of the Vserver on which the policy is to be created.
-policy <textpolicy_name> - Policy
Use this parameter to specify the name of the policy that is to be created.
-service <service> - Service
Use this parameter to specify the network service that is associated with the policy. Possible values
include:
• dns - The DNS protocol server
• http - The HTTP protocol
• ndmp - The NDMP tape backup protocol
• ndmps - The NDMPS tape backup protocol
• none - No protocol (for creating an empty policy)
• ntp - The NTP protocol
• rsh - The RSH protocol
• snmp - The SNMP protocol
• telnet - The Telnet protocol
-allow-list <IP Address/Mask>, ... - Allowed IPs
Use this parameter to specify one or more IP addresses with corresponding netmasks that are to be
allowed by this firewall policy. The correct format for this parameter is address/netmask, similar to
"192.0.2.128/25". Multiple address/netmask pairs should be separated with commas. Use the value
0.0.0.0/0 for "any".
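The allow-list semantics above (address/netmask entries, with 0.0.0.0/0 meaning "any" IPv4 address) can be sketched as a membership test using Python's standard `ipaddress` module. The `ip_allowed` helper is hypothetical and purely illustrative of the documented behavior:

```python
import ipaddress

def ip_allowed(client_ip, allow_list):
    """Return True if client_ip falls within any allow-list entry.
    0.0.0.0/0 covers every IPv4 address, matching the documented 'any'."""
    addr = ipaddress.ip_address(client_ip)
    for entry in allow_list:
        net = ipaddress.ip_network(entry, strict=False)
        # Only compare within the same address family.
        if addr.version == net.version and addr in net:
            return True
    return False

inside = ip_allowed("192.0.2.130", ["192.0.2.128/25"])   # within the /25
outside = ip_allowed("192.0.3.1", ["192.0.2.128/25"])    # outside it
```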
● Examples
The following example creates a firewall policy named data that uses the NDMP protocol and enables access
from all IP addresses on the 192.0.2.128/25 subnet:
cluster1::> system services firewall policy create -policy data -service ndmp -allow-list 192.0.2.128/25
The following example adds an entry to the firewall policy named data, associating the DNS protocol with
that policy and enabling access from all IP addresses on the 192.0.2.128/25 subnet:
cluster1::> system services firewall policy create -policy data -service dns -allow-list 192.0.2.128/25
Related reference
network interface modify (page 414)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services firewall policy delete command deletes a firewall policy. You cannot delete
a policy that is being used by a logical interface. Use the network interface modify command with the
-firewall-policy parameter to change a network interface's firewall policy.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver of the policy to delete.
-policy <textpolicy_name> - Policy
Use this parameter to specify the name of the policy to delete.
-service <service> - Service
Use this parameter to specify the policy's network service to delete.
● Examples
The following example deletes a firewall policy that uses the Telnet protocol on the policy named data:
cluster1::> system services firewall policy delete -policy data -service telnet
Use wildcards to delete entire policies at once, or particular services from every policy. This example deletes
the entire intercluster policy.
Related reference
network interface modify (page 414)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services firewall policy modify command enables you to modify the list of IP addresses
and netmasks associated with a firewall policy.
● Parameters
-vserver <vserver> - Vserver Name
Use this parameter to specify the Vserver of the policy to modify.
-policy <textpolicy_name> - Policy
Use this parameter to specify the name of the policy to modify.
-service <service> - Service
Use this parameter to specify the policy's network service to modify.
[-allow-list <IP Address/Mask>, ...] - Allowed IPs
Use this parameter to specify one or more IP addresses with corresponding netmasks that are allowed by
this firewall policy. The correct format for this parameter is address/netmask, similar to "192.0.2.128/25".
Multiple address/netmask pairs should be separated with commas. Use the value 0.0.0.0/0 for "any".
● Examples
The following example modifies the firewall policy named data that uses the NDMP protocol to enable
access from all addresses on the 192.0.2.128 subnet:
cluster1::> system services firewall policy modify -policy data -service ndmp -allow-list 192.0.2.128/25
Related reference
system services firewall modify (page 1528)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
The system services firewall policy show command displays information about firewall policies.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>,... parameter, the command displays only the fields that
you specify.
| [-instance ]}
Use this parameter to display all the fields for the specified policies.
[-vserver <vserver>] - Vserver Name
Use this parameter to display information only about the Vserver you specify.
[-policy <textpolicy_name>] - Policy
Use this parameter to display information about the policy you specify.
[-service <service>] - Service
Use this parameter to display information about the services you specify.
[-allow-list <IP Address/Mask>, ...] - Allowed IPs
Use this parameter to display information about the firewall policies that match the list of allowed IP
addresses and netmasks you specify. The correct format for this parameter is address/netmask, similar to
"192.0.2.128/25". Multiple address/netmask pairs should be separated with commas.
[-ipspace <text>] - IPspace
Use this parameter to display information only about the IPspace you specify.
● Examples
The following example displays information about all firewall policies:
cluster1
intercluster
ndmp 0.0.0.0/0, ::/0
ndmps 0.0.0.0/0, ::/0
cluster1
mgmt
dns 0.0.0.0/0, ::/0
http 0.0.0.0/0, ::/0
ndmp 0.0.0.0/0, ::/0
ndmps 0.0.0.0/0, ::/0
ntp 0.0.0.0/0, ::/0
snmp 0.0.0.0/0, ::/0
cluster1
mgmt-nfs
dns 0.0.0.0/0, ::/0
http 0.0.0.0/0, ::/0
ndmp 0.0.0.0/0, ::/0
ndmps 0.0.0.0/0, ::/0
ntp 0.0.0.0/0, ::/0
snmp 0.0.0.0/0, ::/0
17 entries were displayed.
cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services manager install show command displays information about installed services.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-service <text>] - Service
Selects information about installed services that have the name you specify.
[-version <service version>] - Version
Selects information about installed services that have the version number you specify.
[-constituent <text>] - Constituent
Selects information about installed services that have the constituent process you specify.
● Examples
The following example shows typical output from a two-node cluster.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services manager policy add command adds a new service policy to the services
manager. Policies determine which versions of a service can run on the nodes of the cluster.
● Parameters
-service <text> - Service
Use this parameter to specify the name of the service for which to add a policy.
-version <service version> - Version
Use this parameter to specify the minimum version number of the service to run.
● Examples
This example adds a service manager policy for version 1.0 of the diagnosis service.
cluster1::> system services manager policy add -service diagnosis -version 1.0
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services manager policy remove command removes a policy from the services manager.
Policies determine which versions of a service can run on the nodes of the cluster.
● Parameters
-service <text> - Service
Use this parameter to specify the name of the service from which to remove a policy.
● Examples
The following example shows the removal of the service policy for version 1.0 of the diagnosis service.
cluster1::> system services manager policy remove -service diagnosis -version 1.0
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services manager policy setstate command enables or disables services manager
policies. Use the system services manager policy show command to display information about
configured policies.
● Parameters
-service <text> - Service
Use this parameter to set the state of the policy you specify.
-version <service version> - Version
Use this parameter to set the state of the policy with the version number you specify.
-state {on|off} - State
Use this parameter with the value "on" to enable the policy. Use this parameter with the value "off" to
disable the policy.
● Examples
The following example sets the policy for version 1.0 of the diagnosis service to off.
cluster1::> system services manager policy setstate -service diagnosis -version 1.0 -state off
Related reference
system services manager policy show (page 1535)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services manager policy show command displays information about policies that
determine which versions of a service can run on the nodes of the cluster.
Use the system services manager status show command to view information about services that are
configured to run in the cluster.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
● Examples
The following example shows typical output for this command.
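A minimal invocation, with no filtering parameters, looks like this (output not shown here):
cluster1::> system services manager policy show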
Related reference
system services manager status show (page 1536)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services manager status show command displays the status of system services that are
configured to run in the cluster.
System services run on the nodes of the cluster based on policies. Policies determine which versions of a
service can run on the nodes of the cluster. Use the system services manager policy show command
to view existing policies.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-service <text>] - Service
Selects information about services that match the service name you specify.
[-version <service version>] - Version
Selects information about services that are configured to run the version number you specify. The
configured version is the minimum version that is allowed to run in the cluster according to a policy. Use
the system services manager policy show command to view information about service policies.
[-constituent <text>] - Constituent
Selects information about services that have the constituent process you specify.
[-actual-version <service version>] - Actual Version
Selects information about services that are running the version number you specify. This number can be
higher than the configured version if a more recent version is installed on the node that is running the
service.
[-node <nodename>] - Node
Selects information about services that the services manager has assigned to run on the nodes you
specify. If the service state is "running", the service is running on these nodes.
[-state <svc_state>] - State
Selects information about services that are in the state you specify.
[-is-running {true|false}] - Is Running
Use this parameter with the value "true" to select information about services that are currently running.
Use this parameter with the value "false" to select information about services that are not currently
running.
● Examples
The example below shows typical output for a simple cluster.
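A minimal invocation, with no selection parameters, looks like this (output not shown here); any of the documented selection parameters, such as -node or -is-running true, can be appended to filter the results:
cluster1::> system services manager status show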
Related reference
system services manager policy show (page 1535)
These commands can be used to view or modify the configuration of the NDMP service across all the nodes in
the cluster.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services ndmp kill command is used to terminate a specific NDMP session on a particular
node in the cluster.
● Parameters
<integer> - Session Identifier
Session ID of the NDMP session.
● Examples
The following example shows how a specific NDMP session on the node named node1 can be terminated:
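Based on the syntax above, such an invocation would take the following form (the session ID 4952 is illustrative, and the -node selector is an assumption inferred from the description, since only the session identifier is listed under Parameters):
cluster1::> system services ndmp kill 4952 -node node1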
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services ndmp kill-all command is used to terminate all NDMP sessions on a particular
node in the cluster.
● Parameters
-node {<nodename>|local} - Node
Node on which all NDMP sessions need to be terminated.
● Examples
The following example shows how all NDMP sessions on the node named node1 can be terminated:
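Based on the parameter described above, the invocation would be:
cluster1::> system services ndmp kill-all -node node1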
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp modify"
command.
The system services ndmp modify command allows you to modify the NDMP configurations for a node
in the cluster. One or more of the following configurations can be modified:
• Enable/disable NDMP service
• Enable/disable sending the NDMP password in clear text. Note that MD5 authentication mode is always
enabled.
• NDMP user ID
● Parameters
-node {<nodename>|local} - Node
This specifies the node whose NDMP configuration is to be modified.
[-enable {true|false}] - NDMP Service Enabled
This optionally specifies whether NDMP is enabled on the node. The default setting is true.
[-clear-text {true|false}] - Allow Clear Text Password
This optionally specifies whether the NDMP password can be sent in clear text. The default setting is
true.
[-user-id <text>] - NDMP User ID
This optionally specifies the ID of the NDMP user.
● Examples
The following example modifies the NDMP configuration on a node named node1. The configuration
enables NDMP, disables sending the password in clear text, and specifies an NDMP user named ndmp:
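Using the parameters described above, the invocation would be:
cluster1::> system services ndmp modify -node node1 -enable true -clear-text false -user-id ndmp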
Related reference
vserver services ndmp modify (page 2453)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp off"
command.
The system services ndmp off command is used to disable the NDMP service on any node in the
cluster.
● Parameters
-node {<nodename>|local} - Node
The specific node on which NDMP service is to be disabled.
● Examples
The following example is used to turn off the NDMP service on node named node1:
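Based on the parameter described above, the invocation would be:
cluster1::> system services ndmp off -node node1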
Related reference
vserver services ndmp off (page 2458)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp on" command.
The system services ndmp on command is used to enable the NDMP service on any node in the
cluster.
● Parameters
-node {<nodename>|local} - Node
The specific node on which the NDMP service is to be enabled.
● Examples
The following example is used to turn on the NDMP service on node named node1:
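Based on the parameter described above, the invocation would be:
cluster1::> system services ndmp on -node node1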
Related reference
vserver services ndmp on (page 2458)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp generate-
password" command.
The system services ndmp password command is used to change the NDMP password for a node in the
cluster.
● Parameters
-node {<nodename>|local} - Node
The specific node for which the password is to be changed.
● Examples
The following example is used to change the NDMP password for the node named node1:
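Based on the parameter described above, the invocation would be (any subsequent password prompt is not shown here):
cluster1::> system services ndmp password -node node1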
Related reference
vserver services ndmp generate-password (page 2452)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services ndmp probe command displays diagnostic information about all the NDMP
sessions in the cluster. The following fields are displayed for each of the sessions:
• Node
• Session identifier
• NDMP version
• Session authorized
• Data state
• Data operation
• Data server halt reason
• Data server connect type
• Data server connect address
• Data server connect port
• Data bytes processed
• Mover state
• Mover mode
• Mover pause reason
• Mover halt reason
• Mover record size
• Mover record number
• Mover bytes moved
• Mover seek position
• Mover bytes left to read
• Mover window offset
• Mover window length
• Mover position
• Mover SetRecordSize flag
• Mover SetWindow flag
• Mover connect type
• Mover connect address
• Mover connect port
• Effective host
• NDMP client address
• NDMP client port
• SCSI device ID
• SCSI hostadapter
• SCSI target ID
• SCSI LUN ID
• Tape device
• Tape mode
• Is Secure Control Connection
• Data Backup Mode
• Data Path
• NDMP Source Address
● Parameters
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information about the sessions running on the
specified node only. Node should be a valid node name.
[-session-id <integer>] - Session Identifier
If this parameter is specified, the command displays information only about the specified session.
[-ndmp-version <integer>] - NDMP Version
This parameter refers to the NDMP protocol version being used in the session.
[-session-authorized {true|false}] - Session Authorized
This parameter indicates whether an NDMP session is authenticated or not.
[-data-state <component state>] - Data State
This parameter identifies the current state of the data server's state machine.
[-data-operation <data operation>] - Data Operation
This parameter identifies the data server's current operation.
[-data-halt-reason <halt reason>] - Data Server Halt Reason
This parameter identifies the event that caused the data server state machine to enter the HALTED state.
[-data-con-addr-type <address type>] - Data Server Connect Type
This parameter specifies the type of data connection established by the data server. The data connection
can be established locally within a given system or between remote networked systems.
[-data-con-addr <text>] - Data Server Connect Address
This parameter specifies the connection endpoint information for the data server's data connection.
[-data-con-port <integer>] - Data Server Connect Port
This parameter specifies the TCP/IP port that the data server will use when establishing a data
connection.
[-data-bytes-processed <integer>] - Data Bytes Processed
This parameter represents the cumulative number of data stream bytes transferred between the backup
or recovery method and the data connection during the current data operation.
[-mover-state <component state>] - Mover State
This parameter identifies the current state of the NDMP tape server's mover state machine.
[-mover-mode <mover mode>] - Mover Mode
This parameter identifies the direction of the mover data transfer.
[-mover-pause-reason <pause reason>] - Mover Pause Reason
This parameter identifies the event that caused the mover state machine to enter the PAUSED state.
[-mover-halt-reason <halt reason>] - Mover Halt Reason
This parameter identifies the event that caused the mover state machine to enter the HALTED state.
[-mover-record-size <integer>] - Mover Record Size
This parameter represents the current mover record size in bytes.
[-mover-record-num <integer>] - Mover Record Number
This parameter represents the last tape record processed by the mover.
[-mover-bytes-moved <integer>] - Mover Bytes Moved
This parameter represents the cumulative number of data stream bytes written to the data connection
or the number of data stream bytes read from the data connection and written to the tape subsystem,
depending on the mode of mover operation.
[-mover-seek-position <integer>] - Mover Seek Position
This parameter represents the data stream offset of the first byte the DMA requested the mover to
transfer to the data connection during a mover read operation.
[-mover-bytes-left-to-read <integer>] - Mover Bytes Left to Read
This parameter represents the number of data bytes remaining to be transferred to the data connection
to satisfy the current NDMP_MOVER_READ request.
[-mover-window-offset <integer>] - Mover Window Offset
This parameter represents the absolute offset of the first byte of the mover window within the overall
data stream.
[-mover-window-length <integer>] - Mover Window Length
This parameter represents the length of the current mover window in bytes.
[-mover-position <integer>] - Mover Position
This parameter can be used to list only those sessions whose mover position matches a specific value.
The mover position should be an integer.
[-mover-setrecordsize-flag {true|false}] - Mover SetRecordSize Flag
This parameter is used by the DMA to establish the record size used for mover-initiated tape read and
write operations.
[-mover-setwindow-flag {true|false}] - Mover SetWindow Flag
This parameter represents whether a mover window has been set or not. A mover window represents the
portion of the overall backup stream that is accessible to the mover without intervening DMA tape
manipulation.
[-mover-con-addr-type <address type>] - Mover Connect Type
This parameter specifies the type of data connection established by the mover. The data connection can
be established locally within a given system or between remote networked systems.
[-mover-con-addr <text>] - Mover Connect Address
This parameter specifies the endpoint address or addresses that the mover will use when establishing a
data connection.
[-mover-con-port <integer>] - Mover Connect Port
This parameter specifies the TCP/IP port that the mover will use when establishing a data connection.
[-eff-host <host type>] - Effective Host
This parameter indicates the host context in which the NDMP session runs. The valid values are:
PRIMARY or PARTNER.
[-client-addr <text>] - NDMP Client Address
This parameter specifies the client's IP address.
[-client-port <integer>] - NDMP Client Port
This parameter specifies the client's port number.
[-spt-device-id <text>] - SCSI Device ID
This parameter specifies the SCSI device ID.
[-spt-ha <integer>] - SCSI Host Adapter
This parameter specifies the SCSI host adapter.
[-spt-scsi-id <integer>] - SCSI Target ID
This parameter specifies the SCSI target.
[-spt-scsi-lun <integer>] - SCSI LUN ID
This parameter specifies the SCSI LUN ID.
● Examples
The following example displays diagnostic information about all the sessions in the cluster:
Node: cluster1-01
Session identifier: 4952
NDMP version: 4
Session authorized: true
Data state: IDLE
Data operation: NOACTION
Data server halt reason: NA
Data server connect type: LOCAL
....
...
Node: cluster1-02
Session identifier: 5289
NDMP version: 4
Session authorized: true
Data state: IDLE
Data operation: NOACTION
Data server halt reason: NA
Data server connect type: LOCAL
....
...
The following example displays diagnostic information of sessions running on the node cluster1-01 only:
Node: cluster1-01
Session identifier: 4952
NDMP version: 4
Session authorized: true
Data state: IDLE
Data operation: NOACTION
Related reference
system services ndmp status (page 1546)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp show"
command.
The system services ndmp show command displays the following information about the NDMP
configuration across all the nodes in the cluster:
• Node name
• Whether NDMP is enabled on the node
• Whether sending the NDMP password in clear text is enabled on the node
• NDMP user ID
A combination of parameters can be optionally supplied to filter the results based on specific criteria.
● Parameters
{ [-fields <fieldname>, ...]
If this parameter is specified, the command displays only the fields that you specify.
| [-instance ]}
If this parameter is specified, the command displays detailed information about all entries.
[-node {<nodename>|local}] - Node
Selects information about the specified node.
[-enable {true|false}] - NDMP Service Enabled
Selects information about the nodes where NDMP is enabled/disabled.
[-clear-text {true|false}] - Allow Clear Text Password
Selects information about the nodes whose clear-text setting matches the specified value.
[-user-id <text>] - NDMP User ID
Selects information about the nodes that have the specified NDMP user ID.
● Examples
The following example displays information about the NDMP configuration of all nodes in the cluster:
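A minimal invocation looks like this (output not shown here); a documented selection parameter such as -enable can be appended to filter the results:
cluster1::> system services ndmp show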
Related reference
vserver services ndmp show (page 2463)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system services ndmp status command lists all the NDMP sessions in the cluster. By default it lists
the following details about the active sessions:
• Node
• Session ID
A combination of parameters can be optionally supplied so as to list only those sessions that match
specific conditions. A short description of each parameter is provided in the Parameters section.
● Parameters
{ [-fields <fieldname>, ...]
This optional parameter specifies which additional fields to display. Any combination of the following
fields is valid:
• ndmp-version
• session-authorized
• data-state
• data-operation
• data-halt-reason
• data-con-addr-type
• data-con-addr
• data-con-port
• data-bytes-processed
• mover-state
• mover-mode
• mover-pause-reason
• mover-halt-reason
• mover-record-size
• mover-record-num
• mover-bytes-moved
• mover-seek-position
• mover-bytes-left-to-read
• mover-window-offset
• mover-window-length
• mover-position
• mover-setrecordsize-flag
• mover-setwindow-flag
• mover-con-addr-type
• mover-con-addr
• mover-con-port
• eff-host
• client-addr
• client-port
• spt-device-id
• spt-ha
• spt-scsi-id
• spt-scsi-lun
• tape-device
• tape-mode
• is-secure-control-connection
• data-backup-mode
• data-path
• source-addr
| [-instance ]}
If this parameter is specified, the command displays detailed information about all the active sessions.
[-node {<nodename>|local}] - Node
If this parameter is specified, the command displays information about the sessions running on the
specified node only. Node should be a valid node name.
[-session-id <integer>] - Session Identifier
If this parameter is specified, the command displays information about a specific NDMP session. A session-
id is a number used to identify a particular NDMP session.
[-ndmp-version <integer>] - NDMP Version
This parameter refers to the NDMP protocol version being used in the session.
[-session-authorized {true|false}] - Session Authorized
This field indicates whether an NDMP session is authenticated or not.
[-data-state <component state>] - Data State
This field identifies the current state of the data server's state machine.
[-data-operation <data operation>] - Data Operation
This field identifies the data server's current operation.
[-data-halt-reason <halt reason>] - Data Server Halt Reason
This field identifies the event that caused the data server state machine to enter the HALTED state.
[-data-con-addr-type <address type>] - Data Server Connect Type
This field specifies the type of data connection established by the data server. The data connection can
be established locally within a given system or between remote networked systems.
[-data-con-addr <text>] - Data Server Connect Address
This specifies the connection endpoint information for the data server's data connection.
[-data-con-port <integer>] - Data Server Connect Port
This specifies the TCP/IP port that the data server will use when establishing a data connection.
[-data-bytes-processed <integer>] - Data Bytes Processed
This field represents the cumulative number of data stream bytes transferred between the backup or
recovery method and the data connection during the current data operation.
[-mover-state <component state>] - Mover State
This parameter identifies the current state of the NDMP tape server's mover state machine.
[-mover-mode <mover mode>] - Mover Mode
This parameter identifies the direction of the mover data transfer.
[-mover-pause-reason <pause reason>] - Mover Pause Reason
This parameter identifies the event that caused the mover state machine to enter the PAUSED state.
[-mover-halt-reason <halt reason>] - Mover Halt Reason
This field identifies the event that caused the mover state machine to enter the HALTED state.
[-mover-record-size <integer>] - Mover Record Size
This field represents the current mover record size in bytes.
[-mover-record-num <integer>] - Mover Record Number
This field represents the last tape record processed by the mover.
[-mover-bytes-moved <integer>] - Mover Bytes Moved
This field represents the cumulative number of data stream bytes written to the data connection or the
number of data stream bytes read from the data connection and written to the tape subsystem,
depending on the mode of mover operation.
[-mover-seek-position <integer>] - Mover Seek Position
This field represents the data stream offset of the first byte the DMA requested the mover to transfer to
the data connection during a mover read operation.
[-mover-bytes-left-to-read <integer>] - Mover Bytes Left to Read
This field represents the number of data bytes remaining to be transferred to the data connection to
satisfy the current NDMP_MOVER_READ request.
[-mover-window-offset <integer>] - Mover Window Offset
This field represents the absolute offset of the first byte of the mover window within the overall data
stream.
[-mover-window-length <integer>] - Mover Window Length
This field represents the length of the current mover window in bytes.
[-mover-position <integer>] - Mover Position
This parameter can be used to list only those sessions whose mover position matches a specific value.
The mover position should be an integer.
[-mover-setrecordsize-flag {true|false}] - Mover SetRecordSize Flag
This field is used by the DMA to establish the record size used for mover-initiated tape read and write
operations.
[-mover-setwindow-flag {true|false}] - Mover SetWindow Flag
This flag represents whether a mover window has been set or not. A mover window represents the
portion of the overall backup stream that is accessible to the mover without intervening DMA tape
manipulation.
[-mover-con-addr-type <address type>] - Mover Connect Type
This field specifies the type of data connection established by the mover. The data connection can be
established locally within a given system or between remote networked systems.
[-mover-con-addr <text>] - Mover Connect Address
This specifies the endpoint address or addresses that the mover will use when establishing a data
connection.
[-mover-con-port <integer>] - Mover Connect Port
This specifies the TCP/IP port that the mover will use when establishing a data connection.
[-eff-host <host type>] - Effective Host
This field indicates the host context in which the NDMP session runs. The valid values are: PRIMARY or
PARTNER.
[-client-addr <text>] - NDMP Client Address
This parameter specifies the client's IP address.
[-client-port <integer>] - NDMP Client Port
This parameter specifies the client's port number.
[-spt-device-id <text>] - SCSI Device ID
This parameter specifies the SCSI device ID.
[-spt-ha <integer>] - SCSI Host Adapter
This parameter specifies the SCSI host adapter.
[-spt-scsi-id <integer>] - SCSI Target ID
This parameter specifies the SCSI target.
[-spt-scsi-lun <integer>] - SCSI LUN ID
This parameter specifies the SCSI LUN ID.
[-tape-device <text>] - Tape Device
This parameter specifies the name to identify the tape device.
[-tape-mode <mover mode>] - Tape Mode
This parameter specifies the mode in which tapes are opened.
[-is-secure-control-connection {true|false}] - Is Secure Control Connection
This parameter specifies whether the control connection is secure or not.
[-data-backup-mode <text>] - Data Backup Mode
This parameter specifies whether the mode of data backup is Dump or SMTape.
[-data-path <text>] - Data Path
This parameter specifies the path of data being backed up.
[-source-addr <text>] - NDMP Source Address
This parameter specifies the control connection IP address of the NDMP session.
● Examples
The following example displays all the NDMP sessions on the cluster:
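A minimal invocation looks like this (output not shown here):
cluster1::> system services ndmp status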
The following example shows how to display only the sessions running on node-01:
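Using the documented -node selection parameter, the invocation would be (output not shown here):
cluster1::> system services ndmp status -node node-01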
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp log start"
command.
This command is used to start logging on an active NDMP session on a node. You can start logging two
different kinds of sessions. The NDMP server session manages all NDMP tasks on the node. If you want to log
information regarding the NDMP server, use server with the -session-id parameter to enable logging. If
you want to log information about a particular NDMP session, for example a restore operation, then
determine the session ID for the session using the "system services ndmp status" command and use that ID
with the -session-id parameter to enable logging.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node.
-session-id {<integer>|server} - Session Identifier
This parameter specifies the NDMP session-id on which logging needs to be started. The session-id is
associated with a unique NDMP session. Specify server to start logging on the NDMP server session.
-filter <text> - Level Filter
Use this parameter to specify the filter for a particular session ID. This parameter controls the NDMP
modules for which logging is to be enabled. This parameter can take five values, as follows: all,
none, normal, backend, or "filter-expression". The default value is none.
• all turns on logging for all modules.
• none disables logging for all modules.
• normal is a shortcut that enables logging for all modules except verbose and io_loop. The
equivalent filter string is all-verbose-io_loop.
• backend is a shortcut that enables logging for all modules except verbose, io_loop, ndmps
and ndmpp. The equivalent filter string is all-verbose-io_loop-ndmps-ndmpp.
• "filter-expression" is a combination of one or more modules for which logging is to be enabled.
Multiple module names can be combined using the following operators:
- "-" removes the given module from the list of modules specified in the filter string. For
example, the filter all-ndmpp enables logging for all modules except ndmpp.
- "^" adds the given module or modules to the list of modules specified in the filter string. For
example, the filter ndmpp^mover^data enables logging for ndmpp, mover and data.
+------------+-----------------------------------+
| Modules    | Description                       |
+------------+-----------------------------------+
| verbose    | verbose message                   |
| io         | I/O process loop                  |
| io_loop    | I/O process loop verbose messages |
| ndmps      | NDMP service                      |
| ndmpp      | NDMP Protocol                     |
| rpc        | General RPC service               |
| fdc_rpc    | RPC to FC driver service          |
| auth       | Authentication                    |
| mover      | NDMP MOVER (tape I/O)             |
| data       | NDMP DATA (backup/restore)        |
| scsi       | NDMP SCSI (robot/tape ops)        |
| bkup_rpc   | RPC to Backup service client      |
| bkup_rpc_s | RPC to Backup service server      |
| cleaner    | Backup/Mover session cleaner      |
| conf       | Debug configure/reconfigure       |
| dblade     | Dblade specific messages          |
| timer      | NDMP server timeout messages      |
| vldb       | VLDB service                      |
| smf        | SMF Gateway messages              |
| vol        | VOL OPS service                   |
| sv         | SnapVault NDMP extension          |
| common     | NDMP common state                 |
| ext        | NDMP extensions messages          |
| sm         | SnapMirror NDMP extension         |
| ndmprpc    | NDMP Mhost RPC server             |
+------------+-----------------------------------+
● Examples
The following example shows how to start logging on a specific NDMP session 33522, running on the node
cluster1-01 with filter normal.
cluster1::*> system services ndmp log start -node cluster1-01 -session-id 33522 -filter normal
The following example shows how to start logging on the NDMP server session, on the node cluster1-01
with filter all.
cluster1::*> system services ndmp log start -session-id server -filter all -node cluster1-01
Related reference
vserver services ndmp log start (page 2474)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp log stop"
command.
This command is used to stop logging on an active NDMP session on a node. The NDMP server session
manages all NDMP tasks on the node. If you want to stop logging information regarding the NDMP server,
use server with the -session-id parameter to disable logging. If you want to stop logging information
about a particular NDMP session, for example a restore operation, then determine the session ID for the
session using the "system services ndmp status" command and use that ID with the -session-id
parameter to disable logging.
● Parameters
-node {<nodename>|local} - Node
This parameter specifies the node.
-session-id {<integer>|server} - Session Identifier
This parameter specifies the NDMP session-id on which logging needs to be stopped. The session-id is
associated with a unique NDMP session. Specify server to stop logging on the NDMP server session.
● Examples
The following example shows how to stop logging on a specific NDMP session 35512, running on node
cluster1-01.
cluster1::*> system services ndmp log stop -session-id 35512 -node cluster1-01
The following example shows how to stop logging on the NDMP server session, running on node
cluster1-01.
cluster1::*> system services ndmp log stop -session-id server -node cluster1-01
Related reference
vserver services ndmp log stop (page 2476)
Note
These node-scoped NDMP commands are deprecated. Node-scoped NDMP functionality may be
removed in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp"
commands.
Related reference
vserver services ndmp commands (page 2452)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp" command.
This command puts the NDMP server in Vserver-aware mode. The Vserver-aware commands are available under
vserver services ndmp.
● Examples
The following example shows how to disable the node-scope-mode of the NDMP server.
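Assuming the command path system services ndmp node-scope-mode, which this section describes, the invocation would be:
cluster1::> system services ndmp node-scope-mode off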
Related reference
vserver services ndmp commands (page 2452)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp" command.
This command puts the NDMP server in node-scope-mode. In node-scope-mode, the NDMP server has
the following behavior:
• All NDMP operations are restricted to resources on the node
• Vserver-aware NDMP commands are disabled
• NDMP authentication falls back to the Data ONTAP 8.1 NDMP authentication scheme
● Examples
The following example enables the node-scope-mode of operation:
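Assuming the command path system services ndmp node-scope-mode, which this section describes, the invocation would be:
cluster1::> system services ndmp node-scope-mode on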
Related reference
vserver services ndmp commands (page 2452)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Note
This node-scoped NDMP command is deprecated. Node-scoped NDMP functionality may be removed
in a future release of Data ONTAP. Use the Vserver-aware "vserver services ndmp" command.
This command displays whether the NDMP server is operating in node-scope-mode or not.
• NDMP node-scope-mode is disabled - NDMP server is Vserver-aware
• NDMP node-scope-mode is enabled - NDMP server is node scoped
● Examples
The following example shows how to check the status of the NDMP server in a cluster:
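Assuming the command path system services ndmp node-scope-mode, which this section describes, the invocation would be:
cluster1::> system services ndmp node-scope-mode status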
Related reference
vserver services ndmp commands (page 2452)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system services ndmp service modify command allows you to modify the NDMP service
configurations for a node in the cluster. The following configuration can be modified:
• NDMP Common Sessions
● Parameters
-node {<nodename>|local} - Node
This specifies the node whose NDMP configuration is to be modified.
[-common-sessions <integer>] - NDMP Common Sessions
This optional parameter specifies the number of extra common NDMP sessions supported, in addition to
the number of backup and restore sessions supported for a platform. The default value is 4 for all
platforms. The number of backup and restore sessions is platform dependent.
● Examples
The following example modifies the NDMP configuration on a node named node1. The configuration sets
the NDMP Common Sessions to 16:
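The example command itself does not appear in the text; based on the parameters above, the invocation would take this form (the prompt is illustrative):

```
cluster1::*> system services ndmp service modify -node node1 -common-sessions 16
```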
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system services ndmp service show command displays the following information about the
NDMP service configuration across all the nodes in the cluster:
• Node name
• NDMP Common Sessions
A combination of parameters can be optionally supplied to filter the results based on specific criteria.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects information about the specified node.
[-common-sessions <integer>] - NDMP Common Sessions
Selects information about the nodes that have the specified number of NDMP common sessions.
● Examples
The following example displays information about the NDMP configuration of all nodes in the cluster:
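The example invocation is not reproduced in this excerpt; it would be:

```
cluster1::*> system services ndmp service show
```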
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system services ndmp service start command starts the NDMP service daemon for a node. This
is different from the system services ndmp on command. The system services ndmp on command
enables the daemon to accept NDMP requests. The NDMP service daemon starts automatically on a node
when it boots up. Use this command to start the NDMP service daemon that has been stopped by the
system services ndmp service stop command.
● Parameters
-node {<nodename>|local} - Node
The node on which the NDMP service needs to be started.
● Examples
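No example command appears in this excerpt; a plausible invocation, with the node name node1 as an illustrative value, is:

```
cluster1::*> system services ndmp service start -node node1
```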
Related reference
system services ndmp on (page 1539)
system services ndmp service stop (page 1556)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system services ndmp service stop command stops the NDMP service daemon on a node. This is
a disruptive command and should not be used in normal scenarios. Processing of active sessions continues
but the ability to view or kill sessions is lost. This is different from the system services ndmp off
command. The system services ndmp off command disables new NDMP connections on the node but
does not stop the NDMP service daemon.
● Parameters
-node {<nodename>|local} - Node
The node on which the NDMP service needs to be stopped.
● Examples
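No example command appears in this excerpt; a plausible invocation, with the node name node1 as an illustrative value, is:

```
cluster1::*> system services ndmp service stop -node node1
```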
Related reference
system services ndmp off (page 1539)
system services ndmp service start (page 1555)
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system services ndmp service terminate command terminates all active sessions on the node.
This command forcefully terminates all NDMP sessions without an opportunity for a graceful shutdown. Use
system services ndmp kill-all for a clean termination of all active sessions on a node.
● Parameters
-node {<nodename>|local} - Node
The node on which the NDMP sessions need to be terminated.
● Examples
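No example command appears in this excerpt; a plausible invocation, with the node name node1 as an illustrative value, is:

```
cluster1::*> system services ndmp service terminate -node node1
```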
Related reference
system services ndmp kill-all (page 1538)
These commands manage the availability of web protocols (HTTP/HTTPS) in the cluster, including the port
and encryption configurations for those protocols.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command modifies the overall availability of web services in the cluster, including the core protocol
configurations for those services. In a pre-root or unclustered scenario, its scope applies to the local node.
● Parameters
[-external {true|false}] - External Web Services
Defines whether remote clients can access HTTP or HTTPS service content. Along with the system
services firewall configuration, this parameter controls the visibility for client connections. The
default value for this parameter after installation is 'true', which exports web protocols for remote
access. If no value is provided during modification, its behavior does not change.
[-per-address-limit <integer>] - Per Address Limit (privilege: advanced)
Limits the number of connections that can be processed concurrently from the same remote address. If
more connections are accepted, those in excess of the limit are delayed and processed after the number
of connections being processed drops below the limit. The default value is 96.
[-http-enabled {true|false}] - HTTP Enabled (privilege: advanced)
Defines whether HTTP is enabled. The default value for this parameter is false.
[-csrf-protection-enabled {true|false}] - CSRF Protection Enabled (privilege: advanced)
Defines whether CSRF protection is enabled. The default value is true.
[-csrf-token-concurrent-limit <integer>] - Maximum Number of Concurrent CSRF Tokens
(privilege: advanced)
Defines how many concurrent CSRF tokens can exist at any given time. The default value is 500.
[-csrf-token-idle-timeout <integer>] - CSRF Token Idle Timeout (Seconds) (privilege: advanced)
Defines how long (in seconds) an unused CSRF token will exist until it expires. The default value is 900
seconds (15 minutes).
[-csrf-token-absolute-timeout <integer>] - CSRF Token Absolute Timeout (Seconds) (privilege:
advanced)
Defines how long (in seconds) a CSRF token can exist regardless of usage. The default value is 0
(undefined), which means that the token never times out.
● Examples
The following command changes the maximum size of the wait queue:
Related reference
system services firewall commands (page 1527)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the overall availability of web services in the cluster, including the core protocol
configurations for those services. In a pre-root or unclustered scenario, its output applies to the local node.
The following information explains the External Web Services and Status attributes, two features of
web services' availability.
The External Web Services field indicates whether remote clients are allowed to access the HTTP or
HTTPS service content. Along with the system services firewall configuration, the External Web
Services field indicates the visibility for client connections.
The Status field describes the aggregated operational state of cluster-level web services as retrieved from
the system services web node command. The Status field does not reflect whether the protocols are
externally visible, but whether the server processes are running correctly. For detailed information about
individual servers, use the system services web node show command. The following are the possible
values for the Status field:
• online, all web services are consistently configured and working correctly.
• partial, one or more nodes' web services are unavailable due to an error condition.
• mixed, the nodes in the cluster do not share the same web services configuration. This situation might
occur if individual nodes were reconfigured with the system services web node command.
• offline, all of the nodes' web services are unavailable due to an error condition.
• unclustered, the current node is not part of an active cluster.
The HTTP Enabled field indicates whether HTTP is enabled.
The per-address-limit field limits the number of connections that can be processed concurrently
from the same remote address. If more connections are accepted, those in excess of the limit are delayed
and processed after the number of connections being processed drops below the limit.
● Examples
The following example displays the availability of web services for the cluster.
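The example invocation is not reproduced in this excerpt; it would be:

```
cluster1::> system services web show
```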
Related reference
system services firewall commands (page 1527)
These commands manage the availability of web protocols (HTTP/HTTPS) on specific nodes in the cluster,
including the port and encryption configurations for those protocols.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays operational configuration for the web server processes on the nodes in the cluster.
This output is aggregated to produce the content for the system services web show command.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-node {<nodename>|local}] - Node
Selects the nodes that match this parameter value. Identifies the node where the web server process is
being executed.
[-external {true|false}] - External Web Services
Selects the nodes that match this parameter value. Defines whether remote clients can access the HTTP
or HTTPS service content. Along with the system services firewall command configuration, this
parameter controls the visibility for client connections. The default value for this parameter after
installation is true, which exports web protocols for remote access.
[-http-port <integer>] - HTTP Port
Selects the nodes that match this parameter value. Defines the HTTP port for the node-level web
services.
[-https-port <integer>] - HTTPS Port
Selects the nodes that match this parameter value. Defines the encrypted HTTP (HTTPS) port for the
node-level web services.
[-http-enabled {true|false}] - HTTP Enabled
Selects the nodes that match this parameter value. Defines whether HTTP is enabled.
[-per-address-limit <integer>] - Per Address Limit (privilege: advanced)
Selects the nodes that match this parameter value. Limits the number of connections that can be
processed concurrently from the same remote address. If more connections are accepted, those in excess
of the limit are delayed and processed after the number of connections being processed drops below the
limit.
[-status {offline|partial|mixed|online|unclustered}] - Protocol Status
Selects the nodes that match this parameter value. Describes the operational state of node-level web
services. This parameter does not reflect whether protocols are externally visible, but whether the server
processes are running correctly. The following are the possible values that describe the service
availability:
• online, indicates that web services are working correctly.
• offline, indicates that web services are unavailable due to an error condition.
• unclustered, indicates that the current node is not part of an active cluster.
[-total-hits <integer>] - Total HTTP Requests
Selects the nodes that match this parameter value. Indicates the total number of requests serviced by
the web server.
[-total-bytes <integer>] - Total Bytes Served
Selects the nodes that match this parameter value. Indicates the total number of bytes returned by the
web server.
● Examples
The following example displays the status of web servers for nodes in the cluster.
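The example invocation is not reproduced in this excerpt; it would be:

```
cluster1::> system services web node show
```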
Related reference
system services firewall commands (page 1527)
system services web show (page 1558)
SMTape Commands
Manage SMTape operations
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command aborts the backup or restore operations based on the session identifier. You can perform
SMTape operations using the system smtape backup or system smtape restore commands. A unique
session identifier is assigned for each new SMTape operation. This command aborts sessions that are in
active and waiting states.
● Parameters
-session <Sequence Number> - Session Identifier
Use this parameter to specify the session identifier for a backup or restore session.
● Examples
Abort the SMTape session with the session identifier 20
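The command line for this example is missing from the excerpt; based on the parameters above, it would be:

```
cluster1::> system smtape abort -session 20
```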
Related reference
system smtape backup (page 1561)
system smtape restore (page 1563)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command performs a baseline backup of a specified volume path to a tape device. You can use the
command system hardware tape drive show to view the list of tape devices in the cluster. You must
specify a Snapshot copy name to perform an SMTape backup operation. The Snapshot copy name specified is
used as the base Snapshot copy. A new unique session ID is assigned for this SMTape operation and the
status of the session can be monitored using the command system smtape status. This session ID can
be subsequently used to perform other operations such as to find the SMTape status, abort an SMTape
operation, and continue an SMTape operation.
The volume and tape device must reside on the same node in the cluster. You must retain the base
Snapshot copy created during this backup operation in order to use this Snapshot copy to re-establish a
SnapMirror relationship upon a restore.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver name on which the volume is located. You need not specify
this parameter if only one cluster Vserver exists.
-volume <volume name> - Volume Name
Use this parameter to specify the name of the volume that needs to be backed up to tape.
-backup-snapshot <snapshot name> - Snapshot Name
Use this parameter to specify the name of the Snapshot copy while performing an SMTape backup
operation.
-tape </node_name/tape_device> - Tape Name
Use this parameter to specify the name of the tape device which is used for this SMTape operation. The
format of the tape device name is /node_name/tape_device, where node_name is the name of the
cluster node owning the tape and tape_device is the name of the tape device.
[-tape-block-size <integer>] - Tape Record Size in KB
Use this parameter to specify the tape record size in KB for backup and restore operations. The tape
record size must be a multiple of 4KB, ranging from 4KB to 256KB. If not specified, the default tape
record size of 240KB is used.
● Examples
The following example will start the backup of a volume datavol in a Vserver vserver0 to a tape rst0a. Both
the volume and tape reside on the same node cluster1-01. The Snapshot copy to be backed up is
datavol_snapshot and the tape record size has the value of 256KB.
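The command for this example is not shown in the excerpt; built from the values named above, it would be:

```
cluster1::> system smtape backup -vserver vserver0 -volume datavol
            -backup-snapshot datavol_snapshot -tape /cluster1-01/rst0a
            -tape-block-size 256
```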
The following example will start the backup of a volume datavol in a Vserver vserver0 to a tape rst0a. The
volume datavol is in a Vserver vserver0. Both the volume and tape reside on the same node cluster1-01. The
Snapshot copy to be backed up is datavol_snapshot and the tape record size has the default value of 240KB.
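The command for this example is not shown in the excerpt; it is the same invocation without -tape-block-size, so the 240KB default applies:

```
cluster1::> system smtape backup -vserver vserver0 -volume datavol
            -backup-snapshot datavol_snapshot -tape /cluster1-01/rst0a
```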
Related reference
system smtape status commands (page 1566)
system smtape restore (page 1563)
system smtape status show (page 1567)
system smtape continue (page 1563)
system smtape abort (page 1560)
system node hardware tape drive show (page 1466)
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
● Description
This command breaks the relationship between the tape backup of a volume and a restored volume,
changing the restored volume from read-only to read/write.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver name on which the volume is located. You need not specify
this parameter if only one cluster Vserver exists.
-volume <volume name> - Volume Name
Use this parameter to specify the name of the read-only volume that needs to be made read/writable
after a restore.
● Examples
Make the read-only volume datavol on Vserver vserver0 writeable after a restore.
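The command line for this example is missing from the excerpt; based on the parameters above, it would be:

```
cluster1::> system smtape break -vserver vserver0 -volume datavol
```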
Related reference
system smtape backup (page 1561)
system smtape restore (page 1563)
system node hardware tape drive show (page 1466)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command continues the SMTape backup and restore operations using the specified tape device. You can
use this command when an SMTape operation has reached the end of current tape and is in the wait state to
write to or read from a new tape.
If a tape device is not specified, the original tape device will be used.
Make sure that the correct tape media is inserted in the device and positioned appropriately before
issuing this command.
● Parameters
[-tape </node_name/tape_device>] - Tape Name
Use this parameter to specify the name of the tape device which is used for this SMTape operation. The
format of the tape device name is /node_name/tape_device, where node_name is the name of the
cluster node owning the tape and tape_device is the name of the tape device.
-session <Sequence Number> - Session Identifier
Use this parameter to specify the session identifier for the SMTape backup or restore operations.
● Examples
Continues an SMTape session having session ID 20 on tape device rst0a on the node node1 in the cluster.
The following example continues session 40 on the same tape device that was being used by the session.
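The command lines for the two examples above are missing from the excerpt; based on the parameters, they would be:

```
cluster1::> system smtape continue -session 20 -tape /node1/rst0a
cluster1::> system smtape continue -session 40
```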
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command restores a backup image, created using the system smtape backup command, from the
specified tape device to a destination volume path. A new unique session ID is assigned for this
operation; the status of the session can be monitored using the system smtape status command. The
volume and tape device must reside on the same cluster node. The volume must be of type DP
(Data Protection) and must be placed in restricted mode prior to a restore. Any existing data on the
volume is overwritten by a restore. The volume remains read-only and of type DP after the
restore. You can use the system smtape break command to get read/write permissions on the volume.
A restore can be done to a non-root DP volume.
● Parameters
-vserver <vserver name> - Vserver Name
Use this parameter to specify the Vserver name on which the volume is located. You need not specify
this parameter if only one cluster Vserver exists.
-volume <volume name> - Volume Name
Use this parameter to specify the volume name on which the tape content will be restored.
-tape </node_name/tape_device> - Tape Name
Use this parameter to specify the name of the tape device which is used for this SMTape operation. The
format of the tape device name is /node_name/tape_device, where node_name is the name of the
cluster node owning the tape and tape_device is the name of the tape device.
[-tape-block-size <integer>] - Tape Record Size in KB
Use this parameter to specify the tape record size in KB for backup and restore operations. The tape
record size must be a multiple of 4KB, ranging from 4KB to 256KB. If not specified, the default of
240KB is used. Use the same record size that was used during the backup; if the record size differs
from the one used at backup time, system smtape restore will fail.
● Examples
The following example will start the restore to a volume datavol from a tape rst0a. The volume datavol is in
a Vserver vserver0. Both vserver0 and rst0a reside on the same node cluster1-01.
The following example will start the restore to a volume datavol from a tape rst0a. The volume datavol is in
a Vserver vserver0. Both vserver0 and rst0a reside on the same node cluster1-01. The default tape record size
of 240KB was used during backup.
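The command lines for the restore examples above are missing from the excerpt; built from the values named, the invocation would be (the second example omits -tape-block-size and relies on the 240KB default):

```
cluster1::> system smtape restore -vserver vserver0 -volume datavol
            -tape /cluster1-01/rst0a
```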
Related reference
system smtape backup (page 1561)
system smtape status commands (page 1566)
system smtape break (page 1562)
system smtape status show (page 1567)
system smtape continue (page 1563)
system node hardware tape drive show (page 1466)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command displays the image header of a tape. The tape must have a valid backup of data. The
following information about the backup is displayed:
• Tape Number - the tape number if the backup spans multiple tape devices.
• WAFL Version - WAFL version of the storage system when the volume was backed up on tape.
• Backup Set ID - a unique backup set ID for the baseline backup.
• Source Storage System - the source storage system where the volume resided when the backup was
performed.
• Source Volume - the source volume that was backed up to tape.
• Source Volume Capacity - the capacity of the source volume that was backed up to tape.
• Source Volume Used Size - the used size of the source volume that was backed up to tape.
• Source Snapshot - name of the Snapshot copy used for the backup.
• Volume Type - type of the volume.
• Is SIS Volume - this field is true if the backed up volume was a SIS volume.
• Backup Version - the SMTape backup version.
• Backup Sequence No - the backup sequence number.
• Backup Mode - this field describes the backup mode.
• Time of Backup - the time at which the backup was performed.
• Time of Previous Backup - the time at which the previous backup was performed; this information is
displayed only if the previous backup was an incremental backup.
• Volume Total Inodes - number of inodes of the backed up volume.
• Volume Used Inodes - number of used inodes of the backed up volume.
• Number of Snapshots - number of Snapshot copies present in this backup.
• Snapshot ID - is the Snapshot ID of the backup Snapshot.
• Snapshot Time - time at which the backup Snapshot copy was created.
• Snapshot Name - name of the Snapshot copy which was backed up to tape.
● Parameters
-tape </node_name/tape_device> - Tape Name
Use this parameter to specify the name of the tape device which is used for this SMTape operation. The
format of the tape device name is /node_name/tape_device, where node_name is the name of the
cluster node owning the tape and tape_device is the name of the tape device.
[-tape-block-size <integer>] - Tape Record Size in KB
Use this parameter to specify the tape record size in KB for backup and restore operations. The tape
record size must be a multiple of 4KB, ranging from 4KB to 256KB. If not specified, the default tape
record size of 240KB is used.
● Examples
The following example reads the image header from the tape nrst0l residing on the node cluster1-01 and
displays relevant tape header information.
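The command line is missing from the excerpt; assuming the command is system smtape showheader (the name is not stated in this section), the invocation would be:

```
cluster1::> system smtape showheader -tape /cluster1-01/nrst0l
```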
Related reference
system smtape backup (page 1561)
system smtape restore (page 1563)
system node hardware tape drive show (page 1466)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command clears SMTape sessions which are in the completed, failed, or unknown state.
● Parameters
[-session <Sequence Number>] - Session Identifier
Use this parameter to clear the SMTape sessions with the specified session identifier.
[-node {<nodename>|local}] - Node Name
Use this parameter to clear the SMTape sessions related to the specified node.
[-type {backup|restore}] - Operation Type
Use this parameter to clear the SMTape sessions of the specified operation type. These can be either
backup or restore sessions.
[-status {COMPLETED|FAILED|UNKNOWN}] - Session Status
Use this parameter to clear the SMTape sessions which have the status as specified in the parameter.
[-path <text>] - Path Name
Use this parameter to clear the SMTape sessions which have path as specified in the parameter.
[-device <text>] - Device Name
Use this parameter to clear the SMTape sessions on a specific tape device.
[-backup-snapshot <snapshot name>] - Snapshot Name
Use this parameter to clear the SMTape sessions using the Snapshot copy name as specified in the
parameter.
[-tape-block-size <integer>] - Tape Block Size
Use this parameter to clear the SMTape sessions with the tape block size as specified in the parameter.
● Examples
The following example clears all the completed SMTape sessions in the cluster:
The following example clears the SMTape sessions on the node node1 in the cluster:
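The command lines for the two examples above are not reproduced; assuming the clear subcommand of the system smtape status family referenced earlier, they would take this form:

```
cluster1::> system smtape status clear -status COMPLETED
cluster1::> system smtape status clear -node node1
```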
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
This command lists the status of all SMTape sessions in the cluster. By default, this command lists the
following information:
• Session
• Type
• Status
• Progress
• Path
• Device
• Node
● Parameters
{ [-fields <fieldname>, ...]
Use this parameter to display additional fields about each session apart from the default entries. This
parameter is optional. Any combination of the following fields is valid:
• Session
• Node
• Type
• Status
• Path
• Device
• Progress
• Start-time
• End-time
• Update-time
• Backup-snapshot
• Tape-block-size
• Error
| [-instance ]}
Displays detailed information about the specified sessions.
[-session <Sequence Number>] - Session Identifier
Selects information about a specific SMTape session. A Session Identifier is a number that is used to
identify a particular SMTape session.
[-node {<nodename>|local}] - Node Name
Selects information about sessions related to the specified node.
[-type {backup|restore}] - Operation Type
Selects information about SMTape sessions of the specified operation type.
[-status {COMPLETED|FAILED|ACTIVE|WAITING|ABORTING|UNKNOWN}] - Session Status
Selects information about SMTape sessions having the specified status in the parameter.
[-path <text>] - Path Name
Selects information about SMTape sessions on a volume which is at the specified path name. This is the
logical path of the volume and you must specify the path name in the following format:
/vserver_name/volume_name.
[-device <text>] - Device Name
Selects information about the SMTape sessions on the specified tape device. You must specify the tape
device name in the following format: /node_name/tape_device.
[-progress {<integer>[KB|MB|GB|TB|PB]}] - Bytes Transferred
Selects information about SMTape sessions in which the number of data bytes transferred in a particular
session matches with the number specified in this parameter.
[-start-time <MM/DD/YYYY HH:MM:SS>] - Start Time
Selects information about SMTape sessions whose starting time matches the specified starting time.
[-end-time <MM/DD/YYYY HH:MM:SS>] - End Time
Selects information about SMTape sessions whose ending time matches the specified ending time.
[-backup-snapshot <snapshot name>] - Snapshot Name
Selects information about SMTape sessions that use a particular Snapshot copy name which matches the
specified Snapshot copy name in the parameter in backup or restore operations.
[-tape-block-size <integer>] - Tape Block Size
Selects information about SMTape sessions that use a particular tape block size which matches the
specified tape block size parameter in backup or restore operations.
[-error <text>] - Error Description
Selects information about SMTape sessions that have a particular error description which matches the
specified error description in the parameter.
● Examples
Displays default entries about the five SMTape sessions.
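The command line for this example is not reproduced; it would be:

```
cluster1::> system smtape status show
```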
The following example shows the output with the -instance argument.
Session Identifier: 1
Node Name: node1
Operation Type: Backup
Status: COMPLETED
Path Name: /vs1/vol1
Device Name: /node1/rst0a
Bytes Transferred: 2048
Start Time: 1/4/2012 14:26:24
End Time: 1/4/2012 14:29:45
Last updated: 1/4/2012 14:29:45
Snapshot Name: vol1.snapshot
Tape Block Size: 240
Error Description: None
Manage cluster-wide SNMP settings. The SetRequest PDU is not supported. There is no default community
for the SNMP agent. SNMPv3 users are created using the security login create command.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Use this command to either enable or disable the standard SNMP authentication failure traps.
● Parameters
[-authtrap <integer>] - Enables SNMP Authentication Trap
Enter a value of 1 to enable SNMP authentication failure traps. By default, SNMP authentication traps
are disabled and the value is 0.
● Examples
The following example demonstrates how to set the SNMP authtrap.
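The command line preceding the displayed settings below is missing from the excerpt; based on the parameter above, it would be:

```
cluster1::> system snmp authtrap -authtrap 1
```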
authtrap: 1
init: 0
traphosts: -
community: - -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Sets the contact name as the System.sysContact.0 MIB-II variable.
● Parameters
[-contact <text>] - Contact
Specifies the contact name. If no value is specified, this command displays the current setting of the
contact name.
● Examples
The following example sets the contact name for SNMP.
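The command line for this example is not reproduced; a plausible invocation, with a hypothetical contact value, is:

```
cluster1::> system snmp contact -contact "storage-admin@example.com"
```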
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system snmp enable-snmpv3 command enables SNMPv3 server on the entire cluster. When this
command is run, SNMP users and SNMP traphosts that are non-compliant to FIPS will be deleted
automatically, since cluster FIPS mode is enabled. Any SNMPv1 user, SNMPv2c user or SNMPv3 user (with
none or MD5 as authentication protocol or none or DES as encryption protocol or both) is non-compliant to
FIPS. Any SNMPv1 traphost or SNMPv3 traphost (configured with an SNMPv3 user non-compliant to FIPS) is
non-compliant to FIPS.
● Examples
The following command enables SNMPv3 server on the entire cluster, within a cluster named cluster1:
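The command line preceding the warnings below is missing from the excerpt; based on the description, it would be:

```
cluster1::*> system snmp enable-snmpv3
```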
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by Fujitsu personnel.
Do you want to continue? {y|n}: y
Warning: If you enable SNMPv3 using this command, any SNMP users and
SNMP traphosts that are non-compliant to FIPS will be
deleted automatically, since cluster FIPS mode is enabled.
Any SNMPv1 user, SNMPv2c user or SNMPv3 user (with none or
MD5 as authentication protocol or none or DES as encryption
protocol or both) is non-compliant to FIPS. Any SNMPv1
traphost or SNMPv3 traphost (configured with an SNMPv3 user
non-compliant to FIPS) is non-compliant to FIPS.
Do you want to continue? {y|n}: y
cluster1::*>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Initializes or disables sending of traps by the SNMP daemon from the cluster.
● Parameters
[-init <integer>] - Initialize Traps
Use a value of 1 to initialize the SNMP daemon to send traps, or a value of 0 to stop sending traps
from the cluster. If no value is specified, this command displays the current setting of init. Traps
are enabled by default.
● Examples
The following command initializes SNMP daemon to send traps.
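The command line preceding the displayed settings below is missing from the excerpt; based on the parameter above, it would be:

```
cluster1::> system snmp init -init 1
```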
location: NB
authtrap: 1
init: 1
traphosts: -
community: - -
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Sets the location name as the System.sysLocation.0 MIB-II variable.
● Parameters
[-location <text>] - Location
Specifies the location details. If no value is specified, this command displays the current setting of
location.
● Examples
This command sets the location name.
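The command line for this example is not reproduced; a plausible invocation, with a hypothetical location value, is:

```
cluster1::> system snmp location -location "NB"
```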
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system snmp prepare-to-downgrade command prepares the SNMP subsystem for a downgrade or a
revert. More specifically, it prepares the SNMPv3 client feature for a downgrade or a revert. It deletes all
storage switches that were explicitly added for monitoring and are using SNMPv3 as the underlying
protocol. It also deletes any cluster switches that are using SNMPv3 for monitoring. Finally, it deletes any
remote switch SNMPv3 users configured in ONTAP.
● Examples
The following command prepares the SNMP subsystem for a downgrade or a revert, within a cluster named
cluster1:
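The command line for this example is not reproduced; based on the description, it would be:

```
cluster1::*> system snmp prepare-to-downgrade
```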
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Lists the current values of all the SNMP parameters.
● Examples
The example below shows a typical command display.
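The command display itself is missing from the excerpt; the invocation would be:

```
cluster1::> system snmp show
```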
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system snmp community add command adds communities with the specified access control type.
Only read-only communities are supported. There is no limit on the number of communities supported.
● Parameters
-vserver <Vserver Name> - Vserver
This parameter specifies the Vserver to which the community will be added. If no Vserver is specified, the
community is added to the admin Vserver.
-community-name <text> - Community
This parameter specifies the name of the community.
-type <ctype> - access type
This parameter specifies 'ro' for read-only community.
● Examples
The following example adds the read-only community name 'private'.
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system snmp community delete command deletes communities with the specified access control
type. Only read-only communities are supported.
● Parameters
-vserver <Vserver Name> - Vserver
This parameter specifies the Vserver from which you wish to delete the community. If no Vserver is
specified, the community is deleted from the admin Vserver.
-community-name <text> - Community
Specify the name of the community.
-type <ctype> - access type
Specify 'ro' for a read-only community.
● Examples
The following example deletes the read-only community 'private':
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays the current list of SNMP communities.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-vserver <Vserver Name>] - Vserver
Selects the Vserver to which the SNMP community belongs.
[-community-name <text>] - Community
Selects the SNMP v1/v2c community string.
[-access <ctype>] - access
Selects the access type of the SNMP v1/v2c community. Read-only (ro) is the only access type supported.
● Examples
cluster1
ro private
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Adds an SNMP manager that receives the SNMP trap PDUs. The SNMP manager can be a hostname or an IP
address. There is no limit on the number of traphosts supported.
● Parameters
-peer-address <Remote InetAddress> - Remote IP Address
Specifies the IP address or hostname of the traphost. If a USM user is associated, SNMPv3 traps are
generated for this traphost using the associated USM user's authentication and privacy credentials. If no
USM user is associated, SNMP v1/v2c traps are generated for this traphost. For SNMP v1/v2c traps, the
default community string is 'public' when no community is defined; when community strings are defined,
the first community string is used for the SNMP v1/v2c traps.
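The credential-selection rule for traphosts can be sketched in Python; trap_credentials is a hypothetical helper used only to illustrate the rule, not an ONTAP API:

```python
def trap_credentials(usm_user=None, communities=()):
    """Model of the traphost credential rule (illustrative only).

    If a USM user is associated, SNMPv3 traps use that user; otherwise
    SNMP v1/v2c traps use the first defined community string, falling
    back to 'public' when no community is defined.
    """
    if usm_user is not None:
        return ("SNMPv3", usm_user)          # SNMPv3 traps use the USM user's credentials
    if communities:
        return ("v1/v2c", communities[0])    # first defined community string wins
    return ("v1/v2c", "public")              # default community when none is defined
```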
● Examples
In the following example, the command adds a hostname 'yyy.example.com' for the SNMPv3 traps:
In the following example, the command adds a hostname 'xxx.example.com' for the SNMP v1/v2c traps:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Deletes an SNMP manager that receives the SNMP trap PDUs. The SNMP manager can be a hostname or an IP
address. There is no limit on the number of traphosts supported.
● Parameters
-peer-address <Remote InetAddress> - Remote IP Address
Specifies the IP address or hostname of the traphost. If a USM user is associated with the traphost,
specify that USM user to delete the traphost.
[-usm-username <text>] - USM User Name
Specifies the USM user associated with traphost.
● Examples
In the following example, the command deletes the SNMPv3 traphost 'yyy.example.com' associated with the
USM user:
In the following example, the command deletes the SNMP v1/v2c traphost 'xxx.example.com' associated
with a community string:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
Displays the list of SNMP v1/v2c and SNMPv3 managers that receive trap PDUs.
● Examples
In the following example, the command displays all the host names or IP addresses that have been added
until now:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system status show command displays information about the status of objects in Data ONTAP. You
can limit output to specific types of information and specific status in Data ONTAP, or filter output by specific
field values.
To see a list of values that are in use for a particular field, use the -fields parameter of this command with
the list of field names you wish to view.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
If you specify the -instance parameter, the command displays detailed information about all fields.
[-internal-name <text>] - Internal Full Name
Selects status of objects that match this parameter value.
[-name <text>] - Name
Selects status of objects that match this parameter value.
[-vserver-id <text>] - Vserver ID
Selects status of objects that match this parameter value.
[-cluster-id <text>] - Cluster ID
Selects status of objects that match this parameter value.
● Examples
Traps:
1.3.6.1.4.1.6876.50.101.0:TRAPv1 SNMPv1 'public' enterprise=1.3.6.1.4.1.6876.4.1 agent_addr=10.226.10.220 generic_trap=6 specific_trap=4 time-stamp=604476800
[0]: 1.3.6.1.4.1.6876.50.101.0=INTEGER 4
[1]: 1.3.6.1.4.1.6876.50.102.0=OCTET STRING 78: 2f 76 6d 66 73 2f 76 6f 6c 75 6d 65 73 2f 32 34 64 35 38 39 61 65 2d 36 34 33 62 31 35 38 65 2f 76 73 69 6d 73 79 73 2f 76 73 69 6d 73 2f 76 73 69 6d 73 79 73 5f 76 73 69 6d 5f 76 73 69 6d 31 2f 44 61 74 61 4f 4e 54 41 50 2e 76 6d 78
[2]: 1.3.6.1.4.1.6876.2.1.1.2.2815=OCTET STRING 18: 76 73 69 6d 73 79 73 5f 76 73 69 6d 5f 76 73 69 6d 31
1.3.6.1.4.1.6876.50.101.0:TRAPv1 SNMPv1 'public' enterprise=1.3.6.1.4.1.6876.4.1 agent_addr=10.226.10.40 generic_trap=6 specific_trap=4 time-stamp=1683800
[0]: 1.3.6.1.4.1.6876.50.101.0=INTEGER 4
[1]: 1.3.6.1.4.1.6876.50.102.0=OCTET STRING 79: 2f 76 6d 66 73 2f 76 6f 6c 75 6d 65 73 2f 61 37 63 31 66 37 61 61 2d 39 65 39 63 63 34 61 30 2f 74 66 61 72 72 65 6c 6c 2f 76 73 69 6d 73 2f 74 66 61 72 72 65 6c 6c 5f 76 73 69 6d 5f 6e 73 63 31 2f 44 61 74 61 4f 4e 54 41 50 2e 76 6d 78
[2]: 1.3.6.1.4.1.6876.2.1.1.2.2093=OCTET STRING 18: 74 66 61 72 72 65 6c 6c 5f 76 73 69 6d 5f 6e 73 63 31
47 entries were displayed.
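The OCTET STRING values in the trap output above are hex-encoded ASCII. A short Python sketch that decodes the first trap's 78-byte payload back into the path it carries:

```python
# Hex-encoded OCTET STRING payload from the first trap above (78 bytes).
hex_payload = (
    "2f 76 6d 66 73 2f 76 6f 6c 75 6d 65 73 2f 32 34 64 35 "
    "38 39 61 65 2d 36 34 33 62 31 35 38 65 2f 76 73 69 6d 73 79 73 "
    "2f 76 73 69 6d 73 2f 76 73 69 6d 73 79 73 5f 76 73 69 6d 5f 76 "
    "73 69 6d 31 2f 44 61 74 61 4f 4e 54 41 50 2e 76 6d 78"
)

# bytes.fromhex handles the raw hex; strip the spaces first.
decoded = bytes.fromhex(hex_payload.replace(" ", "")).decode("ascii")
print(decoded)  # /vmfs/volumes/24d589ae-643b158e/vsimsys/vsims/vsimsys_vsim_vsim1/DataONTAP.vmx
```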
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system switch ethernet configure-health-monitor command downloads an Ethernet switch's
health monitor configuration file in the ZIP format, which contains the XML file and a signed version file.
After download, ONTAP will check the signed file. If valid, the Ethernet switch health monitor restarts to use
the new Ethernet switch health monitor configuration file.
● Parameters
-node {<nodename>|local} - Node
This specifies the node or nodes on which the Ethernet switch health monitor configuration file is to be
updated.
-package-url <text> - Package URL
This parameter specifies the URL that provides the location of the package to be downloaded. Standard
URL schemes, including HTTP, HTTPS, FTP and FILE, are accepted.
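A minimal sketch of the scheme check the description implies, assuming only the four listed URL schemes are accepted (is_valid_package_url is an illustrative helper, not an ONTAP API):

```python
from urllib.parse import urlparse

# Accepted schemes per the -package-url description above.
ALLOWED_SCHEMES = {"http", "https", "ftp", "file"}

def is_valid_package_url(url):
    """Return True when the URL uses one of the accepted schemes."""
    # urlparse normalizes the scheme to lowercase.
    return urlparse(url).scheme in ALLOWED_SCHEMES
```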
● Examples
The following example downloads Ethernet switch health monitor configuration file to node1 from a web
server and enables Ethernet switch health monitor to process it:
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet create command adds information about an Ethernet switch (cluster,
management or storage). The Ethernet switch health monitor uses this information to monitor the health of
the switch.
Use this command if ONTAP cannot automatically discover a cluster, management or storage switch. ONTAP
relies on the Cisco Discovery Protocol (CDP) to discover the switches. By default, CDP is enabled on all
cluster ports of a node and disabled on all non-cluster ports. If CDP is also enabled on your Ethernet
switches, they are discovered automatically.
If you want ONTAP to discover and monitor management switches, CDP must be enabled on non-cluster
ports. To verify whether CDP is enabled or disabled, use the command system node run -node
<node_name> -command options cdpd.enable.
Use the system switch ethernet show command to identify switches that the Ethernet switch health
monitor is monitoring.
● Parameters
-device <text> - Device Name
Specifies the device name of the switch that you want to monitor. ONTAP uses the device name of the
switch to identify the SNMP agent with which it wants to communicate.
-address <IP Address> - IP Address
Specifies the IP address of switch's management interface.
-snmp-version {SNMPv1|SNMPv2c|SNMPv3} - SNMP Version
Specifies the SNMP version that ONTAP uses to communicate with the switch. The default is SNMPv2c.
{ -community <text> - DEPRECATED-Community String or SNMPv3 Username
Note
This parameter is deprecated and may be removed in a future release of ONTAP. Use -community-or-username instead.
Specifies the community string for SNMPv2 authentication or SNMPv3 user name for SNMPv3 security.
The default community string for SNMPv2 authentication is cshm1!.
| -community-or-username <text>} - Community String or SNMPv3 Username
Specifies the community string for SNMPv2 authentication or SNMPv3 user name for SNMPv3 security.
The default community string for SNMPv2 authentication is cshm1!.
-model {NX5010|NX5020|CAT2960|OTHER|NX5596|CN1610|CN1601|NX3132|NX5548|NX3132V|OT9332|
NX3132XL|NX3232C} - Model Number
This parameter specifies the model number of the switch. Use "OTHER" model when adding a switch that
requires a switch health monitor XML configuration file, for example: BES-53248 and N9K-C92300YC
switches. ONTAP also sets the model to OTHER when it automatically discovers a switch that does not
support health monitoring.
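The model-selection rule above can be sketched as follows; KNOWN_MODELS and register_model are illustrative names used only to show the rule, not part of ONTAP:

```python
# Model values accepted by the -model parameter, per the list above.
KNOWN_MODELS = {"NX5010", "NX5020", "CAT2960", "NX5596", "CN1610", "CN1601",
                "NX3132", "NX5548", "NX3132V", "OT9332", "NX3132XL", "NX3232C"}

def register_model(switch_model):
    """Map a switch model to the value recorded for monitoring (sketch)."""
    # Switches outside the known list (e.g. BES-53248, N9K-C92300YC)
    # are recorded with the model "OTHER".
    return switch_model if switch_model in KNOWN_MODELS else "OTHER"
```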
● Examples
cluster1::> system switch ethernet create -device SwitchA -address 1.2.3.4 -snmp-version SNMPv2c -community-or-username cshm1! -model NX5596 -type cluster-network
cluster2::> system switch ethernet create -device SwitchB -address 5.6.7.8 -snmp-version SNMPv3 -community-or-username snmpv3u1 -model CN1601 -type management-network
Related reference
system node run (page 1409)
system switch ethernet show (page 1582)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet delete command disables switch health monitoring for an Ethernet
switch (cluster, management or storage).
● Parameters
-device <text> - Device Name
Specifies the name of the switch.
[-force [true]] - Force Delete (privilege: advanced)
Specifies if the delete operation is forced.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet modify command modifies information about an Ethernet switch
(cluster, management or storage). The Ethernet switch health monitor uses this information to monitor the
switch.
● Parameters
-device <text> - Device Name
Specifies the device name of switch that you want to monitor.
[-address <IP Address>] - IP Address
Specifies the IP address of the switch's management interface.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
Specifies the SNMP version that ONTAP uses to communicate with the switch. The default is SNMPv2c.
{ [-community <text>] - DEPRECATED-Community String or SNMPv3 Username
Note
This parameter is deprecated and may be removed in a future release of ONTAP. Use -community-or-username instead.
Specifies the community string for SNMPv2 authentication or SNMPv3 username for SNMPv3 security.
| [-community-or-username <text>]} - Community String or SNMPv3 Username
Specifies the community string for SNMPv2 authentication or SNMPv3 username for SNMPv3 security.
[-type {cluster-network|management-network|storage-network}] - Switch Network
Specifies the switch type.
[-is-monitoring-enabled-admin {true|false}] - Enable Switch Monitoring
Specifies the switch admin monitoring status.
● Examples
cluster1::> system switch ethernet modify -device SwitchB -snmp-version SNMPv3 -community-or-username snmpv3u1
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet show command displays configuration details for the monitored Ethernet
switches (cluster, management and storage).
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that have the specified name.
| [-snmp-config ]
Displays the following information about a switch:
• Device Name
• SNMPv2c Community String or SNMPv3 Username
• SNMP Version
| [-status ]
Displays status information about the monitored switches.
[-community <text>] - DEPRECATED-Community String or SNMPv3 Username
Note
This parameter is deprecated and may be removed in a future release of ONTAP. Use -community-or-username instead.
Selects the switches that match the specified SNMPv2c community string or SNMPv3 username.
[-community-or-username <text>] - Community String or SNMPv3 Username
Selects the switches that match the specified SNMPv2c community string or SNMPv3 username.
[-model {NX5010|NX5020|CAT2960|OTHER|NX5596|CN1610|CN1601|NX3132|NX5548|NX3132V|
OT9332|NX3132XL|NX3232C}] - Model Number
Selects the switches that match the specified model number.
[-type {cluster-network|management-network|storage-network}] - Switch Network
Selects the switches that match the specified switch type.
[-sw-version <text>] - Software Version
Selects the switches that match the specified software version.
[-reason <text>] - Reason For Not Monitoring
Selects the switches that match the specified reason.
[-version-source <text>] - Source Of Switch Version
Selects the switches that match the specified version source (for example, from SNMP, CDP or ISDP).
[-is-monitoring-enabled-operational {true|false}] - Is Monitored ?
Selects the switches that match the specified operational monitoring status.
[-serial-number <text>] - Serial Number of the Device
Selects the switches that match the specified serial number.
● Examples
The example above displays the configuration of all Ethernet switches (cluster, management and storage).
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system switch ethernet show-all command displays configuration details for discovered
monitored Ethernet switches (cluster, management and storage), including switches that are user-deleted.
From the list of deleted switches, you can delete a switch permanently from the database to re-enable
automatic discovery of that switch.
● Parameters
{ [-fields <fieldname>, ...]
Selects the fields that have the specified name.
| [-instance ]}
Selects detailed information for all the switches.
[-device <text>] - Device Name
Selects the switches that match the specified device name.
[-address <IP Address>] - IP Address
Selects the switches that match the specified IP address.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
Selects the switches that match the specified SNMP version.
[-community <text>] - DEPRECATED-Community String or SNMPv3 Username
Note
This parameter is deprecated and may be removed in a future release of ONTAP. Use -community-or-username instead.
Selects the switches that match the specified community string or SNMPv3 username.
[-community-or-username <text>] - Community String or SNMPv3 Username
Selects the switches that match the specified community string or SNMPv3 username.
[-discovered {true|false}] - Is Discovered
Selects the switches that match the specified discovery setting.
[-type {cluster-network|management-network|storage-network}] - Switch Network
Selects the switches that match the specified switch type.
[-sw-version <text>] - Software Version
Selects the switches that match the specified software version.
[-is-monitoring-enabled-operational {true|false}] - Switch Monitoring Status
Selects the switches that match the specified operational monitoring status.
[-reason <text>] - Reason For Not Monitoring
Selects the switches that match the specified reason.
[-version-source <text>] - Source Of Switch Version
Selects the switches that match the specified version source (for example, from SNMP, CDP or ISDP).
[-serial-number <text>] - Serial Number of the Device
Selects the switches that match the specified serial number.
[-model <text>] - Model to display
Selects the switches that match the specified model number.
● Examples
Is Monitored: yes
Reason:
Software Version: Cisco IOS 4.1N1
Version Source: CDP
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet log collect command initiates the collection of an Ethernet switch log
for the specified Ethernet switch via OpenSSH.
● Parameters
-device <text> - Switch Name
Specifies the Ethernet switch device for which the log collection is being made.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet log disable-collection command disables the collection of Ethernet
switch logs via OpenSSH.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet log enable-collection command enables the collection of Ethernet
switch logs via OpenSSH.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet log modify command modifies the log request of the specified Ethernet
switch.
● Parameters
-device <text> - Switch Name
Specifies the Ethernet switch device for which the log request is being made. The device must be one of
the devices listed by the system switch ethernet show command, and the full device name from that
command must be used.
[-log-request {true|false}] - Requested Log
Specifies the initiation of a switch log retrieval for the specified Ethernet switch if set to true.
● Examples
cluster1::> system switch ethernet log modify -device switch-name01(Switch---SN) -log-request true
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet log setup-password command allows the administrator to set up
Ethernet switch health monitor access to certain Ethernet switches, so that switch logs can be collected.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet log show command displays the status and requests for Ethernet switch
logs.
● Parameters
{ [-fields <fieldname>, ...]
If you specify the -fields <fieldname>, ... parameter, the command output also includes the
specified field or fields. You can use '-fields ?' to display the fields to specify.
| [-instance ]}
Specifies an instance of the Ethernet switch devices log status.
[-device <text>] - Switch Name
Specifies the name of the Ethernet switch device to display log status on.
[-log-request {true|false}] - Requested Log
Specifies the state of the log request for an Ethernet switch device. Values: true, false.
[-log-status <text>] - Log Status
Specifies the status of the log request for an Ethernet switch device.
[-log-timestamp <MM/DD/YYYY HH:MM:SS>] - Log Timestamp
Specifies the completion timestamp of the log request for an Ethernet switch device.
[-idx <integer>] - Index
Specifies the index of the Ethernet switch device.
[-filename <text>] - Filename
Specifies the full filename of the Ethernet switch log.
[-filenode <text>] - File Node
Specifies the name of the controller on which the Ethernet switch log resides.
● Examples
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system switch ethernet polling-interval modify command modifies the interval in which
the Ethernet switch health monitor polls cluster, management and storage switches.
● Parameters
[-polling-interval <integer>] - Polling Interval
Specifies the interval in which the health monitor polls switches. The interval is in minutes. The default
value is 5. The allowed range of values is 2 to 120.
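The range check described above can be sketched as follows (validate_polling_interval is a hypothetical helper, not ONTAP code):

```python
def validate_polling_interval(minutes=5):
    """Validate a polling interval per the rule above: 2-120 minutes, default 5."""
    if not 2 <= minutes <= 120:
        raise ValueError("polling-interval must be between 2 and 120 minutes")
    return minutes
```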
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet polling-interval show command displays the polling interval used
by the Ethernet switch health monitor.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch ethernet threshold show command displays thresholds used by health monitor
alerts.
● Examples
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch fibre-channel add command enables you to add fibre-channel (FC) switches for
SNMP monitoring in a MetroCluster configuration. Front-end switches should not be added for monitoring;
adding them results in a Monitor Status Error condition.
● Parameters
-address <IP Address> - FC Switch Management IP Address
This parameter specifies the IP address of the back-end FC switch that is added for monitoring.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - Supported SNMP Version
This parameter specifies the SNMP version that ONTAP uses to communicate with the back-end FC switch
that is added for monitoring. The default SNMP version is SNMPv2c.
{ [-snmp-community <text>] - (DEPRECATED)-SNMPv2c Community or SNMPv3 Username
Note
This parameter is deprecated and may be removed in a future release of ONTAP. Use -snmp-community-or-username instead.
This parameter specifies the SNMPv2c community set or SNMPv3 username on the FC switch that is
added for monitoring.
| [-snmp-community-or-username <text>]} - SNMPv2c Community or SNMPv3 Username
This parameter specifies the SNMPv2c community set or SNMPv3 username on the switch that is added
for monitoring.
[-veto-backend-fabric-check {true|false}] - Veto Back-end Fabric Check? (privilege: advanced)
If specified, the system switch fibre-channel add command will not check if the switch is present
in the MetroCluster's back-end fabric. By default, it does not let you add switches that are not present.
[-blades <integer>, ...] - Cisco Director Class Switch Blades to Monitor
This parameter specifies the blades to monitor on the back-end switch that is added for monitoring. It is
only applicable to director-class switches.
● Examples
The following command adds a back-end switch with IP Address 10.226.197.34 for monitoring:
cluster1::>
The following command adds a Cisco Director Class switch for monitoring. ONTAP uses SNMPv3 and
'snmpuser1' username to communicate with this switch.
cluster1::> system switch fibre-channel add -address 10.228.56.208 -snmp-version SNMPv3 -snmp-commu
nity-or-username snmpuser1 -blades 3,4
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch fibre-channel modify command enables you to modify certain parameters for
identifying and accessing the back-end fibre-channel (FC) switches added for monitoring in a
MetroCluster configuration.
● Parameters
-switch-name <text> - FC Switch Name
This parameter specifies the name of the switch.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
This parameter specifies the SNMP version that ONTAP uses to communicate with the switch.
[-switch-ipaddress <IP Address>] - Switch IP Address
This parameter specifies the IP address of the switch.
{ [-snmp-community <text>] - (DEPRECATED)-SNMPv2c Community or SNMPv3 Username
Note
This parameter is deprecated and may be removed in a future release of ONTAP. Use -snmp-community-or-username instead.
This parameter specifies the SNMPv2c community set or SNMPv3 username on the switch.
| [-snmp-community-or-username <text>]} - SNMPv2c Community or SNMPv3 Username
This parameter specifies the SNMPv2c community set or SNMPv3 username on the switch.
[-blades <integer>, ...] - Director-Class Switch Blades to Monitor
This parameter specifies the blades to monitor on the switch. It is only applicable to director-class
switches.
● Examples
The following command modifies the Cisco_10.226.197.34 switch's SNMP community to 'public':
cluster1::> system switch fibre-channel modify -switch-name Cisco_10.226.197.34 -snmp-community-or-username public
The following command modifies the Brocade 6505 switch's SNMP version to SNMPv3 and its SNMPv3
username to 'snmpuser1':
Availability: This command is available to cluster administrators at the advanced privilege level.
● Description
The system switch fibre-channel refresh command triggers a refresh of the SNMP data for the
MetroCluster fibre-channel (FC) switches and FC-to-SAS bridges. It has no effect if a refresh is already
in progress. The FC switches and FC-to-SAS bridges must have been previously added for monitoring by
using the system switch fibre-channel add and system bridge add commands, respectively.
● Examples
The following command triggers a refresh for the FC switch and FC-to-SAS bridge data:
cluster1::*> system switch fibre-channel refresh
Related reference
system switch fibre-channel add (page 1589)
system bridge add (page 1264)
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch fibre-channel remove command enables you to remove back-end fibre-channel
(FC) switches that were previously added for SNMP monitoring.
● Parameters
-switch-name <text> - FC Switch Name
This parameter specifies the name of the back-end FC switch added for monitoring.
● Examples
The following command removes 'Cisco_10.226.197.34' switch from monitoring:
Switch Name          Symbolic Name        Vendor Model            Switch WWN       Is Monitored Monitor Status
-------------------- -------------------- ------ ---------------- ---------------- ------------ --------------
Cisco_10.226.197.34  mcc-cisco-8Gb-fab-4  Cisco  DS-C9148-16P-K9  2000547fee78f088 true         ok
mcc-cisco-8Gb-fab-1  mcc-cisco-8Gb-fab-1  Cisco  -                -                false        -
mcc-cisco-8Gb-fab-2  mcc-cisco-8Gb-fab-2  Cisco  -                -                false        -
mcc-cisco-8Gb-fab-3  mcc-cisco-8Gb-fab-3  Cisco  -                -                false        -
4 entries were displayed.

cluster1::>
Availability: This command is available to cluster administrators at the admin privilege level.
● Description
The system switch fibre-channel show command displays information about all the back-end fibre-
channel (FC) switches in the MetroCluster configuration. The back-end switches must have been previously
added for monitoring using the system switch fibre-channel add command. If no parameters are
specified, the default command displays the following information about the back-end FC switches:
• Switch
• Symbolic Name
• Vendor
• Model
• Switch WWN
• Is Monitored
• Monitor Status
To display detailed profile information about a single back-end FC switch, use the -switch-name
parameter.
● Parameters
{ [-fields <fieldname>, ...]
Displays the specified fields for all the back-end FC switches, in column-style output.
| [-connectivity ]
Displays the following details about the connectivity from the back-end FC switch to connected entities:
• Port name
• Port operating mode
• Port world wide name
• Peer port world wide name
• Peer type
• Additional information about peer
Displays the following details about the connectivity from the node to the back-end FC switch:
• Node name
• Adapter name
• Switch port name
• Switch port speed
• Adapter type
| [-cooling ]
Displays the following details about the fans and temperature sensors on the back-end FC switch:
• Fan name
• Fan speed in rotations per minute (RPM)
• Fan operational status
• Temperature sensor name
• Temperature sensor reading in Celsius (C)
• Temperature sensor status
| [-error ]
Displays the errors related to the back-end FC switch.
| [-port ]
Displays the following details about the back-end FC switch ports:
• Port name
• Port world wide name
• Port administrative status
• Port operational status
• Port operating mode
| [-instance ]}
Displays expanded information about all the back-end FC switches in the system. If a back-end FC switch
is specified, then this parameter displays the same detailed information for the back-end FC switch you
specify as does the -switch-name parameter.
[-switch-name <text>] - FC Switch Name
Displays information only about the back-end FC switches that match the name you specify.
[-switch-wwn <text>] - Switch World Wide Name
Displays information only about the back-end FC switches that match the switch wwn you specify.
[-switch-symbolic-name <text>] - Switch Symbolic Name
Displays information only about the back-end FC switches that match the switch symbolic name you
specify.
[-switch-fabric-name <text>] - Fabric Name
Displays information only about the back-end FC switches that match the switch fabric you specify.
[-domain-id <integer>] - Switch Domain ID
Displays information only about the back-end FC switches that match the switch domain id you specify.
[-switch-role {unknown|primary|subordinate}] - Switch Role in Fabric
Displays information only about the back-end FC switches that match the switch role you specify.
[-snmp-version {SNMPv1|SNMPv2c|SNMPv3}] - SNMP Version
Displays information only about the back-end FC switches that match the switch SNMP version you
specify.
[-switch-model <text>] - Switch Model
Displays information only about the back-end FC switches that match the switch model you specify.
[-switch-vendor {unknown|Brocade|Cisco}] - Switch Vendor
Displays information only about the back-end FC switches that match the switch vendor you specify.
[-fw-version <text>] - Switch Firmware Version
Displays information only about the back-end FC switches that match the switch firmware version you
specify.
[-serial-number <text>] - Switch Serial Number
Displays information only about the back-end FC switches that match the switch serial number you
specify.
[-switch-ipaddress <IP Address>] - Switch IP Address
Displays information only about the back-end FC switches that match the switch IP address you specify.
[-switch-status {unknown|ok|error}] - Switch Status
Displays information only about the back-end FC switches that match the switch status you specify.
[-snmp-community <text>] - (DEPRECATED)-SNMPv2c Community or SNMPv3 Username
Note
This parameter is deprecated and may be removed in a future release of Data ONTAP. Use -snmp-community-or-username instead.
Displays information only about the back-end FC switches that match the switch SNMPv2c community or
SNMPv3 username you specify.
[-snmp-community-or-username <text>] - SNMPv2c Community or SNMPv3 Username
Displays information only about the back-end FC switches that match the switch SNMPv2c community or
SNMPv3 username you specify.
[-profile-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] -
Switch Profile Data Last Successful Refresh Timestamp
Displays information only about the back-end FC switches that match the profile data last successful
refresh timestamp you specify.
Displays information only about the back-end FC switches that match the fan data last successful refresh
timestamp you specify.
[-vsan-index-list <integer>, ...] - Switch VSAN Index List
Displays information only about the back-end FC switches that have the VSANs with the indexes you
specify.
[-vsan-name-list <text>, ...] - Switch VSAN Name List
Displays information only about the back-end FC switches that have the VSANs with the names you
specify.
[-vsan-oper-status-list {up|down}, ...] - Switch VSAN Operational Status List
Displays information only about the back-end FC switches that have the VSANs with the operational
statuses you specify.
[-vsan-load-balancing-type-list {src-id-dest-id|src-id-dest-id-ox-id}, ...] - Switch VSAN Load
Balancing Type List
Displays information only about the back-end FC switches that have the VSANs with the load balancing
types you specify.
[-is-vsan-iod-list {true|false}, ...] - Is In-order Delivery Set for VSAN List
Displays information only about the back-end FC switches that have the VSANs with the IOD setting you
specify.
[-vsan-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] -
Switch VSAN Data Last Successful Refresh Timestamp
Displays information only about the back-end FC switches that match the VSAN data last successful
refresh timestamp you specify.
[-member-switch-name-list <text>, ...] - Member Switch List
Displays information only about the back-end FC switches that have the VSANs with the member switch
names you specify.
[-member-switch-port-name-list <text>, ...] - Member Switch Port Name List
Displays information only about the back-end FC switches that have the VSANs with the member switch
port names you specify.
[-vsan-id-list <integer>, ...] - Zone VSAN ID List
Displays information only about the back-end FC switches that have the VSANs with the IDs you specify.
[-zone-name-list <text>, ...] - Switch Zone Name List
Displays information only about the back-end FC switches that have the zones with the names you
specify.
[-zone-member-sw-domain-id-list <integer>, ...] - Zone Member Switch Port Domain ID List
Displays information only about the back-end FC switches that have the zones with the member switch
domain IDs you specify.
[-zone-member-port-name-list <text>, ...] - Zone Member Port List
Displays information only about the back-end FC switches that have the zones with the port names you
specify.
[-zone-member-port-wwn-list <text>, ...] - Zone Member WWPN List
Displays information only about the back-end FC switches that have the zones with the port WWNs you
specify.
[-zone-member-port-switch-name-list <text>, ...] - Zone Member Switch WWN List
Displays information only about the back-end FC switches that have the zones with the member port
hosting switch names you specify.
[-zone-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] -
Switch Zone Data Last Successful Refresh Timestamp
Displays information only about the back-end FC switches that match the zone data last successful
refresh timestamp you specify.
[-port-tx-octets-list <integer>, ...] - Switch Port Total Transmitted Octets List
Displays information only about the back-end FC switches that have the ports with the transmitted
octets values you specify.
[-port-rx-octets-list <integer>, ...] - Switch Port Total Received Octets List
Displays information only about the back-end FC switches that have the ports with the received octets
values you specify.
[-port-frame-error-list <integer>, ...] - Switch Port Frame Error Count List
Displays information only about the back-end FC switches that have the ports with the error frame
values you specify.
[-port-stat-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}]
- Switch Port Stat Data Last Update Timestamp
Displays information only about the back-end FC switches that match the port statistics data last
successful refresh timestamp you specify.
[-sfp-port-name-list <text>, ...] - Switch Port Name List
Displays information only about the back-end FC switches that have the ports with the names you
specify.
[-sfp-type-list {unknown|other|gbic|embedded|glm|gbic-with-serial-id|gbic-without-serial-id|sfp-
with-serial-id|sfp-without-serial-id|xfp|x2-short|x2-medium|x2-tall|xpak-short|xpak-medium|xpak-tall|
xenpak|sfp-dw-dm|qsfp|x2-dw-dm|gbic-not-installed|small-form-factor}, ...] - Switch Port SFP Type List
Displays information only about the back-end FC switches that have the ports with the SFP types you
specify.
[-sfp-tx-type-list {unknown|long-wave-laser|short-wave-laser|long-wave-laser-cost-reduced|
electrical|ten-gig-base-sr|ten-gig-base-lr|ten-gig-base-er|ten-gig-base-lx4|ten-gig-base-sw|ten-gig-
base-lw|ten-gig-base-ew}, ...] - Switch Port SFP Transmitter Type List
Displays information only about the back-end FC switches that have the ports with the SFP transmitter
types you specify.
[-sfp-vendor-list <text>, ...] - Switch Port SFP Vendor List
Displays information only about the back-end FC switches that have the ports with the SFP vendors you
specify.
[-sfp-part-number-list <text>, ...] - Switch Port SFP Part Number List
Displays information only about the back-end FC switches that have the ports with the SFP part numbers
you specify.
[-sfp-serial-number-list <text>, ...] - Switch Port SFP Serial Number List
Displays information only about the back-end FC switches that have the ports with the SFP serial
numbers you specify.
[-sfp-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] - Switch
Port SFP Data Last Successful Refresh Timestamp
Displays information only about the back-end FC switches that match the port SFP data last successful
refresh timestamp you specify.
[-switch-error-text-list <text>, ...] - Switch Error Text List
Displays information only about the back-end FC switches that have the errors you specify.
[-conn-switch-port-name-list <text>, ...] - Switch Port Name List
Displays information only about the back-end FC switches that have the ports with the names you
specify.
[-conn-switch-port-mode-list {unknown|auto|F-port|FL-port|E-port|TE-port|U-port|G-port|other|
EX-port|D-port|SIM-port|VE-port|AE-port|AF-port}, ...] - Switch Port Operating Mode List
Displays information only about the back-end FC switches that have the ports with the operating modes
you specify.
[-conn-switch-port-wwn-list <text>, ...] - Switch Port WWN List
Displays information only about the back-end FC switches that have the ports with the WWNs you
specify.
[-conn-switch-port-peer-port-wwn-list <text>, ...] - Switch Port Peer Port WWN List
Displays information only about the back-end FC switches that have the ports with the peer port WWNs
you specify.
[-conn-switch-port-peer-info-list <text>, ...] - Switch Port Peer Host & Port Name List
Displays information only about the back-end FC switches that have the ports with the peer information
values you specify.
[-conn-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] -
Switch Connectivity Data Last Successful Refresh Timestamp
Displays information only about the back-end FC switches that match the switch connectivity data last
successful refresh timestamp you specify.
[-conn-switch-port-peer-type-list {unknown|bridge|switch|fcp-adapter|fcvi-adapter}, ...] -
Switch Port Peer Type List
Displays information only about the back-end FC switches that have the ports connected to the peer
types you specify.
[-switch-port-name-list <text>, ...] - Switch Port Name List
Displays information only about the back-end FC switches that have the ports with the names you
specify.
[-switch-port-speed-list <integer>, ...] - Switch Port Speed (in Gbps) List
Displays information only about the back-end FC switches that have the ports with the speeds you
specify.
[-node-name-list <nodename>, ...] - Node Name List
Displays information only about the back-end FC switches that are connected to the nodes you specify.
[-adapter-name-list <text>, ...] - Node Adapter Name List
Displays information only about the back-end FC switches that are connected to the adapters you
specify.
[-adapter-port-name-list <text>, ...] - Node Adapter Port Name List
Displays information only about the back-end FC switches that are connected to the adapter ports you
specify.
[-adapter-type-list {unknown|FCP-Initiator|FC-VI|FCP-Target}, ...] - Node Adapter Type List
Displays information only about the back-end FC switches that are connected to the types of adapters
you specify.
[-path-data-last-successful-refresh-timestamp {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]}] -
Switch Path Data Last Successful Refresh Timestamp
Displays information only about the back-end FC switches that match the node to switch path data last
successful refresh timestamp you specify.
[-name-list <text>, ...] - Switch Name List
Displays information only about the back-end FC switches that match the names you specify.
[-domain-id-list <integer>, ...] - Switch Domain ID List
Displays information only about the back-end FC switches that match the domain IDs you specify.
[-wwn-list <text>, ...] - Switch WWN List
Displays information only about the back-end FC switches that match the switch WWNs you specify.
[-role-list {unknown|primary|subordinate}, ...] - Switch Role in Fabric List
Displays information only about the back-end FC switches that match the switch roles you specify.
[-address-list <IP Address>, ...] - Switch IP Address List
Displays information only about the back-end FC switches that match the switch IP addresses you
specify.
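Several of the parameters above (for example, -profile-data-last-successful-refresh-timestamp and -zone-data-last-successful-refresh-timestamp) accept values in the {MM/DD/YYYY HH:MM:SS [{+|-}hh:mm]} form, where the UTC-offset suffix is optional. As an illustration only (this helper is not part of ONTAP), a minimal Python sketch that parses values in this format:

```python
from datetime import datetime, timedelta, timezone

def parse_refresh_timestamp(text: str) -> datetime:
    """Parse an MM/DD/YYYY HH:MM:SS [{+|-}hh:mm] timestamp.

    The {+|-}hh:mm UTC offset is optional; when it is absent the
    returned datetime is naive (tzinfo is None).
    """
    base_fmt = "%m/%d/%Y %H:%M:%S"
    parts = text.rsplit(" ", 1)
    # An offset suffix looks like "+05:30" or "-05:00".
    if len(parts) == 2 and parts[1][:1] in "+-" and ":" in parts[1]:
        dt = datetime.strptime(parts[0], base_fmt)
        sign = 1 if parts[1][0] == "+" else -1
        hh, mm = parts[1][1:].split(":")
        offset = sign * timedelta(hours=int(hh), minutes=int(mm))
        return dt.replace(tzinfo=timezone(offset))
    return datetime.strptime(text, base_fmt)

# A refresh timestamp with an explicit -05:00 offset:
ts = parse_refresh_timestamp("04/01/2021 13:45:00 -05:00")
```

The helper name and its behavior are assumptions for illustration; the CLI itself performs this parsing internally when you pass such a value to a timestamp parameter.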
● Examples
The following example displays information about all back-end FC switches:
cluster::> storage switch show
The following example displays connectivity (switch to peer and node to switch) information about all back-
end FC switches:
Connectivity:
Port Name Port Mode Port WWN Peer Port WWN Peer Type Peer Info
--------- --------- ---------------- ---------------- ------------ ---------
fc1/1 F-port 2001547fee78efb0 2100001086607d34 unknown unknown
fc1/3 F-port 2003547fee78efb0 21000024ff3dd9cb unknown unknown
fc1/4 F-port 2004547fee78efb0 21000024ff3dda8d unknown unknown
fc1/5 F-port 2005547fee78efb0 500a0980009af880 unknown unknown
fc1/6 F-port 2006547fee78efb0 500a0981009af370 unknown unknown
fc1/11 TE-port 200b547fee78efb0 200b547fee78f088 switch Cisco_10.226.197.34:fc1/11
fc1/12 TE-port 200c547fee78efb0 200c547fee78f088 switch Cisco_10.226.197.34:fc1/12
fc1/13 F-port 200d547fee78efb0 2100001086609e22 unknown unknown
Path:
                                                Switch
                                                Port
Node                 Adapter        Switch Port Speed   Adapter Type
-------------------- -------------- ----------- ------- --------------
dpg-mcc-3240-15-a1   0d             fc1/30      4Gbps   FCP-Initiator
dpg-mcc-3240-15-a1   fcvi_device_1  fc1/27      8Gbps   FC-VI
dpg-mcc-3240-15-a2   0c             fc1/29      4Gbps   FCP-Initiator
dpg-mcc-3240-15-a2   fcvi_device_1  fc1/28      8Gbps   FC-VI
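If you need to consume this Path output in a script, the data rows are whitespace-delimited and split cleanly. A minimal Python sketch (not part of ONTAP; the field names are illustrative) that turns the example rows above into records:

```python
# Data rows from the Path section: Node, Adapter, Switch Port,
# Switch Port Speed, Adapter Type.
rows = """\
dpg-mcc-3240-15-a1   0d             fc1/30      4Gbps   FCP-Initiator
dpg-mcc-3240-15-a1   fcvi_device_1  fc1/27      8Gbps   FC-VI
dpg-mcc-3240-15-a2   0c             fc1/29      4Gbps   FCP-Initiator
dpg-mcc-3240-15-a2   fcvi_device_1  fc1/28      8Gbps   FC-VI
""".splitlines()

paths = []
for line in rows:
    node, adapter, port, speed, atype = line.split()
    paths.append({
        "node": node,
        "adapter": adapter,
        "switch_port": port,
        # "4Gbps" -> 4; rstrip removes the trailing unit characters.
        "speed_gbps": int(speed.rstrip("Gbps")),
        "adapter_type": atype,
    })
```

This split-on-whitespace approach works for the Path section because none of its fields contain spaces; sections with multi-word fields (such as Peer Info) would need fixed-width parsing instead.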
The following command displays cooling (temperature sensors and fans) information about all back-end FC
switches:
Fans:
Fan          RPM      Status
------------ -------- ---------------
Fan Module-1 -        operational
Fan Module-2 -        operational
Fan Module-3 -        operational
Fan Module-4 -        operational
Temperature Sensors: