
Upgrading controller hardware on a pair of nodes running clustered Data ONTAP 8.3 by moving volumes


You can upgrade a pair of nodes in an HA pair running clustered Data ONTAP 8.3 or later to another pair of nodes in an HA
pair by moving all the nonroot volumes. You might want to do so if the original nodes have internal disk drives or external disk
shelves that are not supported by the new nodes. To move the nonroot volumes between HA pairs, you need to have Data
ONTAP 8.3 installed on both of the original nodes and both of the new nodes.
About this task

In this procedure, you join the new nodes to the cluster and move all the nonroot volumes from the original nodes to the new
nodes.

This procedure does not result in loss of client access to data, provided that all volumes are moved from the original system's
storage.

You can reuse the original hardware after the upgrade.


If you are upgrading a FAS2220 with internal SATA drives or SSDs, you can transfer the internal drives to an external disk
shelf attached to the new system after the upgrade.
If you are upgrading a FAS2240 system, you can convert it to a disk shelf and attach it to the new system after the upgrade.
You can move any disk shelves attached to the original system to the new system, provided that the new system supports the
disk shelves.

This procedure is written with the following assumptions:

You are upgrading a pair of nodes running clustered Data ONTAP 8.3 in an HA pair to a new pair of nodes that have not
been used, and that are running clustered Data ONTAP 8.3 in an HA pair.
Note: Both the original and the new controllers must be running the same major version of clustered Data ONTAP before
the upgrade. For example, if the original nodes and the new nodes are both running Data ONTAP 8.3, you can perform the
upgrade without upgrading Data ONTAP. In contrast, you cannot upgrade nodes running a release prior to Data ONTAP 8.3
(for example, Data ONTAP 8.1.x) to nodes running Data ONTAP 8.3 without first upgrading Data ONTAP on the original
nodes.

Non-HA nodes are not supported in clusters.

You are moving all the volumes from the original nodes to the new nodes.

If you are in a SAN environment, you have a supported multipathing solution running on your host.

If you are replacing a single, failed node, use the appropriate controller-replacement procedure instead of this procedure.

If you are replacing an individual component, see the field-replaceable unit (FRU) flyer for that component on the NetApp
Support Site.

This procedure uses the term boot environment prompt to refer to the prompt on a node from which you can perform certain
tasks, such as rebooting the node and printing or setting environment variables.
The prompt is shown in the following example:
LOADER>

Note: Most Data ONTAP platforms released before Data ONTAP 8.2.1 were released as separate FAS and V-Series
hardware systems (for example, a FAS6280 and a V6280). Only the V-Series systems (a V or GF prefix) could attach to
storage arrays. Starting in Data ONTAP 8.2.1, only one hardware system is being released for new platforms. These new
platforms, which have a FAS prefix, can attach to storage arrays if the required licenses are installed. These new platforms
are the FAS80xx and FAS25xx systems.
This document uses the term systems with FlexArray Virtualization Software to refer to systems that belong to these new
platforms and the term V-Series system to refer to the separate hardware systems that can attach to storage arrays.

Steps

1. Guidelines for upgrading nodes on page 2
2. Gathering tools and documentation on page 3
3. Upgrading the controller hardware on page 5
4. Performing post-upgrade tasks on page 24

Guidelines for upgrading nodes


To upgrade the original nodes, you need to follow certain guidelines and be aware of restrictions that affect the procedure.
Supported upgrade paths
You can perform the following types of upgrades:

A FAS system to a FAS system

A FAS system to a V-Series system or system with FlexArray Virtualization Software

A V-Series system or system with FlexArray Virtualization Software to a V-Series system or system with FlexArray
Virtualization Software

A V-Series system or system with FlexArray Virtualization Software to a FAS system, provided that the V-Series system or
system with FlexArray Virtualization Software has no array LUNs

The following table displays the upgrades that you can perform for each model of controller in Data ONTAP 8.3:
Original controllers                    Replacement controllers
FAS22xxA with internal disk drives      3220, 3250, 6220, 6250, 6290, or FAS80xx
FAS2220A with internal disk drives      FAS2552A, FAS2554A, or FAS80xx
FAS2240-2A with internal disk drives    FAS2552A
FAS2240-4A with internal disk drives    FAS2554A
32xx, 62xx                              FAS8080

Note: If your FAS80xx controllers are running Data ONTAP 8.3 or later and one or both are All-Flash FAS models, make
sure that both controllers have the same All-Flash Optimized personality set:
system node show -instance node_name

Both nodes must either have the personality enabled or disabled; you cannot combine a node with the All-Flash Optimized
personality enabled with a node that does not have the personality enabled in the same HA pair. If the personalities are
different, refer to KB Article 1015157 in the NetApp Knowledge Base for instructions on how to sync node personality.
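A minimal check, assuming new nodes named node3 and node4 (the node names are illustrative, and the exact field label can vary by release), is to display each node's details and compare the personality field on both controllers before joining them to the cluster:
cluster::> system node show -node node3 -instance
cluster::> system node show -node node4 -instance
In the output, confirm that the All-Flash Optimized value is the same on both nodes.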
Maximum cluster size
During the procedure, you cannot exceed the maximum cluster size for the Data ONTAP release.
The cluster can be as small as a single HA pair (two controllers). When you move volumes, the node count increases by two
because of the addition of the destination HA pair. For this reason, the number of controllers in the cluster during this
procedure must not exceed the maximum supported controller count for the version of Data ONTAP installed on the nodes.
Maximum cluster size also depends on the models of controllers that make up the cluster.
See the Clustered Data ONTAP System Administration Guide for Cluster Administrators for information about cluster limits in
non-SAN environments; see the Clustered Data ONTAP SAN Configuration Guide for information about cluster limits in SAN
environments. See the Hardware Universe for information about the maximum number of nodes per cluster for SAN and NAS
environments.
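For example, before joining the new nodes you can confirm how many nodes the cluster already contains and compare that count against the limits in the guides cited above:
cluster::> cluster show
If adding two more nodes would exceed the documented maximum for your Data ONTAP version and controller models, you cannot use this procedure until the cluster size is reduced.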
Licensing in Data ONTAP 8.3
When you set up a cluster, the setup wizard prompts you to enter the cluster base license key. However, some features require
additional licenses, which are issued as packages that include one or more features. Each node in the cluster must have its own
key for each feature to be used in that cluster.
If you do not have new license keys, currently licensed features in the cluster will be available to the new controller. However,
using features that are unlicensed on the controller might put you out of compliance with your license agreement, so you should
install the new license key or keys for the new controller.
Starting with Data ONTAP 8.2, all license keys are 28 uppercase alphabetic characters in length. You can obtain new 28-character license keys for Data ONTAP 8.3 on the NetApp Support Site at mysupport.netapp.com. The keys are available in the
My Support section under Software licenses. If the site does not have the license keys you need, contact your NetApp sales
representative.
For detailed information about licensing, see the Clustered Data ONTAP System Administration Guide for Cluster
Administrators and the KB article Data ONTAP 8.2 and 8.3 Licensing Overview and References on the NetApp Support Site.
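As an illustration only, installing a key on the new system uses the system license add command with a 28-character key; the key shown below is a placeholder, not a valid license:
cluster::> system license add -license-code ABCDEFGHIJKLMNOPQRSTUVWXYZAB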
Storage Encryption
Storage Encryption is available in clustered Data ONTAP 8.2.1. The original nodes or the new nodes might already be enabled
for Storage Encryption. In that case, you need to take additional steps in this procedure to ensure that Storage Encryption is set
up properly.
If you want to use Storage Encryption, all the disk drives associated with the nodes must be self-encrypting disk drives.

Gathering tools and documentation


Before beginning the upgrade process, you must gather the necessary tools and recommended documentation.
Steps

1. Gather the tools you need to perform the upgrade:

Grounding strap

#2 Phillips screwdriver

2. Download from the NetApp Support Site at mysupport.netapp.com the documents that contain helpful information for the
upgrade.
Download the version of the document that matches the version of Data ONTAP that the system is running.
Document: Hardware Universe at hwu.netapp.com
Contents: Contains information about the physical and electrical requirements of storage systems.

Document: Clustered Data ONTAP Software Setup Guide
Contents: Describes how to set up and configure NetApp systems.

Document: Clustered Data ONTAP System Administration Guide for Cluster Administrators
Contents: Contains instructions for obtaining information to set up the SP or RLM, editing files in the /etc directory, and performing other administrative tasks.

Document: Clustered Data ONTAP 8.3 Upgrade and Revert/Downgrade Guide
Contents: Contains instructions for downloading and upgrading Data ONTAP.

Document: Clustered Data ONTAP Physical Storage Management Guide
Contents: Describes how to manage physical storage resources, using disks, RAID groups, and aggregates. Also contains information about Storage Encryption.

Document: Clustered Data ONTAP Logical Storage Management Guide
Contents: Describes how to efficiently manage your logical storage resources, using volumes, FlexClone volumes, files and LUNs, FlexCache volumes, deduplication, compression, qtrees, and quotas.

Document: Installation and Setup Guide for the model of the new nodes
Contents: Contains instructions for installing and cabling the new system.

Document: FlexArray Virtualization Installation Requirements and Reference Guide
Contents: Describes how to set up Data ONTAP systems for working with storage arrays.

Document: Clustered Data ONTAP Network Management Guide
Contents: Describes how to configure and manage physical and virtual network ports (VLANs and interface groups), LIFs using IPv4 and IPv6, routing, and host-resolution services in clusters. Also describes how to optimize network traffic by load balancing and how to monitor the cluster by using SNMP.

Document: The appropriate disk shelf guide
Contents: Contains instructions for installing and monitoring disk shelves and replacing disk shelf devices.

Document: Universal SAS and ACP Cabling Guide
Contents: Contains information for cabling SAS shelves applicable to all platforms.

Document: Clustered Data ONTAP SAN Administration Guide
Contents: Describes how to configure and manage iSCSI and FC protocols for SAN environments.

Document: Clustered Data ONTAP SAN Configuration Guide
Contents: Contains information about FC and iSCSI topologies and wiring schemes.

Document: Clustered Data ONTAP High-Availability Configuration Guide
Contents: Contains cabling instructions and other information for HA pairs.

Document: The appropriate host utilities documentation
Contents: Contains information about configuring initiators and other information about connecting SAN hosts to NetApp storage.

Document: Migrating from a switchless cluster to a switched Cisco Nexus 5596, Nexus 5020, or Nexus 5010 cluster environment or Migrating from a switchless cluster to a switched NetApp CN1610 cluster environment
Contents: Contains instructions for converting a two-node switchless cluster to a switched cluster.

Document: Transitioning to a two-node switchless cluster
Contents: Contains instructions for converting a switched cluster to a two-node switchless cluster.


The NetApp Support Site includes the Hardware Universe, which contains information about the hardware that the new
system supports. The NetApp Support Site also includes documentation about disk shelves, NICs, and other hardware that
you might use with your system.

Upgrading the controller hardware


To upgrade the node pair, you need to prepare the original nodes, install the new nodes and join them to the cluster, move
volumes, and then unjoin the original nodes from the cluster. You also need to migrate data and cluster management LIFs and, if
your system is in a SAN environment, create SAN LIFs on the new nodes.
Steps

1. Preparing for the upgrade on page 5


2. Rekeying disks for Storage Encryption on page 6
3. Installing the new nodes on page 8
4. Joining the new nodes to the cluster on page 8
5. Verifying setup of the new nodes on page 11
6. Creating SAN LIFs on the new nodes on page 11
7. Creating an aggregate on page 13
8. Moving volumes from the original nodes on page 14
9. Verifying that the volumes have moved successfully on page 17
10. Moving non-SAN data LIFs and cluster management LIFs from the original nodes to the new nodes on page 17
11. Deleting SAN LIFs from the original nodes on page 19
12. Unjoining the original nodes from the cluster on page 20
13. Deleting aggregates and removing disk ownership from the original nodes' internal storage on page 22
14. Completing the upgrade on page 23

Preparing for the upgrade


Before you can upgrade the node pair, you might need to update Data ONTAP on the original nodes and add additional storage
to the cluster. You must also make sure that the hardware is compatible and supported, and prepare a list of all the volumes that
you want to move from the original nodes.
Before you begin

If you are upgrading a pair of nodes in a switchless cluster, you must have converted them to a switched cluster before
performing the upgrade procedure. See the Migrating from a switchless cluster to a switched Cisco Nexus 5596, Nexus 5020,
or Nexus 5010 cluster environment or the Migrating from a switchless cluster to a switched NetApp CN1610 cluster
environment on the NetApp Support Site at mysupport.netapp.com.
Steps

1. Make sure that the original nodes are running the same version of Data ONTAP as the new nodes; if they are not, update
Data ONTAP on the original nodes.
This procedure assumes that the new nodes have not previously been used and are running the desired version of Data
ONTAP. However, if the original nodes are running the desired version of Data ONTAP and the new nodes are not, you need
to update Data ONTAP on the new nodes.
See the Clustered Data ONTAP Upgrade and Revert/Downgrade Guide for instructions for upgrading Data ONTAP.
2. Check the Hardware Universe at hwu.netapp.com to verify that the existing and new hardware components are compatible
and supported.
3. Make sure that you have enough storage on the new nodes to accommodate storage associated with the original nodes.

If you do not have enough storage, add more storage to the new nodes before joining them to the cluster. See the Clustered
Data ONTAP Physical Storage Management Guide and the appropriate disk shelf guide.
4. If you plan to convert a FAS2240 to a disk shelf or move internal SATA drives or SSDs from a FAS2220 system, enter the
following command and capture the disk name and ownership information in the output:
storage disk show

See the Clustered Data ONTAP Commands: Manual Page Reference for more information about the storage disk show
command.
5. Prepare a list of all the volumes that you want to move from the original nodes, whether those volumes are found on internal
storage, on attached disk shelves, or on array LUNs (if you have a V-Series system or system with FlexArray Virtualization
Software) that are not supported by the new nodes.
You can use the volume show command to list the volumes on the nodes. See the Clustered Data ONTAP Commands:
Manual Page Reference.
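For example, assuming an original node named node1 (the name is illustrative), you can list its volumes as a starting point for the list:
cluster::> volume show -node node1
Repeat the command for the second original node and record the SVM and volume names in the output.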
6. Obtain IP addresses, mail host addresses, and other information for the Service Processors (SPs) on the new nodes.
You might want to reuse the network parameters of the remote management devices, Remote LAN Managers (RLMs) or
SPs, from the original system for the SPs on the new nodes.
For detailed information about the SPs, see the Clustered Data ONTAP System Administration Guide for Cluster
Administrators and the Clustered Data ONTAP Commands: Manual Page Reference.
7. If you want the new nodes to have the same licensed functionality as the original nodes, enter the following command to see
which licenses are on the original system and examine its output:
system license show

8. Send an AutoSupport message to NetApp for the original nodes by entering the following command, once for each node:
system node autosupport invoke -node node_name -type all -message "Upgrading node_name from
platform_original to platform_new"
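For example, if the original nodes are named node1 and node2 and you are moving to FAS2552 controllers (names and models are illustrative), the commands might look like the following:
cluster::> system node autosupport invoke -node node1 -type all -message "Upgrading node1 from FAS2240-2 to FAS2552"
cluster::> system node autosupport invoke -node node2 -type all -message "Upgrading node2 from FAS2240-2 to FAS2552"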

Rekeying disks for Storage Encryption


If you used Storage Encryption on the original nodes and the new nodes have encryption-enabled disks, you must make sure that
the original nodes' disks are correctly keyed.
About this task

Contact technical support to perform an optional step to preserve the security of the encrypted drives by rekeying all drives to a
known authentication key.
Steps

1. On one of the nodes, access the nodeshell:


system node run -node node_name

The nodeshell is a special shell for commands that take effect only at the node level.
2. Display the status information to check for disk encryption:
disk encrypt show
Example

The system displays the key ID for each self-encrypting disk, as shown in the following example:
node> disk encrypt show
Disk     Key ID                                                            Locked?
0c.00.1  0x0                                                               No
0c.00.0  080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.3  080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.4  080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.2  080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
0c.00.5  080CF0C8000000000100000000000000A948EE8604F4598ADFFB185B5BB7FED3  Yes
...

If you get the following error message, proceed to the Preparing for netboot section; if you do not get an error message,
continue with these steps.
node> disk encrypt show
ERROR: The system is not configured to run this command.

3. Examine the output of the disk encrypt show command, and if any disks are associated with a non-MSID key, rekey
them to an MSID key by taking one of the following actions:

To rekey disks individually, enter the following command, once for each disk:
disk encrypt rekey 0x0 disk_name

To rekey all disks at once, enter the following command:


disk encrypt rekey 0x0 *

4. Verify that all the self-encrypting disks are associated with an MSID:
disk encrypt show
Example

The following example shows the output of the disk encrypt show command when all self-encrypting disks are
associated with an MSID:
node> disk encrypt show
Disk       Key ID                                                            Locked?
---------- ----------------------------------------------------------------- -------
0b.10.23   0x0                                                               No
0b.10.18   0x0                                                               No
0b.10.0    0x0                                                               Yes
0b.10.12   0x0                                                               Yes
0b.10.3    0x0                                                               No
0b.10.15   0x0                                                               No
0a.00.1    0x0                                                               Yes
0a.00.2    0x0                                                               Yes

5. Obtain an IP address for the external key management server.


See the Clustered Data ONTAP Software Setup Guide for more information about the external key management server.
6. Exit the nodeshell and return to the cluster shell:
exit

7. Repeat Step 1 through Step 6 on the second node.


Installing the new nodes


The upgrade begins with the installation of new nodes. Different controller models have different installation instructions; you
need to follow the instructions appropriate for the new nodes.
About this task

When you install the new nodes, you must make sure that you properly configure them for high availability. See the Clustered
Data ONTAP High-Availability Configuration Guide in addition to the appropriate Installation and Setup Instructions and
cabling guide.
Steps

1. Install the new nodes and their disk shelves in a rack, following the instructions in the appropriate Installation and Setup
Instructions.
Note: If the original nodes have attached disk shelves and you want to migrate them to the new system, do not attach them
to the new nodes at this point. You need to move the volumes from the original nodes' disk shelves and then remove them
from the cluster along with the original nodes.

2. Attach power and console connections to the new nodes.


For information about power and console connections, see the Installation and Setup Instructions for the model of the new
nodes.
3. Attach the network cables to the new nodes.
For information about cabling, see the Installation and Setup Instructions for the model of the new nodes and the appropriate
cabling guide.

Joining the new nodes to the cluster


You must join the new nodes to the cluster and set up storage failover on them before you can move volumes from the original
nodes to storage elsewhere on the cluster.
Before you begin

You must have completed the Cluster Setup worksheet in the Clustered Data ONTAP Software Setup Guide before turning on
the power to the new nodes.
Steps

1. Turn on the power to one of the new nodes.


The node boots and the Node Setup wizard starts on the console, as shown in the following example. Enter your information
as prompted:

Welcome to node setup.


You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the setup wizard.
Any changes you made before quitting will be saved.
To accept a default or omit a question, do not enter a value.
This system will send event messages and weekly reports to NetApp Technical
Support.
To disable this feature, enter "autosupport modify -support disable" within 24
hours.

Enabling AutoSupport can significantly speed problem determination and


resolution should a problem occur on your system.
For further information on AutoSupport, see:
http://support.netapp.com/autosupport/
Type yes to confirm and continue {yes}: <yes>
Enter the node management interface port [e0M]: <management interface port>
Enter the node management interface IP address: <mgmt_ip_address>
Enter the node management interface netmask: <mgmt_netmask>
Enter the node management interface default gateway: <mgmt_gateway>
A node management interface on port e0M with IP address <mgmt_ip_address> has been created.

This node has its management address assigned and is ready for cluster setup.
To complete cluster setup after all nodes are ready, download and run the System Setup
utility from the NetApp Support Site and use it to discover the configured nodes.
For System Setup, this node's management address is: <mgmt_ip_address>.

2. Log in as the admin user:


login:
admin

3. To launch the Cluster Setup wizard, enter the following command:


cluster setup

The following output is displayed:


Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}:
Note: Use the network interface show command to verify that the cluster interfaces are configured with the correct
IP addresses. If the cluster interfaces are not correctly configured, the node will not be able to join the cluster.
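For example, one way to review the cluster interfaces and their IP addresses before joining (the -role cluster filter limits the output to cluster LIFs) is:
cluster::> network interface show -role cluster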
4. To join the node to the cluster follow these steps:
a. Enter join at the prompt to add the node to the cluster:
join

If the node is not configured in HA mode, the following message is displayed:


Non-HA mode, Reboot node to activate HA
Warning: Ensure that the HA partner has started disk initialization before
rebooting this node to enable HA.
Do you want to reboot now to set storage failover (SFO) to HA mode?
{yes, no} [yes]:

b. Enter yes to place the node in HA mode. The node will reboot.


c. If necessary, repeat Steps 1, 2, 3, and 4a to configure the node in HA mode and then go to Step 5.
5. Follow the prompts to set up the node and join it to the cluster:

To accept the default value for a prompt, press Enter.

Enter your own value for the prompt.


Note: You need to specify cluster interface IP addresses in the Cluster Setup wizard. You must use different addresses
from the ones on the original nodes. You can accept the default addresses or assign them yourself; however, if you assign
them yourself, you must make sure that the cluster addresses are on the same subnet as the rest of the cluster.
Note: The Node Setup wizard might prompt you to reboot the system; you do not need to reboot unless prompted.

6. After the Cluster Setup wizard is completed and exits, verify that the node is healthy and eligible to participate in the cluster
by completing the following substeps:
a. Log in to the cluster.
b. Enter the following command to display the status of the cluster:
cluster show
Example

The following example shows a cluster after the first new node (node2) has been joined to the cluster:
cluster::> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
node0                 true    true
node1                 true    true
node2                 true    true

Make sure that the output of the cluster show command shows that the two new nodes are part of the same cluster and
are healthy.
c. The Cluster Setup wizard assigns the node a name that consists of cluster_name-node_number. If you want to
customize the name, enter the following command:
system node rename -node current_node_name -newname new_node_name

7. Repeat Step 1 through Step 6 with the second new node.


8. On one of the new nodes, enable storage failover by entering the following command:
storage failover modify -node new_node_name -enabled true

Enabling storage failover on one node of an HA pair automatically enables it on the partner node.
9. Verify that storage failover is enabled by entering the following command:
storage failover show
Example

The following example shows the output when nodes are connected to each other and takeover is possible:
cluster::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
node0          node1          true     Connected to node1
node1          node0          true     Connected to node0
node2          node3          true     Connected to node3
node3          node2          true     Connected to node2

Verifying setup of the new nodes


After you add the nodes, you need to verify that the new nodes are set up correctly.
Steps

1. Take one of the following actions:


If the cluster is...        Then...
In a SAN environment        Complete Step 2 and go to the section Creating SAN LIFs on the new nodes.
Not in a SAN environment    Skip both Step 2 and the section Creating SAN LIFs on the new nodes, and go to the section Creating an aggregate.

2. Verify that all the nodes are in quorum by entering the following command on one of the nodes:
event log show -messagename scsiblade.*
Example

The following example shows the output when the nodes in the cluster are in quorum:
cluster::> event log show -messagename scsiblade.*
Time                Node             Severity      Event
------------------- ---------------- ------------- ---------------------------
8/13/2012 14:03:51  node0            INFORMATIONAL scsiblade.in.quorum: The scsi-blade...
8/13/2012 14:03:51  node1            INFORMATIONAL scsiblade.in.quorum: The scsi-blade...
8/13/2012 14:03:48  node2            INFORMATIONAL scsiblade.in.quorum: The scsi-blade...
8/13/2012 14:03:43  node3            INFORMATIONAL scsiblade.in.quorum: The scsi-blade...

Creating SAN LIFs on the new nodes


If your cluster is configured for SAN, you need to create SAN LIFs on the new nodes once you join those new nodes to the
cluster.
About this task

You need to create at least two SAN LIFs on each node for each Storage Virtual Machine (SVM).
Steps

1. Determine if any iSCSI or FCP LIFs on the original nodes were members of port sets by entering the following command on
one of the original nodes and examining its output:
lun portset show
Example

The following example displays the output of the command, showing the port sets and LIFs (port names) for an SVM named
vs1:
cluster::> lun portset show
Virtual
Server    Portset      Protocol Port Names              Igroups
--------- ------------ -------- ----------------------- ------------
vs1       ps0          mixed    LIF1, LIF2              igroup1
          ps1          iscsi    LIF3                    igroup2
          ps2          fcp      LIF4                    -
3 entries were displayed.

2. To list the mappings between the LUNs and initiator groups, enter the lun mapping show command:
lun mapping show
Example
lun mapping show
Vserver  Path                                          Igroup    LUN ID  Protocol
-------- --------------------------------------------- --------- ------- --------
vs1      /vol/vol1/lun1                                igroup1         1  mixed
vs1      /vol/vol1/lun1                                igroup2         4  mixed
vs1      /vol/vol2/lun2                                igroup3        10  mixed
3 entries were displayed.
3. Add both nodes to the reporting-nodes list of the aggregate or volume:

lun mapping add-reporting-nodes -vserver vserver_name -path lun_path -igroup igroup_name [-destination-aggregate aggregate_name | -destination-volume volume_name]
Example

The following example adds the current node and its HA partner as reporting nodes for the mapping of LUN /vol/vol1/lun2 to
igroup igroup1:
cluster::> lun mapping add-reporting-nodes -vserver vs1 -path /vol/vol1/lun2 -igroup igroup1

4. Use the lun mapping remove-reporting-nodes command to remove extra reporting nodes from an existing LUN mapping:
lun mapping remove-reporting-nodes -vserver vserver_name -path lun_path -igroup igroup_name
Example

The following command removes excess remote nodes from the LUN mapping of /vol/vol1/lun1 to igroup igroup1:
lun mapping remove-reporting-nodes -vserver vs1 -path /vol/vol1/lun1 -igroup igroup1

5. Create SAN LIFs on the new nodes by entering the following command, once for each LIF:
If LIF type is...  Enter...
iSCSI              network interface create -vserver vserver_name -lif lif_name -role data -data-protocol iscsi -home-node new_node_name -home-port port_name -address IP_Address -netmask Netmask
FCP                network interface create -vserver vserver_name -lif lif_name -role data -data-protocol fcp -home-node new_node_name -home-port port_name

For detailed information about creating LIFs, see the Clustered Data ONTAP Network Management Guide and the Clustered
Data ONTAP Commands: Manual Page Reference.
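For example, an iSCSI LIF on one of the new nodes might be created as follows; the SVM, LIF, node, port, and address values are illustrative:
cluster::> network interface create -vserver vs1 -lif iscsi_lif_n3_1 -role data -data-protocol iscsi -home-node node3 -home-port e0e -address 192.0.2.25 -netmask 255.255.255.0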
6. If port sets exist, add the new SAN LIFs to the port sets by entering the following command, once for each LIF:
lun portset add -vserver vserver_name -portset portset_name -port-name lif_name


7. If you use FC or FCoE, take the following actions:


a. Verify that zoning is set up correctly to enable the existing initiator ports on the host to connect to the new target ports on
the new nodes.
b. Update switch zoning to connect the new nodes to existing initiators.
Zoning setup varies depending on the switch that you use. For these steps, refer to the Clustered Data ONTAP
SAN Configuration Guide and the appropriate switch documentation.
8. Verify that initiators can log in and discover all LUNs through the newly created SAN LIFs by entering one of the following
commands:

If FC is enabled, display information about FC initiators that are currently logged in on the new nodes by entering the
following command:
vserver fcp initiator show -vserver vvol_vs -lif lif_name

The following example displays information about logged-in FC initiators for an SVM named vvol_vs1:
cluster::> fcp initiator show -vserver vvol_vs1 -lif fcp_lif_2,fcp_lif_3
          Logical    Port     Initiator                Initiator
Vserver   Interface  Address  WWNN                     WWPN                     Igroup
--------- ---------- -------- ------------------------ ------------------------ ------
vvol_vs   fcp_lif_2  dd0800   20:01:00:1b:32:2a:f6:b0  21:01:00:1b:32:2a:f6:b0
vvol_vs   fcp_lif_2  dd0700   20:01:00:1b:32:2c:ae:0c  21:01:00:1b:32:2c:ae:0c
vvol_vs   fcp_lif_3  dd0800   20:01:00:1b:32:2a:f6:b0  21:01:00:1b:32:2a:f6:b0
vvol_vs   fcp_lif_3  dd0700   20:01:00:1b:32:2c:ae:0c  21:01:00:1b:32:2c:ae:0c
4 entries were displayed.

If iSCSI is enabled, display information about iSCSI initiators that are currently logged in on the new nodes by entering
the following command, once for each new LIF:
iscsi connection show -vserver vserver_name -lif new_lif

The following example displays information about a logged-in initiator for an SVM named vs1:
cluster::> iscsi connection show -vserver vs1 -lif data1
        Tpgroup        Conn  Local            Remote           TCP Recv
Vserver Name     TSIH  ID    Address          Address          Size
------- -------- ----- ----- ---------------- ---------------- --------
vs1     data1    10    1     10.229.226.165   10.229.136.188   131400

You might need to set up the initiators to discover paths through the new nodes. Steps for setting up initiators vary depending
on the operating system of your host. See the host utilities documentation for your host computer for specific instructions.
See the Clustered Data ONTAP SAN Configuration Guide and the Clustered Data ONTAP SAN Administration Guide for
additional information about initiators.

Creating an aggregate
You create one or more aggregates on each new node to provide storage for the volumes that are on the internal disk drives of
the original nodes or on any disk shelves attached to the original nodes. Third-party array LUNs can also be used to
create aggregates if the configuration is a diskless FAS system that has a FlexArray Virtualization (FAV) license installed.
Before you begin

The new nodes must have enough storage to accommodate the aggregates.

At least one data aggregate must be on each of the new nodes.

You must know which disks will be used in the new aggregates.

You must know the source aggregates, the number and kind of disks, and the RAID setup on the original nodes.


About this task

You can specify disks by listing their IDs, or by specifying a disk characteristic such as type, size, or speed. Disks are owned by
a specific node; when you create an aggregate, all disks in that aggregate must be owned by the same node, which becomes the
home node for that aggregate.
If the nodes are associated with more than one type of disk and you do not explicitly specify what type of disks to use, Data
ONTAP creates the aggregate using the disk type with the highest number of available disks. To ensure that Data ONTAP uses
the disk type that you expect, always specify the disk type when creating aggregates from heterogeneous storage.
You can display a list of the available spares by using the storage disk show -spare command. This command displays
the spare disks for the entire cluster. If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. You can ensure that the aggregate is created on a specific node by using the node option or
by specifying the disks that are owned by that node.
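For example, on a node that owns both SAS and SATA spares, you might list the spares and then create an aggregate that explicitly uses SAS disks; the node name, aggregate name, and disk count are illustrative:
cluster::> storage disk show -spare -owner node3
cluster::> storage aggregate create -aggregate aggr_node3_01 -node node3 -diskcount 24 -disktype SAS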
Aggregate names must meet the following requirements:

Begin with either a letter or an underscore (_)

Contain only letters, digits, and underscores

Contain no more than 250 characters

Steps

1. Create at least one aggregate by entering the following command on one of the new nodes, once for each aggregate that you
want to create:
storage aggregate create -aggregate aggr_name -node new_node_name -diskcount integer

You must specify the node and the disk count when you create the aggregates.
The parameters of the command depend on the size of the aggregate needed. See the Clustered Data ONTAP Commands:
Manual Page Reference for detailed information about the storage aggregate create command.
2. Repeat Step 1 on the other new node.
3. Verify the RAID group and disks of your new aggregate:
storage aggregate show -aggregate aggr_name

Moving volumes from the original nodes


After you create one or more aggregates on each of the new nodes to store the volumes from the original nodes, you must
identify an aggregate for each volume and move each volume individually. You need to move the SVM root volume along with
the other volumes.
Before you begin

If you are moving a data protection mirror and you have not initialized the mirror relationship, you must initialize the
mirror relationship by using the snapmirror initialize command before proceeding. If you receive an error message when
trying to move an uninitialized volume, it might be because no Snapshot copies are available until after the volume is
initialized.
Data protection mirror relationships must be initialized before you can move one of the volumes.
See the Clustered Data ONTAP Data Protection Guide and the Clustered Data ONTAP Commands: Manual Page Reference for
detailed information about initializing data protection.
You must have prepared the list of volumes that you want to move, which you were directed to do in Step 5 of Preparing for the
upgrade.
You must have reviewed the requirements for and restrictions on moving volumes in the Clustered Data ONTAP Logical
Storage Management Guide.


Note: You must review the guidelines in the Clustered Data ONTAP Logical Storage Management Guide if you receive error
messages when trying to use the volume move command to move FlexClone, load-sharing, temporary, or FlexCache
volumes.
About this task

Moving a volume occurs in phases:

A new volume is made on the destination aggregate.

The data from the original volume is copied to the new volume.
During this time, the original volume is intact and available for clients to access.

At the end of the move process, client access is temporarily blocked.


During this time, the system performs a final replication from the source volume to the destination volume, swaps the
identities of the source and destination volumes, and changes the destination volume to the source volume.

After completing the move, the system routes client traffic to the new source volume and resumes client access.

Moving volumes is not disruptive to client access because the time in which client access is blocked ends before clients notice a
disruption and time out. Client access is blocked for 45 seconds by default. If the volume move operation cannot finish in the
time that access is denied, the system aborts this final phase of the volume move operation and allows client access.
The system runs the final phase of the volume move operation until the volume move is complete or until the default of three
attempts is reached. If the volume move operation fails after the third attempt, the process goes into a cutover-deferred state and
waits for you to initiate the final phase.
If the defaults are not adequate, you can change the amount of time that client access is blocked or the number of times that
the final phase of the volume move operation, known as cutover attempts, is run. You also can determine what the system does
if the volume move operation cannot be completed during the time client access is blocked. See the Clustered Data ONTAP
Commands: Manual Page Reference for detailed information about the volume move command.
Note: In Data ONTAP 8.3, when you move volumes, an individual controller should not be involved in more than 16
simultaneous volume moves at a time. The 16-simultaneous volume move limit includes volume moves where the controller
is either the source or destination of the operation. For example, if you are upgrading NodeA with NodeC and NodeB with
NodeD, up to 16 simultaneous volume moves can take place from NodeA to NodeC, and at the same time 16 simultaneous
volume moves can take place from NodeB to NodeD.
Note: If you want to use the volume move command on an Infinite Volume, you need to contact technical support for

assistance.
Note: You need to move the volumes from an original node to the new node that is replacing it. That is, if you replace NodeA
with NodeC and NodeB with NodeD, you must move volumes from NodeA to NodeC and volumes from NodeB to NodeD.
Steps

1. Display information for the volumes that you want to move from the original nodes to the new nodes:
volume show -vserver vserver_name -node original_node_name
Example

The following example shows the output of the command for volumes on an SVM named vs1 and a node named node0:
cluster::> volume show -vserver vs1 -node node0
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
vs1       clone        aggr1        online     RW         40MB    37.87MB    5%
vs1       vol1         aggr1        online     RW         40MB    37.87MB    5%
vs1       vs1root      aggr1        online     RW         20MB    18.88MB    5%
3 entries were displayed.

Capture the information in the output, which you will need in Step 4.
Note: If you are moving one volume at a time, complete Step 1 through Step 6 for each volume that you move: perform all
the steps for one volume, and then all the steps for the next volume, and so on. If you are moving multiple volumes at the
same time, complete Step 1 through Step 6 once for each group of volumes that you move, keeping within the limit of 16
simultaneous volume moves for Data ONTAP 8.3 or 25 simultaneous volume moves for Data ONTAP 8.1.

2. Determine an aggregate to which you can move the volume:


volume move target-aggr show -vserver Vserver_name -volume vol_name
Example

The output in the following example shows that the SVM vs2 volume user_max can be moved to any of the listed
aggregates:
cluster::> volume move target-aggr show -vserver vs2 -volume user_max
Aggregate Name     Available Size  Storage Type
------------------ --------------- ------------
aggr2              467.9GB         FCAL
node12a_aggr3      10.34GB         FCAL
node12a_aggr2      10.36GB         FCAL
node12a_aggr1      10.36GB         FCAL
node12a_aggr4      10.36GB         FCAL
5 entries were displayed

3. Run a validation check on the volume to ensure that it can be moved to the intended aggregate by entering the following
command for the volume that you want to move:
volume move start -vserver vserver_name -volume volume_name -destination-aggregate
destination_aggregate_name -perform-validation-only true

4. Move the volumes by entering the following command for the volume that you want to move:
volume move start -vserver vserver_name -volume vol_name -destination-aggregate
destination_aggr_name -cutover-window integer

You need to be in advanced mode to use the -cutover-window and -cutover-action parameters with the volume
move start command.

You must enter the command once for each volume that you want to move from the original nodes to the new nodes,
including SVM root volumes.

You cannot move vol0, the node root volume.


Other volumes, including SVM root volumes, can be moved.

The -cutover-window parameter specifies the time interval in seconds to complete cutover operations from the original
volume to the moved volume. The default is 45 seconds, and the valid time intervals can range from 30 to 300 seconds,
inclusive.
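For example, to allow a 60-second cutover window for a busy volume, you might run the move in advanced mode as follows; the SVM, volume, and aggregate names are illustrative:
cluster::> set -privilege advanced
cluster::*> volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr_node3_01 -cutover-window 60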
5. Check the outcome of the vol move command:
volume move show -vserver vserver_name -volume vol_name

6. If the volume move operation does not complete the final phase after three attempts and goes into a cutover-deferred state,
enter the following command to try to complete the move:
volume move trigger-cutover -vserver vserver_name -volume vol_name -force true

Forcing the volume move operation to finish can disrupt client access to the volume you are moving.


Verifying that the volumes have moved successfully


After you move the volumes, you need to make sure that each one has been moved to the correct aggregate.
Steps

1. Enter the following command, once for each SVM, and examine the output to make sure the volumes are in the correct
aggregate:
volume show -vserver Vserver_name
Example

The following example shows the output for the command entered for an SVM named vs1:
cluster::> volume show -vserver vs1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
vs1       vol1         aggr1        online     RW          2GB      1.9GB    5%
vs1       vol1_dr      aggr0_dp     online     DP        200GB    160.0GB   20%
vs1       vol2         aggr0        online     RW        150GB    110.3GB   26%
vs1       vol2_dr      aggr0_dp     online     DP        150GB    110.3GB   26%
vs1       vol3         aggr1        online     RW        150GB    120.0GB   20%

2. If any volumes are not in the correct aggregate, move them again by following the steps in the section Moving volumes from
the original nodes.
3. If you are unable to move any volumes to the correct aggregate, contact technical support.

Moving non-SAN data LIFs and cluster-management LIFs from the original nodes to the new nodes
After you have moved the volumes from the original nodes, you need to migrate the non-SAN data LIFs and cluster-management LIFs from the original nodes to the new nodes.
About this task

You should execute the command for migrating a cluster-management LIF from the node where the cluster LIF is hosted.

You cannot migrate a LIF if that LIF is used for copy-offload operations with VMware vStorage APIs for Array Integration
(VAAI).
For more information about VMware VAAI, see the Clustered Data ONTAP File Access Management Guide for CIFS or the
Clustered Data ONTAP File Access Management Guide for NFS.

You should use the network interface show command to see where the cluster-management LIF resides.
If the cluster-management LIF resides on one of the controllers that is being decommissioned, you need to migrate the LIF
to a new controller.

Steps

1. Modify the home ports for the non-SAN data LIFs from the original nodes to the new nodes by entering the following
command, once for each LIF:
network interface modify -vserver vserver_name -lif lif_name -home-node new_node_name -home-port netport|ifgrp

If you use the same port on the destination node, you do not need to specify the home port.
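For example, using the same SVM, LIF, node, and port names as the migration example that follows, the home-port change might look like this:
cluster::> network interface modify -vserver vs0 -lif datalif1 -home-node node0b -home-port e0d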
2. Take one of the following actions:


If you want to migrate...                                Then...
A specific LIF                                           Enter the following command, once for each LIF:
                                                         network interface migrate -vserver vserver_name -lif lif_name -source-node source_node_name -destination-node dest_node_name -destination-port dest_port_name
All the non-SAN data LIFs and cluster-management LIFs    Enter the following command:
                                                         network interface migrate-all -node node_name

Example

The following command migrates a LIF named datalif1 on the SVM vs0 to the port e0d on node0b:
cluster::> network interface migrate -vserver vs0 -lif datalif1 -destination-node node0b destination-port e0d

The following command migrates all the data and cluster-management LIFs from the current (local) node:
cluster::> network interface migrate-all -node local

3. Check whether the cluster-management LIF home node is on one of the original nodes by entering the following command
and examining its output:
network interface show -lif cluster_mgmt -fields home-node
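The output is similar to the following; the cluster and node names are illustrative:
cluster::> network interface show -lif cluster_mgmt -fields home-node
vserver   lif           home-node
--------- ------------- ---------
cluster1  cluster_mgmt  node0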

4. Take one of the following actions, based on the output of the command in Step 3:
If the cluster management LIF home node...              Then...
Is on one of the controllers being decommissioned       Complete the following substeps:
                                                        a. Switch the home node of the cluster-management LIF to one of the new nodes:
                                                           network interface modify -vserver cluster_name -lif cluster_mgmt -home-node new_node_name -home-port netport|ifgrp
                                                           Note: If the cluster-management LIF is not on one of the new nodes, you will not be able to unjoin the original nodes from the cluster.
                                                        b. Migrate the cluster-management LIF to one of the new nodes:
                                                           network interface migrate -vserver vserver_name -lif cluster_mgmt -destination-node new_node_name -destination-port netport|ifgrp
                                                        c. Go to the After you finish portion of this section.
Is not on one of the controllers being decommissioned   Go to the After you finish portion of this section.

After you finish

Take one of the following actions:

If you are in a SAN environment, or if both SAN and NAS are enabled, complete the section Deleting SAN LIFs from the
original nodes and then go to the section Unjoining the original nodes from the cluster.

If you are in a NAS environment, skip the section Deleting SAN LIFs from the original nodes and go to the section
Unjoining the original nodes from the cluster.


Deleting SAN LIFs from the original nodes


If the cluster is in a SAN environment, you need to make sure that the hosts are not logged in to any SAN LIFs that will be
deleted. You must then delete any SAN LIFs from the original nodes before you can unjoin the original nodes from the cluster.
Steps

1. Take one of the following actions:


If you have...      Then...
iSCSI initiators    Complete Step 2 and then go to Step 3.
FC initiators       Go to Step 4.

2. Enter the following command to display a list of active initiators currently connected to an SVM on the original nodes, once
for each of the old LIFs:
iscsi connection show -vserver Vserver_name -lif old_lif
Example

The following example shows the output of the command with an active initiator connected to SVM vs1:
cluster::> iscsi connection show -vserver vs1 -lif data2
        Tpgroup        Conn  Local            Remote           TCP Recv
Vserver Name     TSIH  ID    Address          Address          Size
------- -------- ----- ----- ---------------- ---------------- --------
vs1     data2    9     1     10.229.226.166   10.229.136.188   131400

3. Examine the output of the command in Step 2 and then take one of the following actions:
If the output of the command in Step 2...                                   Then...
Shows that any initiators are still logged in to an original node           On your host computer, log out of any sessions on the original controller. Instructions vary depending on the operating system on your host. See the host utilities documentation for your host for the correct instructions.
Does not show that any initiators are still logged in to an original node   Go to Step 4.

4. Display a list of port sets by entering the following command:

lun portset show
Example

The following example shows output of the lun portset show command:
cluster:> lun portset show
Virtual
Server    Portset      Protocol Port Names              Igroups
--------- ------------ -------- ----------------------- ------------
js11      ps0          mixed    LIF1, LIF2              igroup1
          ps1          iscsi    LIF3                    igroup2
          ps2          fcp      LIF4
3 entries were displayed.


See the Clustered Data ONTAP SAN Administration Guide for information about port sets and the Clustered Data ONTAP
Commands: Manual Page Reference for detailed information about portset commands.
5. Examine the output of the lun portset show command to see if any iSCSI or FC LIFs on the original nodes belong to
any port sets.
6. If any iSCSI or FC LIFs on either node being decommissioned are members of a port set, remove them from the port sets
by entering the following command, once for each LIF:
lun portset remove -vserver vserver_name -portset portset_name -port-name lif_name
Note: You need to remove the LIFs from port sets before you delete the LIFs.

7. Delete the LIFs on the original nodes by entering the following command, once per LIF:
network interface delete -vserver vserver_name -lif lif_name

Unjoining the original nodes from the cluster


Before you remove and decommission the original nodes and transfer any storage to the new nodes, you might need to reassign
the epsilon to a new node. You also must unjoin the original nodes from the cluster.
Steps

1. Disable high-availability by entering the following command at the command prompt of one of the original nodes:
storage failover modify -node original_node_name -enabled false

2. Access the advanced privilege level by entering the following command on either node:
set -privilege advanced

The system displays the following message:


Warning: These advanced commands are potentially dangerous; use them only when directed
to do so by NetApp personnel.
do you wish to continue? (y or n):

3. Enter y.
4. Find the node that has epsilon by entering the following command and examining its output:
cluster show

The system displays information about the nodes in the cluster, as shown in the following example:
cluster::*> cluster show
Node                 Health  Eligibility  Epsilon
-------------------- ------- ------------ -------
node0                true    true         true
node1                true    true         false
node2                true    true         false
node3                true    true         false

If one of the original nodes has the epsilon, then the epsilon needs to be moved to one of the new nodes. If another node in
the cluster has the epsilon, you do not need to move it.
5. If necessary, move the epsilon to one of the new nodes by entering the following commands:
cluster modify -node original_node_name -epsilon false
cluster modify -node new_node_name -epsilon true

6. Enter the following command from one of the new nodes for both of the original nodes:
cluster unjoin -node original_node_name

The system displays the following message:


Warning: This command will unjoin node node_name from the cluster. You
must unjoin the failover partner as well. After the node is
successfully unjoined, erase its configuration and initialize all
disks by using the "Clean configuration and initialize all disks (4)"
option from the boot menu.
Do you want to continue? {y|n}: y

Enter y.
You need to log in to both unjoined nodes to perform Steps 7 and 8.
Note: The cluster unjoin command should only be invoked from one of the new nodes. If it is entered on an existing
node you may see the following error message:
Error: command failed: Cannot unjoin a node on which the unjoin command is
invoked. Please connect to any other node in the cluster to unjoin this
node.

7. The node boots and stops at the boot menu, as shown here:
This node was removed from a cluster. Before booting, use option (4)
to initialize all disks and setup a new system.
Normal Boot is prohibited.
Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)?

Enter 4.
8. The system displays the following messages:
Zero disks, reset config and install a new file system?:
This will erase all the data on the disks, are you sure?:

Enter y at both prompts.


9. Take one of the following actions:
If the cluster, after you complete the unjoin commands, has...   Then...
Two nodes                a. Configure high availability by entering the following command:
                            cluster ha modify -configured true
                         b. Go to Step 11.
More than two nodes      Go to Step 11.

10. Return to the admin level by entering the following command:


set -privilege admin

11. Before you turn off power to any disk shelves attached to the original nodes and move them, make sure that the disk
initialization started in Step 7 through Step 9 is complete.
12. Turn off power to the original nodes and then unplug them from the power source.
13. Remove all cables from the original nodes.
14. If you plan to reuse attached disk shelves from the original nodes on the new nodes, cable the disk shelves to the new nodes.
15. Remove the original nodes and their disk shelves from the rack if you do not plan to reuse the hardware in its original
location.

Deleting aggregates and removing disk ownership from the original nodes' internal storage
If you want to convert a FAS2240 system to a disk shelf or move internal SATA drives or SSDs from a FAS22xx system, you must
delete the old aggregates from the original nodes' internal storage before completing the upgrade. You also must remove disk
ownership from the original system's internal disks.
Before you begin

You must have completed the previous tasks in this upgrade procedure.
About this task

If you do not want to convert a FAS2240 system to disk shelves or move internal SATA drives or SSDs from a FAS2220
controller, go to the section Completing the upgrade.
Note: You do not need to delete aggregates or remove ownership from disks in external shelves that you plan to migrate to the
new system.
Steps

1. Verify that there are no data volumes associated with the aggregates to be destroyed by entering the following command,
once for each aggregate:
volume show -aggregate aggr_name

The system should display the message in the following example:


cluster::> volume show -aggregate aggr1
There are no entries matching your query.

If volumes are associated with the aggregates to be destroyed, repeat the steps in the section Moving volumes from the
original nodes.
2. If there are any aggregates on the original nodes, delete them by entering the following command, once for each aggregate:
storage aggregate delete -aggregate aggregate_name

The system displays the following message:


Warning: Are you sure you want to destroy aggregate "aggr1"?
{y|n}:

3. Enter y at the prompt.


The system displays a message similar to the one in the following example:


[Job 43] Job is queued: Delete aggr1.DBG: VOL_OFFLINE: tree aggr1, new_assim=0,
assim_gen=4291567353, creating=0, has_vdb=true
[Job 43] deleting aggregate aggr1 ... DBG: VOL_OFFLINE: tree aggr1, new_assim=0,
assim_gen=4291567353, creating=0, has_vdb=true
DBG:vol_obj.c:1823:volobj_offline(aggr2): clear VOL_FLAG_ONLINE
DBG:config_req.c:9921:config_offline_volume_reply2(aggr2): clr VOL_FLAG_ONLINE
[Job 43] Job succeeded: DONE

4. Verify all the old aggregates are deleted by entering the following command and examining its output:
storage aggregate show -aggregate aggr_name1,aggr_name2...

The system should display the message shown in the following example:
cluster::> storage aggregate show -aggregate aggr1,aggr2
There are no entries matching your query.

5. Remove ownership from the original system's internal disks by entering the following command, once for each disk:
storage disk removeowner -disk disk_name

Refer to the disk information that you captured in the section Preparing for the upgrade.
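For example, to remove ownership from an internal disk (the disk name 00.0 is hypothetical; use the disk names that you recorded earlier):

cluster::> storage disk removeowner -disk 00.0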
6. Verify that ownership has been removed for all of the internal disks by entering the following command and examining its
output:
storage disk show

Internal drives have 00. at the beginning of their ID. The 00. indicates an internal disk shelf, and the number after the
decimal point indicates the individual disk drive.
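The following abbreviated output is only a sketch; the disk names are hypothetical, and the columns on your system might differ. You can narrow the output with the -fields parameter. Internal disks whose ownership has been removed should show no owner:

cluster::> storage disk show -fields disk,owner
disk  owner
----- -----
00.0  -
00.1  -
...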

Completing the upgrade


After you complete the upgrade, you should configure the Service Processors (SPs) on the new nodes and send an AutoSupport
message to NetApp to confirm the upgrade. If you upgraded a pair of nodes running Data ONTAP 8.3, you also might want to
set up the new nodes as a switchless cluster.
Steps

1. Configure the SPs by entering the following command on both of the new nodes:
system node service-processor network modify

See the Clustered Data ONTAP System Administration Guide for Cluster Administrators for more information.
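The following command is only a sketch; the node name, IP address, netmask, and gateway are placeholders that you must replace with values for your environment:

system node service-processor network modify -node node3 -address-family IPv4 -enable true -ip-address 192.0.2.10 -netmask 255.255.255.0 -gateway 192.0.2.1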
2. Set up AutoSupport by following the instructions in the Clustered Data ONTAP System Administration Guide for Cluster
Administrators.
3. Install new licenses for the new nodes by entering the following commands for each node:
system license add -license-code license_code,license_code,license_code...

In Data ONTAP 8.3, you can add one license at a time, or you can add multiple licenses at a time, each license key separated
by a comma.
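For example, to add two licenses in a single command (the license keys shown are placeholders only):

system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA,BBBBBBBBBBBBBBBBBBBBBBBBBBBB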
4. To remove all of the old licenses from the original nodes, enter one of the following commands:
system license clean-up -unused -expired
system license delete -serial-number node_serial_number -package licensable_package

To delete all expired licenses, enter:


system license clean-up -expired


To delete all unused licenses, enter:


system license clean-up -unused

To delete a specific license from a cluster, enter the following commands on the nodes:
system license delete -serial-number <node1 serial number> -package *
system license delete -serial-number <node2 serial number> -package *

The following output is displayed:


Warning: The following licenses will be removed:
<list of each installed package>
Do you want to continue? {y|n}: y

Enter y to remove all of the packages.


5. Verify that the licenses are properly installed by entering the following command and examining its output:
system license show

You might want to compare the output with the output that you captured in Step 7 of the section Preparing for the upgrade.
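The following output is abbreviated and illustrative only; the serial number, packages, and column layout on your system will differ:

cluster::> system license show
Serial Number: 1-80-000000
Owner: cluster
Package           Type     Description           Expiration
----------------- -------- --------------------- ----------
Base              license  Cluster Base License  -
NFS               license  NFS License           -
...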
6. Send a postupgrade AutoSupport message to NetApp by entering the following command, once for each node:
system node autosupport invoke -node node_name -type all -message "node_name successfully
upgraded from platform_old to platform_new"
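For example, with hypothetical node and platform names substituted into the command:

system node autosupport invoke -node node3 -type all -message "node3 successfully upgraded from FAS2240 to FAS8040"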

7. If you have a two-node cluster running Data ONTAP 8.3 and you want to set up a switchless cluster on the new nodes,
follow the instructions in Transitioning to a two-node switchless cluster on the NetApp Support Site.

Performing post-upgrade tasks


Once you upgrade the original nodes, you can move storage from the original system to the new one, if you wish. You also
should set up Storage Encryption if the new nodes are encryption-enabled. If the new system has a converged network adapter
(CNA) card or onboard CNA ports, you should check whether you need to configure the ports. You should then decommission
any original storage hardware that you are not reusing.
Before you begin

You must have moved all the volumes from the original nodes and completed all of the upgrade procedure through the section
Completing the upgrade.
About this task

You can reuse FAS2240 nodes by converting them to disk shelves and attaching them to the new system. You can transfer SATA
disk drives or SSDs from FAS22xx nodes and install them in disk shelves attached to the new nodes.
Step

1. Take one of the following actions:

If the original system was a FAS2220 system with SAS drives, go to the section Setting up Storage Encryption on the new nodes
if the new system has encryption-enabled disks; otherwise, go to the section Decommissioning the old system.

If the original system was a FAS2220 system with SATA drives or SSDs, go to the section Moving internal disk drives from a
FAS2220 system.

If the original system was a FAS2240 system, go to the section Converting the FAS2240 system to a disk shelf and attaching it
to the new system.


Steps

1. Moving internal disk drives from a FAS2220 system on page 25
2. Converting the FAS2240 system to a disk shelf and attaching it to the new system on page 26
3. Reassigning disks from the original nodes to the new nodes on page 26
4. Setting up Storage Encryption on the new nodes on page 28
5. Configuring CNA ports on page 29
6. Decommissioning the old system on page 32

Moving internal disk drives from a FAS2220 system


After you have installed and set up the new node, you can move the internal disk drives from a FAS2220 system with SATA
drives or SSDs to a disk shelf attached to the new system.
Before you begin

You must have done the following before proceeding with this section:

Made sure that the SATA or SSD drive carriers from the FAS2220 system are compatible with the new disk shelf
Check the Hardware Universe on the NetApp Support Site for compatible disk shelves

Made sure that there is a compatible disk shelf attached to the new system

Made sure that the disk shelf has enough free bays to accommodate the SATA or SSD drive carriers from the FAS2220
system

About this task

You cannot transfer SAS disk drives from a FAS2220 system to a disk shelf attached to the new nodes.
Steps

1. Gently remove the bezel from the front of the system.


2. Press the release button on the left side of the drive carrier.
The following illustration shows a disk drive with the release button located on the left of the carrier face:

[Illustration of a disk drive carrier showing the release button on the left of the carrier face]

The cam handle on the carrier springs open partially, and the carrier releases from the midplane.
3. Pull the cam handle to its fully open position to unseat the carrier from the midplane and gently slide the carrier out of the
disk shelf.
Attention: Always use two hands when removing, installing, or carrying a disk drive. However, do not place your hands

on the disk drive boards exposed on the underside of the carrier.


The following illustration shows a carrier with the cam handle in its fully open position:

[Illustration of a disk drive carrier with the cam handle in its fully open position]

4. With the cam handle in the open position, insert the carrier into a slot in the new disk shelf, firmly pushing until the carrier
stops.
Caution: Use two hands when inserting the carrier.

5. Close the cam handle so that the carrier is fully seated in the midplane and the handle clicks into place.
Be sure you close the handle slowly so that it aligns correctly with the face of the carrier.
6. Repeat Step 2 through Step 5 for all of the disk drives that you are moving to the new system.

Converting the FAS2240 system to a disk shelf and attaching it to the new system
After you complete the upgrade, you can convert the FAS2240 system to a disk shelf and attach it to the new system to provide
additional storage.
Before you begin

You must have upgraded the FAS2240 system before converting it to a disk shelf. The FAS2240 system must be powered down
and uncabled.
Steps

1. Replace the controller modules in the FAS2240 system with IOM6 modules.
2. Set the disk shelf ID.
Each disk shelf, including the FAS2240 chassis, requires a unique ID.
3. Reset other disk shelf IDs as needed.
4. Turn off power to any disk shelves connected to the new nodes, and then turn off power to the new nodes.
5. Cable the converted FAS2240 disk shelf to a SAS port on the new system, and, if you are using ACP cabling, to the ACP
port on the new system.
Note: If the new system does not have a dedicated onboard network interface for ACP for each controller, you must
dedicate one for each controller at system setup. See the Installation and Setup Instructions for the new system and the
Universal SAS and ACP Cabling Guide for cabling information. Also consult the Clustered Data ONTAP High-Availability
Configuration Guide.
6. Turn on power to the converted FAS2240 disk shelf and any other disk shelves attached to the new nodes.
7. Turn on the power to the new nodes and then interrupt the boot process on each node by pressing Ctrl-C to access the boot
environment prompt.

Reassigning disks from the original nodes to the new nodes


If you are moving external disk shelves that were attached to the original nodes, you need to reassign the disks to the new nodes
and disk shelves. Third-party array LUNs can also be moved to new nodes in configurations with diskless FAS systems that
have a FlexArray Virtualization (FAV) license installed. You do not need to perform this task if you did not move external disk
shelves from the original system to the new one.
About this task

You need to perform the steps in this section on both nodes, completing each step on one node and then the other node before
going on to the next step.
Steps

1. Boot Data ONTAP on the new node by entering the following command at the boot environment prompt:
boot_ontap maint

2. On the new node, display the new node system ID by entering the following command at the Maintenance mode prompt:
disk show
Example
*> disk show
Local System ID: 101268854
...

3. Record the new node system ID.


4. Reassign the node's spares, any disks belonging to root, and any SFO aggregates by entering the following command:
disk reassign -s original_sysid -d new_sysid -p partner_sysid
Example

When you run the disk reassign command on node1, the -s parameter is node1 (original_sysid), the -p parameter
is node2 (partner_sysid), and node3 is the -d parameter (new_sysid):
disk reassign -s node1_sysid -d node3_sysid -p node2_sysid

When you run the disk reassign command on node2, the -p parameter is node 3's partner_sysid. The disk
reassign command will reassign only those disks for which original_sysid is the current owner.
To obtain the system ID for the nodes, use the sysconfig command.
Example

The system displays the following message:


Partner node must not be in Takeover mode during disk reassignment from maintenance mode.
Serious problems could result!!
Do not proceed with reassignment if the partner is in takeover mode. Abort reassignment
(y/n)? n
After the node becomes operational, you must perform a takeover and giveback of the HA
partner node to ensure disk reassignment is successful.
Do you want to continue (y/n)? y

5. Enter y.


The system displays the following message:


Disk ownership will be updated on all disks previously belonging to Filer with sysid
<sysid>.
Do you want to continue (y/n)? y

6. Enter y.
7. If the node is in the FAS22xx, FAS25xx, FAS32xx, FAS62xx, or FAS80xx family, verify that the controller and chassis are
configured as ha by entering the following command and observing the output:
ha-config show
Example

The following example shows the output of the ha-config show command:
*> ha-config show
Chassis HA configuration: ha
Controller HA configuration: ha

FAS22xx, FAS25xx, FAS32xx, FAS62xx, and FAS80xx systems record in a PROM whether they are in an HA pair or stand-alone configuration. The state must be the same on all components within the stand-alone system or HA pair.
If the controller and chassis are not configured as ha, use the ha-config modify controller ha and ha-config
modify chassis ha commands to correct the configuration.
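For example, if either value is not ha, you can correct and then reverify the configuration from the Maintenance mode prompt:

*> ha-config modify controller ha
*> ha-config modify chassis ha
*> ha-config show
Chassis HA configuration: ha
Controller HA configuration: ha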
8. Enter the following command at the Maintenance mode prompt:
halt

9. Enter the following command at the boot environment prompt:


boot_ontap

Setting up Storage Encryption on the new nodes


If the new nodes have Storage Encryption enabled, you might need to complete a series of additional steps to ensure
uninterrupted Storage Encryption functionality. These steps include collecting network information, obtaining private and public
SSL certificates, and running the Storage Encryption setup wizard.
Before you begin

All the disks on the storage system must be encryption-enabled before you set up Storage Encryption on the new nodes.
About this task

You can skip this section if the system that you upgraded to does not have Storage Encryption enabled.
If you used Storage Encryption on the original system and migrated the disk shelves to the new system, you can reuse the SSL
certificates that are stored on migrated disk drives for Storage Encryption functionality on the upgraded system. However, you
should check that the SSL certificates are present on the migrated disk drives. If they are not present, you will need to obtain
them.
Note: Step 1 through Step 3 describe only the overall tasks required for configuring Storage Encryption. You need to follow the
detailed instructions for each task in the Clustered Data ONTAP Software Setup Guide.
Steps

1. Obtain and install private and public SSL certificates for the storage system and a private SSL certificate for each key
management server that you plan to use.


Requirements for obtaining the certificates and instructions for installing them are contained in the Clustered Data ONTAP
Software Setup Guide.
2. Collect the information required to configure Storage Encryption on the new nodes.
This includes the network interface name, the network interface IP address, and the IP address of the external key management
server. The required information is contained in the Clustered Data ONTAP Software Setup Guide.
3. Launch and run the Storage Encryption setup wizard, responding to the prompts as appropriate.
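As a sketch only, in clustered Data ONTAP 8.3 the wizard is typically launched with the following command; confirm the exact command and prompts in the Clustered Data ONTAP Software Setup Guide, and respond with the network and key management server information that you collected in Step 2:

cluster::> security key-manager setup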
4. If you have not done so, repeat Step 1 through Step 3 on the other new node.
After you finish

See the Clustered Data ONTAP Physical Storage Management Guide for information about managing Storage Encryption on
the updated system.

Configuring CNA ports


If a node has onboard CNA ports or a CNA card, you must check the configuration of the ports and possibly reconfigure them,
depending on how you want to use the upgraded system.
Before you begin

You must have the correct SFP+ modules for the CNA ports.
About this task

CNA ports can be configured into native Fibre Channel (FC) mode or CNA mode. FC mode supports FC initiator and FC target;
CNA mode allows concurrent NIC and FCoE traffic over the same 10-GbE SFP+ interface and supports FC target.
Note: NetApp marketing materials might use the term UTA2 to refer to CNA adapters and ports. However, the CLI and

product documentation use the term CNA.


CNA ports might be on an adapter or onboard the controller and have the following configurations:

CNA cards ordered when the controller is ordered are configured before shipment to have the personality you request.

CNA cards ordered separately from the controller are shipped with the default FC target personality.

Onboard CNA ports on new controllers are configured before shipment to have the personality you request.

However, you should check the configuration of the CNA ports on the node and change them, if necessary.
Steps

1. If you have not already done so, halt the node:


system node halt -node node_name

2. Access Maintenance mode:


boot_ontap maint

3. Check how the ports are currently configured by entering one of the following commands on one of the new nodes:
If the system you are upgrading has storage disks, enter the following command:

system node hardware unified-connect show

If the system is a V-Series system or has FlexArray Virtualization Software and is connected to storage arrays, enter the
following command:

ucadmin show

The system displays output similar to the following examples:


cluster1::> system node hardware unified-connect show
                   Current  Current    Pending  Pending  Admin
Node  Adapter      Mode     Type       Mode     Type     Status
----  -------      -------  ---------  -------  -------  ------
f-a   0e           fc       initiator  -        -        online
f-a   0f           fc       initiator  -        -        online
f-a   0g           cna      target     -        -        online
f-a   0h           cna      target     -        -        online
f-b   0e           fc       initiator  -        -        online
f-b   0f           fc       initiator  -        -        online
f-b   0g           cna      target     -        -        online
f-b   0h           cna      target     -        -        online
8 entries were displayed.

node*> ucadmin show
         Current  Current    Pending  Pending
Adapter  Mode     Type       Mode     Type     Status
-------  -------  ---------  -------  -------  ------
0e       fc       initiator  -        -        online
0f       fc       initiator  -        -        online
0g       cna      target     -        -        online
0h       cna      target     -        -        online
0e       fc       initiator  -        -        online
0f       fc       initiator  -        -        online
0g       cna      target     -        -        online
0h       cna      target     -        -        online
*>

4. If the current SFP+ module does not match the desired use, replace it with the correct SFP+ module.
5. Examine the output of the ucadmin show or system node hardware unified-connect show command and
determine whether the CNA ports have the personality you want.
6. Take one of the following actions:

If the CNA ports do not have the personality that you want, go to Step 7.

If the CNA ports have the personality that you want, go to Step 9.

7. If the CNA adapter is online, take it offline by entering one of the following commands:

If the system that you are upgrading has storage disks:

If the adapter is in initiator mode, enter the following command:

system node run -node node-name disable adapter adapter-name

If the adapter is in target mode, enter the following command:

fcp adapter modify -node node-name -adapter adapter-name -state down

If the system is a V-Series system or has FlexArray Software and is attached to storage arrays, enter the following command:

storage disable adapter adapter-name

Adapters in target mode are automatically placed offline in Maintenance mode.
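For example, with hypothetical node and adapter names substituted into the commands above (use only the form that applies to your system and to the adapter's current mode):

system node run -node node3 disable adapter 0e
fcp adapter modify -node node3 -adapter 0e -state down
storage disable adapter 0e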


8. If the current configuration does not match the desired use, enter the following commands to change the configuration as
needed:

If the system that you are upgrading has storage disks, enter the following command:

system node hardware unified-connect modify -node node-name -adapter adapter-name -mode fc|cna -type target|initiator

If the system is a V-Series system or has FlexArray Software and is attached to storage arrays, enter the following command:

ucadmin modify -m fc|cna -t initiator|target adapter-name

In either command:

-m or -mode is the personality mode, fc or 10GbE cna.

-t or -type is the FC4 type, target or initiator.

Note: You need to use an FC initiator for tape drives, FlexArray Virtualization systems, and Fabric MetroCluster. You also
need to use an FC initiator for stretch MetroCluster if you are using a FibreBridge 6500N bridge. You need to use an FC
target for SAN clients.
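For example, to change a hypothetical adapter 0e to CNA mode with an FC target personality (the node and adapter names are placeholders; use the form of the command that applies to your system):

system node hardware unified-connect modify -node node3 -adapter 0e -mode cna -type target
ucadmin modify -m cna -t target 0e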

9. Verify the settings by entering one of the following commands and examining its output:

If the system that you are upgrading has storage disks, enter the following command:

system node hardware unified-connect show

If the system is a V-Series system or has FlexArray Software and is attached to storage arrays, enter the following command:

ucadmin show
Example

The output in the following examples shows that the FC4 type of adapter 1b is changing to initiator and that the mode of
adapters 2a and 2b is changing to cna:
cluster1::> system node hardware unified-connect show
                   Current  Current    Pending  Pending    Admin
Node  Adapter      Mode     Type       Mode     Type       Status
----  -------      -------  ---------  -------  ---------  ------
f-a   1a           fc       initiator  -        -          online
f-a   1b           fc       target     -        initiator  online
f-a   2a           fc       target     cna      -          online
f-a   2b           fc       target     cna      -          online
4 entries were displayed.

node> ucadmin show
         Current  Current    Pending  Pending
Adapter  Mode     Type       Mode     Type       Status
-------  -------  ---------  -------  ---------  ------
1a       fc       initiator  -        -          online
1b       fc       target     -        initiator  online
2a       fc       target     cna      -          online
2b       fc       target     cna      -          online
node>

10. Enter the following command:


halt

The system stops at the boot environment prompt.


11. Cable the port.
12. Take one of the following actions:

If the system that you are upgrading has storage disks, reboot the system by entering the following command:

system node reboot

If the system is a V-Series system or has FlexArray Software and is attached to storage arrays, go to the Joining the new nodes
to the cluster section.

13. Repeat Step 1 through Step 12 on the other node.

Decommissioning the old system


After upgrading, you can decommission the old system through the NetApp Support Site. Decommissioning the system tells
NetApp that the system is no longer in operation and removes it from support databases.
Steps

1. Go to the NetApp Support Site at mysupport.netapp.com and log in.


2. Click the link My Installed Systems.
3. On the Installed Systems page, enter the serial number of the old system in the form and then click Go!
A new page displays information about the controller.
4. Make sure that the information about the controller is correct.

If the information about the controller is correct:

a. Select Decommission this system in the Product Tool Site drop-down menu.

b. Go to Step 5.

If the information about the controller is not correct:

a. Click the feedback link to open the form for reporting the problem.

b. Fill out and submit the form.
5. On the Decommission Form page, fill out the form and click Submit.

How to send comments about documentation and receive update notification

You can help us to improve the quality of our documentation by sending us your feedback. You can receive automatic
notification when production-level (GA/FCS) documentation is initially released or important changes are made to existing
production-level documents.
If you have suggestions for improving this document, send us your comments by email to doccomments@netapp.com. To help
us direct your comments to the correct division, include in the subject line the product name, version, and operating system.


If you want to be notified automatically when production-level documentation is released or important changes are made to
existing production-level documents, follow Twitter account @NetAppDoc.
You can also contact us in the following ways:

NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089 U.S.

Telephone: +1 (408) 822-6000

Fax: +1 (408) 822-4501

Support telephone: +1 (888) 463-8277

Trademark information
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, clustered Data ONTAP,
Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache,
FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate
ONTAP, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect,
SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL are trademarks or
registered trademarks of NetApp, Inc., in the United States, and/or other countries. A current list of NetApp trademarks is
available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
Cisco and the Cisco logo are trademarks of Cisco in the U.S. and other countries. All other brands or products are trademarks or
registered trademarks of their respective holders and should be treated as such.

