
VPLEX SolVe Generator

Solution for Validating your engagement

Topic
VPLEX Customer Procedures
Selections
Procedures: Manage
Management Procedures: Shutdown
Shutdown Procedures: VS6 Shutdown Procedures
VS6 Shutdown Procedures: Cluster 2 in a Metro configuration
SR Number(s): 21590212

Generated: July 30, 2021 10:39 AM GMT

REPORT PROBLEMS

If you find any errors in this procedure or have comments regarding this application, send email to
SolVeFeedback@dell.com

Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION (“EMC”)
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-
INFRINGEMENT AND ANY WARRANTY ARISING BY STATUTE, OPERATION OF LAW, COURSE OF
DEALING OR PERFORMANCE OR USAGE OF TRADE. IN NO EVENT SHALL EMC BE LIABLE FOR
ANY DAMAGES WHATSOEVER INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL,
LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, EVEN IF EMC HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice. Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.

Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be the property of their respective owners.

Publication Date: July 2021

version: 2.9.0.68

Contents
Preliminary Activity Tasks
    Read, understand, and perform these tasks

VS6 Shutdown Procedure for Cluster 2 in a Metro Configuration
    Before you begin
        Task 1: Connecting to MMCS-A
        Task 2: Connecting to the console port
    Phase 1: Shut down the cluster
        Task 3: Log in to the VPlexcli
        Task 4: Connect to the management server on cluster 2
        Task 5: Change the transfer size for all distributed-devices to 128K
        Task 6: Verify current data migration status
        Task 7: Make any remote exports available locally on cluster 1
        Task 8: Stop the I/O on the hosts that are using VPLEX volumes from cluster 2, or move I/O to cluster 1
        Task 9: Check status of rebuilds initiated from cluster-2, wait for these rebuilds to complete
        Task 10: Re-login to the management server and VPlexcli on cluster 2
        Task 11: Verify the cluster health
        Task 12: Verify COM switch health
        Task 13: Collect Diagnostics
        Task 14: Disable RecoverPoint consistency groups that use VPLEX volumes
        Task 15: Power off the RecoverPoint cluster
        Task 16: Disable Call Home
        Task 17: Identify RecoverPoint-enabled distributed consistency-groups with cluster 2 as winner
        Task 18: Make cluster 1 the winner for all distributed synchronous consistency-groups with RecoverPoint not enabled
        Task 19: Make cluster 1 the winner for all distributed devices outside consistency group
        Task 20: Disable VPLEX Witness
        Task 21: Shut down the VPLEX firmware on cluster-2
        Task 22: Manually resume any suspended RecoverPoint-enabled consistency groups on cluster-1
        Task 23: Shut down the VPLEX directors on cluster-2
        Task 24: Shut down the management server on cluster-2
        Task 25: Shut down power to the VPLEX cabinet
        Task 26: Exit the SSH sessions, restore your laptop settings, restore the default cabling arrangement
    Phase 2: Perform maintenance activities
    Phase 3: Restart cluster
        Task 27: Bring up the VPLEX components
        Task 28: Starting a PuTTY (SSH) session
        Task 29: Verify COM switch health
        Task 30: (Optionally) Change management server IP address
        Task 31: Verify the VPN connectivity
        Task 32: Power on the RecoverPoint cluster and enable consistency groups
        Task 33: Verify the health of the clusters
        Task 34: Resume volumes at cluster 2
        Task 35: Enable VPLEX Witness
        Task 36: Check rebuild status and wait for rebuilds to complete
        Task 37: Remount VPLEX volumes on hosts connected to cluster-2, and start I/O
        Task 38: Restore the original rule-sets for consistency groups
        Task 39: Restore the original rule-sets for distributed devices
        Task 40: Restore the remote exports
        Task 41: Disable Call Home
        Task 42: Collect Diagnostics
        Task 43: Check status of rebuilds initiated from cluster-2, wait for these rebuilds to complete
        Task 44: Exit the SSH sessions, restore laptop settings, and restore cabling arrangements

Preliminary Activity Tasks
This section may contain tasks that you must complete before performing this procedure.

Read, understand, and perform these tasks


1. Table 1 lists tasks, cautions, warnings, notes, and/or knowledgebase (KB) solutions that you need to
be aware of before performing this activity. Read, understand, and when necessary perform any
tasks contained in this table and any tasks contained in any associated knowledgebase solution.

Table 1 List of cautions, warnings, notes, and/or KB solutions related to this activity

000171121: To provide feedback on the content of generated procedures

2. This is a link to the top trending service topics. These topics may or may not be related to this activity.
This is merely a proactive attempt to make you aware of any KB articles that may be associated with
this product.

Note: There may not be any top trending service topics for this product at any given time.

VPLEX Top Service Topics

VS6 Shutdown Procedure for Cluster 2 in a Metro Configuration
Before you begin
Read this entire shutdown document before beginning this procedure. Before you begin a system
shutdown on a VPLEX metro system, review this section.
Confirm that you have the following information:
 IP address of the MMCS-A and MMCS-B in cluster 1 and cluster 2
 IP addresses of the hosts that are connected to cluster 1 and cluster 2
 (If applicable) IP addresses and login information for the RecoverPoint clusters attached to cluster 1
and cluster 2
 All VPLEX login usernames and passwords.
Default usernames and passwords for the VPLEX management servers, VPlexcli, VPLEX Witness
are published in the EMC VPLEX Security Configuration Guide.

Note: The customer might have changed some usernames or passwords. Ensure that you know any
changed passwords or that the customer is available when you need the changed passwords.

The following VPLEX documents are available on EMC Support Online:


 EMC VPLEX CLI Guide
 EMC VPLEX Administration Guide
 EMC VPLEX Security Configuration Guide
The following RecoverPoint documents are available on EMC Support Online:
 RecoverPoint Deployment Manager version Product Guide
 VPLEX Technical Note
The SolVe Desktop includes the following procedures referenced in this document:
 Change the management server IP address (VS6)
 Changing the Cluster Witness Server's public IP address
 Configure 3-way VPN between Cluster Witness Server and VPLEX cluster (VS6)

CAUTION: If you are shutting down ALL the components in the SAN, shut down the
components in the following order:

1. [ ] Hosts connected to the VPLEX cluster.


This enables an orderly shutdown of all applications using VPLEX virtual storage.
2. [ ] RecoverPoint, if present in the configuration.
3. [ ] Components in the cluster's cabinet, as described in this document.
4. [ ] Storage arrays from which the cluster is getting the I/O disks and the meta-volume disks.

5. [ ] Front-end and back-end COM switches.

Task 1: Connecting to MMCS-A


Procedure
1. [ ] Launch PuTTY.exe.
2. [ ] Do one of the following:
 If a previously configured session to the MMCS exists in the Saved Sessions window, click Load.
 Otherwise, start PuTTY with the following values:

Field Value
Host Name (or IP address) 128.221.252.2
Port 22
Connection type SSH
Close window on exit Only on clean exit

Note: If you need more information on setting up PuTTY, see the EMC VPLEX Configuration Guide.

3. [ ] Click Open.
4. [ ] In the PuTTY session window, at the prompt, log in as service.
5. [ ] Enter the service password.

Note: Contact the System Administrator for the service password. For more information about user
passwords, see the EMC VPLEX Security Configuration Guide.

Task 2: Connecting to the console port


To connect to the console port, do the following.
Procedure
1. [ ] Connect the monitor components to the management server using the micro-USB and
DisplayPort connections, in the following order: Monitor, keyboard, mouse.

Figure 1 Micro-USB/DisplayPort connections on MMCS-A (back view)

2. [ ] Log in to the management server as service and enter the password.

Note: Refer to the EMC VPLEX Security Configuration Guide for information on passwords and
default values.

3. [ ] From the shell prompt, type the following command, where hostname is a name that will
replace the default name (ManagementServer) in the shell prompt in subsequent logins to the
management server:
Do not use the Linux OS hostname command to make this change.
sudo /opt/emc/VPlex/tools/ipconfig/changehostname.py -n hostname

4. [ ] Type the following command, which connects you to the VPlexcli:


vplexcli

Log in with username service and password.


5. [ ] From the VPlexcli prompt, type the following command to configure the IP address of
the management server’s public Ethernet port:
To configure IPv4 or IPv6 address, enter the following command:
VPlexcli:/> management-server set-ip -i IP_address/netmask -g gateway eth3

Note: ESRS Gateway does not support IPv6. If the unit is being configured with IPv6 addresses, the
management server must have an IPv4 address assigned to it apart from the IPv6 address. This is to
enable the management server to communicate with the ESRS Gateway.
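
For example, using the illustrative values shown in the sample output of the next step (substitute the
customer's actual IPv4 address, netmask, and gateway, and confirm the address/netmask format against
the VPLEX CLI Guide), the command would look similar to:
VPlexcli:/> management-server set-ip -i 10.243.48.65/255.255.255.0 -g 10.243.48.1 eth3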

6. [ ] Type the following command, and confirm that the output shows the correct
information:
VPlexcli:/> ll /management-server/ports/eth3
Name Value
-------- -------------
address 10.243.48.65
gateway 10.243.48.1
net-mask 255.255.255.0

7. [ ] Type the following command at the VPlexcli prompt, and again at the shell
prompt:
exit
8. [ ] Disconnect the monitor, keyboard, and mouse from the management server.
9. [ ] For serial or IP connections, launch PuTTY and establish a connection to the management
server’s public IP address, to confirm that the address was set correctly.
10. [ ] Proceed to the next applicable task.

Phase 1: Shut down the cluster


This procedure is in several phases. The first is shutting down cluster 2.

CAUTION: If any step you perform creates an error message or fails to give you the
expected result, consult the troubleshooting information available in generator, or
contact the EMC Support Center. Do not continue until the issue has been resolved.

Task 3: Log in to the VPlexcli


Procedure
1. [ ] Select the VPLEX Cluster 2 session and click Load.
2. [ ] Click Open, and log in to the MMCS with username service and password.

3. [ ] At the shell prompt, type the following command to connect to the VPlexcli:
vplexcli

Task 4: Connect to the management server on cluster 2


Procedure
1. [ ] Select the VPLEX Cluster 2 session and click Load.
2. [ ] Click Open, and log in to the MMCS with username service and password.

3. [ ] At the shell prompt, type the following command to connect to the VPlexcli:
vplexcli
After you finish
For the rest of this procedure:

Commands typed in the CLI session to cluster 1 are tagged with this icon:

Commands typed in the CLI session to cluster 2 are tagged with this icon:

Commands typed in the LINUX shell session to cluster 1 are tagged with:

Commands typed in the LINUX shell session to cluster 2 are tagged with:

Task 5: Change the transfer size for all distributed-devices to 128K


About this task

For more information on transfer-size, refer to the Administration Guide.
Procedure

1. [ ] Type the ls -al command from the /distributed-storage/distributed-devices CLI context to
display the Transfer Size value for all distributed devices.
VPlexcli:/distributed-storage/distributed-devices> ls -al

Name                 Status   Operational  Health  Auto    Rule Set  Transfer
                              Status       State   Resume  Name      Size
-------------------  -------  -----------  ------  ------  --------  --------
DR1_C1-C2_1gb_dev1   running  ok           ok      true    -         2M
DR1_C1-C2_1gb_dev10  running  ok           ok      true    -         2M
DR1_C1-C2_1gb_dev11  running  ok           ok      true    -         2M
.
.
.

The transfer size must be 128K or less.

2. [ ] If there are distributed devices with a transfer size of greater than 128K, do one of
the following:
 If all distributed devices have a transfer-size greater than 128K, type the following command to
change the transfer size for all devices:
VPlexcli:/distributed-storage/distributed-devices> set *::transfer-size 128K

Note: This command may take a few minutes to complete.

 If only some distributed devices have a transfer-size greater than 128K, type the following
commands to change the transfer-size for the specified distributed device:
cd /distributed-storage/distributed-devices

set distributed_device_name::transfer-size 128K
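
For example, using one of the device names from the sample output above (substitute the name of each
distributed device whose transfer size you need to change):

set DR1_C1-C2_1gb_dev1::transfer-size 128K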

3. [ ] Type the ls -al command to verify that the transfer size value for all distributed-devices is
128K or less.
VPlexcli:/distributed-storage/distributed-devices> ls -al

Task 6: Verify current data migration status


Any current migration jobs stop during a system shutdown and resume when the cluster is restarted.
About this task

CAUTION: Any data migration job initiated on cluster 1 pauses when the cluster shuts
down and resumes when the cluster restarts.

Procedure
1. [ ] Verify whether any data migration has been initiated on cluster-1 and is ongoing.
Refer to the VPLEX Administration Guide:

 'Monitor a migration's progress' for one-time data migration
 'Monitor a batch migration's progress' for batch migrations
2. [ ] If any migrations are ongoing, do one of the following:
 If the data being migrated must be available on the target before cluster 1 is shut down, wait for the
data migrations to complete before proceeding with this procedure.
 If the data being migrated does not need to be available on the target when cluster 1 is shut down,
proceed to the next Task.
Results
Migrations should be stopped and you can proceed to stop I/O on the hosts.

Task 7: Make any remote exports available locally on cluster 1


To prevent data unavailability on remote exports on cluster 2, make the data available locally on cluster 1.
Use the following task to move data on remote-exports on cluster 2 to local devices on cluster 1.
About this task
Note: This task has no impact on I/O to hosts.

The VPLEX CLI Guide describes the commands used in this procedure.
Procedure

1. [ ] From the VPlexcli prompt on cluster 1, type the following command


ls /clusters/cluster-1/virtual-volumes/$d where $d::locality \== remote

/clusters/cluster-1/virtual-volumes/remote_r0_softConfigActC1_C2_CHM_0000_vol:
Name Value
------------------ -----------------------------------------
block-count 255589
block-size 4K
cache-mode synchronous
capacity 998M
consistency-group -
expandable false
health-indications []
health-state ok
locality remote
operational-status ok
recoverpoint-usage -
scsi-release-delay 0
service-status running
storage-tier -
supporting-device remote_r0_softConfigActC1_C2_CHM_0000
system-id remote_r0_softConfigActC1_C2_CHM_0000_vol
volume-type virtual-volume
.
.
.

2. [ ] If the output returned any volumes with a service-status of running, then for each remote export
on cluster 1:

a. Record the supporting-device in column 1 in the following table.


b. Record the capacity value in column 2.

Expand the table as needed.

Table 1 Remote export device migration source and target device

Remote export device   Remote export   Local device on cluster 1   Local device
(source device)        capacity        (target device)             capacity

3. [ ] For each remote export identified in the previous step, identify a local device on cluster 1 based
on the conditions identified in the VPLEX Administration Guide. See the "Data Migration" chapter.

Note: Best practice is to select target devices that are the same size as their source devices. This will
simplify the device migration later in this procedure.

4. [ ] Record the device Name/system-id value in column 3.


5. [ ] Record the capacity value in column 4.
You will use these cluster 1 devices to migrate data from the cluster 2 devices using device
migrations.
6. [ ] Use the following steps to migrate data from cluster 2 to cluster 1:

a. Create a migration job with the source and target devices from columns 1 and 3 in the table.
b. Give the migration job a name that distinguishes it from any existing migration jobs.
c. Monitor migration progress until it has finished.
d. Commit the completed migration.
e. Remove migration records.
7. [ ] Refer to the following procedures in the "Data Migration" chapter of the VPLEX Administration
Guide (an illustrative command sketch follows this list):
 If there is only one device to migrate, refer to "One-time data migration."
 If there are multiple devices to migrate, refer to "Batch migrations."
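
As an illustration only, the individual device-migration commands described in those guides generally
follow the pattern below. The migration name and device names are hypothetical placeholders; confirm
the exact command options in the VPLEX CLI Guide for your release before running them.

VPlexcli:/> dm migration start --name remote_export_mig1 --from source_device --to target_device
VPlexcli:/> ls /data-migrations/device-migrations/remote_export_mig1   (repeat until the status shows the migration is complete)
VPlexcli:/> dm migration commit --migrations remote_export_mig1 --force
VPlexcli:/> dm migration remove --migrations remote_export_mig1 --force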

Task 8: Stop the I/O on the hosts that are using VPLEX volumes from cluster 2, or move I/O to
cluster 1
This task requires access to the hosts accessing the storage through cluster 2. Coordinate this activity
with host administrators if you do not have access to the hosts.
About this task
The steps to complete this task vary depending on whether the entire SAN is being shut down, and
whether certain hosts using storage on cluster 2 support I/O failover.
Procedure
1. [ ] If the entire front-end SAN will be shut down:

a. Log onto the host and stop the I/O applications.


b. Depending on the supported methods of the host operating system using the VPLEX volumes, let
the I/O drain from the hosts by doing one of the following:

 Shut down the hosts
 Unmount the file systems
2. [ ] Determine whether each host accessing cluster 2 supports I/O failover (either manual or
automatic failover).
 If the host supports failover, perform the tasks to fail over the I/O to cluster 1.
 If the host does not support failover, perform the following steps:

1. Log onto the host and stop the I/O applications.


2. Depending on the supported methods of the host operating system using the VPLEX volumes,
let the I/O drain from the hosts by doing one of the following:
Shut down the hosts
Unmount the file systems

Task 9: Check status of rebuilds initiated from cluster-2, wait for these rebuilds to complete

Perform this task from cluster 1.


About this task

CAUTION: Do not shut down cluster 2 before any rebuild initiated from cluster 2 on a
distributed device has completed. Doing so could cause data loss if I/O is initiated on
those volumes from cluster 1.

Procedure
1. [ ] Type the rebuild status command and verify that all rebuilds on distributed devices are
complete before shutting down the clusters.
VPlexcli:/> rebuild status

If rebuilds are complete the command will report the following output:

Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.

Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds

Task 10: Re-login to the management server and VPlexcli on cluster 2


The VPlexcli session to cluster 2 may have timed out.
About this task
Use PuTTY (version 0.60 or later) or a similar SSH client to connect to the public IP address of the
MMCS-A on cluster 2, and log in as user service.
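
For example, from a laptop or workstation with a command-line SSH client (the address below is a
placeholder for the cluster 2 MMCS-A public IP recorded in "Before you begin"):

ssh service@<cluster-2-MMCS-A-public-IP>
vplexcli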

Task 11: Verify the cluster health
Before continuing with the procedure, ensure that there are no issues with the health of the cluster.
Procedure
1. [ ] From the VPlexcli prompt, type the following command, and confirm that the operational and
health states appear as ok:
health-check

Results
If you do not have a RecoverPoint splitter in your environment, you can now begin shutting down call
home and other processes that are no longer necessary. If you are running RecoverPoint, follow the
RecoverPoint shutdown tasks.

Task 12: Verify COM switch health


If the cluster is dual-engine or quad-engine, verify the health of the InfiniBand COM switches as follows:
Procedure

1. [ ] At the VPlexcli prompt, type the following command to verify connectivity among the
directors in the cluster:
connectivity validate-local-com -c clustername
Output example showing connectivity:
VPlexcli:/> connectivity validate-local-com -c cluster-1

connectivity: FULL

ib-port-group-3-0 - OK - All expected connectivity is present.


ib-port-group-3-1 - OK - All expected connectivity is present.

2. [ ] In the output, confirm that the cluster has full connectivity.

Task 13: Collect Diagnostics


Collect diagnostic information both before and after the shutdown.
Procedure
1. [ ] Type the following command to collect configuration information and log files from all directors
and the management server:
collect-diagnostics --minimum
The information is collected, compressed in a Zip file, and placed in the directory /diag/collect-
diagnostics-out on the management server.
2. [ ] After the log collection is complete, use FTP or SCP to transfer the logs from /diag/collect-
diagnostics-out to another computer.
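
For example, a typical SCP transfer run from the other computer would look similar to the following
(the management server address and local destination directory are placeholders, and the actual file
names in the output directory may vary):

scp service@<management-server-public-IP>:/diag/collect-diagnostics-out/* /path/to/local/logs/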

Task 14: Disable RecoverPoint consistency groups that use VPLEX volumes
Disabling RecoverPoint consistency groups prevents data replication errors while maintenance is
performed on the system. Perform this task if there is a RecoverPoint splitter in your
environment.
About this task

CAUTION: This task disrupts replication on volumes that are part of the RecoverPoint
consistency group being disabled. Ensure that you perform this task on the correct
RecoverPoint cluster and RecoverPoint consistency group.

Procedure

1. [ ] Type the ll /recoverpoint/rpa-clusters/ command to display the RecoverPoint clusters
attached to cluster 2:
VPlexcli:/> ll /recoverpoint/rpa-clusters/

/recoverpoint/rpa-clusters:
RPA Host     VPLEX Cluster  RPA Site  RPA ID  RPA Version
-----------  -------------  --------  ------  -----------
10.6.210.75  cluster-2      advil     RPA 1   3.5(n.109)

2. [ ] Type the ll /recoverpoint/rpa-clusters/ip-address/volumes command, where ip-address
is the RPA host address displayed in the previous step, to display the names of the RecoverPoint
consistency groups that use VPLEX volumes. For example:
VPlexcli:/> ll /recoverpoint/rpa-clusters/10.6.210.75/volumes/

/recoverpoint/rpa-clusters/10.6.210.75/volumes:
Name                    RPA    RP Type     RP Role  RP     VPLEX          Capacity
                        Site                        Group  Group
----------------------  -----  ----------  -------  -----  -------------  --------
RP_Repo_Vol2_vol        advil  Repository  -        -      RP_RepJournal  10G
demo_prodjournal_1_vol  advil  Journal     -        cg1    RP_RepJournal  5G
demo_prodjournal_2_vol  advil  Journal     -        cg1    RP_RepJournal  5G
demo_prodjournal_3_vol  advil  Journal     -        cg1    RP_RepJournal  5G
.
.
.

3. [ ] Login to the RecoverPoint GUI for each RecoverPoint cluster that is attached to cluster 2.
4. [ ] Determine which RecoverPoint consistency groups the shutdown impacts.
 Inspect the Splitter Properties associated with the VPLEX cluster.
 Compare the serial number of the VPLEX cluster with the Splitter Name in the RecoverPoint GUI.
5. [ ] Record the names of the consistency groups.

Note: You will need this information to reconfigure the RecoverPoint consistency groups after you
complete the shutdown.

6. [ ] Disable each RecoverPoint consistency group associated with the VPLEX splitter on cluster 2

Task 15: Power off the RecoverPoint cluster
If there is a RecoverPoint splitter in the configuration, before shutting down the VPLEX cluster, power off
the RecoverPoint cluster.
About this task

CAUTION: This step disrupts replication on all volumes that this RecoverPoint
cluster replicates. Ensure that you perform this task on the correct RecoverPoint cluster.

Procedure
1. [ ] Shut down each RecoverPoint cluster that is using a VPLEX virtual volume as its repository
volume.
2. [ ] Record the names of each RecoverPoint cluster that you shut down

Note: You need this information later in the procedure when you are powering on these RecoverPoint
clusters.

Task 16: Disable Call Home


Disable call home, to prevent call homes during the remainder of this procedure.
Procedure

1. [ ] At the VPlexcli prompt, type the following command to browse to the call-home
context:
VPlexcli:/> cd notifications/call-home/

2. [ ] Type the following command to check the value of enabled property.


VPlexcli:/notifications/call-home> ls
Attributes:
Name Value
------- -----
enabled true
Contexts:
snmp-traps

If the enabled property value is false, do not perform the next step.
Note whether call home was enabled or disabled. Later in the procedure, you need this information to
determine whether to enable call home again.
3. [ ] Type the following command to disable call home:
VPlexcli:/notifications/call-home> set enabled false --force

If this command worked, an ls of the context shows that the enabled state of the call home is
false.

Task 17: Identify RecoverPoint-enabled distributed consistency-groups with cluster 2 as winner


If your environment includes RecoverPoint, identify any RecoverPoint-enabled consistency groups that
are configured with cluster 2 as the winner
About this task

Note: It is not possible to change the detach-rule for a consistency-group with RecoverPoint enabled.

Procedure

1. [ ] From the VPlexcli prompt, type the following commands to display the consistency-
groups with RecoverPoint enabled:
VPlexcli:/> ls -p /clusters/cluster-1/consistency-groups/$d where
$d::recoverpoint-enabled \== true
/clusters/cluster-1/consistency-groups/Aleve_RPC1_Local_Journal_A:
Attributes:
Name Value
-------------------- ---------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule winner cluster-2 after 5s
operational-status [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{
summary:: ok, details:: [] })]
passive-clusters []
recoverpoint-enabled true
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes Aleve_RPC1_local_Journal_A_0000_vol,
Aleve_RPC1_local_Journal_A_0001_vol,
Aleve_RPC1_local_Journal_A_0002_vol,
Aleve_RPC1_local_Journal_A_0003_vol,
Aleve_RPC1_local_Journal_A_0004_vol,
Aleve_RPC1_local_Journal_A_0005_vol,
Aleve_RPC1_local_Journal_A_0006_vol,
Aleve_RPC1_local_Journal_A_0007_vol,
Aleve_RPC1_local_Journal_A_0008_vol,
Aleve_RPC1_local_Journal_A_0009_vol, ... (45 total)
visibility [cluster-1, cluster-2]

Contexts:
advanced recoverpoint
.
.
.

2. [ ] Record the names of all the RecoverPoint-enabled distributed consistency groups with cluster 2
as the winner.

Note: You will use this information to manually resume the consistency group on cluster 1 in Phase 3
of this procedure.

Task 18: Make cluster 1 the winner for all distributed synchronous consistency-groups with
RecoverPoint not enabled
Procedure

1. [ ] From the VPlexcli prompt on cluster 2, type the following commands to display the
consistency-groups:
cd /clusters/cluster-2/consistency-groups

ll
/clusters/cluster-2/consistency-groups:
Name                        Operational Status           Active    Passive   Detach Rule  Cache Mode
                                                         Clusters  Clusters
--------------------------  ---------------------------  --------  --------  -----------  -----------
sync_sC12_vC12_nAW_CHM      (cluster-1,{ summary:: ok,                       winner       synchronous
                            details:: [] }),                                 cluster-2
                            (cluster-2,{ summary:: ok,                       after 22s
                            details:: [] })
sync_sC12_vC12_wC2a25s_CHM  (cluster-1,{ summary:: ok,                       winner       synchronous
                            details:: [] }),                                 cluster-2
                            (cluster-2,{ summary:: ok,                       after 25s
                            details:: [] })

2. [ ] Record the name and rule-set of all consistency-groups with a rule-set that configures cluster 2
as winner or no-automatic-winner in the Detach Rule column in the table below. Expand the
table as needed.

Note: You will use this information when you reset the rule-set name in Phase 3 of this procedure.

Table 2 Consistency-groups with cluster-2 as winner or no-automatic-winner

Consistency group Detach rule

3. [ ] Make cluster 1 the winner for these consistency groups to prevent the consistency group from
suspending I/O to the volumes on cluster 1.
Type the following commands, where consistency-group_name is the name of a consistency-
group in the table and delay is the current delay for it.
cd consistency-group_name

set-detach-rule winner cluster-1 --delay delay

cd ..
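
For example, for the first consistency group shown in the sample output above, which currently
detaches to cluster-2 after 22 seconds, the commands would be similar to:

cd sync_sC12_vC12_nAW_CHM
set-detach-rule winner cluster-1 --delay 22s
cd ..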

4. [ ] Repeat the previous step for every consistency group in the table.


5. [ ] Type the following command to verify the rule-set name changes:
ll /clusters/cluster-1/consistency-groups/
6. [ ] In the output, confirm that all the consistency-groups (with known exceptions from previous
tasks) show cluster-1 as the winner

Task 19: Make cluster 1 the winner for all distributed devices outside consistency group
Procedure

1. [ ] Type the following commands to display the distributed devices:


cd /distributed-storage/distributed-devices
ll

2. [ ] In the output, note all distributed devices with a rule-set that configures cluster 2 as the winner in
the Rule Set Name column.
The default rule-set that configures cluster-2 as the winner is cluster-2-detaches.

Note: Customers may have created their own rule-set with cluster-2 as a winner.

3. [ ] Record the name and the rule-set of all the distributed devices with a rule-set that configures
cluster 2 as the winner or for which there is no rule-set-name (the rule-set-name field is blank) in the
Rule-set name column in the table below.

WARNING: If a distributed device outside of a consistency group has no rule-set name, it will be
suspended upon the shutdown of the cluster. This can lead to data unavailability.

Note: You will need this information when you reset the rule-set in Phase 3 of this procedure.

Table 3 Distributed devices with a rule-set that makes cluster 2 the winner

Distributed device name Rule-set name

4. [ ] This step varies depending on whether you are changing the rule-set for all distributed devices,
or for selected distributed devices
 To change the rule-set for all distributed devices, type the following command from the
/distributed-storage/distributed-devices context:
set *::rule-set-name cluster-1-detaches
 To change the rule-set for selected distributed devices, type the following command for each
device whose rule-set you want to change, where distributed_device_name is the name of a
device in the table.
cd distributed_device_name

set rule-set-name cluster-1-detaches

cd ..
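
For example, using one of the distributed device names from the sample output in Task 5 (substitute
the device names recorded in Table 3):

cd DR1_C1-C2_1gb_dev1
set rule-set-name cluster-1-detaches
cd ..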

5. [ ] Type the following command to verify the rule-set name changes:


ll /distributed-storage/distributed-devices
6. [ ] In the output, confirm that all distributed devices (with known exceptions) show cluster 1 as the
winner.

Task 20: Disable VPLEX Witness
If VPLEX Witness is enabled in the configuration, disable it.
Procedure

1. [ ] From the VPlexcli prompt, type the following commands to determine if VPLEX
Witness is enabled:
cd /cluster-witness
ls

Attributes:
Name Value
------------- -------------
admin-state enabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

2. [ ] Record whether VPLEX Witness is enabled or disabled.

Note: You will need this information later in the procedure.

3. [ ] If VPLEX Witness is enabled, type the following command to disable it


cluster-witness disable --force
4. [ ] Type the following command to verify that VPLEX Witness is disabled:
VPlexcli:/cluster-witness> ls
Attributes:
Name Value
------------- -------------
admin-state disabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

Task 21: Shut down the VPLEX firmware on cluster-2


An orderly shutdown of a VPLEX cluster begins with shutting down the firmware in the cluster.

CAUTION: Running this command on the wrong cluster will result in Data Unavailability.
CAUTION: During the cluster shutdown procedure, before executing the shutdown command, DO NOT
DISABLE the WAN COM on any of the VPLEX directors (by disabling one or more directors' WAN COM
ports, or by disabling the external WAN COM links via the WAN COM switches). Disabling the WAN COM
before executing the 'cluster shutdown' command triggers the VPLEX failure recovery process for
volumes, which can result in the 'cluster shutdown' command hanging. Disabling the WAN COM
before the cluster shutdown has not been tested and is not supported.

Procedure
1. [ ] To shut down the firmware in cluster 2, type the following commands:
VPlexcli:/> cluster shutdown --cluster cluster-2
Warning: Shutting down a VPlex cluster may cause data unavailability. Please
refer to the VPlex documentation for the
recommended procedure for shutting down a cluster. To show that you understand
the impact, enter
'shutdown': shutdown
You have chosen to shutdown 'cluster-2'. To confirm, enter 'cluster-2':
cluster-2

Status Description
-------- -----------------
Started. Shutdown started.

Note: It takes ~3–5 minutes for the system to shut down.

2. [ ] To display the cluster status, type the following command:


VPlexcli:/> cluster status

Cluster cluster-2
operational-status: not-running
transitioning-indications:
transitioning-progress:
health-state: unknown
health-indications:
local-com: failed to validate local-com: Firmware
command error.
communication error recently.

Task 22: Manually resume any suspended RecoverPoint-enabled consistency groups on
cluster-1

Run this task on cluster 1.


Procedure
1. [ ] If any consistency groups identified in a previous task are suspended, type the following CLI
command for each of those consistency groups to make cluster-1 the winner and allow it to service
I/O:
VPlexcli:/clusters/cluster-1/consistency-groups> choose-winner -c cluster-1 -g
async_sC12_vC2_aCW_CHM
WARNING: This can cause data divergence and lead to data loss. Ensure the other
cluster is not serving I/O for this consistency group before continuing.
Continue? (Yes/No) Yes

2. [ ] Type the following command to ensure none of the above consistency groups require
resumption:
consistency-group summary

3. [ ] Look for consistency groups with requires-resume-at-loser

Task 23: Shut down the VPLEX directors on cluster-2


About this task
Ensure that you are at the LINUX shell prompt for cluster 2.

Procedure
1. [ ] From the VPlexcli prompt, type the following command:
exit

2. [ ] From the shell prompt, type the following commands to shut down director 2-1-A:

Note: In the first command, the l in -l is a lowercase L.

ssh -l root 128.221.252.67


shutdown -P "now"
director-2-1-a:~ # shutdown -P "now"
Broadcast message from root (pts/0) (Fri Nov 18 20:04:33 2011):
The system is going down to maintenance mode NOW!

3. [ ] Repeat the previous step for each remaining director in cluster 2, substituting the applicable ssh
command shown in the following table:

Table 4 IP addresses for ssh command execution

Cluster size                             Director  ssh command                  Checkbox
---------------------------------------  --------  ---------------------------  --------
Single-engine, Dual-engine, Quad-engine  2-1-A     ssh -l root 128.221.252.67   [ ]
                                         2-1-B     ssh -l root 128.221.252.68   [ ]
Dual-engine, Quad-engine                 2-2-A     ssh -l root 128.221.252.69   [ ]
                                         2-2-B     ssh -l root 128.221.252.70   [ ]
Quad-engine                              2-3-A     ssh -l root 128.221.252.71   [ ]
                                         2-3-B     ssh -l root 128.221.252.72   [ ]
                                         2-4-A     ssh -l root 128.221.252.73   [ ]
                                         2-4-B     ssh -l root 128.221.252.74   [ ]

4. [ ] Type the following command, and verify that director 2-1-A is down:
ping -b 128.221.252.67

Note: A director can take up to four minutes to shut down completely.

Output example if the director is down:


PING 128.221.252.67 (128.221.252.67) 56(84) bytes of data.
From 128.221.252.65 icmp_seq=1 Destination Host Unreachable
From 128.221.252.65 icmp_seq=2 Destination Host Unreachable

5. [ ] Repeat the previous step for each director you shut down, substituting the applicable IP address
shown in the previous table.
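
As an optional convenience, assuming a Linux shell on the management server, a small loop such as the
following checks each director address from Table 4 in turn; adjust the address list to match the
cluster size. A director that has shut down completely stops answering pings:

# Ping each director service-network address a few times; "Destination Host
# Unreachable" or 100% packet loss indicates that the director is down.
for ip in 128.221.252.67 128.221.252.68 128.221.252.69 128.221.252.70; do
    ping -c 3 "$ip"
done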

Task 24: Shut down the management server on cluster-2


Procedure

1. [ ] Type the following command to shut down the management server on cluster-2:
sudo /sbin/shutdown 0
Broadcast message from root (pts/1) (Tue Feb 8 18:12:30 2010):
The system is going down to maintenance mode NOW!

Task 25: Shut down power to the VPLEX cabinet
Procedure
1. [ ] Switch the breakers on all PDU units on the cabinet to the OFF position.

2. [ ] Check that the Power LED on the engine, between drive fillers 7 and 8, is off.

Task 26: Exit the SSH sessions, restore your laptop settings, restore the default cabling
arrangement
If you are still logged in to the VPLEX CLI sessions, log out now and restore the laptop settings. If you
used a service laptop to access the management server, use the steps in this task to restore the default
cable arrangement.
About this task
Repeat these steps on each cluster
Procedure
1. [ ] If you changed or disabled any settings on the laptop before starting this procedure, restore the
settings.
2. [ ] The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC
cabinet or non-EMC cabinet:

 EMC cabinet:

1. Disconnect the red service cable from the Ethernet port on the laptop, and remove the laptop
from the laptop tray.
2. Slide the cable back through the cable tie until only one or two inches protrude through the tie,
and then tighten the cable tie.
3. Slide the laptop tray back into the cabinet.
4. Replace the filler panel at the U20 position.
5. If you used the cabinet's spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
 Non-EMC cabinet:

1. Disconnect the red service cable from the laptop.


2. Coil the cable into a loose loop and hang it in the cabinet. (Leave the other end connected to
the VPLEX management server.)

Phase 2: Perform maintenance activities


Once all components of the cluster are down, you can perform maintenance activities before restarting it.

CAUTION: This document assumes that all existing SAN components and access to
them from VPLEX components do not change as a part of the maintenance activity. If
components or access changes, please contact EMC Customer Support to plan this
activity.

Perform the activity that required the shutdown of the cluster.

Phase 3: Restart cluster


This procedure describes the tasks to bring up the cluster after it has been shut down.
The procedure assumes that the cluster was shut down by following the tasks described earlier in this
document.
Order to restart hosts, clusters, and other components

CAUTION: If you are bringing up ALL the components in the SAN, bring them up in the
order described in the following steps. While you are bringing up all the components in
that order, ensure that the previous component is fully up and running before continuing
with the next component. Ensure that there is a gap of at least 20 seconds before starting
each component.

SAN components:

1. [ ] Storage arrays from which VPLEX is getting the I/O disks and the metavolume disks.
2. [ ] Front-end and back-end InfiniBand switches.

VPLEX components:

1. [ ] Components in the VPLEX cabinet, as described in this document.


2. [ ] (If applicable) RecoverPoint.
3. [ ] Hosts connected to the VPLEX cluster.

Task 27: Bring up the VPLEX components


Procedure
1. [ ] Switch each breaker switch on the PDU units to the ON position for all receptacle groups that
have power cables connected to them.

2. [ ] Verify that the blue Power LED on the engine is illuminated, as shown in the figure.

3. [ ] On dual-engine or quad-engine clusters only, verify that the Online LED on each UPS (shown in
the following figure) is illuminated (green), and that none of the other three LEDs on the UPS is
illuminated.

If the Online LED on a UPS is not illuminated, push the UPS power button, and verify that the LEDs
are as described above before proceeding to the next step.

4. [ ] Verify that the UPS AC power input status LEDs are on (solid) to confirm that each unit is getting
power from both power zones.

5. [ ] On dual-engine or quad-engine clusters only, verify that no UPS circuit breaker has triggered. If
either circuit breaker on a UPS has triggered, press it to reseat it.

CAUTION: If any step you perform creates an error message or fails to give you the expected result,
consult the troubleshooting information in the generator, or contact the EMC Support Center. Do not
proceed until the issue has been resolved.

Task 28: Starting a PuTTY (SSH) session


Procedure
1. [ ] Launch PuTTY.exe.
2. [ ] Do one of the following:
 If a previously configured session to the MMCS exists in the Saved Sessions window, click Load.
 Otherwise, start PuTTY with the following values:

Field Value
Host Name (or IP address) 128.221.252.2
Port 22
Connection type SSH
Close window on exit Only on clean exit

Note: If you need more information on setting up PuTTY, see the EMC VPLEX Configuration Guide.

3. [ ] Click Open.
4. [ ] In the PuTTY session window, at the prompt, log in as service.
5. [ ] Enter the service password.

Note: Contact the System Administrator for the service password. For more information about user
passwords, see the EMC VPLEX Security Configuration Guide.

Task 29: Verify COM switch health


If the cluster is dual-engine or quad-engine, verify the health of the InfiniBand COM switches as follows:
Procedure

1. [ ] At the VPlexcli prompt, type the following command to verify connectivity among the
directors in the cluster:
connectivity validate-local-com -c clustername
Output example showing connectivity:
VPlexcli:/> connectivity validate-local-com -c cluster-1

connectivity: FULL

ib-port-group-3-0 - OK - All expected connectivity is present.


ib-port-group-3-1 - OK - All expected connectivity is present.

2. [ ] In the output, confirm that the cluster has full connectivity.

Task 30: (Optionally) Change management server IP address


If the IP address of the management server in cluster 1 has changed, then follow the procedure in the
SolVe Desktop titled "Change the management server IP address (VS6)".

Task 31: Verify the VPN connectivity


Procedure
1. [ ] At the VPlexcli prompt, type the following command to confirm that the VPN tunnel has been
established, and that the local and remote directors are reachable from management server-1:
vpn status
2. [ ] In the output, confirm that IPSEC is UP:
VPlexcli:/> vpn status
Verifying the VPN status between the management servers...
IPSEC is UP
Remote Management Server at IP Address 10.31.25.27 is reachable
Remote Internal Gateway addresses are reachable

3. [ ] Repeat the steps on cluster 2

Task 32: Power on the RecoverPoint cluster and enable consistency groups
If a RecoverPoint cluster that used VPLEX virtual volumes for its repository volume was powered off in
the Shutdown phase of this procedure, power on the RecoverPoint cluster.
About this task
Refer to the procedures in the RecoverPoint documentation.
If a RecoverPoint consistency group was disabled in the first phase of this procedure, perform this task to
enable those consistency groups. Refer to the procedures in the RecoverPoint documentation.
Procedure
1. [ ] Login to the RecoverPoint GUI.
2. [ ] Enable each RecoverPoint consistency group that was disabled in Phase 1.
3. [ ] Repeat these steps for every RecoverPoint cluster attached to the VPLEX cluster.

Task 33: Verify the health of the clusters


After all maintenance activities, check the cluster health on both clusters.
Procedure

1. [ ] Type the following command, and confirm that the operational and health states
appear as ok:
health-check

2. [ ] Repeat the previous step on cluster 2.

Task 34: Resume volumes at cluster 2


If the consistency groups and the distributed storage not in consistency groups have auto-resume set to
false, then those volumes will not automatically resume when you restore cluster 2.
About this task
At the cluster 2 VPLEX CLI, follow these steps to resume volumes that do not have auto-resume set
to true:
Procedure
1. [ ] Type the following command to display whether any consistency groups require resumption:
consistency-group summary

Look for any consistency groups with requires-resume-at-loser.


2. [ ] Type the following commands for each consistency group that has
requires-resume-at-loser:
cd /clusters/cluster-2
consistency-group resume-at-loser -c cluster -g consistency-group
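
For example, using the consistency group name shown in Task 22 (replace the group name with each
consistency group that reports requires-resume-at-loser):

consistency-group resume-at-loser -c cluster-2 -g async_sC12_vC2_aCW_CHM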

3. [ ] Type the following command to display whether any volumes outside of a consistency group
require resumption on cluster 2.
ll /clusters/cluster-2/virtual-volumes/

4. [ ] Type the following command to resume at the loser cluster for all distributed volumes not in
consistency groups:

device resume-link-up -f -a

Task 35: Enable VPLEX Witness


If VPLEX Witness is deployed, and was disabled in Phase 1, complete this task to re-enable VPLEX
Witness.
Procedure

1. [ ] Type the following commands to enable VPLEX Witness on cluster-1 and confirm
that it is enabled:
cluster-witness enable
cd /cluster-witness
ls

Output example if VPLEX witness is enabled:


Attributes:
Name Value
------------- -------------
admin-state enabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

2. [ ] Confirm VPLEX witness is in contact with both clusters:


VPlexcli:/> ll cluster-witness/components/
/cluster-witness/components:
Name ID Admin State Operational State Mgmt Connectivity
--------- -- ----------- ------------------- -----------------
cluster-1 1 enabled in-contact ok
cluster-2 2 enabled in-contact ok
server - enabled clusters-in-contact ok

3. [ ] Confirm Admin State is enabled and Mgmt Connectivity is ok for all three components.
4. [ ] Confirm Operational State is in-contact for clusters and clusters-in-contact for
server.

Task 36: Check rebuild status and wait for rebuilds to complete
Rebuilds may take some time to complete while I/O is in progress. For more information on rebuilds,
please check the VPLEX Administration Guide "Data Migration" chapter.
Procedure

1. [ ] Type the rebuild status command and verify that all rebuilds are complete.
If rebuilds are complete, the command will report the following output:
Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds

Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.

Task 37: Remount VPLEX volumes on hosts connected to cluster-2, and start I/O
Remounting VPLEX volumes requires access to the hosts accessing the storage through cluster-2. You
may need to coordinate this task with the host administrators if you do not have access to the
hosts.
About this task

CAUTION: The storage is ready to service I/O. However, bringing up applications now
greatly increases the time to complete cluster restart. Perform this task now only if the
user requires applications to be up. Best practice is to complete cluster restart before
completing this task.

Procedure
1. [ ] Perform a scan on the hosts and discover the VPLEX volumes.
2. [ ] Mount the necessary file systems on the VPLEX volumes.
3. [ ] Start the necessary I/O applications on the host.

Task 38: Restore the original rule-sets for consistency groups


If you changed the rule-sets for synchronous consistency groups in Phase 1, restore the original
winning cluster (recorded in Phase 1) for all distributed synchronous consistency groups.
About this task
To change the rule-sets to their original value, follow these steps.

Note: Skip this task if you do not want to change the rule-sets.

See Table 2 in Phase 1 for the list of consistency groups.


Procedure
1. [ ] To restore the original rule-sets, type the following commands, where consistency-
group_name is the name of a consistency-group, original rule-set is the rule set in Phase 1
and delay is the delay set for the consistency-group:
cd consistency-group_name
set-detach-rule original rule-set --delay delay
cd ..
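
For example, if Table 2 recorded the original detach rule "winner cluster-2 after 25s" for the group
sync_sC12_vC12_wC2a25s_CHM, the restore commands would be similar to:

cd sync_sC12_vC12_wC2a25s_CHM
set-detach-rule winner cluster-2 --delay 25s
cd ..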

2. [ ] Repeat the previous step for each consistency group listed in the table.
3. [ ] To verify the rule-set name change, type the following command:
ll /clusters/cluster-1/consistency-groups/
4. [ ] In the output, confirm that all the consistency groups listed in the table are restored to their
original detach rules.

Task 39: Restore the original rule-sets for distributed devices


If you changed the rule-set name for distributed devices in Phase 1 to make cluster 1 the winner, then
make the original winner cluster the winner for all distributed devices outside of consistency groups.
About this task
Perform the following steps to change the rule-set to its original value:

Procedure
1. [ ] Change the rule-set of distributed devices.

Note: You can change the rule-set for all distributed devices, or for selected distributed devices.

 To change the rule-set for distributed devices, type the following command from the
/distributed-storage/distributed-devices context:
set *::rule-set-name original rule-set-name

 To change the rule-set for selected distributed devices, type the following commands, where
distributed_device_name is the name of a device listed in the table.
cd distributed_device_name
set rule-set-name original rule-set-name
cd ..

2. [ ] To verify the rule-set name changes, type the following command:


ll /distributed-storage/distributed-devices

3. [ ] In the output, confirm that all the distributed devices listed in Table 3 are restored to the original
detach rule.

Task 40: Restore the remote exports


If cluster 2 remote exports were moved to cluster 1 before shutdown in Phase 1, use the steps in this task
to restore them.
Procedure
1. [ ] Refer to "Make any remote exports available locally on cluster 1" for the names of the cluster 1
devices used for data migration from cluster 2 before cluster 2 was shutdown.

2. [ ] Perform the following operations to migrate data from cluster 1 back to cluster 2:

a. Create a migration job with the source and target devices from table columns 3 and 1,
respectively.
b. Verify that the prerequisites for device migration are met. Refer to the VPLEX Administration
Guide.
c. Monitor migration progress until it has finished.
d. Commit the completed migration.
e. Remove migration records.
Depending on the number of devices to migrate, refer to the following sections in the Data Migration
chapter of the VPLEX Administration Guide:
 To migrate one device, refer to "One-time data migrations"
 To migrate multiple devices, refer to "Batch migrations"

Task 41: Disable Call Home


Disable call home, to prevent call homes during the remainder of this procedure.
Procedure

1. [ ] At the VPlexcli prompt, type the following command to browse to the call-home
context:
VPlexcli:/> cd notifications/call-home/

2. [ ] Type the following command to check the value of enabled property.


VPlexcli:/notifications/call-home> ls
Attributes:
Name Value
------- -----
enabled true
Contexts:
snmp-traps

If the enabled property value is false, do not perform the next step.
Note whether call home was enabled or disabled. Later in the procedure, you need this information to
determine whether to enable call home again.
3. [ ] Type the following command to disable call home:
VPlexcli:/notifications/call-home> set enabled false --force

If this command worked, an ls of the context shows that the enabled state of the call home is
false.

Task 42: Collect Diagnostics


Collect diagnostic information both before and after the shutdown.
Procedure
1. [ ] Type the following command to collect configuration information and log files from all directors
and the management server:
collect-diagnostics --minimum
The information is collected, compressed in a Zip file, and placed in the directory /diag/collect-
diagnostics-out on the management server.
2. [ ] After the log collection is complete, use FTP or SCP to transfer the logs from /diag/collect-
diagnostics-out to another computer.

Task 43: Check status of rebuilds initiated from cluster-2, wait for these rebuilds to complete

Perform this task from cluster 1.


About this task

CAUTION: Do not shut down cluster 2 before any rebuild initiated from cluster 2 on a
distributed device has completed. Doing so could cause data loss if I/O is initiated on
those volumes from cluster 1.

Procedure
1. [ ] Type the rebuild status command and verify that all rebuilds on distributed devices are
complete before shutting down the clusters.

VPlexcli:/> rebuild status

If rebuilds are complete the command will report the following output:

Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.

Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds

Task 44: Exit the SSH sessions, restore laptop settings, and restore cabling arrangements
If you are still logged in to the VPLEX CLI session, restore the laptop settings. If you used a service
laptop to access the management server, use the steps in this task to restore the default cable
arrangement.
Procedure
1. [ ] If you changed or disabled any settings on the laptop before starting this procedure, restore the
settings.
2. [ ] The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC
cabinet or non-EMC cabinet:
 EMC cabinet:

1. Disconnect the red service cable from the Ethernet port on the laptop, and remove the laptop
from the laptop tray.
2. Slide the cable back through the cable tie until only one or two inches protrude through the tie,
and then tighten the cable tie.
3. Slide the laptop tray back into the cabinet.
4. Replace the filler panel at the U20 position.
5. If you used the cabinet's spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
 Non-EMC cabinet:

1. Disconnect the red service cable from the laptop.


2. Coil the cable into a loose loop and hang it in the cabinet. (Leave the other end connected to
the VPLEX management server.)
