Topic
VPLEX Customer Procedures
Selections
Procedures: Manage
Management Procedures: Shutdown
Shutdown Procedures: VS6 Shutdown Procedures
VS6 Shutdown Procedures: Cluster 2 in a Metro configuration
SR Number(s): 21590212
REPORT PROBLEMS
If you find any errors in this procedure or have comments regarding this application, send email to
SolVeFeedback@dell.com
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION (“EMC”)
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-
INFRINGEMENT AND ANY WARRANTY ARISING BY STATUTE, OPERATION OF LAW, COURSE OF
DEALING OR PERFORMANCE OR USAGE OF TRADE. IN NO EVENT SHALL EMC BE LIABLE FOR
ANY DAMAGES WHATSOEVER INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL,
LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, EVEN IF EMC HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice. Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.
Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be the property of their respective owners.
version: 2.9.0.68
Page 1 of 32
Contents
Preliminary Activity Tasks .......................................................................................................4
Read, understand, and perform these tasks.................................................................................................4
Phase 3: Restart cluster..............................................................................................................................23
Task 27: Bring up the VPLEX components...................................................................................24
Task 28: Starting a PuTTY (SSH) session....................................................................................25
Task 29: Verify COM switch health ...............................................................................................26
Task 30: (Optionally) Change management server IP address ....................................................26
Task 31: Verify the VPN connectivity ............................................................................................26
Task 32: Power on the RecoverPoint cluster and enable consistency groups .............................27
Task 33: Verify the health of the clusters ......................................................................................27
Task 34: Resume volumes at cluster 2 .........................................................................................27
Task 35: Enable VPLEX Witness..................................................................................................28
Task 36: Check rebuild status and wait for rebuilds to complete ..................................................28
Task 37: Remount VPLEX volumes on hosts connected to cluster-2, and start I/O .....................29
Task 38: Restore the original rule-sets for consistency groups ....................................................29
Task 39: Restore the original rule-sets for distributed devices .....................................................29
Task 40: Restore the remote exports............................................................................................30
Task 41: Disable Call Home..........................................................................................................30
Task 42: Collect Diagnostics.........................................................................................................31
Task 43: Check status of rebuilds initiated from cluster-2, wait for these rebuilds to complete ....31
Task 44: Exit the SSH sessions, restore laptop settings, and restore cabling arrangements.......32
Preliminary Activity Tasks
This section may contain tasks that you must complete before performing this procedure.
Table 1 List of cautions, warnings, notes, and/or KB solutions related to this activity
2. This is a link to the top trending service topics. These topics may or may not be related to this activity.
This is merely a proactive attempt to make you aware of any KB articles that may be associated with
this product.
Note: There may not be any top trending service topics for this product at any given time.
VS6 Shutdown Procedure for Cluster 2 in a Metro Configuration
Before you begin
Read this entire shutdown document before beginning this procedure. Before you begin a system
shutdown on a VPLEX metro system, review this section.
Confirm that you have the following information:
IP address of the MMCS-A and MMCS-B in cluster 1 and cluster 2
IP addresses of the hosts that are connected to cluster 1 and cluster 2
(If applicable) IP addresses and login information for the RecoverPoint clusters attached to cluster 1
and cluster 2
All VPLEX login usernames and passwords.
Default usernames and passwords for the VPLEX management servers, VPlexcli, VPLEX Witness
are published in the EMC VPLEX Security Configuration Guide.
Note: The customer might have changed some usernames or passwords. Ensure that you know any
changed passwords or that the customer is available when you need the changed passwords.
CAUTION: If you are shutting down ALL the components in the SAN, shut down the
components in the following order:
5. [ ] Front-end and back-end COM switches.
Field Value
Host Name (or IP address) 128.221.252.2
Port 22
Connection type SSH
Close window on exit Only on clean exit
Note: If you need more information on setting up PuTTY, see the EMC VPLEX Configuration Guide.
3. [ ] Click Open.
4. [ ] In the PuTTY session window, at the prompt, log in as service.
5. [ ] Enter the service password.
Note: Contact the System Administrator for the service password. For more information about user
passwords, see the EMC VPLEX Security Configuration Guide.
Figure 1 Micro-USB/DisplayPort connections on MMCS-A (back view)
Note: Refer to the EMC VPLEX Security Configuration Guide for information on passwords and
default values.
3. [ ] From the shell prompt, type the following command, where hostname is a name that will replace the default name (ManagementServer) in the shell prompt in subsequent logins to the management server:
Do not use the Linux OS hostname command to make this change.
sudo /opt/emc/VPlex/tools/ipconfig/changehostname.py -n hostname
Note: ESRS Gateway does not support IPv6. If the unit is being configured with IPv6 addresses, the
management server must have an IPv4 address assigned to it apart from the IPv6 address. This is to
enable the management server to communicate with the ESRS Gateway.
6. [ ] Type the following command, and confirm that the output shows the correct
information:
VPlexcli:/> ll /management-server/ports/eth3
Name Value
-------- -------------
address 10.243.48.65
gateway 10.243.48.1
net-mask 255.255.255.0
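As a quick sanity check on the values shown above, you can verify that the reported gateway actually lies in the subnet defined by the address and net-mask. The sketch below is illustrative only (it uses Python's standard ipaddress module and runs anywhere; it does not query the management server):

```python
import ipaddress

def gateway_reachable(address: str, netmask: str, gateway: str) -> bool:
    """Return True if the gateway lies in the same subnet as the address."""
    network = ipaddress.ip_network(f"{address}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# Values from the eth3 output above
print(gateway_reachable("10.243.48.65", "255.255.255.0", "10.243.48.1"))  # True
```

If this check returns False, the address, net-mask, or gateway was entered incorrectly and the management server will not reach its gateway.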
7. [ ] Type the following command at the VPlexcli prompt, and again at the shell
prompt:
exit
8. [ ] Disconnect the monitor, keyboard, and mouse from the management server.
9. [ ] For serial or IP connections, launch PuTTY and establish a connection to the management
server’s public IP address, to confirm that the address was set correctly.
10. [ ] Proceed to the next applicable task.
CAUTION: If any step you perform creates an error message or fails to give you the expected result, consult the troubleshooting information available in the generator, or contact the EMC Support Center. Do not continue until the issue has been resolved.
3. [ ] At the shell prompt, type the following command to connect to the VPlexcli:
vplexcli
After you finish
For the rest of this procedure:
Commands typed in the CLI session to cluster 1 are tagged with this icon:
Commands typed in the CLI session to cluster 2 are tagged with this icon:
Commands typed in the LINUX shell session to cluster 1 are tagged with:
Commands typed in the LINUX shell session to cluster 2 are tagged with:
For more information on transfer-size, refer to the Administration Guide.
Procedure
2. [ ] If there are distributed devices with a transfer-size greater than 128K, do one of the following:
If all distributed devices have a transfer-size greater than 128K, type the following command to
change the transfer size for all devices:
VPlexcli:/distributed-storage/distributed-devices> set *::transfer-size 128K
If only some distributed devices have a transfer-size greater than 128K, type the following commands to change the transfer-size for each specified distributed device:
cd /distributed-storage/distributed-devices
set distributed_device_name::transfer-size 128K
3. [ ] Type the ls -al command to verify that the transfer-size value for all distributed devices is 128K or less:
VPlexcli:/distributed-storage/distributed-devices> ls -al
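When many devices are listed, comparing size strings by eye is error prone. The following sketch (an illustration, not part of the VPlexcli) converts transfer-size strings such as those shown by ls -al into bytes so they can be compared against the 128K ceiling:

```python
UNITS = {"B": 1, "K": 1024, "M": 1024**2, "G": 1024**3}

def size_to_bytes(size: str) -> int:
    """Convert a VPLEX-style size string such as '128K' or '2M' to bytes."""
    size = size.strip().upper()
    if size[-1] in UNITS:
        return int(size[:-1]) * UNITS[size[-1]]
    return int(size)  # plain byte count

def within_limit(size: str, limit: str = "128K") -> bool:
    """True when the given transfer-size does not exceed the limit."""
    return size_to_bytes(size) <= size_to_bytes(limit)

print(within_limit("128K"))  # True
print(within_limit("2M"))    # False
```

Any device for which this check fails still needs its transfer-size lowered before proceeding.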
CAUTION: Any data migration jobs initiated on cluster 1 pause when cluster 1 shuts down and resume when cluster 1 restarts.
Procedure
1. [ ] Verify whether any data migration has been initiated on cluster-1 and is ongoing.
Refer to the VPLEX Administration Guide:
'Monitor a migration's progress' for one-time data migration
'Monitor a batch migration's progress' for batch migrations
2. [ ] If any migrations are ongoing, do one of the following:
If the data being migrated must be available on the target before cluster 1 is shut down, wait for the data migrations to complete before proceeding with this procedure.
If the data being migrated does not need to be available on the target when cluster 1 is shut down,
proceed to the next Task.
Results
Migrations should be stopped and you can proceed to stop I/O on the hosts.
The VPLEX CLI Guide describes the commands used in this procedure.
Procedure
/clusters/cluster-1/virtual-volumes/remote_r0_softConfigActC1_C2_CHM_0000_vol:
Name Value
------------------ -----------------------------------------
block-count 255589
block-size 4K
cache-mode synchronous
capacity 998M
consistency-group -
expandable false
health-indications []
health-state ok
locality remote
operational-status ok
recoverpoint-usage -
scsi-release-delay 0
service-status running
storage-tier -
supporting-device remote_r0_softConfigActC1_C2_CHM_0000
system-id remote_r0_softConfigActC1_C2_CHM_0000_vol
volume-type virtual-volume
.
.
.
2. [ ] If the output returned any volumes with service-status as running, record each remote export on cluster 1 in the following table.
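Scanning a long ll listing for running volumes can be automated. The parser below is a sketch (not part of the VPlexcli) based on the listing format shown above, where each volume starts with a context line ending in ':' and its attributes follow as name/value pairs; the volume names in the sample are hypothetical:

```python
def running_volumes(ll_output: str) -> list[str]:
    """Return the names of virtual volumes whose service-status is 'running'."""
    running, current = [], None
    for line in ll_output.splitlines():
        stripped = line.strip()
        if stripped.endswith(":") and "/virtual-volumes/" in stripped:
            # A new volume context begins; remember its short name.
            current = stripped.rstrip(":").rsplit("/", 1)[-1]
        elif stripped.startswith("service-status") and current:
            if stripped.split()[-1] == "running":
                running.append(current)
            current = None
    return running

# Hypothetical excerpt in the same shape as the listing above
sample = """\
/clusters/cluster-1/virtual-volumes/remote_r0_vol:
  Name                Value
  service-status      running
/clusters/cluster-1/virtual-volumes/local_r1_vol:
  service-status      stopped
"""
print(running_volumes(sample))  # ['remote_r0_vol']
```

Each name this returns is a candidate remote export to record in the table.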
Expand the table as needed.
Remote export device from  Remote export  Local device on cluster-2  Local device
cluster-1 (source device)  capacity       (target device)            capacity
3. [ ] For each remote export identified in the previous step, identify a local device on cluster 2 based on the conditions identified in the VPLEX Administration Guide. See the "Data Migration" chapter.
Note: Best practice is to select target devices that are the same size as their source devices. This will
simplify the device migration later in this procedure.
a. Create a migration job with the source device from column 1 and the target device from column 3 of the table.
b. Give the migration job a name that distinguishes it from any existing migration jobs.
c. Monitor migration progress until it has finished.
d. Commit the completed migration.
e. Remove migration records.
7. [ ] Refer to the following procedures in the "Data Migration" chapter of the VPLEX Administration
Guide:
If there is only one device to migrate, refer to "One-time data migration."
If there are multiple devices to migrate, refer to "Batch migrations."
Task 8: Stop the I/O on the hosts that are using VPLEX volumes from cluster 2, or move I/O to
cluster 1
This task requires access to the hosts accessing the storage through cluster 2. Coordinate this activity
with host administrators if you do not have access to the hosts.
About this task
The steps to complete this task vary depending on whether the entire SAN is being shut down, and
whether certain hosts using storage on cluster 2 support I/O failover.
Procedure
1. [ ] If the entire front-end SAN will be shut down:
Shut down the hosts
Unmount the file systems
2. [ ] Determine whether each host accessing cluster 2 supports I/O failover (either manual or automatic).
If the host supports failover, perform the tasks to fail over the I/O to cluster 1.
If the host does not support failover, perform the following steps:
Task 9: Check status of rebuilds initiated from cluster-2, wait for these rebuilds to complete
CAUTION: Do not shut down cluster 2 before any rebuild initiated from cluster 2 on a
distributed device has completed. Doing so could cause data loss if I/O is initiated on
those volumes from cluster 1.
Procedure
1. [ ] Type the rebuild status command and verify that all rebuilds on distributed devices are
complete before shutting down the clusters.
VPlexcli:/> rebuild status
If rebuilds are complete, the command reports the following output:
Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.
Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds
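If you are polling rebuild status repeatedly, the completion check can be expressed mechanically. The helper below is a sketch (not part of the VPlexcli) that inspects captured rebuild status output for the two "no active" markers shown above:

```python
def rebuilds_complete(status_output: str) -> bool:
    """Return True when `rebuild status` output reports no active rebuilds.

    Migration jobs may also appear in the output; per the note above,
    they are deliberately ignored here.
    """
    return ("No active global rebuilds" in status_output
            and "No active local rebuilds" in status_output)

done = "Global rebuilds:\nNo active global rebuilds.\nLocal rebuilds:\nNo active local rebuilds"
print(rebuilds_complete(done))  # True
```

Do not proceed to shut down the cluster until this condition holds.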
Task 11: Verify the cluster health
Before continuing with the procedure, ensure that there are no issues with the health of the cluster.
Procedure
1. [ ] From the VPlexcli prompt, type the following command, and confirm that the operational and
health states appear as ok:
health-check
Results
If you do not have a RecoverPoint splitter in your environment, you can now begin shutting down call
home and other processes that are no longer necessary. If you are running RecoverPoint, follow the
RecoverPoint shutdown tasks.
1. [ ] At the VPlexcli prompt, type the following command to verify connectivity among the
directors in the cluster:
connectivity validate-local-com -c clustername
Output example showing connectivity:
VPlexcli:/> connectivity validate-local-com -c cluster-1
connectivity: FULL
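A captured copy of this output can be verified programmatically before continuing. This sketch (not part of the VPlexcli) checks for the FULL connectivity state shown above:

```python
def local_com_ok(output: str) -> bool:
    """Check `connectivity validate-local-com` output for full connectivity."""
    for line in output.splitlines():
        if line.strip().startswith("connectivity:"):
            return line.split(":", 1)[1].strip() == "FULL"
    return False

example = "VPlexcli:/> connectivity validate-local-com -c cluster-1\nconnectivity: FULL"
print(local_com_ok(example))  # True
```

Anything other than FULL means director connectivity must be investigated before proceeding.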
Task 14: Disable RecoverPoint consistency groups that use VPLEX volumes
Disabling RecoverPoint consistency groups prevents data replication errors while maintenance is performed on the system. Perform this task if there is a RecoverPoint splitter in your environment.
About this task
CAUTION: This task disrupts replication on volumes that are part of the RecoverPoint
consistency group being disabled. Ensure that you perform this task on the correct
RecoverPoint cluster and RecoverPoint consistency group.
Procedure
/recoverpoint/rpa-clusters:
RPA Host     VPLEX Cluster  RPA Site  RPA ID  RPA Version
-----------  -------------  --------  ------  -----------
10.6.210.75  cluster-2      advil     RPA 1   3.5(n.109)

/recoverpoint/rpa-clusters/10.6.210.75/volumes:
Name                    RPA Site  RP Type     RP Role  RP Group  VPLEX Group    Capacity
----------------------  --------  ----------  -------  --------  -------------  --------
RP_Repo_Vol2_vol        advil     Repository  -        -         RP_RepJournal  10G
demo_prodjournal_1_vol  advil     Journal     -        cg1       RP_RepJournal  5G
demo_prodjournal_2_vol  advil     Journal     -        cg1       RP_RepJournal  5G
demo_prodjournal_3_vol  advil     Journal     -        cg1       RP_RepJournal  5G
.
.
.
3. [ ] Log in to the RecoverPoint GUI for each RecoverPoint cluster that is attached to cluster 2.
4. [ ] Determine which RecoverPoint consistency groups the shutdown impacts.
Inspect the Splitter Properties associated with the VPLEX cluster.
Compare the serial number of the VPLEX cluster with the Splitter Name in the RecoverPoint GUI.
5. [ ] Record the names of the consistency groups.
Note: You will need this information to reconfigure the RecoverPoint consistency groups after you
complete the shutdown.
6. [ ] Disable each RecoverPoint consistency group associated with the VPLEX splitter on cluster 2.
Task 15: Power off the RecoverPoint cluster
If there is a RecoverPoint splitter in the configuration, before shutting down the VPLEX cluster, power off
the RecoverPoint cluster.
About this task
CAUTION: This step disrupts replication on all volumes that this RecoverPoint cluster replicates. Ensure that you perform this task on the correct RecoverPoint cluster.
Procedure
1. [ ] Shut down each RecoverPoint cluster that is using a VPLEX virtual volume as its repository
volume.
2. [ ] Record the name of each RecoverPoint cluster that you shut down.
Note: You need this information later in the procedure when you are powering on these RecoverPoint
clusters.
1. [ ] At the VPlexcli prompt, type the following command to browse to the call-home
context:
VPlexcli:/> cd notifications/call-home/
If the enabled property value is false, do not perform the next step.
Note whether call home was enabled or disabled. Later in the procedure, you need this information to
determine whether to enable call home again.
3. [ ] Type the following command to disable call home:
VPlexcli:/notifications/call-home> set enabled false --force
If this command worked, an ls of the context shows that the enabled state of the call home is
false.
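When verifying the change from a captured ls listing, the enabled flag can be read mechanically. This sketch (not part of the VPlexcli) assumes the two-column Name/Value attribute layout used in this procedure's listings:

```python
def call_home_enabled(ls_output: str) -> bool:
    """Parse `ls` output of the /notifications/call-home context for the enabled flag."""
    for line in ls_output.splitlines():
        parts = line.split()
        # Attribute rows have exactly two fields: name and value.
        if len(parts) == 2 and parts[0] == "enabled":
            return parts[1] == "true"
    return False

listing = "Name     Value\n-------- -----\nenabled  false"
print(call_home_enabled(listing))  # False
```

Record the value this returns; you will need it later to decide whether to re-enable call home.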
Note: It is not possible to change the detach-rule for a consistency-group with RecoverPoint enabled.
Procedure
1. [ ] From the VPlexcli prompt, type the following commands to display the consistency-
groups with RecoverPoint enabled:
VPlexcli:/> ls -p /clusters/cluster-1/consistency-groups/$d where $d::recoverpoint-enabled \== true
/clusters/cluster-1/consistency-groups/Aleve_RPC1_Local_Journal_A:
Attributes:
Name Value
-------------------- ---------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule winner cluster-2 after 5s
operational-status [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{
summary:: ok, details:: [] })]
passive-clusters []
recoverpoint-enabled true
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes Aleve_RPC1_local_Journal_A_0000_vol,
Aleve_RPC1_local_Journal_A_0001_vol,
Aleve_RPC1_local_Journal_A_0002_vol,
Aleve_RPC1_local_Journal_A_0003_vol,
Aleve_RPC1_local_Journal_A_0004_vol,
Aleve_RPC1_local_Journal_A_0005_vol,
Aleve_RPC1_local_Journal_A_0006_vol,
Aleve_RPC1_local_Journal_A_0007_vol,
Aleve_RPC1_local_Journal_A_0008_vol,
Aleve_RPC1_local_Journal_A_0009_vol, ... (45 total)
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
.
.
.
2. [ ] Record the names of all the RecoverPoint-enabled distributed consistency groups with cluster 2 as the winner.
Note: You will use this information to manually resume the consistency group on cluster 1 in Phase 3
of this procedure.
Task 18: Make cluster 1 the winner for all distributed synchronous consistency-groups with
RecoverPoint not enabled
Procedure
1. [ ] From the VPlexcli prompt on cluster 2, type the following commands to display the
consistency-groups:
cd /clusters/cluster-2/consistency-groups
ll
/clusters/cluster-2/consistency-groups:
Name                        Operational Status                Active    Passive   Detach Rule  Cache Mode
                                                              Clusters  Clusters
--------------------------  --------------------------------  --------  --------  -----------  -----------
sync_sC12_vC12_nAW_CHM      (cluster-1,{ summary:: ok,                            winner       synchronous
                            details:: [] }), (cluster-2,{                         cluster-2
                            summary:: ok, details:: [] })                         after 22s
sync_sC12_vC12_wC2a25s_CHM  (cluster-1,{ summary:: ok,                            winner       synchronous
                            details:: [] }), (cluster-2,{                         cluster-2
                            summary:: ok, details:: [] })                         after 25s
2. [ ] Record the name and rule-set of all consistency-groups with a rule-set that configures cluster 2
as winner or no-automatic-winner in the Detach Rule column in the table below. Expand the
table as needed.
Note: You will use this information when you reset the rule-set name in Phase 3 of this procedure.
3. [ ] Make cluster 1 the winner for these consistency groups to prevent the consistency group from
suspending I/O to the volumes on cluster 1.
Type the following commands, where consistency-group_name is the name of a consistency-group in the table and delay is the current delay for it:
cd consistency-group_name
set-detach-rule winner --cluster cluster-1 --delay delay
cd ..
Task 19: Make cluster 1 the winner for all distributed devices outside consistency group
Procedure
2. [ ] In the output, note all distributed devices with a rule-set that configures cluster 2 as the winner in
the Rule Set Name column.
The default rule-set that configures cluster-2 as the winner is cluster-2-detaches.
Note: Customers may have created their own rule-set with cluster-2 as a winner.
3. [ ] Record the name and the rule-set of all the distributed devices with a rule-set that configures
cluster 2 as the winner or for which there is no rule-set-name (the rule-set-name field is blank) in the
Rule-set name column in the table below.
WARNING: If a distributed device outside of a consistency group has no rule-set name, it will be
suspended upon the shutdown of the cluster. This can lead to data unavailability.
Note: You will need this information when you reset the rule-set in Phase 3 of this procedure.
4. [ ] This step varies depending on whether you are changing the rule-set for all distributed devices, or for selected distributed devices:
To change the rule-set for all distributed devices, type the following command from the
/distributed-storage/distributed-devices context:
set *::rule-set-name cluster-1-detaches
To change the rule-set for selected distributed devices, type the following command for each
device whose rule-set you want to change, where distributed_device_name is the name of a
device in the table.
cd distributed_device_name
set rule-set-name cluster-1-detaches
cd ..
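When many selected devices need the change, the per-device command sequence can be generated ahead of time for review. This sketch is for planning only (it builds the VPlexcli command lines described above rather than executing anything; the device names in the example are hypothetical):

```python
def ruleset_commands(devices: list[str], rule_set: str = "cluster-1-detaches") -> list[str]:
    """Build the cd / set rule-set-name / cd .. sequence for each device."""
    commands = []
    for name in devices:
        commands += [f"cd {name}", f"set rule-set-name {rule_set}", "cd .."]
    return commands

for cmd in ruleset_commands(["dd_0001", "dd_0002"]):
    print(cmd)
```

Reviewing the generated list before pasting it into the CLI session helps avoid changing the rule-set on the wrong device.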
Task 20: Disable VPLEX Witness
If VPLEX Witness is enabled in the configuration, disable it.
Procedure
1. [ ] From the VPlexcli prompt, type the following commands to determine if VPLEX
Witness is enabled:
cd /cluster-witness
ls
Attributes:
Name Value
------------- -------------
admin-state enabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45
Contexts:
Components
Contexts:
Components
CAUTION: Running this command on the wrong cluster will result in Data Unavailability.
CAUTION: During the cluster shutdown procedure, DO NOT disable the WAN COM on any of the VPLEX directors before executing the shutdown command (either by disabling one or more directors' WAN COM ports, or by disabling the external WAN COM links via the WAN COM switches). Disabling the WAN COM before executing the 'cluster shutdown' command triggers the VPLEX failure recovery process for volumes, which can result in the 'cluster shutdown' command hanging. Disabling the WAN COM before the cluster shutdown has not been tested and is not supported.
Procedure
1. [ ] To shut down the firmware in cluster 2, type the following commands:
VPlexcli:/> cluster shutdown --cluster cluster-2
Warning: Shutting down a VPlex cluster may cause data unavailability. Please
refer to the VPlex documentation for the recommended procedure for shutting
down a cluster. To show that you understand the impact, enter 'shutdown':
shutdown
You have chosen to shutdown 'cluster-2'. To confirm, enter 'cluster-2':
cluster-2
Status   Description
-------- -----------------
Started. Shutdown started.

Cluster cluster-2
operational-status:         not-running
transitioning-indications:
transitioning-progress:
health-state:               unknown
health-indications:         local-com: failed to validate local-com: Firmware
                            command error. communication error recently.
Task 22: Manually resume any suspended RecoverPoint-enabled consistency groups on cluster-1
2. [ ] Type the following command to ensure none of the above consistency groups require
resumption:
consistency-group summary
Procedure
1. [ ] From the VPlexcli prompt, type the following command:
exit
2. [ ] From the shell prompt, type the following commands to shut down director 2-1-A:
3. [ ] Repeat the previous step for each remaining director in cluster 2, substituting the applicable ssh
command shown in the following table:
4. [ ] Type the following command, and verify that director 2-1-A is down:
ping -b 128.221.252.67
5. [ ] Repeat the previous step for each director you shut down, substituting the applicable IP address
shown in the previous table.
1. [ ] Type the following command to shut down the management server on cluster-2:
sudo /sbin/shutdown 0
Broadcast message from root (pts/1) (Tue Feb 8 18:12:30 2010):
The system is going down to maintenance mode NOW!
Task 25: Shut down power to the VPLEX cabinet
Procedure
1. [ ] Switch the breakers on all PDU units on the cabinet to the OFF position.
2. [ ] Verify that the Power LED on the engine, between drive fillers 7 and 8, is off.
Task 26: Exit the SSH sessions, restore your laptop settings, restore the default cabling
arrangement
If you are still logged in to the VPLEX CLI sessions, log out now and restore the laptop settings. If you
used a service laptop to access the management server, use the steps in this task to restore the default
cable arrangement.
About this task
Repeat these steps on each cluster.
Procedure
1. [ ] If you changed or disabled any settings on the laptop before starting this procedure, restore the
settings.
2. [ ] The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC
cabinet or non-EMC cabinet:
EMC cabinet:
1. Disconnect the red service cable from the Ethernet port on the laptop, and remove the laptop
from the laptop tray.
2. Slide the cable back through the cable tie until only 1 or 2 inches protrude through the tie,
and then tighten the cable tie.
3. Slide the laptop tray back into the cabinet.
4. Replace the filler panel at the U20 position.
5. If you used the cabinet's spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
Non-EMC cabinet:
CAUTION: This document assumes that all existing SAN components and access to
them from VPLEX components do not change as a part of the maintenance activity. If
components or access changes, please contact EMC Customer Support to plan this
activity.
CAUTION: If you are bringing up ALL the components in the SAN, bring them up in the order described in the following steps. Ensure that each component is fully up and running before continuing with the next component, and allow a gap of 20 seconds or more before starting each component.
SAN components:
1. [ ] Storage arrays from which VPLEX is getting the I/O disks and the metavolume disks.
2. [ ] Front-end and back-end InfiniBand switches.
VPLEX components:
2. [ ] Verify that the blue Power LED on the engine is lit, as shown in the figure.
3. [ ] On dual-engine or quad-engine clusters only, verify that the Online LED on each UPS (shown in
the following figure) is illuminated (green), and that none of the other three LEDs on the UPS is
illuminated.
If the Online LED on a UPS is not illuminated, push the UPS power button, and verify that the LEDs
are as described above before proceeding to the next step.
4. [ ] Verify that the UPS AC power input status LEDs are on (solid) to confirm that each unit is getting
power from both power zones.
5. [ ] On dual-engine or quad-engine clusters only, verify that no UPS circuit breaker has triggered. If either circuit breaker on a UPS has triggered, press it to reset it.
CAUTION: If any step you perform creates an error message or fails to give you the expected result,
consult the troubleshooting information in the generator, or contact the EMC Support Center. Do not
proceed until the issue has been resolved.
Field Value
Host Name (or IP address) 128.221.252.2
Port 22
Connection type SSH
Close window on exit Only on clean exit
Note: If you need more information on setting up PuTTY, see the EMC VPLEX Configuration Guide.
3. [ ] Click Open.
4. [ ] In the PuTTY session window, at the prompt, log in as service.
5. [ ] Enter the service password.
Note: Contact the System Administrator for the service password. For more information about user
passwords, see the EMC VPLEX Security Configuration Guide.
1. [ ] At the VPlexcli prompt, type the following command to verify connectivity among the
directors in the cluster:
connectivity validate-local-com -c clustername
Output example showing connectivity:
VPlexcli:/> connectivity validate-local-com -c cluster-1
connectivity: FULL
Task 32: Power on the RecoverPoint cluster and enable consistency groups
If a RecoverPoint cluster that used VPLEX virtual volumes for its repository volume was powered off in
the Shutdown phase of this procedure, power on the RecoverPoint cluster.
About this task
Refer to the procedures in the RecoverPoint documentation.
If a RecoverPoint consistency group was disabled in the first phase of this procedure, perform this task to
enable those consistency groups. Refer to the procedures in the RecoverPoint documentation.
Procedure
1. [ ] Log in to the RecoverPoint GUI.
2. [ ] Enable each RecoverPoint consistency group that was disabled in Phase 1.
3. [ ] Repeat these steps for every RecoverPoint cluster attached to the VPLEX cluster.
1. [ ] Type the following command, and confirm that the operational and health states
appear as ok:
health-check
3. [ ] Type the following command to display whether any volumes outside of a consistency group
require resumption on cluster 2.
ll /clusters/cluster-2/virtual-volumes/
4. [ ] Type the following command to resume at the loser cluster for all distributed volumes not in
consistency groups:
device resume-link-up -f -a
1. [ ] Type the following commands to enable VPLEX Witness on cluster-1 and confirm
that it is enabled:
cluster-witness enable
cd /cluster-witness
ls
Contexts:
Components
3. [ ] Confirm Admin State is enabled and Mgmt Connectivity is ok for all three components.
4. [ ] Confirm Operational State is in-contact for clusters and clusters-in-contact for
server.
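The conditions in steps 3 and 4 can be captured as a single check. The sketch below is illustrative only: the attribute keys and the component names in the example are assumptions, since the full cluster-witness component listing is not shown in this procedure:

```python
def witness_healthy(components: dict[str, dict[str, str]]) -> bool:
    """Check cluster-witness component states per steps 3 and 4 above.

    Every component must be enabled with ok management connectivity;
    clusters must be in-contact and the server clusters-in-contact.
    """
    expected_op = {"server": "clusters-in-contact"}
    for name, attrs in components.items():
        if attrs.get("admin-state") != "enabled":
            return False
        if attrs.get("mgmt-connectivity") != "ok":
            return False
        if attrs.get("operational-state") != expected_op.get(name, "in-contact"):
            return False
    return True

# Hypothetical healthy state for the three components
state = {
    "cluster-1": {"admin-state": "enabled", "mgmt-connectivity": "ok", "operational-state": "in-contact"},
    "cluster-2": {"admin-state": "enabled", "mgmt-connectivity": "ok", "operational-state": "in-contact"},
    "server": {"admin-state": "enabled", "mgmt-connectivity": "ok", "operational-state": "clusters-in-contact"},
}
print(witness_healthy(state))  # True
```

If any component fails the check, do not proceed until VPLEX Witness connectivity is restored.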
Task 36: Check rebuild status and wait for rebuilds to complete
Rebuilds may take some time to complete while I/O is in progress. For more information on rebuilds, see the "Data Migration" chapter of the VPLEX Administration Guide.
Procedure
1. [ ] Type the rebuild status command and verify that all rebuilds are complete.
If rebuilds are complete, the command will report the following output:
Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds
Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.
Task 37: Remount VPLEX volumes on hosts connected to cluster-2, and start I/O
Remounting VPLEX volumes requires access to the hosts accessing the storage through cluster-2. Coordinate this activity with the host administrators if you do not have access to the hosts.
About this task
CAUTION: The storage is ready to service I/O. However, bringing up applications now greatly increases the time needed to complete the cluster restart. Perform this task now only if the user requires applications to be up. Best practice is to complete the cluster restart before performing this task.
Procedure
1. [ ] Perform a scan on the hosts and discover the VPLEX volumes.
2. [ ] Mount the necessary file systems on the VPLEX volumes.
3. [ ] Start the necessary I/O applications on the host.
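The host-side commands depend on the operating system. As one hedged example for a Linux host (the device path, mount point, and service name are placeholders; consult your host administrator for the correct procedure on your hosts):

# Rescan the SCSI buses to discover the VPLEX volumes (requires sg3_utils)
rescan-scsi-bus.sh
# Mount the file system on the discovered VPLEX device
mount /dev/mapper/vplex_vol_0001 /mnt/app_data
# Restart the application that uses the volume, for example:
systemctl start app.service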
Note: Skip this task if you do not want to change the rule-sets.
2. [ ] Repeat the previous step for each consistency group listed in the table.
3. [ ] To verify the rule-set name change, type the following command:
ll /clusters/cluster-1/consistency-groups/
4. [ ] In the output, confirm that all the consistency groups listed in the table are restored to their
original detach rules.
Procedure
1. [ ] Change the rule-set of distributed devices.
Note: You can change the rule-set for all distributed devices, or for selected distributed devices.
To change the rule-set for all distributed devices, type the following command from the
/distributed-storage/distributed-devices context, where original_rule-set-name is the
rule-set listed in the table:
set *::rule-set-name original_rule-set-name
To change the rule-set for selected distributed devices, type the following commands, where
distributed_device_name is the name of a device listed in the table:
cd distributed_device_name
set rule-set-name original_rule-set-name
cd ..
3. [ ] In the output, confirm that all the distributed devices listed in Table 2 are restored to the original
detach rule.
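An illustrative session for one selected device follows. The device name dd_0001 and rule-set name cluster-2-detaches are examples only; use the values recorded in the table:

VPlexcli:/distributed-storage/distributed-devices> cd dd_0001
VPlexcli:/distributed-storage/distributed-devices/dd_0001> set rule-set-name cluster-2-detaches
VPlexcli:/distributed-storage/distributed-devices/dd_0001> cd ..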
2. [ ] Perform the following operations to migrate data from cluster 1 back to cluster 2:
a. Create a migration job, using the source device from column 3 of the table and the
target device from column 1.
b. Verify that the prerequisites for device migration are met. Refer to the VPLEX Administration
Guide.
c. Monitor migration progress until it has finished.
d. Commit the completed migration.
e. Remove migration records.
Depending on the number of devices to migrate, refer to the following sections in the Data Migration
chapter of the VPLEX Administration Guide:
To migrate one device, refer to "One-time data migrations"
To migrate multiple devices, refer to "Batch migrations"
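As a hedged sketch of a one-time migration for a single device (the migration and device names are placeholders, and option syntax can vary by release; confirm the exact commands against the VPLEX Administration Guide before use):

VPlexcli:/> dm migration start --name mig_c1_to_c2 --from device_c1_0001 --to device_c2_0001
VPlexcli:/> ls /data-migrations/device-migrations/mig_c1_to_c2
VPlexcli:/> dm migration commit --force mig_c1_to_c2
VPlexcli:/> dm migration remove --force mig_c1_to_c2

Monitor the migration context with ls until the status shows complete before committing.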
1. [ ] At the VPlexcli prompt, type the following command to browse to the call-home
context:
VPlexcli:/> cd notifications/call-home/
If the enabled property value is false, do not perform the next step.
Note whether call home was enabled or disabled; you need this information later in the
procedure to determine whether to enable call home again.
3. [ ] Type the following command to disable call home:
VPlexcli:/notifications/call-home> set enabled false --force
If the command succeeds, an ls of the context shows that the enabled state of call home is
false.
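For example, the context listing would then resemble the following (illustrative output; the attribute layout varies by release):

VPlexcli:/notifications/call-home> ls
Attributes:
Name     Value
-------  -----
enabled  false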
Task 43: Check status of rebuilds initiated from cluster-2, wait for these rebuilds to complete
CAUTION: Do not shut down cluster 2 before any rebuild initiated from cluster 2 on a
distributed device has completed. Doing so could cause data loss if I/O is initiated on
those volumes from cluster 1.
Procedure
1. [ ] Type the rebuild status command and verify that all rebuilds on distributed devices are
complete before shutting down the clusters.
VPlexcli:/> rebuild status
If rebuilds are complete, the command reports the following output:
Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.
Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds
Task 44: Exit the SSH sessions, restore laptop settings, and restore cabling arrangements
If you are still logged in to the VPLEX CLI session, exit the session. If you used a service
laptop to access the management server, use the steps in this task to restore the laptop
settings and the default cable arrangement.
Procedure
1. [ ] If you changed or disabled any settings on the laptop before starting this procedure, restore the
settings.
2. [ ] The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC
cabinet or non-EMC cabinet:
EMC cabinet:
1. Disconnect the red service cable from the Ethernet port on the laptop, and remove the laptop
from the laptop tray.
2. Slide the cable back through the cable tie until only one or two inches protrude through
the tie, and then tighten the cable tie.
3. Slide the laptop tray back into the cabinet.
4. Replace the filler panel at the U20 position.
5. If you used the cabinet's spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
Non-EMC cabinet: