Do Not Distribute
Troubleshooting Workshop for Partners
Version 2.3
Exercise Guide
GS Learning and Performance
NetApp Internal Only
Copyright Information
Copyright © 2013 NetApp. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this book covered by copyright may be reproduced in any form or by any means—graphic,
electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval
system—without prior written permission of the copyright owner.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products or materials described
herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product or
materials does not convey a license under any patent rights, trademark rights, copyrights or any other
intellectual property rights of NetApp, Inc.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or
pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software
clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark Information
NetApp, the NetApp logo, DataFabric, FAServer, FilerView, gFiler, MultiStore, NearStore, NetCache, SecureShare,
SnapManager, SnapMirror, SnapMover, SnapRestore, SnapVault, SyncMirror, Data ONTAP, SnapLock, SnapDrive,
and WAFL are registered trademarks of NetApp, Inc. in the United States and other countries. gFiler, Network
Appliance, SnapCopy, Snapshot, and The Evolution of Storage are trademarks of NetApp, Inc. in the United States
and/or other countries and registered trademarks in other countries. ApplianceWatch, BareMetal, Bolt design,
Camera-to-Viewer, ComplianceClock, ComplianceJournal, ContentDirector, EdgeFiler, FlexClone, FlexVol, FPolicy,
HyperSAN, InfoFabric, LockVault, Manage ONTAP, NOW, NOW NetApp on the Web, ONTAPI, RAID-DP, RoboCache,
RoboFiler, SecureAdmin, Serving Data by Design, SharedStorage, Simulate ONTAP, Simplicore, Smart SAN,
SnapCache, SnapDirector, SnapFilter, SnapMigrator, SnapSuite, SnapValidator, SohoFiler, vFiler, VFM, VFM Virtual
File Manager, VPolicy, and Web Filer are trademarks of NetApp, Inc. in the United States and other countries.
NetApp Availability Assurance and NetApp ProTech Expert are service marks of NetApp, Inc. in the United States.
Spinnaker Networks, the Spinnaker Networks logo, SpinAccess, SpinCluster, SpinFS, SpinHA, SpinMove, and
SpinServer are registered trademarks of Spinnaker Networks, LLC in the United States and/or other countries.
SpinAV, Spin Manager, SpinMirror, SpinRestore, SpinShot, and SpinStor are trademarks of Spinnaker Networks, LLC
in the United States and/or other countries.
Apple is a registered trademark and QuickTime is a trademark of Apple Computer, Inc. in the United States and/or
other countries.
Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the United
States and/or other countries.
IBM is a registered trademark and AIX is a trademark of IBM Corporation in the United States and/or other
countries.
All other brands or products are trademarks or registered trademarks of their respective holders and should be
treated as such.
NetApp is a licensee of the CompactFlash and CF Logo trademarks.
Table of Contents
Module 0 Exercises – Connectivity
Descriptions and Instructions
The Storage Controller is online and can serve data, but you cannot get into Data ONTAP itself.
Get into the Storage Controller and turn on the various ways to connect to it.
o Hint: It is easier to use your Linux host for this lab. Run through the ways you can connect to Data ONTAP, and once you are in, turn on the options that would make logging in to the Storage Controller easier.
Use the rlm status command to find out the RLM IP address for any labs that require a console display.
A very handy tool for your Linux host is nmap (Network Mapper). Nmap is a port scanner that can be run against a host to determine what ports are available for use. To install this tool, follow these simple steps:
Log in as root to your Linux host.
Enter the command 'yum install nmap'.
When prompted to install the utility, enter 'y'.
Once nmap is installed, you can test it against your Storage Controller. It is a good troubleshooting tool and may assist you with this lab, as well as with future labs in this course.
You can enter ‘nmap’ on your Linux host to see the various switches and options that are
available.
Here is some example output using the nmap utility against a Storage Controller:
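The captured nmap output is not reproduced here. As a rough illustration of what a port scanner does, here is a minimal sketch in Python; the port list mirrors services a Storage Controller typically exposes, and any real addresses in your lab will differ:

```python
# Minimal sketch of a TCP port scan, assuming a reachable host.
# The port list below is illustrative, not your lab values.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# ssh, telnet, http, rpcbind, https, cifs, nfs
controller_ports = [22, 23, 80, 111, 443, 445, 2049]
```

nmap itself does far more (service detection, UDP scans, OS fingerprinting); this sketch only checks whether a TCP connection is accepted on each port.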
Hardware & Software Lab 1 – Reverting Data ONTAP
Notes: Revert the Data ONTAP running on your Storage Controller from 7.3.5.1 to 7.2.7.
Steps Actions
on your Storage Controller of 7.2.7.
http://10.61.77.165/ONTAP/727_setup_q.exe
2. Issue the download command to write the new Data ONTAP to disk
and flash.
_________________________________________________________
_________________________________________________________
_________________________________________________________
_________________________________________________________
5. At your loader prompt, boot to the backup image of Data ONTAP to get
your Storage Controller back into working condition. Use the following
command:
LOADER> boot_backup
Product: NetApp Data ONTAP 7.3.5.1, FAS3140
Description: Data ONTAP needs a minor revert.
Time estimate: 30 minutes
Notes: Revert the Data ONTAP running on your Storage Controller from 7.3.5.1 to 7.3.3P4.
Steps Actions
on your Storage Controller of 7.3.3P4
http://10.61.77.165/ONTAP/733P4_setup_e.exe
2. Issue the download command to write the new Data ONTAP to disk
and flash.
_________________________________________________________
_________________________________________________________
_________________________________________________________
_________________________________________________________
5. At your loader prompt, boot to the backup image of Data ONTAP to get
your Storage Controller back into working condition.
6. Did the boot_backup work? Why/Why not?
_________________________________________________________
_________________________________________________________
_________________________________________________________
_________________________________________________________
Product: NetApp Data ONTAP <BROKEN!>, FAS3140
Description: Netboot the Storage Controller
Time estimate: 30 minutes
Notes: Netboot the Storage Controller on 7.3.3P4, and install the correct version of Data ONTAP.
Steps Actions
Storage Controller on the correct version of Data ONTAP 7.3.3P4.
2. When you arrive at the 1-5 menu, use option 1 to boot normally.
http://10.61.77.165/ONTAP/733P4_setup_q.exe
4. Issue the download command to write the new Data ONTAP to disk
and flash.
cf enable
High Availability Lab 1 – Cluster is Down
For some reason, one of your Storage Controllers is down, and the other Storage Controller is not taking over.
Notes: We need both of these systems back up, and we need the clustering re-enabled.
Hint: You may want to review your Student Guide’s Module on High Availability,
and consult the NetApp Knowledge Base for answers.
Descriptions and instructions
1) Using your Linux host, try to ping the IP address of your Storage Controller.
Attempt to telnet to the Storage Controller.
2) On your Classroom PC, open a CMD window and ping the IP address of your Storage Controller. Attempt to telnet to the Storage Controller.
Use your RLM to log into your system and determine what the issues are. Note: when you are finished, log out of the RLM, and verify that you can telnet to the Storage Controller once again.
CIFS Lab 1 – Oplocks
Notes:
Create a share.
Mount the share on your Windows host and copy some files into it, while monitoring the CIFS ops that are being done.
Copy those files again into the share, monitor the CIFS ops, and compare the results.
Steps Actions
aggregate on your Storage Controller called aggr1, and a 1Gb volume
from aggr1 called cifs1.
Once there, perform the following actions:
Copy the c:\Backup_Folder to your CIFS share
Observe the CIFS Operations on the Storage Controller. What was the
maximum number of CIFS Ops per second during this copy procedure?
_________________________________________________________
7. TELL YOUR INSTRUCTOR THAT YOU HAVE REACHED THIS POINT.
8. Once your instructor has given the nod, log in to the Storage Controller again.
10. Observe the CIFS Operations on the Storage Controller. What was the
maximum number of CIFS Ops per second during this copy procedure?
_________________________________________________________
What is the difference?
_________________________________________________________
_________________________________________________________
Note that the differences may not be very evident with a small, single copy procedure. However, multiple copies at various sizes will show a marked difference in the amount of network traffic overhead that is needed with and without oplocks enabled.
Discuss this in class.
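The comparison above boils down to sampling a cumulative operations counter and taking the largest per-interval delta. A small sketch of that arithmetic, with made-up counter values:

```python
# Sketch: peak ops/sec from periodic samples of a cumulative
# operations counter (the style of figure sysstat reports).
# The counter values below are hypothetical, for illustration only.

def peak_ops_per_sec(samples, interval_sec=1):
    """samples: cumulative op counts taken every `interval_sec` seconds."""
    deltas = [(b - a) / interval_sec for a, b in zip(samples, samples[1:])]
    return max(deltas) if deltas else 0

counters = [0, 120, 410, 950, 1010]   # hypothetical cumulative CIFS ops
peak = peak_ops_per_sec(counters)     # largest one-interval delta
```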
Product: NetApp Data ONTAP 7.3.5.1, FAS3140
Description: Virus Scanning Example
Time estimate: 30 minutes
Steps Actions
1. Create a CIFS share, calling it whatever you wish. Mount your new CIFS share on your Windows host, and copy your c:\Backup_Folder to your CIFS share. Note how long it takes to copy.
filer1> vscan on
filer1> vscan
Lots of extensions, huh? We're going to add to that list, just for practice.
Note that at the end it will list the number of files scanned and scan failures. They should both be 0.
3. You are going to add the same .MP3 extension for scanning that is in your Student Guide. The steps are:
Now go to your Windows host, and go back to the CIFS share that you
created and mounted. The Backup_Folder is still there, and we’re going
to overwrite it. Copy your c:\Backup_Folder to your CIFS Share. Select
‘yes’ and ‘overwrite’ to all questions that are asked.
5. Note that it took just a little bit longer to copy. CTRL-C the sysstat command on the Storage Controller. Notice the large increase in network activity?
Use the vscan command again. You may notice that the number of files scanned is now over 200.
6. LET YOUR INSTRUCTOR KNOW YOU HAVE REACHED THIS POINT!
________________________________________________________
________________________________________________________
So how can we get it to work otherwise? Try to figure it out, and write
your answer here.
________________________________________________________
________________________________________________________
Descriptions and instructions
You log in to your Linux system and notice that there are 2 mounts that have been created to your Storage Controller. They are:
If you look, you'll notice that the /vol0 and /mnt1 mount points are listed as stale mounts. You need to get your /mnt1 and /vol0 mount points working again. Troubleshoot the problem and bring them back online.
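As a side note, a stale handle can also be detected programmatically: stat() on a stale NFS mount fails with ESTALE. A minimal sketch (the mount point names are this lab's, but the check itself is generic):

```python
# Sketch: detecting a stale NFS mount point. stat() on a stale
# handle fails with errno ESTALE; any other failure (or success)
# means the mount is not stale.
import errno
import os

def is_stale(path):
    """True only if stat() fails with ESTALE (stale NFS file handle)."""
    try:
        os.stat(path)
        return False
    except OSError as e:
        return e.errno == errno.ESTALE

mounts_to_check = ["/mnt1", "/vol0"]   # the lab's mount points
```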
Product: NetApp Data ONTAP 7.3.3P4, FAS3140
Log in to your Linux box and try to change into your /mnt1 directory. It will fail with a Permission Denied! Something tells you that perhaps the user mapping isn't working correctly (hint hint).
Go back to your Linux host and see if you can now change back into your /mnt1 mount point.
Product: NetApp Data ONTAP 7.3.3P4, FAS3140
Notes: For some reason, you cannot get to your /mnt2 directory anymore. It was just an NFS volume on your Storage Controller that you created earlier. Your task here is to bring it back online once again.
Performance Lab 1 – Filer Performance Suffering
Note that this lab does not need to be solved. It is meant for you to understand the tools that are available and how to frame a case.
SAN – iSCSI Lab 0 - Provision Storage and License iSCSI
Step Action
1. Create two 10G Flexible Volumes on aggr1 named “vol1” and “vol2” by issuing the
following commands:
filerx> vol create vol1 aggr1 10G
filerx> vol create vol2 aggr1 10G
people will not have in-depth experience with this tool, so the following steps were
created for you to proceed. To launch SnapDrive, go to:
Start -> Administrative Tools -> Computer Management.
SnapDrive is listed under Storage. Selecting it will start the service. Under
SnapDrive open up your computer name listed there, and click on Disks. When
Disks is open, right click on the name “Disks” and select “Create Disk”
When complete, your SnapDrive screen should look similar to the picture below.
Note the picture is an example only; your LUN will look different.
WHEN YOU ARE FINISHED, LET YOUR INSTRUCTOR KNOW.
Product: Data ONTAP 7.3.3P4, FAS3140
Symptom: Windows iSCSI LUN is offline
Time estimate: 30 minutes
Notes: Looking at your Windows host, you see that the mapped LUN you just created with SnapDrive is offline. Not only that, but SnapDrive itself does not seem to be working. The objective is to get the LUN back online quickly, so you are tasked with the following:
Troubleshoot the Storage Controller to find out why things are not working correctly.
Forget about getting SnapDrive working and use the iSCSI service itself to attach back to the Storage Controller and re-map your LUN.
After that is completed, bonus points go to those who can figure out why SnapDrive isn't working and get it functional again.
Product: Data ONTAP 7.3.3P4, FAS3140
Symptom: Your iSCSI map is once again no longer online
Time estimate: 15 minutes
Notes:
Hint: After you have fixed the problem, you will more than likely have to refresh the iSCSI session in the iSCSI Initiator Control Panel for your Windows host to allow access again.
1. Telnet into your Storage Controller using the remote console. The IP and port information
are on the IP handout.
2. The Storage Controller FC adapters are required to be in 'target' mode in order to connect to the fabric correctly. Sometimes on newly installed Storage Controllers, the ports are all set to 'initiator'. The following steps are included in this lab in case your classroom Storage Controller is not set up correctly. Note that if you run the 'fcadmin config' command and see that adapters 0c and 0d are set up as 'target', you can skip this section and go on to the next lab, "Create and connect to a LUN".
In this case ports 0c and 0d are configured as initiators and need to be reconfigured. If ports 0c and 0d are targets and online, nothing else needs to be done.
Note: If the Status reads ‘offline’ for adapters 0c and 0d, make sure that FCP has been
enabled.
3. Issue the following command:
filerx> reboot
When prompted to “Press CTRL-C for special boot menu”, press CTRL-C
4. When the prompt shown below appears select 5 for Maintenance mode boot.
Starting boot on Mon Mar 5 13:39:13 GMT 2007
(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
(4a) Same as option 4, but create a flexible root volume.
(5) Maintenance mode boot.
Selection (1-5)? 5
5. At the prompt issue the following commands:
*> fcadmin offline 0c
*> fcadmin offline 0d
*> fcadmin config -t target 0c
*> fcadmin config -t target 0d
6. Reboot the filer by issuing the following commands:
*> halt
LOADER> bye
7. Check your work by issuing the following command:
filerx> fcadmin config
The output should look like this:
Adapter Type State Status
---------------------------------------------------
0a initiator UNDEFINED online
0b initiator UNDEFINED online
0c target CONFIGURED online
0d target CONFIGURED online
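If you want to script this check instead of eyeballing it, the table is easy to parse. A sketch that flags adapters 0c and 0d when their Type column is not 'target'; the sample text below mimics the pre-fix state described earlier, and the column layout is taken from the sample output above:

```python
# Sketch: parsing `fcadmin config` output to find adapters that
# still need to be reconfigured from initiator to target mode.
# The sample mimics a controller before the fix is applied.

FCADMIN_OUTPUT = """\
Adapter Type      State       Status
---------------------------------------------------
0a      initiator UNDEFINED   online
0b      initiator UNDEFINED   online
0c      initiator UNDEFINED   online
0d      initiator UNDEFINED   online
"""

def adapters_needing_target(text, wanted=("0c", "0d")):
    """Return the wanted adapters whose Type column is not 'target'."""
    needs = []
    for line in text.splitlines():
        fields = line.split()
        # data rows have 4 columns: Adapter, Type, State, Status
        if len(fields) >= 4 and fields[0] in wanted and fields[1] != "target":
            needs.append(fields[0])
    return needs
```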
1. You will need the Linux FCP ports in order to be able to add them to the igroup. To do this, log in to your Linux host and run the following command:
sanlun is part of the NetApp Host Utilities, and is necessary on any flavor of UNIX to collect information used by support. Here is a sample output for comparison:
host3 WWPN:10000000c9787a8a
host4 WWPN:10000000c9787a8b
This will tell you what the Storage Controller can see; here is some sample output:
What is common between the Linux and the Storage Controller data? Notice that on adapter 0c the WWPN 10:00:00:00:c9:78:7a:8a appears, which matches host3 on the Linux host. Also notice that on adapter 0d the WWPN 10:00:00:00:c9:78:7a:8b appears, which matches host4 on the Linux system. So we're going to use both of those WWPNs in our igroup for the next step.
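The matching here is just a formatting difference: the controller prints WWPNs with colons, while sanlun prints them without. A sketch of the comparison, using the WWPNs from the sample output:

```python
# Sketch: matching host-side WWPNs (sanlun prints them without
# colons) to controller-side WWPNs (colon-separated). The values
# come from the sample output above.

def normalize_wwpn(wwpn):
    """Strip colons and lowercase so both notations compare equal."""
    return wwpn.replace(":", "").lower()

host_wwpns = {
    "host3": "10000000c9787a8a",
    "host4": "10000000c9787a8b",
}
controller_wwpns = {
    "0c": "10:00:00:00:c9:78:7a:8a",
    "0d": "10:00:00:00:c9:78:7a:8b",
}

# controller adapter -> matching Linux host
matches = {
    adapter: host
    for adapter, c_wwpn in controller_wwpns.items()
    for host, h_wwpn in host_wwpns.items()
    if normalize_wwpn(c_wwpn) == normalize_wwpn(h_wwpn)
}
```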
3. Create a LUN on the Storage Controller for FCP using the lun setup command.
Provide the following values:
LUN = /vol/vol2/lun2
Type = linux
Size = 1g
Name of initiator group [windows_fcp]: linux_fcp
Initiator Group Members are the WWPNs that you noted in the above steps.
Add both of the WWPNs that you saw earlier on the Linux host.
4. Since we have multiple paths from the Linux host to the Storage Controller, we’re
going to enable multipathing.
On the Linux host, type the command:
Did you see your new LUN? If you did, that's great, but not likely. The Linux host needs to re-scan its bus to show the new LUN. To do this, you need to issue the command below EXACTLY as it is shown. You need to re-scan the hosts host3 and host4 that you used in the above steps.
6. Now run the sanlun command again and see if you can see your Storage Controller's new FCP LUN. Here's an example of what you should see:
[root@IBM-Host1B ~]# sanlun lun show all
controller: lun-pathname device filename adapter protocol lun size lun state
filer1: /vol/vol2/lun2 /dev/sdd host4 FCP 1g (1073741824) GOOD
filer1: /vol/vol2/lun2 /dev/sde host4 FCP 1g (1073741824) GOOD
filer1: /vol/vol2/lun2 /dev/sdc host3 FCP 1g (1073741824) GOOD
filer1: /vol/vol2/lun2 /dev/sdb host3 FCP 1g (1073741824) GOOD
Why are we seeing the same LUN 4 times? 2 adapters on the host, each going through 2 adapters on the Storage Controller, makes 4 paths to the LUN. In the next few commands, this will become clearer. Let's configure the LUN using multipathing, so that if 1 link fails, the LUN will still be visible and able to serve data.
Note: If you want to see what the NetApp multipath config is for Linux, check
out /etc/multipath.conf.
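The path arithmetic above (2 host adapters x 2 target ports = 4 paths) can be confirmed by counting entries per LUN pathname in the sanlun output. A sketch using the sample output above:

```python
# Sketch: counting paths per LUN in `sanlun lun show all` output.
# The lines are copied from the sample above; each /dev/sdX entry
# is one path to the same LUN.
from collections import Counter

SANLUN_OUTPUT = """\
filer1: /vol/vol2/lun2 /dev/sdd host4 FCP 1g (1073741824) GOOD
filer1: /vol/vol2/lun2 /dev/sde host4 FCP 1g (1073741824) GOOD
filer1: /vol/vol2/lun2 /dev/sdc host3 FCP 1g (1073741824) GOOD
filer1: /vol/vol2/lun2 /dev/sdb host3 FCP 1g (1073741824) GOOD
"""

# the second whitespace-separated field is the LUN pathname
paths_per_lun = Counter(
    line.split()[1] for line in SANLUN_OUTPUT.splitlines() if line.strip()
)
```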
7. Since we turned on multipathing earlier, we're going to use the device for that. Let's first do a check to make sure that ALUA is enabled on the drives. By default it should be, and it is generally the preferred setting. Issue the command:
multipath -ll
Note that the command option above is 2 lower-case letter 'L's, not the number one. Here is a sample of what you will see. Note the bolded area below that will tell you if ALUA is being used:
Note the ‘dm-2’ in the above data. This command is showing you what the
multipath device is for the current configuration.
Here is some sample output. Note the DM-MP DevName bolded below. The
name at the end should start with ‘dm-#’. In this example, it is dm-2. That
device is what you’re going to use as your drive on the Linux host.
This command may make clearer the host# and the primary/secondary controller ports that make up the 4 devices.
9. Prepare and mount the drive. In our example, we're going to use /dev/dm-2. Use whichever drive you were shown in your output above.
# fdisk /dev/dm-2
So you won’t spend a lot of cycles wondering how Linux fdisk works, use the
following as a step-by-step guide. Note that these are fdisk commands
below, and used after running fdisk:
n = new partition
p = primary partition
1 = partition number
Use all the cylinders
w = write the partition table
NOTE: DO NOT REBOOT YOUR LINUX SYSTEM! The fdisk command will prompt you to reboot; however, in these labs it is not necessary.
10. Create the file system on the Linux host with the following command. Use whichever device (/dev/dm-2) you just partitioned. You will be prompted to format the entire device; answer 'y'.
mkfs -t ext3 /dev/dm-2
11. Let’s create a directory and mount our LUN. Again, use your /dev/dm-2 that
you partitioned above.
mkdir /linux_lun
mount /dev/dm-2 /linux_lun
12. Change into the /linux_lun directory, and let's copy a file into it, just to confirm that we can use it.
cp /etc/hosts /linux_lun/file1
Product: Data ONTAP 7.3.3p4, NetApp 3140
Create a volume snapshot of vol2 called clone_snap
Create a clone of /vol/vol2/lun2 called lun2_clone using the snapshot above
Do a snap list of vol2 and see that you have a snapshot that's (busy,LUNs).
2. Now split the lun2_clone from the parent snapshot. When it is finished, check:
snap list vol2 – notice that the snapshot clone_snap is no longer busy (and
can be deleted!)
lun show – will now show you have a new lun called /vol/vol2/lun2_clone.
3. OK, so let's log in to your Linux host now. We're going to be doing work here, but first, let's copy another file into your /linux_lun directory.
cp /etc/passwd /linux_lun/file2
4. Now, remember how we earlier mounted /vol/vol2/lun2 on the Linux host? That's your task now for the lun2_clone. Here are some tips:
Remember to:
Map your LUN to an igroup
Rescan your Linux bus, and then use 'sanlun lun show -p' to show the new device
Make a directory called /linux_clone and mount your new device
5. Successful?
List out the contents of /linux_clone. Do you see the file1 file we had there earlier?
List out the contents of /linux_lun. Do you see file1 and file2?
Everything that was inside that snapshot was saved and can be brought back just that easily. You just cannot delete the snapshot that is backing a LUN.
Product: Data ONTAP 7.3.3P4, FAS3140, SnapDrive 6.x
Notes: Download nSANity from the NOW site and place it on your classroom PC. Extract the content, and use nSANity to collect data on your Storage Controller (both yours and your partner's), your Windows host, and your Linux host.
Once you have the data, normally the next step would be to upload the file to NetApp for analysis. However, if you wish to view the data that was collected, you can use nSANity with the -x EXTRACT option against the file. This option will unarchive all of the commands that were used to generate data. Note that there will be a lot of files extracted in the end.
Exercise 1: NDMPcopy Failure
Scenario
Description and instructions
Your task is to find out what the issue is and address it.
Follow the notes below to begin the lab. Note that you will be
using both Storage Controllers in the cluster for this lab.
Steps Actions
1. Note, for this lab you will be utilizing both Storage Controllers in the cluster. The Storage Controller prompts dictate which one should be issuing the commands:
filer1 = the odd-numbered Storage Controllers (filer1x, filer3x, filer5x, filer7x)
filer2 = the even-numbered Storage Controllers (filer2x, filer4x, filer6x, filer8x)
Verify the following to provision the storage systems for this scenario.
Each storage system has a five disk aggregate. (aggr1 should already
exist and meet the need for this lab.)
On filer1, create a 1GB volume called srcvol1 in aggr1
On filer2, create a 1GB volume called dstvol1 in aggr1
2. Enable the NDMP service and set NDMP access to all on both storage systems.
filer1> options ndmpd.enable on
filer1> options ndmpd.access all
filer1> options ndmpd.connectlog.enabled on
6. STOP: Let the instructor know when you reach this point.
7. Perform the data transfer from the source volume to the destination
volume using the ndmpcopy command from the destination.
Product: Data ONTAP 7.3.3P4, FAS3140
Follow the notes below to begin the lab. Note that you will be using your own Storage Controller for this lab, to allow students more time to use all the commands needed to set up, configure, manage, and destroy the mirrors.
1. Verify the following to provision the storage systems for this scenario.
Each storage system has a five disk aggregate. (aggr1 should already
exist and meet the need for this lab.)
On your Storage Controller, create the following:
2. Verify that dstvol2 is the same size as or larger than srcvol2 on the other Storage Controller with the vol status -b command.
Example:
filerA> vol status -b srcvol2
Volume   Block Size (bytes)   Vol Size (blocks)   FS Size (blocks)
------   ------------------   -----------------   ----------------
srcvol2  4096                 25600               25600
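The requirement in step 2 is simply that the destination volume has at least as many blocks as the source. A sketch of the check, using the figures from the sample output:

```python
# Sketch of the size rule behind step 2: the SnapMirror destination
# must hold at least as many blocks as the source. Figures are from
# the `vol status -b` sample above.

def dest_large_enough(src_blocks, dst_blocks):
    """True when the destination volume can mirror the source."""
    return dst_blocks >= src_blocks

BLOCK_SIZE = 4096          # bytes per block, per the sample output
srcvol2_blocks = 25600     # Vol Size (blocks) for srcvol2
```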
3. At the system prompt, restrict the destination volume for the other
Storage Controller with the vol restrict command.
Use the vol status command to verify that the dstvol2 is now
restricted.
Step 4. This step will specify the destination hosts that are allowed to access the source. For the purpose of this exercise, the destination host and the source are the same.
6. Start SnapMirror
At the system console of the source and destination appliances, use the
snapmirror on command to turn on SnapMirror. Enter the following:
filerA> snapmirror on
Step 7. Use the vol status -v command to verify the current status of vol0, srcvol2, and dstvol2.
What is the status of vol0; is it online or restricted?
What is the snapmirrored status of vol0; is it on or off?
What is the status of srcvol2; is it online or restricted?
What is the snapmirrored status of srcvol2; is it on or off?
What is the status of dstvol2; is it online or restricted?
What is the snapmirrored status of dstvol2; is it on or off?
8. The snapmirror initialize command will start the transfer from the source
to the destination. It uses the /etc/snapmirror.conf file on filerA as its
map for what to transfer.
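Since the /etc/snapmirror.conf file drives the transfer, it can help to sanity-check its entries. A sketch parser; the line format (source, destination, arguments, then a four-field minute/hour/day-of-month/day-of-week schedule) is assumed from typical 7-Mode configurations, and the entry itself is hypothetical:

```python
# Sketch: parsing /etc/snapmirror.conf entries. Line format assumed:
#   source:vol destination:vol arguments minute hour dom dow
# The sample entry is hypothetical, for illustration only.

SNAPMIRROR_CONF = """\
# blank lines and comments are ignored
filerA:srcvol2 filerA:dstvol2 - 0 * * *
"""

def parse_snapmirror_conf(text):
    """Return one dict per non-comment, non-blank line of the file."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        entries.append({
            "source": fields[0],
            "destination": fields[1],
            "arguments": fields[2],
            "schedule": " ".join(fields[3:]),
        })
    return entries
```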
1. When setting up a destination volume for qtree SnapMirror, the destination volume should be online and not restricted.
At the system prompt, verify that the destination volume for the other
Storage Controller is online.
Here is an example:
filerA> qtree status
Volume   Tree       Style  Oplocks  Status
-------  ---------  -----  -------  -------
vol0                unix   enabled  normal
vol0     src_qtree  unix   enabled  normal
srcvol2             unix   enabled  normal
dstvol2             unix   enabled  normal
Step 4. You are going to add a configuration for a qtree snapmirror from a qtree on vol0 to a qtree on the same vol0.
filerA> snapmirror on
Make the first baseline transfer for the qtree using the snapmirror initialize command.
Step 7.
times until the transfer completes. Observe that the system display
shows the Source, Destination, State, Lag, and Status headings. Verify
that the values shown under these headings are reasonable for the
current exercise. When the transfer completes, your output should look
similar to the following:
filerA> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
filerA:/vol/vol0/src_qtree filerA:/vol/vol0/dst_qtree Snapmirrored 00:00:53 Idle
filerA:srcvol2 filerA:dstvol2 Snapmirrored 00:00:52 Idle
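The Lag column above is hh:mm:ss; converting it to seconds makes it easy to compare against a threshold. A small sketch, using the values from the sample output:

```python
# Sketch: converting the Lag column of `snapmirror status`
# (hh:mm:ss) into seconds, e.g. to alert when a mirror falls
# too far behind. Values are from the sample output above.

def lag_seconds(lag):
    """'00:00:53' -> 53"""
    hours, minutes, seconds = (int(part) for part in lag.split(":"))
    return hours * 3600 + minutes * 60 + seconds

lags = {
    "dst_qtree": lag_seconds("00:00:53"),
    "dstvol2": lag_seconds("00:00:52"),
}
```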
Step
1. In order to mimic a catastrophe with the srcvol2 volume, we are going to turn the volume off. In order to do that, we need to shut down snapmirror first.
Any running snapmirror jobs will start to give errors, since they are terminating abnormally.
Turn the srcvol2 volume offline. Note that if there are any snapmirror
jobs that have not timed out, then you will not be able to turn the
volume offline. If this is the case, wait several seconds and try the
command again until you are successful.
Now that the srcvol2 volume is offline, turn snapmirror back on again.
You will see errors that the source is unreachable.
filerA> snapmirror on
Step 2. Check the available volumes on the system. Note that you may have to wait up to 60 seconds for the snapmirror to update.
The status output from these commands should not indicate any errors. The
snapmirror transfer error messages being written to the console,
/etc/messages and /etc/log/snapmirror file are the indication that there is
something wrong with the filerA site.
Step 5. Now, let's take care of the volume SnapMirror and make the dstvol2 volume read-writable. Before beginning this procedure you should turn off
SnapMirror. This prevents SnapMirror from trying to perform updates and
unintentionally updating the destination qtree and volumes from the source
qtrees or volumes. It will also suppress the error messages that are sure to
be displayed on the console.
Enter the following command on filerA
7. Check the snapmirror status on the system. Your results should look similar
to the output below.
8. Using an NFS export, mount the vol0 and dstvol2 volumes for filerA from your UNIX admin host or, if you prefer, create CIFS shares for each of the volumes and open them from your Windows host.
Step 9.
new files or directories. Write down a summary record of your changes.
Changes to /vol/dstvol2:
Changes to /vol/vol0/dst_qtree:
Step 1. In order to start the resync, snapmirror needs to be turned on and the volume re-enabled. Enter the following commands.
filerA> snapmirror on
Because the mirrors were broken and the volumes are not in an appropriate state for snapmirror to execute the updates as defined in the /etc/snapmirror.conf file, you will get error messages on the console.
Step 2. Before starting the resync of the dstvol2 volume to the srcvol2 volume, applications and users must stop accessing the dstvol2 volume. This also applies to the resync of the dst_qtree to the src_qtree.
3. Start the resync for srcvol2. Notice that we are using the dstvol2 volume as the source, because it has the most recent updates, and the srcvol2 volume as the destination. Enter the following command.
Step 4. The SnapMirror resync command used a baseline snapshot for the resync. Now use the snapmirror update command to request any changes made since the baseline snapshot was taken. Issue the following command to complete the synchronization of filerA:dstvol2 with filerA:srcvol2.
5. Enter the following command.
filerA> vol status srcvol2
Is srcvol2 in a read-write state?
6. Before you start the resync for the qtree src_qtree check the status of the qtrees.
What is the status of the qtree src_qtree?
Step 7. Now let's complete the first resync for the qtree, src_qtree, on vol0 on filerA. Notice that we are using filerA:/vol/vol0/dst_qtree as the source because it has the most recent updates. On filerA we are using filerA:/vol/vol0/src_qtree as the destination. Enter the following command.
8. The SnapMirror resync command used a baseline snapshot for the resync. The SnapMirror update command requests that any changes made since the baseline snapshot be transferred to the destination. Issue the following command to complete the synchronization of filerA:/vol/vol0/dst_qtree with filerA:/vol/vol0/src_qtree.
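As above, the command is not shown in this copy; assuming standard 7-Mode syntax, the qtree update would look something like:

filerA> snapmirror update -S filerA:/vol/vol0/dst_qtree filerA:/vol/vol0/src_qtree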
10. Earlier, when the broken mirror left filerA:dstvol2 read-write, you were instructed to make changes to those objects. You were also asked to record a summary of those changes. Now access /vol/srcvol2 and /vol/vol0/src_qtree and verify that the changes you made are the same as what you are now seeing. Write down a summary of the changes you see. Hint: If you want to check quickly, you can enter these commands on the Storage Controller:
filerA*> ls /vol/srcvol2
filerA*> ls /vol/vol0/src_qtree
Changes to /vol/srcvol2:
Changes to /vol/vol0/src_qtree:
11. You have just completed the first resync and made filerA:srcvol2 a
snapmirror of filerA:dstvol2 and filerA:/vol/vol0/src_qtree a snapmirror of
filerA:/vol/vol0/dst_qtree. Now it’s time to complete the second resync and
reverse the mirror relationship so that the srcvol2 volume and src_qtree qtree
are once again the source for these objects.
12. To start the second resync, you need to break the mirror again. Enter the following command.
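The break command is not reproduced in this copy. Because srcvol2 is currently the destination of the first resync, it is the mirror that must be broken; assuming standard 7-Mode syntax:

filerA> snapmirror break srcvol2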
13. Start the second resync for the dstvol2 volume. Enter the following command.
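The command is not shown here; assuming standard 7-Mode syntax, the reversed volume resync (srcvol2 as source, dstvol2 as destination) would be something like:

filerA> snapmirror resync -S filerA:srcvol2 filerA:dstvol2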
14. Before resyncing the qtree, break the mirror filerA:/vol/vol0/src_qtree. Enter the following command.
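Assuming standard 7-Mode syntax, the qtree break (not reproduced in this copy) would look something like:

filerA> snapmirror break /vol/vol0/src_qtree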
15. Now let's complete the second resync. Notice that we are using filerA:/vol/vol0/src_qtree as the source and filerA:/vol/vol0/dst_qtree as the destination. Enter the following command.
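Assuming standard 7-Mode syntax, the reversed qtree resync would look something like:

filerA> snapmirror resync -S filerA:/vol/vol0/src_qtree filerA:/vol/vol0/dst_qtree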
SnapMirror Compression
1. Let's go through enabling SnapMirror compression. This is done by adding an extra line to the /etc/snapmirror.conf file, and then reconfiguring SnapMirror to allow compression.
2. Since we're working with /vol/srcvol2 and /vol/dstvol2 on the same Storage Controller, we'll set that up now. Because we're using the same Storage Controller, the 'multi' connection line has the source and destination looking the same. In the actual SnapMirror relationship line, the source Storage Controller has been replaced by the connection name "comp"; note, however, that the destination Storage Controller has not changed!
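The actual /etc/snapmirror.conf lines are not reproduced in this copy. Based on the description above, they would look something like the following sketch; the every-minute schedule is only a placeholder assumption:

comp=multi(filerA,filerA)
comp:srcvol2 filerA:dstvol2 compression=enable * * * *

The first line defines the connection named comp; the second is the relationship line, with comp standing in for the source Storage Controller and compression enabled.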
3. Let's see if compression works. First, type snapmirror on so that Data ONTAP will re-read the /etc/snapmirror.conf file. If there's an error, you will see it come up now. If there's no error, you typed it in perfectly!
4. Next, type 'date' and press Return. Keep typing 'date' until the time is exactly on the minute (that is, the seconds read 00).
Now let's make a file on the source. Enter the following commands on the Storage Controller:
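The commands are missing from this copy. One way to create a file from the controller console, assuming the advanced-privilege mkfile command is available in this Data ONTAP release (the file name and size are placeholders):

filerA> priv set advanced
filerA*> mkfile 10m /vol/srcvol2/comp_test
filerA*> priv set admin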
5. Now type the 'date' command again. When it reaches 55 seconds, start typing the following command:
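The command itself is not shown in this copy. The example output below matches the long form of the status command, so it was likely something like:

filerA> snapmirror status -l dstvol2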
You should very soon see the SnapMirror transfer begin. You can see what's going on with compression by checking the status during a transfer. Here is an example of what you may see (note the compression ratio):
Source: dev:srcvol2
Destination: dev:dstvol2
Status: Transferring
Progress: 10328 KB
Compression Ratio: 93.3 : 1
State: Snapmirrored
Lag: 00:01:01
Mirror Timestamp: Sun Sep 11 17:29:01 EDT 2011
Base Snapshot: dev(0151752914)_dst.9
Current Transfer Type: Scheduled
Current Transfer Error: -
Contents: Replica
Last Transfer Type: Scheduled
Last Transfer Size: 56 KB
Last Transfer Duration: 00:00:01
Last Transfer From: dev:srcvol2
Description and instructions
Notes: This lab asks you to enable SnapVault and set it up to back up your Windows host at C:\Backup_Folder. You will need to enter an appropriate schedule, actually back up the required folder, and have it restored.
1. In SnapVault terms, the primary systems are the hosts being backed up: in this case, the Windows host. The secondary is the Storage Controller that performs the backups. So in this example, each Storage Controller becomes a secondary Storage Controller.
For the labs here, substitute Secondary with the hostname of your
Storage Controller.
Steps Actions
OSSV for Windows has already been installed on your Windows hosts.
2. Perform the SnapVault initial baseline transfer. At this stage the destination
qtree ossv does not exist and will be created by the snapvault start
command:
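The snapvault start command is not reproduced here. Assuming standard Open Systems SnapVault syntax, and using a placeholder for your Windows host's name, it would look something like:

Secondary> snapvault start -S <windows_host>:C:\Backup_Folder /vol/sv1/ossv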
3. Verify that the secondary qtree ossv was created in the secondary volume
sv1. (Hint: qtree status)
5. On the Open Systems SnapVault primary, change to the bin directory to run and verify the outputs of the following commands. Note that you will have to open a Windows command prompt (cmd) window:
C:\Program Files\netapp\snapvault\bin\snapvault destinations
You will see a snapshot ending with *_sv1-base.0. That's what we'll need. Enter the command:
Secondary> snapvault update -s <snapshot> /vol/sv1/Backup_Folder
2. Now let's enter a SnapVault schedule for the volume. That way, at certain times each hour, night, or week, SnapVault will do an incremental backup of the systems that are using the volume for SnapVault.
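The schedule command is not shown in this copy. Assuming standard 7-Mode syntax, a SnapVault schedule for volume sv1 would look something like the following; the snapshot name, retention count, and hours are placeholder assumptions:

Secondary> snapvault snap sched -x sv1 sv_hourly 23@0-22

The -x option tells the secondary to contact the primaries and transfer new data before creating the snapshot.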
Enter snapvault snap sched to view the schedule you just made.
Notes:
If, during the Open Systems SnapVault agent installation, you specified the IP address of the authorized secondary system, you must enter this IP address in the command syntax.
3. Display the SnapVault log from the Open Systems SnapVault client and the
secondary system. You should see entries for the initial baseline transfer,
the manual update, and the restore operations:
C:\Program Files\netapp\snapvault\etc\snapvault
Description and instructions
1. First we need to create a new VMkernel port (see your instructor for the assigned IP address).
3. Stop: Let your instructor know when you reach this point.
Filer1> qtree create /vol/nfsstore2/nfstree
5. Create an export for the nfstree qtree and change the export for
the nfsstore2 volume.
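The export commands are not reproduced in this copy. Assuming standard 7-Mode exportfs syntax, and using a placeholder for your ESX host's IP address, they would look something like:

Filer1> exportfs -p rw,root=<ESX_IP> /vol/nfsstore2/nfstree
Filer1> exportfs -p rw,root=<ESX_IP> /vol/nfsstore2

The -p option makes the export persistent by writing it to /etc/exports.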
1. From the vSphere Client, select the host in the Inventory panel.
2. Click the Configuration tab and click Storage in the Hardware
panel.
3. Click Datastores and click Add Storage.
4. Select Network File System and click Next.
5. Enter the server name, the mount point folder name, and the
datastore name.
6. Click Next.
7. In the Network File System Summary page, review the
configuration options and click Finish.
7. Did both the volume /vol/nfsstore AND the qtree /vol/nfsstore2/nfstree work?
________________________________________________
________________________________________________
________________________________________________
________________________________________________
________________________________________________
________________________________________________
________________________________________________
If one or both of them did not work, it is your job now to figure out why
they didn’t, and make sure that in the end, both NFS exports are
mounted in VMware as Datastores.
Product: Data ONTAP 7.3.3P4, FAS storage system, Windows Domain
1. Referring to your classroom IP map, verify the VMkernel port that you created on a new vSwitch.
Note: You may need to free up some adapters to create the new switch.
2. Configure the iSCSI software initiator on your ESX host to use your assigned storage system interface as a target.
1. From the vSphere Client, select the host in the Inventory panel.
2. Click the Configuration tab and click Storage Adapters in the
Hardware panel.
3. Select the iSCSI Software adapter and click Properties.
4. The iSCSI Initiator Properties pop-up appears. Click Configure.
5. Click Enable and OK.
6. The IQN (nodename) for the ESX server should appear. You will need this later, so make sure you copy and paste it into a buffer to use when you create an igroup as part of the LUN setup.
3. On your assigned storage system, create a volume and LUN as follows:
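The volume and LUN parameters are not reproduced in this copy. A sketch of the storage-side setup, assuming standard 7-Mode commands; every name and size here is a placeholder, and the igroup needs the ESX IQN you copied earlier:

filerA> vol create iscsivol aggr1 10g
filerA> lun create -s 5g -t vmware /vol/iscsivol/lun0
filerA> igroup create -i -t vmware esx_group <ESX_IQN>
filerA> lun map /vol/iscsivol/lun0 esx_group 0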
4. NOTE: If you refresh your connections, you should see that the Storage Controller is connected. It should look like the picture below:
If you do not see your Storage Controller here after a refresh, perform the following steps:
1. Go back to the properties of your iSCSI software adapter
2. Under the Dynamic Discovery tab, click on Add
3. Add your e0a IP address. This is important: it forces ESX to try a connection to that host. Keep the default port of 3260. Click OK.
4. Click on Close and rescan your storage when prompted. You
should now see your Storage Controller.
5. If not, check the Storage Controller settings and make sure
iSCSI is turned on, enabled for port e0a, and check your igroup
to see if your ESX box has logged into your e0a interface.
5. Create a new iSCSI LUN and datastore using the following parameters. Use the ESX Software iSCSI initiator to connect the LUN to the ESX host.
1. From the vSphere Client, select the host in the Inventory panel.
2. Click the Configuration tab and click Storage in the Hardware
panel.
3. Click Datastores and click Add Storage.
4. Select Disk/LUN and click Next. Your Storage Controller’s disk
should be listed.
5. Enter the server name, the mount point folder name, and a datastore name of filer#_iscsi, where # is your Storage Controller's ID number (i.e., 1 = filer1x; 2 = filer2x; etc.).
6. Click Next.
7. In the Network File System Summary page, review the
configuration options and click Finish.
6. Verify that you can browse the new datastore. Right click on the
datastore you created, and select Browse.
7. STOP: Let the instructor know when you reach this point.
8. Verify again that you can still browse the datastore. You most likely cannot at this point. Something has changed and needs to be fixed. Your job now is to find out what happened on the Storage Controller and fix it so you can once again browse your iSCSI datastore. When you believe you have fixed the problem, click the Refresh link in VMware to see if your LUN is back online. Note: if the problem is not fixed, the refresh can take several minutes before timing out.