Types of Information / Your Values
Storage system MAC address for the storage system's built-in Ethernet interface
Host name fas121
Password Netapp
Time zone
Storage system location
Language used for multiprotocol storage systems
Administration host Host name 192.168.1.1
IP address 192.168.1.212
Virtual interfaces Link Names (physical interfaces names such as e0, e5a, or e9b)
Simulator ns0,ns1
Number of links (number of physical interfaces to include in the vif)
Name of virtual interface (name of vif, such as vif0)
Ethernet interfaces Interface name ns0
IP address 192.168.1.21
Subnet mask 255.255.255.0
Partner IP address
Media type (network type)
Are jumbo frames supported?
MTU size for jumbo frames
Router (if used) Gateway name
IP address
Location of HTTP directory
DNS Domain name netappu.com
Server address 1 192.168.1.212
Server address 2
Server address 3
NIS Domain name
Server name 1
Server name 2
Server name 3
Windows domain netappu.com
WINS servers 1 192.168.1.212
2
3
Windows Active Directory domain administrator user name Administrator
Windows Active Directory domain administrator password Netapp
Active Directory (command-line setup only)
RMC MAC address
IP address
Network mask (subnet mask)
Gateway
Media type
Mailhost
RLM MAC address
IP address
Network mask (subnet mask)
Gateway
AutoSupport mailhost
AutoSupport recipient(s)
http://192.168.1.21/na_admin/
Filer General
/etc/syslog.conf - by default there is no such file, but if the user creates/modifies it, it
tells the filer where to direct console messages (typically to /etc/messages)
source -v /etc/rc - this command reads and executes a file containing filer commands,
line by line
Telnetting to the Filer
Only one user can telnet in at a time
options telnet
Autosupport Configuration
Filer> options autosupport
autosupport.doit <string>
Autosupport troubleshooting
1. ping netapp.com from the filer
2. TCP 443 (SSL) should be open at the SMTP server; the SMTP server may sit on the DMZ side
3. A mail relay must be specified in Exchange: the filer's host name or IP address must be
listed in the mail relay, and relaying for netapp.com (or relaying by this host or by this
IP) must be enabled for the filer. The filer is acting as an SMTP client, and in a typical
mail setup no SMTP client can send mail through the mail server to another SMTP server when
the host's identity differs as far as the mail ID is concerned - relaying is generally blocked.
4. The HTTP/HTTPS proxy server must pass the HTTP URL
RAID group
• vol add vol0 -g rg0 2 # add 2 disks to raid group 0 of vol0
• vol options yervol raidsize 16 # changes the raidsize setting of the vol yervol to 16
• vol create newvol -t raid_dp -r 16 32@36
RAID-DP: 28 (maximum raidsize)
RAID4: 14 (maximum raidsize)
Disk Fail/unfail
• priv set advanced # when a disk goes bad
• disk fail # does a prefail copy first, then fails the disk
• disk unfail # the disk is seen when sysconfig -r is done.
Sometimes it may just hang there, so fail the disk
• sysconfig -d
Disk troubleshoot
• priv set advanced; then -i <disk name> would release the disk &
reconstruct the RAID group
• led_on <1d.16>
• led_off <drive id>
• blink_on 4.19 (the failed disk will now show orange)
• blink_off 4.19
Zeroing disks
priv set advanced
disk zero spares # zeroes out the data on all spare disks
Filer> nv
NVRAM3 6V
raid.timeout in options raid controls (24 hr) the trigger when the battery is low
In the 940s, NVRAM5 is used as the cluster interconnect card as well, "two in one", on slot 11
Time Daemon
(ports 123, 13, and 37 must be open)
Because of this, hourly snapshot creation also fails, or an "in progress" message appears.
Because timed.max_skew is set to 30 min, we may see the above message every 30 min to 1 hr.
If we set this to 5s and watch how the skew happens - if we see a lot of skew messages (once timed.log is turned ON), MB (motherboard)
replacement may be required.
As a temporary check, do
# ntptrace -v filername
Filer>options timed
From FilerView => Set Date and Time: Synchronize Now <ip of NTP server> => do Synchronize Now and check NTP from a unix host.
Tip: if there are multiple interfaces on the filer, make sure that they are properly listed in the NIS or DNS server - the same host name with multiple ip
addresses may be required
BPS (Bytes Per Sector) of Disk
Block Append Checksum requires each disk drive to be formatted to 520 bytes per sector (rather than the standard 512). This provides a total of 4160 bytes in 8 sectors.
This space is broken into two parts. The first part is 4096 bytes (4K, the WAFL block size) of file system data. The remaining 64 bytes contain
the checksum for the preceding 4096 bytes. In this manner, the checksum is appended to each block of data.
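The arithmetic above can be checked with a short sketch (pure arithmetic, no filer access assumed):

```python
# Block Append Checksum layout: each 4096-byte WAFL block is stored in
# 8 sectors formatted at 520 bytes per sector; the 64 leftover bytes
# hold the checksum for the preceding 4096 bytes of data.
SECTOR_BYTES = 520
SECTORS_PER_BLOCK = 8
WAFL_BLOCK_BYTES = 4096

total = SECTOR_BYTES * SECTORS_PER_BLOCK      # 4160 bytes in 8 sectors
checksum_bytes = total - WAFL_BLOCK_BYTES     # 64 bytes of checksum

print(total, checksum_bytes)  # 4160 64
```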
Environmental Status
The top line in each channel reports failures: yes, if there are any.
Power
Cooling
Temperature
( if there is no problem )
Volume
• vol options vol0
• vol status vol0 -r (raid info of the volume)
• sysconfig -r
• vol options vol0 raidsize 9
• vol add vol0 <number of disks>
• vol status -l # to display all volumes
Double Parity
vol create -t raid_dp -r 2 (minimum of two)
(There are two parity disks for holding parity and double parity data)
environment dump
View) - not root. RSH security settings must be set with either the IP or the hostname, but with a
matching username for the logon accounts (not root, but the domain admin account)
(add this unix host to the /etc/hosts.equiv file - similar for a windows host as well)
Registry Walk
Filer> registry walk status.vol.<vol name>
Scheduling any job at filer
From windows host ( admin host ), enable rsh ( windows 2003 box )
C:\> rsh sim -l root -n sysconfig -r gives the output result (sim is the filer)
P/W
To change admin host administrator’s p/w
• Filer>passwd
• Filer>login administrator
• Filer>new password:…..
Quotas
• lines in /etc/quotas
• /vol/vol0/testftp tree 10m
WAFL stuff
NFS troubleshooting
• wcc -u <unix user> # unix credential
• >exportfs -c host pathname ro|rw|root # checks access cache for host permission
• >exportfs -s pathname # verifies the path to which a vol is exported
• >exportfs -f # flush access cache entries and reload
• >exportfs -r # ensures only persistent exports are loaded
• >vol read_fsid
• # mount # will display which protocol is being used for mounting (on the unix host)
• # mount -o tcp < >
• qtree security
• portmap -d
• rpcinfo -p <filer ip>
Told customer to get rid of the nosuid on the exports file and that solved the issue.
It is found by
=> gives a hex number - it should match a number above, indicating which volume's file has the problem. The hex number can be
converted to a decimal value as well
On the unix side
# find -inum <decimal value>
# cat /etc/mnttab
(Sometimes, the vol fsid number found must be byte-reversed to get the exact place of the inode)
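A sketch of the hex-to-decimal conversion (and the byte reversal sometimes needed); the fsid value here is a made-up example, not taken from a real filer:

```python
# Hypothetical hex number as reported by the filer
fsid_hex = "0x00e80c0a"

# Decimal value, usable with: find -inum <decimal value>
inum = int(fsid_hex, 16)
print(inum)  # 15207434

# Sometimes the fsid bytes must be reversed to locate the inode:
reversed_inum = int.from_bytes(bytes.fromhex(fsid_hex[2:]), "little")
print(reversed_inum)  # 168617984
```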
Check the local unix system - file-level and owner-level permissions, and also qtree security
(Sometimes filer permissions come to sit on top of the local permissions on the unix box, so they cannot be seen - they become hidden)
To fix, use
# chmod
# chown
NFS Performance
• pktt start e5a, pktt dump e5a, pktt stop (all three, start to end)
• sysstat
• nfsstat -d (displays cache statistics)
• -z (zeroes out the stats)
• -m (mount point statistics)
• perfstat -b -f filename > perfstat.begin
• perfstat -e -f filename > perfstat.end
• # time mkfile 10m test (the time it takes)
• # time cp test
• windows host > sio_ntap_sol 100 100 4096 100m 10 2 a.file b.file -noflock
NFS error 70
The file or directory that was opened by the NFS client was either removed or replaced on the NFS server
Networking Troubleshooting
• filer>traceroute
• filer>ping
• Filer > ifconfig for IP address related issues
• Filer > routed status
• Filer > routed OFF
• Filer > routed ON
DHCP
The filer cannot have a dynamic DHCP address. It is stored in the /etc/rc file as a static address even if DHCP is
chosen.
Packet
• netstat -i
• netstat -i <interface name, like ns0, e5a, etc.>
• netdiag -vv
• ifstat -a # flow control information at the bottom
10/100/1Gb flow control etc. is purely switch-based: whatever the switch is set to, the filer takes that.
Port
• netstat -a # to check all open ports on the filer
• netstat # to see all established connections
Port numbers
• 514 / tcp rsh
• 135 tcp/udp rpc
• udp rpc for sun
Network troubleshooting
Cannot Ping to other subnet
Checking steps
• rdfile /etc/rc
• ifconfig –a
• >netstat –rn #---- gateway line must be there
• >routed status
• >routed ON # --- if gateway is not there add manually
Brocade Switch
• #switchshow
• # wwn
10:00:00:05:1e:34:b0:fc - may be the output
• # ssn "10:00:00:05:1e:34:b0:fc" - setting the switch serial number to wwn
MCData Switch
If a direct connection works but not through the McDATA, verify that OSMS is licensed and enabled.
CIFS setup
cifs setup
cifs general
• cifs shares
• cifs access permission
• cifs restart
• cifs shares eng
• cifs shares -add eng /vol/cifsvol/eng
• cifs access eng full control
• cifs sessions
• cifs sessions -s
• cifs terminate -t 10
CIFS performance
• cifs stats
• smb_hist -z
• sysstat –c 15 2 ( 15 iterations every 2 seconds )
• statit
• wafl_susp
• ifstat -a
• netstat -m -r -i (any one can be used)
• netdiag -v, -sap
• cifs sessions
CIFS home directory
1. volume snapvol is created
2. qtree is created as root of this vol => snapvol ; sec is unix
3. share is created as snaphome of this qtree as
/vol/snapvol/home with everyone/full control
4. options cifs.home-dir /vol/snapvol/home
5. options cifs.home-dir-namestyle <blank>
6. edit /etc/cifs_homedir.cfg file and add at the end
/vol/snapvol/home
CIFS troubleshooting NT4 domain
• cifs setup error: the filer's security information differs from the domain controller's; cifs
could not start
• Sol:
• NT4 PDC/BDC: Server Management - delete the account, recreate the account, and rerun
the setup.
• NT4 PDC and BDC secure channel communication/verification:
• BDC c:\> netdom bdc \\bdcname /query
CIFS troubleshooting
• wcc -s domain\name # windows - match with /etc/lclgroups.cfg file; any
changes here require a reboot
• wcc -u username # unix
• cifs domaininfo # tells the dns entry
• rdfile /etc/rc # will have dns info
• options wafl
should see unix
pcuser
• /etc/usermap.cfg
• /etc/passwd # these two files are read the first time
B.
1. Check the DNS servers - must point to itself and must have at least the 4-5 AD services
C.
net view \\filername should show all shares from the windows side, and cifs shares should show them from the filer side.
But when a share is accessed from a windows machine, we get No Network Provider Present. Ping works, iscsi works, the iscsi drives are OK and
accessible. But the cifs shares do not work. On the filer side we see 'Called name not present (0x82)'. cifs resetdc also gives the same message.
Check :
1. If the filer and the windows DC are rebooted at the same time because of a power failure, this is
seen. The filer needs to come up first and then the DC
2. Make sure that there is no virus-related activity going on on that host. A virus scan on the
windows host or the filer can also make this happen
Check /etc/usermap.cfg
/etc/passwd file
/vol/test - check this is UNIX or NTFS
Cifs resetdc
Cifs Options
cifs.show_snapshot on
wafl.nt_admin_priv_map_to_root on*
options cifs.trace_login on
* to take ownership of file by windows top level administrator when file is created from unix side and has only unix ACLs
Scenario A
1. qtree in vol is created with mixed sec
2. share that qtree
3. groupwise users access in unix are defined in /etc/group file
/etc/group - > is in unix side. Client or NIS server
eng::gid:khanna,uddhav
In client side
(This was the cause when a user upgraded from 6.4 to 6.5 and some files in a mixed qtree's
folders could not be accessed, nor could the permissions be changed, even by root from the NFS side.
The permission reset above made it work.)
Scenario B
CIFS audit
• options cifs.audit.enable on
cifs.audit.file_access_events.enable on
cifs.audit.logon_events.enable on
cifs.audit.logsize 524288
cifs.audit.saveas /etc/log/adtlog.evt
Veritas Backup Exec 9.1: My Computer -> Shares -> Sessions shows Veritas Backup Exec
administrative account connections for every share on the filer. One connection per share, and it
grows each and every day, as well as staying there each and every day. These must be wiped out.
Virus Scan
Fpolicy
• fpolicy show
• fpolicy enable
• fpolicy options
• fpolicy server
Quotas
rdfile /etc/quotas
Cluster Prerequisite
Cluster
• cf disable
• cf enable
• cf status
cf giveback
F1: cf takeover -> F2 can shut down
When F2 comes back up, it shows: Waiting for giveback from partner
F1: cf giveback
Sometimes, due to an active state, this may not run. Make sure that no cifs sessions are running. Snapmirror should also be off
San FCP
• switch>cfgshow
• >fcp show cfmode (standby,partner,mixed)
• >fcp set cfmode mixed
• >fcp show adapters
• >fcp show initiators
• >fcp setup
• >fcp set cfmode [dual_fabric | mixed | partner | standby ]
• >fcp nodename
• >fcp config
• >fcp status
• >fcp start
• >fcp config 10b
• >igroup show
• >fcp stats vtic
• ( virtual target interconnect adapter )
• >fcp stats 5a
• >sysstat –f 1
• lun show –m
• lun show -v
#newfs /dev/rdsk/c1t1d0s6
#reboot -- -r
#sanlun
LUN
1. lun create
2. lun setup
3. lun show –m, -v
4. lun stat –a –o –i 2
5. lun destroy -f <lun path> (the -f flag destroys the lun even if it is mapped)
6. lun move
7. lun map | unmap <lun path><initiator group>[<lun id>]
8. lun online
9. priv set diag
10.lun geometry
1. create qtree
2. share qtree
3. create lun – snap drive can be used – so that lun is created inside qtree
(if the qtree is not set properly, the cifs shares cannot be accessed - an access denied error message
appears)
LUN restore from snapshot (snap restore of lun – snap restore licensing req )
When space reserve for volumes, qtrees, and files is disabled by default, to change this we must do:
#format c1t0d1
Snapshot of LUN
An rws file is created when a snapshot of a LUN is taken. Event ID 124 is generated by
SnapDrive. When deletion of this snapshot LUN is tried, 134 is created as well. When there is a
busy snapshot, other snapshots may hang and 134 is also generated
(lun files can only be restored to either the root volume or qtree root directories)
(Also, when the lun is copied, it may not be full, so the copy may go fast)
iSCSI
OR
filer> iscsi security default -s method -p inpassword -n inname [-o outpassword -m outname]
(any initiator connection) [[only this one works]]
Troubleshooting
Space Reservation
df -r
.snapshot
Nfs snapshot
.snapshot directory is shown only at the mount point, although it actually exists in every
directory in the tree
Cifs snapshot
SnapDrvDc.exe
1. Break mirror
2. Check that lun is online
3. If using terminal services and you get the Failure in Checking Policies error, Error
Code: 13040, then log off and log back in, or if that does not work, reboot the
windows host.
Single File Snap Restore (SFSR) is done before snapdrive makes the connection. During this
time snapdrive virtually does not work and issues the 13040 error.
No other lun restore can be done from the same host. As SFSR is going on in the background, the sol is:
wait patiently. Log off and log back in after a while; the drive should come back.
Snap restore
Volume Restore
File restore
Snapshot restore
• snap restore -f -t file -s <snapshot> /vol/vol0/<directory name> # to restore a
directory
Vol
• vol status –b
• vol create vol1 2
• vol restrict vol1
• vol copy start vol0 vol1
• vol online vol1
• snap list vol1
… snapshot_for_volcopy.0
• snap create vol1 snap1
Snap Mirror
• /etc/snapmirror.conf
• vol status –b vol1 (size in blocks)
• vol status vol1
• options snapmirror.access host=filerA
• filerB>vol restrict vol2
• >wrfile /etc/snapmirror.conf
• vol status –v
• filerB> snapmirror initialize -S filerA:vol1 filerB:vol2 # baseline data transfer
• snapmirror status
• snapmirror status -l # more detailed info
• snapmirror off
• snapmirror break filerB:vol2
• snapmirror on
• snapmirror quiesce filerB:/vol/vol0/mymirror (before breaking a qtree snapmirror)
• snapmirror resync -S filerB:vol2 filerA:vol1
----
for qtree:
Breaking snapmirror
1. snapmirror quiesce < destination path> #--- check from Snapmirror.conf file
2. snapmirror off
3. snapmirror break < destination path>
Have to resync
Synchronous Snapmirror
• /etc/snapmirror.conf
• filera:/vol1 filerb:/vol2 - sync
• #multi path
• src_con = multi()
• src_con:/vol1 dest:/vol2 - sync
• #src_con = failover()
Requirement
snapmirror optimization
window size (bytes) = (round-trip time in ms / 1000) x (bandwidth in bits/s) / 8; e.g. (100/1000)*10,000,000/8 = 125,000
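The figure above is a bandwidth-delay product: bytes in flight = round-trip time x link speed. A sketch, assuming the 100 ms round trip and 10 Mb/s link implied by the numbers in the note:

```python
def snapmirror_window_bytes(rtt_ms: float, link_bps: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to keep the link full."""
    return (rtt_ms / 1000) * link_bps / 8

# 100 ms round trip on a 10 Mb/s WAN link, as in the note above:
print(snapmirror_window_bytes(100, 10_000_000))  # 125000.0
```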
Snapmirror problem
Snapmirror source transfer from <vol> to <destination filer>:<vol>: request denied, previous
request still pending
Sol : On Destination
Snapvault
• >options snapvault.enable on
• >options snapvault.access host=name
baseline qtree
Snapvault troubleshooting
If a backup relationship from OSSV is created and then deleted from the secondary, any attempt to recreate it fails with the error message:
“Transfer aborted: the qtree is not the source for the snapmirror destination”
Example
Transfer aborted : the qtree is not the source for the snapmirror destination
To workaround
Snapvault.exe destinations
• >options ndmpd.enable on
• >options ndmpd.access dfm-host
• options ndmpd.authtype <challenge | plaintext >
• >options snapvault.enable on
• >options snapvault.access host
• >options ndmpd.preferred_interface e2 #optional
Snaplock
VIF
Create steps
a) vif create vif1 e0 e7a e6b e8 --------single mode
OR
Tip 1
Tip 2
If there are 3 ports (e.g. 2 Gig and 1 100Base-T Ethernet), then e0 (the default, 100Base-T) must be turned off
Vfiler
The hosting filer administrator does not have CIFS or NFS access to the data contained in vFilers, except for that in vFiler0. After a storage
unit is assigned to a vFiler, the hosting filer administrator loses access to that storage unit. The vFiler administrator gains access to the
vFiler by rsh to the vFiler's IP address.
As hosting filer administrator, before you create a vFiler with the /vol/vol1 volume, you can configure the /etc/exports file so that you can
mount the /vol/vol1 volume. After you create the vFiler, an attempt to mount the /vol/vol1 volume would result in the “Stale NFS file
handle” error message. The vFiler administrator can then edit the vFiler's /etc/exports file to export /vol/vol1, run the exportfs -a command
on the vFiler, then mount /vol/vol1, if allowed.
VFM
Cache location
<C:\Documents and Settings\All Users\Application Data\NuView\>, which contains the cache directory:
1. Take a snapshot of the application in case there is a need to return to the working state. This can be done through VFM in the
Tools menu by selecting Take Application Snapshot Now. Have the user create a snapshot and save it.
2. Save a copy of the VFM application folder < C:\Documents and Settings\All Users\Application Data\NuView> somewhere for
backup purposes.
3. Exit VFM and stop the StorageXReplicationAgent service and the StorageXServer service.
4. Create a folder on a different drive on the VFM server where the application directory should reside in the future. Please use a
local destination for the folder for example D:\VFMAppData. A mapped drive does not work in this situation. Create a new
subdirectory called NuView in the new location. Ex: D:\VFMAppData\NuView
5. Go to the C:\Documents and Settings\All Users\Application Data\NuView directory and copy the StorageX directory to the new
location created by the user under the NuView subdirectory. The new location should look something like this:
D:\VFMAppData\NuView\StorageX
6. Open the registry with regedit.exe and find the HKEY_LOCAL_MACHINE\SOFTWARE\NuView\StorageX key. Add a new
String Value here with the name AppDataDir and set the value data to the root of the new cache location. Ex: D:\VFMAppData
7. Close regedit and start the StorageX Server and Replication Agent services.
8. Start VFM and wait as it reads through the new cache directory and loads roots and information that were copied to the new
location.
Ndmpd should be ON
To check
Testing
Some issues
a. If Veritas is showing RED for the LTO tape devices, then reboot the LTO and restart the Veritas services
b. If the backup is done from the Veritas software, make sure that no sessions are left behind as cifs share sessions. Go to My
Computer -> Manage -> connect to filer -> Shares -> Sessions.
Administrative shares from backups are seen sticking here - not going away even after the backup is complete - and you see a huge list here.
McData Side
Switch: WWN[1:000:080088:020751]
Fabric: WWN[1:000:080088:020751]
Name: CNX01
Domain: 97
Type: switch
Version: 06.01.00
Vendor: IBM
No tapes found.
• storage unalias -a
• storage alias mc0 WWN[xx:xxx:xxxxxx:xxxxxx][Lx]
• storage alias st0 WWN[yy:yyy:yyyyyy:yyyyyy][Ly]
• and to cause the filer to create the aliases via the "source" command
• source /vol/vol0/etc/tape_alias
Drives
slot x: FC Host Adapter 3a (Dual-channel, QLogic 2312 (2342) rev. 2, 64-bit, L-port,
<OFFLINE (hard)>)
Firmware rev: 3.3.142
Host Loop Id: 0 FC Node Name: 2:000:00e08b:1c780b
Cacheline size: 16 FC Packet size: 2048
SRAM parity: Yes External GBIC: No
Link Data Rate: 1 Gbit
I/O base 0x9e00, size 0x100
memory mapped I/O base 0xa0c00000, size 0x1000
• options timed
• timed.enable on
• timed.servers ntp2.usno.navy.mil:<ip address>
• rdate <host>
Out of inodes
• df -i /vol/vol0
OR
To change
• *> ps -h -l (eelll)
• *> sysstat -x 1
http://www.apparentnetworks.com/sas/
NDMP
Levels 0,1,2
0 is baseline copy
NDMP copy from vol to vol (the /etc/hosts.equiv file must have both filers' entries)
(best solution for data migration; snapmirror or vol copy will cause fragmentation - the filer will retain ACLs)
a) ndmpcopy source:path_to_vol destination:path_to_vol -level 0 -dpass
For data changed since the level 0:
b) ndmpcopy source:path_to_vol destination:path_to_vol -level 1 -dpass
Finally turn off cifs and nfs, for the final incremental backup:
c) ndmpcopy source:path_to_vol destination:path_to_vol -level 9 -dpass
(After this level 0 is done, a level 1 ndmpcopy may be done to copy the data that has changed since the level 0)
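The level semantics above (level 0 is a baseline; a higher level copies only what changed since the previous lower-level copy) can be sketched like this; the file names and modification times are made up for illustration:

```python
# Dump-style incremental levels, as used by ndmpcopy: a level-N copy
# transfers files modified after the previous lower-level copy ran.
# mtimes are arbitrary integers here; a real run would use timestamps.
files = {"a.txt": 10, "b.txt": 25, "c.txt": 40}

def incremental(files, since):
    """Names of files modified after the reference copy's time."""
    return sorted(name for name, mtime in files.items() if mtime > since)

level0 = sorted(files)                  # baseline: copies everything
level1 = incremental(files, since=20)   # only data changed since level 0
print(level0)  # ['a.txt', 'b.txt', 'c.txt']
print(level1)  # ['b.txt', 'c.txt']
```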
Data Migration
Tip:
If data is wrongly copied to a vol, sometimes we see a vol inside vol0 and the vol cannot be deleted. When accessed via \\filer\C$, we see the vol and it
cannot be deleted - it says folders lost or not found. In that case the folder can be deleted: renaming is possible, so rename it and delete it.
Sync core
Unix commands
1. Error Code 9035: An attempt to resize lun '/vol/vtape/nvlun0/lun0.un' on filer '10.40.3.2' failed. Description: new size exceeds this
lun's geometry
Sol: the size was more than 10x. A lun cannot grow to more than 10 times its initial size, e.g. if the initial size is 130 GB then 1300 GB is
the max possible.
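The resize ceiling works out as a simple multiplication; a sketch (the 130 GB figure is the example from the error report above):

```python
def max_lun_size_gb(initial_gb: float, factor: int = 10) -> float:
    """Resize ceiling: a lun cannot grow past 10x its initial size."""
    return initial_gb * factor

print(max_lun_size_gb(130))  # 1300
```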
Exchange data base restore to different location
SnapDrive is used to restore the snapshot, and hence the database files, if a different location is desired
Exchange Restore
1. Up to the minute - whatever has happened since the last crash; the log files are replayed automatically, so the database is up to the minute.
A test backup and restore will have fundamentally no effect. If a backup is done and mail is deleted and then restored instantly, the intermediate
log files will be deleted and hence there is no effect - basically it was already within the database, so the system treats it as existing and ignores it, but
it will have all the latest mail
2. Point in time - up to that time; all the backups after that date and time are not usable. Mails after that cannot be recovered; the logs are deleted.
1. Mount the previous SME snapshots - both database and log files
2. Copy those to the recovery storage group directory
3. If a restore is tried now, you will get exchange error C1041739
4. Copy the eseutil.exe files to that directory - the files are
Eseutil.exe
Ese.dll
Exchmem.dll
Exosal.dll
Jcb.dll
5. Run eseutil /mh priv1.edb
eseutil /p priv1.edb
6. Restore; during this the system asks to overwrite the database files - it is the option at the bottom of the database properties; choose
that option and restore
7. The system is mounted.
When IS is created in log volume
When the IS is created in the log volume, if it was separate before, SME fails and reports
VSS_E_PROVIDER_VETO
Remote Verification Error Log ( if verification server is different than present server )
Eseutil.exe 6.5.7226.0
Ese.dll
Jcb.dll
Exosal.dll
Exchmem.dll
0xC00402ba
(While a different verification server is used, the symptom message says “generated on” and “generated to”, so the above ‘5
files’ need to be checked.)
Local error code : 0xC004031d Event ID 209 also event id 264 - job failed
Event ID : 177
Sol: The local server has its preferred address set to the dedicated, directly connected cable. The local server has one NIC for the filer iscsi
connection and another NIC for the public network. The filer had two NICs - one for the dedicated iscsi connection (192.168.1.1) and the
other, 10.1.8.11, for cifs and other connections. From the remote server, change the preferred IP to 10.1.8.11 (the other IP of the filer).
Make sure that at least drives can be created from this verification server. After the preferred IP address change, the above
error no longer happened.
The main problem was that RPC was not able to ping from the private network to the public network. If remote verification is
done from another server, it is advisable not to make an iscsi session to the same NIC where the local (source) server is
talking. That gives an RPC error - error code 0xC004146e with event ID 250 and event ID 117.
Make sure that the snapdrive services have the same account & p/w information on both servers.
SME error:
Facility : Win32
ID no: C00706b6
Reason: MS Exchange services were not started and the databases were not mounted
Error : The target virtual disk in snapshot is inconsistent or does not exist and cannot be deleted
( SnapDrive Error code : 0xc0040302)
Unable to create snapshot. Check application log ( SnapDrive error code: 0xc00402be)
Failed to create snapshot of Drive M
Error message: ZAPI: An attempt to create snapshot ‘name_name_recent’ of the ‘name volume’ volume with
async option ‘false’ failed on the filer <filer name>
Sol: (Greyed-out snapshots are system-wide, i.e. from the volume itself.) The snapshots have to be deleted manually.
snap list -q <vol>
Event ID 51
An error was detected on device \Device\Harddisk7, and the hard disk is a Netapp lun
System Configuration
NetApp Release 7.2: Mon Jul 31 14:53:25 PDT 2006
System ID: 0099907364 (fas121)
System Serial Number: 987654-32-0 (fas121)
Model Name: Simulator
Processors: 1
slot 0: NetApp Virtual SCSI Host Adapter v0
25 Disks: 11.8GB
2 shelves with LRC
slot 1: NetApp Virtual SCSI Host Adapter v1
slot 2: NetApp Virtual SCSI Host Adapter v2
slot 3: NetApp Virtual SCSI Host Adapter v3
slot 4: NetApp Virtual SCSI Host Adapter v4
25 Disks: 11.8GB
2 shelves with LRC
slot 5: NetApp Virtual SCSI Host Adapter v5
slot 6: NetApp Virtual SCSI Host Adapter v6
slot 7: NetApp Virtual SCSI Host Adapter v7
slot 8: NetApp Virtual SCSI Host Adapter v8
4 Tapes: VT-100MB
VT-100MB
VT-100MB
VT-100MB
From command:
fas121> sysconfig
(same output as above)