
Managing Aggregates

Creating an aggregate

node2> aggr create aggr1 -t raid_dp -r 6 10
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v0.36 Shelf 2 Bay 4 [NETAPP VD-50MB 0042] S/N [28003913] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v0.22 Shelf 1 Bay 6 [NETAPP VD-50MB 0042] S/N [28003802] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v0.35 Shelf 2 Bay 3 [NETAPP VD-50MB 0042] S/N [28003914] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg1/v0.21 Shelf 1 Bay 5 [NETAPP VD-50MB 0042] S/N [28003803] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v0.20 Shelf 1 Bay 4 [NETAPP VD-50MB 0042] S/N [28003915] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v0.34 Shelf 2 Bay 2 [NETAPP VD-50MB 0042] S/N [28003804] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v0.19 Shelf 1 Bay 3 [NETAPP VD-50MB 0042] S/N [28003905] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v0.33 Shelf 2 Bay 1 [NETAPP VD-50MB 0042] S/N [28003916] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v0.18 Shelf 1 Bay 2 [NETAPP VD-50MB 0042] S/N [28003906] to aggregate aggr1 has completed successfully
Thu May 19 12:43:27 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v0.32 Shelf 2 Bay 0 [NETAPP VD-50MB 0042] S/N [28003917] to aggregate aggr1 has completed successfully
Creation of an aggregate with 10 disks has completed.
node2> Thu May 19 12:43:29 GMT [node2: wafl.vol.add:notice]: Aggregate aggr1 has been added to the system.

Aggregate Status

node2> aggr status aggr1
           Aggr State           Status            Options
          aggr1 online          raid_dp, aggr     raidsize=6
                Volumes: <none>

                Plex /aggr1/plex0: online, normal, active
                    RAID group /aggr1/plex0/rg0: normal
                    RAID group /aggr1/plex0/rg1: normal
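Before creating an aggregate it can help to confirm which spare disks are available, and disks can also be named explicitly instead of letting Data ONTAP pick them by count. A minimal sketch, assuming free spares exist; aggrX and the v0.5x disk names are placeholders, not values from the session above:

node2> aggr status -s
node2> aggr create aggrX -t raid_dp -r 6 -d v0.50 v0.51 v0.52

aggr status -s lists the node's spare disks; the -d form builds the aggregate from exactly the disks named, instead of only giving a disk count as in the session above.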

Raid Group Status

node2> aggr status -r aggr1
Aggregate aggr1 (online, raid_dp) (zoned checksums)
  Plex /aggr1/plex0 (online, normal, active)
    RAID group /aggr1/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  --  ----- --- ---- ---- ----  ---  --------------    --------------
      dparity   v0.32   v0    2    0  FC:A  -   FCAL  N/A  70/144384         77/158848
      parity    v0.18   v0    1    2  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.33   v0    2    1  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.19   v0    1    3  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.34   v0    2    2  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.20   v0    1    4  FC:A  -   FCAL  N/A  70/144384         77/158848

    RAID group /aggr1/plex0/rg1 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  --  ----- --- ---- ---- ----  ---  --------------    --------------
      dparity   v0.21   v0    1    5  FC:A  -   FCAL  N/A  70/144384         77/158848
      parity    v0.35   v0    2    3  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.22   v0    1    6  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.36   v0    2    4  FC:A  -   FCAL  N/A  70/144384         77/158848

node2>

Offline and Destroy

node2> aggr offline aggr1
Aggregate 'aggr1' is now offline.
node2> Thu May 19 12:45:08 GMT [node2: volaggr.offline:CRITICAL]: Some aggregates are offline. Volume creation could cause duplicate FSIDs.
node2> aggr status aggr1
           Aggr State           Status            Options
          aggr1 offline         raid_dp, aggr     raidsize=6,
                                                  lost_write_protect=off
                Volumes: <none>

                Plex /aggr1/plex0: online, normal, active
                    RAID group /aggr1/plex0/rg0: normal
                    RAID group /aggr1/plex0/rg1: normal
node2>
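Nothing has been lost at this point: an aggregate that has only been taken offline can simply be brought back. A minimal sketch, not part of the captured session:

node2> aggr online aggr1

For a single consolidated view of every aggregate's RAID groups plus the spare and failed disk pools, sysconfig -r reports much the same layout as aggr status -r.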

node2> aggr destroy aggr1
Are you sure you want to destroy this aggregate? y
Thu May 19 12:45:28 GMT [node2: raid.config.vol.destroyed:info]: Aggregate 'aggr1' destroyed.
Aggregate 'aggr1' destroyed.
node2>

Undestroying Aggregate

node2> priv set advanced
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
node2*> aggr undestroy aggr1
To proceed with aggr undestroy, select one of following options
[1] abandon the command
[2] undestroy aggregate aggr1 ID: 0xebf48fe6-11e083a7-5000939f-88951d56
Selection (1-2)? 2
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.22 Shelf 1 Bay 6 [NETAPP VD-50MB 0042] S/N [28003802] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.32 Shelf 2 Bay 0 [NETAPP VD-50MB 0042] S/N [28003917] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.21 Shelf 1 Bay 5 [NETAPP VD-50MB 0042] S/N [28003803] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.33 Shelf 2 Bay 1 [NETAPP VD-50MB 0042] S/N [28003916] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.36 Shelf 2 Bay 4 [NETAPP VD-50MB 0042] S/N [28003913] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.18 Shelf 1 Bay 2 [NETAPP VD-50MB 0042] S/N [28003906] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.19 Shelf 1 Bay 3 [NETAPP VD-50MB 0042] S/N [28003905] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.35 Shelf 2 Bay 3 [NETAPP VD-50MB 0042] S/N [28003914] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.20 Shelf 1 Bay 4 [NETAPP VD-50MB 0042] S/N [28003915] had its label edited on Thu May 19 12:50:38 GMT 2011!
Thu May 19 12:47:15 GMT [node2: mgr.boot.edited_disk:CRITICAL]: RAID Disk v0.34 Shelf 2 Bay 2 [NETAPP VD-50MB 0042] S/N [28003804] had its label edited on Thu May 19 12:50:38 GMT 2011!

Thu May 19 12:47:16 GMT [node2: raid.rg.reparity.start:notice]: /aggr1/plex0/rg1: starting parity recomputation
Thu May 19 12:47:16 GMT [node2: raid.rg.reparity.start:notice]: /aggr1/plex0/rg0: starting parity recomputation
Aggregate 'aggr1' undestroyed. Run wafliron to bring the aggregate online.
node2*>

Resolving inconsistency

node2*> aggr status aggr1
           Aggr State           Status            Options
          aggr1 restricted      raid_dp, aggr     lost_write_protect=off
                wafl inconsistent
                Volumes: <none>

                Plex /aggr1/plex0: online, normal, active
                    RAID group /aggr1/plex0/rg0: recomputing parity 6% completed
                    RAID group /aggr1/plex0/rg1: recomputing parity 7% completed
node2*>
node2*> aggr wafliron start aggr1
Aggregate 'aggr1' is now online.
Thu May 19 12:47:35 GMT [node2: wafl.inode.fill.enable:info]: fill reservation enabled for inode 7049 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.inode.overwrite.enable:info]: overwrite reservation enabled for inode 7049 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.inode.fill.enable:info]: fill reservation enabled for inode 7050 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.inode.overwrite.enable:info]: overwrite reservation enabled for inode 7050 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.inode.fill.enable:info]: fill reservation enabled for inode 7051 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.inode.overwrite.enable:info]: overwrite reservation enabled for inode 7051 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.inode.fill.enable:info]: fill reservation enabled for inode 7052 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.inode.overwrite.enable:info]: overwrite reservation enabled for inode 7052 (vol vol0).
Thu May 19 12:47:35 GMT [node2: wafl.iron.start:notice]: Starting wafliron on aggregate aggr1.
Thu May 19 12:47:36 GMT [node2: wafl.iron.parallel.mount.notify:notice]: Wafliron parallel mount enabled. Wafliron is mounting flexible volumes in parallel in the aggregate aggr1.
Thu May 19 12:47:37 GMT [node2: wafl.scan.start:info]: Starting wafliron demand on aggregate aggr1.

Thu May 19 12:47:37 GMT [node2: wafl.iron.completion.times:info]: Mounting phase of aggregate aggr1 took 2s 488ms.
Thu May 19 12:47:37 GMT [node2: wafl.iron.completion.times:info]: Inode scanning phase of aggregate aggr1 took 50ms.
Thu May 19 12:47:37 GMT [node2: wafl.iron.completion.times:info]: Lost blocks search phase of aggregate aggr1 took 0ms.
Thu May 19 12:47:37 GMT [node2: wafl.iron.completion.times:info]: Lost inodes search phase of aggregate aggr1 took 0ms.
Thu May 19 12:47:37 GMT [node2: wafl.check.info:error]: WAFLIRON, aggregate aggr1: Clearing inconsistency flag on aggregate aggr1.
node2*>
node2*> Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Rootdir mount phase of aggregate aggr1 took 119ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Activemap mount phase of aggregate aggr1 took 71ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Snap inofiles mount phase of aggregate aggr1 took 0ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Snap selfcover mount phase of aggregate aggr1 took 0ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Snapdir mount phase of aggregate aggr1 took 60ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Snapmaps mount phase of aggregate aggr1 took 0ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Summary map mount phase of aggregate aggr1 took 20ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Refcnt mount phase of aggregate aggr1 took 0ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Metadir mount phase of aggregate aggr1 took 140ms.
Thu May 19 12:47:39 GMT [node2: wafl.iron.mount.times:info]: Flex vols mount phase of aggregate aggr1 took 0ms.
Thu May 19 12:47:39 GMT [node2: wafl.scan.iron.done:info]: Aggregate aggr1, wafliron completed.
node2*>

Increasing the aggregate size

node2> aggr status -r aggr2
Aggregate aggr2 (online, raid_dp) (zoned checksums)
  Plex /aggr2/plex0 (online, normal, active)
    RAID group /aggr2/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  --  ----- --- ---- ---- ----  ---  --------------    --------------
      dparity   v0.37   v0    2    5  FC:A  -   FCAL  N/A  70/144384         77/158848
      parity    v0.24   v0    1    8  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.38   v0    2    6  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.25   v0    1    9  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.39   v0    2    7  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.26   v0    1   10  FC:A  -   FCAL  N/A  70/144384         77/158848

node2>
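Before actually growing a production aggregate it can be useful to preview which spares Data ONTAP would pick. In 7-Mode, aggr create and aggr add are documented to accept a -n flag that displays the command the system would run without making any change; a sketch, assuming -n is available in your release:

node2> aggr add aggr2 -n 5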

node2> df -Ah aggr2
Aggregate                total       used      avail   capacity
aggr2                    171MB       76KB      170MB         0%
aggr2/.snapshot         9216KB        0KB     9216KB         0%
node2>
node2> aggr add aggr2 5
Thu May 19 12:46:15 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr2/plex0/rg0/v0.42 Shelf 2 Bay 10 [NETAPP VD-50MB 0042] S/N [28003921] to aggregate aggr2 has completed successfully
Thu May 19 12:46:15 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr2/plex0/rg0/v0.28 Shelf 1 Bay 12 [NETAPP VD-50MB 0042] S/N [28003910] to aggregate aggr2 has completed successfully
Thu May 19 12:46:15 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr2/plex0/rg0/v0.41 Shelf 2 Bay 9 [NETAPP VD-50MB 0042] S/N [28003922] to aggregate aggr2 has completed successfully
Thu May 19 12:46:15 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr2/plex0/rg0/v0.27 Shelf 1 Bay 11 [NETAPP VD-50MB 0042] S/N [28003911] to aggregate aggr2 has completed successfully
Thu May 19 12:46:15 GMT [node2: raid.vol.disk.add.done:notice]: Addition of Disk /aggr2/plex0/rg0/v0.40 Shelf 2 Bay 8 [NETAPP VD-50MB 0042] S/N [28004023] to aggregate aggr2 has completed successfully
Addition of 5 disks to the aggregate has completed.
node2>
node2> aggr status -r aggr2
Aggregate aggr2 (online, raid_dp) (zoned checksums)
  Plex /aggr2/plex0 (online, normal, active)
    RAID group /aggr2/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  --  ----- --- ---- ---- ----  ---  --------------    --------------
      dparity   v0.37   v0    2    5  FC:A  -   FCAL  N/A  70/144384         77/158848
      parity    v0.24   v0    1    8  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.38   v0    2    6  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.25   v0    1    9  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.39   v0    2    7  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.26   v0    1   10  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.40   v0    2    8  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.27   v0    1   11  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.41   v0    2    9  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.28   v0    1   12  FC:A  -   FCAL  N/A  70/144384         77/158848
      data      v0.42   v0    2   10  FC:A  -   FCAL  N/A  70/144384         77/158848
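Disks can also be added by name rather than by count, and steered into a particular RAID group. A sketch with placeholder disk names (v0.43 and v0.44 are not from this session), assuming the -g option of aggr add is available in your release:

node2> aggr add aggr2 -d v0.43 v0.44
node2> aggr add aggr2 -g rg0 2

The first form consumes exactly the named spares; the second asks for two more disks placed specifically into raid group rg0.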

node2>
node2> df -Ah
Aggregate                total       used      avail   capacity
aggr0                    171MB      161MB     9360KB        95%
aggr0/.snapshot         9216KB        0KB     9216KB         0%
aggr2                    384MB       84KB      384MB         0%
aggr2/.snapshot           20MB        0MB       20MB         0%
node2>
node2>

Renaming the aggregate

node2*> aggr rename aggr1 newaggr
Thu May 19 12:48:44 GMT [node2: raid.config.vol.renamed:info]: Aggregate 'aggr1' renamed to 'newaggr'.
'aggr1' renamed to 'newaggr'
node2*> aggr status newaggr
           Aggr State           Status            Options
        newaggr online          raid_dp, aggr
                Volumes: <none>

                Plex /newaggr/plex0: online, normal, active
                    RAID group /newaggr/plex0/rg0: normal
                    RAID group /newaggr/plex0/rg1: normal
node2*>

Changing Raidgroup size

node2*> aggr status -v aggr2
           Aggr State           Status            Options
          aggr2 online          raid_dp, aggr     nosnap=off, raidtype=raid_dp,
                                                  raidsize=16,
                                                  ignore_inconsistent=off,
                                                  snapmirrored=off,
                                                  resyncsnaptime=60,
                                                  fs_size_fixed=off,
                                                  snapshot_autodelete=on,
                                                  lost_write_protect=on
                Volumes: <none>

                Plex /aggr2/plex0: online, normal, active
                    RAID group /aggr2/plex0/rg0: normal
node2*>

node2*> aggr options aggr2 raidsize 6
node2*> aggr status -v aggr2
           Aggr State           Status            Options
          aggr2 online          raid_dp, aggr     nosnap=off, raidtype=raid_dp,
                                                  raidsize=6,
                                                  ignore_inconsistent=off,
                                                  snapmirrored=off,
                                                  resyncsnaptime=60,
                                                  fs_size_fixed=off,
                                                  snapshot_autodelete=on,
                                                  lost_write_protect=on
                Volumes: <none>

                Plex /aggr2/plex0: online, normal, active
                    RAID group /aggr2/plex0/rg0: normal

Note that changing raidsize does not restructure the existing RAID group: rg0 keeps the disks it already has, and the new limit of 6 applies only to disks added from now on.

Other commands

Creating an aggregate by selecting disk speed:
aggr create aggr1 -R 10000 10

Creating an aggregate by selecting disk type:
aggr create aggr1 -T FCAL 10

Creating an aggregate by giving only a disk count:
aggr create aggr1 10

Finding the space allocated to the volumes belonging to an aggregate:
aggr show_space -h aggr1
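The disk-selection options above can be combined with the RAID layout options used earlier on this page. A sketch, with aggr3 as a placeholder aggregate name:

aggr create aggr3 -t raid_dp -r 8 -T FCAL 16

This would build a raid_dp aggregate from 16 FCAL disks, laid out in RAID groups of at most 8 disks each.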
