
Netapp - SAN:

1. create an aggregate -> aggr create <aggrname> <no. of disks>
2. create a volume -> vol create <volname> <aggrname> <size>[k|m|g|t]
3. create a qtree (if the company uses them) -> qtree create /vol/<volname>/<qtreename>
4. create a lun -> lun create -s <size> -t <ostype> /vol/<volname>/<qtreename>/<lunname>
5. zone the server to the array -> refer to the zoning procedure
6. create an initiator group and add the initiators of the server to it -> igroup create -f -t <ostype> <igroupname> <wwpn>; add further initiators with igroup add <igroupname> <wwpn>
7. map the lun to the initiator group -> lun map /vol/<volname>/<qtreename>/<lunname> <igroupname> <lunid>
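
Put together, steps 1-7 might look like this on a 7-Mode filer (the aggregate, volume, qtree, LUN, igroup names and the WWPN below are hypothetical examples):

```
# provision a 100g LUN for a Windows host
aggr create aggr1 -t raid_dp 16
vol create vol01 aggr1 200g
qtree create /vol/vol01/qtree1
lun create -s 100g -t windows /vol/vol01/qtree1/lun1
igroup create -f -t windows win_ig 10:00:00:00:c9:30:15:6a
lun map /vol/vol01/qtree1/lun1 win_ig 0
```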

NAS
netapp - NFS
1. create an aggregate
2. create a volume
3. create a qtree (if the company uses them)
4. go to NFS and click on Add Export -> make sure the security style of the qtree is unix: qtree security /vol/<volname>/<qtreename> unix
5. give the host root and rw access and export the share:
exportfs -io root=192.168.1.2,rw=192.168.1.2 /vol/<volname>/<qtreename>
exportfs -p root=192.168.1.2,rw=192.168.1.2 /vol/<volname>/<qtreename> -> persistent;
writes the rule to the /etc/exports file on the filer
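
Steps 4-5 end to end, including the client-side mount (the filer name, host address, and mount point are examples):

```
# on the filer
qtree security /vol/vol01/qtree1 unix
exportfs -p root=192.168.1.2,rw=192.168.1.2 /vol/vol01/qtree1
# on the host (192.168.1.2)
mount filer1:/vol/vol01/qtree1 /mnt/data
```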

netapp - CIFS
1. create an aggregate
2. create a volume
3. create a qtree (if the company uses them)
4. go to CIFS shares and add a share -> cifs shares -add <sharename> /vol/<volname>/<qtreename>
5. give access to the person or group who needs it (AD authentication)
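
Steps 4-5 as concrete commands (the share name, qtree path, and AD group are examples):

```
cifs shares -add projects /vol/vol01/qtree1
cifs access projects "DOMAIN\proj_admins" "Full Control"
```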

NETAPP Commands:

Aggregate: aggr create aggr_name -t raid_type -r (no. of disks in raid group) (no. of disks in aggregate)
Ex: aggr create aggr0 -t raid_dp -r 10 20

VOLUME: vol create vol_name aggr_name size

Ex: vol create vol01 aggr0 10g

Qtrees: qtree create /vol/vol01/qtree1

LUN: lun create -s 10g -t ostype /vol/vol01/qtree1/lun1


LUN map: lun map -f /vol/vol01/qtree1/lun1 igroup_name lun_id

Snap: snap create vol01 snapshot_name

snap list vol01

snap list -A aggr0
snap delete vol01 snap1
snap sched vol01 0 2 8@3,6,9,12 -> keep 0 weekly, 2 nightly, and 8 hourly snapshots (hourly taken at hours 3, 6, 9, 12)
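
A typical lifecycle of the snapshot commands above (the snapshot name is an example):

```
snap create vol01 before_upgrade     # take a manual snapshot
snap list vol01                      # confirm it exists
snap sched vol01 0 2 8@3,6,9,12      # 0 weekly, 2 nightly, 8 hourly snapshots
snap delete vol01 before_upgrade     # remove the manual snapshot when done
```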

Replace a failed disk -> disk replace start old_disk spare_disk

Disk firmware upgrade -> disk fw_update

FlexClone -> vol clone create clone_name -b parent_vol_name

SnapLock
vol create trad_vol_name -L snaplock_type no. of disks[@disk_size] (traditional volume)
aggr create aggr_name -L snaplock_type no. of disks[@disk_size]
(snaplock_type is compliance or enterprise)
Snap Mirror: migrating from a FAS3020 to a FAS3070

1. filer1 and filer2 must be able to communicate; for that we need to set up:

license add <snapmirror_license_code> -> on both source and destination

options snapmirror.access -> on the source, names the destination filer(s)

/etc/snapmirror.allow -> on the source, lists destinations allowed to pull

/etc/snapmirror.conf -> on the destination, defines the relationship and schedule

2. vol status -- identify the volume

3. qtree status -- identify the qtree

4. Identify the volume where you want to copy the data and make sure it has enough

space for the qtree (use quota report on the source to know the size of the qtree if needed).

5. for qtree SnapMirror, the destination qtree must not already exist; snapmirror initialize creates it, so only the destination volume needs to be in place

6. go on to dest filer and initialize SM

7. snapmirror initialize -s filer1:/vol/vol01/qtree1 filer2:/vol/vol01/qtree1

8. once this is done data starts migrating

9. to know the status -> snapmirror status

10. once the session status shows snapmirrored, the base data has been migrated

11. on the server side, ask the customer to gracefully shut down the applications so that we can

do a last update. Once we get confirmation that all the apps are shut down,

remove all the NFS and CIFS shares from the source filer.

12. snapmirror update filer2:/vol/vol01/qtree1

13. check the status -> snapmirror status

14. once the status is snapmirrored, quiesce the I/O:

15. snapmirror quiesce filer2:/vol/vol01/qtree1

16. break the SnapMirror relationship: snapmirror break filer2:/vol/vol01/qtree1
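
The destination-side commands from steps 7-16 condense to this sequence (filer names and paths as in the steps above):

```
snapmirror initialize -s filer1:/vol/vol01/qtree1 filer2:/vol/vol01/qtree1
snapmirror status                              # wait until state shows snapmirrored
# applications shut down, shares removed from the source filer, then:
snapmirror update filer2:/vol/vol01/qtree1     # final incremental transfer
snapmirror quiesce filer2:/vol/vol01/qtree1
snapmirror break filer2:/vol/vol01/qtree1      # destination becomes writable
```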


DISK allocation:

Windows:
1. Log on to the Windows host.
2. Open Computer Management.
3. Right-click Disk Management.
4. Rescan disks and you will find the new disk.
Solaris:
1. Log on to the Solaris host.
2. devfsadm (Emulex) or cfgadm (Qlogic) to discover the new device.
3. Run the format command.
4. Label the disk.
5. vxdctl enable -> make the disk visible to the Veritas (VxVM) suite
6. /etc/vx/bin/vxdisksetup -i disk_name -> initialize the disk for VxVM
7. vxdisk list -> verify.
AIX:
1. Log on to AIX host.
2. cfgmgr command.
3. lspv to see new disk.
LINUX:
1. Log on to the Linux box.
2. Reboot the system.
3. Once the system is up and running you should see the new disk.
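
On most modern Linux systems the reboot in step 2 can be avoided by rescanning the SCSI bus online (requires root; host adapter numbering varies per system):

```
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"       # rescan all channels/targets/LUNs
done
cat /proc/partitions               # the new disk should now be listed
```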
NFS and CIFS cut over:

NFS process:

Remove: exportfs -> list the current NFS exports

exportfs -u /vol/vol01 -> unexport that path
exportfs -ua -> unexport all shares

Add: exportfs -io rw=srv1,root=srv1 /vol/vol01/qtree1 -> temporary export (not persistent)

exportfs -p rw=srv1,root=srv1 /vol/vol01/qtree1 -> persistent; written to /etc/exports
exportfs -a -> (re)export everything listed in /etc/exports

CIFS process:

Remove: cifs shares -> list the CIFS shares

cifs sessions -> check whether users are still connected
cifs shares -delete sharename -> remove the share
cifs access -delete sharename group_name -> remove a group's access to the share

Add: cifs shares -add sharename /vol/vol01/qtree1

cifs access sharename group_name "Full Control" (OR)
cifs access sharename auth_users "Full Control"
Solaris commands:

1. vxdg init dg_name disk01=c0t0d0

2. vxassist -g dg_name make vol_name 10g
3. mkfs -F vxfs /dev/vx/rdsk/dg_name/vol_name
4. mkdir /mount_point
5. mount -F vxfs /dev/vx/dsk/dg_name/vol_name /mount_point

Logs -> /var/adm/messages
h/w changes -> reboot -- -r (reconfiguration reboot)
mounts -> /etc/vfstab
installed packages -> pkginfo
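
The VxVM steps above as a concrete run (the disk, disk group, volume, and mount names are examples):

```
vxdg init datadg disk01=c0t0d0               # create the disk group
vxassist -g datadg make datavol 10g          # create a 10g volume in it
mkfs -F vxfs /dev/vx/rdsk/datadg/datavol     # file system on the raw device
mkdir /apps
mount -F vxfs /dev/vx/dsk/datadg/datavol /apps   # mount the block device
```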

AIX Commands:

1. mkvg -y vol_group_name -s 16 hdisk_name

2. mklv -y log_vol_name vol_group_name <no. of LPs>
3. crfs -v jfs -d log_vol_name -m /mount_point (-v = file system type, -d = device/logical volume, -m = mount point)
4. mount /mount_point

varyonvg vol_group_name -> activate the volume group

varyoffvg vol_group_name -> deactivate the volume group

Logs -> /etc/syslog.conf (defines where syslog messages go)
h/w changes -> cfgmgr
mount points -> /etc/filesystems
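
The AIX steps with concrete (example) names: a volume group with 16 MB physical partitions on hdisk2, a 100-LP logical volume, and a jfs file system mounted at /apps:

```
mkvg -y datavg -s 16 hdisk2       # volume group, 16 MB PP size
mklv -y datalv datavg 100         # logical volume of 100 LPs
crfs -v jfs -d datalv -m /apps    # jfs file system on the LV
mount /apps
```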

Linux:

Logs -> /var/log/messages
Services -> /etc/init.d
n/w -> /etc/sysconfig/network-scripts
