
NetApp Platform for ESX Dev

Needs expression and NetApp answer

• NetApp has committed on performance based on these metrics.

Configuration   IO/s (Base / Upgrade)   Capacity TB (Base / Upgrade)   Virtual Machines (Base / Upgrade)   ESX Servers (Base / Upgrade)
low             1500 / 3000             2 / 4                          50 / 100                            4+1 / 7+1
medium          3000 / 6000             4 / 8                          100 / 200                           7+1 / 2*(7+1)
high            6000 / 12000            8 / 16                         200 / 400                           2*(7+1) / 4*(7+1)
ultra high      12000 / 24000           16 / 32                        400 / 800                           4*(7+1) / 8*(7+1)

• The maximum size of one ESX farm will be 7 active ESX servers plus 1 spare, and the maximum number of VMs per farm is 100.

• The maximum size for one VM is 40GB.

• All these numbers were fixed during the first step of needs consolidation from all sites; our future standards must be in line with that.

Configuration         FAS model   #shelves   #disks   Disk type
Low                   2050C       2          28       144GB 15Krpm
Low upgraded          2050C       3          42       144GB 15Krpm
Medium                3020HA      4          56       144GB 15Krpm
Medium upgraded       3020HA      6          84       144GB 15Krpm
High                  3140A       6          84       144GB 15Krpm
High upgraded         3140A       12         168      144GB 15Krpm
Ultra High            3170A       12         168      144GB 15Krpm
Ultra High upgraded   3170A       23         322      144GB 15Krpm
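As a quick cross-check (arithmetic added here, not part of the original deck): each configuration allocates about 40 GB of capacity per VM, for example the High base row gives 8 TB / 200 VMs = 40 GB, which is consistent with the 40 GB maximum VM size stated above.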
Hardware Description and configuration limitations
Configuration   FAS model   #shelves   #disks   Disk type
High            3140A       6          84       144GB 15Krpm
High upgraded   3140A       12         168      144GB 15Krpm

Configuration   IO/s    Capacity TB   # VMs max   ESX servers
High            6000    8             200         2*(7+1)
High upgraded   12000   16            400         4*(7+1)

The DEV platform is based on a High configuration, but the ESX farm is not compliant with the standard.
Thus, although the limitations remain the same, we had to change the logical storage configuration compared to the standards we have agreed with the community.
Front end view for max config 400VM package racking plan (DEV Racks, version 01.01, dated 2008-10-10, author Stefan ANDRIEUX)

[Rack elevation diagram: two 47U racks holding Catalyst 4948-10GE switches at the top, the ESX servers (dual 1000T LAN adapters, iLO2) in the middle, and the Development NetApp array (controller ports e0a/e0b and 0a-0d, ESH4 disk shelves) at the bottom; the drawing itself is not reproduced here.]
Filers Specifications

 Cluster is FASPARTIGESX901A and FASPARTIGESX901B

 Located at Tigery (as mentioned in TIG)

 No SnapMirror replication nor SnapVault

 No CIFS; the protocol used is NFS v3 (requires a licence)

 Deduplication activated for storage space savings (requires the NearStore and A-SIS licences)

 Backup is done at server level with NetWorker

 No need for an antivirus server
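For illustration (these commands are not in the original document), the protocol and licence state described above could be checked on each head with standard Data ONTAP 7-mode commands; the exact licence names (nfs, a_sis, nearstore_option) are assumptions to verify on the filer:

Filer> license                     (list installed licences, e.g. nfs, a_sis, nearstore_option)
Filer> options nfs.v3.enable on    (serve NFS v3 to the ESX hosts)
Filer> nfs on                      (start the NFS server if it is not already running)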


Network specifications

 Admin network on Public SRV, managed by a single VIF of 2 links

 Private VLAN dedicated to NFS traffic, managed by a single VIF made of 2 multi-VIFs of 2 links each

 RLM on the standard RILO network, to be migrated to the DMZ in the short term

FASPARTIGESX901A/B interface layout:
e0a + e0b -> vif-adm (single)
e4a + e4b -> vif-nfs1 (multi)
e4c + e4d -> vif-nfs2 (multi)
vif-nfs1 + vif-nfs2 -> vif-nfs (single)
Network configuration
FASPARTIGESX901A
vif name   vif type   interfaces          dns host              dns alias          IP              comments
vif-adm    single     e0a+e0b             faspartigesx01a-adm   faspartigesx901a   184.13.33.160
vif-nfs1   multi      e4a+e4b             N.A.
vif-nfs2   multi      e4c+e4d             N.A.
vif-nfs    single     vif-nfs1+vif-nfs2   faspartigesx01a-nfs1                     184.13.96.12    IP in rc file
                                          faspartigesx01a-nfs2                     184.13.100.12   Alias in rc file
RLM                                       rc-faspartigesx901a                      184.13.135.1

FASPARTIGESX901B
vif name   vif type   interfaces          dns host              dns alias          IP              comments
vif-adm    single     e0a+e0b             faspartigesx01b-adm   faspartigesx901b   184.13.33.161
vif-nfs1   multi      e4a+e4b             N.A.
vif-nfs2   multi      e4c+e4d             N.A.
vif-nfs    single     vif-nfs1+vif-nfs2   faspartigesx01b-nfs1                     184.13.96.13    Alias in rc file
                                          faspartigesx01b-nfs2                     184.13.100.13   IP in rc file
RLM                                       rc-faspartigesx901b                      184.13.135.2

/etc/rc file example


hostname FASPARTIGESX901A
# Admin VIF: single-mode over e0a and e0b
vif create single vif-adm e0a e0b
# NFS VIFs: two multi-mode (round-robin) VIFs of 2 links each, combined into one single-mode VIF
vif create multi vif-nfs1 -b rr e4b e4a
vif create multi vif-nfs2 -b rr e4d e4c
vif create single vif-nfs vif-nfs1 vif-nfs2
# Bring the VIFs up with negotiated failover (nfo) towards the partner's matching VIF
ifconfig vif-adm `hostname`-vifadm mediatype auto netmask 255.255.248.0 partner vif-adm nfo *
ifconfig vif-nfs `hostname`-vif-nfs mediatype auto netmask 255.255.248.0 partner vif-nfs nfo *
# Second NFS address (alias) on the private NFS VLAN
ifconfig vif-nfs alias 184.13.100.12 netmask 255.255.248.0
route add default 184.13.32.1 1
routed on
options dns.domainname fr.world.socgen
options dns.enable on
options nis.enable off
savecore

* Works with the option cf.takeover.on_network_interface_failure.policy set to "any_nic", which enables cluster takeover when all links in a VIF are down.
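A minimal sketch of how this could be applied on each head (standard 7-mode options commands; to be validated against the installed Data ONTAP release):

Filer> options cf.takeover.on_network_interface_failure on
Filer> options cf.takeover.on_network_interface_failure.policy any_nic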
Logical storage configuration

FASPARTIGESX901A                          FASPARTIGESX901B
Aggregate: aggr0devt05, raidsize 20       Aggregate: aggr1devt05, raidsize 20

Volumes on FASPARTIGESX901A:
Name                     Role        Size
/vol/devt05e901asw01     swap        240GB
/vol/devt05e901ads01     datastore   480GB
/vol/devt05e901ads02     datastore   480GB
/vol/devt05e901ads03     datastore   480GB
/vol/devt05e901ads04     datastore   480GB

Volumes on FASPARTIGESX901B:
Name                     Role        Size
/vol/devt05e901bds01     datastore   480GB
/vol/devt05e901bds02     datastore   480GB
/vol/devt05e901bds03     datastore   480GB
/vol/devt05e901bds04     datastore   480GB

NFS clients (same for both filers):
IP1              IP2
vmpard13-vmk1    vmpard13-vmk2
vmpard14-vmk1    vmpard14-vmk2
vmpard15-vmk1    vmpard15-vmk2
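For illustration only (not part of the original configuration), this layout maps to standard 7-mode commands along the following lines; the disk count placeholder and the export options are assumptions to adapt:

FASPARTIGESX901A> aggr create aggr0devt05 -r 20 <number_of_disks>
FASPARTIGESX901A> vol create devt05e901asw01 aggr0devt05 240g
FASPARTIGESX901A> vol create devt05e901ads01 aggr0devt05 480g
FASPARTIGESX901A> exportfs -p rw=vmpard13-vmk1:vmpard13-vmk2,root=vmpard13-vmk1:vmpard13-vmk2 /vol/devt05e901ads01

The remaining datastore volumes, the vmpard14/vmpard15 VMkernel interfaces and FASPARTIGESX901B follow the same pattern.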
Specific configuration

 Disable snapshot
Filer> snap sched fasXXvolYYn 0 0 0
Filer> snap reserve fasXXvolYYn 0

 Volume Tuning
Filer> vol options <vol_name> no_atime_update on

 NFS options
Filer> options nfs.tcp.recvwindowsize 64240

 Enable Deduplication
Filer> sis on /vol/fasXXvolYYn
Deduplication schedule:
 Once every day, starting at midnight.
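One way to express that schedule with the 7-mode sis scheduler (a sketch; the volume name is taken from the tables above and should be adapted for each volume):

Filer> sis config -s sun-sat@0 /vol/devt05e901ads01     (run deduplication every day at 00:00)
Filer> sis status /vol/devt05e901ads01                  (check deduplication state and progress)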
How to administer the filers

 Ssh from srvparnasadm01


 Known as FASPARTIGESX901A/B
 Known as TIGESX901A/B

 ssh to RLM interface (rc-<filername>)


 Use your Unix login to connect
 You first have to change your password (default is 1NFS<login_name>)

 With FilerView
 Launch a web browser on http://<filer>-adm/na_admin
 Connect with your login
 Click on FilerView
 Connect with your login
DFM
 DFM groups
 Filer group is ESXDEV

 Filer configuration group is /Config_groups_ESX/Tigery /ESXDEV

 DFM Best practices document available


Available documents
 Installation guide

 Standardization document

 DFM document

 SnapMirror document

To come:

 NAS survival Guide specific section

 NAS/ESX capacity planning table

 Validation tests results
