
Best Practices for Running VMware vSphere on Network Attached Storage
WHITE PAPER
Table of Contents
Introduction
Background
Overview of the Steps to Provision NFS Datastores
Issues to Consider for High Availability
Security Considerations
Additional Attributes of NFS Storage
    Thin Provisioning
    De-duplication
    Backup and Restore Granularity
Summary of Best Practices
    Networking Settings
    Datastore Settings
    Filer Settings
    ESX Server Advanced Settings and Timeout settings
Previously thought to be Best Practices
Conclusion
Appendix 1: Troubleshooting Tips
Appendix 2: NFS Advanced Options
Appendix 3: Solutions to common problems
About the Author
Introduction
The significant presence of Network File System (NFS) storage in the datacenter today, as well as the lower cost per port of IP-based storage, has led many people to deploy virtualization environments with Network Attached Storage (NAS) shared storage resources.
As adoption of virtualization increases, so does the deployment of VMware ESX servers that leverage NAS. For the purpose of clarity, both NFS and NAS refer to the same type of storage protocol, and the two terms are used interchangeably throughout this paper.
The capabilities of VMware vSphere on NFS are very similar to those of vSphere on block-based storage. VMware offers support for almost all features and functions on NFS as it does for vSphere on SAN. Running vSphere on NFS is a very viable option for many virtualization deployments, as it offers strong performance and stability if configured correctly.
This paper provides an overview of the considerations and best practices for deployment of VMware vSphere on NFS-based storage. It also examines the myths that exist and attempts to dispel confusion as to when NFS should and should not be used with vSphere. It also provides some troubleshooting tips and tricks.
Background
VMware introduced support for IP-based storage in release 3 of the ESX server. Prior to that release, the only option for shared storage pools was Fibre Channel (FC). With VI3, both iSCSI and NFS storage were introduced as storage resources that could be shared across a cluster of ESX servers. The addition of new choices has led a number of people to ask, "What is the best storage protocol choice on which to deploy a virtualization project?" The answer to that question has been the subject of much debate, and there seems to be no single correct answer.
The considerations for this choice tend to hinge on the issues of cost, performance, availability, and ease of manageability. However, an additional factor should also be the legacy environment and the storage administrator's familiarity with one protocol vs. the other, based on what is already installed.
The bottom line is, rather than ask which storage protocol to deploy virtualization on, the questions should be, "Which virtualization solution enables one to leverage multiple storage protocols for their virtualization environment?" and "Which will give them the best ability to move virtual machines from one storage pool to another, regardless of what storage protocol it uses, without downtime or application disruption?" Once those questions are considered, the clear answer is VMware vSphere.
However, to investigate the options a bit further, the performance of FC is perceived as being a bit more industrial strength than IP-based storage. However, for most virtualization environments, NFS and iSCSI provide suitable I/O performance. The comparison has been the subject of many papers and projects. One posted on VMTN is located at: http://www.vmware.com/files/pdf/storage_protocol_perf.pdf
The general conclusion reached by the above paper is that for most workloads the performance is similar, with a slight increase in ESX Server CPU overhead per transaction for NFS and a bit more for software iSCSI. For most virtualization environments, the end user might not even be able to detect the performance delta from one virtual machine running on IP-based storage vs. another on FC storage.
The more important consideration that often leads people to choose NFS storage for their virtualization environment is the ease of provisioning and maintaining NFS shared storage pools. NFS storage is often less costly than FC storage to set up and maintain. For this reason, NFS tends to be the choice taken by small to medium businesses that are deploying virtualization, as well as the choice for deployment of virtual desktop infrastructures. This paper investigates the trade-offs and considerations in more detail.
Overview of the Steps to Provision NFS Datastores
Before NFS storage can be addressed by an ESX server, the following issues need to be addressed:
1. A virtual switch must be configured for IP-based storage.
2. The ESX host needs to be configured to enable its NFS client.
3. The NFS storage server needs to have been configured to export a mount point that is accessible to the ESX server on a trusted network.
For more details on NFS storage options and setup, consult the best practices for VMware provided by the storage vendor, for example:
EMC with VMware vSphere 4 Applied Best Practices
NetApp and VMware vSphere Storage Best Practices
Regarding item one above, to configure the vSwitch for IP storage access you will need to create a new vSwitch under the ESX server Configuration > Networking tab in vCenter. Indicating that it is a VMkernel type connection will automatically add it to the vSwitch. You will then need to populate the network access information.
Regarding item two above, to configure the ESX host for running its NFS client, you will need to open a firewall port for the NFS client. To do this, select the Configuration tab for the ESX Server in VirtualCenter, click on Security Profile (listed under software options), and then check the box for NFS Client listed under the remote access choices in the Firewall Properties screen.
With these items addressed, an NFS datastore can now be added to the ESX server following the same process used to configure block-based (FC or iSCSI) datastores.
On the ESX Server Configuration tab in VMware VirtualCenter, select Storage (listed under hardware options) and then click the Add button.
On the screen for selecting the storage type, select Network File System, and on the next screen enter the IP address of the NFS server, the mount point for the specific destination on that server, and the desired name for that new datastore.
If everything is completed correctly, the new NFS datastore will show up in the refreshed list of datastores available for that ESX server.
The main differences in provisioning an NFS datastore compared to block-based storage datastores are:
For NFS there are fewer screens to navigate through, but more data entry is required than for block-based storage.
The NFS device needs to be specified via an IP address and folder (mount point) on that filer, rather than a pick list of options to choose from.
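The same tasks can also be performed from the ESX service console. The following is a minimal sketch using the classic esxcfg-* commands; the vSwitch name, uplink, IP addresses, port group name, export path, and datastore label are placeholder values for illustration only and should be replaced with values from your own environment (the firewall service name may vary by release; esxcfg-firewall -s lists the known services).

    # Create a vSwitch with an uplink and a VMkernel port group for NFS traffic
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -A NFS vSwitch1
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 NFS

    # Open the firewall for the NFS client (item two above)
    esxcfg-firewall -e nfsClient

    # Mount the NFS export as a datastore named nfs-ds1, then verify
    esxcfg-nas -a -o 192.168.10.20 -s /vol/vm_datastore nfs-ds1
    esxcfg-nas -l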
Issues to Consider for High Availability
To achieve high availability, the LAN on which the NFS traffic will run needs to be designed with availability, downtime avoidance, isolation, and no single point of failure in mind.
Multiple administrators need to be involved in designing for high availability: both the virtualization administrator and the network administrator. If done correctly, the failover capabilities of an IP-based storage network can be as robust as those of an FC storage network.
This section outlines these steps and investigates three levels of high availability available in an NFS storage configuration for vSphere.
Terminology
First, it is important to define a few terms that often cause confusion in the discussion of IP-based storage networking. Some common terms and their definitions are as follows:
NIC/adapter/port/link - Endpoints of a network connection.
Teamed/trunked/bonded/bundled ports - Pairing of two connections that are treated as one connection by a network switch or server. The result of this pairing is also referred to as an EtherChannel. This pairing of connections is defined as Link Aggregation in the 802.3ad networking specification.
Cross-stack EtherChannel - A pairing of ports that can span two physical LAN switches managed as one logical switch. This is only an option with the limited number of switches available today that support it.
IP hash - A method of switching to an alternate path based on a hash of the IP addresses of both endpoints, for multiple connections.
Virtual IP (VIF) - An interface used by the NAS device to present the same IP address out of two ports from that single array.
Avoiding single points of failure at the NIC, switch, and filer levels
The first level of high availability (HA) is to avoid a single point of failure such as a NIC card in an ESX server, or the cable between the NIC card and the switch. This is addressed by having two NICs connected to the same LAN switch, configured as a team at the switch, with IP hash failover enabled at the ESX server.
The second level of HA is to avoid a single point of failure such as the loss of the switch to which the ESX server connects. With this solution, one has four NIC cards in the ESX server configured with IP hash failover, and two pairs going to separate LAN switches, with each pair configured as a team at the respective LAN switch.
The third level of HA protects against the loss of a filer (or NAS head). With storage vendors that provide clustered NAS heads, where one head can take over for another in the event of a failure, one can configure the LAN such that downtime can be avoided in the event of losing a single filer or NAS head.
An even higher level of performance and HA can build on the previous HA level with the addition of cross-stack EtherChannel capable switches. With certain network switches, it is possible to team ports across two separate physical switches that are managed as one logical switch. This provides additional resilience as well as some performance optimization: one can get HA with fewer NICs, or have more paths available across which to distribute load sharing.
Caveat: NIC teaming provides failover but not load-balanced performance (in the common case of a single NAS datastore)
It is also important to understand that there is only one active pipe for the connection between the ESX server and a single storage target (LUN or mount point). This means that although there may be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide. To leverage more available bandwidth when an ESX server has multiple connections from the server to the storage targets, one would need to configure multiple datastores, with each datastore using separate connections between the server and the storage. This is where one often runs into the distinction between load balancing and load sharing. The configuration of traffic spread across two or more datastores configured on separate connections between the ESX server and the storage array is load sharing.
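As an illustration of load sharing, the following sketch mounts two datastores against two different filer IP addresses (for example, two VIFs on the same array), so that each datastore resolves to a different source/destination IP pair and therefore a different physical path under the IP hash policy. The addresses, export paths, and datastore names are placeholders, not recommendations.

    # Each datastore targets a different filer IP, spreading traffic across links
    esxcfg-nas -a -o 192.168.10.20 -s /vol/vm_ds1 nfs-ds1
    esxcfg-nas -a -o 192.168.11.20 -s /vol/vm_ds2 nfs-ds2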
Security Considerations
The VMware vSphere implementation of NFS supports NFS version 3 over TCP. There is currently no support for NFS version 2, UDP, or CIFS/SMB. Kerberos is also not supported in ESX Server 4, and as such the traffic is not encrypted: storage traffic is transmitted as clear text across the LAN. Therefore, it is considered a best practice to use NFS storage on trusted networks only, and to isolate the traffic on separate physical switches or leverage a private VLAN.
Another security concern is that the ESX Server must mount the NFS server with root access. This raises some concerns about hackers gaining access to the NFS server. To address the concern, it is a best practice to use either a dedicated LAN or a VLAN to provide protection and isolation.
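One way to limit that root-access exposure is to restrict the export to the VMkernel addresses of the ESX hosts rather than exporting to the world. The line below is a minimal sketch for a Linux NFS server's /etc/exports; the export path and subnet are placeholders, and NAS filer syntax differs by vendor (see the NetApp note in Appendix 1).

    /vol/vm_datastore 192.168.10.0/24(rw,no_root_squash,sync)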
Additional Attributes of NFS Storage
There are several additional options to consider when using NFS as a shared storage pool for virtualization. Some additional considerations are thin provisioning, de-duplication, and the ease of backup and restore of virtual machines, virtual disks, and even files on a virtual disk via array-based snapshots. The next section focuses on the options available and the criteria that might make one choice a better selection than the alternative.
Thin Provisioning
Virtual disks (VMDKs) created on NFS datastores are in thin-provisioned format by default. This capability offers better disk utilization of the underlying storage capacity in that it removes what is often considered wasted disk space. For the purpose of this paper, wasted disk space is defined as space that is allocated but not used. Thin-provisioning technology removes a significant amount of this wasted disk space.
On NFS datastores the default virtual disk format is thin, so less storage space is allocated than would be needed for the same set of virtual disks provisioned in thick format (thin-format virtual disks can also be provisioned on block-based VMFS datastores). VMware vCenter now provides support for both the creation of thin virtual disks and the monitoring of datastores that may be over-committed; it can trigger alarms for various thresholds of either space used approaching capacity or provisioned space outpacing capacity by varying degrees of over-commitment.
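A quick way to see thin provisioning at work from the service console is to compare the logical size of a virtual disk with the space it actually consumes on the NFS datastore. The datastore and virtual machine names below are placeholders.

    # Logical (provisioned) size of the virtual disk
    ls -lh /vmfs/volumes/nfs-ds1/vm1/vm1-flat.vmdk
    # Space actually consumed on the NFS export
    du -h /vmfs/volumes/nfs-ds1/vm1/vm1-flat.vmdk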
De-duplication
Some NAS storage vendors offer data de-duplication features that can greatly reduce the amount of storage space required. It is important to distinguish between in-place de-duplication and de-duplication of backup streams. Both offer significant savings in space requirements, but in-place de-duplication seems to be far more significant for virtualization environments. Some customers have been able to reduce their storage footprint by up to 75 percent with the use of in-place de-duplication technology.
Backup and Restore Granularity
The backup of a VMware virtual environment can be done in several ways. Some common methods are to use VMware Consolidated Backup (VCB), to place agents in each guest OS, or to leverage array-based snapshot technology. However, the process of restoring the virtual machine gets to be a bit more complicated, as there are often many levels of granularity that are requested for restore. Some recovery solutions might require the entire datastore to be restored, while others might require a specific virtual machine, or even a virtual disk for a virtual machine, to be recovered. The most granular request is to have only a specific file on a virtual disk for a given virtual machine restored.
With NFS and array-based snapshots, one has the greatest ease and flexibility in what level of granularity can be restored. With an array-based snapshot of an NFS datastore, one can quickly mount a point-in-time copy of the entire NFS datastore, and then selectively extract any level of granularity required. Although this does open up a bit of a security risk, NFS does provide one of the most flexible and efficient restore-from-backup options available today. For this reason, NFS earns high marks for ease of backup and restore capability.
Summary of Best Practices
To address the security concerns and provide optimal performance, VMware provides the following best practices:
Networking Settings
To isolate storage traffic from other networking traffic, it is considered best practice to use either dedicated switches or VLANs for your NFS and iSCSI ESX server traffic. The minimum NIC speed should be 1GbE. In VMware vSphere, the use of 10GbE is supported; check the VMware HCL to confirm which models are supported.
It is important not to over-subscribe the network connection between the LAN switch and the storage array. The retransmission of dropped packets can further degrade the performance of an already heavily congested network fabric.
Datastore Settings
The default setting for the maximum number of mount points/datastores an ESX server can concurrently mount is eight, although the limit can be increased to 64 in the current release. If you increase NFS.MaxVolumes above the default setting of eight, make sure to also increase Net.TcpipHeapSize. If 32 mount points are used, increase the TCP/IP heap to 30MB.
Best Practices for Running VMware vSphere on NFS
T ECHNI CAL WHI T E PAPER / 9
TCP/IP Heap Size
The safest way to calculate the TCP/IP heap size, given the number of NFS volumes configured, is to linearly scale the default values. For 8 NFS volumes the default min/max sizes of the TCP/IP heap are 6MB/30MB respectively. This means that a host configured with 64 NFS volumes should have the min/max TCP/IP heap sizes (Net.TcpipHeapSize/Net.TcpipHeapMax) set to 48MB/240MB.
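These values can be changed in the vSphere Client under Advanced Settings, or from the service console. A minimal sketch for a host that will mount 64 NFS volumes, following the linear scaling rule above (esxcfg-advcfg is the classic service console utility; verify the permitted range for your ESX release, and note that the heap changes require a host reboot to take effect):

    esxcfg-advcfg -s 64 /NFS/MaxVolumes
    esxcfg-advcfg -s 48 /Net/TcpipHeapSize
    esxcfg-advcfg -s 240 /Net/TcpipHeapMax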
Filer Settings
In a VMware cluster, it is important to mount datastores the same way on all ESX servers: the same host (hostname/FQDN/IP), export, and datastore name. Also make sure NFS server settings are persistent on the NFS filer (use FilerView, exportfs -p, or edit /etc/exports).
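A quick way to confirm consistency, assuming the classic service console tools, is to list the NAS mounts on each host in the cluster and compare the output; the server address, export, and label should match exactly on every host:

    esxcfg-nas -l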
ESX Server Advanced Settings and Timeout settings
When configuring NIC teaming, it is considered best practice to set the NIC teaming failback option to "No." If there is intermittent behavior in the network, this will prevent flip-flopping of the NIC card being used.
When setting up VMware HA, it is a good starting point to also set the following ESX server timeouts and settings under the ESX server Advanced Settings tab:
NFS.HeartbeatFrequency = 12
NFS.HeartbeatTimeout = 5
NFS.HeartbeatMaxFailures = 10
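The same values can be applied from the service console with esxcfg-advcfg; this is a sketch of the equivalent commands for the settings above:

    esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
    esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
    esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures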
NFS Heartbeats
NFS heartbeats are used to determine whether an NFS volume is still available. NFS heartbeats are actually GETATTR requests on the root file handle of the NFS volume. There is a system world that runs every NFS.HeartbeatFrequency seconds to check whether it needs to issue heartbeat requests for any of the NFS volumes. If a volume is marked available, a heartbeat will only be issued if it has been more than NFS.HeartbeatDelta seconds since a successful GETATTR (not necessarily a heartbeat GETATTR) was issued for that volume. The NFS heartbeat world will always issue heartbeats for NFS volumes that are marked unavailable. Here is the formula to calculate how long it can take ESX to mark an NFS volume as unavailable:
RoundUp(NFS.HeartbeatDelta, NFS.HeartbeatFrequency) + (NFS.HeartbeatFrequency * (NFS.HeartbeatMaxFailures - 1)) + NFS.HeartbeatTimeout
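For example, using the recommended values above and assuming the default NFS.HeartbeatDelta of 5 seconds, RoundUp(5, 12) + (12 * (10 - 1)) + 5 = 12 + 108 + 5 = 125 seconds before an unreachable volume is marked unavailable.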
Once a volume is back up, it can take NFS.HeartbeatFrequency seconds before ESX marks the volume as available. See Appendix 2 for more details on these settings.
Previously thought to be Best Practices
Some early adopters of VI3 on NFS established practices that are no longer viewed favorably. They are:
1. Turning off NFS locking within the ESX server.
2. Not placing virtual machine swap space on NFS storage.
Both of these have been debunked, and the following sections provide more details as to why they are no longer considered best practices.
NFS Locking
NFS locking on ESX does not use the NLM protocol. VMware has established its own locking protocol. These NFS locks are implemented by creating lock files on the NFS server. Lock files are named .lck-<fileid>, where <fileid> is the value of the fileid field returned from a GETATTR request for the file being locked. Once a lock file is created, VMware periodically (every NFS.DiskFileLockUpdateFreq seconds) sends updates to the lock file to let the other ESX hosts know that the lock is still active. The lock file updates generate small (84-byte) WRITE requests to the NFS server. Changing any of the NFS locking parameters will change how long it takes to recover stale locks. The following formula can be used to calculate how long it takes to recover a stale NFS lock:
(NFS.DiskFileLockUpdateFreq * NFS.LockRenewMaxFailureNumber) + NFS.LockUpdateTimeout
If any of these parameters are modified, it is very important that all ESX hosts in the cluster use identical settings. Having inconsistent NFS lock settings across ESX hosts can result in data corruption!
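As a worked example, assuming the shipping defaults of NFS.DiskFileLockUpdateFreq = 10, NFS.LockRenewMaxFailureNumber = 3, and NFS.LockUpdateTimeout = 5 (values worth confirming against your ESX release), a stale lock is recovered after (10 * 3) + 5 = 35 seconds.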
In vSphere the option to change the NFS.LockDisable setting has been removed. This was done to remove the temptation to disable the VMware locking mechanism for NFS, so it is no longer an option to turn it off in vSphere.
Virtual Machine Swap Space Location
The suggestion not to place virtual machine swap space on the NFS datastore was something that appeared in early training materials for VI3 and in some Knowledge Base articles. However, this was determined to be a bit too conservative and has been removed from the majority of the materials. The current position states that it is far better to address the performance concern by not overcommitting the memory of the virtual machines that are on an NFS datastore than to move the swap space to other storage and lose the benefit of encapsulation.
The performance impact of having the virtual machine swap space on separate storage was not as big a difference as originally thought. So keeping the virtual machine swap space on the NFS datastore is now considered to be the best practice.
Conclusion
In short, Network Attached Storage has matured significantly in recent years, and it offers a solid availability and high-performance foundation for deployment with virtualization environments. Following the best practices outlined in this paper will help ensure successful deployments of vSphere on NFS. Many people have deployed both vSphere and VI3 on NFS and are very pleased with the results they are experiencing. NFS offers a very solid storage platform for virtualization.
Both performance and availability are holding up to expectations, and as best practices are further defined, the experience of running VMware technology on NFS is proving it to be a solid choice of storage protocol. Several storage technology partners are working closely with VMware to further define the best practices and extend the benefits for customers choosing to deploy VMware vSphere on this storage protocol option.
Appendix 1: Troubleshooting Tips
There are many possible causes for Network Attached Storage not being available on an ESX server. It is important to troubleshoot the symptoms and isolate the probable cause in a systematic manner. Rule out issues at the NFS storage server first, and then verify the list of items that must be set up correctly prior to adding an NFS datastore. The following items and the table below are intended to help troubleshoot common areas of disconnect.
If you do not have a connection from the Service Console to the NAS device, you will get the following error message:
The mount request was denied by the NFS server. Check that the export exists and that the client can access it.
If you are trying to mount a share over UDP rather than TCP, you will get a descriptive message telling you that this is not allowed. Similarly, if you are trying to mount a share using a version of NFS other than version 3, you will also get a very descriptive message.
If you do not have the no_root_squash option set, you will get the following error when you try to create a virtual machine on the NAS datastore:
Figure 1. no_root_squash permissions pop-up
This can be confusing, since you can still create the datastore but you will not be able to create any virtual machines on it.
Note: the NetApp equivalent of no_root_squash is anon=0 in the /etc/exports file on the filer.
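For illustration only (syntax assumed from NetApp Data ONTAP 7-mode export conventions and worth verifying against your filer's documentation), an export line granting root access to the ESX subnet might look like the following in the filer's /etc/exports:

    /vol/vm_datastore -sec=sys,rw=192.168.10.0/24,anon=0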
Symptom: Typically, failure to mount.
Possible cause: The portmap or nfs daemons are not running.

Symptom: Typically, failure to mount, or failure to write-enable. A space between the mount point and the "(rw)" causes the share to be read-only, a frustrating and hard-to-diagnose problem.
Possible cause: Syntax error in the client mount command or the server's /etc/exports.

Symptom: Mounts OK, but access to the data is impossible or not as specified.
Possible cause: Problems with permissions, uids, and gids.

Symptom: Mount failures, timeouts, excessively slow mounts, or intermittent mounts.
Possible cause: Firewalls filtering packets necessary for NFS.

Symptom: Mount failures, timeouts, excessively slow mounts, or intermittent mounts.
Possible cause: Bad DNS on the server.

Table 1. NFS configuration issues
Appendix 2: NFS Advanced Options
NFS.DiskFileLockUpdateFreq
Time between updates to the NFS lock file on the NFS server. Increasing this value will increase the time it takes to recover stale NFS locks. (See NFS Locking.)
NFS.LockUpdateTimeout
Amount of time VMware waits before aborting a lock update request. (See NFS Locking.)
NFS.LockRenewMaxFailureNumber
Number of lock update failures that must occur before VMware marks the lock as stale. (See NFS Locking.)
NFS.HeartbeatFrequency
How often the NFS heartbeat world runs to see whether any NFS volumes need a heartbeat request. (See NFS Heartbeats.)
NFS.HeartbeatTimeout
Amount of time VMware waits before aborting a heartbeat request. (See NFS Heartbeats.)
NFS.HeartbeatDelta
Amount of time after a successful GETATTR request before the heartbeat world will issue a heartbeat request for a volume. If an NFS volume is in an unavailable state, an update will be sent every time the heartbeat world runs (every NFS.HeartbeatFrequency seconds). (See NFS Heartbeats.)
NFS.HeartbeatMaxFailures
Number of consecutive heartbeat requests that must fail before VMware marks a server as unavailable. (See NFS Heartbeats.)
NFS.MaxVolumes
Maximum number of NFS volumes that can be mounted. The TCP/IP heap must be increased to accommodate the number of NFS volumes configured. (See TCP/IP Heap Size.)
NFS.SendBufferSize
The size of the send buffer for NFS sockets. This value was chosen based on internal performance testing. Customers should not need to adjust this value.
NFS.ReceiveBufferSize
The size of the receive buffer for NFS sockets. This value was chosen based on internal performance testing. Customers should not need to adjust this value.
NFS.VolumeRemountFrequency
This determines how often VMware tries to mount an NFS volume that was initially unmountable. Once a volume is mounted, it never needs to be remounted. The volume may be marked unavailable if VMware loses connectivity to the NFS server, but it will still remain mounted.
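The current value of any of these options can be inspected from the service console with esxcfg-advcfg, for example:

    esxcfg-advcfg -g /NFS/MaxVolumes
    esxcfg-advcfg -g /NFS/HeartbeatFrequency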
Appendix 3: Solutions to common problems
When I mount a volume, I do not see an entry in /vmfs/volumes.
This generally indicates the mount request failed. The vmkernel log file (/proc/vmware/log or /var/log/vmkernel) should contain some warnings indicating what went wrong.
The server does not support NFS v3 over TCP.
From the console run: rpcinfo -p [server name]
Verify you see entries for:
100000 2 tcp 111 portmapper
100003 3 tcp 2049 nfs
100005 3 tcp 896 mountd
The server name or volume is entered incorrectly.
From the console run: showmount -e [server name]
Verify you see the volume listed correctly.
The client is not authorized by the server to mount the volume.
An NFS server can impose restrictions on which clients are allowed to access a given NFS volume.
From the console run: showmount -e [server name]
In general, this will show you the IP addresses that are allowed to mount the volume. If you are trying to mount a volume from a Linux NFS server, the server's /etc/exports should look something like:
*(rw,no_root_squash,sync)
Note: Work with root squashed volumes by rc1.
There is no default gateway configured.
From the console run: esxcfg-route
If the gateway is not configured, configure the VMkernel default route and then retry the mount.
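A short sketch of checking and setting the VMkernel default gateway from the service console (the gateway address is a placeholder):

    esxcfg-route                  # display the current VMkernel default gateway
    esxcfg-route 192.168.10.1     # set the VMkernel default gateway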
The portgroup for the vmknic has no uplink.
For some unknown reason, an upgrade from build to build can result in a situation where a portgroup loses its uplink. It might look like the following:
Switch Name    Num Ports   Used Ports   Configured Ports   Uplinks
vswitch0       32          4            32                 vmnic0

  PortGroup Name   VLAN ID   Used Ports   Uplinks
  nfs              0         0
Note: In this case the portgroup nfs has no uplink. If this occurs:
1. Disable any vmknic using this portgroup: esxcfg-vmknic -D [portgroup name]
2. Remove the portgroup: esxcfg-vswitch -D [portgroup name]
3. Remove the vswitch: esxcfg-vswitch -d [vswitch name]
4. Go through the process above of configuring a new vswitch and portgroup. Verify the portgroup has an uplink. Do not forget to re-enable your vmknic and double-check that your default route is still set correctly.
About the Author
Paul Manning is a storage architect in the technical marketing group at VMware, focused on virtual storage management. Previously, he worked at EMC and Oracle, where he had more than ten years of experience designing and developing storage infrastructure and deployment best practices. He has also developed and delivered training courses on best practices for highly available storage infrastructure to a variety of customers and partners in the United States and abroad. He is the author of numerous publications and has presented many talks on the topic of best practices for storage deployments and performance optimization.
Check out the following multi-vendor blog posts on NFS:
The Virtual Storage Guy
Virtual Geek
