
Deploying VMware Virtual Infrastructure 3.5 in a Heterogeneous Server and Storage Environment Using a Brocade 8 Gbps Infrastructure

Executive Summary
This paper provides guidance for setting up a VMware ESX 3.5 environment based on heterogeneous storage and server components and Brocade infrastructure (hardware and software products). The following technology components were deployed:

- Brocade 5100 8 Gbps Fibre Channel Switches
- Brocade 825 (dual-port) 8 Gbps Fibre Channel to PCIe Host Bus Adapters (HBAs)
- HP ProLiant DL380 G5 Servers
- HP ProLiant BL460/480C Blade Servers
- HP Virtual Connect Fibre Channel
- Dell PowerEdge 2950 III Servers
- HP EVA 4400 Disk Array
- EMC CX4-120 Disk Array

This paper is intended to provide an end-to-end view of successfully deploying a Virtual Machine environment, utilizing the benefits of a high-performance Brocade 8 Gbps Fibre Channel environment to provide enhanced performance and availability to different Virtual Machine (VM) types, along with best practices and easy-to-use setup and configuration instructions. However, this paper is not intended to replace any documentation supplied with the individual components.

Introduction
Server virtualization has become a fundamental technology in most data center environments. A virtual infrastructure offers a variety of benefits, ranging from more efficient use of resources and reduction of server sprawl to the financial side of reduced capital expenditures.

Among the different server virtualization vendors, VMware currently has the broadest market share and offers a wide variety of technologies to manage, maintain, and improve resource utilization and performance. With the advent of multi-core processors, CPU power is no longer the bottleneck for deploying large numbers of Virtual Machines on a single VMware ESX Server. Storage capacity requirements are steadily growing because of the rapidly growing number of Virtual Machines. Fibre Channel SANs are the first choice for providing shared storage to ESX Server environments and VMware ESX DRS and HA clusters. SAN-based backup has also evolved into a commodity.

Many different factors have increased the demand for higher performance and more granular control of storage workloads related to individual Virtual Machines and their respective applications. In September 2008, Brocade 815/825 HBAs were certified for VMware ESX. The Brocade 815 (single-port) and 825 (dual-port) 8 Gbps Fibre Channel HBAs, in conjunction with the Brocade 8 Gbps Fibre Channel Switch platform, allow for homogeneous Brocade 8 Gbps server-to-storage connectivity. This Brocade 8 Gbps solution is the focus of this paper.

This paper includes the following topics:

- Setting up a Brocade switch infrastructure (based on the Brocade 5100)
- Setting up host connectivity with Brocade HBAs
- Setting up boot from FC with Brocade HBAs
- Setting up a VMware ESX Server 3.5 environment with Brocade
- Streamlining virtualized workloads on ESX Server 3.5 with an 8 Gbps infrastructure
- NPIV for workload optimization
- SMI-S and monitoring basics and introduction
- Review of connectivity best practices and basic layout considerations

The following figure illustrates the environment that was used for this paper.

Figure 1: Setup of blueprint environment

Part 1: Infrastructure Considerations and Best Practices
Setting Up Storage
Setting up storage for a VMware ESX environment seems easy at first glance. Taking a closer look at the storage requirements, one will find a more complicated truth. Essentially, there are three basic use cases for storage presented to an ESX Server:

Boot LUN
The ESX Server can be installed to boot from a local SCSI disk or to boot from SAN. In both cases, the LUN is usually partitioned into six default partitions, which belong either to the ESX service console or to the VMkernel. Usually, a VMFS partition is created to fill up the remaining space on a local SCSI disk.
When booting from SAN, the storage administrator can size the boot LUN in a granular manner, so it is a best practice not to create a VMFS on the boot LUN, but rather have the VMFSes reside on dedicated LUNs. The following figure demonstrates a typical boot-from-SAN configuration.

Figure 2: Boot-from-SAN configuration

Usually, the size of the boot LUN does not exceed 25 GB. Performance requirements for the boot LUN are low to moderate.

VMFS Datastore
The VMFS is a file system specifically designed to host Virtual Machine files, especially the large Virtual Machine Disk (VMDK) files (.vmdk). The VMFS is designed to keep SCSI reservations to a minimum to allow for seamless operation of multiple VMs on the same datastore.

From a VI3 management perspective, the administrator should try to keep the number of VMFS datastores low. On the other hand, this can of course lead to performance bottlenecks when too many Virtual Machine disk files are placed onto the same VMFS (that is, the same LUN), especially if multiple Virtual Machine disks require higher I/O rates or higher sequential throughput.

Raw Device Mapping
For Virtual Machines with high-performance applications that demand storage performance, VMware has introduced Raw Device Mappings (RDMs). With an RDM, a physical LUN is presented to a VM as a VMDK file. So from an ESX Server perspective, the VM is still accessing the VMDK file, while the file is actually a pointer that redirects all SCSI traffic to the raw LUN, as indicated in Figure 3.

Figure 3: Raw Device Mappings

RDMs represent a very intelligent way to guarantee exclusive LUN access to the respective VM and application inside the VM. Of course, this means a higher administrative effort and definitely a higher number of LUNs presented to the ESX Server. Which virtual disks are finally created as typical VMDK files and which ones are created as RDMs is determined by the performance and availability requirements of the OS and application inside each individual VM.

As a rule of thumb and best-practice approach, we can recommend the following:

- Create larger LUNs and VMFS volumes and place multiple Virtual Machine disks with low or moderate storage performance requirements onto those VMFSs.
- Place the boot disks of Virtual Machines onto a VMFS.
- Create dedicated RDMs for Virtual Machine disks with highly transactional (e.g. OLTP database tablespaces) or highly sequential load patterns (e.g. data warehousing).

Depending on the environment size and IT strategy, LUNs may also be presented from different arrays (same or different vendors) to reflect different performance and availability requirements. In the end, LUNs will be presented from the array to the ESX Server or ESX Server cluster, which involves the second infrastructure layer, the SAN fabric.
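To illustrate the RDM pointer concept discussed above, the mapping file can also be created by hand from the ESX 3.5 service console with vmkfstools (the same operation the VI Client performs when you add a hard disk of type Raw Device Mapping). This is a minimal sketch; the device path and datastore names are placeholders for your environment:

# Create an RDM mapping file on an existing VMFS datastore.
# -r = virtual compatibility mode; use -z for physical compatibility.
# vmhba1:0:5:0 (adapter:target:LUN:partition) is an example device path.
vmkfstools -r /vmfs/devices/disks/vmhba1:0:5:0 /vmfs/volumes/VMFS1/vm1/vm1-rdm.vmdk

The resulting .vmdk can then be attached to the VM like any other virtual disk, while all SCSI traffic is redirected to the raw LUN.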

Setting Up the Switch Infrastructure
The Fibre Channel fabric, or switch infrastructure, plays the central role in connecting storage and servers. Performance, availability, and security are fundamental building blocks of a stable storage networking strategy. This applies to physical and virtual environments alike. However, the impact of losing access to the storage device is much higher on an ESX Server that is running 20 or 30 Virtual Machines (and applications) than on a physical server that is running a single application only.

Zoning is the standard way to provide SAN-based security. Simply speaking, zoning is the partitioning of a Fibre Channel fabric into smaller subsets to restrict interference, add security, and simplify management. If a SAN contains several storage devices, similar to our blueprint environment, systems connected to the SAN should not be allowed to interact with all storage devices.

Zoning is sometimes confused with LUN masking, because it serves the same goals. LUN masking, however, works on the array or SCSI level, while zoning works on the Fibre Channel port or WWN level. Port-level zoning (commonly referred to as hard zoning) and WWN-level zoning (commonly referred to as soft zoning) may be combined depending on the specific requirements. However, WWN-based zoning is more flexible and allows for faster reconfigurations of the physical environment.

Regardless of the actual zoning approach, hard or soft, there is one best practice that should be respected anywhere, including ESX Server environments. This practice is called single-initiator/single-target zoning.

Depending on your environment, you can benefit from isolating traffic as much as possible in your storage area network. SANs with a large number of storage volumes (e.g. when you have presented a lot of RDMs) and heavy host traffic benefit the most. Implementing single-initiator/single-target zoning allows you to isolate traffic for each port. Single-initiator/single-target zoning creates small zones in the fabric with only two zone members (ports or WWNs). The zone consists of one target (a storage unit port) and one initiator (a host system port). The key benefit of single-initiator/single-target zoning is traffic isolation, or masking. Though it looks like a huge initial effort to create a larger number of zones, the benefits are increased stability and simplified fault isolation and troubleshooting.
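The step-by-step instructions in Part 2 use Brocade Web Tools, but the same single-initiator/single-target zone can also be created from the Fabric OS CLI. The following is a minimal sketch for one initiator-target pair; the alias and zone names are placeholders, and the WWNs are examples taken from the blueprint environment. Each additional initiator-target pair gets its own zone in the same way:

switch11:admin> alicreate "esx1_p0", "10:00:00:05:1e:61:67:61"
switch11:admin> alicreate "eva4400_c1p1", "50:01:43:80:02:5b:25:0c"
switch11:admin> zonecreate "z_esx1p0_eva_c1p1", "esx1_p0; eva4400_c1p1"
switch11:admin> cfgcreate "blueprint_cfg", "z_esx1p0_eva_c1p1"
switch11:admin> cfgsave
switch11:admin> cfgenable "blueprint_cfg"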

Figure 4: Zoning

Especially in a virtualized environment, one more attribute must be supported by the fabric infrastructure and by the host and FC HBA inside the host: NPIV.

Setting Up N_Port ID Virtualization (NPIV) on the Host
NPIV, or N_Port ID Virtualization, is a Fibre Channel capability that allows multiple N_Port IDs to share a single physical N_Port. It allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are required.

In a server virtualization environment, NPIV allows each Virtual Machine to have a unique Fibre Channel World Wide Name (WWN), the virtual HBA port. This enables multiple Virtual Machines to share a single physical HBA and switch port. In a VMware ESX environment, the ESX hypervisor leverages NPIV to assign individual WWNs to each Virtual Machine, so that each VM can be recognized as a specific endpoint in the fabric.

This brings quite a few benefits. In particular, the more granular security enables restriction of LUN access to the individual VM with this WWN. It also allows for a granular single-initiator/single-target zoning approach, even for VMs, which is recognized as a best practice for physical server environments by almost any storage vendor.

From a monitoring perspective, the same tools that are used for monitoring physical server connections can now be leveraged for the individual VM.

As the WWN is now associated with the individual VM, the WWN follows the VM when it is migrated to another ESX Server (regardless of whether this is a hot or cold migration). No SAN reconfiguration is necessary when a VM is migrated.

Figure 5: NPIV

VMware Virtual Infrastructure 3.5 offers the ability to configure NPIV for individual Virtual Machines. NPIV requires VMs to use RDM for disk access. As outlined earlier, RDM enables a VM to have direct access to a LUN.

The first couple of sections in this paper have outlined the high-level configuration requirements. Now let us go into a more detailed and practical approach.
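Because each NPIV-enabled VM performs its own fabric login, its virtual port can be verified from the switch side. A brief sketch, assuming an SSH session to the Brocade 5100; port 18 is a placeholder:

switch11:admin> nsshow
(lists every device registered in the local name server; an NPIV-enabled VM appears as an additional N_Port entry with its own Port WWN behind the physical HBA port)
switch11:admin> portshow 18
(the portshow output includes the number of logins on the port, which grows by one for each active Vport)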

Part 2: Step-by-Step Guide for Configuration
Setting up Zoning on the Brocade 5100
1. Identify the WWNs of arrays and switches

EMC Clariion CX4-120
- Open a Web browser, type the Navisphere IP address, and launch Navisphere using the proper credentials.
- Go to the Storage Domains and expand the tree.
- Navigate to Physical > SPs > SP A > I/O Modules.
- Identify the FC Slot (e.g. Slot A0) and identify the FC host ports.
- Right-click on each host port and select Properties.
- The Properties window will show the Port WWN.

Repeat for all required ports.

HP EVA 4400
- Open a Web browser, type the Command View EVA IP address and the proper port number (e.g. https://cveva:2372), and launch Command View EVA using the proper credentials.
- Expand the tree.
- Navigate to Hardware > Controller Enclosure > Controller 1.

In the right-hand pane, click the Host Ports tab, which will show the WWNs.

Repeat for controller 2.

Host with Brocade 825 HBA
- Boot the host and wait until the POST displays the Brocade BIOS.
- Press Alt-B to enter the BIOS.
- The WWNs are displayed.

2. Configure zoning on the Brocade 5100 Switch
- Log in to Brocade Web Tools via a Web browser with the correct switch IP address and proper credentials.
- Launch the Switch Name Server and identify which WWNs/components are connected to which ports (a command-line alternative is sketched below).
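As an alternative to the Web Tools Name Server view, the same port-to-WWN mapping is available from the switch CLI over SSH:

switch11:admin> switchshow
(lists each switch port with its state, speed, and the WWN of the attached device, so initiator and target ports can be identified)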

Then launch the Zone Admin.
o Select the Alias tab.
o Select New and specify a name for this port/WWN, then confirm OK.
o From the Member Selection List, select the proper WWN (identified earlier) and click Add Member.

o Click Save Config.

o Repeat for all connected ports.
o Select the Zones tab.
o Select New Zone, specify a name for this Zone, and confirm OK.
o From the Member Selection List, select the proper WWNs of the initiator (host port) and target (array port) as identified earlier, and add them to the zone by clicking Add Member.
o Click Save Config.

o Create additional zones in the same way.
o Go to the Zone Config tab.
o Select all the zones that you have created in the previous step and click Add Member.
o Click Save Config.
o Click Enable Config and select the previously created configuration.

3. Install the Brocade HBA drivers on an ESX Server that is booted from a local SCSI disk

This section assumes that the Brocade HBA is properly installed in the physical server and the ESX is already up and running.
o Download the current driver package for the HBA from the Brocade Web site at www.brocade.com/hba.
o Transfer the downloaded .tgz archive to the Service Console via SCP or SFTP, preferably to the /tmp directory.
o Log in to the Service Console via SSH with root privileges and change to the /tmp directory (or the directory where the .tgz file is located).

o Untar the driver using the following command:
tar zxf bfa_driver_esx35_<version>.tar.gz
o As soon as the archive is extracted, install the package with the following script:
sh vmw-drv-install.sh
o Reboot the ESX Server.
o Once the server is rebooted, verify that the driver package is loaded by the system with the following command:
vmkload_mod -l
This lists installed driver names. Verify that an entry for bfa exists.
o Start the HCM Agent by using the following command:
/usr/bin/hcmagentservice start
o Make sure the agent is automatically started after any reboot:
chkconfig --add hcmagentservice
o Configure the Service Console firewall to enable HCM traffic:
/usr/sbin/esxcfg-firewall -o 34568,tcp,in,https
/usr/sbin/esxcfg-firewall -o 34568,udp,out,https

4. Install Brocade HCM management

As the ESX Service Console does not support a graphical user interface for HBA configuration, the HCM can be installed on any other machine, e.g. a Windows VM, to remotely connect to the HCM Agent on the ESX Server for remote management.
o Download the current HBA software installer package for the HBA from the Brocade Web site at www.brocade.com/hba.
o Transfer the downloaded .exe file to the Management Server and execute the file.
o Follow the default installation steps (if you are installing into a VM that is not SAN-connected, just select the HCM component and do not install the driver).


o Once the installation is completed, double-click the Brocade FC HBA icon on the desktop to launch the Host Connectivity Manager (HCM).

o Log in as Administrator with the passphrase "password". The HCM will launch.

o Connect to the ESX Server by clicking Discovery > Setup.

Use admin/password as the default credentials.
o In the HCM window you can now monitor and configure the different HBA settings.
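With the driver loaded and the fabric zoned, it is also worth confirming from the service console that the bfa ports actually see their SAN LUNs. A short sketch for ESX 3.5; the vmhba instance numbers are placeholders and vary per host (see the /proc/scsi/bfa listing later in this paper):

# Rescan a Brocade HBA port for newly presented LUNs
esxcfg-rescan vmhba4
esxcfg-rescan vmhba5
# Map the detected SAN LUNs to their vmhba and /dev paths
esxcfg-vmhbadevs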

5. Present a LUN for boot from SAN

Unlike an ESX Server that is installed onto and booted from a local SCSI LUN, boot from SAN requires some additional steps.

As presented in Part 2, Step 1, the Brocade HBA BIOS provides a very quick way to gather the port WWNs of the HBA. In a next step, the boot LUN needs to be configured on the array and has to be presented to the respective host. Of course, this procedure is the same for a boot LUN and any other LUN that is presented to an ESX Server (even for use as a VMFS volume or RDM).


EMC Clariion CX4-120
- Open a Web browser, type the Navisphere IP address, and launch Navisphere using the proper credentials.
o Right-click on the array icon and select Connectivity Status.

o Click New.
o For Initiator Name, enter the WWNN and WWPN of the Brocade HBA in the following format: WWNN:WWPN (e.g. 20:00:00:05:1e:56:c7:80:10:00:00:05:1e:56:c7:80)
o Choose New Host and provide the Name and IP address.

o Confirm OK and finalize the configuration by confirming the pop-up messages.
o From the Storage Groups tab, select the storage group name and the host, right-click on the host, and select Connectivity Status.


o Click on the new host initiator path and select Connect Hosts.

o Finalize the configuration by confirming all pop-up messages.
o Verify in the Switch Zone Admin that array port and HBA port are properly zoned.
Repeat for all required ports.

HP EVA 4400
- Open a Web browser, type the Command View EVA IP address and the proper port number (e.g. https://cveva:2372), and launch Command View EVA using the proper credentials.
- Expand the tree.
- Navigate to Hosts.
- In the right-hand pane, click the Add Host button. Specify the Host name.
- Either select the host WWN from the drop-down list or specify the WWN manually in the format aaaa-bbbb-cccc-dddd (e.g. 1000-0000-C95E-A678).
- Specify VMware as the operating system selection and click Add Host.

Go to the Hosts folder and select the host that you have just created.


Click Add Port and specify the WWN of the second HBA port, then confirm.

- Verify that both WWNs are listed.
- Expand the tree again.
- Navigate to Virtual Disks.
- In the right-hand pane, click the Create Vdisk button.
- Specify a vdisk name, VRaid level, and disk size, and click Create Vdisk to create this disk.

- In the Virtual Disks folder, go to the recently created vdisk.
- In the right-hand pane, select the Presentation tab, and then click the Present button.
- In the host selection list, select the Host and click Assign LUN.


Select the LUN number and click Present.

6. Install the ESX Server in a boot-from-SAN configuration

Once you have verified proper LUN presentation and zoning, ESX Server can be installed onto the host. The server needs to be prepared in the following way:

- Change the boot order in the server BIOS to boot from CD first, then from hard disk.
- Change the boot adapter order to boot from the Brocade HBA first, and then from other SCSI controllers.
- Disable any built-in IDE controllers.
- Have the ESX Server 3.5 Update 3 CD available.
- Download the Brocade Driver Update Disk (DUD) for SAN boot from www.brocade.com/hba and have the DUD available for installation.

Boot-from-SAN configuration procedure
- Boot the server.
- When the Brocade HBA appears in the POST, press Alt-B or Ctrl-B to enter the HBA BIOS.


Select the first adapter and choose Adapter Settings.

Make sure that the following settings are applied:

BIOS: Enabled
Port Speed: Auto
Boot LUN: First LUN

Press ESC to go back to the main menu, and select the Boot Device Settings entry.

The WWNs presented in the next screen are the visible array target ports.

Select the first array port WWN.


In the LUN selection, identify the boot LUN number and select this LUN entry.

- Go back to the target port selections.
- Select the next available entry and press M.

- Edit this entry to point to the second available array port.
- Go back to the main menu and repeat the steps for the second adapter/adapter port.
- Exit the Brocade Config menu, which will cause the server to reboot automatically.
- Insert the Brocade Driver Update Disk. The server will boot from this CD.
- The boot screen of the CD looks like a standard ESX Server 3.5 installation.


The ESX Server installs as usual, with one main difference: the bfa driver is loaded.

Once the driver is properly installed, the installation procedure will prompt for insertion of the ESX Server 3.5 CD.

The ESX Server will install as usual. The installation target device is sda, the designated SAN LUN. After the server has been successfully booted from SAN, the ESX Server firewall needs to be configured to allow HCM agent traffic. For details, see Part 2, Section 3.

7. Set up NPIV for workload optimization

As outlined earlier, NPIV is used to allow Virtual Machines to be recognized as an endpoint in the fabric and to allow for more granular control of resource access. Configuring NPIV involves a couple of configuration steps, outlined here.

Configuring NPIV on the Brocade Fibre Channel Switch
o Log in to the switch via SSH.
o Identify the ports the ESX Server HBAs are connected to.
o Run portcfgshow x (where x is the number of the switch port). The output will list the port configuration, including the NPIV capability setting:
switch11:admin> portcfgshow 18
Area Number:              18
Speed Level:              AUTO(HW)
AL_PA Offset 13:          OFF
Trunk Port                ON
Long Distance             OFF
VC Link Init              OFF
Locked L_Port             OFF
Locked G_Port             OFF
Disabled E_Port           OFF
ISL R_RDY Mode            OFF
RSCN Suppressed           OFF
Persistent Disable        OFF
NPIV capability           ON
QOS E_Port                ON
Port Auto Disable:        OFF
Rate Limit                OFF
EX Port                   OFF
Mirror Port               OFF
Credit Recovery           ON
F_Port Buffers            OFF

o In case NPIV is OFF, it can be enabled with the following command:
portCfgNPIV <port number> 1
o NPIV can be disabled with the following command:
portCfgNPIV <port number> 0

Identifying HBAs in the ESX Server
o Log in to the ESX Server via SSH with root privileges.
o Identify the HBAs:
[root@esx-brocade-dell root]# ls /proc/scsi
ata_piix  bfa  mptscsih  scsi  sg  vsa0

The Brocade HBA is listed as bfa. The next step is to determine the instance number or numbers:
[root@esx-brocade-dell root]# ls /proc/scsi/bfa
4  5  HbaApiNode

A quick check for each instance reveals the type and connectivity status of each instance (port):
[root@esx-brocade-dell root]# cat /proc/scsi/bfa/4
Chip Revision: Rev-C
Manufacturer: Brocade
Model Description: Brocade-825
Instance Num: 0
Serial Num: ALX0441D07H
Firmware Version: FCHBA1.1.0
Hardware Version: Rev-C
Bios Version:
Optrom Version:
Port Count: 2
WWNN: 20:00:00:05:1e:61:67:61
WWPN: 10:00:00:05:1e:61:67:61
Instance num: 0
Target ID: 0  WWPN: 50:06:01:61:3c:e0:1e:e1
Target ID: 1  WWPN: 50:06:01:69:3c:e0:1e:e1
Target ID: 2  WWPN: 50:01:43:80:02:5b:25:0c
Target ID: 3  WWPN: 50:01:43:80:02:5b:25:0d

[root@esx-brocade-dell root]# cat /proc/scsi/bfa/5
Chip Revision: Rev-C
Manufacturer: Brocade
Model Description: Brocade-825
Instance Num: 1
Serial Num: ALX0441D07H
Firmware Version: FCHBA1.1.0
Hardware Version: Rev-C
Bios Version:
Optrom Version:
Port Count: 2
WWNN: 20:00:00:05:1e:61:67:62
WWPN: 10:00:00:05:1e:61:67:62
Instance num: 1
Target ID: 0  WWPN: 50:06:01:68:3c:e0:1e:e1
Target ID: 1  WWPN: 50:06:01:60:3c:e0:1e:e1
Target ID: 2  WWPN: 50:01:43:80:02:5b:25:08
Target ID: 3  WWPN: 50:01:43:80:02:5b:25:09

Configuring NPIV in the VM
o Go to the Options tab and select Fibre Channel NPIV, then Generate new WWNs.

o Then click OK.
o Return to the Edit Settings > Options screen.


o Verify the creation of Node and Port WWNs. Every Virtual Machine that is successfully NPIV-enabled has a Node WWN and Port WWN combination, also referred to as a Vport. Those entries are unique and maintained by the ESX Server/VC. To enable multipathing, the ESX Server automatically creates up to four Port WWNs for an individual VM.
o Go to the Virtual Machine, right-click, and then choose Edit Settings.
o Add a new hard disk as an RDM with a separate SCSI controller to the VM.

o Once the RDM is added, go to the Virtual Machine and right-click again, then choose Edit Settings.

Setting up Zoning for NPIV

Zoning defines which initiator (HBA) can connect to which target (array port). NPIV enables use of the same methodology as zoning in physical environments. The WWPNs created by enabling NPIV for the VM can be used for the zoning operations.

The following requirements must be met to enable a successful VM zoning:
- The physical HBA (the Brocade 825) must have access to all LUNs that are used by VMs.
- The host mode (host presentation behavior) for the physical HBA must be the same as for any NPIV-enabled VM access across this HBA.
- LUNs must be presented to physical and virtual HBAs with the same LUN number. Otherwise the ESX Server will not recognize the different paths to the LUN and will not be configured properly for multipathing.
- LUN masking on the array has to include physical and virtual WWNs.

o Open Brocade Web Tools and the Zone Admin.
o Create a new Alias name for the VM.


o Add Node and Port WWNs of the VM via the Add Other button.

o Add the Alias to the required zones or create new zones (Array Port to VM).

Setting up LUN Masking for NPIV

EMC Clariion CX4-120
The Clariion family requires LUN masking for NPIV. The LUN that will be assigned to the VM must be presented (masked) to the physical HBA and the VM's Vport.
- Make sure the native HBA on the ESX Server is masked to the desired LUNs on the array.
- Create the VM and configure the RDM storage.
- Enable NPIV for the VM in the configuration options.
- Record (copy/paste) the Port and Node WWNs of the VM.
- Open a Web browser, type the Navisphere IP address, and launch Navisphere using the proper credentials.
- Right-click on the array icon and select Connectivity Status.
- Click New.
- For Initiator Name, enter the NPIV WWNN and WWPN in the proper format (see earlier in this paper).


- Choose Existing Host and use the same host name that is currently used for the physical HBA path. Then click OK.
- In the Storage Groups tab, select the storage group name that is presenting the LUN to the physical HBA, select the physical host, and right-click on the host.
- Select Connectivity Status from the context menu.

- Click on the new host initiator path and select Reconnect.

HP EVA 4400
Implementing LUN masking on the EVA 4400 requires a specific set of steps for enabling the EVA to recognize the NPIV Vport. The critical issue is to have the Vport-assigned LUN path visible to the VM at the time it powers up over a Vport. Should the LUN paths not be visible, ESX Server will destroy the Vport, causing the driver to drop its FDISC login. To prevent this cycle, Vport WWNs are programmed into the host groups and LUN masking configurations at the EVA prior to powering on the NPIV-enabled VM.
- Make sure the native HBA on the ESX Server is masked to the desired LUNs on the array.
- Create the VM and configure the RDM storage.
- Enable NPIV for the VM in the configuration options.
- Open the Command View EVA interface to create the VM host entity:
o Expand EVA Storage and select the host folder.
o Click Add Host from Host Folder Properties.

o Enter the VM host name and enter the NPIV WWPN.
o Click Add Host.
o Repeat if multiple WWPNs are assigned to the VM.
- Storage presentation using Command View EVA:
o In the Virtual Disks folder, locate the virtual disk that was associated with the VM.
o Select the disk and then select the Presentation tab.
o Click the Present button and select the newly created VM host entry.
o Click Assign LUN and select exactly the LUN number that was also presented to the physical HBA.
o Select Save Changes to enable the newly configured presentation.

8. Streamline workloads with QoS in an 8 Gbps infrastructure

NPIV is used to present dedicated LUNs to VMs via Raw Device Mapping. NPIV enables the isolation of traffic into dedicated zones and the separation of those zones from each other. Quality of Service (QoS) takes workload optimization one step further.

QoS has one prerequisite: a Server Application Optimization (SAO) license is required on the switch(es) to which the HBAs are connected.
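Whether the SAO license is already installed can be checked from the switch CLI; licenseshow lists each installed key together with its feature name:

switch11:admin> licenseshow
(each installed license key is listed with its feature name; a Server Application Optimization entry must appear here before QoS can be used with the attached HBA ports)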

o Log in to the Brocade Host Connectivity Manager.
o Connect to the ESX Server and select the HBA.
o Right-click on the HBA and select Port Configuration > Basic.


o In the Port Configuration dialog box, enable QoS.

o Once the port is QoS-enabled, the QoS status is reflected in the port properties, also showing the available prioritization levels.
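As an assumption based on the portcfgqos command available in Fabric OS 6.x, QoS can also be toggled on the switch port itself from the CLI; port 18 is a placeholder:

switch11:admin> portcfgqos --enable 18
switch11:admin> portcfgshow 18
(verify that the QOS attribute shows ON for the port)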


o In order to use QoS on a VM level, the VM's NPIV WWNs need to be members of the appropriate zone. The reason behind this is that QoS is configured simply through zone name prefixes that carry priority values.

The following priority values are available:
o High (H)
o Medium (M)
o Low (L)

Example of a small QoS zone configuration set:

zonecreate "QoSH1_esx1_z_1", "10:00:00:00:00:01:00:00; 10:00:00:00:00:03:00:00; 10:00:00:00:00:04:00:00"
cfgcreate "QoSTestcfg", "QoSH1_esx1_z_1" /* H high priority; 1 flow id */
cfgenable QoSTestcfg

Using this method, VMs can be assigned different priority values depending on the bandwidth required by the VM. A SQL Server VM, for example, typically requires more bandwidth than a Web server used to serve the organization's intranet users. But this really depends on the workload the VM needs to handle.

When using Brocade HBAs and Brocade SAN Switches and Directors, the information about which QoS zones are configured for which VM is automatically propagated from the fabric to the HBAs. No configuration is necessary other than enabling QoS on the HBAs using Brocade HCM (Host Connectivity Manager). Then configure the appropriate zones and put them in your active Zone Set. Once the Zone Set is active, QoS works.
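Extending the example above, medium- and low-priority classes use the same pattern with the corresponding QoSM/QoSL name prefixes; the WWNs below are placeholders, and cfgadd appends the new zones to the existing configuration:

zonecreate "QoSM1_esx1_z_2", "10:00:00:00:00:01:00:00; 10:00:00:00:00:05:00:00"
zonecreate "QoSL1_esx1_z_3", "10:00:00:00:00:01:00:00; 10:00:00:00:00:06:00:00"
cfgadd "QoSTestcfg", "QoSM1_esx1_z_2; QoSL1_esx1_z_3"
cfgenable QoSTestcfg

Under congestion, frames in high-priority zones are scheduled ahead of medium- and low-priority traffic; on an uncongested link, the prioritization has no measurable effect.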


Summary and Conclusion
Server virtualization offers quite a few benefits. The downside of server virtualization is Virtual Machine sprawl, usually resulting in a large number of Virtual Machines (and application workloads inside the VMs) running in parallel. These VMs usually also share a common storage environment. As a result, storage performance can become an issue sooner or later.

N_Port ID Virtualization (NPIV) is one approach to presenting storage LUNs directly to Virtual Machines and making the storage environment VM-aware. Combining Brocade 8 Gbps Switches and Brocade 8 Gbps HBAs with the ESX Server's NPIV capabilities allows for efficient performance management. NPIV plays an important role by assigning WWNs directly to Virtual Machines, thus enabling zoning and LUN masking to be set up directly on Virtual Machines running on the ESX Server.

The Brocade 815/825 HBAs with 8 Gbps technology are the foundation for providing the correct bandwidth for larger numbers of VMs in parallel while supporting NPIV. Beyond NPIV, Brocade HBAs and Switches allow for additional bandwidth management using QoS (Quality of Service).

The Brocade QoS implementation features an intelligent zoning setup to prioritize and deprioritize workloads from the VM through to the storage device, thereby enabling high-performance (highly prioritized) Microsoft Exchange, SQL Server, or backup implementations in a virtualized environment.
Brocade, the B-wing symbol, BigIron, DCX, Fabric OS, FastIron, IronPoint, IronShield, IronView, IronWare, JetCore, NetIron, SecureIron, ServerIron, StorageX, and TurboIron are registered trademarks, and DCFM and SAN Health are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. All other brands, products, or service names are or may be trademarks or service marks of, and are used to identify, products or services of their respective owners.

