TECHNICAL REPORT

Configuring iSCSI Connectivity with VMware vSphere 5 and Dell EqualLogic PS Series Storage
ABSTRACT

This Technical Report will explain how to configure and connect a Dell™ EqualLogic™ PS Series SAN to a VMware® vSphere™ 5 Environment using the software iSCSI initiator.

TR1075 V1.0

Copyright © 2011 Dell Inc. All Rights Reserved. EqualLogic is a registered trademark of Dell Inc. Dell is a trademark of Dell Inc. All trademarks and registered trademarks mentioned herein are the property of their respective owners. Information in this document is subject to change without notice. Reproduction in any manner whatsoever without the written permission of Dell is strictly forbidden. [Nov 2011]

WWW.DELL.COM/PSseries

Preface
PS Series arrays optimize resources by automating performance and network load balancing. Additionally, PS Series arrays offer all-inclusive array management software, host software, and free firmware updates.

Audience
The information in this guide is intended for VMware vSphere administrators configuring SAN access to a Dell EqualLogic PS Series SAN.

Related Documentation
For detailed information about PS Series arrays, groups, volumes, array software, and host software, log in to the Documentation page at the customer support site.

Dell Online Services
You can learn about Dell products and services using this procedure:
1. Visit http://www.dell.com or the URL specified in any Dell product information.

2. Use the locale menu or click on the link that specifies your country or region.

Dell EqualLogic Storage Solutions
To learn more about Dell EqualLogic products and new releases being planned, visit the Dell EqualLogic TechCenter site: http://delltechcenter.com/page/EqualLogic. Here you can also find articles, demos, online discussions, technical documentation, and more details about the benefits of our product family.

Table of Contents

Executive Summary
Introduction
Features of the vSphere Software iSCSI Initiator
Configuring vSphere iSCSI Software Initiator with PS Series Storage
Establishing Sessions to the SAN
VMkernel Storage Heartbeat
Example Installation Steps
Section 1: vSwitch Configuration
    Standard vSwitch Configuration
        Step 1: Configure Standard vSwitch and Storage Heartbeat
        Step 2: Add iSCSI VMkernel Ports
        Step 3: Associate VMkernel Ports to Physical Adapters
        Step 4: Configure Jumbo Frames
    vSphere Distributed Switch Configuration
        Step 1: Configure vSphere Distributed Virtual Switch
        Step 2: Add Port Groups
        Step 3: Configure Storage Heartbeat and iSCSI VMkernel Ports
        Step 4: Associate VMkernel Ports to Physical Adapters
        Step 5: Configure Jumbo Frames
Section 2: Configure VMware iSCSI Software Initiator
    Step 1: Install iSCSI Software Initiator
    Step 2: Binding VMkernel Ports to iSCSI Software Initiator
Section 3: Connect to the Dell EqualLogic PS Series SAN
    Step 1: Configure Dynamic Discovery of PS Series SAN
    Step 2: Create and Configure Volume
    Step 3: Connect to a Volume on PS Series SAN
    Step 4: Enabling VMware Native Multipathing - Round Robin
    Step 5: Create VMFS Datastores and Connect More Volumes
FAQ
Summary
Technical Support and Customer Service

Revision Information

The following table describes the release history of this Technical Report.

Report   Date            Document Revision
1.0      November 2011   Initial Release

The following table shows the software and firmware used for the preparation of this Technical Report.

Vendor    Model                             Software Revision
VMware    vSphere 5.x                       5.0
Dell      Dell EqualLogic PS Series SAN     4.3.8 or Higher

The following table lists the documents referred to in this Technical Report. All PS Series Technical Reports are available on the Customer Support site at: support.dell.com

Vendor    Document Title
VMware    iSCSI SAN Configuration Guide
VMware    vSphere System Administration Guides
Dell      Dell EqualLogic PS Series Array Administration Guide
Dell      Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 5 and PS Series SANs


EXECUTIVE SUMMARY

VMware vSphere 5 is VMware's newest flagship product allowing for advanced server virtualization and management. Many of the advanced features provided by VMware, including the ability to move running Virtual Machines (VMs) between active servers, High Availability clustering (HA), and advanced load balancing, all require some manner of shared storage accessed by each of the servers. The Dell EqualLogic PS Series SAN is a highly virtualized shared storage platform that works with VMware vSphere 5 to provide these advanced features. Many of these new features require advanced configuration in order to work properly. This Technical Report will address some of the new features in vSphere as well as show administrators how to connect a vSphere 5 environment to a Dell EqualLogic PS Series iSCSI SAN.

INTRODUCTION

VMware vSphere 5 offers intelligent and advanced enhancements to the iSCSI software initiator in conjunction with iSCSI SAN connectivity. This Technical Report will discuss how to configure your VMware ESXi 5 environment to communicate with the Dell EqualLogic PS Series SAN, and summarizes the steps specific to connecting to a PS Series SAN. These steps are documented in VMware's iSCSI SAN Configuration Guide, which can be found on VMware's website. This Technical Report covers the steps for utilizing the software iSCSI initiator inside the ESXi host. Users connecting their vSphere environment using iSCSI HBAs should not follow these steps and should configure their environment as outlined in the VMware iSCSI SAN Configuration Guide.

FEATURES OF THE VSPHERE SOFTWARE ISCSI INITIATOR

VMware vSphere 5 has support for various advances with iSCSI SAN connectivity. This Technical Report will cover the new features in the iSCSI software initiator as well as how to configure them to connect to the SAN.

iSCSI Software Initiator – With ESXi 4, and continuing in ESXi 5, the iSCSI software initiator was re-written from the ground up for better performance and functionality.

Jumbo Frames – With ESXi 5, Jumbo Frames can be enabled on the iSCSI software initiator. Jumbo Frame support allows for larger packets of data to be transferred between the ESXi 5 hosts and the SAN for increased efficiency and performance. With ESXi 5, Jumbo Frames can be configured and enabled from the vCenter GUI, which is a change from vSphere 4, which required the CLI. Note: Jumbo Frames are not required; they are optional. Your network infrastructure must be able to fully support them to achieve any benefit.

MPIO – With ESXi 5 and vSphere 5, customers can benefit from Multi-Path I/O from the ESXi 5 hosts to the SAN. This allows for multiple connections to be used concurrently for greater bandwidth, and enables ESXi 5 to take full advantage of the scale-out networking in the PS Series SAN.

Third Party MPIO Support – VMware has provided an architecture that enables storage vendors to provide new and advanced intelligent integration. Dell has an MPIO plug-in that enhances MPIO with the existing iSCSI software initiator for easier management, better performance, and better bandwidth. The rest of this document assumes the environment will be using multiple NICs and attaching to a Dell EqualLogic PS Series SAN utilizing Native Multipathing (NMP) from VMware.

CONFIGURING VSPHERE ISCSI SOFTWARE INITIATOR WITH PS SERIES STORAGE

Taking advantage of all of these features requires advanced configuration by vSphere administrators. In ESX 4 these configurations were done through a combination of CLI commands and GUI processes. ESXi 5 has streamlined this so that the entire process can be done through the vCenter GUI. The rest of this Technical Report will focus on the installation and configuration of an iSCSI software initiator connection to a PS Series SAN. Each of these steps can be found inside the VMware iSCSI SAN Configuration Guide, and where names and IP Addresses are used, they will be different for each environment. This is merely an example and demonstration of how to configure a new vSphere ESXi 5 environment correctly and connect it to the EqualLogic SAN.

The following assumptions are made for this example:
1. Running ESXi 5
2. Running Dell EqualLogic PS Series SAN Firmware 4.3.8 or later
3. More than one Network Interface Card (NIC) set aside for iSCSI traffic

Not every environment will require all of the steps detailed in this Technical Report.

ESTABLISHING SESSIONS TO THE SAN

Before continuing, we first must discuss how VMware ESXi establishes its connection to the SAN utilizing the vSphere iSCSI Software Adapter. Each volume on the PS Series array can be utilized by ESXi as either a Datastore or a Raw Device Mapping (RDM). VMware uses VMkernel ports as the session initiators, so we must configure each port that we want to use as a path to the storage. To do this, the iSCSI software adapter utilizes the VMkernel ports that were created and establishes a session to the SAN and to that volume to communicate. Each session to the SAN will come from one VMkernel port, which will go out a single physical network interface card (NIC). This configuration is a one to one (1:1) VMkernel port to NIC relationship: use a 1:1 ratio of VMkernel Ports to physical network cards, with each VMkernel port bound to a physical adapter. Depending on the environment this can create a single session to a volume or up to 8 sessions (the ESXi maximum number of paths to a volume). This means if there are 2 physical NICs, you would establish 1 VMkernel port per physical NIC, associating a separate NIC with each VMkernel port, and in the following example you would establish 2 sessions to a single volume on the SAN. This trend can be expanded depending on the number of NICs you have in the system. Once these sessions to the SAN are initiated, both the VMware Native Multi-Path (NMP) and the Dell EqualLogic network load balancer will take care of load balancing and spreading the I/O across all available paths, depending upon array, server, and network load.

With the improvements to vSphere and MPIO, administrators can take advantage of multiple paths to the SAN for greater bandwidth and performance. This does require some additional configuration, which is discussed in detail in this Technical Report. Administrators have the ability to use additional NICs for failover, but this document will focus on enabling NMP with Round Robin or preparation for 3rd Party Multipathing with the Dell EqualLogic Multipathing Extension Module.

Figure 1: Conceptual Image of iSCSI Sessions using 1:1 VMkernel mapping with 2 physical NICs for iSCSI Traffic

VMKERNEL STORAGE HEARTBEAT

In the VMware virtual networking model, certain types of VMkernel network traffic are sent out on a default VMkernel port for each subnet. This includes vMotion traffic, SSH access, and ICMP ping replies. The iSCSI multipathing network configuration requires that the iSCSI VMkernel ports use a single physical NIC as an uplink. As a result, if the physical NIC that is being used as the uplink for the default VMkernel port goes down, network traffic that is using the default VMkernel port will fail. Although iSCSI traffic isn't directly affected by this condition, a side effect of the suppressed ping replies is that the EqualLogic PS Series group will not be able to accurately determine connectivity during the login process. In some scenarios, logins may not be completed in a timely manner, and therefore a suboptimal placement of iSCSI sessions will occur. To prevent this from occurring, Dell recommends that a highly available VMkernel port be created on the iSCSI subnet to serve as the default VMkernel port for such outgoing traffic. When properly configured, this heartbeat sits outside of the iSCSI software initiator and does not consume any additional iSCSI storage connections. This heartbeat has to be the lowest VMkernel port on the vSwitch and is not bound to the software initiator; it is simply used as the lowest VMkernel port for vmkping and other iSCSI network functions.

It is always recommended to separate iSCSI traffic from standard management traffic, and this Storage Heartbeat should not be on the same subnet as the ESXi management traffic. This Technical Report will guide the user through establishing a Storage Heartbeat during the vSwitch configuration.

There are some suggested configurations depending on the number of NICs that will be used for iSCSI traffic. In a default configuration, assign one VMkernel port for each physical NIC in the system. So if there are 2 NICs, assign 2 VMkernel Ports. This is referred to in the VMware iSCSI documentation as 1:1 port binding. Due to how the PS Series SAN automatically load balances volumes across multiple members and iSCSI connections across multiple ports, this configuration will give both redundancy and performance gains when configured properly. This would be a typical solution for many environments to utilize all of the bandwidth available to the ESXi host's network interfaces. Keep in mind that it is the VMkernel port that establishes the iSCSI session to the volume; the physical NIC is just the means it utilizes to get there.

Sample Configurations
2 physical 1GbE NICs     2 VMkernel Ports (1 per physical NIC)
4 physical 1GbE NICs     4 VMkernel Ports (1 per physical NIC)
2 physical 10GbE NICs    2 VMkernel Ports (1 per physical NIC)

This provides scalability and performance as the SAN environment grows without having to make changes on each ESXi host. Every environment will differ depending on the number of hosts, the number of EqualLogic members, and the number of volumes. If more iSCSI connections are desired, follow the above sample configurations to obtain the number of VMkernel Ports that match the environment and the number of paths you need to the PS Series SAN. Always keep in mind the entire infrastructure of the virtual datacenter when deciding on network path and volume count. The ESXi 5 host will create multiple iSCSI connections to the PS Series SAN, and every new volume will add more iSCSI connections as well. View the Release Notes of the PS Series Firmware for the current connection limits of pools and groups for the Dell EqualLogic PS Series SAN.

EXAMPLE INSTALLATION STEPS

Each environment will be different, but the following is a list of example installation steps for configuring a new ESXi 5 host to connect to a PS Series SAN. This Technical Report will focus on one-to-one VMkernel mapping with 2 physical NICs and 2 VMkernel Ports. Throughout these examples the names and IP addresses assigned will need to be changed to be relevant in your environment. These examples assume a switch with Jumbo Frames support on the physical hardware.

Example Environment

SECTION 1: VSWITCH CONFIGURATION

This Technical Report will discuss the two ways to configure the virtual switches in ESXi 5. These can be either vSphere Standard Switches (vSwitch) or vSphere Distributed Switches (vDS). The steps are very similar but will be described in detail for each method. Either method is viable for the environment and will depend on the administrator's familiarity with the method along with the VMware license structure in the environment. Administrators should choose one method and apply it to their entire ESXi cluster for ease of configuration and management. All of these configurations are done at the iSCSI vSwitch level. This means that once it is configured, changes only need to be made if more NICs are being added or if more or fewer paths to the storage are needed.

Standard vSwitch Configuration

If you are using vSphere Distributed Switches for iSCSI connectivity, skip this section and move to the vSphere Distributed Switch section.

Step 1: Configure Standard vSwitch and Storage Heartbeat

This step will create a new standard vSwitch with the Storage Heartbeat VMkernel port.

1. From the vCenter GUI, select the ESXi host to configure and click the Configuration tab.
2. Select Networking from the Hardware pane.
3. Verify the View is set to vSphere Standard Switch and click Add Networking. This brings up the Add Network Wizard.
4. Select VMkernel and click Next.
5. Select all of the physical network adapters that will be used for PS Series SAN connectivity and click Next.
6. For the Network Label, type in Storage Heartbeat and click Next.

7. Enter in the IP Address and Subnet Mask for the Storage Heartbeat. This must be on the same network subnet as the PS Series Group IP Address. Because this is non-routed, the VMkernel Default Gateway can be ignored, as it is the gateway of the management VMkernel and will not come into play during iSCSI connectivity. Enter in the values and click Next.
8. Verify the settings and click Finish to complete the vSwitch creation.
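The same vSwitch and Storage Heartbeat can also be created from the ESXi shell. The following is a sketch added for illustration and is not part of the documented procedure; the vSwitch name (vSwitch1), uplinks (vmnic6 and vmnic7), VMkernel interface (vmk1), and IP addressing are example values that must be changed for your environment. Creating the heartbeat before the iSCSI ports keeps it the lowest-numbered VMkernel port on the vSwitch.

    # Create the iSCSI vSwitch and attach both physical uplinks (example names)
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic6
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic7
    # Create the Storage Heartbeat port group and its VMkernel interface with a static IP on the iSCSI subnet
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="Storage Heartbeat"
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name="Storage Heartbeat"
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.5.11 --netmask=255.255.255.0 --type=static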

You will see the new vSwitch in the Configuration screen.

Step 2: Add iSCSI VMkernel Ports

This next step will assign VMkernel Ports to the new vSwitch and assign the IP Addresses to the iSCSI# VMkernel Ports. Each VMkernel Port will need its own IP Address, and they must all be on the same subnet as each other and on the same subnet as the PS Series Group IP Address. Add a VMkernel port for each physical network adapter to correspond to the 1:1 VMkernel binding discussed earlier.
1. Click Properties next to the newly created vSwitch. This will open the Properties pane of the switch.
2. Click Add, select VMkernel, and click Next.

3. For the Network Label, type in iSCSI1 and click Next.
4. Enter in the IP Address and the Subnet Mask. This address must be on the same subnet as the Storage Heartbeat and the PS Series Group IP Address. Click Next.
5. Verify the settings and click Finish to configure the VMkernel Port.
6. Continue adding iSCSI# VMkernel Ports for each physical network adapter that will be communicating with the SAN. In this example there are two physical NICs, so iSCSI1 and iSCSI2 are created.
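For reference, an equivalent ESXi shell sketch for the 1:1 iSCSI VMkernel ports is shown below. It is not part of the original steps; the port group names, vmk numbers, and IP addresses are examples only.

    # One port group and one VMkernel interface per physical NIC, all on the iSCSI subnet
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI1
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI1
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.5.12 --netmask=255.255.255.0 --type=static
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI2
    esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI2
    esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.10.5.13 --netmask=255.255.255.0 --type=static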

When finished, you will see something similar in the Properties pane. The Storage Heartbeat has both physical network adapters assigned to it for high availability.

Step 3: Associate VMkernel Ports to Physical Adapters

The next step is used to create the individual 1:1 path bindings for each VMkernel Port to a NIC. This is required in order to take advantage of the new advanced features such as Round Robin MPIO or 3rd party MPIO plug-ins that are available from Dell. From our previous step there is the Storage Heartbeat, two iSCSI# VMkernel ports, and two NICs. Again, each environment will differ, and these numbers can change based on the number of NICs and the number of paths assigned. The Storage Heartbeat will have both NICs assigned to it, and we will assign each iSCSI VMkernel port one NIC. We need to change the NIC Teaming so that only a single vmnic is in each uplink to create a 1:1 binding.
1. Click Properties next to the Standard vSwitch being used for iSCSI communication.
2. Select the first iSCSI# VMkernel Port and click the Edit button.
3. Click the NIC Teaming tab.
4. Click the checkbox next to Override switch failover order.
5. Select the adapters that are not going to be assigned to the VMkernel (vmnic7 in this example) and click the Move Down button until each is listed under Unused Adapters.

6. When this is completed, click OK.
7. Select the next iSCSI# VMkernel Port and click the Edit button.
8. Just as before, click the NIC Teaming tab and select the check box for Override switch failover order. This time select another adapter that has not already been bound as an Active Adapter. In this example iSCSI2 is bound to vmnic7, so vmnic6 is moved to Unused Adapters.

9. Verify the action and click OK. Do this same thing for each of the iSCSI# VMkernel ports so that each VMkernel port is mapped to only one adapter, and be sure to move all but one adapter to Unused Adapters so that it uses a 1:1 binding. In this example we assigned iSCSI1 to vmnic6 and iSCSI2 to vmnic7.

NOTE: Do not modify the adapters for the Storage Heartbeat. The Storage Heartbeat leverages all of the available physical NICs.

10. Once all of the iSCSI# VMkernel Ports are bound 1:1 with physical network adapters, click Close to exit the properties of the vSwitch.
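The failover override can also be applied per port group from the ESXi shell. This sketch is an addition for illustration, assuming the example names used above; after running it, verify in the vSphere Client that the adapter not listed appears under Unused Adapters for each iSCSI port group.

    # Restrict each iSCSI port group to a single active uplink (1:1 binding); leave the Storage Heartbeat untouched
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --active-uplinks=vmnic6
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI2 --active-uplinks=vmnic7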

Step 4: Configure Jumbo Frames

One of the enhancements in vSphere 5 for iSCSI configuration is the ability to adjust Jumbo Frames support from the GUI instead of through the CLI. In order for Jumbo Frames to work, they need to be configured on the vSwitch as well as on the Storage Heartbeat VMkernel Port and each iSCSI VMkernel Port; each of these items must be configured. In addition, the physical switch layer must be configured to support Jumbo Frames.
1. To enable Jumbo Frames, select the vSwitch created for iSCSI connectivity and click Properties. From the Properties pane of the Standard vSwitch you will see the vSwitch itself, the Storage Heartbeat, and each of the iSCSI# VMkernel Ports configured. Select the vSwitch; under the Advanced Properties pane on the right, the MTU is defaulted to 1500. To change it to 9000 for Jumbo Frames, click the Edit button.
2. Select the General tab and, under the Advanced Properties, change the MTU from 1500 to 9000 and click OK.

3. For each of the VMkernel Ports, Jumbo Frames must also be enabled. Select the Storage Heartbeat and click the Edit button.
4. Under the General tab, in the NIC Settings, change the MTU to 9000 and click OK.
5. Do this for each iSCSI# VMkernel Port. All of the VMkernel Ports in the vSwitch, as well as the vSwitch properties themselves, must be configured for Jumbo Frames in order for Jumbo Frames to work properly.
6. When this is complete, click Close to exit out of the vSwitch Properties page.
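Jumbo Frames can also be set from the ESXi shell, which can be useful when scripting multiple hosts. This is an illustrative sketch, not part of the original document; the names are examples, and the esxcli network ip interface commands apply equally to VMkernel ports that live on a vSphere Distributed Switch.

    # Raise the MTU on the iSCSI vSwitch and on each VMkernel interface used for the heartbeat and iSCSI
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000
    esxcli network ip interface set --interface-name=vmk3 --mtu=9000
    # Verify end-to-end jumbo frame support to the PS Series Group IP (8972-byte payload, do not fragment)
    vmkping -d -s 8972 10.10.5.10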

15 . 3. Name the vDS iSCSI. 1 for the Storage Heartbeat and 1 for each iSCSI connection. These steps follow the same premise as creating a vSphere Standard Switch. Click Add a vSphere Distributed Switch. From the vCenter GUI Home Screen click Networking in the Inventory section. 4. For this example we are going to create a vSphere Distributed Switch version 5. Step 1: Configure vSphere Distributed Virtual Switch This step will create a new vDS and is done at the cluster level. Select this and click Next . In this example we are creating 3 uplink ports. Select the number of uplink ports to match the number of VMkernels and physical NICs that will be used for iSCSI and click Next . 1. 2.Administrators using the vSphere Standard Virtual Switch can skip these steps and move on to Section 2.

5. Select all of the ESXi hosts that are going to participate in this vDS, which should be the whole cluster. For each ESXi host, select the physical network adapters they will use for iSCSI traffic and click Next.

6. Uncheck the "Automatically create a default port group" checkbox. Verify the settings and click Finish.

Step 2: Add Port Groups

This next step will create and configure the port groups that the VMkernel ports will be assigned to.
1. From the Home screen in the vCenter GUI, click Networking.
2. Select the iSCSI vDS that was just created and click Create a new port group.
3. Change the Name to Storage Heartbeat and click Next.

4. Repeat the above steps and create an iSCSI# Port Group for each physical NIC that will be used for iSCSI connectivity. In this example we created port groups iSCSI1 and iSCSI2.

Step 3: Configure Storage Heartbeat and iSCSI VMkernel Ports

Now that the port groups for the vDS have been configured, we need to configure each of the VMkernel ports on each separate ESXi 5 host. This will assign the IP Addresses for the iSCSI# VMkernel Ports as well as the Storage Heartbeat. Each VMkernel Port will need its own IP Address, and they must all be on the same subnet as each other and on the same subnet as the PS Series Group IP Address. This step needs to be completed on each ESXi host.

1. From the vCenter GUI, select an ESXi host and click on the Configuration tab.
2. Click on Networking under the Hardware pane.
3. Change the View to vSphere Distributed Switch.
4. On the Distributed Switch: iSCSI, click Manage Virtual Adapters.
5. Click Add, choose New virtual adapter, and click Next.
6. Select VMkernel and click Next.
7. Click Select port group, assign it to the Storage Heartbeat port group, and click Next.
8. Enter in the IP Address and Subnet Mask for the Storage Heartbeat. This must be on the same network as the iSCSI PS Series Group IP Address. Because this is non-routed, the VMkernel Default Gateway can be ignored, as it is the gateway of the management VMkernel and will not come into play during iSCSI connectivity. Enter in the values and click Next.
9. Verify the settings on the host and click Finish.
10. Continue adding additional VMkernel ports to match the number of iSCSI# VMkernels. In this example there is iSCSI1 and iSCSI2, so two more VMkernel ports are added. Click Close when all of the VMkernel ports are added.

11. Repeat this on each ESXi host that is participating in the vDS switch.

In this example there are two hosts, and each has a Storage Heartbeat and two iSCSI# VMkernel ports configured. This can be seen from the vCenter GUI by clicking on Networking from the Home screen and then clicking on the new vDS iSCSI.

Step 4: Associate VMkernel Ports to Physical Adapters

This step will configure the 1:1 binding of VMkernel ports to physical NIC adapters. This is done at the cluster vDS level and not the individual ESXi host level, and it is only done for the iSCSI# VMkernel ports, not the Storage Heartbeat VMkernel port. We need to change the teaming so that only a single dvUplink is active in each port group to create a 1:1 binding.
1. From the vCenter GUI, click on Networking from the Home page.
2. Select the new iSCSI vDS and click the Configuration tab.
3. Next to iSCSI1, click the Edit Settings icon.
4. Under Policies, click Teaming and Failover.
5. Select the dvUplinks that are not going to be assigned to the VMkernel (dvUplink2 in this example) and click the Move Down button until each is listed under Unused Uplinks.
6. Click Ok.
7. Do the same for iSCSI2 by moving dvUplink1 to Unused Uplinks.

8. As you can see in the following example, the highly available Storage Heartbeat is available on all of the physical adapters, while iSCSI1 is bound to one adapter (dvUplink1) and iSCSI2 is bound to the other adapter (dvUplink2). The same vmnic does not have to be used on every ESXi host as long as the appropriate NIC is attached to the proper dvUplink. One of the benefits of configuring a vDS is that new hosts added will be able to leverage the configuration settings that are already in place, including the 1:1 binding.


Step 5: Configure Jumbo Frames

One of the enhancements in vSphere 5 for iSCSI configuration is the ability to adjust Jumbo Frames support from the GUI instead of through the CLI. In order for Jumbo Frames to work, they need to be configured on the vDS as well as on each of the VMkernel Ports, so Jumbo Frames has to be configured at both the cluster vDS level and at each ESXi host level. In addition, the physical switch layer must be able to support Jumbo Frames.
1. From the vCenter GUI, click on Networking in the Home page.
2. Select the vDS iSCSI and click Edit Settings.
3. Under the Properties tab, click Advanced.
4. Change the Maximum MTU to 9000 and click Ok.

Now that Jumbo Frames have been enabled on the vDS, each ESXi host has to be configured.
1. From the vCenter GUI, select the ESXi host and click the Configuration tab.
2. Click on Networking under the Hardware pane.
3. Change the View to vSphere Distributed Switch.
4. On the Distributed Switch: iSCSI, click Manage Virtual Adapters.
5. Select the first vmk# VMkernel port; on the right side of the pane you will see the MTU value under the NIC Settings. Click Edit.
6. Under the NIC Settings, change the MTU to 9000 and click OK.
Do this for all of the vmk# VMkernel ports on all of the hosts in order to enable Jumbo Frames across the environment.

SECTION 2: CONFIGURE VMWARE ISCSI SOFTWARE INITIATOR

Now that the virtual switches are configured and the VMkernel ports are bound to physical NICs in a 1:1 fashion, the next thing to configure is the iSCSI Initiator. This section will detail the installation and configuration of the VMware iSCSI Software Initiator. These steps are done on each ESXi host that needs connectivity to the SAN.

Step 1: Install iSCSI Software Initiator

VMware ESXi 5 does not come with the iSCSI Software Initiator added by default.
1. From the vCenter GUI, select an ESXi host and click on the Configuration tab.
2. In the Hardware pane, click Storage Adapters.
3. In the upper right hand corner, click Add.
4. Select Add Software iSCSI Adapter and click OK.
5. Click OK on the dialogue box to add the iSCSI Adapter.
6. Configure CHAP (Optional). CHAP authentication for access control lists can be very beneficial; in fact, for larger cluster environments, from an ease of administration point of view, CHAP is often the preferred method of volume access authentication. Under the General tab of the iSCSI Software Adapter properties you can configure CHAP if your PS Series SAN is configured for volume access through CHAP. To do this, click on the CHAP button, enter in the appropriate information, and click Ok.
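The software initiator enablement performed in Step 1 can also be done from the ESXi shell. This is a brief sketch added for illustration and is not part of the original steps; the vmhba name assigned to the software adapter varies per host and is reported by the list command.

    # Enable the iSCSI software initiator and confirm the vmhba it registers
    esxcli iscsi software set --enabled=true
    esxcli iscsi software get
    esxcli iscsi adapter list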

Step 2: Binding VMkernel Ports to iSCSI Software Initiator

The next step is to bind each of the iSCSI# VMkernel ports to the iSCSI Software Adapter. This is done to tell the iSCSI Software Adapter which VMkernel ports to use for connectivity to the SAN. In previous versions of ESX 4.x this could only be done via CLI commands, but with ESXi 5 it is now configured through the vCenter GUI.
1. Click the newly installed iSCSI Software Adapter and click Properties.
2. Click the Network Configuration tab in the iSCSI Initiator and click Add.

3. You will see the iSCSI# VMkernel port groups along with the VMkernel Adapter and which physical network card it is assigned to. Note that the Storage Heartbeat is not listed here because it cannot be assigned to the iSCSI Adapter, as it is not bound in a 1:1 fashion. If you do not see any adapters here that should be, verify that each of the iSCSI# VMkernel ports was bound 1:1 to a physical network adapter.
4. Select one of the iSCSI# VMkernel ports and add it by clicking OK.
5. Continue adding all of the available iSCSI# VMkernel ports.
6. When all of the iSCSI# port groups are assigned to the software iSCSI adapter, click Close.

When all of the iSCSI# VMkernel Ports are added to the iSCSI Software Adapter, you will see each of the Port Group Policies show up as Compliant if they are correctly configured. You can also see which physical NIC each one is assigned to. Path status will show Not Used until volumes are actually attached.
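For completeness, the same port binding can be performed with esxcli. This sketch is an addition to the original text; the adapter name vmhba33 and the VMkernel interfaces vmk2 and vmk3 are example values that must be replaced with those from your own host.

    # Bind each iSCSI VMkernel interface to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
    # Confirm the bindings
    esxcli iscsi networkportal list --adapter=vmhba33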

SECTION 3: CONNECT TO THE DELL EQUALLOGIC PS SERIES SAN

Now that the advanced configuration for the vSphere iSCSI Software Initiator has been completed, the next stage is to connect to the Dell EqualLogic PS Series SAN and to the volumes it contains. In this example we will attach the iSCSI Software Initiator to the SAN and to a single volume. More information for complete administration of the Dell PS Series SAN can be found in the PS Series Administrators Guide.

Step 1: Configure Dynamic Discovery of PS Series SAN

The first thing to do is add the PS Series Group IP Address to the dynamic discovery settings of the ESXi host iSCSI Software Initiator. This is done to enable rescans to find new volumes that the ESXi host has access rights to.

1. From the vCenter GUI, select an ESXi host.
2. Click the Configuration tab and select Storage Adapters under the Hardware pane.
3. Click on the iSCSI Software Adapter and click Properties.
4. Click the Dynamic Discovery tab and click Add.
5. In the Add Send Target Server box, type in the Group IP Address of the PS Series SAN and click Ok. Click Close. You will be prompted for a rescan of the host bus adapter. If there are no volumes configured on the PS Series array for this ESXi host, click No; otherwise click Yes to rescan for new volumes.
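The dynamic discovery address and rescan can also be issued from the ESXi shell. The following is a sketch for illustration only; the adapter name and Group IP Address are examples.

    # Add the PS Series Group IP as a Send Targets (dynamic discovery) address, then rescan for devices
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.5.10
    esxcli storage core adapter rescan --adapter=vmhba33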

Step 2: Create and Configure Volume

The next step will be to create a new volume and assign it to the ESXi host. This can be done multiple ways, so refer to the Group Administrators Guide for more information. In this example we will create a 500GB volume and assign it to this ESXi host via the iqn name. If CHAP was previously configured, you could also use CHAP in the Access Control List (ACL).
1. From the Dell EqualLogic PS Series Group Manager GUI, click the Volumes button, then Create Volume. Create a new volume, in this example ESXVOLDEMO, and click Next.
2. Set the volume size, select the options, and click Next.

3. Under iSCSI Access you can choose to use CHAP, IP Address, Initiator Name, or any combination of the three. If using IP Address, only the IP Addresses of the iSCSI# VMkernel ports need to be added. For initial creation just use one IP address and then add the additional IP Addresses to the volume via the Access tab.
4. To find the iSCSI Initiator Name from the vCenter GUI, click on the ESXi host, click on the Configuration tab, and select Storage Adapters under the Hardware pane. The iqn can be copied and pasted into the Group Manager interface for the Initiator Name.
5. There is a check box option for "Allow simultaneous connections from initiators with different IQN names". This option is necessary to enable all of the advanced vSphere capabilities that rely on shared storage. This will need to be checked, and additional ESXi hosts' iqns added to the Access tab, when configuring access for your remaining ESXi hosts. Keep in mind that as a vSphere environment grows, being able to scale the number of connections to each volume is important.

6. Click Next to continue the volume creation.
7. Review the volume creation information on the next screen and click Finish.

Step 3: Connect to a Volume on PS Series SAN

The next step is to connect to the volume on the SAN and verify the connection status. Since the iSCSI access and configuration were completed in the last step, the only thing to do now is to rescan the adapters and make sure the volume appears correctly.
1. In the vCenter GUI, select the ESXi host and click on the Configuration tab. Select Storage Adapters under the Hardware pane and click the iSCSI Software Adapter.
2. Right click on the iSCSI Software Adapter and select Rescan. When this is done, if everything has been configured properly, under Devices there will be a new EQLOGIC iSCSI Disk with the correct size shown.

Step 4: Enabling VMware Native Multipathing - Round Robin

One of the advanced features enabled by configuring the iSCSI Software Initiator in the way we have done is that we can now take advantage of VMware's native MPIO by enabling Round Robin. This, combined with the fan-out intelligent design of the PS Series group, allows for greater and better bandwidth utilization. To configure Round Robin Multipathing on a volume, right click on the volume, click Properties, and click the Manage Paths button. This will display the path information with a default of Fixed Path. To enable Round Robin, select the drop down next to Path Selection and choose Round Robin (VMware). This will reconfigure the volume to utilize a load balancing policy going across all available paths.

NOTE: This needs to be done for every existing and new volume that you want the Round Robin policy to apply to.
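The Round Robin policy can also be applied per device from the ESXi shell, which is convenient when many volumes need to be changed. This sketch is an addition to the original text; identify the EqualLogic device identifier first and substitute it for the placeholder.

    # List devices and note the naa identifier of the EqualLogic volume
    esxcli storage nmp device list
    # Set the path selection policy for that device to Round Robin
    esxcli storage nmp device set --device=<naa_identifier> --psp=VMW_PSP_RR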

To verify that all of the configuration settings were made correctly, in the PS Series Group Manager GUI, select the Volume and then click the Connections tab. You will see the IP Addresses of the iSCSI# VMkernel ports for each ESXi host that has connectivity to the volume. The entire volume name can be seen by expanding out the Path ID.

Step 5: Create VMFS Datastores and Connect More Volumes

Each existing volume can be modified to allow multiple ESXi hosts to attach to it by adding the Initiator Name (iqn) in the Access tab inside the PS Series Group Manager GUI. See the PS Series Group Administration Guide for more information on adding more access control connections to a volume. In order for ESXi to leverage the new volume for virtual machines, it needs to be formatted with VMFS. For a more detailed explanation of VMFS see the VMware Administrator's Guide, but the following is a summary of the steps taken.
1. From the Configuration tab, click Storage.
2. Click Add Storage.
3. Select Disk/LUN and click Next.
4. Select the newly scanned volume and click Next.
5. Choose the File System Version (VMFS-5 or VMFS-3) and click Next.
6. Review the information and click Next.

7. Give the Datastore a name, in this case ESXVOLDEMO, and click Next. It is recommended to name the Datastore the same as the PS Series Volume name.
8. Select the capacity (Maximum Available Space) and click Next.
9. Verify the entire configuration and click Finish. This will format the volume and make it available for the cluster to install VMs to.

FAQ

Q: Can I use Host Profiles to configure iSCSI for new ESXi Hosts?
A: Unlike ESX 4, Host Profiles for ESXi 5 allow the iSCSI configuration to be applied by using an existing profile. The only change the administrator will need to make is to re-verify the Compliance in the Network Configuration tab of the iSCSI Software Adapter, as the Storage Heartbeat will be incorrectly added. Remove the Storage Heartbeat and add the appropriate iSCSI# VMkernel ports.

Q: What is the maximum size of a VMFS Datastore?
A: VMware ESXi 5 supports a VMFS Datastore size of 64TB. As of the writing of this document, the largest volume that can be created on the Dell EqualLogic PS Series SAN is 15TB. Always see the readme for the PS Series firmware to determine the maximums for each firmware version.

SUMMARY

This Technical Report is intended to guide vSphere 5 administrators in the proper configuration of the VMware iSCSI Software Initiator and in connecting it to the Dell EqualLogic PS Series SAN. With all of the advanced features in vSphere that rely on shared storage, it is important to follow these steps to enable them in the vSphere environment. Always consult the VMware iSCSI SAN Configuration Guide for the latest full documentation for configuring vSphere environments.

TECHNICAL SUPPORT AND CUSTOMER SERVICE

Dell support service is available to answer your questions about PS Series SAN arrays.

Contacting Dell
1. If you are a customer in the United States or Canada in need of technical support, call 1-800-945-3355. If not, go to Step 3.
2. If you have an Express Service Code, have it ready. The code helps the Dell automated support telephone system direct your call more efficiently.
3. Visit support.equallogic.com.
4. Log in, or click "Create Account" to request a new support account.
5. At the top right, click "Contact Us," and call the phone number or select the link for the type of support you need.
