
Best Practices for EqualLogic in an

OpenStack Private Cloud


A Dell EqualLogic best practices paper
Dell Storage Engineering
October 2014

Revisions

Date           Description
October 2014   Initial release

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND
TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF
ANY KIND.
© 2014 Dell Inc. Confidential. All rights reserved. Reproduction of this material in any manner whatsoever without the
express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Dell, the Dell logo, Dell Boomi, Dell Precision, OptiPlex, Latitude, PowerEdge, PowerVault,
PowerConnect, OpenManage, EqualLogic, Compellent, KACE, FlexAddress, Force10 and Vostro are
trademarks of Dell Inc. Other Dell trademarks may be used in this document. Cisco Nexus, Cisco MDS, Cisco NX-OS, and Cisco Catalyst are registered trademarks of Cisco Systems, Inc. EMC VNX, and EMC Unisphere are
registered trademarks of EMC Corporation. Intel, Pentium, Xeon, Core and Celeron are registered trademarks of
Intel Corporation in the U.S. and other countries. AMD is a registered trademark and AMD Opteron, AMD
Phenom and AMD Sempron are trademarks of Advanced Micro Devices, Inc. Microsoft, Windows, Windows
Server, Internet Explorer, MS-DOS, Windows Vista and Active Directory are either trademarks or registered
trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat and Red Hat Enterprise
Linux are registered trademarks of Red Hat, Inc. in the United States and/or other countries. Novell and SUSE are
registered trademarks of Novell Inc. in the United States and other countries. Oracle is a registered trademark of
Oracle Corporation and/or its affiliates. Citrix, Xen, XenServer and XenMotion are either registered trademarks or
trademarks of Citrix Systems, Inc. in the United States and/or other countries. VMware, Virtual SMP, vMotion,
vCenter and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States or other
countries. IBM is a registered trademark of International Business Machines Corporation. Broadcom and
NetXtreme are registered trademarks of Broadcom Corporation. Qlogic is a registered trademark of QLogic
Corporation.

BP1081 | Best Practices for EqualLogic in an OpenStack Private Cloud

Table of contents
Revisions
Acknowledgements
Feedback
Executive summary
1   Introduction
    1.1   Audience
2   OpenStack storage concepts
    2.1   Nova ephemeral storage
    2.2   Swift object storage
    2.3   Cinder block storage
3   Cinder block storage service
    3.1   EqualLogic driver for Cinder
    3.2   Driver functions
4   Installing EqualLogic as Cinder back-end storage
    4.1   Prerequisites
    4.2   OpenStack network traffic types
    4.3   Configuring Cinder
    4.4   Adding an EqualLogic array member
    4.5   Using more than one EqualLogic group or pool
    4.6   When to use the Cinder LVM driver
5   EqualLogic MPIO
6   Conclusion
Test configuration details
Additional Resources


Acknowledgements
This best practice white paper was produced by the following members of the Dell Storage team:
Engineering: Clay Cooper
Editing: Camille Daily

Feedback
We encourage readers of this publication to provide feedback on the quality and usefulness of this
information by sending an email to SISfeedback@Dell.com.


Executive summary
This paper provides an overview of and best practices for using an EqualLogic PS Series storage group as
back-end storage for the OpenStack Cinder block storage service.


1 Introduction
OpenStack is a suite of services running on Linux server nodes that provide Infrastructure as a Service
(IaaS) by provisioning virtualized compute instances and networks from a pool of heterogeneous
enterprise hardware. OpenStack also makes block and object storage available to the compute instances
through the Cinder block storage service and the Swift object store service.
Dell EqualLogic PS Series arrays provide a storage solution that delivers the benefits of consolidated
networked storage in a self-managing iSCSI storage area network (SAN) that is affordable and easy to use,
regardless of scale. Built on an advanced, peer storage architecture, EqualLogic storage simplifies the
deployment and administration of consolidated storage environments, enabling:

- Perpetual self-optimization with automated load balancing across disks, RAID sets, connections, cache and controllers.
- Efficient enterprise scalability for both performance and capacity without forklift upgrades.
- Powerful, intelligent and simplified management.

This technical paper serves as a guide to the effective utilization of EqualLogic PS Series storage in an
OpenStack private cloud.

1.1 Audience
This technical white paper is for storage administrators, SAN system designers, storage consultants, or
anyone tasked with building an OpenStack private cloud that includes EqualLogic PS Series storage. It is
assumed that all readers have experience designing, deploying and administering an OpenStack private
cloud and shared storage solutions. Also, there are some assumptions made in terms of familiarity with all
current Ethernet standards as defined by the IEEE (Institute of Electrical and Electronics Engineers) as well
as TCP/IP (Transmission Control Protocol/Internet Protocol) and iSCSI standards as defined by the IETF
(Internet Engineering Task Force).


2 OpenStack storage concepts


It is important to understand the types of storage available to the compute instances in an OpenStack
cloud and which services provide them. Section 2 explains the characteristics and purpose of each storage
type as well as what storage hardware is best suited to provide the raw storage.

2.1 Nova ephemeral storage


Ephemeral storage gets its name from the fact that it does not persist beyond the lifecycle of the compute
instance with which it is associated. It is the virtual disk or disks created by the OpenStack Nova compute
service at the time of compute instance provisioning. The virtual disks are created in the file system
underlying the Nova node hypervisor, usually on storage local to the Nova node, and are typically tens of
GBs in size.
For persistent storage, compute instances use block storage volumes provided by the OpenStack Cinder
service or the OpenStack Swift object store.

2.2 Swift object storage


The OpenStack Swift object store is a highly scalable, redundant, distributed storage system for static,
unstructured data sets, such as those behind content delivery networks or in cold storage archives. It is also the default
storage location for OpenStack Cinder backups and OpenStack Glance images.
Typically Swift object stores are very large, in the tens of TBs. Data replication and availability are handled
by Swift using replica-based, location-aware redundancy. Swift is optimized for local commodity HDD and
hardware RAID is not required or recommended. While this is a good fit for HDD in or directly attached to
the Swift server node, it is not a good fit for traditional SAN storage systems that already provide robust
data protection mechanisms such as RAID, snapshots, and replication.

2.3 Cinder block storage


The OpenStack Cinder block storage service provides volumes to the compute instances for persistent
secondary storage. Each Cinder volume can be attached to only one compute instance at a time, but a
compute instance can have multiple attached volumes. OpenStack even supports provisioning compute
instances to boot directly from Cinder volumes rather than from ephemeral disks.
Cinder nodes can utilize all types of raw storage when provisioning volumes. Storage within or directly
attached to the Cinder nodes themselves, network file system or NAS devices, and hardware or software
SAN storage systems can all be used as back-end storage.
Since Cinder relies on the data protection mechanisms provided by the back-end storage system, SAN
storage systems such as EqualLogic PS Series are an excellent fit.
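As a hedged illustration of the boot-from-volume capability mentioned above, the sketch below uses the era-appropriate CLI clients; the image ID, flavor, and names are placeholders, not values from this paper's test environment:

```shell
# Create a bootable 20 GB volume from a Glance image (placeholder ID).
cinder create --image-id <glance-image-id> --display-name boot-vol 20

# Boot an instance whose root disk (vda) is the Cinder volume rather than
# an ephemeral disk; the final field (0) preserves the volume on terminate.
nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 vol-backed-instance
```

Because the root disk then lives on back-end storage, it persists independently of the compute instance lifecycle, unlike an ephemeral disk.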


3 Cinder block storage service


By default, Cinder uses the LVM driver to provision volumes within raw storage local to the Cinder node. It
uses Linux logical volume management (LVM) to create volumes within the raw storage and then makes
them available to the Nova node using Linux iSCSI target software. The Nova node connects to the Cinder
node volumes using the Linux iSCSI initiator then the Nova node hypervisor presents those volumes to the
compute instances as virtual disks.
Making the Cinder volumes available to the compute instances is a two-step process. First, the volumes
are created by the Cinder service. Once created, the volumes can then be attached to specific compute
instances. iSCSI connections will not be initiated until the volume is attached to the compute instance.
Notice that in this scenario, the Cinder node is in the data path.
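The two-step process above can be sketched with the standard clients; the names and sizes below are hypothetical:

```shell
# Step 1: the Cinder service creates a 10 GB volume. No iSCSI session is
# initiated at this point.
cinder create --display-name data-vol 10
cinder list    # note the volume ID once its status reads "available"

# Step 2: attach the volume to a compute instance; only now is the iSCSI
# connection made and the volume presented as a virtual disk.
nova volume-attach <instance-id> <volume-id> auto
```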

Figure 1  Logical diagram of the Cinder block storage service using the default LVM driver and local storage


3.1 EqualLogic driver for Cinder


The Cinder block storage service has the capability to use other drivers, also called plugins, which enable
the Cinder node to provision volumes directly onto external storage systems. A driver for EqualLogic is
included in the OpenStack distribution as of the Havana release, on which Red Hat Enterprise Linux
OpenStack Platform 4 is based.
The EqualLogic driver, eqlx.py, is written in Python like the rest of OpenStack. The driver allows the Cinder
node to handle the control plane functions of volume administration and access control by connecting to
the management interface of the EqualLogic storage group via Secure Shell (SSH). One benefit of the
EqualLogic driver is that the Cinder node is taken out of the production data path and the Nova node
connects directly to the EqualLogic storage volumes using iSCSI.
One thing to note is that the Cinder node (rather than the Nova node) will make an iSCSI connection to
the EqualLogic storage group when creating Glance images from volumes or volumes from Glance
images, or when backing up volumes to an object store such as Swift.

Figure 2  Logical diagram of the Cinder block storage service using the EqualLogic driver and a PS Series storage group


3.2 Driver functions
As mentioned, the EqualLogic driver gives Cinder the ability to initiate volume administration tasks at the
EqualLogic storage group using the management interface.
The following functions are direct calls to native EqualLogic functions:
- Volume create and delete
- Volume attach to and detach from a compute instance
  - Access rules are assigned to the volume that grant access to the IQN of the Nova node iSCSI initiator, or to a set of credentials when using CHAP
- Snapshot create and delete
- Create volume from snapshot
- Get volume information
- Clone volume
- Extend volume

The following functions are generic volume actions performed by the Cinder node that do not leverage
native EqualLogic functionality. They require iSCSI connectivity from the Cinder node to the EqualLogic
storage:
- Create volume from Glance image
- Create Glance image from volume
- Volume backup to an object store
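As a hedged sketch, several of the functions above surface in the Cinder CLI roughly as follows (all IDs and names are placeholders):

```shell
# Native EqualLogic calls: snapshot, create-from-snapshot, clone, extend.
cinder snapshot-create --display-name snap1 <volume-id>
cinder create --snapshot-id <snapshot-id> 10
cinder create --source-volid <volume-id> --display-name clone1 10
cinder extend <volume-id> 20

# Generic path: the Cinder node itself connects to the EqualLogic group
# over iSCSI to copy the Glance image into the new volume.
cinder create --image-id <image-id> --display-name from-image 10
```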


4 Installing EqualLogic as Cinder back-end storage


Section 4 outlines the tasks and considerations involved in configuring the Cinder block storage service to
use an EqualLogic PS Series storage group as back-end storage for provisioning volumes. Figure 3
illustrates the test environment used to analyze EqualLogic driver function and best practices.

Figure 3  The OpenStack private cloud test environment with EqualLogic PS Series storage


4.1 Prerequisites
The instructions below assume the following:
- A fully deployed OpenStack private cloud using Red Hat Enterprise Linux 6.5 and OpenStack Platform 5 (based on the OpenStack Icehouse release), including running Nova and Cinder services.
- RHN access configured on each node, including the following RHN channels:
  - rhel-x86_64-server-6-ost-5
  - rhel-x86_64-server-6
  - rhel-x86_64-server-6-ost-foreman
- An initialized and configured EqualLogic PS Series storage group.
- Proper network connectivity among the OpenStack nodes and the EqualLogic storage group. See Section 4.2 for more detail on network connectivity requirements.
- The Linux iSCSI initiator installed on the Nova and Cinder nodes:
  yum install iscsi-initiator-utils
- An updated python-paramiko RPM on the Cinder node, to avoid an SSH failure when connecting to the EqualLogic storage:
  yum update python-paramiko
  For more information, see the following Cinder bug: https://bugs.launchpad.net/cinder/+bug/1150720
- A properly configured SAN interface on the Nova and Cinder nodes.
For SAN interface configuration best practices in Red Hat Enterprise Linux, see RHEL 6.3 NIC optimization
and best practices with EqualLogic SANs:
http://en.community.dell.com/techcenter/extras/m/white_papers/20438152.aspx

4.2 OpenStack network traffic types


There are many types of network traffic in an OpenStack cloud. Logical isolation is important for some
network traffic types, such as the PXE-based bare metal provisioning of OpenStack nodes, while other
types of network traffic can be converged depending on the hardware available and the performance
required. The following is a list of potential networks in an OpenStack cloud.
- PXE network for bare metal provisioning of OpenStack nodes.
- Public network for compute instances to access the internet using assigned floating IP addresses. The public network also allows end users to access the OpenStack service APIs on the nodes.
- Management network for inter-node communication including database traffic, message queuing and HA protocols.
- Storage network (SAN) for iSCSI traffic between the Nova nodes and the Cinder nodes or external storage systems.
- Private network for connectivity among compute instances using fixed IP addresses.


- Out-of-band network for access to server, storage and switch management interfaces and remote access controllers.
Figure 4 illustrates the network connectivity as implemented in the simplified test environment. It consists
of the following networks.
- A management network for inter-node communication and for the hardware management interfaces. This network also serves as the public network.
- A SAN for the iSCSI traffic among the OpenStack nodes and the EqualLogic storage group.
- A private network for connectivity among the compute instances.

Figure 4  Network connectivity requirements for the OpenStack nodes and the EqualLogic storage group


4.3 Configuring Cinder
To configure the Cinder block storage service, perform the following steps at the Cinder node.
1. Edit /etc/cinder/cinder.conf to include the following parameters:
[DEFAULT]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip=IP_EQLX
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
Other optional parameters:
san_thin_provision=true|false
eqlx_use_chap=true|false
eqlx_chap_login=EQLX_UNAME
eqlx_chap_password=EQLX_PW
eqlx_cli_timeout=30
eqlx_cli_max_retries=5
san_ssh_port=22
ssh_conn_timeout=30
san_private_key=SAN_KEY_PATH
ssh_min_pool_conn=1
ssh_max_pool_conn=5

For more information on configuring the Cinder EqualLogic driver, see the official RHEL OpenStack Platform documentation: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Configuration_Reference_Guide/section_volume-drivers.html#dell-equallogic-driver
2. Restart the Cinder volume service.
/etc/init.d/openstack-cinder-volume restart
3. Look for driver initialization success in the Cinder volume log file.


grep 'EQL-driver: Setup is complete' /var/log/cinder/volume.log

4.4 Adding an EqualLogic array member


Array members can be added to an EqualLogic PS Series group at any time, up to the group maximum of
16 array members. Once added to the group, assign the array member to the pool that the Cinder
EqualLogic driver is configured to use in /etc/cinder/cinder.conf.
The Cinder EqualLogic driver regularly checks for pool status, once a minute by default. New space
available to the pool will be automatically detected.

4.5 Using more than one EqualLogic group or pool


The Cinder block storage service can be configured to make multiple back-end storage types available for
volume provisioning. This means that more than one EqualLogic group and/or pool can be used as back-end storage for the same Cinder service. Below is an example of a cinder.conf file that configures two
EqualLogic groups as two separate Cinder storage back-ends.
[DEFAULT]
enabled_backends=backend1,backend2
san_ssh_port=22
ssh_conn_timeout=30

[backend1]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX1
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL

[backend2]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend2
san_ip=IP_EQLX2


san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
Once the back-ends are configured in cinder.conf and activated by restarting the Cinder volume service,
they can be associated with a volume type and then chosen during volume provisioning.
To associate a back-end to a volume type:
cinder type-create EQL-group-1
cinder type-key EQL-group-1 set volume_backend_name=backend1
Volume types can be useful for differentiating between EqualLogic storage groups or pools with different performance profiles, for example, array members with SATA HDDs versus array members with SSDs.
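Once a volume type is mapped to a back-end as shown above, end users can select it at creation time. A minimal example, reusing the EQL-group-1 type from above with an arbitrary size:

```shell
# Provision a 20 GB volume on whichever back-end advertises
# volume_backend_name=backend1, selected via the EQL-group-1 type.
cinder create --volume-type EQL-group-1 --display-name typed-vol 20
```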
It is also possible to add more than one EqualLogic group or pool to the same back-end storage type. This
might be done when adding a second EqualLogic group or pool with an identical performance profile.
Notice that in the cinder.conf below, there are still two different EqualLogic groups but now with a single
back-end name.
[DEFAULT]
enabled_backends=backend1
san_ssh_port=22
ssh_conn_timeout=30

[backend1]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX1
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL

[backend2]


volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX2
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
When more than one back-end storage is configured with the same back-end name, the Cinder scheduler determines which specific back-end to provision a volume on by filtering based on availability zones, capacity, and capability, and then choosing among the remaining back-ends based on available capacity.
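As a rough sanity check before restarting the volume service, the sketch below confirms that every name listed in enabled_backends has a matching configuration section. The file path and contents are a self-contained example, not a live configuration:

```shell
# Write an example cinder.conf fragment to a temporary file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
enabled_backends=backend1,backend2

[backend1]
volume_backend_name=backend1

[backend2]
volume_backend_name=backend1
EOF

# Confirm each enabled back-end has a corresponding [section].
missing=0
for b in $(awk -F= '/^enabled_backends=/ {gsub(",", " ", $2); print $2}' "$conf"); do
    grep -q "^\[$b\]" "$conf" || { echo "missing section: [$b]"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all enabled back-ends have matching sections"
rm -f "$conf"
```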
For more information on configuring multiple back-end storage types, see: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_manage-volumes.html#multi_backend


4.6 When to use the Cinder LVM driver


It is possible to use the Cinder LVM driver with an EqualLogic PS Series Group and there are use cases for
doing so. EqualLogic storage pools require at least one array member, and an array member can only be a
member of one pool at a time. So dedicating a storage pool to Cinder requires a dedicated array member.
It is technically possible for Cinder to share a storage pool with another solution. Cinder is only aware (in
its database) of the volumes it creates, and will not delete or attempt to attach compute instances to other
volumes that may already exist. The Cinder EqualLogic driver checks the available space of the storage
pool prior to creating each volume and presents an error if there is insufficient free space in the pool.
Space available for Cinder volumes can further be limited by quota.
However, if multiple Cinder back-ends are required and only one EqualLogic array member is available,
the Cinder LVM driver can be used instead. By creating very large volumes and connecting the Cinder
node to them using iSCSI, the Cinder LVM driver can provision volumes within them as it would with local
or direct attached storage. See Figure 1 for a logical diagram of the Cinder LVM driver in use.
1. At the EqualLogic storage:
a. Create two very large volumes.
b. Set up rules to allow access by the Cinder node.
2. At the Cinder node:
a. Install Host Integration Tools for Linux for MPIO to the volumes.
b. Login to the target volumes.
c. Identify the multipath device for each volume in /dev/eql/.
d. Initialize volumes as LVM physical volumes.
pvcreate /dev/eql/volume1
pvcreate /dev/eql/volume2
3. Create an LVM volume group on each physical volume.
vgcreate cinder-volumes-1 /dev/eql/volume1
vgcreate cinder-volumes-2 /dev/eql/volume2
4. Edit /etc/cinder/cinder.conf to contain the lines below.
5. Edit /etc/lvm/lvm.conf to set issue_discards = 1 to pass Linux TRIM and SCSI UNMAP commands
to the EqualLogic storage, freeing up space as Cinder volumes are de-provisioned.
6. /etc/init.d/openstack-cinder-volume restart
[DEFAULT]
enabled_backends=lvmdriver-1,lvmdriver-2
[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver


volume_backend_name=LVM_iSCSI
[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI


5 EqualLogic MPIO
Multi-path I/O to EqualLogic volumes from OpenStack nodes is currently not supported and is not
enabled with the default Nova configuration in RHEL OSP 5. However, it is technically possible to enable
iSCSI multipath I/O on a particular compute instance using the following steps. It is recognized that these
steps are not friendly to the OpenStack workflow. They are provided as a guide for OpenStack developers
to fully enable EqualLogic MPIO in the future.
1. Make the following change to /etc/nova/nova.conf:

iscsi_use_multipath=true

2. Restart the Nova API and compute services at the nodes running each service.
/etc/init.d/openstack-nova-api restart
/etc/init.d/openstack-nova-compute restart
Linux multipathd will not be used to enable multi-path I/O to EqualLogic volumes. However, currently
Nova code requires multipathd, specifically the multipath utility, to be installed when iSCSI multipath is set
to true.
3. Install, enable, and configure the native multipath application at the Nova node running the
compute service.
yum install device-mapper-multipath
mpathconf --enable
mpathconf --find_multipaths y
Host Integration Tools (HIT) for Linux provide EqualLogic recommended multi-path (MPIO) functionality
by creating a multipath device for each volume and providing a kernel driver which together intelligently
direct I/O to the volume slices on the correct EqualLogic array members. HIT for Linux also provides
command-line tools for discovering and connecting to EqualLogic volumes and a performance tuning
system check. Follow the instructions below to install HIT for Linux.
4. Install the HIT for Linux at the Nova node running the compute service.
a. Download the latest HIT for Linux ISO from the Dell EqualLogic support site (login required): https://eqlsupport.dell.com/secure/login.aspx
b. Mount the ISO image from within Linux.
c. Change to the directory of the ISO mount point, for example:
cd /media/CDROM
d. Run the HIT for Linux installer script.
./install --nogpgcheck
e. Follow the instructions, choosing to include only the SAN interface subnets.


Eqltune, the EqualLogic performance tuning utility, will be run automatically by the HIT for Linux
installer. This utility will detect and fix problematic settings for block devices, Ethernet adapters,
sysctl tunable options, and more. Most importantly, it configures RHEL 6 to allow I/O over multiple
interfaces on the same SAN subnet. Eqltune will record and can, if necessary, restore the original
configuration. Run eqltune from the command line for further information.
f. Once complete, source the HIT bash configuration file into the shell for command-line completion of the EqualLogic tools. Note the space between the period and the full path.

. /etc/bash_completion.d/equallogic
5. Using the Horizon web interface:
a. Create a volume on the EqualLogic group.
b. Attach the volume to a running compute instance.
6. At the Nova node hosting the compute instance, multiple iSCSI sessions can be observed. Initially,
one iSCSI session per SAN interface will be created. HIT will monitor the volume sessions and will
create additional sessions if the volume is distributed across more than one array member. For
example, if the volume is distributed across two array members, HIT will ensure that four iSCSI
sessions are created. Use the following command to view current iSCSI sessions.
iscsiadm -m session
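For example, with two SAN interfaces and a volume spread across two array members, four sessions to the same target IQN would be listed. The snippet below counts sessions per target in sample output; the sample lines are fabricated for illustration, not captured from the test environment:

```shell
# Fabricated `iscsiadm -m session` output: four sessions, one target IQN.
sample='tcp: [1] 10.10.0.160:3260,1 iqn.2001-05.com.equallogic:0-example-volume-a
tcp: [2] 10.10.0.161:3260,1 iqn.2001-05.com.equallogic:0-example-volume-a
tcp: [3] 10.10.0.162:3260,1 iqn.2001-05.com.equallogic:0-example-volume-a
tcp: [4] 10.10.0.163:3260,1 iqn.2001-05.com.equallogic:0-example-volume-a'

# Count sessions whose target (last field of each line) is the example volume.
count=$(printf '%s\n' "$sample" | awk '{print $NF}' | grep -c 'volume-a')
echo "iSCSI sessions to volume-a: $count"
```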
7. Unfortunately, Nova references a single-path device node when adding the virtual disk to the compute instance, and I/O will not be distributed across the multiple iSCSI sessions. Correcting this requires identifying the multipath device created by HIT for the volume and modifying the compute instance configuration file to reference it. This can be done with the following steps.
a. List the Cinder volumes and identify the volume attached to the compute instance.
cinder list
b. Find the EqualLogic multipath device that corresponds to this volume.
ls -al /dev/eql/
c. Find the Nova ID of the compute instance.
nova list
d. Find the KVM domain name of the compute instance.
nova show <compute instance ID>

...
| OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
...
e. Find the KVM domain ID of the compute instance using the KVM domain name.


virsh list

f. Find the correct virtual disk.

virsh domblklist <domain ID>


g. Edit the KVM domain configuration file for the compute instance and change the source
device of the virtual disk to the EqualLogic multipath device.
virsh edit <domain ID>
Before:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/disk/by-path/ip-10.10.0.160:3260-iscsi-iqn.2001-05.com.equallogic:0-af1ff6-806f921e4-a3cb0d3e6ab53f3e-volume-b03dc257-8b01-41e9-bf8a-8c0a0a28047f-lun-0'/>
<target dev='vdb' bus='virtio'/>
<serial>b03dc257-8b01-41e9-bf8a-8c0a0a28047f</serial>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
After:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/eql/volume-37283b00-b81c-4298-a9ef-353d6a2eabe5'/>
<target dev='vdc' bus='virtio'/>
<serial>37283b00-b81c-4298-a9ef-353d6a2eabe5</serial>
<alias name='virtio-disk2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>


h. Restart the compute instance.
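The lookups in steps a through g could be scripted roughly as follows. This is an unverified sketch with placeholder names: the volume display name, the instance name, and the /dev/eql/ naming convention (volume-<Cinder UUID>, as seen in the example XML above) are assumptions to adapt:

```shell
# Placeholder inputs used throughout this sketch.
instance=my-instance

# Step a: find the Cinder volume ID by its display name.
volume_id=$(cinder list | awk '/data-vol/ {print $2}')

# Step b: the HIT multipath device, assuming volume-<UUID> naming.
eql_dev=/dev/eql/volume-$volume_id

# Steps c-d: resolve the KVM domain name from the Nova instance.
domain=$(nova show "$instance" | awk -F'|' '/instance_name/ {gsub(/ /, "", $3); print $3}')

# Step f: list the virtual disks to identify the one to change.
virsh domblklist "$domain"

# Step g: manually edit the <source dev=...> element to point at $eql_dev.
virsh edit "$domain"
```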


6 Conclusion
Cinder block storage is an important component of an OpenStack private cloud, providing persistent
storage or bootable volumes for Nova compute instances.
A Cinder driver, also called a plugin, specifically for EqualLogic is included in the community distribution of
OpenStack as of the Havana release. The driver allows an EqualLogic PS Series storage group to serve as
back-end storage for the Cinder service. The Cinder service can provision volumes on the EqualLogic group
automatically, allowing the Nova node to connect directly to the storage group using iSCSI on behalf of
the compute instances.
Configuring Cinder to use EqualLogic as back-end storage is a very straightforward process. The
EqualLogic group can be easily scaled out without the need for Cinder reconfiguration. The Cinder service
can utilize multiple array members or pools per storage group, and even supports multiple groups. Pools
or groups with different performance profiles can be differentiated using Cinder volume typing, which allows end users to choose back-end storage types at the time of volume creation within the Horizon interface.
While EqualLogic MPIO does not work out of the box, it can be enabled on a per compute instance basis.
The steps are provided as a guide to future OpenStack development.


Test configuration details

Hardware            Description

Blade enclosure     (1) Dell PowerEdge M1000e chassis:
                    - CMC firmware: 4.5

Blade servers       (2) Dell PowerEdge M620 servers (Cinder nodes):
                    - Red Hat Enterprise Linux 6.5 x86_64
                    - RHEL OpenStack Platform 5
                    - BIOS version: 2.2.7
                    - iDRAC firmware: 1.56.55
                    - (2) Intel Xeon E5-2620
                    - 16GB RAM
                    - Dual port Intel x520-k 10GbE CNA (driver: 3.17.3, firmware: 15.0.28)
                    (2) Dell PowerEdge M820 servers (Nova nodes):
                    - Red Hat Enterprise Linux 6.5 x86_64
                    - RHEL OpenStack Platform 5
                    - BIOS version: 2.0.24
                    - iDRAC firmware: 1.56.55
                    - (4) Intel Xeon E5-4620
                    - 256GB RAM
                    - Dual port Intel x520-k 10GbE CNA (driver: 3.17.3, firmware: 15.0.28)
                    - Dell EqualLogic Host Integration Tools for Linux 1.3.0

Rack servers        (2) Dell PowerEdge R620 servers (RHOS Manager):
                    - Red Hat Enterprise Linux 6.5 x86_64
                    - RHEL OpenStack Platform 5
                    - BIOS version: 2.2.2
                    - iDRAC firmware: 1.56.55
                    - (2) Intel Xeon E5-2650
                    - 64GB RAM
                    - Dual port Broadcom BCM57810 10GbE CNA (driver: 1.72.51-0, firmware: 7.8.53)

Blade I/O modules   (2) Dell 10Gb Ethernet Pass-through modules

SAN switches        (2) Dell Force10 S4810:
                    - Firmware: 9.3.0.0

SAN array members   (2) Dell EqualLogic PS6210:
                    - (2) 10GbE network adapters
                    - (24) SED Seagate ST900MM0036 900GB 10K (drive firmware: LEF5)
                    - Array firmware: 7.0.3


Additional Resources
EqualLogic Configuration Guide:
http://en.community.dell.com/dell-groups/dtcmedia/m/mediagallery/19852516/download.aspx
EqualLogic Compatibility Matrix (ECM):
http://en.community.dell.com/techcenter/storage/w/wiki/2661.equallogic-compatibility-matrix.aspx
EqualLogic Switch Configuration Guides:
http://en.community.dell.com/techcenter/storage/w/wiki/4250.switch-configuration-guides-by-sis.aspx
The latest EqualLogic firmware updates and documentation (site requires a login):
http://support.equallogic.com
Official RHEL OpenStack Platform documentation:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Configuration_Reference_Guide/section_volume-drivers.html#dell-equallogic-driver
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_manage-volumes.html#multi_backend
RHEL 6.3 NIC optimization and best practices with EqualLogic SANs:
http://en.community.dell.com/techcenter/extras/m/white_papers/20438152.aspx
Dell Tech Center Storage page:
http://en.community.dell.com/techcenter/storage/
