www.citrix.com
Contents
Introduction
Citrix XenServer for Enterprise-ready Virtualization
EMC CLARiiON CX4 Storage Platform
Introduction
Pre-configuring XenServer
EMC CLARiiON and XenServer iSCSI Setup
Introduction
IT departments face the constant demand to respond more rapidly to changing business priorities,
application demands, and user dynamics, all without compromising security or manageability or
increasing server count. They must deliver robust data management, business uptime, and complete
backup and recovery capabilities. In order to address these challenges, enterprises need to:
Adjust allocation of server and storage resources for different application workloads on the fly
Deploy a tightly unified server and storage virtualization solution that is reliable, not overly complex, and leverages all available capabilities
This document presents setup and configuration instructions for using EMC CLARiiON CX4
series storage systems as network-attached storage solutions for Citrix XenServer.
Delivering up to twice the performance and scale of the previous CLARiiON generation,
CLARiiON CX4 is the leading midrange storage solution to meet a full range of needs, from
departmental applications to data-center-class business-critical systems.
Basic features include:
Hot-pluggable I/O modules: 4 Gb/s FC, 8 Gb/s FC, 1 Gb/s iSCSI, and 10 Gb/s iSCSI
RAID levels 0, 1, 1/0, 3, 5, and 6, individual disk support, and global hot sparing
For more information on the EMC CLARiiON CX4 Series visit the EMC website:
http://www.emc.com/products/series/cx4-series.htm
Managed LUNs: Managed LUNs are accessible via the StorageLink feature included in Citrix
XenServer Enterprise and Platinum Editions, and are hosted on a variety of storage arrays.
LUNs are allocated on demand via StorageLink and mapped dynamically to the host via the
StorageLink service while a VM is active. All the thin provisioning and fast clone capabilities
of the device are exposed via StorageLink.
VHD files: The VHD format can be used to store VDIs in a sparse format. Being sparse,
the image file grows proportionally to the number of writes to the disk by the Virtual
Machine (VM), so large portions of the disk which are typically unused do not consume
unnecessary space. VHD on NFS, iSCSI, or Hardware HBA storage repositories can be
shared among all hosts in a pool.
The section entitled XenServer Shared Storage Options discusses each option in more detail.
Managing Storage
There are four XenServer object classes that are used to describe, configure, and manage storage:
Storage Repositories (SRs) are storage targets containing homogeneous virtual disks
(VDIs). SR commands provide operations for creating, destroying, resizing, cloning,
connecting and discovering the individual Virtual Disk Images (VDIs) that they contain. A
storage repository is a persistent, on-disk data structure. So the act of "creating" a new SR is
similar to that of formatting a disk. SRs are long-lived, and may be shared among XenServer
hosts or moved between them. There are several classes of XenServer SRs available:
o Local Storage: By default, XenServer uses the local disk on the physical host on
which it is installed. The Linux Logical Volume Manager (LVM) is used to manage
VM storage. A VDI is implemented in VHD format in an LVM logical volume of
the specified size.
o NFS: NFS is a ubiquitous form of storage infrastructure that is available in many
environments. XenServer allows existing NFS servers that support NFS V3 over
TCP/IP to be used immediately as a storage repository for virtual disks (VDIs).
VDIs are stored in the Microsoft VHD format only.
o iSCSI and Fibre Channel: The creation of iSCSI or Fibre Channel (Hardware HBA)
SRs involves erasing any existing data on a specified LUN. A LUN will need to be
created on the EMC storage before creating the XenServer SR. Volume management
is performed via LVM (Logical Volume Manager), and the underlying VDI storage
on an iSCSI or FC SR is VHD.
o Advanced StorageLink Technology: Citrix StorageLink lets users automate the
configuration and provisioning of virtual machine storage, taking advantage of
advanced features of the attached storage array. StorageLink enables the user to
create Citrix XenServer virtual machines from logical vendor-specific storage
repositories that support advanced capabilities such as snapshots, cloning, thin
provisioning, and data deduplication. StorageLink also uses advanced storage
capabilities to rapidly create virtual machines, increase storage utilization, and
provide improved business continuity while lowering total cost of ownership. Citrix
StorageLink seamlessly integrates with storage arrays using either the standards-based
SMI-S interface or via a custom vendor-specific StorageLink Storage Adapter.
Virtual Disk Images (VDIs) are an on-disk representation of a virtual disk provided to a
VM. VDIs are the fundamental unit of virtualized storage in XenServer. Similar to SRs,
VDIs are persistent, on-disk objects that exist independently of XenServer Hosts.
Physical Block Devices (PBDs) represent the interface between a physical server and an
attached SR. PBDs are connector objects that allow a given SR to be mapped to a
XenServer host. PBDs store the device configuration fields that are used to connect to and
interact with a given storage target.
Virtual Block Devices (VBDs) are a connector object (similar to the PBD described above)
that allows mappings between VDIs and Virtual Machines (VMs). In addition to providing a
mechanism to attach (or plug) a VDI into a VM, VBDs allow the fine-tuning of parameters
regarding QoS (quality of service), statistics, and the bootability of a given VDI.
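The four object classes above can also be inspected from the XenServer control domain with the xe CLI. A minimal sketch follows; the UUIDs are placeholders for values from your own pool, and exact parameter names may vary slightly between XenServer releases:

```shell
# List all Storage Repositories with their type and content type
xe sr-list params=uuid,name-label,type,content-type

# Show the VDIs contained in one SR (SR UUID is a placeholder)
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size

# PBDs: which hosts an SR is plugged into, and whether the plug is active
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached

# VBDs: how a VDI is attached to a VM (device position, bootability)
xe vbd-list vdi-uuid=<vdi-uuid> params=uuid,vm-name-label,device,bootable
```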
Storage Multipathing
XenServer 5.0 introduced Active/Active multipathing for iSCSI and FC I/O datapaths.
Dynamic multipathing uses a round-robin load-balancing algorithm, so both routes
will have active traffic on them during normal operation. Multipathing can be enabled via
XenCenter or on the command line.
XenServer 5.5 supports ALUA (Asymmetric Logical Unit Access), a relatively new
multipathing technology for asymmetric arrays. EMC CLARiiON CX4 arrays are ALUA compliant.
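Enabling multipathing on the command line can be sketched as follows, assuming the host is already in maintenance mode; the host UUID is a placeholder for your own environment:

```shell
# Identify the host to configure
xe host-list params=uuid,name-label

# Enable device-mapper multipathing on the host (run while in maintenance mode)
xe host-param-set uuid=<host-uuid> other-config:multipathing=true
xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp

# Re-enable the host (exit maintenance mode) afterwards
xe host-enable uuid=<host-uuid>
```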
Environment Used
For the creation of this document the following hardware was used:
FC Cables
Hardware Installation
For the installation of server hardware, HBAs, connectivity components (e.g. network or FC
switches) and storage arrays, please refer to the documentation provided by the relevant
manufacturer.
Documentation for the EMC CLARiiON CX4 array can be found on EMC Powerlink
(https://powerlink.emc.com/).
Details on supported arrays and required FLARE and SMI-S provider versions can be found on the
StorageLink Hardware Compatibility List: http://hcl.vmd.citrix.com/SLG-HCLHome.aspx
Software Installation
For the installation of server software, firmware and storage array software, please refer to the
documentation provided by the relevant manufacturer.
Documentation for the various software components for the EMC CLARiiON CX4 array (e.g.
FLARE and Navisphere software) can be found on EMC Powerlink (https://powerlink.emc.com/).
Documentation for setting up Citrix XenServer and components can be found on the Citrix
Knowledge Center: http://support.citrix.com/product/xens/
If this is the Pool Master, you will be asked to select a new master. If there are VMs running on this
host, it will also XenMotion these VMs to another server in the pool. Select the host you want to be
the new master and click Enter Maintenance Mode.
Once the server is in Maintenance Mode* right-click on the host and select Properties. In the
Properties window select the Multipathing option on the left and mark Enable multipathing on this
server. Click OK.
If you put the pool master in Maintenance Mode, XenCenter needs to reconnect to the new pool
master, which will take some time. Wait until XenCenter automatically reconnects to the pool.
After enabling multipathing exit Maintenance Mode by right-clicking on the server and selecting
Exit Maintenance Mode.
Repeat the above procedure for all servers in the Resource Pool.
On the General tab of the Storage Systems Properties window that appeared make sure that the
Storage Group checkbox is marked and click OK.
Provisioning Storage
In Navisphere click on the Provision icon on the left. In the Welcome window of the Storage
Provisioning Wizard click Next.
In the Select Servers window select Continue without assigning LUNs at this time and click Next.
In the Select Storage System window ensure your storage system is selected and click Next.
In the Select Storage Pool window select the RAID set you want this LUN to reside on. Click Next.
In the LUN Properties window you can choose to give the LUN a name or use an automatically
generated LUN name. You can also set the size of the LUN in this window. Set these values to what
is relevant for your environment and click Next.
In the Select Folder window you have the option to add the LUN to an existing or new folder.
Folders are a way to organize your LUNs in the Navisphere management interface. Click Next.
Verify the values in the Summary window and click Finish to create the LUN.
If the LUN creation is completed successfully click Finish to close the Results window.
This LUN should now be visible in Navisphere under the LUN Folders section (location depending
on which Storage Processor is the current owner) as well as under the RAID Group on which the
LUN was created.
After the installation completes, start a command prompt, go to the c:\Program
Files\EMC\ECIM\ECOM\bin directory, and run TestSMIProvider.exe.
Accept the defaults for Connection Type, Host, Port, Username and Password*, and logging settings.
* EMC best practices recommend changing the default admin password and creating a separate
user when using the SMI-S Provider. If these best practices were followed, fill in the appropriate
credentials.
Add System: y
Array Type: 1
User: <fill in the username for the array, typically the same user as for logging into
Navisphere>
After some time confirmation of successfully adding the array should be displayed.
Press Enter to continue and once back at the prompt select q to exit TestSMIProvider.exe
Configuring StorageLink
Open the StorageLink Manager and connect to the StorageLink Gateway.
The first step is to add the XenServer host systems. In the middle pane, select Add Hypervisor Host.
Enter the hostname or IP address of the XenServer master in the Hostname field and provide valid
credentials for connecting to the XenServer Resource Pool. Uncheck Enable Site Recovery for this
Host.
Click OK. StorageLink will now enumerate the hosts in the Resource Pool and the VMs in the
pool, as well as available iSCSI and FC initiators. These can be viewed by expanding the tree in the
left pane of the StorageLink Manager.
Next, add a new storage array. Select Storage Infrastructure from the left pane and select Add Storage
Systems from the middle pane.
In the Add Storage Adapter wizard fill out the relevant information.
Storage adapter: Select EMC CLARiiON Storage Adapter (SMI-S) from the drop down box.
CIMOM IP address: <the IP address of the server on which the SMI-S provider is
installed>
User name: <the username for the SMI-S provider (not the array credentials)>
Click OK. After the job completes, the added array can be viewed in the left pane.
In the Properties window, select StorageLink Gateway on the left and fill out the StorageLink
Gateway details.
Username: <the username for the StorageLink Gateway service, as entered during the
StorageLink installation>
Select Advanced StorageLink technology as the type of new storage and click Next.
Give the Storage Repository a name and select EMC CLARiiON from the drop down list. Click
Next.
After the Storage Repository has been created you can start creating VMs on this Storage
Repository. During the VM creation, the correct initiator (FC or iSCSI) will be configured
automatically on the array. If changes are required for the initiator settings on the array, please
follow the instructions in the EMC documentation.
Pre-configuring XenServer
It is a recommended best practice to create a separate network for your Storage Repository, apart
from your management traffic and VM network traffic. The storage traffic should be on separate subnets.
This section describes the steps required to create a separate storage network.
Give the new interface a recognizable name, and select the Network you want the dedicated
interface on.
Click on the Use these IP and DNS settings: radio button and enter a starting IP address for the
NICs in the Network.
Repeat the above steps for each NIC dedicated to storage, and click OK.
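The same per-NIC IP assignment can also be done with the xe CLI instead of XenCenter. A sketch, assuming static addressing; the PIF UUID, IP address and netmask are placeholder values:

```shell
# Find the PIF (physical interface) that backs the storage NIC on each host
xe pif-list params=uuid,device,host-name-label,IP

# Assign a static IP address to the dedicated storage interface
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
    IP=192.168.10.11 netmask=255.255.255.0
```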
In the environment used for this document, two paths were available from each server, and two
dedicated storage networks were created.
In the Create Initiator Record window fill out the relevant information for the XenServer host you
are adding.
WWN/IQN: Fill out the IQN of the XenServer host you are adding. Use the relevant
IQNs for your environment as noted in the "Preparing XenServer" section above.
SP - port: Select the iSCSI port on your CLARiiON array which the XenServer host will
have access to. You need to create an initiator record for each path to the array.
Failover Mode 4 is Asymmetric Active/Active and is based on the Asymmetric Logical Unit
Access (ALUA) standard. A whitepaper for this feature can be found on EMC Powerlink by
searching for "Asymmetric Active/Active" in the documents section.
Filled out, the window will look similar to this:
Click OK.
Click Yes to confirm.
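The IQN each XenServer host presents can be read (or changed) with the xe CLI before filling out the initiator record. A sketch; the host UUID and the example IQN value are placeholders:

```shell
# Read the iSCSI IQN of a host
xe host-param-get uuid=<host-uuid> param-name=other-config param-key=iscsi_iqn

# Optionally set a custom IQN (example value shown; adjust for your naming scheme)
xe host-param-set uuid=<host-uuid> \
    other-config:iscsi_iqn=iqn.2009-01.com.example:xenserver1
```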
When returning to the Connectivity Status window, click Refresh All to make the added Host
Initiator Record visible.
To add additional paths to this initiator, select the initiator and select Create.
Repeat the steps described above, but since these are additional paths to an existing initiator record
we can mark Selected Hosts. Ensure that you use a different value for the SP - port for each
additional path to the array.
Repeat the steps above for each initiator record you want to add.
Once all initiator records are added for each XenServer host and each path, click Refresh All to
make all records visible. In the environment used for the creation of this document, two XenServer
hosts were used and four paths were available for each host.
Click OK to close the window.
In the Create Storage Group Window enter a name for this Storage Group and select OK.
Expand the Storage Group tree and right-click the newly created Storage Group and select
Properties.
Select the LUN tab. Find the LUN you want to make available to your XenServer environment and
select it. Click Add, then click Apply to confirm your selection.
Select your XenServers from the Available Hosts section and click the button to add them
to the Hosts to be Connected section, then click OK.
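If the Navisphere Secure CLI (naviseccli) is installed on a management station, the same Storage Group operations can be scripted. A sketch; the SP address, group name, host name and LUN numbers are all placeholders for your environment:

```shell
# Create the storage group on the array (SP address and group name are placeholders)
naviseccli -h <sp-ip-address> storagegroup -create -gname XenPool

# Add a LUN: -alu is the array LUN number, -hlu the LUN number the host will see
naviseccli -h <sp-ip-address> storagegroup -addhlu -gname XenPool -alu 20 -hlu 0

# Connect a registered host to the storage group
naviseccli -h <sp-ip-address> storagegroup -connecthost -host xenserver1 -gname XenPool
```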
In the Location window give the SR a name and provide the IP addresses of all the adapters in the
EMC CLARiiON array separated by commas.
If CHAP is enabled on your storage array, select Use CHAP and fill out the credentials.
From the Target IQN dropdown list, select the wildcard option * (<ip addresses>). After this click
Discover LUNs.
Once the LUN you created before has been discovered, click Finish.
XenServer will now check for existing SRs on the LUN to re-attach to. Since this is a newly
created LUN, we need to confirm formatting the disk and creating the SR. Click Yes to confirm.
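The same lvmoiscsi SR can be created from the CLI instead of XenCenter. A sketch of the commonly documented probe-then-create sequence; the target IP, IQN, SCSIid and SR name are placeholders:

```shell
# Discover target IQNs on the array (target IP is a placeholder)
xe sr-probe type=lvmoiscsi device-config:target=192.168.10.100

# Probe again with the IQN to list the SCSIids of available LUNs
xe sr-probe type=lvmoiscsi device-config:target=192.168.10.100 \
    device-config:targetIQN=<target-iqn>

# Create the shared SR on the chosen LUN
xe sr-create name-label="CX4 iSCSI SR" type=lvmoiscsi shared=true \
    device-config:target=192.168.10.100 \
    device-config:targetIQN=<target-iqn> \
    device-config:SCSIid=<scsi-id>
```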
After completion of the formatting, you can verify the creation of the SR in XenCenter by selecting
the SR in the left hand pane.
To verify the multipathing, expand the Multipathing box in the right pane.
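Path state can also be checked from the XenServer control domain with the standard device-mapper multipath tool; the output format varies by release:

```shell
# Show the multipath topology and per-path state for each attached LUN
multipath -ll
```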
If the FC HBAs in the XenServer hosts and the FC infrastructure are set up correctly, the FC
initiators should be detected automatically and visible in the Connectivity Status dialog box.
In the Register Initiator Record window fill out the relevant information.
WWN/IQN: prepopulated.
SP - port: prepopulated.
Initiator Type: Select CLARiiON Open.
Failover Mode: Set to 4*.
Host Information: keep Selected Host.
Failover Mode 4 is Asymmetric Active/Active and is based on the Asymmetric Logical Unit
Access (ALUA) standard. A whitepaper for this feature can be found on EMC Powerlink by
searching for "Asymmetric Active/Active" in the documents section.
Filled out, the window will look similar to this:
Click OK.
Click Yes to confirm.
Repeat the steps above for each initiator record you want to register.
Once all initiator records are registered for each XenServer host and each path, click Refresh All
to make all records visible.
In the Create Storage Group Window enter a name for this Storage Group and select OK.
Expand the Storage Group tree and right-click the newly created Storage Group and select
Properties.
Select the LUN tab. Find the LUN you want to make available to your XenServer environment and
select it. Click Add, then click Apply to confirm your selection.
Select the FC initiators for your XenServers from the Available Hosts section, click the
button to add them to the Hosts to be Connected section, and click OK.
Select Hardware HBA as the type of new storage and click Next.
XenServer will now probe the Storage Array for available LUNs.
Once this has finished, give the SR a name, select the LUN you want to use for this Storage
Repository, and click Finish.
XenServer will now check for existing SRs on the LUN to re-attach to.
Since this is a newly created LUN, we need to confirm formatting the disk and creating the SR.
Click Yes to confirm.
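A Hardware HBA SR can likewise be created from the CLI once the LUN's SCSIid is known. A sketch; the SCSIid and SR name are placeholders:

```shell
# Probe for FC-attached LUNs and their SCSIids
xe sr-probe type=lvmohba

# Create the shared SR on the selected LUN
xe sr-create name-label="CX4 FC SR" type=lvmohba shared=true \
    device-config:SCSIid=<scsi-id>
```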
After completion of the formatting, you can verify the creation of the SR in XenCenter by selecting
the SR in the left hand pane.
To verify the multipathing, expand the Multipathing box in the right pane.
The number of active paths can differ depending on the zoning setup of the FC switches. Please
follow best practices as recommended by the FC switch vendor.
About Citrix
Citrix Systems, Inc. (NASDAQ: CTXS) is the leading provider of virtualization, networking and software as a service technologies for more than 230,000 organizations worldwide. Its Citrix Delivery Center, Citrix Cloud Center (C3) and Citrix Online Services product families radically simplify computing for millions of users, delivering applications as an on-demand service to any user, in any location on any device. Citrix customers include the world's largest Internet companies, 99 percent of Fortune Global 500 enterprises, and hundreds of thousands of small businesses and prosumers worldwide. Citrix partners with over 10,000 companies worldwide in more than 100 countries. Founded in 1989, annual revenue in 2008 was $1.6 billion.
© 2009 Citrix Systems, Inc. All rights reserved. Citrix, Access Gateway, Branch Repeater, Citrix Repeater, HDX, XenServer, XenApp, XenDesktop and Citrix Delivery Center are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. All other trademarks and registered trademarks are property of their respective owners.