
Citrix CloudPlatform

(powered by Apache
CloudStack) Version 4.5.1
Getting Started Guide
Revised on July 03, 2015 09:30 am IST


Citrix CloudPlatform (powered by Apache CloudStack) Version
4.5.1 Getting Started Guide
Revised on July 03, 2015 09:30 am IST
Author

Citrix CloudPlatform

© 2014 Citrix Systems, Inc. All rights reserved. Specifications are subject to change without
notice. Citrix Systems, Inc., the Citrix logo, Citrix XenServer, Citrix XenCenter, and CloudPlatform
are trademarks or registered trademarks of Citrix Systems, Inc. All other brands or products are
trademarks or registered trademarks of their respective holders.

If you have already installed CloudPlatform and want to learn more about getting started with it, read
this document. It will help you learn about the CloudPlatform user interface, provisioning your cloud
infrastructure using the hypervisors that CloudPlatform supports, using accounts, and configuring
projects.

1. About this Guide                                                                            1
1.1. About the Audience for this Guide .................................................  1
1.2. Using the Product Documentation ...................................................  1
1.3. Experimental Features .............................................................  1
1.4. Additional Information and Help ...................................................  1
1.5. Contacting Support ................................................................  2

2. User Interface                                                                              3
2.1. Using SSH Keys for Authentication ................................................................................ 3
2.1.1. Creating an Instance from a Template that Supports SSH Keys ............................ 3
2.1.2. Creating the SSH Keypair .................................................................................. 3
2.1.3. Creating an Instance .......................................................................................... 4
2.1.4. Logging On Using the SSH Keypair .................................................................... 5
2.1.5. Resetting SSH Keys ........................................................................................... 5
3. Provisioning Your Cloud Infrastructure on KVM                                               7
3.1. Overview of Provisioning Steps ..................................................................................... 7
3.2. Adding a Zone .............................................................................................................. 7
3.2.1. Creating a Secondary Storage Mount Point for the Zone ...................................... 8
3.2.2. Adding a New Zone ........................................................................................... 8
3.3. Adding a Pod ............................................................................................................. 17
3.4. Adding a KVM Cluster ................................................................................................. 18
3.4.1. Adding a KVM Host ......................................................................................... 19
3.5. Adding Primary Storage .............................................................................................. 20
3.6. Adding Secondary Storage .......................................................................................... 21
3.6.1. Adding an NFS Secondary Storage for Each Zone ............................................. 22
3.6.2. Configuring S3 Object Store for Secondary Storage ........................................... 23
3.6.3. Upgrading from NFS to Object Storage ............................................................. 26
3.7. Initialize and Test ........................................................................................................ 27
4. Provisioning Your Cloud Infrastructure on XenServer                                        29
4.1. Overview of Provisioning Steps .................................................... 29
4.2. Adding a Zone ..................................................................... 29
4.2.1. Creating a Secondary Storage Mount Point for the Zone .................... 30
4.2.2. Adding a New Zone ........................................................ 30
4.3. Adding a Pod ...................................................................... 40
4.4. Adding a XenServer Cluster ........................................................ 40
4.4.1. Running a Cluster with a Single Host ..................................... 41
4.4.2. Adding a XenServer Host .................................................. 41
4.5. Adding Primary Storage ............................................................ 46
4.6. Adding Secondary Storage .......................................................... 47
4.6.1. Adding an NFS Secondary Storage for Each Zone ............................ 48
4.6.2. Configuring S3 Object Store for Secondary Storage ........................ 49
4.6.3. Upgrading from NFS to Object Storage ..................................... 52
4.7. Initialize and Test ............................................................... 53

5. Provisioning Your Cloud Infrastructure on VMware vSphere                                   55
5.1. Overview of Provisioning Steps .................................................... 55
5.2. Adding a Zone ..................................................................... 55
5.2.1. Creating a Secondary Storage Mount Point for the Zone .................... 56
5.2.2. Adding a New Zone ........................................................ 56
5.3. Adding a Pod ...................................................................... 63
5.4. Add Cluster: vSphere .............................................................. 63
5.4.1. VMware Cluster Size Limit ................................................ 64
5.4.2. Adding a vSphere Cluster ................................................. 64
5.4.3. Adding a Host (vSphere) .................................................. 65

5.5. Adding Primary Storage ............................................................ 66
5.6. Adding Secondary Storage .......................................................... 67
5.6.1. Adding an NFS Secondary Storage for Each Zone ............................ 68
5.6.2. Configuring S3 Object Store for Secondary Storage ........................ 69
5.6.3. Upgrading from NFS to Object Storage ..................................... 72
5.7. Initialize and Test ............................................................... 73

6. Provisioning Your Cloud Infrastructure on Hyper-V                                          75

7. (Experimental Feature) Provisioning Your Cloud Infrastructure on LXC                       91

8. Provisioning Cloud Infrastructure on Baremetal                                            109

A. Important Global Configuration Parameters                                                 129
Important Healthcheck Parameters ...................................................... 132

Index                                                                                        149

Chapter 1. About this Guide

1.1. About the Audience for this Guide
This guide is meant for anyone responsible for configuring and administering the public cloud
infrastructure and the private cloud infrastructure of enterprises that use CloudPlatform, such as
cloud administrators and Information Technology (IT) administrators.

1.2. Using the Product Documentation
The following guides provide information about CloudPlatform:
• Citrix CloudPlatform (powered by Apache CloudStack) Installation Guide
• Citrix CloudPlatform (powered by Apache CloudStack) Concepts Guide
• Citrix CloudPlatform (powered by Apache CloudStack) Getting Started Guide
• Citrix CloudPlatform (powered by Apache CloudStack) Administration Guide
• Citrix CloudPlatform (powered by Apache CloudStack) Hypervisor Configuration Guide
• Citrix CloudPlatform (powered by Apache CloudStack) Developer's Guide

For complete information on any known limitations or issues in this release, see the Citrix
CloudPlatform (powered by Apache CloudStack) Release Notes.

For information about the Application Programming Interfaces (APIs) that are used in this product,
see the API documents that are available with CloudPlatform.

1.3. Experimental Features
CloudPlatform product releases include some experimental features for customers to test and
experiment with in non-production environments, and to share any feedback with Citrix. For any issues
with these experimental features, customers can open a support ticket, but Citrix cannot commit to
debugging or providing fixes for them.

The following experimental features are included in this release:
• Linux Containers
• Supported Management Server OS and Supported Hypervisors: RHEL 7/CentOS 7 for experimental use with
  Linux Containers

1.4. Additional Information and Help
Troubleshooting articles by the Citrix support team are available in the Citrix Knowledge Center at
support.citrix.com/product/cs/.

1.5. Contacting Support
The support team is available to help customers plan and execute their installations. To contact the
support team, log in to the support portal at support.citrix.com/cloudsupport by using the account
credentials you received when you purchased your support contract.

Chapter 2. User Interface

2.1. Using SSH Keys for Authentication
In addition to the username and password authentication, CloudPlatform supports using SSH keys to log
in to the cloud infrastructure for additional security. You can use the createSSHKeyPair API to
generate the SSH keys.

Because each cloud user has their own SSH key, one cloud user cannot log in to another cloud user's
instances unless they share their SSH key files. Using a single SSH key pair, you can manage multiple
instances.

2.1.1. Creating an Instance from a Template that Supports SSH Keys
To create an instance from a template that supports SSH keys, do the following:

1. Create a new instance by using the template provided by CloudPlatform.
   For more information on creating a new instance, see the Creating VMs section in the Citrix
   CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.
2. Download the cloud-set-guest-sshkey script file from the following link:
   http://download.cloud.com/templates/4.2/bindir/cloud-set-guest-sshkey.in
3. Copy the file to /etc/init.d.
4. Give the necessary permissions on the script:
   chmod +x /etc/init.d/cloud-set-guest-sshkey
5. Run the script while starting up the operating system:
   chkconfig --add cloud-set-guest-sshkey
6. Stop the instance.

2.1.2. Creating the SSH Keypair
You must make a call to the createSSHKeyPair API method. You can either use the CloudPlatform Python
API library or the curl commands to make the call to the CloudPlatform API.

For example, make a call from the CloudPlatform server to create an SSH keypair called "keypair-doc"
for the administrator account in the root domain:

Note
Ensure that you adjust these values to meet your needs. If you are making the API call from a
different server, your URL or port number will be different, and you will need to use the API keys.

1. Run the following curl command:

   curl --globoff "http://localhost:8080/?command=createSSHKeyPair&name=keypair-doc&account=admin&domainid=1"

   The output appears similar to the following:

   <?xml version="1.0" encoding="ISO-8859-1"?><createsshkeypairresponse cloud-stack-version="3.0.0.20120228045507"><keypair><name>keypair-doc</name><fingerprint>f6:77:39:d5:5e:77:02:22:6a:d8:7f:ce:ab:cd:b3:56</fingerprint><privatekey>-----BEGIN RSA PRIVATE KEY-----
   MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
   dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
   AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
   mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
   QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
   VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
   lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
   nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
   4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
   KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
   -----END RSA PRIVATE KEY-----</privatekey></keypair></createsshkeypairresponse>

2. Copy the key data into a file. The file looks like this:

   -----BEGIN RSA PRIVATE KEY-----
   MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
   dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
   AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
   mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
   QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
   VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
   lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
   nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
   4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
   KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
   -----END RSA PRIVATE KEY-----

3. Save the file.

2.1.3. Creating an Instance
Ensure that you use the same SSH key name that you created.

Note
You cannot create the instance by using the GUI at this time and associate the instance with the
newly created SSH keypair.

1. A sample curl command to create a new instance is:

   curl --globoff "http://localhost:<port number>/?command=deployVirtualMachine&zoneId=1&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5&account=admin\&domainid=1&keypair=keypair-doc"

2. Substitute the template, service offering, and security group IDs (if you are using the security
   group feature) that are in your cloud environment.

2.1.4. Logging On Using the SSH Keypair
To test the successful generation of your SSH key, verify whether you can log on to the CloudPlatform
setup.

For example, on a Linux OS, run the following command:

   ssh -i ~/.ssh/keypair-doc <ip address>

The -i parameter directs the SSH client to use an SSH key found at ~/.ssh/keypair-doc.

2.1.5. Resetting SSH Keys
With the resetSSHKeyForVirtualMachine API command, a user can set or reset the SSH keypair assigned to
a virtual machine. A lost or compromised SSH keypair can be changed, and the user can access the VM by
using the new keypair. Just create or register a new keypair, then call resetSSHKeyForVirtualMachine.

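For example, the reset itself can be issued with the same style of curl call used above. This is a
minimal sketch only: the VM ID is a placeholder, and the VM typically needs to be stopped before the
key is reset, so check the API documents that ship with CloudPlatform before relying on it.

   # Hedged sketch: assign the (new) keypair "keypair-doc" to an existing VM.
   # <vm id> is a placeholder; take it from listVirtualMachines in your cloud.
   curl --globoff "http://localhost:8080/?command=resetSSHKeyForVirtualMachine&id=<vm id>&keypair=keypair-doc&account=admin&domainid=1"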

Chapter 3. Provisioning Your Cloud Infrastructure on KVM

This section describes how to add zones, pods, clusters, hosts, storage, and networks to your cloud
using the KVM hypervisor.

For conceptual information about zones, pods, clusters, hosts, storage, and networks in CloudPlatform,
refer to the CloudPlatform (powered by CloudStack) Version 4.5.1 Concepts Guide.

3.1. Overview of Provisioning Steps
After installing Management Server, you can access the CloudPlatform UI and add the compute resources
for CloudPlatform to manage. Then, you can provision the cloud infrastructure, or scale the cloud
infrastructure up at any time.

After you complete provisioning the cloud infrastructure, you will have a deployment with the
following basic structure:

For information on adding a region to your cloud infrastructure, refer to Chapter 3 Adding Regions to
Your Cloud Infrastructure (optional) of the CloudPlatform (powered by CloudStack) Version 4.5
Administration Guide.

3.2. Adding a Zone
Adding a zone consists of three phases:
• Create a secondary storage mount point for the zone.
• Seed the system VM template on the secondary storage.
• Add the zone.

Note
Citrix strongly recommends using the same type of hypervisors in a zone to avoid operational issues.

3.2.1. Creating a Secondary Storage Mount Point for the Zone
To ensure that you deploy the latest system VMs in a new zone, you must seed the latest system VM
template to the secondary storage of the zone. For this, you must first create a mount point for the
secondary storage. Then, you can seed the latest system VM template to the secondary storage.

1. On Management Server, create a mount point for secondary storage. For example:

   # mkdir -p /mnt/secondary

2. Mount the secondary storage on your Management Server. Replace the NFS server name and NFS share
   path in the following example with the server name and path that you use.

   # mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary

3. Seed the secondary storage with the latest template that is used for CloudPlatform system VMs. For
   more information about seeding the secondary storage, refer to the CloudPlatform (powered by Apache
   CloudStack) Version 4.5.1 Installation Guide.

After you seed the secondary storage with the latest system VM template, continue with adding the new
zone.

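The seeding step itself is performed with the system VM template installer that ships with the
Management Server. The sketch below is an assumption about the script location and flags used by
recent CloudPlatform/CloudStack releases; take the exact path and the template URL from the
Installation Guide.

   # Hedged sketch: seed the KVM system VM template into the mounted secondary storage.
   # The script path, flags, and template URL are placeholders to verify against the
   # CloudPlatform 4.5.1 Installation Guide.
   /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
       -m /mnt/secondary \
       -u <system VM template URL from the Installation Guide> \
       -h kvm -F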
3.2.2. Adding a New Zone
To add a zone, you must first configure the zone's physical network. Then, you need to add the first
pod, cluster, host, primary storage, and secondary storage.

Note
Before you proceed with adding a new zone, you must ensure that you have performed the steps to seed
the system VM template.

1. Log-in to the CloudPlatform UI using the root administrator account.
2. In the left navigation bar, click Infrastructure.
3. In the right side panel, under Zones, click View all.
4. In the next page, click Add Zone. The Add zone wizard panel appears.
5. Select one of the following network types:
   • Basic: (For AWS-style networking). Provides a single network where each VM instance is assigned
     an IP directly from the network. You can provide guest isolation through layer-3 means such as
     security groups (IP address source filtering).
   • Advanced: Used for more sophisticated network topologies. This network model provides the most
     flexibility in defining guest networks and providing custom network offerings such as firewall,
     VPN, or load balancer support.
   For more information, refer to Chapter 4. Cloud Infrastructure Concepts of the CloudPlatform
   (powered by Apache CloudStack) Version 4.5.1 Concepts Guide.
6. Based on the option (Basic or Advanced) that you selected, do one of the following:
   • For Basic: Section 3.2.2.1.1, "Configuring Basic Zone"
   • For Advanced: Section 3.2.2.1.2, "Advanced Zone Configuration"

3.2.2.1.1. Configuring Basic Zone
1. After you select Basic in the Add Zone wizard and click Next, you need to enter the following
   details. After you enter the details, click Next.
   • Name: A name that you can use to identify the zone.
   • IPv4 DNS 1 and IPv4 DNS 2: These are DNS servers for use by guest VMs in the zone. These DNS
     servers will be accessed via the public network that you will add later. The public IP addresses
     for the zone must have a route to the DNS server named here.
   • Internal DNS 1 and Internal DNS 2: These are DNS servers for use by system VMs in the zone (these
     are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary
     Storage VMs). These DNS servers will be accessed via the management traffic network interface of
     the System VMs. The private IP address you provide for the pods must have a route to the internal
     DNS server named here.
   • Hypervisor: Choose KVM as the hypervisor for the first cluster in the zone.
   • Network Offering: Select the network services that will be available on the network for guest
     VMs.
     • DefaultSharedNetworkOfferingWithSGService: If you want to enable security groups for guest
       traffic isolation, choose this. (See Using Security Groups to Control Traffic to VMs.)
     • DefaultSharedNetworkOffering: If you do not need security groups, choose this.
     • DefaultSharedNetscalerEIPandELBNetworkOffering: If you have installed a Citrix NetScaler
       appliance as part of your zone network, and you will be using its Elastic IP and Elastic Load
       Balancing features, choose this. With the EIP and ELB features, a basic zone with security
       groups enabled can offer 1:1 static NAT and load balancing.
     • QuickCloudNoServices: TBD
   • Network Domain: (Optional) If you want to assign a special domain name to the guest VM network,
     specify the DNS suffix.
   • Dedicated: Select this checkbox to indicate that this is a dedicated zone. You must enter the
     Domain and account details to which the zone is assigned. Only users in that domain will be
     allowed to use this zone.
   • Enable Local storage for User VMs: Select to use local storage for provisioning the storage used
     for User VMs.
   • Enable Local Storage for System VMs: Select to use local storage for provisioning the storage
     used for System VMs.
2. Select the traffic type that the physical network will carry and click Next.
   The traffic types are management, public, guest, and storage traffic. This screen displays some
   traffic types that are already assigned. To add more, drag and drop traffic types onto the physical
   network. You can also change the network name, if required. For more information about the types,
   roll over the icons to display their tool tips.
3. Click the Edit button under each traffic type icon that you added to the physical network to assign
   a network traffic label to it. These labels must match the labels you have already defined on the
   hypervisor host. The Edit traffic type dialog appears where you can enter the label and click OK.
   These traffic labels will be defined only for the hypervisor selected for the first cluster. For
   all other hypervisors, the labels can be configured after the zone is created.
4. Click Next.
5. (applies to NetScaler only) If you chose the network offering for NetScaler, you get an additional
   screen where you need to enter information. Provide the requested details to set up the NetScaler,
   then click Next.
   • IP address: The NSIP (NetScaler IP) address of the NetScaler device.
   • Username/Password: The authentication credentials to access the device. CloudPlatform uses these
     credentials to access the device.
   • Type: NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or
     NetScaler SDX. For a comparison of the types, see About Using a NetScaler Load Balancer.
   • Public interface: Interface of NetScaler that is configured to be part of the public network.
   • Private interface: Interface of NetScaler that is configured to be part of the private network.
   • Number of retries: Number of times a command is run on the device before considering the
     operation failed. Default is 2.
   • Capacity: Number of guest networks/accounts that will share the NetScaler device.
   • Dedicated: Select to indicate that this device is to be dedicated to a single account. When you
     select Dedicated, the value in the Capacity field has no significance; implicitly, its value is 1.
6. (applies to NetScaler only) Configure the IP range for public traffic. The IPs in this range will
   be used for the static NAT capability, which you enabled by selecting the network offering for
   NetScaler with EIP and ELB. Enter the following details and click Add. If required, you can repeat
   this step to add more IP ranges. Click Next.
   • Gateway: The gateway in use for these IP addresses.
   • Netmask: The netmask associated with this IP range.
   • VLAN: The VLAN that will be used for public traffic.
   • Start IP/End IP: A range of IP addresses that must be accessible from the Internet and will be
     allocated for access to guest VMs.
7. In a new zone, CloudPlatform adds the first pod. You can always add more pods later.
   To configure the first pod, enter the following and click Next:
   • Pod Name: A name for the pod.
   • Reserved system gateway: The gateway for the hosts in that pod.
   • Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
   • Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to
     manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.
8. Configure the network for guest traffic. Enter the following and click Next:
   • Guest gateway: The gateway that the guests should use.
   • Guest netmask: The netmask in use on the subnet that the guests will use.
   • Guest start IP/End IP: Enter the first and the last IP addresses that define a range, which
     CloudPlatform can assign to guests.
     • If one NIC is used, these IPs must be in the same CIDR as the pod CIDR. If multiple NICs are
       used, they may be in different subnets.
     • Citrix strongly recommends the use of multiple NICs.
9. In a new pod, CloudPlatform adds the first cluster. You can always add more clusters later. For
   conceptual information about clusters in CloudPlatform, refer to the CloudPlatform (powered by
   CloudStack) Version 4.5.1 Concepts Guide.
   To configure the first cluster, enter the following and click Next:
   • Hypervisor: The type of hypervisor software that all hosts in this cluster will run.
   • Cluster name: Enter a name for the cluster.
10. In a new cluster, CloudPlatform adds the first host. You can always add more hosts later. For
    conceptual information about hosts in CloudPlatform, refer to the CloudPlatform (powered by
    CloudStack) Version 4.5.1 Concepts Guide.

    Note
    When you add a hypervisor host to CloudPlatform, the host must not have any VMs running on it.
    Before you configure the host, you need to install the hypervisor software on the host. You will
    need to know which version of the hypervisor software is supported by CloudPlatform and what
    additional configuration is required to ensure the host will work with CloudPlatform. For more
    information, refer to the Chapter 4 Configuring KVM for CloudPlatform section in the CloudPlatform
    (powered by Apache CloudStack) Version 4.5.1 Hypervisor Configuration Guide.

    To configure the first host, enter the following and click Next:
    • Host Name: The DNS name or IP address of the host.
    • Username: The username is root.
    • Password: This is the password associated with the user name (from your KVM hypervisor
      configuration).
    • Host Tags: (Optional) Any labels that you use to categorize hosts for ease of maintenance. For
      example, you can set this to the cloud's HA tag (set in the ha.tag global configuration
      parameter) if you want this host to be used only for VMs with the "high availability" feature
      enabled. For more information, refer to the HA-Enabled Virtual Machines and the HA for Hosts
      sections in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.
11. In a new cluster, CloudPlatform adds the first primary storage server. You can always add more
    servers later. For conceptual information about primary storage in CloudPlatform, refer to the
    CloudPlatform (powered by CloudStack) Version 4.5.1 Concepts Guide.
    To configure the first primary storage server, enter the following and click Next:
    • Name: The name of the storage device.
    • Protocol: For KVM, choose NFS or SharedMountPoint.

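Zones can also be provisioned from scripts rather than the wizard; the createZone API accepts the same
core values entered above. The following is a hedged sketch using the unauthenticated integration port
(8096) that Section 3.6.3 also uses, with a minimal parameter set; all values are placeholders for
your environment.

   # Hedged sketch: create a basic zone with the DNS values described above.
   curl --globoff "http://<MGMTIP>:8096/client/api?command=createZone&name=<zone name>&networktype=Basic&dns1=<IPv4 DNS 1>&internaldns1=<Internal DNS 1>"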
3.2.2.1.2. Advanced Zone Configuration
1. After you select Advanced in the Add Zone wizard and click Next, you can enter the following
   details. After you enter these details, click Next.
   • Name: A name for the zone.
   • IPv4 DNS 1 and IPv4 DNS 2: These are DNS servers for use by guest VMs in the zone. These DNS
     servers will be accessed via the public network that you will add later. The public IP addresses
     for the zone must have a route to the DNS server named here.
   • Internal DNS 1 and Internal DNS 2: These are DNS servers for use by system VMs in the zone (these
     are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary
     Storage VMs). These DNS servers will be accessed via the management traffic network interface of
     the System VMs. The private IP address you provide for the pods must have a route to the internal
     DNS server named here.
   • Guest CIDR: This is the CIDR that describes the IP addresses in use in the guest virtual networks
     in this zone. For example, 10.1.1.0/24. As a matter of good practice you should set different
     CIDRs for different zones. This will make it easier to set up VPNs between networks in different
     zones.
   • Hypervisor: Choose KVM as the hypervisor for the first cluster in the zone.
   • Network Domain: (Optional) If you want to assign a special domain name to the guest VM network,
     specify the DNS suffix.
   • Dedicated: Select this checkbox to indicate that this is a dedicated zone. You must enter the
     Domain and account details to which the zone is assigned. Only users in that domain will be
     allowed to use this zone.
   • Enable Local storage for User VMs: Select to use local storage for provisioning the storage used
     for User VMs.
   • Enable Local Storage for System VMs: Select to use local storage for provisioning the storage
     used for System VMs.
2. Select the traffic type that the physical network will carry and click Next.
   The traffic types are management, public, guest, and storage traffic. This screen displays that one
   network has been already configured. If you have multiple physical networks, you need to add
   traffic types to them. Drag and drop traffic types onto a greyed-out physical network to make it
   active. You can move the traffic type icons from one network to another; for example, if the
   default traffic types shown for Network 1 do not match your actual setup, you can move them down.
   You can also change the network names if desired. For more information about the types, roll over
   the icons to display their tool tips.
3. Click the Edit button under each traffic type icon that you added to the physical network to assign
   a network traffic label to it. These labels must match the labels you have already defined on the
   hypervisor host. The Edit traffic type dialog appears where you can enter the label and click OK.
   These traffic labels will be defined only for the hypervisor selected for the first cluster.
4. Click Next.
5. Configure the IP range for the public (Internet) traffic. Enter the following details and click
   Add. If desired, you can repeat this step to add more public Internet IP ranges. Click Next.
   • Gateway: The gateway in use for these IP addresses.
   • Netmask: The netmask associated with this IP range.
   • VLAN: The VLAN that will be used for public traffic.
   • Start IP/End IP: A range of IP addresses that are assumed to be accessible from the Internet and
     will be allocated for access to guest networks.
6. In a new zone, CloudPlatform adds the first pod. You can always add more pods later.
   To configure the first pod, enter the following and click Next:
   • Pod Name: A name that you can use to identify the pod.
   • Reserved system gateway: The gateway for the hosts in that pod.
   • Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
   • Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to
     manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.

7. Specify a range of VLAN IDs to carry guest traffic for each physical network and click Next.
8. In the new pod, CloudPlatform adds the first cluster. You can always add more clusters later.
   To configure the first cluster, enter the following and click Next:
   • Hypervisor: Select KVM.
   • Cluster name: Enter a name for the cluster.
9. In a new cluster, CloudPlatform adds the first host. You can always add more hosts later.

   Note
   When you deploy CloudPlatform, the hypervisor host must not have any VMs running on it. Before you
   configure the host, you need to install the hypervisor software on the host. You will need to know
   which version of the hypervisor software is supported by CloudPlatform and what additional
   configuration is required to ensure the host will work with CloudPlatform. For more information,
   refer to the Chapter 4 Configuring KVM for CloudPlatform section in the CloudPlatform (powered by
   Apache CloudStack) Version 4.5.1 Hypervisor Configuration Guide.

   To configure the first host, enter the following, then click Next:
   • Host Name: The DNS name or IP address of the host.
   • Username: Usually root.
   • Password: This is the password associated with the user name (from your KVM install).
   • Host Tags: (Optional) Any labels that you use to categorize hosts for ease of maintenance. For
     example, you can set this to the cloud's HA tag (set in the ha.tag global configuration
     parameter) if you want this host to be used only for VMs with the "high availability" feature
     enabled. For more information, refer to the HA-Enabled Virtual Machines and the HA for Hosts
     sections in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.
10. In a new cluster, CloudPlatform adds the first primary storage server. You can always add more
    servers later.
    To configure the first primary storage server, enter the following and click Next:
    • Name: The name of the storage device.
    • Protocol: For KVM, choose NFS or SharedMountPoint. The remaining fields depend on the protocol:
      CIFS
      • Server: The IP address or DNS name of the storage device.
      • Path: The exported path from the server.
      • Tags (optional): The comma-separated list of tags for this storage device. It should be an
        equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage
        across clusters in a Zone must be identical. For example, if cluster A provides primary
        storage that has tags T1 and T2, all other clusters in the Zone must also provide primary
        storage that has tags T1 and T2.
      NFS
      • Server: The IP address or DNS name of the storage device.
      • Path: The exported path from the server.
      • Tags (optional): The comma-separated list of tags for this storage device. It should be an
        equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage
        across clusters in a Zone must be identical.
      iSCSI
      • Server: The IP address or DNS name of the storage device.
      • Target IQN: The IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
      • Lun: The LUN number. For example, 3.
      • Tags (optional): The comma-separated list of tags for this storage device. It should be an
        equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage
        across clusters in a Zone must be identical.
      preSetup
      • Server: The IP address or DNS name of the storage device.
      • SR Name-Label: Enter the name-label of the SR that has been set up outside CloudPlatform.
      • Tags (optional): The comma-separated list of tags for this storage device. It should be an
        equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage
        across clusters in a Zone must be identical.
      SharedMountPoint
      • Path: The path on each host where this primary storage is mounted. For example, "/mnt/primary".
      • Tags (optional): The comma-separated list of tags for this storage device. It should be an
        equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage
        across clusters in a Zone must be identical.
      VMFS
      • Server: The IP address or DNS name of the vCenter server.
      • Path: A combination of the datacenter name and the datastore name. The format is "/" datacenter
        name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".
      • Tags (optional): The comma-separated list of tags for this storage device. It should be an
        equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage
        across clusters in a Zone must be identical. For example, if cluster A provides primary storage
        that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that
        has tags T1 and T2.
11. In a new zone, CloudPlatform adds the first secondary storage server. Before you enter information
    in the fields on this screen, you need to prepare the secondary storage by setting up NFS shares
    and installing the latest CloudPlatform System VM template.
    To configure the first secondary storage server on hosts, enter the following and click Next:
    • NFS Server: The IP address of the server.
    • Path: The exported path from the server.
12. Click Launch.

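After you click Launch, CloudPlatform creates the zone and brings up its system VMs, and you can
confirm this from the API as well as from the UI. A hedged sketch using the integration port that
Section 3.6.3 also uses; both calls are standard list commands, but verify the parameters against the
API documents mentioned in Chapter 1.

   # Hedged sketch: confirm the new zone and its system VMs are up.
   curl --globoff "http://<MGMTIP>:8096/client/api?command=listZones&available=true"
   curl --globoff "http://<MGMTIP>:8096/client/api?command=listSystemVms&zoneid=<zone id>"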
3.3. Adding a Pod
After you create a new zone, CloudPlatform adds the first pod to it. You can perform the following
procedure to add more pods to the zone at any time.

1. Log-in to the CloudPlatform UI.
2. In the left navigation bar, select Infrastructure.
3. In the right-side panel, under Zones, click View all.
4. In the page that lists the zones that you configured, click the zone where you want to add a pod.
5. In the details page of the zone, click the Compute and Storage tab.
6. At the Pods node in the diagram, click View all.
7. In the page that lists the pods configured in the zone that you selected, click Add Pod.
8. In the Add Pod dialog box, enter the following details:
   • Zone: Select the name of the zone where you want to add the new pod.
   • Pod name: The name that you can use to identify the pod.
   • Reserved system gateway: The gateway for the hosts in that pod.
   • Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
   • Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to
     manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more
     information, see System Reserved IP Addresses.
   • Dedicate: Select if you want to dedicate the pod to a specific domain.
   • Domain: Select the domain to which you want to dedicate the pod.
   • Account: Enter an account name that belongs to the above selected domain so that you can dedicate
     the pod to this account.
9. Click OK.

3.4. Adding a KVM Cluster
You need to tell CloudPlatform about the hosts that it will manage. Hosts exist inside clusters, so
before you begin adding hosts to the cloud, you must add at least one cluster. Before you perform the
following steps, you must have installed the hypervisor on the hosts and logged-in to the
CloudPlatform UI.

1. In the CloudPlatform UI, in the left navigation bar, click Infrastructure.
2. In the right-side panel, under Zones, click View all.
3. In the page that lists the zones that you have configured, click the zone where you want to add the
   cluster.
4. In the details page of the zone, click the Compute and Storage tab.
5. At the Clusters node of the diagram, click View all.
6. In the page that lists the clusters, click Add Cluster.
7. In the Add Cluster dialog box, do the following and click OK:
   • Zone Name: Select the zone where you want to create the cluster.
   • Hypervisor: Select KVM as the hypervisor for this cluster.
   • Pod Name: Select the pod where you want to create the cluster.
   • Cluster Name: Enter a name that you can use to identify the cluster.
   • Dedicate: Select if you want to dedicate the cluster to a specific domain.
   • Domain: Select the domain to which you want to dedicate the cluster.
   • Account: Enter an account name that belongs to the domain so that you can dedicate the cluster to
     this account.

3.4.1. Adding a KVM Host
You can add KVM hosts to a cluster at any time.

3.4.1.1. Requirements for KVM Hosts

Warning
Ensure that the KVM host does not have any VMs running on it before you add it to CloudPlatform.

You must ensure that you have installed the KVM Hypervisor software on the host. You will need to know
the version of the KVM software that CloudPlatform supports and the additional configuration that is
required to ensure that the host will work with CloudPlatform. For hardware requirements, refer to the
CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Hypervisor Configuration Guide.

Configuration requirements:
• Each cluster must contain only hosts with the KVM hypervisor.
• Do not keep more than 16 hosts in a cluster.
• Make sure the new host has the same network configuration (guest, private, and public network) as
  the other hosts in the cluster.
• If shared mountpoint storage is in use, the administrator must ensure that the new host has all the
  same mountpoints (with storage mounted) as the other hosts in the cluster.

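The UI procedure in the next section can also be scripted with the addHost API once the host meets the
requirements above. The following is a hedged sketch; the parameter names follow the standard
CloudStack API, and every value is a placeholder for your environment.

   # Hedged sketch: register a prepared KVM host with an existing cluster.
   curl --globoff "http://<MGMTIP>:8096/client/api?command=addHost&hypervisor=KVM&zoneid=<zone id>&podid=<pod id>&clusterid=<cluster id>&url=http://<host DNS name or IP>&username=root&password=<root password>"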
3.4.1.2. Steps for Adding a KVM Host
1. Log-in to the CloudPlatform UI using an administrator account.
2. In the left navigation bar, click Infrastructure.
3. In the right-side panel, under Zones, click View all.
4. In the page that lists the zones that are configured with CloudPlatform, click the zone where you
   want to add the hosts.
5. In the details page of the zone, click the Compute and Storage tab.
6. At the Clusters node, click View all.
7. In the page that lists the clusters available with the zone, click the cluster where you want to
   add the host.
8. Under the Details tab, click the View Hosts link.
9. In the page that lists the hosts available with the cluster, click Add Host.
10. In the Add Host panel, provide the following information:
    • Zone: Select the zone where you want to add the host.
    • Pod: Select the pod in the zone where you want to add the host.
    • Cluster: Select the cluster in the pod where you want to add the host.
    • Host Name: The DNS name or IP address of the host.
    • Username: Usually root.
    • Password: This is the password associated with the user name from your KVM install.
    • Dedicate: Select to indicate that this host is to be dedicated to a specific domain and account.
      Domain: Select the domain to which you want to dedicate the host.
      Account: Select the account that is associated with the domain so that you can dedicate the host
      to this account.
    • Host Tags (Optional): Any labels that you use to categorize hosts for ease of maintenance. For
      example, you can set to the cloud's HA tag (set in the ha.tag global configuration parameter) if
      you want this host to be used only for VMs with the "high availability" feature enabled. For
      more information, refer to the HA-Enabled Virtual Machines and the HA for Hosts sections in the
      CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.
11. There may be a slight delay while the host is provisioned. It will display automatically in the UI.
12. Repeat the procedure for adding additional hosts.

3.5. Adding Primary Storage

Warning
Ensure that the preallocated storage is empty and contains no data before you use it for primary
storage (for example, you must possess an empty SAN volume or an empty NFS share). When you add the
storage to CloudPlatform, any existing data will be destroyed.

When you create a new zone, the first primary storage is added as part of that procedure. You can add
primary storage servers at any time, such as when adding a new cluster or adding more servers to an
existing cluster.

1. Log-in to the CloudPlatform UI.
2. In the left navigation bar, click Infrastructure.

3. In the right-side panel, under Zones, click View all.
4. In the page that lists the zones, click the zone where you want to add the primary storage.
5. Click the Compute and Storage tab.
6. In the Primary Storage node of the diagram, click View all.
7. In the page that lists the primary storages, click Add Primary Storage.
8. In the Add Primary Storage dialog box, provide the following information. The information can
vary depending on your selection of the protocol in the Protocol field.
• Scope: Indicate whether the storage is available to all hosts in the zone or only to hosts in a
single cluster.
• Pod: (Visible only if you choose Cluster in the Scope field.) The pod for the storage device.
• Cluster: (Visible only if you choose Cluster in the Scope field.) The cluster for the storage
device.
• Name: The name of the storage device.
• Protocol: For KVM, select NFS or SharedMountPoint.
• Server (for NFS, iSCSI, SMB/CIFS, or PreSetup): The IP address or DNS name of the storage device.
• Path (for NFS): In NFS this is the exported path from the server.
• Path (for SharedMountPoint): With KVM this is the path on each host that is where this primary
storage is mounted. For example, "/mnt/primary".
• Tags (optional): The comma-separated list of tags for this storage device. It should be an
equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if
cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must
also provide primary storage that has tags T1 and T2.
9. Click OK.
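The same primary storage can also be registered programmatically with the createStoragePool API
instead of the dialog box above. A hedged sketch for an NFS pool; the parameter names follow the
standard CloudStack API, and all values are placeholders.

   # Hedged sketch: register an NFS primary storage pool with a cluster.
   curl --globoff "http://<MGMTIP>:8096/client/api?command=createStoragePool&zoneid=<zone id>&podid=<pod id>&clusterid=<cluster id>&name=<storage name>&url=nfs://<NFS server>/<exported path>"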

3.6. Adding Secondary Storage
Note
Ensure that the storage is empty and contains no data before you use it for secondary storage.
When you add the storage to CloudPlatform any existing data will be destroyed.

When you create a new zone, the first secondary storage is added as part of that procedure. You can
add secondary storage servers at any time to add more servers to an existing zone.
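The numbered procedure below assumes the NFS share already exists. If you still need to export one on
a Linux NFS server, a minimal sketch looks like the following; the export path and options are
examples only, so follow your storage vendor's guidance.

   # Hedged sketch: export a directory for use as zone secondary storage.
   mkdir -p /export/secondary
   echo "/export/secondary *(rw,async,no_root_squash,no_subtree_check)" >> /etc/exports
   exportfs -a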


1. To prepare for the zone-based Secondary Storage, you should have created and mounted an
NFS share during Management Server installation.
2. Ensure that you have prepared the system VM template during Management Server installation.
3. Log-in to the CloudPlatform UI using the root administrator account.
4. In the left navigation bar, click Infrastructure.
5. In the right-side panel, under Zones, click View all.
6. In the page that lists the zones, click the zone where you want to add the secondary storage.
7. Click the Compute and Storage tab.
8. In Secondary Storage node of the diagram, click View all.
9. In the page that lists the secondary storages, click Add Secondary Storage.
10. In the Add Secondary Storage dialog box, enter the following details:
• Name: Give the storage a descriptive name.
• Provider: Choose the type of storage provider (such as S3, or NFS). NFS can be used for
zone-based storage, and the others for region-wide object storage. Depending on which
provider you choose, additional fields will appear. Complete all the required fields for your
selected provider. For more information, consult the provider's documentation, such as the S3
website.

Warning
You can use only a single region-wide object storage account per region. For example, you
can not use S3 accounts from different users.

• Zone: The zone where the NFS Secondary Storage is to be located.
• Server: The IP address or DNS name of the storage device.
• Path: The exported path from the server.
• NFS server: The name of the zone's Secondary Storage.
• Path: The path to the zone's Secondary Storage.

3.6.1. Adding an NFS Secondary Storage for Each Zone
You can skip this section if you are upgrading an existing zone from NFS to object storage. You only
need to perform the steps below when setting up a new zone that does not yet have its NFS server.
Every zone must have at least one NFS store provisioned; multiple NFS servers are allowed per zone.
To provision an NFS Staging Store for a zone:
1. To prepare for the zone-based Secondary Storage, you should have created and mounted an
NFS share during Management Server installation.


2. Make sure you prepared the system VM template during Management Server installation.
3. Log-in to the CloudPlatform UI using the root administrator account.
4. In the left navigation bar, click Infrastructure.
5. In the right-side panel, under Zones, click View all.
6. In the page that lists the zones, click the zone where you want to add the secondary storage.
7. Click the Compute and Storage tab.
8. In Secondary Storage node of the diagram, click View all.
9. In the Select View list box, select Secondary Storage.
10. Click the Add NFS Secondary Storage button.
11. Fill out the dialog box fields, then click OK:
• Zone. The zone where the NFS Secondary Storage is to be located.
• NFS server. The name of the zone's Secondary Storage.
• Path. The path to the zone's Secondary Storage.
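Before you add the NFS Secondary Storage, you can confirm that the share is reachable from a host in
the zone. A minimal sketch, assuming standard NFS client tools are installed on the host; the server
name and path are placeholders.

   # Hedged sketch: confirm the export is visible and mountable from a KVM host.
   showmount -e <NFS server>
   mkdir -p /mnt/check
   mount -t nfs <NFS server>:/<exported path> /mnt/check && umount /mnt/check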

3.6.2. Configuring S3 Object Store for Secondary Storage
You can configure CloudPlatform to use Amazon S3 Object Store as a secondary storage. S3 Object
Store can be used with Amazon Simple Storage Service or any other provider that supports the S3
interface.
1. Make sure you prepared the system VM template during Management Server installation.
2. Log in to the CloudPlatform UI as root administrator.
3. In the left navigation bar, click Infrastructure.
4. In Secondary Storage, click View All.
5. Click Add Secondary Storage.
6. Specify the following:


• Name: Give the storage a descriptive name.
• Provider: Select S3 for region-wide object storage. S3 can be used with Amazon Simple Storage
  Service or any other provider that supports the S3 interface.

Warning
You can use only a single region-wide object storage account per region. For example, you can not use
S3 accounts from different users.

• Access Key: The Access Key ID of the administrator. These credentials are used to securely sign the
  requests through a REST or Query API to the CloudPlatform services. You can get this from the admin
  user Details tab in the Accounts page. Each Access Key ID has a Secret Access Key associated with
  it.
• Secret Key: The secret key ID of the administrator. You can get this from the admin user Details tab
  in the Accounts page. This key is just a long string of characters (and not a file) that you use to
  calculate the digital signature that you include in the request. Because you include it in each
  request, the ID is a secret. Your Secret Access Key is a secret, and only you and AWS should have
  it. Don't e-mail it to anyone, include it in any AWS requests, or post it on the AWS Discussion
  Forums. No authorized person from AWS will ever ask for your Secret Access Key.
• Bucket: The container of the objects stored in Amazon S3. Enter the name of the bucket where you
  store your files. Your files are stored as objects in a location called a bucket. When you configure
  your Amazon S3 bucket as a website, the service delivers the files in your bucket to web browsers as
  if they were hosted on a web server.
• End Point: The IP address or DNS name of the S3 storage server. For example: 10.10.29.1:8080, where
  8080 is the listening port of the S3 storage server.
• Use HTTPS: Specify if you want a secure connection with the S3 storage.
• Connection Timeout: The default timeout for creating new connections.
• Max Error Retry: The number of retries after service exceptions due to internal errors.
• Socket Timeout: The default timeout for reading from a connected socket.
• Create NFS Secondary Staging Store: Select this option if you are upgrading an existing NFS
  secondary storage into an object storage, as described in Section 3.6.3, "Upgrading from NFS to
  Object Storage". If the zone already contains a secondary staging store, do not select this option.
  In this case, you can skip the rest of the fields described below (Zone, NFS Server, and Path).
• Zone: The zone where the S3 Object Store is to be located.
• NFS Server: The name of the zone's Secondary Storage.
• Path: The path to the zone's Secondary Staging Store.

3.6.3. Upgrading from NFS to Object Storage
In an existing zone that is using NFS for secondary storage, you can upgrade the zone to use a region-wide object storage without causing downtime. The existing NFS storage in the zone will be converted to an NFS Staging Store.

After upgrade, all newly created templates, ISOs, volumes, and snapshots are moved to the object store. All previously created templates, ISOs, volumes, and snapshots are migrated on an on-demand basis based on when they are accessed, rather than as a batch job. Unused objects in the NFS staging store are garbage collected over time.

1. Log-in to the CloudPlatform UI using an administrator account.
2. Locate the secondary storage that you want to upgrade to object storage. Perform either of the following in the Infrastructure page:
• In Zones, click View All, then locate the desired zone, and select Secondary Storage in the Compute and Storage tab.
• In Secondary Storage, click View All, then select the desired secondary storage.
3. Fire an admin API to update CloudPlatform to use object storage (a curl version of this call is sketched at the end of this section):
http://<MGMTIP>:8096/client/api?command=updateCloudToUseObjectStore&name=<S3 storage name>&provider=S3&details[0].key=accesskey&details[0].value=<access key from .s3cfg file>&details[1].key=secretkey&details[1].value=<secretKey from .s3cfg file>&details[2].key=bucket&details[2].value=<bucketname>&details[3].key=usehttps&details[3].value=<trueorfalse>&details[4].key=endpoint&details[4].value=<S3 server IP:8080>
All existing NFS secondary storages have been converted to NFS staging stores for each zone, and the S3 object store specified in the command has been added as a new region-wide secondary storage.

Post migration, consider the following:
• For each new snapshot taken after migration, ensure that you take a full snapshot to the newly added S3. This would help coalesce delta snapshots across the NFS and S3 stores. The snapshots taken before migration are pushed to the S3 store when you try to use such a snapshot by executing createVolumeCmd and passing the snapshot id.
• You can deploy VMs from templates because they are already in the NFS staging store. A copy of the template is generated in the new S3 object store when ExtractTemplate or CopyTemplate is executed.
• For volumes, a copy of the volume is generated in the new S3 object store when the ExtractVolume command is executed.
• Not all the items in the NFS storage are migrated to S3. Therefore, if you want to completely shut down the NFS storage you have previously used, write your own script to migrate those remaining items to S3.
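The admin API call in step 3 above can be fired from a shell as follows. This is a minimal sketch, assuming the unauthenticated integration API port (8096 by default, controlled by the integration.api.port global setting) is enabled on the Management Server and reachable only from a trusted admin network; all values in angle brackets are placeholders.

# curl -sG "http://<MGMTIP>:8096/client/api" \
    --data-urlencode "command=updateCloudToUseObjectStore" \
    --data-urlencode "name=<S3 storage name>" \
    --data-urlencode "provider=S3" \
    --data-urlencode "details[0].key=accesskey" --data-urlencode "details[0].value=<access key from .s3cfg file>" \
    --data-urlencode "details[1].key=secretkey" --data-urlencode "details[1].value=<secretKey from .s3cfg file>" \
    --data-urlencode "details[2].key=bucket" --data-urlencode "details[2].value=<bucketname>" \
    --data-urlencode "details[3].key=usehttps" --data-urlencode "details[3].value=true" \
    --data-urlencode "details[4].key=endpoint" --data-urlencode "details[4].value=<S3 server IP:8080>"

The response is XML by default; add response=json as a further parameter if you prefer JSON.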

3.7. Initialize and Test
After you complete all the configuration, CloudPlatform starts the initialization. This can take 30 minutes or more, depending on the speed of your network. When the initialization has completed successfully, the administrator's dashboard should be displayed in the CloudPlatform UI.

1. Verify that the system is ready. In the left navigation bar, click Templates. Click on the CentOS 5.5 (64bit) no Gui (KVM) template. Check to be sure that the status is "Download Complete." Do not proceed to the next step until this status is displayed.
2. Go to the Instances tab, and filter by My Instances.
3. Click Add Instance and follow the steps in the wizard.
a. Choose the zone you just added.
b. In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available.
c. Select a service offering. Be sure that the hardware you have allows starting the selected service offering.
d. In data disk offering, if desired, add another data disk. This is a second volume that will be available to but not mounted in the guest. A reboot is not required if you have a PV-enabled OS kernel in use.
e. In default network, choose the primary network for the guest. In a trial installation, you would have only one option here.
f. Optionally give your VM a name and a group. Use any descriptive text you would like.
g. Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup. You can watch the VM's progress in the Instances screen.
4. To use the VM, click the View Console button.
For more information about using VMs, including instructions for how to allow incoming network traffic to the VM, start, stop, and delete VMs, and move a VM from one host to another, see Working With Virtual Machines in the Administrator's Guide.

Congratulations! You have successfully completed a CloudPlatform Installation.
If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters.
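The same smoke test can be scripted instead of using the wizard. The call below is a hedged sketch, not part of the documented procedure: it uses the same unauthenticated integration API port as the call in Section 3.6.3, and the UUIDs are placeholders that you would first look up with the listZones, listServiceOfferings, and listTemplates API commands.

# curl -sG "http://<MGMTIP>:8096/client/api" \
    --data-urlencode "command=deployVirtualMachine" \
    --data-urlencode "zoneid=<zone uuid>" \
    --data-urlencode "serviceofferingid=<service offering uuid>" \
    --data-urlencode "templateid=<CentOS template uuid>" \
    --data-urlencode "response=json"

The command returns an asynchronous job id that you can poll with queryAsyncJobResult, or you can simply watch the new instance appear in the Instances screen.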


Chapter 4. Provisioning Your Cloud infrastructure on XenServer
This section describes how to add zones, pods, clusters, hosts, storage, and networks to your cloud using the XenServer hypervisor. For conceptual information about zones, pods, clusters, hosts, storage, and networks in CloudPlatform, refer to the CloudPlatform (powered by CloudStack) Version 4.5 Concepts Guide.

4.1. Overview of Provisioning Steps
After installing Management Server, you can access the CloudPlatform UI and add the compute resources for CloudPlatform to manage. Then, you can provision the cloud infrastructure, or scale the cloud infrastructure up at any time.

After you complete provisioning the cloud infrastructure, you will have a deployment with the following basic structure:

For information on adding a region to your cloud infrastructure, refer to Chapter 3 Adding Regions to Your Cloud Infrastructure (optional) of the CloudPlatform (powered by CloudStack) Version 4.5 Administration Guide.

4.2. Adding a Zone
Adding a zone consists of three phases:

• Create a secondary storage mount point for the zone
• Seed the system VM template on the secondary storage
• Add the zone

4.2.1. Creating a Secondary Storage Mount Point for the Zone
To ensure that you deploy the latest system VMs in a new zone, you must seed the latest system VM template to the secondary storage of the zone. For this, you must first create a mount point for the secondary storage. Then, you can seed the latest system VM template to the secondary storage.

1. On Management Server, create a mount point for secondary storage. For example:
# mkdir -p /mnt/secondary
2. Mount the secondary storage on your Management Server. Replace the NFS server name and NFS share path in the following example with the server name and path that you use.
# mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
3. Seed the secondary storage with the latest template that is used for CloudPlatform system VMs (a sketch of one such command follows this procedure). For more information about seeding the secondary storage, refer to the CloudPlatform (powered by Apache CloudStack) Version 4.5 Installation Guide.
After you seed the secondary storage with the latest system VM template, continue with adding the new zone.
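As an illustration of step 3, the system VM template can be seeded with the installer script that ships with the Management Server. This is a hedged sketch rather than the authoritative procedure: the script path and options shown are taken from the equivalent Apache CloudStack tooling and may differ in your CloudPlatform installation, and the template URL is a placeholder that you should take from the Release Notes for your exact version.

# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /mnt/secondary \
    -u <URL of the 4.5.1 XenServer system VM template> \
    -h xenserver

Here -m is the mount point you created in step 1, -u is the template download URL, and -h is the hypervisor type. Check the Installation Guide for the exact supported command before running it.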

4.2.2. Adding a New Zone
To add a zone, you must first configure the zone's physical network. Then, you need to add the first pod, cluster, host, primary storage, and secondary storage.

Note
Before you proceed with adding a new zone, you must ensure that you have performed the steps to seed the system VM template.

Note
Citrix strongly recommends using the same type of hypervisors in a zone to avoid operational issues.

1. Log-in to the CloudPlatform UI using the root administrator account.
2. In the left navigation bar, click Infrastructure.
3. In the right side panel, under Zones, click View all.
4. Click Add Zone. The Add zone wizard panel appears.
5. Select one of the following network types:
• Basic: (For AWS-style networking). Provides a single network where each VM instance is assigned an IP directly from the network. You can provide guest isolation through layer-3 means such as security groups (IP address source filtering).
• Advanced: Used for more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall, VPN, or load balancer support.
For more information, refer to Chapter 4, Cloud Infrastructure Concepts, of the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Concepts Guide.
6. In the next page, you will be asked to enter the following details. Then click Next.
• Name. A name for the zone.
• DNS 1 and 2. These are DNS servers for use by guest VMs in the zone. These DNS servers will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
• Internal DNS 1 and Internal DNS 2. These are DNS servers for use by system VMs in the zone (these are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary Storage VMs). These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.
• Hypervisor. Choose XenServer as the hypervisor for the first cluster in the zone.
• Network Offering. Your choice here determines what network services will be available on the network for guest VMs.

Network Offering | Description
DefaultSharedNetworkOfferingWithSGService | If you want to enable security groups for guest traffic isolation, choose this. (See Using Security Groups to Control Traffic to VMs.)
DefaultSharedNetworkOffering | If you do not need security groups, choose this.
DefaultSharedNetscalerEIPandELBNetworkOffering | If you have installed a Citrix NetScaler appliance as part of your zone network, and you will be using its Elastic IP and Elastic Load Balancing features, choose this. With the EIP and ELB features, a basic zone with security groups enabled can offer 1:1 static NAT and load balancing.

Based on the option (Basic or Advanced) that you selected, do one of the following:
• Section 4.2.2.1, "Basic Zone Configuration"
• Section 4.2.2.2, "Advanced Zone Configuration"

4.2.2.1. Basic Zone Configuration
1. After you select Basic in the Add Zone wizard and click Next, you will be asked to enter the following details. Then click Next.

• Network Domain. (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
• Dedicated. Select this checkbox to indicate that this is a dedicated zone. You must enter the Domain and account details to which the zone is assigned. Only users in that domain will be allowed to use this zone.
• Enable Local storage for User VMs. Select to use local storage for provisioning the storage used for User VMs.
• Enable Local Storage for System VMs. Select to use local storage for provisioning the storage used for System VMs.
2. Choose which traffic types will be carried by the physical network. The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips, or see Basic Zone Network Traffic Types. This screen starts out with some traffic types already assigned. To add more, drag and drop traffic types onto the network. You can also change the network name if desired.
3. Assign a network traffic label to each traffic type on the physical network. These labels must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon. A popup dialog appears where you can type the label, then click OK. These traffic labels will be defined only for the hypervisor selected for the first cluster.
4. Click Next.
5. (NetScaler only) If you chose the network offering for NetScaler, you have an additional screen to fill out. Provide the requested details to set up the NetScaler, then click Next.
• IP address. The NSIP (NetScaler IP) address of the NetScaler device.
• Username/Password. The authentication credentials to access the device. CloudPlatform uses these credentials to access the device.
• Type. NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the types, see About Using a NetScaler Load Balancer.
• Public interface. Interface of NetScaler that is configured to be part of the public network.
• Private interface. Interface of NetScaler that is configured to be part of the private network.
• Number of retries. Number of times to attempt a command on the device before considering the operation failed. Default is 2.
• Capacity. Number of guest networks/accounts that will share this NetScaler device.
• Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.

6. (NetScaler only) Configure the IP range for public traffic. The IPs in this range will be used for the static NAT capability which you enabled by selecting the network offering for NetScaler with EIP and ELB. Enter the following details, then click Add. If desired, you can repeat this step to add more IP ranges. When done, click Next.
• Gateway. The gateway in use for these IP addresses.
• Netmask. The netmask associated with this IP range.
• VLAN. The VLAN that will be used for public traffic.
• Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest VMs.
7. Configure the network for guest traffic. Provide the following, then click Next:
• Guest gateway. The gateway that the guests should use.
• Guest netmask. The netmask in use on the subnet the guests will use.
• Guest start IP/End IP. Enter the first and last IP addresses that define a range that CloudPlatform can assign to guests.
• We strongly recommend the use of multiple NICs. If multiple NICs are used, they may be in a different subnet.
• If one NIC is used, these IPs should be in the same CIDR as the pod CIDR.
8. In a new zone, CloudPlatform adds the first pod for you. You can always add more pods later. To configure the first pod, enter the following, then click Next:
• Pod Name. A name for the pod.
• Reserved system gateway. The gateway for the hosts in that pod.
• Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
• Start/End Reserved System IP. The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
9. In a new pod, CloudPlatform adds the first cluster for you. You can always add more clusters later. To configure the first cluster, enter the following, then click Next:
• Hypervisor. Select XenServer.
• Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudPlatform.
10. In a new cluster, CloudPlatform adds the first host for you. You can always add more hosts later. For an overview of what a host is, see About Hosts.

Note
When you add a hypervisor host to CloudPlatform, the host must not have any VMs already running.
Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. For more information, refer to Chapter 2 Installing XenServer for CloudPlatform in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Hypervisor Configuration Guide.
To configure the first host, enter the following, then click Next:
• Host Name. The DNS name or IP address of the host.
• Username. The username is root.
• Password. This is the password for the user named above (from your XenServer install).
• Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts.
11. In a new cluster, CloudPlatform adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see About Primary Storage. To configure the first primary storage server, enter the following, then click Next:
• Name. The name of the storage device.
• Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. The remaining fields in the screen vary depending on what you choose here.

4.2.2.2. Advanced Zone Configuration
1. After you select Advanced in the Add Zone wizard and click Next, you will be asked to enter the following details. Then click Next.
• Name. A name for the zone.
• DNS 1 and 2. These are DNS servers for use by guest VMs in the zone. These DNS servers will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
• Internal DNS 1 and Internal DNS 2. These are DNS servers for use by system VMs in the zone (these are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary Storage VMs). These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.

• Network Domain. (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
• Guest CIDR. This is the CIDR that describes the IP addresses in use in the guest virtual networks in this zone. For example, 10.1.1.0/24. As a matter of good practice you should set different CIDRs for different zones. This will make it easier to set up VPNs between networks in different zones.
• Hypervisor. Choose XenServer as the hypervisor for the first cluster in the zone.
• Dedicated. Select this checkbox to indicate that this is a dedicated zone. You must enter the Domain and account details to which the zone is assigned. Only users in that domain will be allowed to use this zone.
• Enable Local storage for User VMs. Select to use local storage for provisioning the storage used for User VMs.
• Enable Local Storage for System VMs. Select to use local storage for provisioning the storage used for System VMs.
2. Choose which traffic types will be carried by the physical network. The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips. This screen starts out with one network already configured. If you have multiple physical networks, you need to add more. Drag and drop traffic types onto a greyed-out network and it will become active. You can move the traffic icons from one network to another; for example, if the default traffic types shown for Network 1 do not match your actual setup, you can move them down. You can also change the network names if desired.
3. Assign a network traffic label to each traffic type on each physical network. These labels must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon within each physical network. A popup dialog appears where you can type the label, then click OK. These traffic labels will be defined only for the hypervisor selected for the first cluster.

4. Click Next.
5. Configure the IP range for public Internet traffic. Enter the following details, then click Add. If desired, you can repeat this step to add more public Internet IP ranges. When done, click Next.
• Gateway. The gateway in use for these IP addresses.
• Netmask. The netmask associated with this IP range.
• VLAN. The VLAN that will be used for public traffic.
• Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks.
6. In a new zone, CloudPlatform adds the first pod for you. You can always add more pods later. To configure the first pod, enter the following, then click Next:
• Pod Name. A name for the pod.
• Reserved system gateway. The gateway for the hosts in that pod.
• Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.

• Start/End Reserved System IP. The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.
• Specify a range of VLAN IDs to carry guest traffic for each physical network (for more information, see section 6.11, VLAN Allocation Example, in the Citrix CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide), then click Next.
• In a new pod, CloudPlatform adds the first cluster for you. You can always add more clusters later. To configure the first cluster, enter the following, then click Next:
• Hypervisor. Select XenServer.
• Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudPlatform.
• In a new cluster, CloudPlatform adds the first host for you. You can always add more hosts later.

Note
When you deploy CloudPlatform, the hypervisor host must not have any VMs already running.

Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. For more information, refer to Chapter 2 Installing XenServer for CloudPlatform in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Hypervisor Configuration Guide.
To configure the first host, enter the following, then click Next:
• Host Name. The DNS name or IP address of the host.
• Username. Usually root.
• Password. This is the password for the user named above (from your XenServer install).
• Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts, both in the Administration Guide.
• In a new cluster, CloudPlatform adds the first primary storage server for you. You can always add more servers later. To configure the first primary storage server, enter the following, then click Next:
• Name. The name of the storage device.

• Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. The remaining fields vary depending on what you choose here.

NFS
• Server. The IP address or DNS name of the storage device.
• Path. The exported path from the server.
• Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.

iSCSI
• Server. The IP address or DNS name of the storage device.
• Target IQN. The IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
• Lun. The LUN number. For example, 3.
• Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.

preSetup
• Server. The IP address or DNS name of the storage device.
• SR Name-Label. Enter the name-label of the SR that has been set up outside CloudPlatform.
• Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.

CIFS
• Server. The IP address or DNS name of the storage device.
• Path. The exported path from the server.
• Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.

VMFS
• Server. The IP address or DNS name of the vCenter server.
• Path. A combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".
• Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.

SharedMountPoint
• Path. The path on each host where this primary storage is mounted. For example, "/mnt/primary".
• Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.

• In a new zone, CloudPlatform adds the first secondary storage server for you. Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudPlatform System VM template. To configure the first secondary storage server on XenServer hosts, enter the following, then click Next:
• NFS Server. The IP address of the server.
• Path. The exported path from the server.
• Click Launch.
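Before you click Launch, it can save a failed zone creation to confirm from the Management Server that the NFS exports you entered for primary and secondary storage are actually visible. This is an optional check and not part of the wizard; the server name below is a placeholder.

# showmount -e nfsservername

The export list should include the primary and secondary storage paths you typed into the wizard. If it does not, correct /etc/exports on the NFS server and re-run exportfs -a there before launching the zone.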

4.3. Adding a Pod
After you create a new zone, CloudPlatform adds the first pod to it. You can perform the following procedure to add more pods to the zone at any time.
1. Log-in to the CloudPlatform UI.
2. In the left navigation bar, select Infrastructure. In the right-side panel, under Zones, click View all.
3. In the page that lists the zones that you have configured, click the zone where you want to add a pod.
4. In the details page of the zone, click the Compute and Storage tab.
5. At the Pods node in the diagram, click View all.
6. In the page that lists the pods configured in the zone that you selected, click Add Pod.
7. In the Add Pod dialog box, enter the following details:
• Zone: Select the name of the zone where you want to add the new pod.
• Pod name: The name that you can use to identify the pod.
• Reserved system gateway: The gateway for the hosts in that pod.
• Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
• Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
• Dedicate: Select if you want to dedicate the pod to a specific domain.
• Domain: Select the domain to which you want to dedicate the pod.
• Account: Enter an account name that belongs to the above selected domain so that you can dedicate the pod to this account.
8. Click OK.

4.4. Adding a XenServer Cluster
You need to tell CloudPlatform about the hosts that it will manage. Hosts exist inside clusters, so before you begin adding hosts to the cloud, you must add at least one cluster. Before you perform the following steps, you must have installed the hypervisor on the hosts and logged-in to the CloudPlatform UI.
1. In the CloudPlatform UI, in the left navigation bar, click Infrastructure.
2. In the right-side panel, under Zones, click View all.
3. In the page that lists the zones that you configured, click the zone where you want to add the cluster.
4. Click the Compute and Storage tab.
5. At the Clusters node of the diagram, click View all.

6. In the page that lists the clusters, click Add Cluster.
7. In the Add Cluster dialog box, do the following and click OK:
• Zone Name: Select the zone where you want to create the cluster.
• Hypervisor: Select XenServer as the hypervisor for this cluster.
• Pod Name: Select the pod where you want to create the cluster.
• Cluster Name: Enter a name that you can use to identify the cluster.
• Dedicate: Select if you want to dedicate the cluster to a specific domain.
• Domain: Select the domain to which you want to dedicate the cluster.
• Account: Enter an account name that belongs to the domain so that you can dedicate the cluster to this account.

4.4.1. Running a Cluster with a Single Host
With a single host in a cluster, CloudPlatform cannot investigate whether the host is truly down. To detect if a XenServer host is down, CloudPlatform does the following: all the hosts are configured to write a timestamp to a file on the shared storage pool. For example, if there are two hosts and one shared storage pool in a cluster in CloudPlatform, both hosts write a timestamp to a file named with the uuid of the host on the shared storage pool. When CloudPlatform finds a host is not responding to PingCommands, it picks another host from the cluster and directs it to investigate if the first host is truly down. As part of the investigation, the second host checks when was the last time the first host wrote a timestamp to its file on the shared pool. If the timestamp is older than 60 seconds (the default value), it concludes that the host is down. In a cluster with a single host, this investigation cannot be performed, and such hosts are placed in Alert state in CloudPlatform.

4.4.2. Adding a XenServer Host
XenServer hosts can be added to a cluster at any time.

4.4.2.1. General Requirements for XenServer Hosts
Consider the following requirements before you add a XenServer host:
• Make sure the hypervisor host does not have any VMs already running before you add it to CloudPlatform.
• Each cluster must contain only hosts with the identical hypervisor.
• Do not add more than 8 hosts in a cluster.
• If network bonding is in use, connect the new host identically to the other hosts in the cluster.
• For HA support, you are recommended to use XenServer 6.2 SP1 Hotfix XS62ESP1004. CloudPlatform does not support Pool HA for versions prior to XenServer 6.2 SP1 Hotfix XS62ESP1004.
• If you are upgrading to CloudPlatform 4.5.1, you are recommended to upgrade all existing XenServer clusters to XenServer 6.2 SP1 Hotfix XS62ESP1004.
• For host HA support, manually enable Pool HA for XenServer 6.2 SP1 Hotfix XS62ESP1004.
• On fresh installation of CloudPlatform, manually enable Pool HA for XenServer 6.2 SP1 Hotfix XS62ESP1004.

4.4.2.2. Additional Requirements for XenServer Hosts Before Adding to CloudPlatform
With the XenServer 6.2 SP1 Hotfix XS62ESP1004 release, note the following:
• CloudPlatform no longer performs pool join and pool eject. Therefore, the procedure for adding and removing hosts in CloudPlatform has been changed. Perform pool join and pool eject by using Citrix XenCenter before you add or delete hosts from CloudPlatform.
• Host HA is manually enabled for the XenServer 6.2 SP1 Hotfix XS62ESP1004 release.
• When the master host goes down, CloudPlatform cannot talk to the entire pool, and therefore is not operational from the CloudPlatform perspective. In this case, CloudPlatform cannot reflect the right state of the VMs.

4.4.2.2.1. XenServer Version 6.2 SP1 Hotfix XS62ESP1004
Before hosts are added to CloudPlatform via the UI, perform the following as per your requirement. For hardware requirements, see the installation section for your hypervisor in the CloudPlatform Installation Guide.

Adding Hosts to a New Cluster
1. Perform the following as per your requirement. If you want to add multiple hosts simultaneously, continue with steps 2 through 4. If you want to add only one host to the cluster, continue to step 5, which provides information on how to enable pool HA.
2. Choose one of the hosts to be the master host.
3. Join all the other slave hosts to the master host:
# xe pool-join master-address=<masterhost ipaddress> master-username=<username> master-password=<password>
4. Ensure that the hosts have successfully joined the master pool.
The following command lists all the hosts in the pool:
# xe host-list
The following command checks whether the pool's master is set to the new master:
# xe pool-list params=master
5. Enable pool HA by providing the heartbeat Storage Repository. For more information, see Section 4.4.2.2.2, "Enabling Pool HA".
6. Add the master host to the new cluster in CloudPlatform as explained in Section 4.4.2.3, "Adding a XenServer Host to CloudPlatform". CloudPlatform automatically adds all the hosts in the pool to the CloudPlatform cluster.

Adding a XenServer Host to an Existing CloudPlatform Cluster
When you add a host to a HA-enabled pool, perform the following:
a. Disable Pool HA:
# xe pool-ha-disable
b. Find the master host:
# xe pool-list params=master
c. Find the IP of the master host:
# xe host-list uuid="host uuid return in #b" params=address
d. Join the host to the existing pool:
# xe pool-join master-address="master host ip address from #c" master-username=root master-password="password for root"
Wait 10 minutes for the operation to be completed successfully.
e. Enable pool HA by providing the heartbeat Storage Repository:
# xe pool-ha-enable heartbeat-sr-uuids="uuid of the HA SR"

Note
When you re-enable pool HA, ensure that you use xe pool-ha-enable with the heartbeat-sr-uuids parameter pointing to the correct HA Storage Repository. If the heartbeat-sr-uuids parameter is skipped, any Storage Repository is randomly picked up for HA, which should be avoided.

4.4.2.2.2. Enabling Pool HA
If you are using XenServer 6.2 SP1 Hotfix XS62ESP1004 clusters, pool HA has to be enabled outside of CloudPlatform.
1. Create a Storage Repository for the XenServer pool and configure a dedicated shared Storage Repository as the HA Storage Repository. You can use any shared Storage Repository that XenServer supports. This Storage Repository is not managed by CloudPlatform.
2. To enable XenServer HA, run the following:
# xe pool-ha-enable heartbeat-sr-uuids=<sr_uuid>

Note
The Storage Repository used for heartbeat should be a dedicated Storage Repository for HA. The primary storage Storage Repository used by CloudPlatform should not be used for HA purposes.
Do not enable XenServer HA on hosts running versions prior to XenServer 6.2 SP1 Hotfix XS62ESP1004.
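Step 1 above leaves the choice of Storage Repository open. The commands below are one possible way to create and identify a small dedicated NFS Storage Repository for the heartbeat; they are a hedged example rather than part of the documented procedure, and the server, export path, and name-label are placeholders.

# xe sr-create name-label=HA-Heartbeat-SR type=nfs content-type=user shared=true \
    device-config:server=<NFS server> device-config:serverpath=<exported path for the HA SR>
# xe sr-list name-label=HA-Heartbeat-SR params=uuid

The uuid returned by the second command is the value to pass to xe pool-ha-enable heartbeat-sr-uuids=.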

4.4.2.2.3. XenServer Versions Prior to 6.2 SP1 Hotfix XS62ESP1004
Addition of the first host in a XenServer cluster will succeed; there are no manual steps required in this case. For adding additional hosts to an existing cluster, perform the following before the hosts are added to CloudPlatform via the UI:
1. Manually join the current host to the existing pool of the first host that has been added to the cluster:
# xe pool-join master-address=<masterhost ipaddress> master-username=<username> master-password=<password>
2. Ensure that the hosts have joined the master pool successfully.
The following command lists all the hosts in the pool:
# xe host-list
The following command checks whether the pool's master is set to the new master:
# xe pool-list params=master
3. Continue with Section 4.4.2.3, "Adding a XenServer Host to CloudPlatform".

Note
Adding a host to a cluster will fail if the host is not added to the XenServer pool.

4.4.2.3. Adding a XenServer Host to CloudPlatform
1. If you have not already done so, install the XenServer software on the host. You will need to know which version of the XenServer software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. To find these installation details, see the appropriate section for XenServer in the CloudPlatform Hypervisor Configuration Guide.

2. Review the XenServer requirements listed in this chapter.
3. Depending on the host version used, complete the configuration requirements listed in Section 4.4.2.2, "Additional Requirements for XenServer Hosts Before Adding to CloudPlatform" and Section 4.4.2.1, "General Requirements for XenServer Hosts".
4. Log-in to the CloudPlatform UI using an administrator account.
5. In the left navigation bar, click Infrastructure.
6. In the right-side panel, under Zones, click View all.
7. In the page that lists the zones that are configured with CloudPlatform, click the zone where you want to add the hosts.
8. In the details page of the zone, click the Compute and Storage tab.
9. At the Clusters node of the diagram, click View all.
10. In the page that lists the clusters available with the zone, click the cluster where you want to add the host.
11. Under the Details tab, click the View Hosts link.
12. In the page that lists the hosts available with the cluster, click Add Host.
13. In the Add Host panel, provide the following information:
• Zone: Select the zone where you want to add the host.
• Pod: Select the pod in the zone where you want to add the host.
• Cluster: Select the cluster in the pod where you want to add the host.
• Host Name: The DNS name or IP address of the host.
• Username: Usually root.
• Password: This is the password associated with the user name from your XenServer install.
• Host Tags (Optional): Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, refer to the HA-Enabled Virtual Machines and the HA for Hosts sections in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.
• Dedicate: Select to indicate that this host is to be dedicated to a specific domain and account.
• Domain: Select the domain to which you want to dedicate the host.
• Account: Select the account that is associated with the domain so that you can dedicate the host to this account.
There may be a slight delay while the host is provisioned. It will display automatically in the UI.
14. Repeat for additional hosts. (A scripted alternative is sketched below.)
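If you are adding many hosts, the same operation can be driven through the API instead of repeating the Add Host panel. The call below is an assumption-laden sketch: it uses the unauthenticated integration API port (8096 by default, controlled by the integration.api.port global setting), and every value in angle brackets is a placeholder that you would look up first, for example with the listZones, listPods, and listClusters API commands.

# curl -sG "http://<MGMTIP>:8096/client/api" \
    --data-urlencode "command=addHost" \
    --data-urlencode "hypervisor=XenServer" \
    --data-urlencode "zoneid=<zone uuid>" \
    --data-urlencode "podid=<pod uuid>" \
    --data-urlencode "clusterid=<cluster uuid>" \
    --data-urlencode "url=http://<host DNS name or IP>" \
    --data-urlencode "username=root" \
    --data-urlencode "password=<host root password>"

If you use host tags, they can be supplied with an additional hosttags parameter.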

4.4.3. Recovering When Master Goes Down for XenServer Cluster Versions Prior to 6.2 SP1 Hotfix XS62ESP1004
CloudPlatform does not provide support for pool HA for any versions prior to XenServer 6.2 SP1 Hotfix XS62ESP1004. If a master host on a version prior to XenServer 6.2 SP1 Hotfix XS62ESP1004 goes down, CloudPlatform cannot connect to the pool and therefore is not operational from the CloudPlatform perspective.
To recover, attempt to bring up the master host. If for some reason the master host cannot be brought up, manually perform the following to designate an existing slave host as master:
1. Make a slave the master by running the following command on a slave:
# xe pool-emergency-transition-to-master
2. Ensure that the new master is effective. The following command checks whether the pool's master is set to the new master:
# xe pool-list params=master
3. Point other slaves to the new master by running the following command on the master:
# xe pool-recover-slaves
4. Ensure that all the slaves are pointed to the new master by running the command on all the slaves:
# xe pool-list params=master

4.5. Adding Primary Storage

Warning
When using preallocated storage for primary storage, be sure there is nothing on the storage (for example, you have an empty SAN volume or an empty NFS share). Adding the storage to CloudPlatform will destroy any existing data.

When you create a new zone, the first primary storage is added as part of that procedure. You can add primary storage servers at any time, such as when adding a new cluster or adding more servers to an existing cluster.
1. Log in to the CloudPlatform UI.
2. In the left navigation, choose Infrastructure. In Zones, click View All, then click the zone in which you want to add the primary storage.
3. Click the Compute and Storage tab.
4. In the Primary Storage node of the diagram, click View All.

5. Click Add Primary Storage.
6. Provide the following information in the dialog. The information required varies depending on your choice in Protocol.
• Scope: Indicate whether the storage is available to all hosts in the zone or only to hosts in a single cluster.
• Pod: (Visible only if you choose Cluster in the Scope field.) The pod for the storage device.
• Cluster: (Visible only if you choose Cluster in the Scope field.) The cluster for the storage device.
• Name: The name of the storage device.
• Protocol: Select NFS, iSCSI, or PreSetup.
• Server (for NFS, iSCSI, or PreSetup): The IP address or DNS name of the storage device.
• Path (for NFS): In NFS this is the exported path from the server.
• SR Name-Label (for PreSetup): Enter the name-label of the SR that has been set up outside CloudPlatform.
• Target IQN (for iSCSI): In iSCSI this is the IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
• Lun # (for iSCSI): In iSCSI this is the LUN number. For example, 3.
• Tags (optional): The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
7. Click OK.

4.6. Adding Secondary Storage

Note
Be sure there is nothing stored on the server. Adding the server to CloudPlatform will destroy any existing data.

When you create a new zone, the first secondary storage is added as part of that procedure. You can add secondary storage servers at any time to add more servers to an existing zone.
1. To prepare for the zone-based Secondary Storage, you should have created and mounted an NFS share during Management Server installation.
2. Make sure you prepared the system VM template during Management Server installation.
3. In the left navigation bar, click Infrastructure. In Secondary Storage, click View All.
4. Click Add Secondary Storage.
5. Fill in the following fields:
• Name: Give the storage a descriptive name.
• Provider: Choose the type of storage provider (such as S3, or NFS). NFS can be used for zone-based storage, and the others for region-wide object storage. Depending on which provider you choose, additional fields will appear. Fill in all the required fields for your selected provider. For more information, consult the provider's documentation, such as the S3 website. For NFS, the additional fields are:
• Server. The IP address or DNS name of the storage device.
• Path. The exported path from the server.

Warning
You can use only a single region-wide object storage account per region. For example, you cannot use S3 accounts from different users.

• Create NFS Secondary Storage: Be sure this box is checked, unless the zone already contains a secondary staging store. This checkbox and the three fields below it must be filled in. Even when object storage (such as S3) is used as the secondary storage provider, an NFS staging storage in each zone is still required. This option is not required if you are upgrading an existing NFS secondary storage into an object storage, as described in Section 4.6.3, "Upgrading from NFS to Object Storage". In this case, you can skip the rest of the fields described below (Zone, NFS Server, and Path).

Warning
If you are setting up a new zone, be sure the box is checked.

• Zone: The zone where the NFS Secondary Storage is to be located.
• NFS server: The IP address or DNS name of the server that hosts the zone's Secondary Storage.
• Path: The path to the zone's Secondary Storage.

4.6.1. Adding an NFS Secondary Storage for Each Zone
You can skip this section if you are upgrading an existing zone from NFS to object storage. You only need to perform the steps below when setting up a new zone that does not yet have its NFS server.

Every zone must have at least one NFS store provisioned; multiple NFS servers are allowed per zone.
To provision an NFS Staging Store for a zone:
1. Log in to the CloudPlatform UI as root administrator.
2. Make sure you prepared the system VM template during Management Server installation. To prepare for the zone-based Secondary Storage, you should have created and mounted an NFS share during Management Server installation.
3. In the left navigation bar, click Infrastructure.
4. In Secondary Storage, click View All.
5. In Select View, choose Secondary Storage.
6. Click the Add NFS Secondary Storage button.
7. Fill out the dialog box fields, then click OK:
• Zone. The zone where the NFS Secondary Storage is to be located.
• NFS server. The IP address or DNS name of the server that hosts the zone's Secondary Storage.
• Path. The path to the zone's Secondary Storage.

4.6.2. Configuring S3 Object Store for Secondary Storage
You can configure CloudPlatform to use Amazon S3 Object Store as secondary storage. S3 Object Store can be used with Amazon Simple Storage Service or any other provider that supports the S3 interface.
1. Make sure you prepared the system VM template during Management Server installation.
2. Log in to the CloudPlatform UI as root administrator.
3. In the left navigation bar, click Infrastructure.
4. In Secondary Storage, click View All.
5. Click Add Secondary Storage.
6. Specify the following:

• Name: Give the storage a descriptive name.
• Provider: Select S3 for region-wide object storage. S3 can be used with Amazon Simple Storage Service or any other provider that supports the S3 interface.

Warning
You can use only a single region-wide object storage account per region. For example, you cannot use S3 accounts from different users.

• Zone: The zone where the S3 Object Store is to be located.
• Use HTTPS: Specify if you want a secure connection with the S3 storage.
• Access Key: The Access Key ID of the administrator. These credentials are used to securely sign the requests through a REST or Query API to the CloudPlatform services. You can get this from the admin user Details tab in the Accounts page. Each Access Key ID has a Secret Access Key associated with it.
• Secret Key: The secret key ID of the administrator. You can get this from the admin user Details tab in the Accounts page. This key is just a long string of characters (and not a file) that you use to calculate the digital signature that you include in the request. Because you include it in each request, the ID is a secret. Your Secret Access Key is a secret, and only you and AWS should have it. Don't e-mail it to anyone, include it in any AWS requests, or post it on the AWS Discussion Forums. No authorized person from AWS will ever ask for your Secret Access Key.
• Bucket: The container of the objects stored in Amazon S3. Your files are stored as objects in a location called a bucket. Enter the name of the bucket where you store your files. When you configure your Amazon S3 bucket as a website, the service delivers the files in your bucket to web browsers as if they were hosted on a web server.
• End Point: The IP address or DNS name of the S3 storage server. For example: 10.10.29.1:8080, where 8080 is the listening port of the S3 storage server.
• Connection Timeout: The default timeout for creating new connections.
• Max Error Retry: The number of retries after service exceptions due to internal errors.
• Socket Timeout: The default timeout for reading from a connected socket.
• Create NFS Secondary Staging Store: If the zone already contains a secondary staging store, do not select this option. Select this option if you are upgrading an existing NFS secondary storage into an object storage, as described in Section 4.6.3, "Upgrading from NFS to Object Storage", in the Installation Guide. In this case, you can skip the rest of the fields described below (Zone, NFS Server, and Path).
• Path: The path to the zone's Secondary Staging Store.

4.6.3. Upgrading from NFS to Object Storage
In an existing zone that is using NFS for secondary storage, you can upgrade the zone to use a region-wide object storage without causing downtime. The existing NFS storage in the zone will be converted to an NFS Staging Store.

After upgrade, all newly created templates, ISOs, volumes, and snapshots are moved to the object store. All previously created templates, ISOs, volumes, and snapshots are migrated on an on-demand basis based on when they are accessed, rather than as a batch job. Unused objects in the NFS staging store are garbage collected over time.

1. Log in as admin to the CloudPlatform UI.
2. Locate the secondary storage that you want to upgrade to object storage. Perform either of the following in the Infrastructure page:
• In Zones, click View All, then locate the desired zone, and select Secondary Storage in the Compute and Storage tab.
• In Secondary Storage, click View All, then select the desired secondary storage.
3. Fire an admin API to update CloudPlatform to use object storage:
http://<MGMTIP>:8096/client/api?command=updateCloudToUseObjectStore&name=<S3 storage name>&provider=S3&details[0].key=accesskey&details[0].value=<access key from .s3cfg file>&details[1].key=secretkey&details[1].value=<secretKey from .s3cfg file>&details[2].key=bucket&details[2].value=<bucketname>&details[3].key=usehttps&details[3].value=<trueorfalse>&details[4].key=endpoint&details[4].value=<S3 server IP:8080>
All existing NFS secondary storages have been converted to NFS staging stores for each zone, and the S3 object store specified in the command has been added as a new region-wide secondary storage.

Post migration, consider the following:
• For each new snapshot taken after migration, ensure that you take a full snapshot to the newly added S3. This would help coalesce delta snapshots across the NFS and S3 stores. The snapshots taken before migration are pushed to the S3 store when you try to use such a snapshot by executing createVolumeCmd and passing the snapshot id.
• You can deploy VMs from templates because they are already in the NFS staging store. A copy of the template is generated in the new S3 object store when ExtractTemplate or CopyTemplate is executed.
• For volumes, a copy of the volume is generated in the new S3 object store when the ExtractVolume command is executed.
• Not all the items in the NFS storage are migrated to S3. Therefore, if you want to completely shut down the NFS storage you have previously used, write your own script to migrate those remaining items to S3.

4.7. Initialize and Test
After everything is configured, CloudPlatform will perform its initialization. This can take 30 minutes or more, depending on the speed of your network. When the initialization has completed successfully, the administrator's Dashboard should be displayed in the CloudPlatform UI.

1. Verify that the system is ready. In the left navigation bar, select Templates. Click the template. Check to be sure that the status is "Download Complete." Do not proceed to the next step until this status is displayed.
2. Go to the Instances tab, and filter by My Instances.
3. Click Add Instance and follow the steps in the wizard.
a. Choose the zone you just added.
b. In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available.
c. Select a service offering. Be sure that the hardware you have allows starting the selected service offering.
d. In data disk offering, if desired, add another data disk. This is a second volume that will be available to but not mounted in the guest. For example, in Linux on XenServer you will see /dev/xvdb in the guest after rebooting the VM. A reboot is not required if you have a PV-enabled OS kernel in use.
e. In default network, choose the primary network for the guest. In a trial installation, you would have only one option here.
f. Optionally give your VM a name and a group. Use any descriptive text you would like.
g. Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup. You can watch the VM's progress in the Instances screen.
4. To use the VM, click the View Console button.
For more information about using VMs, including instructions for how to allow incoming network traffic to the VM, start, stop, and delete VMs, and move a VM from one host to another, see Working With Virtual Machines in the Administrator's Guide.

Congratulations! You have successfully completed a CloudPlatform Installation.
If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters.
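If you added the optional data disk in step d, you can confirm it from inside the guest once the VM is running. The commands below are a generic Linux illustration rather than part of the CloudPlatform procedure; /dev/xvdb is the device name the guide mentions for XenServer PV guests and may differ on your template, and the mount point is a placeholder.

# lsblk
# mkfs.ext4 /dev/xvdb
# mkdir -p /data && mount /dev/xvdb /data

lsblk shows the extra block device, and the remaining commands format it and mount it so the guest can use the second volume.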


Chapter 5. Provisioning Your Cloud infrastructure on VMware vSphere
This section describes how to add zones, pods, clusters, hosts, storage, and networks to your cloud using the VMware vSphere hypervisor. For conceptual information about zones, pods, clusters, hosts, storage, and networks in CloudPlatform, refer to the CloudPlatform (powered by CloudStack) Version 4.5 Concepts Guide.

5.1. Overview of Provisioning Steps
After installing Management Server, you can access the CloudPlatform UI and add the compute resources for CloudPlatform to manage. Then, you can provision the cloud infrastructure, or scale the cloud infrastructure up at any time.

After you complete provisioning the cloud infrastructure, you will have a deployment with the following basic structure:

For information on adding a region to your cloud infrastructure, refer to Chapter 3 Adding Regions to Your Cloud Infrastructure (optional) of the CloudPlatform (powered by CloudStack) Version 4.5 Administration Guide.

5.2. Adding a Zone
Adding a zone consists of three phases:

• Create a secondary storage mount point for the zone
• Seed the system VM template on the secondary storage
• Add the zone

5.2.1. Creating a Secondary Storage Mount Point for the Zone
To ensure that you deploy the latest system VMs in a new zone, you must seed the latest system VM template to the secondary storage of the zone. For this, you must first create a mount point for the secondary storage. Then, you can seed the latest system VM template to the secondary storage.

1. On Management Server, create a mount point for secondary storage. For example:
# mkdir -p /mnt/secondary
2. Mount the secondary storage on your Management Server. Replace the NFS server name and NFS share path in the following example with the server name and path that you use.
# mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
3. Seed the secondary storage with the latest template that is used for CloudPlatform system VMs. For more information about seeding the secondary storage, refer to the CloudPlatform (powered by Apache CloudStack) Version 4.5 Installation Guide.
After you seed the secondary storage with the latest system VM template, continue with adding the new zone.

5.2.2. Adding a New Zone
To add a zone, you must first configure the zone's physical network. Then, you need to add the first pod, cluster, host, primary storage, and secondary storage.

Note
Before you proceed with adding a new zone, you must ensure that you have performed the steps to seed the system VM template.

Note
Citrix strongly recommends using the same type of hypervisors in a zone to avoid operational issues.

1. Log-in to the CloudPlatform UI using the root administrator account.
2. In the left navigation bar, click Infrastructure.
3. In the right side panel, under Zones, click View all.

1. Select this checkbox to indicate that this is a dedicated zone.1 Concepts Guide. These are DNS servers for use by system VMs in the zone(these are VMs used by CloudPlatform itself.1. • Enable Local storage for User VMs. After you select the Adsvanced network type.and Secondary Storage VMs. As a matter of good practice you should set different CIDRs for different zones.5. Then click Next. • Guest CIDR.1.1.2. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall.2. VPN.0/24. (Optional) If you want to assign a special domain name to the guest VM network. Advanced Zone Configuration 1. • Dedicated. After you select Advanced in the Add Zone wizard and click Next. you will be asked to enter the following details. 6. specify the DNS suffix. Cloud Infrastructure Concepts of the CloudPlatform (powered by Apache CloudStack) Version 4. A name for the zone.2. 5. Note The Basic zone type is not supported on VMWare ESXi hosts. The Add zone wizard panel appears. These DNS servers will be accessed via the public network you will add later.) These DNS servers will be accessed via the management traffic network interface of the System VMs. For more information. This is the CIDR that describes the IP addresses in use in the guest virtual networks in this zone. These are DNS servers for use by guest VMs in the zone. Choose vSphere as the hypervisor for the first cluster in the zone. such as virtual routers. click Add Zone.Adding a New Zone 4. proceed as described in Section 5. refer to Chapter 4. • Hypervisor. 57 . You must enter the Domain and account details to which the zone is assigned to. In the next page. • Name. • Internal DNS 1 and Internal DNS 2. The public IP addresses for the zone must have a route to the DNS server named here. The Advanced network type is used for more sophisticated network topologies. This will make it easier to set up VPNs between networks in different zones. Select to use local storage for provisioning the storage used for User VMs. Select Advanced as the network type. For example. The private IP address you provide for the pods must have a route to the internal DNS server named here. “Advanced Zone Configuration ” 5. • DNS 1 and 2. or load balancer support. • Network Domain. 10. console proxies. Only users in that domain will be allowed to use this zone.2.

for example. You can also change the network names if desired. then click OK. The traffic types are management. you need to add more. Note VMware dvSwitch is supported only for public and guest networks. 3. guest. you must specify the corresponding Ethernet port profile names as network traffic label for each traffic type on the physical network. you can move them down. To assign each label. If you have multiple physical networks. For more information on Nexus dvSwitch. If you have enabled VMware dvSwitch in the environment. Select to use local storage for provisioning the storage used for System VMs. Assign a network traffic label to each traffic type on each physical network. 2. Provisioning Your Cloud infrastructure on VMware vSphere • Enable Local Storage for System VMs. If you have enabled Nexus dvSwitch in the environment. This screen starts out with one network already configured. 58 . click the Edit button under the traffic type icon within each physical network. public. Drag and drop traffic types onto a greyed-out network and it will become active. you must specify the corresponding Switch name as network traffic label for each traffic type on the physical network. and storage traffic. A popup dialog appears where you can type the label. see Configuring a vSphere Cluster with Nexus 1000v Virtual Switch. You can move the traffic icons from one network to another. roll over the icons to display their tool tips.Chapter 5. These traffic labels will be defined only for the hypervisor selected for the first cluster. It's not yet supported for management and storage networks. These labels must match the labels you have already defined on the hypervisor host. Choose which traffic types will be carried by the physical network. For more information about the types. if the default traffic types shown for Network 1 do not match your actual setup.

59 . 6. CloudPlatform adds the first pod for you. • Gateway. The network prefix that defines the pod's subnet. click Next.Adding a New Zone 4. When done. To configure the first pod. The VLAN that will be used for public traffic. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks. • Reserved system gateway. A name for the pod. Click Next. enter the following. The netmask associated with this IP range. You can always add more pods later. 5. If desired. • Start IP/End IP. The gateway in use for these IP addresses. The gateway for the hosts in that pod. • Reserved system netmask. you can repeat this step to add more public Internet IP ranges. • Netmask. In a new zone. then click Next: • Pod Name. • VLAN. Configure the IP range for public Internet traffic. Enter the following details. then click Add. Use CIDR notation.

11. CloudPlatform adds the first host for you. then click Next: • Hypervisor. such as Secondary Storage VMs. You will need to know which version of the hypervisor software version is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform.5. you need to install the hypervisor software on the host. Then. CloudPlatform adds the first cluster for you. The IP range in the management network that CloudPlatform uses to manage various system VMs. (Optional) Any labels that you use to categorize hosts for ease of maintenance. enter the following. For vSphere servers. then click Next: • Host Name. For more information. • In a new cluster. enter the following. • Specify a range of VLAN IDs to carry guest traffic for each physical network (for more information.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. The DNS name or IP address of the host. VLAN Allocation Example in the Citrix CloudPlatform (powered by Apache CloudStack) Version 4.5. To configure the first cluster. For more information. and DHCP. enter the following. Note When you deploy CloudPlatform. Citrix recommends creating the cluster of hosts in vCenter and then adding the entire cluster to CloudPlatform. You can always add more hosts later. Enter a name for the cluster. • Cluster name. • Password. You can always add more clusters later. see section: 6. For example. Console Proxy VMs. To configure the first host. • In a new cluster. enter the required information about a vSphere cluster in the additional fields that appear. then click Next. Select VMware. you can set to the cloud's HA tag (set in the ha. • Username. • Host Tags. both in the Administration Guide. This is the password for the user named above (from your VMWare install). • In a new pod. the hypervisor host must not have any VMs already running. To configure the first primary storage server. This can be text of your choosing and is not used by CloudPlatform.1 Administration Guide ). Provisioning Your Cloud infrastructure on VMware vSphere • Start/End Reserved System IP. You can always add more servers later. Before you can configure the host. CloudPlatform adds the first primary storage server for you. then click Next: 60 .1 Hypervisor Configuration Guide.Chapter 5.1. Usually root. see HA-Enabled Virtual Machines as well as HA for Hosts. refer to Chapter 5 Installing VMWare for CloudPlatform in the CloudPlatform (powered by Apache CloudStack) Version 4.

• Tags (optional). iSCSI • Server. CIFS • Server. 3. if cluster A provides primary storage that has tags T1 and T2. The IP address or DNS name of the storage device. • Path. • SR Name-Label. The LUN number. all other clusters in the Zone must also provide primary storage that has tags T1 and T2. The tag sets on primary storage across clusters in a Zone must be identical. if cluster A provides primary storage that has tags T1 and T2.Adding a New Zone • Name. all other clusters in the Zone must also provide primary storage that has tags T1 and T2. It should be an equivalent set or superset of the tags on your disk offerings.Select VMFS (iSCSI or FiberChannel) or NFS. The tag sets on primary storage across clusters in a Zone must be identical. It should be an equivalent set or superset of the tags on your disk offerings. The IP address or DNS name of the storage device. The comma-separated list of tags for this storage device. • Lun. preSetup • Server. The IP address or DNS name of the storage device. The name of the storage device. all other clusters in the Zone must also provide primary storage that has tags T1 and T2.1986-03. • Tags (optional). For example. For example. The IQN of the target. The comma-separated list of tags for this storage device.sun:02:01ec9bb549-1271378984. NFS • Server.com. • Tags (optional). For example. • Path. It should be an equivalent set or superset of the tags on your disk offerings. • Protocol. if cluster A provides primary storage that has tags T1 and T2. For example. The IP address or DNS name of the storage device. The exported path from the server. 61 . The comma-separated list of tags for this storage device. • Target IQN. iqn. The tag sets on primary storage across clusters in a Zone must be identical. For example. The exported path from the server. Enter the name-label of the SR that has been set up outside CloudPlatform.

all other clusters in the Zone must also provide primary storage that has tags T1 and T2. all other clusters in the Zone must also provide primary storage that has tags T1 and T2. The comma-separated list of tags for this storage device. if cluster A provides primary storage that has tags T1 and T2. A combination of the datacenter name and the datastore name. all other clusters in the Zone must also provide primary storage that has tags T1 and T2. • Path. • Tags (optional). To configure the first secondary storage server on hosts. if cluster A provides primary storage that has tags T1 and T2. • Click Launch. The tag sets on primary storage across clusters in a Zone must be identical. For example. then click Next: • NFS Server. The format is "/" datacenter name "/" datastore name. The IP address or DNS name of the vCenter server. The tag sets on primary storage across clusters in a Zone must be identical. Provisioning Your Cloud infrastructure on VMware vSphere • Tags (optional).dc. The path on each host that is where this primary storage is mounted. The IP address of the server. For example. It should be an equivalent set or superset of the tags on your disk offerings. For example. For example. The comma-separated list of tags for this storage device. • In a new zone. Before you can fill out this screen. enter the following. SharedMountPoint • Path. 62 . For example. "/mnt/primary".VM/ cluster1datastore". VMFS • Server. The exported path from the server. The comma-separated list of tags for this storage device. • Tags (optional).Chapter 5. It should be an equivalent set or superset of the tags on your disk offerings. if cluster A provides primary storage that has tags T1 and T2. The tag sets on primary storage across clusters in a Zone must be identical. "/cloud. you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudPlatform System VM template. CloudPlatform adds the first secondary storage server for you. • Path. It should be an equivalent set or superset of the tags on your disk offerings.

Console Proxy VMs. 8. It is supported only for public and guest networks. 5. Use CIDR notation. Adding a Pod After you create a new zone. 63 . under Zones. see System Reserved IP Addresses. In the page that lists the zones that you configured. 4. click the zone where you want to add a pod. You can perform the following procedure to add more pods to the zone at anytime.4. click Add Pod. Clusters of multiple hosts allow for features like live migration. As an administrator you must decide if you would like to use clusters of one host or of multiple hosts. CloudPlatform adds the first pod to it. and DHCP. In the left navigation bar. • Domain: Select the domain to which you want to dedicate the pod. At the Pods node in the diagram. • Pod name: The name that you can use to identify the pod. click View all. • Account: Enter an account name that belongs to the above selected domain so that you can dedicate the pod to this account.3. enter the following details: • Zone: Select the name of the zone where you want to add the new pod. • Dedicate: Select if you want to dedicate the pod to a specific domain. CloudPlatform requires that all hosts be in a CloudPlatform cluster. but the cluster may consist of a single host. Clusters also require shared storage. In the page that lists the pods configured in the zone that you selected. 3. Click OK. 5. Note Do not use Nexus dvSwitch for management and storage networks. select Infrastructure. • Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to manage various system VMs. 7. Add Cluster: vSphere Host management for vSphere is done through a combination of vCenter and the CloudPlatform UI. In the right-side panel. 1. such as Secondary Storage VMs. In the Add Pod dialog bozx. For more information. • Reserved system netmask: The network prefix that defines the pod's subnet.Adding a Pod 5. 6. Click the Compute and Storage tab. • Reserved system gateway: The gateway for the hosts in that pod. click View all. Log-in to the CloudPlatform UI. 2.

A cluster size of up to 8 hosts has been found optimal for most realworld situations. the limit is 32 hosts.1. click View all. You will create a cluster that looks something like this in vCenter.Chapter 5. Note Best Practice: It is advisable for VMware clusters in CloudPlatform to be smaller than the VMware hypervisor's maximum size. in the left navigation bar. For VMware versions 4.4. 64 . 5.0. 3.1. 2. In the CloudPlatform UI. 4.4. In the right-side panel. Adding a vSphere Cluster To add a vSphere cluster to CloudPlatform: 1. VMware Cluster Size Limit The maximum number of hosts in a vSphere cluster is determined by the VMware hypervisor software. 4. under Zones. click Infrastructure. Follow the vCenter instructions to do this.2. CloudPlatform adheres to this maximum. 5. and 5.2. we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudPlatform.1. Create the cluster of hosts in vCenter. 5. Log-in to the CloudPlatform UI. Provisioning Your Cloud infrastructure on VMware vSphere For vSphere servers.

• Cluster Name: Enter the name of the cluster you created in vCenter. 7. Provide the following information in the dialog. • Nexus dvSwitch Username: The username required to access the Nexus VSM applicance. In the details page of the zone. It will automatically display in the UI 5. click Add Cluster. If you have enabled Nexus dvSwitch in the environment.2. There might be a slight delay while the cluster is provisioned. 9. 8.cluster. At the Clusters node of the diagram. click the zone where you want to add the cluster. 6. • Nexus dvSwitch Password: The password associated with the username specified above. 65 . Adding a Host (vSphere) For vSphere servers. "cloud. This user must have all administrative privileges. The fields below make reference to values from vCenter. For example. In the page that lists the zones that you have configured.4.4. In the Add Cluster dialog box. For example. • Hypervisor: Select VmWare.Adding a Host (vSphere) 5. • vCenter Username: Enter the username that CloudPlatform should use to connect to vCenter. • vCenter Password: Enter the password for the user named above • vCenter Datacenter: Enter the vCenter datacenter that the cluster is in. “Add Cluster: vSphere”.VM". See Section 5. do the following and click OK: 10. "cloud. click View all.1" • vCenter Host: Enter the hostname or IP address of the vCenter server.3. the following parameters for dvSwitch configuration are displayed: • Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance.2. In the page that list the clusters. click the Compute and Storage tab. we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudPlatform.dc.

) The pod for the storage device. choose VMFS (iSCSI or FiberChannel) or NFS. be sure there is nothing on the storage (ex. • Server (for NFS. • Cluster: (Visible only if you choose Cluster in the Scope field. "/cloud. 5. the first primary storage is added as part of that procedure. When you create a new zone. 4.1986-03. • Name: The name of the storage device. The information required varies depending on your choice in Protocol. The IP address or DNS name of the vCenter server. You can add primary storage servers at any time. • Path (for NFS): In NFS this is the exported path from the server. 3. • Protocol: For vSphere.5.com. For example. • Scope: Indicate whether the storage is available to all hosts in the zone or only to hosts in a single cluster. click View All. Click the Compute and Storage tab. Adding the storage to CloudPlatform will destroy any existing data. • SR Name-Label (for PreSetup): Enter the name-label of the SR that has been set up outside CloudPlatform.Chapter 5. 2. In Zones. Click Add Primary Storage. • Path (for VMFS): In vSphere this is a combination of the datacenter name and the datastore name. Provisioning Your Cloud infrastructure on VMware vSphere 5. The format is "/" datacenter name "/" datastore name. Adding Primary Storage Warning When using preallocated storage for primary storage. choose Infrastructure. Provide the following information in the dialog. For example. • Pod: (Visible only if you choose Cluster in the Scope field. In the Primary Storage node of the diagram. such as when adding a new cluster or adding more servers to an existing cluster. then click the zone in which you want to add the primary storage.VM/ cluster1datastore". iqn. you have an empty SAN volume or an empty NFS share).) The cluster for the storage device. 1. 6.sun:02:01ec9bb549-1271378984 66 . In the left navigation. Log in to the CloudPlatform UI. or iSCSI): The IP address or DNS name of the storage device. click View All. • Target IQN (for iSCSI): In iSCSI this is the IQN of the target. • Server (for VMFS).dc.

You can add secondary storage servers at any time to add more servers to an existing zone. Depending on which provider you choose. Log in to the CloudPlatform UI as root administrator. In the left navigation bar. • Provider: Choose the type of storage provider (such as S3. 4. Warning You can use only a single region-wide object storage account per region. 2. 5. or NFS). if cluster A provides primary storage that has tags T1 and T2. NFS can be used for zone-based storage. • Tags (optional): The comma-separated list of tags for this storage device. Click Add Secondary Storage. and the others for region-wide object storage. 5. click View All. For example. 7. 1. Fill in all the required fields for your selected provider. Click OK. the first secondary storage is added as part of that procedure. Make sure you prepared the system VM template during Management Server installation. 6. 67 . you should have created and mounted an NFS share during Management Server installation.Adding Secondary Storage • Lun # (for iSCSI): In iSCSI this is the LUN number. To prepare for the zone-based Secondary Storage. 3. For example. Fill in the following fields: • Name: Give the storage a descriptive name. 3. It should be an equivalent set or superset of the tags on your disk offerings The tag sets on primary storage across clusters in a Zone must be identical. all other clusters in the Zone must also provide primary storage that has tags T1 and T2. you can not use S3 accounts from different users. additional fields will appear. Adding the server to CloudPlatform will destroy any existing data. such as the S3 website.6. For more information. For example. In Secondary Storage. When you create a new zone. 7. click Infrastructure. Adding Secondary Storage Note Be sure there is nothing stored on the server. consult the provider's documentation.

Make sure you prepared the system VM template during Management Server installation. you should have created and mounted an NFS share during Management Server installation. Warning If you are setting up a new zone. an NFS staging storage in each zone is still required. 4. In Secondary Storage. 8.Chapter 5. Provisioning Your Cloud infrastructure on VMware vSphere • Create NFS Secondary Storage: Be sure this box is checked. The exported path from the server. • Path. • Path. In Select View. In the left navigation bar. 68 . 2. This option is not required if you are upgrading an existing NFS secondary storage into an object storage. Upgrading from NFS to Object Storage in the Installation Guide. • Server. unless the zone already contains a secondary staging store. • NFS server. • Path: The path to the zone's Secondary Storage. 5. 7. Even when object storage (such as S3) is used as the secondary storage provider.6. • Zone: The zone where the NFS Secondary Storage is to be located. be sure the box is checked. • NFS server: The name of the zone's Secondary Storage. click View All. multiple NFS servers are allowed per zone. You only need to perform the steps below when setting up a new zone that does not yet have its NFS server. Adding an NFS Secondary Storage for Each Zone You can skip this section if you are upgrading an existing zone from NFS to object storage. NFS Server. 3.6. In this case. The path to the zone's Secondary Storage. “Upgrading from NFS to Object Storage ”. as described in Section 5. To prepare for the zone-based Secondary Storage. and Path). Click the Add NFS Secondary Storage button. click Infrastructure. then click OK: • Zone. Log in to the CloudPlatform UI as root administrator. Every zone must have at least one NFS store provisioned. 6. Fill out the dialog box fields. 5. The IP address or DNS name of the storage device.3.1. This checkbox and the three fields below it must be filled in. The zone where the NFS Secondary Storage is to be located. you can skip the rest of the fields described below (Zone. choose Secondary Storage. To provision an NFS Staging Store for a zone: 1. The name of the zone's Secondary Storage.

Specify the following: 69 . click View All.2. 4. 1. Log in to the CloudPlatform UI as root administrator. S3 Object Store can be used with Amazon Simple Storage Service or any other provider that supports the S3 interface. 5. click Infrastructure. 3.6. Make sure you prepared the system VM template during Management Server installation. Configuring S3 Object Store for Secondary Storage You can configure CloudPlatform to use Amazon S3 Object Store as a secondary storage. 2.Configuring S3 Object Store for Secondary Storage 5. In Secondary Storage. 6. Click Add Secondary Storage. In the left navigation bar.

• Provider: Select S3 for region-wide object storage. S3 can be used with Amazon Simple Storage Service or any other provider that supports the S3 interface. Provisioning Your Cloud infrastructure on VMware vSphere • Name: Give the storage a descriptive name.Chapter 5. 70 .

Each Access Key ID has a Secret Access Key associated with it.3. • Use HTTPS: Specify if you want a secure connection with the S3 storage. you can skip the rest of the fields described below (Zone. do not select this option. Select if you are upgrading an existing NFS secondary storage into an object storage. • Create NFS Secondary Staging Store: If the zone already contains a secondary staging store. These credentials are used to securely sign the requests through a REST of Query API to the CloudPlatform services. When you configure your Amazon S3 bucket as a website. In this case. 71 . • End Point: The IP address or DNS name of the S3 storage server. the service delivers the files in your bucket to web browsers as if they were hosted on a web server. NFS Server. For example. you can not use S3 accounts from different users. • Access Key: The Access Key ID of the administrator. and Path).Configuring S3 Object Store for Secondary Storage Warning You can use only a single region-wide object storage account per region.29. For example: 10.6. or post it on the AWS Discussion Forums. include it any AWS requests. Your files are stored as objects in a location called a bucket. Enter the name of the bucket where you store your files. Because you include it in each request. You can get this from the admin user Details tab in the Accounts page. • Bucket : The container of the objects stored in Amazon S3. • Path: The path to the zone's Secondary Staging Store. Upgrading from NFS to Object Storage in the Installation Guide. the ID is a secret. • Secret Key: The secret key ID of the administrator. • Connection Timeout: The default timeout for creating new connections. This key is just a long string of characters (and not a file) that you use to calculate the digital signature that you include in the request. where 8080 is the listening port of the S3 storage server. and only you and AWS should have it.1:8080. Your Secret Access Key is a secret. Don't e-mail it to anyone. • Max Error Retry: The number of retry after service exceptions due to internal errors. You can get this from the admin user Details tab in the Accounts page. No authorized person from AWS will ever ask for your Secret Access Key. “Upgrading from NFS to Object Storage ”. • Zone: The zone where S3 the Object Store is to be located. • Socket Timeout: The default timeout for reading from a connected socket.10. as described in Section 5.

2. Log in as admin to the CloudPlatform UI.value=<secretKey from . • In Secondary Storage. then locate the desired zone. click View All.Chapter 5. volumes. All previously created templates. if you want to completely shut down the NFS storage you have previously used.value=<trueorfalse>&details[4].s3cfg file>&details[2]. consider the following: • For each new snapshot taken after migration.key=secretkey&details[1]. ensure that you take a full snapshot to newly added S3.key=endpoint&details[4]. 3. 1.s3cfg file>&details[1]. Perform either of the following in the Infrastructure page: • In Zones. Locate the secondary storage that you want to upgrade to object storage.6. This would help coalesce delta snapshots across NFS and S3 stores. value= <S3 server IP:8080> All existing NFS secondary storages has been converted to NFS staging stores for each zone. Upgrading from NFS to Object Storage In an existing zone that is using NFS for secondary storage. click View All. • For volume. • All the items in the NFS storage is not migrated to S3. write your own script to migrate those remaining items to S3.key=bucket&details[2].3. snapshots are migrated on an on-demand basis based on when they are accessed. Post migration. Therefore. • You can deploy VM from templates because they are already in the NFS staging store. snapshots are moved to the object store. a copy of the volume is generated in the new S3 object store when ExtractVolume command is executed. The existing NFS storage in the zone will be converted to an NFS Staging Store. After upgrade. then select the desired secondary storage. you can upgrade the zone to use a region-wide object storage without causing downtime. volumes. ISOs. rather than as a batch job. and your S3 object store specified in the command has been added as a new region-wide secondary storage. and select Secondary Storage in the Compute and Storage tab. Fire an admin API to update CloudPlatform to use object storage: http://<MGMTIP>:8096/client/api?command=updateCloudToUseObjectStore&name=<S3 storage name>&provider=S3&details[0].value=<access key from . ISOs.key=accesskey&details[0]. Unused objects in the NFS staging store are garbage collected over time.value=<bucketname>&details[3] key=usehttps&details[3]. A copy of the template is generated in the new S3 object store when ExtractTemplate or CopyTemplate is executed. Provisioning Your Cloud infrastructure on VMware vSphere 5. The snapshots taken before migration are pushed to S3 store when you try to use that snapshot by executing createVolumeCmd by passing snapshot id. all newly created templates. 72 .

Initialize and Test

5.7. Initialize and Test
After everything is configured, CloudPlatform will perform its initialization. This can take 30 minutes or
more, depending on the speed of your network. When the initialization has completed successfully,
the administrator's Dashboard should be displayed in the CloudPlatform UI.
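If you want to watch the initialization from outside the UI, one option is to poll the system VMs through the API until the Secondary Storage VM and Console Proxy VM for the new zone report the Running state. This is a sketch only: it assumes the optional unauthenticated integration API port 8096 is enabled on the Management Server (the same port used by the admin API example in Section 5.6.3), and the address is a placeholder.

# curl -s "http://<management-server-ip>:8096/client/api?command=listSystemVms&response=json"

You can also follow the Management Server log while the zone comes up; on many 4.x installations this is /var/log/cloudstack/management/management-server.log, although the exact path can vary with how the product was installed.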
1. Verify that the system is ready. In the left navigation bar, select Templates. Click the template.
Check to be sure that the status is "Download Complete." Do not proceed to the next step until
this status is displayed.
2. Go to the Instances tab, and filter by My Instances.
3. Click Add Instance and follow the steps in the wizard. (If you prefer to script this step, see the example that follows this procedure.)
a. Choose the zone you just added.
b. In the template selection, choose the template to use in the VM. If this is a fresh installation,
likely only the provided CentOS template is available.
c. Select a service offering. Be sure that the hardware you have allows starting the selected service offering.

d. In data disk offering, if desired, add another data disk. This is a second volume that will be
available to but not mounted in the guest. A reboot is not required if you have a PV-enabled
OS kernel in use.
e. In default network, choose the primary network for the guest. In a trial installation, you would
have only one option here.
f. Optionally give your VM a name and a group. Use any descriptive text you would like.

g. Click Launch VM. Your VM will be created and started. It might take some time to download
the template and complete the VM startup. You can watch the VM’s progress in the
Instances screen.
4. To use the VM, click the View Console button.
For more information about using VMs, including instructions for how to allow incoming network
traffic to the VM, start, stop, and delete VMs, and move a VM from one host to another, see
Working With Virtual Machines in the Administrator’s Guide.
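The wizard steps above can also be driven through the API, which is convenient once you start creating test instances repeatedly. The commands below are a sketch, not part of the wizard: they assume the integration API port 8096 is enabled on the Management Server, and the IDs are placeholders that you first look up with the list calls.

# curl -s "http://<management-server-ip>:8096/client/api?command=listZones&response=json"
# curl -s "http://<management-server-ip>:8096/client/api?command=listTemplates&templatefilter=featured&response=json"
# curl -s "http://<management-server-ip>:8096/client/api?command=listServiceOfferings&response=json"
# curl -s "http://<management-server-ip>:8096/client/api?command=deployVirtualMachine&zoneid=<zone-id>&templateid=<template-id>&serviceofferingid=<service-offering-id>&response=json"

deployVirtualMachine is asynchronous; the response contains a job ID that you can check with queryAsyncJobResult, or you can simply watch the new instance appear on the Instances tab as in step 3.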
Congratulations! You have successfully completed a CloudPlatform Installation.
If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and
clusters.


Chapter 6. Provisioning Your Cloud Infrastructure on Hyper-V
This section describes how to add zones, pods, clusters, hosts, storage, and networks to your cloud
using the Hyper-V hypervisor.
For conceptual information about zones, pods, clusters, hosts, storage, and networks in
CloudPlatform, refer to CloudPlatform (powered by CloudStack) Version 4.5 Concepts Guide.

6.1. Overview of Provisioning Steps
After installing Management Server, you can access the CloudPlatform UI and add the compute
resources for CloudPlatform to manage.
Then, you can provision the cloud infrastructure, or scale the cloud infrastructure up at any time.
After you complete provisioning the cloud infrastructure, you will have a deployment with the following
basic structure:

For information on adding a region to your cloud infrastructure, refer to Chapter 3 Adding Regions
to Your Cloud Infrastructure (optional) of CloudPlatform (powered by CloudStack) Version 4.5
Administration Guide.

6.2. Adding a Zone
Adding a zone consists of three phases:

• Create a mount point for secondary storage on the Management Server.

• Seed the system VM template on the secondary storage.

• Add the zone.

Note
Citrix strongly recommends using the same type of hypervisors in a zone to avoid operational issues.

6.2.1. Create a Secondary Storage Mount Point for the New Zone

To ensure that you have deployed the latest system VMs in new zones, you need to seed the latest system VM template to the zone's secondary storage. Secondary storage must be seeded with a template that is used for CloudPlatform system VMs. The first step is to create a mount point for the secondary storage. Then seed the system VM template.

1. On the management server, create a mount point for secondary storage. For example:

# mkdir -p /mnt/secondary

2. Mount the secondary storage on your Management Server. You must have the Samba client to mount CIFS on Linux. As a prerequisite, you must install the CIFS packages; you can run the command yum install samba-client samba-common cifs-utils. Use the following command and replace the example CIFS server name and CIFS share paths below with your own:

# mount -t cifs //cifsserver/cifsShare/Secondary_storage -o username=<domain user>,password=<password>,domain=<domain name> /mnt/secondary

3. Seed the secondary storage with a template that is used for CloudPlatform system VMs. For more information, refer to the CloudPlatform Installation Guide.

After you seed the secondary storage with a system VM template, continue with adding the zone.

6.2.2. Adding a New Zone

To add a zone, you must first configure the zone's physical network. Then, you need to add the first pod, cluster, host, primary storage, and secondary storage.

Note
Before you proceed with adding a new zone, you must ensure that you have performed the steps to seed the system VM template.

Note
Hyper-V clusters are supported in dedicated zones.

1. Log-in to the CloudPlatform UI using the root administrator account.

2. In the left navigation bar, click Infrastructure.

3. In the right side panel, under Zones, click View all.

4. In the right side panel, click Add Zone. The Add zone wizard panel appears.

5. Select Advanced as the network type.

Note
The Basic zone type is not supported on Hyper-V hosts.

The Advanced network type is used for more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall, VPN, or load balancer support. For more information, refer to Chapter 4, Cloud Infrastructure Concepts, of the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Concepts Guide.

6. In the next page, you will be asked to enter the following details. Then click Next.

• Name. A name for the zone.

• DNS 1 and 2. These are DNS servers for use by guest VMs in the zone. These DNS servers will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.

• Internal DNS 1 and Internal DNS 2. These are DNS servers for use by system VMs in the zone (these are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary Storage VMs.) These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.

• Network Domain. (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.

• Guest CIDR. This is the CIDR that describes the IP addresses in use in the guest virtual networks in this zone. For example, 10.1.1.0/24. As a matter of good practice you should set different CIDRs for different zones. This will make it easier to set up VPNs between networks in different zones.

• Hypervisor. Choose Hyper-V as the hypervisor for the first cluster in the zone.

• Dedicated. Select this checkbox to indicate that this is a dedicated zone. You must enter the Domain and account details to which the zone is assigned. Only users in that domain will be allowed to use this zone.

• Enable Local storage for User VMs. Select to use local storage for provisioning the storage used for User VMs.

• Enable Local Storage for System VMs. Select to use local storage for provisioning the storage used for System VMs.

After you select the Advanced network type, proceed as described in Section 6.2.2.1, "Advanced Zone Configuration".

6.2.2.1. Advanced Zone Configuration

1. After you select Advanced in the Add Zone wizard and click Next, you will be asked to enter the zone details described above. Then click Next.

2. Select the traffic types that the physical network will carry. The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips. This screen starts out with one network already configured. If you have multiple physical networks, you need to add more. Drag and drop traffic types onto a greyed-out network and it will become active. You can move the traffic icons from one network to another; for example, if the default traffic types shown for Network 1 do not match your actual setup, you can move them down. You can also change the network names if desired.

3. Assign a network traffic label to each traffic type on each physical network. These labels must match the labels you have already defined on the hypervisor host. The traffic label must match the Hyper-V vswitch name that is configured on the Hyper-V host. To assign each label, click the Edit button under the traffic type icon within each physical network. A popup dialog appears where you can type the label, then click OK. These traffic labels will be defined only for the hypervisor selected for the first cluster.

4. Click Next.

5. Configure the IP range for public Internet traffic. Enter the following details, then click Add. If desired, you can repeat this step to add more public Internet IP ranges. When done, click Next.

• Gateway. The gateway in use for these IP addresses.

• Netmask. The netmask associated with this IP range.

• VLAN. The VLAN that will be used for public traffic.

• Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks.

6. In a new zone, CloudPlatform adds the first pod for you. You can always add more pods later. To configure the first pod, enter the following, then click Next:

• Pod Name. A name for the pod.

• Reserved system gateway. The gateway for the hosts in that pod.

• Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.

• Start/End Reserved System IP. The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.

• Specify a range of VLAN IDs to carry guest traffic for each physical network (for more information, see VLAN Allocation Example in the Citrix CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide), then click Next.

• In a new pod, CloudPlatform adds the first cluster for you. You can always add more clusters later. To configure the first cluster, enter the following, then click Next:

  • Hypervisor. Select Hyper-V.

  • Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudPlatform.

• In a new cluster, CloudPlatform adds the first host for you. You can always add more hosts later. To configure the first host, enter the following, then click Next:

Note
When you deploy CloudPlatform, the hypervisor host must not have any VMs already running.

Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. For more information, refer to Chapter 3 Installing Hyper-V for CloudPlatform in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Hypervisor Configuration Guide.

  • Host Name. The DNS name or IP address of the host.

  • Username. Usually root.

  • Password. This is the password for the user named above (from your Hyper-V install).

  • Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts, both in the Administration Guide.

• In a new cluster, CloudPlatform adds the first primary storage server for you. You can always add more servers later. To configure the first primary storage server, enter the following, then click Next:

  • Name. The name of the storage device.

  • Protocol. Select CIFS. The remaining fields in the screen vary depending on the protocol that you selected. In addition to the protocol-specific fields below, every protocol accepts Tags (optional): the comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.

  CIFS
  • Server. The IP address or DNS name of the storage device.
  • Path. The exported path from the server.
  • SMB Username. The username of the account which has the necessary permissions to the SMB shares. The user must be part of the Hyper-V administrator group.
  • SMB Password. The password associated with the account.
  • SMB Domain. The Active Directory domain that the SMB share is a part of.

  NFS
  • Server. The IP address or DNS name of the storage device.
  • Path. The exported path from the server.

  iSCSI
  • Server. The IP address or DNS name of the storage device.
  • Target IQN. The IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
  • Lun. The LUN number. For example, 3.

  preSetup
  • Server. The IP address or DNS name of the storage device.
  • SR Name-Label. Enter the name-label of the SR that has been set up outside CloudPlatform.

  VMFS
  • Server. The IP address or DNS name of the vCenter server.
  • Path. A combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".

  SharedMountPoint
  • Path. The path on each host where this primary storage is mounted. For example, "/mnt/primary".

• In a new zone, CloudPlatform adds the first secondary storage server for you. Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudPlatform System VM template.

  (Hyper-V) To configure the first secondary storage server, enter the following, then click Next:
  • Server. The IP address or DNS name of the storage device.
  • Path. The exported path from the server.
  • SMB Username. The username of the account which has the necessary permissions to the SMB shares. The user must be part of the Hyper-V administrator group.
  • SMB Password. The password associated with the account.
  • SMB Domain. The Active Directory domain that the SMB share is a part of.

  To configure the first secondary storage server on hosts other than Hyper-V, enter the following, then click Next:
  • NFS Server. The IP address of the server.
  • Path. The exported path from the server.

• Click Launch.

6.3. Adding a Pod

After you create a new zone, CloudPlatform adds the first pod to it. You can perform the following procedure to add more pods to the zone at anytime.

1. Log-in to the CloudPlatform UI.

2. In the left navigation bar, select Infrastructure.

3. In the right-side panel, under Zones, click View all.

4. In the page that lists the zones that you configured, click the zone where you want to add a pod.

5. Click the Compute and Storage tab.

6. At the Pods node in the diagram, click View all.

7. In the page that lists the pods configured in the zone that you selected, click Add Pod.

8. In the Add Pod dialog box, enter the following details:

• Zone: Select the name of the zone where you want to add the new pod.

• Pod name: The name that you can use to identify the pod.

• Reserved system gateway: The gateway for the hosts in that pod.

• Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.

• Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.

• Dedicate: Select if you want to dedicate the pod to a specific domain.

• Domain: Select the domain to which you want to dedicate the pod.

• Account: Enter an account name that belongs to the above selected domain so that you can dedicate the pod to this account.

9. Click OK.

6.4. Adding a Hyper-V Cluster

You need to tell CloudPlatform about the hosts that it will manage. Hosts exist inside clusters, so before you begin adding hosts to the cloud, you must add at least one cluster.

Before you perform the following steps, you must have installed Hyper-V on the hosts and logged-in to the CloudPlatform UI.

1. In the CloudPlatform UI, in the left navigation bar, click Infrastructure.

2. In the right-side panel, under Zones, click View all.

3. In the page that lists the zones that you have configured, click the zone where you want to add the cluster.

4. In the details page of the zone, click the Compute and Storage tab.

5. At the Clusters node of the diagram, click View all.

6. In the page that lists the clusters, click Add Cluster.

7. In the Add Cluster dialog box, do the following:

• Zone Name: Select the zone where you want to create the cluster.

• Hypervisor: Select Hyper-V as the hypervisor for this cluster.

• Pod Name: Select the pod where you want to create the cluster.

• Cluster Name: Enter a name that you can use to identify the cluster.

• Dedicate: Select if you want to dedicate the cluster to a specific domain.

• Domain: Select the domain to which you want to dedicate the cluster.

• Account: Enter an account name that belongs to the domain so that you can dedicate the cluster to this account.

8. Click OK.

6.4.1. Adding a Hyper-V Host

1. You must install Hyper-V on the host. You must know the version of the Hyper-V software that CloudPlatform supports and the additional configurations that are required to ensure that the host will work with CloudPlatform.

Note
Ensure that you have performed the additional CloudPlatform-specific configuration steps described in Chapter 3, Installing Hyper-V for CloudPlatform, of the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Hypervisor Configuration Guide.

2. Log-in to the CloudPlatform UI using an administrator account.

3. In the left navigation bar, click Infrastructure.
4. In the right-side panel, under Zones, click View all.
5. In the page that lists the zones that are configured with CloudPlatform, click the zone where you
want to add the hosts.
6. In the details page of the zone, click the Compute and Storage tab.
7. At the Clusters node, click View all.
8. In the page that lists the clusters available with the zone, click the cluster where you want to add
the host.
9. Under the Details tab, click the View Hosts link.
10. In the page that lists the hosts available with the cluster, click Add Host.
11. In the Add Host panel, provide the following information:
• Zone: Select the zone where you want to add the host.
• Pod: Select the pod in the zone where you want to add the host.
• Cluster: Select the cluster in the pod where you want to add the host.
• Host Name: The DNS name or IP address of the host.
• Username: Usually root.
• Password: This is the password associated with the user name from your Hyper-V install.
• Dedicate: Select to indicate that this host is to be dedicated to a specific domain and account.
• Domain: Select the domain to which you want to dedicate the host.
• Account: Select the account that is associated with the domain so that you can dedicate the
host to this account.
• Host Tags (Optional). Any labels that you use to categorize hosts for ease of maintenance. For
example, you can set to the cloud's HA tag (set in the ha.tag global configuration parameter) if
you want this host to be used only for VMs with the "high availability" feature enabled. For more
information, refer to the HA-Enabled Virtual Machines and the HA for Hosts sections in the
CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.
There may be a slight delay while the host is provisioned. It will display automatically in the UI.
12. Repeat the procedure for adding additional hosts. (If you have many hosts to add, see the scripted example that follows this step.)
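The scripted example mentioned in step 12: adding a host can also be done with the addHost API call, which is handy when you are bringing many Hyper-V hosts into the same cluster. This is a sketch under a few assumptions: the unauthenticated integration API port 8096 is enabled on the Management Server, the zone, pod, and cluster IDs have already been looked up (for example with listZones, listPods, and listClusters), and Hyperv is the hypervisor value expected for Hyper-V hosts.

# curl -s "http://<management-server-ip>:8096/client/api?command=addHost&zoneid=<zone-id>&podid=<pod-id>&clusterid=<cluster-id>&hypervisor=Hyperv&url=http://<host-dns-name-or-ip>&username=<host-username>&password=<host-password>&response=json"

The url, username, and password parameters correspond to the Host Name, Username, and Password fields in the Add Host panel described above.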


6.5. Adding Primary Storage
Warning
When using preallocated storage for primary storage, be sure there is nothing on the storage (for example, you have an empty SAN volume or an empty NFS share). Adding the storage to CloudPlatform
will destroy any existing data.

When you create a new zone, the first primary storage is added as part of that procedure. You can
add primary storage servers at any time, such as when adding a new cluster or adding more servers
to an existing cluster.
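Before adding more primary storage it can be useful to review what the zone already has; a quick way to do that outside the UI is the listStoragePools API call sketched below, again assuming the integration API port 8096 is enabled and the zone ID is a placeholder.

# curl -s "http://<management-server-ip>:8096/client/api?command=listStoragePools&zoneid=<zone-id>&response=json"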
1. Log in to the CloudPlatform UI.
2. In the left navigation, choose Infrastructure. In Zones, click View All, then click the zone in which
you want to add the primary storage.
3. Click the Compute and Storage tab.
4. In the Primary Storage node of the diagram, click View All.
5. Click Add Primary Storage.
6. Provide the following information in the dialog. The information required varies depending on your
choice in Protocol.
• Scope: Indicate whether the storage is available to all hosts in the zone or only to hosts in a
single cluster.
• Pod: (Visible only if you choose Cluster in the Scope field.) The pod for the storage device.
• Cluster: (Visible only if you choose Cluster in the Scope field.) The cluster for the storage
device.
• Name: The name of the storage device.
• Protocol: Select SMB/CIFS.
• Path (for SMB/CIFS): The exported path from the server.
• SMB Username: Applicable only if you select SMB/CIFS provider. The username of the
account which has the necessary permissions to the SMB shares. The user must be part of the
Hyper-V administrator group.
• SMB Password: Applicable only if you select SMB/CIFS provider. The password associated
with the account.
• SMB Domain: Applicable only if you select SMB/CIFS provider. The Active Directory domain
that the SMB share is a part of.
• SR Name-Label (for PreSetup): Enter the name-label of the SR that has been set up outside
CloudPlatform.

• Target IQN (for iSCSI): In iSCSI this is the IQN of the target. For example,
iqn.1986-03.com.sun:02:01ec9bb549-1271378984
• Lun # (for iSCSI): In iSCSI this is the LUN number. For example, 3.
• Tags (optional): The comma-separated list of tags for this storage device. It should be an
equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if
cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must
also provide primary storage that has tags T1 and T2.
7. Click OK.
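A mistyped SMB username, password, or Active Directory domain in the dialog above only surfaces as an error when CloudPlatform tries to mount the share, so it can save time to verify the share and credentials from a Linux machine first. The commands below are a sketch using the standard smbclient tool (part of the samba-client package mentioned earlier in this chapter); the server, share, domain, and user names are placeholders.

# smbclient -L //<smb-server> -U '<AD-domain>\<smb-username>'
# smbclient //<smb-server>/<share> -U '<AD-domain>\<smb-username>'

The first command lists the shares the account can see; the second connects to the share itself and should leave you at an smb: \> prompt if the credentials and permissions are correct.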

6.6. Adding Secondary Storage
Note
Be sure there is nothing stored on the server. Adding the server to CloudPlatform will destroy any
existing data.

When you create a new zone, the first secondary storage is added as part of that procedure. You can
add secondary storage servers at any time to add more servers to an existing zone.
1. To prepare for the zone-based Secondary Storage, you should have created and mounted an
NFS share during Management Server installation.
2. Make sure you prepared the system VM template during Management Server installation.
3. Log in to the CloudPlatform UI as root administrator.
4. In the left navigation bar, click Infrastructure.
5. In Secondary Storage, click View All.
6. Click Add Secondary Storage.
7. Fill in the following fields:
• Name: Give the storage a descriptive name.
• Provider: Select SMB as the storage provider. Depending on the provider that you selected,
additional fields will appear.

Warning
You can use only a single region-wide object storage account per region.


Click Add Instance and follow the steps in the wizard. f. 88 . you would have only one option here. Initialize and Test After everything is configured. Verify that the system is ready. In the template selection.Chapter 6. Click Launch VM. The password associated with the account. b. You can watch the VM’s progress in the Instances screen. 3. In a trial installation. depending on the speed of your network. 2. a. To use the VM. • SMB Password: Applicable only if you select SMB/CIFS provider. The exported path from the server. If this is a fresh installation. Your VM will be created and started. 6. Go to the Instances tab. Provisioning Your Cloud Infrastructure on Hyper-V • Zone: Select the zone where you want to create secondary storage • Server. g. In default network. In the left navigation bar. the administrator's Dashboard should be displayed in the CloudPlatform UI. c. CloudPlatform will perform its initialization. select Templates. and move a VM from one host to another. Use any descriptive text you would like. The user must be part of the Hyper-V administrator group. 4. Click the template.7. Select a service offering. add another data disk. It might take some time to download the template and complete the VM startup. and delete VMs. click the View Console button. • Path. Check to be sure that the status is "Download Complete. • SMB Username: Applicable only if you select SMB/CIFS provider. The IP address or DNS name of the storage server. choose the template to use in the VM. Be sure that the hardware you have allows starting the selected service offering. likely only the provided CentOS template is available. The Active Directory domain that the SMB share is a part of. Optionally give your VM a name and a group." Do not proceed to the next step until this status is displayed. For more information about using VMs. see Working With Virtual Machines in the Administrator’s Guide. Choose the zone you just added. A reboot is not required if you have a PV-enabled OS kernel in use. if desired. This is a second volume that will be available to but not mounted in the guest. • SMB Domain: Applicable only if you select SMB/CIFS provider. This can take 30 minutes or more. 1. and filter by My Instances. e. choose the primary network for the guest. In data disk offering. including instructions for how to allow incoming network traffic to the VM. d. When the initialization has completed successfully. start. The username of the account which has the necessary permissions to the SMB shares. stop.

Congratulations! You have successfully completed a CloudPlatform Installation.
If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters.


Chapter 7. (Experimental Feature) Provisioning Your Cloud Infrastructure on LXC
This section describes how to add zones, pods, clusters, hosts, storage, and networks to your cloud using an LXC hypervisor.
For conceptual information about zones, pods, clusters, hosts, storage, and networks in CloudPlatform, refer to CloudPlatform (powered by CloudStack) Version 4.5 Concepts Guide.

7.1. Overview of Provisioning Steps
After installing Management Server, you can access the CloudPlatform UI and add the compute resources for CloudPlatform to manage. Then, you can provision the cloud infrastructure, or scale the cloud infrastructure up at any time.
After you complete provisioning the cloud infrastructure, you will have a deployment with the following basic structure:
For information on adding a region to your cloud infrastructure, refer to Chapter 3 Adding Regions to Your Cloud Infrastructure (optional) of CloudPlatform (powered by CloudStack) Version 4.5 Administration Guide.

7.2. Adding a Zone
Adding a zone consists of three phases:

• Create a secondary storage mount point for the zone.
• Seed the system VM template on the secondary storage.
• Add the zone.

7.2.1. Creating a Secondary Storage Mount Point for the Zone
To ensure that you deploy the latest system VMs in a new zone, you must seed the latest system VM template to the secondary storage of the zone. For this, you must first create a mount point for the secondary storage. Then, you can seed the latest system VM template to the secondary storage.
1. On Management Server, create a mount point for secondary storage. For example:
# mkdir -p /mnt/secondary
2. Mount the secondary storage on your Management Server. Replace the NFS server name and NFS share paths in the following example with the server name and path that you use.
# mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
3. Seed the secondary storage with the latest template that is used for CloudPlatform system VMs. For more information about seeding the secondary storage, refer to the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Installation Guide.
After you seed the secondary storage with the latest system VM template, continue with adding the new zone.

7.2.2. Adding a New Zone
To add a zone, you must first configure the zone's physical network. Then, you need to add the first pod, cluster, host, primary storage, and secondary storage.

Note
Citrix strongly recommends using the same type of hypervisors in a zone to avoid operational issues.

Note
Before you proceed with adding a new zone, you must ensure that you have performed the steps to seed the system VM template.

1. Log in to the CloudPlatform UI using the root administrator account.
2. In the left navigation bar, click Infrastructure.
3. In the right side panel, under Zones, click View all.

4. In the next page, click Add Zone. The Add zone wizard panel appears.
5. Select one of the following network types:
• Basic: Provides a single network where each VM instance is assigned an IP directly from the network. You can provide guest isolation through layer-3 means such as security groups (IP address source filtering).
• Advanced: Used for more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall, VPN, or load balancer support.
For more information, refer to Chapter 4, Cloud Infrastructure Concepts of the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Concepts Guide.
6. Based on the option (Basic or Advanced) that you selected, do one of the following:
• For Basic: Section 7.2.2.1, “Basic Zone Configuration”
• For Advanced: Section 7.2.2.2, “Advanced Zone Configuration”

7.2.2.1. Basic Zone Configuration
1. After you select Basic in the Add Zone wizard and click Next, you will be asked to enter the following details. Specify the details, then click Next.
• Name: A name for the zone.
• DNS 1 and 2: These are DNS servers for use by guest VMs in the zone. These DNS servers will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
• Internal DNS 1 and Internal DNS 2: These are DNS servers for use by system VMs in the zone (these are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary Storage VMs). These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.
• Hypervisor: Choose LXC for the first cluster in the zone.
• Network Offering: Your choice here determines what network services will be available on the network for guest VMs.
• DefaultSharedNetworkOfferingWithSGService: If you want to enable security groups for guest traffic isolation, choose this. See Using Security Groups to Control Traffic to VMs.
• DefaultSharedNetworkOffering: If you do not need security groups, choose this.
• DefaultSharedNetscalerEIPandELBNetworkOffering: If you have installed a Citrix NetScaler appliance as part of your zone network, and you will be using its Elastic IP and Elastic Load Balancing features, choose this. With the EIP and ELB features, a basic zone with security groups enabled can offer 1:1 static NAT and load balancing.

• Network Domain: (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
• Dedicated: Select this checkbox to indicate that this is a dedicated zone. You must enter the Domain and account details to which the zone is assigned. Only users in that domain will be allowed to use this zone.
• Enable Local storage for User VMs: Select to use local storage for provisioning the storage used for User VMs.
• Enable Local Storage for System VMs: Select to use local storage for provisioning the storage used for System VMs.
2. Choose which traffic types will be carried by the physical network. The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips, or see Basic Zone Network Traffic Types. This screen starts out with some traffic types already assigned. To add more, drag and drop traffic types onto the network. You can also change the network name if desired.
3. Assign a network traffic label to each traffic type on the physical network. These labels must match the labels you have already defined on the hypervisor host. These traffic labels will be defined only for the hypervisor selected for the first cluster. To assign each label, click the Edit button under the traffic type icon. A popup dialog appears where you can type the label, then click OK.
4. Click Next.
5. (NetScaler only) If you chose the network offering for NetScaler, you have an additional screen to fill out. Provide the requested details to set up the NetScaler, then click Next.
• IP address: The NSIP (NetScaler IP) address of the NetScaler device.
• Username/Password: The authentication credentials to access the device. CloudPlatform uses these credentials to access the device.
• Type: The NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the types, see About Using a NetScaler Load Balancer.
• Public interface: Interface of NetScaler that is configured to be part of the public network.
• Private interface: Interface of NetScaler that is configured to be part of the private network.
• Number of retries: Number of times to attempt a command on the device before considering the operation failed. Default is 2.
• Capacity: Number of guest networks/accounts that will share this NetScaler device.
• Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.

6. (NetScaler only) Configure the IP range for public traffic. The IPs in this range will be used for the static NAT capability which you enabled by selecting the network offering for NetScaler with EIP and ELB. Enter the following details, then click Add. If desired, you can repeat this step to add more IP ranges. When done, click Next.
• Gateway: The gateway in use for these IP addresses.
• Netmask: The netmask associated with this IP range.
• VLAN: The VLAN that will be used for public traffic.
• Start IP/End IP: A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest VMs.
7. In a new zone, CloudPlatform adds the first pod for you. You can always add more pods later. To configure the first pod, enter the following, then click Next:
• Pod Name: A name for the pod.
• Reserved system gateway: The gateway for the hosts in that pod.
• Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
• Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
8. Configure the network for guest traffic. Enter the following, then click Next:
• Guest gateway: The gateway that the guests should use.
• Guest netmask: The netmask in use on the subnet the guests will use.
• Guest start IP/End IP: Enter the first and last IP addresses that define a range that CloudPlatform can assign to guests.
• We strongly recommend the use of multiple NICs. If multiple NICs are used, they may be in a different subnet.
• If one NIC is used, these IPs should be in the same CIDR as the pod CIDR.
9. In a new pod, CloudPlatform adds the first cluster for you. You can always add more clusters later. For an overview of what a cluster is, see About Clusters. To configure the first cluster, enter the following, then click Next:
• Hypervisor: The type of hypervisor software that all hosts in this cluster will run.
• Cluster name: Enter a name for the cluster. This can be text of your choosing and is not used by CloudPlatform.
10. In a new cluster, CloudPlatform adds the first host for you. You can always add more hosts later. For an overview of what a host is, see About Hosts.

Note
When you add a hypervisor host to CloudPlatform, the host must not have any VMs that are already running. Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. For more information, refer to the Chapter 6 Configuring LXC for CloudPlatform section in the CloudPlatform (powered by Apache CloudStack) Version 4.5 Hypervisor Configuration Guide.

To configure the first host, enter the following, then click Next:
• Host Name: The DNS name or IP address of the host.
• Username: The username is root.
• Password: This is the password for the user named above, from your LXC install.
• Host Tags: (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag, set in the ha.tag global configuration parameter, if you want this host to be used only for VMs with the high availability feature enabled. For more information, refer to the HA-Enabled Virtual Machines and the HA for Hosts sections in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.
11. In a new cluster, CloudPlatform adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see About Primary Storage. To configure the first primary storage server, enter the following, then click Next:
• Name: The name of the storage device.
• Protocol: For LXC, choose NFS or SharedMountPoint. The remaining fields in the screen vary depending on what you choose here.

Note
For Data disk, only RBD storage is supported on LXC. For root volume, use NFS or local storage.

7.2.2.2. Advanced Zone Configuration
1. After you select Advanced in the Add Zone wizard and click Next, you can enter the following details, then click Next.

• Name: A name for the zone.
• IPv4 DNS 1 and IPv4 DNS 2: These are DNS servers for use by guest VMs in the zone. These DNS servers will be accessed via the public network that you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
• Internal DNS 1 and Internal DNS 2: These are DNS servers for use by system VMs in the zone. These are the VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary Storage VMs. These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.
• Hypervisor: Choose LXC as the hypervisor for the first cluster in the zone.
• Guest CIDR: This is the CIDR that describes the IP addresses in use in the guest virtual networks in this zone. For example, 10.1.1.0/24. As a matter of good practice you should set different CIDRs for different zones. This will make it easier to set up VPNs between networks in different zones.
• Network Domain: (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
• Dedicated: Select this checkbox to indicate that this is a dedicated zone. You must enter the Domain and account details to which the zone is assigned. Only users in that domain will be allowed to use this zone.
• Enable Local storage for User VMs: Select to use local storage for provisioning the storage used for User VMs.
• Enable Local Storage for System VMs: Select to use local storage for provisioning the storage used for System VMs.
2. Select the traffic type that the physical network will carry and click Next. The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips. This screen displays that one network has been already configured. If you have multiple physical networks, you need to add traffic types to them. Drag and drop traffic types onto a greyed-out physical network to make it active. You can move the traffic type icons from one network to another; for example, if the default traffic types shown for Network 1 do not match your actual setup, you can move them down. You can also change the network names if desired.
3. Click the Edit button under each traffic type icon that you added to the physical network to assign a network traffic label to it. These labels must match the labels you have already defined on the hypervisor host. These traffic labels will be defined only for the hypervisor selected for the first cluster. The Edit traffic type dialog appears where you can enter the label and click OK.
4. Click Next.
5. Configure the IP range for the public (Internet) traffic. Enter the following details and click Add. If desired, you can repeat this step to add more public Internet IP ranges. Click Next.
• Gateway: The gateway in use for these IP addresses.
• Netmask: The netmask associated with this IP range.

• VLAN: The VLAN that will be used for public traffic.
• Start IP/End IP: A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks.
6. In a new zone, CloudPlatform adds the first pod. You can always add more pods later. To configure the first pod, enter the following and click Next:
• Pod Name: A name that you can use to identify the pod.
• Reserved system gateway: The gateway for the hosts in that pod.
• Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
• Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.
• Specify a range of VLAN IDs to carry guest traffic for each physical network and click Next.
• In the new pod, CloudPlatform adds the first cluster. You can always add more clusters later. To configure the first cluster, enter the following and click Next:
• Hypervisor: The type of hypervisor software that all hosts in this cluster will run.
• Cluster name: Enter a name for the cluster.
• In a new cluster, CloudPlatform adds the first host. You can always add more hosts later. To configure the first host, enter the following, then click Next:

Note
When you deploy CloudPlatform, the hypervisor host must not have any VMs running on it. Before you configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. For more information, refer to the Chapter 4 Configuring LXC for CloudPlatform section in the CloudPlatform (powered by Apache CloudStack) Version 4.5 Hypervisor Configuration Guide.

• Host Name: The DNS name or IP address of the host.
• Username: Usually root.
• Password: This is the password associated with the user name, from your LXC install.
• Host Tags: (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, refer to the HA-Enabled Virtual Machines and the HA for Hosts sections in the CloudPlatform (powered by Apache CloudStack) Version 4.5.1 Administration Guide.

• In a new cluster, CloudPlatform adds the first primary storage server. You can always add more servers later. To configure the first primary storage server, enter the following and click Next:
• Name: The name of the storage device.
• Protocol: For LXC, choose NFS, SharedMountPoint, or RBD (Ceph).

Note
For Data disk, only RBD storage is supported on LXC. For root volume, use NFS or local storage.

RBD
• Provider: Select a provider of the RBD storage.
• RADOS Monitor: Specify the IP address of the RADOS Monitoring Daemon server. The RADOS Monitor is responsible for handling communication with all external applications, and handles the decision making and health check of the RBD cluster. Ideally, three Monitor daemons should be run on three separate physical machines, isolated from each other, for example, in different racks.
• RADOS Pool: Specify the RADOS pool. A RADOS pool represents individual fragments of the object store. It is a logical layer of binary data tagged as a pool. Pools allow configurations in which individual users can only access specific pools. The pools data, metadata, and rbd are available only in the default configuration.
• RADOS User: Specify the RADOS user you created. This is the user with the necessary permissions to access the RADOS pool for CloudPlatform. Although you could use the client.admin credentials, it is recommended to create a user with access only to the CloudPlatform pool.
• RADOS Secret: The secret pass code associated with the RADOS user you specified above.

• Tags (optional): The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
NFS
• Server: The IP address or DNS name of the storage device.
• Path: The exported path from the server.
• Tags (optional): The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
SharedMountPoint
• Path: The path on each host where this primary storage is mounted. For example, "/mnt/primary".
• Tags (optional): The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
• In a new zone, CloudPlatform adds the first secondary storage server. Before you enter information in the fields on this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudPlatform System VM template. To configure the first secondary storage server, enter the following and click Next:
• NFS Server: The IP address of the server.
• Path: The exported path from the server.
• Click Launch.

7.3. Adding a Pod
After you create a new zone, CloudPlatform adds the first pod to it. You can perform the following procedure to add more pods to the zone at any time.
1. Log in to the CloudPlatform UI.

2. In the left navigation bar, click Infrastructure.
3. In the right-side panel, under Zones, click View all.
4. In the page that lists the zones that you have configured, click the zone where you want to add a pod.
5. In the details page of the zone, click the Compute and Storage tab.
6. At the Pods node in the diagram, click View all.
7. In the page that lists the pods configured in the zone that you selected, click Add Pod.
8. In the Add Pod dialog box, enter the following details:
• Zone: Select the name of the zone where you want to add the new pod.
• Pod name: The name that you can use to identify the pod.
• Reserved system gateway: The gateway for the hosts in that pod.
• Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
• Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
• Dedicate: Select if you want to dedicate the pod to a specific domain.
• Domain: Select the domain to which you want to dedicate the pod.
• Account: Enter an account name that belongs to the above selected domain so that you can dedicate the pod to this account.
9. Click OK.

7.4. Adding an LXC Cluster
You need to tell CloudPlatform about the hosts that it will manage. Hosts exist inside clusters, so before you begin adding hosts to the cloud, you must add at least one cluster. Before you perform the following steps, you must have installed the hypervisor on the hosts and logged in to the CloudPlatform UI.
1. In the CloudPlatform UI, in the left navigation bar, select Infrastructure.
2. In the right-side panel, under Zones, click View all.
3. In the page that lists the zones that you configured, click the zone where you want to add the cluster.
4. In the details page of the zone, click the Compute and Storage tab.
5. At the Clusters node of the diagram, click View all.
6. In the page that lists the clusters, click Add Cluster.
7. In the Add Cluster dialog box, do the following and click OK:
• Zone Name: Select the zone where you want to create the cluster.

• Hypervisor: Select LXC as the hypervisor for this cluster.
• Pod Name: Select the pod where you want to create the cluster.
• Cluster Name: Enter a name that you can use to identify the cluster.
• Dedicate: Select if you want to dedicate the cluster to a specific domain.
• Domain: Select the domain to which you want to dedicate the cluster.
• Account: Enter an account name that belongs to the domain so that you can dedicate the cluster to this account.

7.4.1. (Experimental Feature) Adding an LXC Host
LXC hosts can be added to a cluster at any time.

7.4.1.1. Requirements for LXC Hosts
Configuration requirements:
• Make sure that the host does not have any running VMs before you add it to CloudPlatform.
• Each cluster must contain only hosts with the identical hypervisor.
• Do not put more than 16 hosts in a cluster.
• Make sure the new host has the same network configuration (guest, private, and public network) as other hosts in the cluster.
• If shared mountpoint storage is in use, the administrator should ensure that the new host has all the same mountpoints (with storage mounted) as the other hosts in the cluster.
For hardware requirements, see the installation section for your hypervisor in the CloudPlatform Installation Guide.

(Optional) Enabling RBD-Based Primary Storage on LXC: If you want to use RBD-based data disks on LXC, perform the following on each VM (container):
1. Configure the Ceph repository. For more information, see Get Ceph Packages (http://docs.ceph.com/docs/v0.80.5/install/get-packages/).
2. Install the RBD-enabled libvirt 1.2.1 packages. For more information, see http://ceph.com/docs/master/rbd/rbd-cloudstack/.
3. Install the Ceph and RBD kernel module.
# yum install ceph kmod-rbd
4. Verify that the RBD modules are installed.

# modprobe rbd
5. From the ceph monitor node, copy ceph.conf and the ceph admin key to the /etc/ceph directory.

7.4.1.2. Adding an LXC Host
1. Log in to the CloudPlatform UI as administrator.
2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the host.
3. Click the Compute tab. In the Clusters node, click View All.
4. Click the cluster where you want to add the host.
5. Click View Hosts.
6. Click Add Host.
7. Provide the following information.
• Host Name: The DNS name or IP address of the host.
• Username: Usually root.
• Password: This is the password for the user named above, from your LXC install.
• Host Tags (Optional): Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag, which is set in the ha.tag global configuration parameter, if you want this host to be used only for VMs (containers) with the high availability feature enabled. For more information, see the HA-Enabled Virtual Machines and HA for Hosts sections.
• Dedicate: When marked as dedicated, this host is dedicated to a single account that you select.
8. There may be a slight delay while the host is provisioned. It should automatically be displayed in the UI.
9. Repeat these steps for adding additional hosts.

7.5. Adding Primary Storage

Warning
When using preallocated storage for primary storage, be sure there is nothing on the storage. For example, you have an empty SAN volume or an empty NFS share. Adding the storage to CloudPlatform will destroy any existing data.

When you create a new zone, the first primary storage is added as part of that procedure. You can add primary storage servers at any time, such as when adding a new cluster or adding more servers to an existing cluster.
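If you plan to register RBD-based primary storage in the steps that follow, the RADOS pool and the restricted RADOS user that the dialog asks for are created on the Ceph side first. The commands below are a minimal sketch rather than text from this guide; the pool name cloudplatform and the placement-group count are placeholder assumptions for your own Ceph cluster:
# ceph osd pool create cloudplatform 64
# ceph auth get-or-create client.cloudplatform mon 'allow r' osd 'allow rwx pool=cloudplatform'
The second command prints the generated key, which is the value you would supply in the RADOS Secret field.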

1. Log in to the CloudPlatform UI.
2. In the left navigation, choose Infrastructure. In Zones, click View All, then click the zone in which you want to add the primary storage.
3. Click the Compute and Storage tab.
4. In the Primary Storage node of the diagram, click View All.
5. Click Add Primary Storage.
6. Provide the following information in the dialog. The information required varies depending on your choice in Protocol.
• Scope: Indicate whether the storage is available to all hosts in the zone or only to hosts in a single cluster.
• Pod: Visible only if you choose Cluster in the Scope field. The pod for the storage device.
• Cluster: Visible only if you choose Cluster in the Scope field. The cluster for the storage device.
• Name: The name of the storage device.
• Protocol: For LXC, choose NFS, RADOS Block Devices (RBD), or SharedMountPoint.

Note
For Data disk, only RBD storage is supported on LXC. For root volume, use NFS or local storage.

• Server: (NFS) The IP address or DNS name of the storage device.
• Path: (NFS) The exported path from the server.
• Path: (SharedMountPoint) On LXC, this is the path on each host where this primary storage is mounted. For example, "/mnt/primary".
• Provider: (RBD) Select a provider of the RBD storage.
• RADOS Monitor: (RBD) Specify the IP address of the RADOS Monitoring Daemon server. The RADOS Monitor is responsible for handling communication with all external applications, and handles the decision making and health check of the RBD cluster. Ideally, three Monitor daemons should be run on three separate physical machines, isolated from each other, for example, in different racks.
• RADOS Pool: (RBD) Specify the RADOS pool. A RADOS pool represents individual fragments of the object store. It is a logical layer of binary data tagged as a pool. Pools allow configurations in which individual users can only access specific pools. The pools data, metadata, and rbd are available only in the default configuration.
• RADOS User: (RBD) Specify the RADOS user you created.

This is the user with the necessary permissions to access the RADOS pool for CloudPlatform. Although you could use the client.admin credentials, it is recommended to create a user with access only to the CloudPlatform pool.
• RADOS Secret: (RBD) The secret pass code associated with the RADOS user you specified above.
• Tags (optional): The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
7. Click OK.

7.6. Adding Secondary Storage

Note
Be sure there is nothing stored on the server. Adding the server to CloudPlatform will destroy any existing data.

When you create a new zone, the first secondary storage is added as part of that procedure. You can add secondary storage servers at any time to add more servers to an existing zone.
1. To prepare for the zone-based Secondary Storage, you should have created and mounted an NFS share during Management Server installation.
2. Make sure you prepared the system VM template during Management Server installation.
3. Log in to the CloudPlatform UI as root administrator.
4. In the left navigation bar, click Infrastructure.
5. In Secondary Storage, click View All.
6. Click Add Secondary Storage.
7. Fill in the following fields:
• Name: Give the storage a descriptive name.
• Provider: Choose the type of storage provider.
• Create NFS Secondary Storage: Be sure this box is checked, unless the zone already contains a secondary staging store.

Warning
If you are setting up a new zone, be sure the box is checked. This checkbox and the three fields below it must be filled in.

• Zone: The zone where the NFS Secondary Storage is to be located.
• NFS server: The name of the zone's Secondary Storage.
• Path: The path to the zone's Secondary Storage.

7.6.1. Adding an NFS Secondary Storage for Each Zone
You can skip this section if you are upgrading an existing zone from NFS to object storage. You only need to perform the steps below when setting up a new zone that does not yet have its NFS server. Every zone must have at least one NFS store provisioned; multiple NFS servers are allowed per zone.
To provision an NFS Staging Store for a zone:
1. To prepare for the zone-based Secondary Storage, you should have created and mounted an NFS share during Management Server installation.
2. Make sure you prepared the system VM template during Management Server installation.
3. Log in to the CloudPlatform UI as root administrator.
4. In the left navigation bar, click Infrastructure.
5. In Secondary Storage, click View All.
6. In Select View, choose Secondary Storage.
7. Click the Add NFS Secondary Storage button.
8. Fill out the dialog box fields, then click OK:
• Zone: The zone where the NFS Secondary Storage is to be located.
• Server: The IP address or DNS name of the storage device.
• Path: The exported path from the server.

7.6.2. Upgrading from NFS to Object Storage
In an existing zone that is using NFS for secondary storage, you can upgrade the zone to use a region-wide object storage without causing downtime. The existing NFS storage in the zone will be converted to an NFS Staging Store.

1. Log in as admin to the CloudPlatform UI.
2. Locate the secondary storage that you want to upgrade to object storage. Perform either of the following in the Infrastructure page:
• In Zones, click View All, then locate the desired zone, and select Secondary Storage in the Compute and Storage tab.
• In Secondary Storage, click View All, then select the desired secondary storage.
3. Fire an admin API to update CloudPlatform to use object storage:
http://<MGMTIP>:8096/client/api?command=updateCloudToUseObjectStore&name=<S3 storage name>&provider=S3&details[0].key=accesskey&details[0].value=<access key from .s3cfg file>&details[1].key=secretkey&details[1].value=<secretKey from .s3cfg file>&details[2].key=bucket&details[2].value=<bucketname>&details[3].key=usehttps&details[3].value=<trueorfalse>&details[4].key=endpoint&details[4].value=<S3 server IP:8080>
All existing NFS secondary storages have been converted to NFS staging stores for each zone, and the S3 object store specified in the command has been added as a new region-wide secondary storage.
After the upgrade, all newly created templates, ISOs, volumes, and snapshots are moved to the object store. All previously created templates, ISOs, volumes, and snapshots are migrated on an on-demand basis based on when they are accessed, rather than as a batch job. Unused objects in the NFS staging store are garbage collected over time.
Post migration, consider the following:
• For each new snapshot taken after migration, ensure that you take a full snapshot to the newly added S3. This would help coalesce delta snapshots across the NFS and S3 stores. The snapshots taken before migration are pushed to the S3 store when you try to use such a snapshot, by executing createVolumeCmd and passing the snapshot id.
• You can deploy VMs from templates because they are already in the NFS staging store. A copy of the template is generated in the new S3 object store when ExtractTemplate or CopyTemplate is executed.
• For volumes, a copy of the volume is generated in the new S3 object store when the ExtractVolume command is executed.
• Not all the items in the NFS storage are migrated to S3. Therefore, if you want to completely shut down the NFS storage you have previously used, write your own script to migrate those remaining items to S3.

7.7. Initialize and Test
After you complete all the configuration, CloudPlatform starts the initialization. This can take 30 minutes or more, depending on the speed of your network. When the initialization has completed successfully, the administrator's dashboard should be displayed in the CloudPlatform UI.
1. Verify that the system is ready. In the left navigation bar, click Templates. Click the template. Check to be sure that the status is "Download Complete." Do not proceed to the next step until this status is displayed.

2. Go to the Instances tab, and filter by My Instances.
3. Click Add Instance and follow the steps in the wizard.
a. Choose the zone you just added.
b. In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available.
c. Select a service offering. Be sure that the hardware you have allows starting the selected service offering.
d. In data disk offering, if desired, add another data disk. This is a second volume that will be available to but not mounted in the guest. For example, in Linux on XenServer you will see /dev/xvdb in the guest after rebooting the VM. A reboot is not required if you have a PV-enabled OS kernel in use.
e. In default network, choose the primary network for the guest. In a trial installation, you would have only one option here.
f. Optionally give your VM a name and a group. Use any descriptive text you would like.
g. Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup. You can watch the progress in the Instances screen.
4. To use the VM, click the View Console button.
For more information about using VMs, including instructions for how to allow incoming network traffic to the VM, start, stop, and delete VMs, and move a VM from one host to another, see Working With Virtual Machines in the Administrator's Guide.
Congratulations! You have successfully completed a CloudPlatform Installation.
If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters.
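The smoke test in Section 7.7 can also be scripted instead of run through the wizard. The call below is only a sketch: the IDs are placeholders that you would first look up with the listZones, listTemplates, and listServiceOfferings APIs, and it assumes the unauthenticated integration port (8096) that the object storage upgrade example above also uses:
http://<MGMTIP>:8096/client/api?command=deployVirtualMachine&zoneid=<zone-id>&templateid=<template-id>&serviceofferingid=<service-offering-id>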

Chapter 8. Provisioning Cloud Infrastructure on Baremetal
This section describes how to add a host and networks to your cloud using the Baremetal hypervisor. Note that a Baremetal host requires a VMware advanced zone up and running.
For conceptual information about zones, pods, clusters, hosts, primary storage, secondary storage, and networks in CloudPlatform, refer to CloudPlatform (powered by CloudStack) Version 4.5 Concepts Guide.

8.1. Adding a Zone
Adding a zone consists of three phases:
• Consider the Baremetal limitations.
• Ensure that you meet the prerequisites.
• Add the zone.

8.1.1. Limitations of Baremetal Deployment
When this feature is used, the following are not supported:
• CloudPlatform storage concepts: primary storage, secondary storage, volume, snapshot
• System VMs: SSVM, CPVM, VR
• Template copy or template download
• VM migration (if host is placed under maintenance mode)
• Live migration
• High availability
• Values from CPU and Memory are not honoured. Use host tag.
• Multiple NICs
• A stopped VM (the OS running on host) can only start on the host it was most recently on
• Console proxy view
• Dedicated resources: cluster and host
• Affinity and anti-affinity group
• Compute offering for Baremetal honours only BaremetalPlanner.

8.1.2. Adding a New Zone
To add a zone, you must first configure the zone's physical network. Then, you need to add the first pod, cluster, host, primary storage, and secondary storage.

Note
Before you proceed with adding a new zone, you must ensure that you have performed the steps to seed the system VM template.

Note
Citrix strongly recommends using the same type of hypervisors in a zone to avoid operational issues.

1. Log in to the CloudPlatform UI using the root administrator account.
2. In the left navigation bar, click Infrastructure.
3. In the right side panel, under Zones, click View all.
4. In the next page, click Add Zone. The Add zone wizard panel appears.
5. Select one of the following network types:
• Basic: (For AWS-style networking.) Provides a single network where each VM instance is assigned an IP directly from the network. You can provide guest isolation through layer-3 means such as security groups (IP address source filtering).
• Advanced: Used for more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall, VPN, or load balancer support.
For more information, refer to Chapter 4, Cloud Infrastructure Concepts of the CloudPlatform (powered by Apache CloudStack) Version 4.5 Concepts Guide.
6. Based on the option (Basic or Advanced) that you selected, do one of the following:
• For Basic: Section 8.1.2.1, “Configuring Basic Zone for Baremetal”
• For Advanced: Section 8.1.2.2, “Configuring Advanced Networking in Baremetal”

8.1.2.1. Configuring Basic Zone for Baremetal
Follow the steps in all the following sections in order.
1. Section 8.1.2.1.1, “Prerequisites for Configuring a Baremetal Basic Zone”
2. Section 8.1.2.1.2, “Creating a Basic Zone”

8.1.2.1.1. Prerequisites for Configuring a Baremetal Basic Zone
To set up a basic networking zone in a Baremetal cloud, you must first create a network offering and, using it, create a shared network. Security groups are supported, for which you install the Security group agent. Not all network services are supported in a basic zone. For more information, see the CloudPlatform Hypervisor Configuration Guide. Ensure that you have installed the PXE and DHCP Servers.
• Section 8.1.2.1.1.1, “Adding the PXE Server and DHCP Server to Your Deployment”
• Section 8.1.2.1.1.2, “Creating a Baremetal Network Offering”
• Section 8.1.2.1.1.3, “Setting Up the Security Group Agent (Optional)”

8.1.2.1.1.1. Adding the PXE Server and DHCP Server to Your Deployment
As part of describing your deployment to CloudPlatform, you will need to add the PXE server and DHCP servers.

Note
In a Bare Metal basic zone, if the PXE and DHCP servers are on different hosts, the administrator must manually configure the DHCP server (dhcpd.conf) to use "next-server" as the PXE server.

1. Log in as admin to the CloudPlatform UI.
2. In the left navigation, choose Infrastructure. In Zones, click View All, then click the zone in which you want to add the Baremetal PXE / DHCP server.
3. Click the Physical Network tab. Click the Physical Network entry.
4. In the Network node, click Network Service Providers Configure.
5. In the list of Network service providers, click Baremetal PXE. In the Details node, click Add Baremetal PXE Device. The Add Baremetal PXE Device dialog will appear. In the Add Baremetal PXE Device dialog, make the following choices:
• URL: http://<PXE DHCP server IP address>
• Username: login username
• Password: password
• Tftp root directory: /var/lib/tftpboot
6. In the list of Network service providers, click Baremetal DHCP. In the Details node, click the Add Baremetal DHCP Device button. The Add Baremetal DHCP Device dialog will appear. In the Add Baremetal DHCP Device dialog:
• URL: http://<PXE DHCP server IP address>

• Username: login username
• Password: password

8.1.2.1.1.2. Creating a Baremetal Network Offering
1. Log in as admin to the CloudPlatform UI.
2. In the left navigation bar, click Service Offerings.
3. In Select Offering, choose Network Offering.
4. Click Add Network Offering.
5. In the dialog, make the following choices:
• Name: You can give the offering any desired name. For example, Baremetal.
• Guest Type: Shared
• Supported Services:
• DHCP checkbox: checked
• DHCP Provider: Baremetal
• User Data checkbox: checked
• User Data Provider: Baremetal
• Security Groups: checked
• BaremetalPxeServer: checked
• Additional choices in this dialog are described in "Creating a New Network Offering" in the Administrator's Guide.
6. Click OK.
7. Verify:
a. In the left navigation bar, click Service Offerings.
b. In the Select Offering dropdown, choose Network Offerings.
c. Click the name of the offering you just created, and check the details. In State, be sure the offering is Enabled. If not, click the Enable button.

8.1.2.1.1.3. Setting Up the Security Group Agent (Optional)
If you are not using security groups, you can skip this section.
If you plan to use security groups to control traffic to Baremetal instances, you need to install security group agent software on each Baremetal host. This involves downloading the software, making it available in an accessible repository, and modifying the kickstart file to go get this software during installation.
1. Download the agent software from the following link:

http://download.cloud.com/releases/4.5.0/security_group_agent-1.0-1.noarch.rpm
The agent software depends on several other RPMs:
• python-cherrypy: A Python HTTP server which is distributed by default with most Linux distributions. For example, both CentOS and Ubuntu have this package.
• ipset: An iptables tool which provides ipset match. In Ubuntu, ipset is provided by default. In CentOS, it is not provided by default; you need to download it from a third party. For example: http://www.wandin.net/dotclear/index.php?post/2012/05/26/Installing-ipset-on-CentOS-6
• libmnl: An ipset dependent library. It is usually available with the ipset rpm for downloading.
2. Create a repo where the kickstart installer can find the security group agent when it's time to install. Place the RPMs in a directory that is accessible by browser through HTTP. Run the following command:
# createrepo <path_to_rpms>
For example, if the RPMs are in the following directory:
/var/www/html/securitygroupagent/
The command would be:
createrepo /var/www/html/securitygroupagent/
The repo file will be created in /var/www/html/securitygroupagent/.
3. Add the security group agent package to the kickstart file that you are using for your Baremetal template. Make the following modifications in the kickstart file:
a. Add the repo that you created in the previous step. Insert the following command above the %package section. Substitute the desired repo name, IP address, and the directory in the base URL (if you didn't use the name securitygroupagent in the previous step).
repo --name=<repo_name> --baseurl=http://<ip_address>/securitygroupagent/
b. In the %package section of the kickstart file, add all the RPMs. For example:
%package
libmnl
ipset
python-cherrypy
security_group_agent
c. In the %post section, add the following:
%post
chkconfig iptables off
chkconfig cs-sgagent on
service cs-sgagent start

This will close iptables to flush the default iptables rules set by the OS (CloudPlatform does not need them), then set the security group agent to "on" and immediately start the agent.

8.1.2.1.2. Creating a Basic Zone
Your cluster(s) of Baremetal hosts must be organized into a zone. This zone can contain only Baremetal hosts. You can have one or more Baremetal zones in your cloud.
1. After you select Basic in the Add Zone wizard and click Next, you need to enter the following details. After you enter the details, click Next.
• Name: A name for the zone.
• DNS 1 and 2: These are DNS servers for use by guests in the zone. These DNS servers will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
• Hypervisor: Choose Baremetal.
• Network Offering: Choose the network offering you created in Section 8.1.2.1.1.2, “Creating a Baremetal Network Offering”.
• Network Domain: (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
• Dedicated: Select this checkbox to indicate that this is a dedicated zone. You must enter the Domain and account details to which the zone is assigned. Only users in that domain will be allowed to use this zone.
• Enable Local storage for User VMs: Select to use local storage for provisioning the storage used for User VMs.
• Enable Local Storage for System VMs: Select to use local storage for provisioning the storage used for System VMs.
2. Click Next.
3. In a new zone, CloudPlatform adds the first pod for you. You can always add more pods later. To configure the first pod, enter the following:
• Pod Name: A name for the pod.
• Reserved system gateway: The gateway for the hosts in that pod.
• Reserved system netmask: The network prefix that defines the pod's subnet. Use CIDR notation.
• Start/End Reserved System IP: The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.
4. Click Next, or OK. The UI will show a progress indicator.
Troubleshooting: After a few moments, if this indicator does not finish, click Refresh in the browser.

5. Be sure the zone is enabled:
a. In the left navigation bar, click Infrastructure.
b. In Zones, click View All.
c. Click the name of the zone you just created, and check the details. In Allocation State, be sure the zone is Enabled. If not, click the Enable button.

8.1.2.2. Configuring Advanced Networking in Baremetal
CloudPlatform supports Advanced networking capabilities for Baremetal guest VMs. A new plugin has been introduced in CloudPlatform to enable automatic VLAN programming on a physical switch to which Baremetal VMs are connected. Baremetal VMs gain VLAN isolation provided by CloudPlatform, which is particularly used to provide Baremetal As a Service with advanced networking capabilities for public clouds. Baremetal As a Service cannot function standalone; it works in conjunction with a physical switch, either through the vendor's SDK or through an in-switch agent for a whitebox switch.
The following networking capabilities are provided:
• SourceNAT
• VPN
• Load Balancing
• Firewall

8.1.2.2.1. Prerequisites and Guidelines
• Add the supported switch type and version that CloudPlatform currently supports. The following are the hardware switch details:
• Hardware Switch: Dell Force 10
• Switch Version: System Type S4810
• Dell Operating System Version: 2.0
• Dell Application Software Version: 9.5(0.1)
• Before adding to CloudPlatform, make sure that no VLANs are configured on the ports where baremetal hosts are connected to the physical switch.
• An Advanced zone with a VMware cluster is up and running. In an Advanced zone, only VMware is supported as the VR provider, because it provides the management NIC, which is needed for Baremetal to access the internal HTTP server that stores the kickstart file and installation ISO. The VR of XenServer/KVM uses a link local address for inter-communication and therefore cannot be used for Baremetal.
• User/Admin has to prepare kickstart files for the corresponding OS templates and prepare HTTP and NFS servers storing the kickstart files and ISO files.
• Rack configuration file with port and MAC information of all Baremetal hosts.
• In Baremetal, deploy a Baremetal instance before deploying any other instance to ensure that the VR is created on the VMware
host. For VR HA, continue using the same means of virtualization. The VR template size is increased
to 3 GB in order to store the kernel/initrd files of the bare metal template, and the VR uses the same
means of virtualization; that is, it has all VR capabilities.
• Baremetal uses VR to provide all network services, such as PXE/DHCP, StaticNAT, and Port
Forwarding.
• If the Baremetal host has multiple NICs, only the one NIC that is part of the same VLAN gets the IP
address; however, you can configure the kickstart file to set a static IP, or use a DHCP server outside
CloudPlatform, for the extra NICs.
• A Baremetal cluster is a simple aggregation of hosts that have similar hardware. We recommend
that you place the hosts of the same cluster in the same IPMI subnet. Connect all the guest NICs (the
NIC that plays the role of guest NIC after the guest OS is provisioned) to the same TOR switch.
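A quick manual way to confirm that the hosts of a cluster sit in the recommended common IPMI subnet and respond to IPMI is an ipmitool query. This check is not part of this guide; the address and credentials below are placeholders for your own environment, and ipmitool must be installed on the machine you run it from:
# ipmitool -I lanplus -H 10.1.2.21 -U admin -P password chassis status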

8.1.2.2.2. Known Limitations
• Concepts of CloudPlatform storage, including primary storage, secondary storage, volume, and
snapshot are not supported.
• Link Layer Discovery Protocol (LLDP) is not supported.
• IPv6 is not supported.
• Deploying more than 10 Baremetal instances concurrently is not supported.
• (Advanced zone) Do not configure description on the Force10 VLAN. If description is set on the
VLAN on Force10 switch, the XML document generated by the switch will have a <description>
element before <vlan-id> element, which is not recognized by the Force10 switch.
This issue is caused by a defect in the Force10 REST API parser. The parser requires <vlan-id>
to be the first item in the <vlan> body, otherwise it throws 'missing element: vlan-id in /ftos:ftos/
ftos:interface/ftos:vlan' error.

8.1.2.2.3. Deploying Baremetal Instances in an Advanced Zone
1. Set the IP address of the internal HTTP server in the baremetal.internal.storage.server.ip global
parameter.
This is the location where the kickstart file, kernel, and initrd for advanced networking baremetal
provisioning are stored.
If you want to provide a storage server IP address for each network, you can specify that IP
address in the Network Offering. This overrides the IP address that you set using the
baremetal.internal.storage.server.ip global configuration parameter. (An illustrative API call for
this step appears in the sketch after this procedure.)
2. Create the HTTP Rack Configuration Repo.
This is a text file, referred to as RCT, in JSON format that specifies the L2 switch identity and
the switch-host-port mapping. (A small illustrative sketch of such a file appears after this
procedure.) For more information, see HTTP Rack Configuration Repo in the CloudPlatform
Hypervisor Configuration Guide.
3. Specify the URL of the RCT file in the Global Settings.
a. Log in to the CloudPlatform UI.

b. From the left navigational bar, click Global Settings.
c. From the Select view drop-down, select Baremetal Rack Configuration.

d. Click Add Baremetal Rack Configuration.
The Add Baremetal Rack Configuration dialog is displayed.
e. Specify the URL of the location where the RCT configuration file is stored. For example:
http://10.20.20.10/baremetal/rct.json.

Note
CloudPlatform allows only one RCT file to be added. RCT is globally available to all
Baremetal zones. So, when you update the RCT file, you must ensure that the name of
the RCT file is not changed.

f. Click OK.

4. Create a compute offering.
For more information, see Section 8.1.2.2.4, “Create a Baremetal Compute Offering”.
5. Create a network offering with PXE and DHCP services, and VR as the service provider.
For more information, see Section 8.1.2.2.5, “Creating a Baremetal Network Offering”.
6. Create an isolated network with the above network offering.
For more information, see Section 8.1.2.2.6, “Creating an Isolated Network”.
7. Add a Baremetal cluster in a VMware Advanced zone.
8. Add a Baremetal host to the cluster.
9. Deploy Baremetal instances in the isolated network you have created.
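The sketches below illustrate steps 1 through 3 of the procedure above. They are not taken from this guide: the IP addresses and file contents are placeholders, the RCT field names should be checked against the HTTP Rack Configuration Repo section of the Hypervisor Configuration Guide, and the API calls assume the unauthenticated integration port (8096) used elsewhere in this guide.
Setting the internal storage server IP (step 1):
http://<MGMTIP>:8096/client/api?command=updateConfiguration&name=baremetal.internal.storage.server.ip&value=10.20.20.10
An illustrative rct.json with one rack, one switch, and one host (step 2):
{
  "racks": [
    {
      "l2Switch": {
        "ip": "10.20.30.1",
        "username": "admin",
        "password": "password",
        "type": "Force10"
      },
      "hosts": [
        {
          "mac": "b8:ac:6f:9a:fa:6b",
          "port": "tengigabitethernet 0/3"
        }
      ]
    }
  ]
}
Registering, or later refreshing, the RCT through the API instead of the UI (step 3); the parameter name here is an assumption to verify against the API reference:
http://<MGMTIP>:8096/client/api?command=addBaremetalRct&baremetalrcturl=http://10.20.20.10/baremetal/rct.json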

8.1.2.2.4. Create a Baremetal Compute Offering
1. Log in as admin to the CloudPlatform UI at the URL below. Substitute the IP address of your own
Management Server:
http://<management-server-ip-address>:8080/client

2. In the left navigation bar, click Service Offerings.
3. In Select Offering, choose Compute Offerings.
4. Click Add compute offering.
5. In the dialog box, fill in these values:
• Name: Any desired name for the service offering.


• Description: A short description of the offering that can be displayed to users.
• Storage Type: Shared.
• Custom: Use when the users want to specify their compute requirements (CPU, RAM, number
of cores, and so on) at the time of VM creation, instead of using a pre-configured compute
offering.
If the Custom field is checked, the following parameters need to be specified during VM
deployment:
• # of CPU cores: The number of cores which should be allocated to a system VM with this
offering. Use the same value as when you added the host.
• CPU (in MHz): The CPU speed of the cores that the system VM is allocated. For example,
“2000” would provide for a 2 GHz clock. Use the same value as when you added the host.
• Memory (in MB): The amount of memory in megabytes that the system VM should be
allocated. For example, “2048” would provide for a 2 GB RAM allocation. Use the same value
as when you added the host.
• Network Rate: Allowed data transfer rate in MB per second.
• Disk Read Rate (BPS): Allowed disk read rate in bits per second.
• Disk Write Rate (BPS): Allowed disk write rate in bits per second.
• Disk Read Rate (IOPS): Allowed disk read rate in IOPS (input/output operations per second).
• Disk Write Rate (IOPS): Allowed disk write rate in IOPS (input/output operations per second).
• Offer HA: Unchecked. High availability services are not supported for Baremetal hosts.
• Storage Tags: The tags that should be associated with the primary storage used by the system
VM.
• Host Tags (mandatory): Ensures that the CPU, RAM, and CPU core values are matched with the
host details. This is required to maintain correct host capacity in CloudPlatform, because
CloudPlatform cannot collect statistics from Baremetal hosts. (A hedged API sketch of an
equivalent offering appears after this procedure.)
• Public: Indicate whether the service offering should be available to all domains or only to some
domains. Choose Yes to make it available to all domains.
• Deployment Planner: Choose the technique that you would like CloudPlatform to use when
deploying VMs based on this service offering. Select BareMetal Planner.
6. Click OK.
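
For reference, the same offering can be created from the API. This is a hedged sketch rather than
part of the documented procedure: the endpoint, sizing values, and host tag are placeholders,
authentication is omitted, and the deploymentplanner value BareMetalPlanner is assumed to match
the "BareMetal Planner" choice shown in the UI. The host tag must match the tag you assign to the
Baremetal host.

MGMT="http://10.20.20.2:8080/client/api"   # assumed Management Server endpoint

# Create a Baremetal compute offering with a mandatory host tag and the
# BareMetal deployment planner; CPU count, speed, and memory mirror the host values.
curl -sG "$MGMT" \
  --data-urlencode "command=createServiceOffering" \
  --data-urlencode "name=baremetal-large" \
  --data-urlencode "displaytext=Baremetal compute offering" \
  --data-urlencode "storagetype=shared" \
  --data-urlencode "cpunumber=4" \
  --data-urlencode "cpuspeed=2000" \
  --data-urlencode "memory=4096" \
  --data-urlencode "hosttags=large" \
  --data-urlencode "offerha=false" \
  --data-urlencode "deploymentplanner=BareMetalPlanner"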

8.1.2.2.5. Creating a Baremetal Network Offering
1. Log in as admin to the CloudPlatform UI.
2. In the left navigation bar, click Service Offerings.
3. In Select Offering, choose Network Offering.
4. Click Add Network Offering.


5. In the dialog, make the following choices based on the type of network offering you want to
create:
• Name: You can give the offering any desired name. For example, Baremetal.
• Guest Type: Select Shared to create an offering for a basic network. Select Isolated to create an
offering for an advanced network.
• Supported Services:
• DHCP checkbox: checked; DHCP Provider: Select Virtual Router.
• DNS: checked; DNS Provider: Select Virtual Router.
• User Data checkbox: checked; User Data Provider: Select Virtual Router.
• Firewall: checked; Firewall Provider: Select Virtual Router.
• Source NAT: checked; Source NAT Provider: Select Virtual Router.
• Static NAT: checked; Static NAT Provider: Select Virtual Router.
• Port Forwarding: checked; Port Forwarding Provider: Select Virtual Router.
• Load Balancer: checked; Load Balancer Provider: Select Virtual Router.
• BaremetalPxeServer: checked.
• Baremetal Internal Storage Server IP: Users can specify the internal storage server IP address
for each network that they configure. This helps in binding the storage server IP address with the
network.
• Security Groups: Not supported. (Security Groups are supported with Basic zone networking.)
• Additional choices in this dialog are described in "Creating a New Network Offering" in the
Administrator's Guide.
6. Click OK.
7. Verify:
a. In the left navigation bar, click Service Offerings.

b. In the Select Offering dropdown, choose Network Offerings.
c. Click the name of the offering you just created, and check the details. In State, be sure the
offering is Enabled. If not, click the Enable button.

8.1.2.2.6. Creating an Isolated Network
This section is applicable only if you are using an advanced network for the Baremetal instances.
These steps assume you have already logged in to the CloudPlatform UI as an administrator. To
configure the base guest network:
1. In the left navigation, select Network, then click Add Isolated Network.
The Add Isolated Guest Network window is displayed.
2. Provide the following information:
• Name: The name of the network. This will be user-visible.
• Display Text: The description of the network. This will be displayed to the user.
• Zone: The Baremetal zone in which you are configuring the guest network.
• Network offering: Select the network offering you created in Section 8.1.2.2.5, “Creating a
Baremetal Network Offering”.
• Guest Gateway: The gateway that the guests should use.
• Guest Netmask: The netmask in use on the subnet the guests will use.
• Network Domain: A custom DNS suffix at the level of a network. If you want to assign a special
domain name to the guest VM network, specify a DNS suffix.
3. Click OK.

8.1.2.2.7. Troubleshooting Tips
• To program a VLAN for each Baremetal instance in a Baremetal advanced zone, CloudPlatform
must understand the rack-level network topology, that is, the host-switch port mapping. You can
use the addBaremetalRCT API with the URL of a JSON file containing the rack configuration text
to provide this information to CloudPlatform. It must contain all the information (for example,
switch identity and credentials, host-switch port mapping, and so on). You do not need to specify
any redundant information about zone/pod/cluster, because the host MAC can figure out other
needed facts. Register the RCT in CloudPlatform using the addBaremetalRCT API. CloudPlatform
allows only one RCT file to be added; RCT is globally available to all Baremetal zones. When
RCT is changed on an HTTP server, you can call the addBaremetalRCT API again to update RCT
with the same URL in CloudPlatform instantly.
• When VM deployment is triggered in a Baremetal advanced zone configuration, the VLAN is
configured on a switch on the port where the Baremetal host is connected. If VM deployment or
any other operation fails with the following exception, you must clean the port where the
Baremetal host is connected on the switch:

2015-05-29 04:58:21.403 ERROR [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-15:ctx-b5cfe8c2
job-724/job-725 ctx-e863b141) (logid:3b8f48cb) Failed to start instance VM[User|i-2-101-VM]
com.cloud.utils.exception.CloudRuntimeException: failed to program vlan[1010] for
port[tengigabitethernet:0/1] on force10[ip:10...], http status:400, body dump: inconsistent value:
% Error: Te 0/1 Port is untagged in another Vlan.
at com.cloud.baremetal.networkservice.Force10BaremetalSwitchBackend.prepareVlan
(Force10BaremetalSwitchBackend.java:145)
at com.cloud.baremetal.manager.BaremetalVlanManagerImpl.prepareVlan
(BaremetalVlanManagerImpl.java:152)
at com.cloud.baremetal.networkservice.BaremetalPxeElement.prepareVlan
(BaremetalPxeElement.java:155)
at com.cloud.baremetal.networkservice.BaremetalPxeElement.prepare (BaremetalPxeElement.java:156)
at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepareElement
(NetworkOrchestrator.java:1242)
at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepareNic
(NetworkOrchestrator.java:1315)
at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepare
(NetworkOrchestrator.java:1379)

8.2. Add a Baremetal Cluster
1. Log in as admin to the CloudPlatform UI.
2. In the left navigation, choose Infrastructure. In Zones, click View All, then click the name of the
Baremetal zone you added earlier.
3. Click the Compute and Storage tab.
4. Click View Clusters, then click Add Cluster. The Add Cluster dialog will appear.
5. Specify the following:
• Zone: Select the zone where the VMware cluster is set up.
• Hypervisor: Select Baremetal.
• Cluster name: Enter a name for the cluster. This can be any text you like.
• Pod name: Select the desired pod.
6. Click OK.

8.3. Add a Baremetal Host
1. Log in as admin to the CloudPlatform UI.
2. In the left navigation, click Infrastructure. In Clusters, click View All, then click the name of the
Baremetal cluster you added earlier.
3. Click View Hosts.

4. Click the Add Host button. The Add Host dialog will appear.
5. In the Add Host dialog, make the following choices:
• Host name: The IPMI IP address of the machine.
• Username: User name you set for IPMI.
• Password: Password you set for IPMI.
• # CPU Cores: Number of CPUs on the machine.
• CPU (in MHz): Frequency of CPU.
• Memory (in MB): Memory capacity of the new host.
• Host MAC: MAC address of the PXE NIC. It is the Ethernet card, and not the IPMI, that will be
used for the network boot.
• Host Tags: Set to large. You will use this tag later when you create the service offering.
It may take a minute for the host to be provisioned. It should automatically display in the UI.
Repeat for additional Baremetal hosts.

8.4. Create a Baremetal Template
In these steps, it is assumed you already have a directory on your NFS server containing the image
for the Baremetal instance, as well as the kickstart file. Ensure that you have met the prerequisites
for Kickstart Template creation. For more information, see the CloudPlatform Hypervisor
Configuration Guide.
1. Log into the UI as either an end user or administrator.
2. In the left navigation bar, click Templates.
3. Click Create Template.
4. In the dialog box, enter the following values:
• Name: Short name for the template.
• Display Text: Description of the template.
• URL: The location of the image file on your NFS server, in the format:
ks=<http_link_to_kickstart_file>;kernel=<nfs_path_to_pxe_bootable_kernel>;initrd=<nfs_path_to_pxe_initrd>
For example:
ks=http://10.10.10.10/baremetal/Centos.ks;kernel=10.10.10.10:/var/www/html/baremetal/vmlinuz;initrd=10.10.10.10:/var/www/html/baremetal/initrd.img

Note
You must provide the kernel name as vmlinuz and the initrd file name as initrd.img. The kickstart
file is located on an HTTP server; we use the link to it here.

• Zone: All Zones.
• OS Type: Select the OS type of the ISO image. Choose other if the OS Type of the ISO is not
listed or if the ISO is not bootable.
• Hypervisor: BareMetal.
• Format: BareMetal.
• HVM: Click to remove the selection of the HVM checkbox. By default, this checkbox is selected.
• Password Enabled: No.
• Public: No.
• Featured: Choose Yes if you would like this template to be more prominent for users to select.
Only administrators may make templates featured.
5. Click OK.

8.5. Create a Baremetal Compute Offering
1. Log in as admin to the CloudPlatform UI at the URL below. Substitute the IP address of your own
Management Server:
http://<management-server-ip-address>:8080/client
2. In the left navigation bar, click Service Offerings.
3. In Select Offering, choose Compute Offerings.
4. Click Add compute offering.
5. In the dialog box, fill in these values:
• Name: Any desired name for the service offering.
• Description: A short description of the offering that can be displayed to users.
• Storage Type: Shared.
• Custom: Use when the users want to specify their compute requirements (CPU, RAM, number
of cores, and so on) at the time of VM creation, instead of using a pre-configured compute
offering.

If the Custom field is checked, the following parameters need to be specified during VM
deployment:
• # of CPU cores: The number of cores which should be allocated to a system VM with this
offering. Use the same value as when you added the host.
• CPU (in MHz): The CPU speed of the cores that the system VM is allocated. For example,
“2000” would provide for a 2 GHz clock. Use the same value as when you added the host.
• Memory (in MB): The amount of memory in megabytes that the system VM should be allocated.
For example, “2048” would provide for a 2 GB RAM allocation. Use the same value as when you
added the host.
• Network Rate: Allowed data transfer rate in MB per second.
• Disk Read Rate (BPS): Allowed disk read rate in bits per second.
• Disk Write Rate (BPS): Allowed disk write rate in bits per second.
• Disk Read Rate (IOPS): Allowed disk read rate in IOPS (input/output operations per second).
• Disk Write Rate (IOPS): Allowed disk write rate in IOPS (input/output operations per second).
• CPU cap: Whether to limit the level of CPU usage even if spare capacity is available.
• Offer HA: Unchecked. High availability services are not supported for Baremetal hosts.
• Storage Tags: The tags that should be associated with the primary storage used by the system
VM.
• Host Tags: Host tag for the compute offering. The host tag is mandatory.
• Public: Indicate whether the service offering should be available to all domains or only to some
domains. Choose Yes to make it available to all domains.
• Deployment Planner: Choose the technique that you would like CloudPlatform to use when
deploying VMs based on this service offering. Select BareMetal Planner.
6. Click OK.

8.6. (Optional) Setting Baremetal Configuration Parameters
1. Log in as admin to the CloudPlatform UI.
2. Click Global Settings.
3. Make any desired modifications to the Baremetal configuration parameters.
• Basic Zone
• enable.baremetal.securitygroup.agent.echo (default: false)
• interval.baremetal.securitygroup.agent.echo (default: 10)
• timeout.baremetal.securitygroup.agent.echo (default: 3600)
• Advanced Zone

• baremetal.internal.storage.server.ip: This is specified with the gateway IP of an instance. The
traffic towards the internal HTTP server goes through the VR's management NIC instead of the
public NIC, which is SourceNATed by the CloudPlatform Management Server to the management
NIC. The source NAT will only bind to the guest IP of the provisioning instance to prevent network
sniffing from other VMs in the same network.
4. Restart the CloudPlatform Management Server to put the new settings into effect.

8.7. Provisioning a Baremetal Instance
Deploy one Baremetal instance per host using these steps. Ensure that you meet the prerequisites.
For Advanced zone-specific instructions, see Section 8.1.2.2.3, “Deploying Baremetal Instances in
an Advanced Zone”.

Note
Before creating any BM instance on a Basic zone, change
baremetal.provision.done.notification.enabled to false; otherwise your instance will stay in the
‘Starting’ state and finally go to the Error state after the completion of the timeout period set by the
baremetal.provision.done.notification.timeout configuration parameter. By default, the value for the
baremetal.provision.done.notification.timeout parameter has been set to 1800 seconds (30 minutes).

1. Log in to the CloudPlatform UI as an administrator or user.
2. In the left navigation bar, click Instances.
3. Click Add Instance.
4. Select a zone, and click Next.
5. Click Template.
6. Select the template that you created earlier, in Section 8.4, “Create a Baremetal Template”, and
click Next.
7. Select the compute offering you created earlier, in Section 8.5, “Create a Baremetal Compute
Offering”.
8. (Advanced network) Select the Isolated network you created earlier, in Section 8.1.2.2.6,
“Creating an Isolated Network”.
9. Click Next.
10. Click Launch, and the instance will be created.

11. (Basic network) Set up security groups with ingress and egress rules to control inbound and
outbound network traffic. Follow the steps in Using Security Groups in the Administrator's Guide.
12. If you want to allow inbound network traffic to the Baremetal instances through public IPs, set
up public IPs and port forwarding rules. Follow the steps in How to Set Up Port Forwarding in the
Administrator's Guide.

8.8. Provisioning Completion Notification
As an administrator, you must configure the first command in the post installation section of the
kickstart file to be the script that notifies the completion status of provisioning to CloudPlatform.
The script sends a simple HTTP request that can be issued by curl or wget using username/
password authentication. Then a notification will be sent from the VR to the Management Server to
indicate the completion of provisioning. After the agent in the VR receives the notification, it
identifies the instance by its IP address. Then, the Management Server transitions the instance
state from Starting to Running. If there is no notification sent to the Management Server or the
notification fails to reach the Management Server, the Management Server will shut down the
instance and transition its state to Error after the expiration of the TTL. For security reasons, the
administrator must keep the credentials to safely send the HTTP notification request and never let
any customer script run before the provisioning completion notification script in the post installation
script.
The following is the script to be included in the beginning of the post install section of the kickstart
file:

cmdline=`cat /proc/cmdline`
for pair in $cmdline
do
  set -- `echo $pair | tr '=' ' '`
  if [[ $1 == 'ksdevice' ]]; then
    mac=$2
  fi
done
gw=`ip route | grep 'default' | cut -d " " -f 3`
curl "http://$gw:10086/baremetal/provisiondone/$mac"

After you add the provisioning completion script, the post installation section of the kickstart file will
appear as follows:

%post
#really disable ipv6
echo "install ipv6 /bin/true" > /etc/modprobe.d/blacklist-ipv6.conf
echo "blacklist ipv6" >> /etc/modprobe.d/blacklist-ipv6.conf
#yum -y install libmnl
chkconfig iptables off
chkconfig cs-sgagent on
#service cs-sgagent start
cmdline=`cat /proc/cmdline`
for pair in $cmdline
do
  set -- `echo $pair | tr '=' ' '`
  if [[ $1 == 'ksdevice' ]]; then
    mac=$2
  fi
done
gw=`ip route | grep 'default' | cut -d " " -f 3`
curl "http://$gw:10086/baremetal/provisiondone/$mac"

8.8.1. System Account
The baremetal-system-account is created during the Management Server booting phase if the
account does not already exist in the database. This account displays in the Account page of the
CloudPlatform UI. This account is used for the virtual router to send the provisioning completion
notification to the Management Server. Because CloudPlatform does not have a comprehensive
Identity and Access management at this phase, baremetal-system-account is created as a user
account to give the minimum needed permissions. The administrator must not delete it. If it is
removed, the Management Server does not receive the provisioning completion notification;
instead, the provisioning will be treated as a failure after the expiration of the TTL. If the account is
deleted accidentally, restarting the Management Server will recreate it. (A hedged API check is
sketched at the end of this chapter.)

8.9. Accessing Your Baremetal Instance
In the navigation bar of your browser, specify the IPMI address of the Baremetal host, and launch
the virtual console. The Baremetal host should be PXE booted to the specified installation.
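
The following is an optional, hedged check related to the System Account section above: it uses
the standard listAccounts API call to confirm that the baremetal-system-account exists after the
Management Server has started. The endpoint is a placeholder and API authentication is omitted.

MGMT="http://10.20.20.2:8080/client/api"   # assumed Management Server endpoint

# List the account used for provisioning completion notifications; an empty
# result after a Management Server restart would be unexpected.
curl -sG "$MGMT" \
  --data-urlencode "command=listAccounts" \
  --data-urlencode "name=baremetal-system-account" \
  --data-urlencode "listall=true"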


Appendix A. Important Global Configuration Parameters
The following list displays some important global configuration parameters in CloudPlatform. A
hedged example of reading these parameters through the API appears at the end of this appendix.
• check.pod.cidrs (Default: TRUE). By default, different pods must belong to different CIDR
subnets. You must set this parameter based on the available network configuration. (Category:
Advanced; Component: Management Server; Scope: Global)
• guest.domain.suffix (Default: cloud.internal). Default domain name for the VMs inside the
virtualized networks fronted by a router. (Category: Network; Component: Network Orchestration
Service; Scope: Zone)
• direct.agent.load.size (Default: 16). Determines the number of direct agents to be loaded each
time, that is, how many orphaned hosts must be picked up by a management server at a time.
(Category: Advanced; Component: Management Server; Scope: Null)
• direct.agent.pool.size (Default: 500). Determines the default size for DirectAgentPool, that is, the
number of threads that are used to process the commands sent to VMware.

Primarily, this parameter requires the number of ESXi hosts and API requests sent to VMware.
(Category: Advanced; Component: Agent Manager; Scope: NULL)
• hypervisor.list (Default: XenServer, KVM, VMware, Hyper-V, LXC, and BareMetal). The list of
hypervisors that this deployment uses. This parameter is used when the listHypervisors API is
called (with the optional parameter ZoneId) to list all the hypervisor types that are available in the
current deployment. You can change this to, for example, KVM and VMware only. (Category:
Advanced; Component: Agent Manager; Scope: Global)
• ha.workers (Default: 5). Number of the HA-Worker threads in CloudPlatform. This parameter
determines the number of VM HA operations that can occur simultaneously. (Category:
Advanced; Component: Management Server; Scope: NULL)
• secstorage.ssl.cert.domain. The domain name of the certificate used for SSL communication. If
you enable SSL communication for SSVM-related functionalities, such as copy template/ISO and
download volume/template/ISO, by setting the secstorage.encrypt.copy parameter to true, then
secstorage.ssl.cert.domain

would signify the domain name of the certificate used for SSL communication. The value you enter
here has to be exactly the same as the value entered for the global parameter
consoleproxy.url.domain. After you change this value, ensure that you upload the certificates
belonging to this domain. (Category: Advanced; Component: Extension Registry; Scope: NULL)
• vmware.percluster.host.max (Default: 8). Maximum hosts on a vCenter cluster. Do not let it grow
over 8.
• vmware.recycle.hung.wokervm (Default: FALSE). Specifies whether to recycle hung worker VMs.
Worker VMs are dummy wrapper VMs that are created to perform certain volume-related tasks in
VMware; for example, worker VMs back up a volume into secondary storage. After a worker VM
completes the task that it has been assigned, CS destroys it. However, when it faces some
unexpected problems (such as the Management Server stopping while the worker VM performs a
task), CloudPlatform ends up with a hung worker VM that needs to be cleaned up. This parameter
enables administrators to recycle the hung worker VMs. You can distinguish worker VMs from
regular VMs by their names; worker VMs carry UUID names. (Category: Advanced; Component:
Management Server; Scope: NULL)

A.1. Important Healthcheck Parameters
The healthcheck parameters evaluate whether your CloudPlatform configuration conforms to a
predefined standard. You can view the important healthcheck parameters that CloudPlatform uses
in the following list:
• cluster.cpu.allocated.capacity.notificationthreshold (Default: 0.75). The CPU utilization threshold
that is represented as a percentage value between 0 and 1. If the CPU utilization exceeds this
value, CloudPlatform will send alerts indicating the low CPU availability. (Category: Alert;
Component: Management Server; Scope: NULL)
• cluster.localStorage.capacity.notificationthreshold (Default: 0.75). The local storage utilization
threshold that is represented as a percentage value between 0 and 1. If the local storage
utilization exceeds this value, CloudPlatform will send alerts indicating the low availability of local
storage. (Category: Alert; Component: Alert Manager; Scope: Cluster)
• cluster.memory.allocated.capacity.notificationthreshold (Default: 0.75). The memory utilization
threshold that is represented as a percentage value between 0 and 1. If the memory utilization
exceeds this value, CloudPlatform will send alerts indicating the low availability of memory.
(Category: Alert; Component: Alert Manager; Scope: Cluster)

• cluster.storage.allocated.capacity.notificationthreshold (Default: 0.75). The allocated storage
utilization threshold that is represented as a percentage value between 0 and 1. If the allocated
storage utilization exceeds this value, CloudPlatform will send alerts indicating the low availability
of allocated storage. (Category: Alert; Component: Alert Manager; Scope: Cluster)
• cluster.storage.capacity.notificationthreshold (Default: 0.75). The storage utilization threshold that
is represented as a percentage value between 0 and 1. If the storage utilization exceeds this
value, CloudPlatform will send alerts indicating the low availability of storage. (Category: Alert;
Component: Alert Manager; Scope: Cluster)
• cluster.message.timeout.seconds (Default: 300). Time (in seconds) to wait before an inter-
management server message post times out. The messages between management servers in a
cluster must not experience time out. (Category: Advanced; Component: Management Server;
Scope: NULL)
• consoleproxy.disable.rpfilter (Default: TRUE). By default, the rp_filter is disabled on the public
interface of the console proxy VM. (Category: Console Proxy; Component: Agent Manager;
Scope: NULL)

• consoleproxy.loadscan.interval (Default: 10000). The time interval (in milliseconds) to scan
console proxy working-load information. (Category: Console Proxy; Component: Agent Manager;
Scope: NULL)
• consoleproxy.session.max (Default: 50). The maximum number of viewer sessions that console
proxy is configured to serve. (Category: Console Proxy; Component: Agent Manager; Scope:
NULL)
• custom.diskoffering.size.max (Default: 1024). Maximum size represented in Gigabyte (GB) for
the custom disk offering. (Category: Advanced; Component: Volume Orchestration Service;
Scope: Global)
• event.purge.delay (Default: 15). The number of days for which the entries are retained in the
Events table. CloudPlatform deletes the entries that are older than the number of days specified
for this parameter from the Events table. If you want to retain all the entries in the Events table,
set this value to 0. (Category: Advanced; Component: Management Server; Scope: NULL)
• extract.url.cleanup.interval (Default: 120). The interval (in seconds) for which CloudPlatform waits
before it cleans up the extracted URLs. Do not modify the default value for this parameter unless
you want to disable the clean up or you want to trigger the clean up more frequently. (Category:
Advanced; Component: Management Server; Scope: NULL)

• host.retry (Default: 2). Number of times the host tries to create the volume to attach to the VM.
(Category: Advanced; Component: Agent Manager; Scope: NULL)
• host.stats.interval (Default: 60000). The time interval (in milliseconds) when the host statistics are
retrieved from agents. Host statistics are collected for all hypervisors (KVM/Xen/VMware/Hyper-V).
(Category: Advanced; Component: Management Server; Scope: NULL)
• job.cancel.threshold.minutes (Default: 60). Time threshold (in minutes) for which CloudPlatform
waits before it forces the cancellation of asynchronized jobs that have been in process for a long
time. An asynchronized job that has been in process for a long time can block other jobs that must
run. This parameter does not control the timeout of any orchestration commands. (Category:
Advanced; Component: Async Job Manager; Scope: Global)

• linkLocalIp.nums (Default: 10). The number of link local IP addresses that the Domain Routers
(domR) need (in power of 2). If the number of Domain Routers (XenServer, KVM) exceeds the
value set by this parameter, system VM deployment fails due to insufficient link local IP addresses.
(Category: Advanced; Component: Management Server; Scope: NULL)
• migratewait (Default: 3600). Time (in seconds) for which CloudPlatform waits to complete the VM
migration. This parameter applies to VM migration only. (Category: Advanced; Component: Agent
Manager; Scope: NULL)
• network.disable.rpfilter (Default: TRUE). By default, the rp_filter is disabled on the public interface
of the Domain Router VM. If you enable rp_filter, the kernel does source validation by confirming
the reverse path. (Category: Network; Component: Management Server; Scope: NULL)
• network.gc.interval (Default: 600). The time interval (in seconds) between identifying an idle VR
(that does not contain any User VMs in the 'Running' state) and deciding to shut it down.
(Category: Advanced; Component: Network Orchestration Service; Scope: Global)

• network.gc.wait (Default: 600). Time (in seconds) to wait before actually shutting down an idle VR
(which does not contain any User VMs in the 'Running' state) that is not in use. (Category:
Advanced; Component: Network Orchestration Service; Scope: Global)
• network.guest.cidr.limit (Default: 22). Size limit for guest CIDR. The guest CIDR cannot be less
than this value. (Category: Network; Component: Network Manager; Scope: NULL)
• pod.privateip.capacity.notificationthreshold (Default: 0.75). The private IP address space
utilization threshold that is represented as a percentage value between 0 and 1. If the private IP
address space utilization exceeds this value, CloudPlatform will send alerts indicating the low
availability of private IP address space. (Category: Alert; Component: Management Server;
Scope: NULL)
• pool.storage.allocated.capacity.disablethreshold (Default: 0.85). The allocated storage pool
utilization threshold that is represented as a percentage value between 0 and 1. If the allocated
storage pool utilization exceeds this value, the allocators will disable using the pool, indicating the
low availability of allocated storage. (Category: Alert; Component: Capacity Manager; Scope:
Zone)

• pool.storage.capacity.disablethreshold (Default: 0.85). The storage utilization capacity threshold
that is represented as a percentage value between 0 and 1. If the storage utilization exceeds this
value, the allocators will disable the storage, indicating the low availability of storage. (Category:
Alert; Component: Capacity Manager; Scope: Zone)
• restart.retry.interval (Default: 600). Time (in seconds) between retries to restart a VM. This flag
controls the interval between HA restart attempts. (Category: Advanced; Component: High
Availability Manager; Scope: NULL)
• router.ram.size (Default, encrypted: 8QhDMmJeVzoPeyAdDkQRQw==). Default RAM size of the
router VM (in MB). This is a hidden parameter; CloudPlatform persists the encrypted values of
hidden parameters in the database. (Category: Hidden; Component: Management Server; Scope:
NULL)
• secondary.storage.vm (Default, encrypted: hbPAm6DyrYJtYkMed0I0ng==). If true, deploys a VM
per zone to manage secondary storage; otherwise, secondary storage is mounted on the
management server.

This flag should have the value "TRUE/FALSE". If it is TRUE, CloudPlatform will start an SSVM per
zone to manage secondary storage; otherwise, all secondary storage-related operations will be
done on the management server. (Category: Hidden; Component: Management Server; Scope:
NULL)
• secstorage.cmd.execution.time.max (Default: 30). The maximum time (in minutes) for executing a
command. This flag is only used by VMware to clean up some commands from the cmd_exec_log
table that are not executed properly. The VMware secondary storage manager will use entries in
this table to determine whether the SSVMs should be expanded to handle the current load.
(Category: Advanced; Component: Agent Manager; Scope: NULL)
• secstorage.vm.mtu.size (Default: 1500). MTU size (in Bytes) of the storage network in the
secondary storage VMs. This value will be passed as boot arguments when we start and boot up
the SSVM. (Category: Advanced; Component: Agent Manager; Scope: NULL)
• snapshot.delta.max (Default: 16). The maximum number of delta snapshots that are allowed in
the time interval between two full snapshots. A delta snapshot is the difference between the parent
snapshot and the current volume. Only applicable in case of XenServer. (Category: Snapshots;
Component: Snapshot Manager; Scope: NULL)

• start.retry (Default: 10). Number of retries in deploying a VM or starting a VM when the placement
of the VM on a physical host fails due to a runtime error. No time interval is required between the
retries. (Category: Advanced; Component: High Availability Manager; Scope: NULL)
• stop.retry.interval (Default: 600). Time in seconds between two consecutive retries to stop or
destroy a VM. This flag does not control the time that is spent waiting for graceful guest OS
shutdown. (Category: Advanced; Component: Virtual Machine Manager; Scope: Global)
• storage.cleanup.enabled (Default: TRUE). Enables/disables the storage cleanup thread. If this
flag is TRUE, a background thread will start on intervals specified by storage.cleanup.interval. This
background thread will clean up unused templates and destroyed volumes from primary storage;
remove all templates, volumes, and snapshots that are marked as destroyed in template_store_ref,
volume_store_ref, and snapshot_store_ref from secondary storage; and remove snapshots marked
as Error from DB tables. (Category: Advanced; Component: Storage Manager; Scope: NULL)

• storage.pool.max.waitseconds (Default: 3600). Time in seconds to synchronize storage pool
operations. This is the timeout value used in copying a template from secondary to primary
storage. (Category: Storage; Component: Management Server; Scope: NULL)
• sync.interval (Default: 60). Cluster delta sync interval in seconds. This is to keep the VM states in
sync between the host and the management server. (Category: Advanced; Component:
Management Server; Scope: NULL)
• system.vm.default.hypervisor (Default: NULL). Hypervisor type used to create system VMs. If this
parameter is null, CloudPlatform considers one of the supported hypervisors in that zone.
(Category: Advanced; Component: Virtual Machine Manager; Scope: Global)
• task.cleanup.retry.interval (Default: 600). Time (in seconds) to wait before retrying the cleaning up
of tasks if the clean up operation had failed previously. '0' means no retry. This is a GC (Garbage
Collecting) process; the interval is used to control the frequency of running the clean up process.
(Category: Advanced; Component: Management Server; Scope: NULL)

• traffic.sentinel.include.zones (Default: EXTERNAL). Traffic that goes into the specified list of
zones is metered. Used for metering data in the shared network. Zones listed here are Traffic
Sentinel zones; all traffic going from the guest network to the listed zones in Traffic Sentinel will be
metered. For metering all traffic, leave this parameter empty. (Category: Usage; Component:
Management Server; Scope: NULL)
• update.wait (Default: 600). Time to wait (in seconds) before alerting on an updating agent.
(Category: Advanced; Component: Agent Manager; Scope: NULL)
• vm.op.cancel.interval (Default: 3600). Time (in seconds) to wait before cancelling an operation.
This parameter is generic to all VM operation commands. When a VM operation command has
been submitted and, for whatever reason, it could not finish, it won't be picked up again after the
cancelling interval has passed. (Category: Advanced; Component: Virtual Machine Manager;
Scope: Global)

• vm.op.cleanup.interval (Default: 86400). Interval (in seconds) to run the thread that cleans up the
VM operations. (Category: Advanced; Component: Virtual Machine Manager; Scope: Global)
• vm.op.cleanup.wait (Default: 3600). Time (in seconds) to wait before cleaning up any VM work
items. It is the wait time to clean up tasks that are already completed (success or failure). This
parameter is not related to VMware. (Category: Advanced; Component: Virtual Machine Manager;
Scope: Global)
• vm.op.wait.interval (Default: 120). Time (in seconds) to wait before checking if a previous
operation has successfully performed. (Category: Advanced; Component: Virtual Machine
Manager; Scope: Global)
• router.check.poolsize (Default: 10). Number of threads to check the status of redundant routers.
(Category: Advanced; Component: Network Manager; Scope: NULL)

• vm.op.lock.state.retry (Default: 5). Times to retry locking the state of a VM for operations. This
parameter specifies the number of retries when trying to lock down the VM for a state transition to
happen. (Category: Advanced; Component: Virtual Machine Manager; Scope: Global)
• vm.tranisition.wait.interval (Default: 3600). Time (in seconds) to wait before taking over a VM in a
transition state. (Category: Advanced; Component: Management Server; Scope: NULL)
• wait (Default: 1800). Time in seconds to wait for control commands to return. This flag gives the
default timeout value for commands sent from the Management Server to host agents. If it is
changed, it affects commands that are using the default timeout setting; however, there are
commands that use command-specific timeout settings, and for those commands it does not have
an effect. (Category: Advanced; Component: Agent Manager; Scope: Global)
• xapiwait (Default: 600). Time (in seconds) to wait for XAPI to return. (Category: Advanced;
Component: Agent Manager; Scope: NULL)

• zone.vlan.capacity.notificationthreshold (Default: 0.75). The Zone VLAN utilization threshold that
is represented as a percentage value between 0 and 1. If the Zone VLAN utilization exceeds this
value, CloudPlatform will send alerts indicating the low number of Zone VLANs. (Category: Alert;
Component: Management Server; Scope: NULL)
• zone.virtualnetwork.publicip.capacity.notificationthreshold (Default: 0.75). The public IP address
space utilization threshold that is represented as a percentage value between 0 and 1. If the public
IP address space utilization exceeds this value, CloudPlatform will send alerts indicating the low
availability of public IP address space. (Category: Alert; Component: Management Server; Scope:
NULL)
• zone.directnetwork.publicip.capacity.notificationthreshold (Default: 0.75). The Direct network
public IP address utilization threshold that is represented as a percentage value between 0 and 1.
If the Direct network public IP utilization exceeds this value, CloudPlatform will send alerts
indicating the low availability of Direct network public IP addresses. (Category: Alert; Component:
Management Server; Scope: NULL)
• zone.secstorage.capacity.notificationthreshold (Default: 0.75). The secondary storage utilization
threshold that is represented as a percentage value between 0 and 1. If the secondary storage
utilization exceeds this value, CloudPlatform will send alerts indicating the low availability of
secondary storage. (Category: Alert; Component: Management Server; Scope: NULL)

• network.loadbalancer.basiczone.elb.gc.interval.minutes (Default: 30). Garbage collection interval,
in minutes, to destroy unused ELB VMs. The minimum is 5 minutes. (Category: Advanced;
Component: Management Server; Scope: NULL)
• secstorage.allowed.internal.sites (Default: Null). Comma-separated list of CIDRs internal to the
datacenter that can host template download servers. Please note 0.0.0.0 is not a valid site.
(Category: Advanced; Component: Management Server; Scope: NULL)
The following configuration parameters are not supported:
• secstorage.session.max (Default: 50). The maximum number of command execution sessions
that a secondary storage VM (SSVM) can handle. (Category: Advanced; Component: Alert
Manager; Scope: NULL; Hypervisor: XenServer, KVM)
• secstorage.session.standby (Default: 10). The minimum number of command execution sessions
that the system is able to serve immediately (standby capacity). (Category: Advanced; Component:
Agent Manager; Scope: NULL; Hypervisor: XenServer, KVM)

• concurrent.snapshots.threshold.perhost (Default: NULL). Limits the number of snapshots that
CloudPlatform can handle concurrently. (Category: Advanced; Component: Management Server;
Scope: NULL; Hypervisor: All hypervisors)

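As a convenience, the parameters in this appendix can also be inspected from the command line
instead of the Global Settings page. The sketch below uses the standard listConfigurations API
call; the endpoint is a placeholder and API authentication is omitted. Remember that some of these
parameters only take effect after the Management Server is restarted.

MGMT="http://10.20.20.2:8080/client/api"   # assumed Management Server endpoint

# Read a single parameter, for example the VM operation wait interval.
curl -sG "$MGMT" \
  --data-urlencode "command=listConfigurations" \
  --data-urlencode "name=vm.op.wait.interval"

# Review all Alert-category parameters, such as the capacity thresholds above.
curl -sG "$MGMT" \
  --data-urlencode "command=listConfigurations" \
  --data-urlencode "category=Alert"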
