Architecting a vCloud
Version 1.0
TECHNICAL WHITE PAPER
Table of Contents
1. What is a VMware vCloud?
   1.1 Document Purpose and Assumptions
   1.2 Cloud Computing and vCloud Introduction
   1.3 vCloud Components
2. Assembling a vCloud
   2.1 vCloud Logical Architecture
   2.2 vCloud Management Cluster
   2.3 vCloud Resource Groups
3. Creating Services with vCloud Director
   3.1 vCloud Director Constructs
   3.2 Establish Provider Virtual Datacenters (Prov vDCs)
   3.3 Establish Organizations
   3.4 Establish Networking Options – Public vCloud
   3.5 Establish Networking Options – Private vCloud
   3.6 Establish Organization Virtual Datacenters (Org vDCs)
   3.7 Create vApp Templates and Media Catalogs
   3.8 Establish Policies
   3.9 Accessing your vCloud
4. Managing the vCloud
   4.1 Monitoring
   4.2 Logging
   4.3 Security Considerations
   4.4 Workload Availability Considerations
5. Sizing the vCloud
   5.1 Sizing Considerations
   5.2 Sizing the management cluster
   5.3 Sizing the workload resource group clusters
List of Figures
Figure 1 vCloud Overview
Figure 2 vCloud Logical Architecture Overview
Figure 3 vCloud Resource Group Mapping
Figure 4 vCloud Director Construct to vSphere Mapping
Figure 5 Example Diagram of Provider Networking for a Public vCloud
Figure 6 Configure External IPs
Figure 7 Example Diagram of Provider Networking for a Private vCloud
Figure 8 Configure Firewall Services
Figure 9 vShield Manager's Administrator UI
Figure 10 vCloud Director Manage and Monitor UI
Figure 11 Configure Firewall Services
CATEGORY              REFERENCED DOCUMENT
Service Definitions   vCloud
                      vCloud Director
                      vSphere
                      vShield
                      Chargeback

For further information, refer to the set of documentation for the appropriate product. For additional guidance
and best practices, refer to the Knowledge Base on vmware.com.
Additional referenced documents: Assembling a vCloud, Architecting a vCloud, vCloud API, vCenter Chargeback,
VMware vSphere.
VCLOUD COMPONENT      DESCRIPTION
VMware vSphere
VMware vShield
Other VMware or third-party products or solutions such as orchestration are not addressed in this iteration of a
vCloud.
2. Assembling a vCloud
2.1 vCloud Logical Architecture
In building a vCloud, assume that all management components such as vCenter Server and vCenter Chargeback
Server will run as virtual machines.
As a best practice of separating resources allocated for management functions from pure user-requested
workloads, the underlying vSphere clusters are split into two logical groups:
• A single management cluster running all core components and services needed to run the cloud.
• One or more vCloud resource groups that represent dedicated resources for cloud consumption. Each
resource group is a cluster of ESXi hosts managed by a vCenter Server, and is under the control of VMware
vCloud Director. Multiple resource groups can be managed by the same vCenter Server.
Reasons for organizing and separating vSphere resources along these lines are:
• Facilitating quicker troubleshooting and problem resolution. Management components are strictly contained
in a relatively small and manageable management cluster; they do not run scattered across a large set of host
clusters, where tracking down and managing such workloads would be time-consuming.
• Management components are separate from the resources they are managing.
• Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups would
not host vCenter VMs.
• Resource groups can be consistently and transparently managed, carved up, and scaled horizontally.
The logical architecture with vSphere resource separation is depicted as follows.
Figure 2 vCloud Logical Architecture Overview: a management cluster hosting the vCenter Server VMs and the
vCloud Director cell VMs, alongside the vCloud infrastructure resource groups that run the user workload VMs.
The management cluster resides in a single physical site. vCloud resource groups also reside within the same
physical site. This ensures a consistent level of service. Otherwise, latency issues might arise if workloads need to
be moved from one site to another, over a slower or less reliable network.
Neither secondary nor disaster recovery (DR) sites are in the scope of this document. Certain limitations apply
when using VMware and third-party tools for disaster recovery and secondary or federated sites. Consult your
local VMware representative for assistance in understanding these limitations and possible alternatives. You can
also consult the Knowledge Base on vmware.com for additional information.
Host networking in the management cluster will be configured per vSphere best practices, including (but not
limited to) the following:
• Separation of network traffic by type (management, VM, vMotion/Fault Tolerance (FT), storage) for security
and load considerations.
• Network path redundancy.
• Use of vNetwork Distributed Switches where possible for network management simplification. The
architecture calls for the use of vNetwork Distributed Switches in the user workload resource group, so it is a
best practice to use the vNetwork Distributed Switch across all of your clusters, including the management
cluster.
• Increasing the MTU size of the physical switches as well as the vNetwork Distributed Switches to at least 1524
bytes to accommodate the additional MAC header information used by vCloud Director Network Isolation
(vCD-NI) links. vCD-NI is called for by the service definition and the architecture found later in this document.
Failure to increase the MTU size could adversely affect network throughput to VMs hosted on the vCloud
infrastructure.
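The arithmetic behind the 1524-byte minimum can be sketched in a few lines. Note the 24-byte encapsulation overhead used here is inferred from the 1500-to-1524 delta the text calls for, not taken from a vCD-NI specification:

```python
# vCD-NI encapsulates guest frames, adding extra header bytes on top of
# the standard 1500-byte Ethernet payload. The 24-byte overhead below is
# inferred from the 1524-byte minimum this document specifies.
STANDARD_MTU = 1500   # default Ethernet payload size
VCDNI_OVERHEAD = 24   # encapsulation overhead implied by the 1524 minimum

def required_mtu(base_mtu: int = STANDARD_MTU, overhead: int = VCDNI_OVERHEAD) -> int:
    """Minimum MTU the physical and distributed switches must carry."""
    return base_mtu + overhead

if __name__ == "__main__":
    print(required_mtu())  # 1524
```

The same calculation applies if jumbo frames are in use: whatever payload size the guests see, the switches must carry that plus the encapsulation overhead.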
Shared storage in the management cluster will be configured per vSphere best practices, including (but not
limited to) the following:
• Storage paths will be redundant at the host (connector), switch, and storage array levels.
• All hosts in a cluster will have access to the same datastores.
• The use of RDMs in the vCloud Director infrastructure is currently not supported and should be avoided.
Management components running as VMs in the management cluster include the following:
• vCenter Server(s) and vCenter Database
• vCloud Director Cell(s) and vCloud Director Database
• vCenter Chargeback Server(s)
• vShield Manager (one per vCenter Server)
Optional management functions, deployed as VMs, include:
• vCenter Update Manager
• VMware Data Recovery
• VMware Management Assistant (vMA)
For more information on the resources needed by the VMs in the management cluster, refer to Sizing the vCloud
in this document.
The optional management VMs are not required by the service definition, but they are highly recommended to
increase the operational efficiency of the solution.
All of the management VMs can be protected by VMware HA and FT, unless the vCenter Server VM has 2 vCPUs,
in which case it cannot use FT and a solution such as vCenter Heartbeat should be considered. vCenter Site
Recovery Manager (SRM) can be used to protect some components of the management cluster. At this time,
vCenter Site Recovery Manager will not be used to protect vCloud Director cells because a secondary (DR) site
is out of scope of the vCloud, and changes to IP addresses and schemas in recovered vCloud Director cells can
result in problems.
Unlike a traditional vSphere environment where vCenter Server is used by administrators to provision VMs,
here vCenter Server plays an integral role in end-user self-service provisioning by handling all VM deployment
requests issued by vCloud Director. Therefore, ensuring the availability of vCenter Servers with a solution such
as vCenter Heartbeat is highly recommended.
vShield Edge appliances are deployed automatically by vCloud Director as needed and will reside in the vCloud
resource groups, not in the management cluster. They will be placed in a separate resource pool by vCloud
Director and vCenter. For additional information on the vShield Edge appliance and its functions, refer to the
vShield Manager Administrator guides.
Figure 3 vCloud Resource Group Mapping: vCloud resource groups map to vCenter host clusters and to the
resource pools vCloud Director creates within them.
While it is possible to create multiple vCenter resource pools per host cluster, it is best to dedicate the cluster for
use by vCloud Director. vCloud Director will automatically allocate resources to cloud organizations by creating
resource pools with appropriate reservations and limits within the cluster. Since vCloud Director manages
vSphere resources by proxy through a vCenter Server and automatically creates resource pools within vCenter
as needed, using vCenter Server to create resource pools or nested pools can go against the efficient allocation
of resources by vCloud Director. Multiple parent-level resource pools can also add unnecessary complexity and
lead to unpredictable results or inefficient use of resources, if the reservations are not set appropriately.
To summarize, it is a best practice to use a 1-to-1 mapping of vCloud resource group to vCenter host cluster.
Resource pools will be automatically created by vCloud Director.
Compute Resources
All hosts in the vCloud resource groups will be configured per vSphere best practices, similar to the
management cluster. VMware HA will also be used to protect against host and VM failures.
Resource groups can be of different compute capacity sizes (number of hosts, number of cores, performance of
hosts) to support differentiation of compute resources by capacity or performance for service level tiering
purposes.
For a detailed look at how to size the vCloud resource groups, refer to Sizing the vCloud in this document.
Storage
Shared storage in the vCloud resource groups will be configured per vSphere best practices, similar to the
management cluster. Storage types supported by vSphere will be used. The use of RDMs in the vCloud Director
infrastructure is currently not supported and should be avoided.
Creation of datastores will need to take into consideration Service Definition requirements and workload use
cases, which will affect the number and size of datastores to be created. vCloud Director will assign datastores
for use through provider virtual datacenters (provider vDCs), and only existing vSphere datastores can be
assigned.
Datastores in the vCloud resource groups will be used for vCloud workloads, known as vApps. vSphere best
practices apply for datastore sizing in terms of number and size. Vary datastore size or shared storage
characteristics if providing differentiated or tiered levels of service. Sizing considerations include:
• Datastore size:
  - What is the average vApp size x number of vApps x spare capacity?
    For example: Avg VM size * # VMs * (1 + % headroom)
  - What is the average VM disk size?
  - How many VMs are in a vApp?
  - How many VMs are to be expected?
  - How much spare capacity do you want to allocate as room for growth (expressed as a percentage)?
• Datastore use:
  - Will expected workloads be transient or static?
  - Will expected workloads be disk-intensive?
The public cloud service definition calls for a capacity of 1,500 VMs initially and specifies 60 GB of storage per
VM. You should consider these numbers when sizing your datastores.
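The sizing formula above can be sketched as a small calculation. The 60 GB per VM and 1,500 VM figures come from the public cloud service definition; the 20% headroom and the 2 TB datastore size are hypothetical choices for illustration:

```python
# Datastore capacity estimate from the formula:
#   Avg VM size * # VMs * (1 + % headroom)
import math

def total_storage_gb(avg_vm_size_gb: float, vm_count: int, headroom: float) -> float:
    """Raw capacity needed across all datastores, before thin-provisioning savings."""
    return avg_vm_size_gb * vm_count * (1 + headroom)

def datastore_count(total_gb: float, datastore_size_gb: float) -> int:
    """Number of datastores of a given size needed to hold the estimate."""
    return math.ceil(total_gb / datastore_size_gb)

if __name__ == "__main__":
    total = total_storage_gb(60, 1500, 0.20)    # 108000.0 GB
    print(total, datastore_count(total, 2000))  # 54 datastores of ~2 TB
```

Thin provisioning (discussed later in this document) and tiered storage would change the raw-capacity figure; this only sizes the upper bound the service definition implies.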
Additionally, an NFS share must be set up and made visible to all cells for use by vCloud Director for transferring
files in a vCloud Director multi-cell environment. NFS is the required protocol for the transfer volume. Refer to
the vCloud Installation Guide for more information on where to mount this volume.
Networking
Host networking for hosts within a vCloud resource group will be configured per vSphere best practices, in the
same manner as the vCloud management cluster. In addition, the number of vNetwork Distributed Switch ports
per host should be increased from the default value of 128 to the maximum of 4096. Increasing the ports allows
vCloud Director to dynamically create port groups as necessary for the private organization networks created
later in this document. Refer to the vSphere Administrator Guide for more information on increasing this value.
Networking requirements specific to the vCloud resource groups that facilitate cloud networking include:
• Increasing the MTU size of the physical switches as well as the vNetwork Distributed Switches to at least 1524
bytes to accommodate the additional MAC header information used by vCloud Director Network Isolation
(vCD-NI) links. vCD-NI is called for by the service definition and the architecture found later in this document.
Failure to increase the MTU size could adversely affect network throughput to VMs hosted on the vCloud
infrastructure.
• Pre-configured vSphere port groups for use in connecting to external networks:
  - These can use standard vSwitch port groups, vNetwork Distributed Switch port groups, or the Cisco
Nexus 1000V.
  - In a vCloud for service providers, these pre-configured port groups will provide access to the Internet.
  - Make sure to have sufficient vSphere port groups created and made available for VM access in the vCloud.
• VLANs to support private networks:
  - Private networks are private with respect to an organization.
  - Hosts must be connected to VLAN trunk ports.
  - Private networks are backed by VLAN IDs or by vCD-NI network pools, which consume fewer VLAN IDs.
  - vNetwork Distributed Switches are required.
  - MTU size should be increased to a minimum of 1524 bytes, regardless of the number of vCD-NI networks
per VLAN.
Note that vCloud Director creates port groups automatically as needed.
Figure 4 vCloud Director Construct to vSphere Mapping: organizations (an admin organization and tenant
organizations such as Organization A) contain users, access control, provisioning policies, catalogs, and user
clouds built from organization vDCs. vApps (VMs with vApp networks) connect through organization networks
to external networks, which map to vSphere port groups or dvPort groups. Organization vDCs draw from
provider vDCs (Gold, Silver, Bronze), which map to vSphere resource pools and datastores.
VCLOUD DIRECTOR CONSTRUCT      DESCRIPTION
Organization
External Network
Organization Network
vApp Network
Refer to the service definition for guidance on the size of vSphere clusters and datastores to attach when
creating a provider vDC.
Consider:
• Expected number of VMs
• Size of VMs (CPU, RAM, disk)
Service Provider Considerations
Considerations for a service provider (public) vCloud include creating multiple provider virtual datacenters
(Prov vDCs) based on tiers of service that will be provided.
Because Prov vDCs contain only CPU, memory, and storage resources and those are common across all of the
requirements in the public cloud service definition, you should create one large Prov vDC attached to a vSphere
cluster that has sufficient capacity to run 1,500 VMs. You should also leave overhead to grow the cluster with
more resources up to the maximum of 32 hosts, should organizations need to grow in the future.
If you determine that your hosts do not have sufficient capacity to run the maximum number of VMs called out
by the public cloud service definition, you should separate the Pay-As-You-Go service tier from the Resource
Pool service tier by creating two separate Prov vDCs.
Private Cloud Considerations
Given that a provider virtual datacenter (Prov vDC) represents a vSphere cluster and resource pool, it is
commonly accepted that a single Prov vDC be established. Refer to the service definition for private cloud for
details on the Service Tier(s) called for.
Because Prov vDCs contain only CPU, memory, and storage resources, and those are common across all of the
requirements in the private cloud service definition, you should create one large Prov vDC attached to a cluster
that has sufficient capacity to run 400 VMs.
Should it be determined that existing host capacity cannot meet the requirement, or there is a desire to segment
capacity along the lines of equipment type (for example, CPU types in different Prov vDCs), then establish a
Prov vDC for Pay-As-You-Go use cases and a separate Prov vDC for the resource-reserved use cases.
Administrators assigned to the administrative organization will also be responsible for creating official template
VMs for placement in the master catalog for other organizations to use. VMs in development should be stored in
a separate development catalog that is not shared with other organizations.
As a note of reference, there is already a default System organization in the vCloud Director environment. The
administrative organization being created here is different from the built-in System organization since it can
actually create vApps and catalogs and share them.
Make sure that when you create the administrative organization you set it up to allow publishing of catalogs.
Standard Organizations
Create an organization for each tenant of the vCloud as necessary. Each of the standard organizations will be
created with the following considerations:
• Do not use LDAP
• Cannot publish catalogs
• Use system defaults for SMTP
• Use system defaults for notification settings
• Use Leases, Quotas, and Limits meeting the provider's requirements
Figure: provider networking for a public vCloud, shown for an example organization, ACME Corp. A private
internal organization network (Org Net: ACME-Private) and a private routed organization network (Org Net:
ACME-Internet, connected to the provider Internet network) both draw from a network pool within the vCloud
datacenter.
Organization Networks
Create two organization networks for each organization: one external organization network and one private
internal organization network. You can do this as one step in the vCloud Director UI wizard by selecting the
default (recommended) option when creating a new organization network. When naming an organization
network, it is a best practice to start with the organization name and a hyphen, for example, ACME-Internet.
Per the Service Definition for Public Cloud, the external network will be connected as a routed connection that
will leverage vShield Edge for firewalling and NAT to keep traffic separated from other organizations on the
same external provider network. Both the external organization network and the internal organization networks
will leverage the same vCD-NI network pool previously established. For both the internal network and the
external network, you will need to provide a range of IP addresses and associated network information. Since
both of the networks will be private networks behind a vShield Edge, you can use RFC 1918 addresses for both
static IP address pools.
The Service Definition for Public Cloud defines a limit of external connections with a maximum of 8 IP addresses,
so you should provide a range of 8 IP addresses only when creating the static IP address pool for the external
network. For the private network, you can make the static IP address pool as large as desired. Typically, a full
RFC 1918 class C is used for the private network IP pool.
The last step is to add external public IP addresses to the vShield Edge configuration on the external
organization network. By selecting Configure Services on the external organization network, you can add
8 public IP addresses that can be used by that particular organization. These IP addresses should come from
the same subnet as the network that you assigned to the system's external network static IP pool.
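Picking the 8-address block can be sketched with the standard `ipaddress` module. The subnet and starting offset here are hypothetical; in practice the addresses must come from the same subnet as the external network static IP pool, as described above:

```python
# Illustrative only: selecting a block of 8 consecutive public IPs for an
# organization's vShield Edge from the provider's external subnet.
import ipaddress

def allocate_public_ips(subnet: str, start_offset: int, count: int = 8):
    """Return `count` consecutive host addresses starting at `start_offset`."""
    hosts = list(ipaddress.ip_network(subnet).hosts())
    if start_offset + count > len(hosts):
        raise ValueError("not enough addresses left in subnet")
    return [str(ip) for ip in hosts[start_offset:start_offset + count]]

if __name__ == "__main__":
    # 198.51.100.0/24 is a documentation range; substitute your provider subnet.
    print(allocate_public_ips("198.51.100.0/24", 10))
```

A real deployment would also track which blocks are already assigned to other organizations; this sketch only shows the carve-out itself.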
Figure: provider networking for a private (enterprise) vCloud, shown for an example organization, Software
Design. An optional private internal organization network (Org Net: Internal Network) draws from a network
pool, while a private direct organization network (Org Net: External Access) connects to the corporate
backbone.
An important distinction to keep in mind is between an External Network, a function of the vCloud foundational
layer under all the private vClouds that may get established, and Organization External Networks, components
of each organization that get established at its creation time. This section is focused on the former, the
foundational object.
At least one external network is required for organization networks to connect to. The external provider
network in a private vCloud is a network outside of the scope of the cloud; that is, it is not managed by either
the vCloud layer or the vSphere layer. It is a network that already exists within the address space used by the
enterprise.
To establish this network, follow the wizard, filling in the network mask, default gateway, and other
specifications of the LAN segment as required. When building this, specify enough address space for use as
static assignments, as this is where vCloud Director draws Public IP Pool addresses from. A good starting range
is 30 addresses that do not conflict with existing addresses in use, or with ranges already committed to DHCP.
Note: The Static IP Pool address space is not used for DHCP, but it serves a similar function. This pool will be
used to provision NAT-type connectivity between the organizations and the cloud services below them.
Network Pools
A network pool is a collection of virtual machine networks that are available to be consumed by organizations to
create organization networks and vApp networks. Network traffic on each network in a pool is isolated at layer 2
from all other networks.
You will need a network in the network pool for every private organization network and external organization
network in the vCloud environment. The private cloud service definition calls for one external organization
network and the ability for the organization to create private vApp networks. Because there is no minimum
called out in the service definition for the number of vApp networks, a good number of networks to start out
with is 10 per organization. Make your network pool as large as the number of organizations times 10.
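The pool-sizing guideline above can be sketched as a one-line calculation. The default of 10 networks per organization is the starting point suggested in the text; the organization count is hypothetical:

```python
# Network pool sizing: the pool should supply at least the number of
# organizations times the networks each is expected to consume
# (organization networks plus vApp networks; 10 per organization is
# the starting point suggested in the text).

def network_pool_size(org_count: int, nets_per_org: int = 10) -> int:
    """Minimum number of isolated networks the pool must supply."""
    return org_count * nets_per_org

if __name__ == "__main__":
    print(network_pool_size(25))  # 250
```

If tenants turn out to create more vApp networks than expected, the pool can be grown later; undersizing it simply causes network creation to fail for the affected organization.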
Organization Networks
At least one organization external network is required to connect vApps created within the Organization to other
vApps and/or the networking layers beyond the Private vCloud.
To accomplish this, create an external network in the Cloud Resources section (under Manage & Monitor of the
System Administration section of the vCloud Director UI). In the wizard, be sure to select a direct connection.
This external network maps to an existing vSphere network for VM use as defined in the External Networks
section (above).
Other networking options are available, such as a routed organization external network, and could be used, but
they add complexity to the design that is normally not needed. For the purposes of this design there are no
additional network requirements. For more information on adding network options, please refer to the vCloud
Director Administrator's Guide.
Reservation. All resources assigned to the organization vDC are reserved exclusively for the organization
vDC's use.
With all of the above models, the organization can be limited to deploying a certain number of VMs, or the
limit can be set to unlimited.
The first organization vDC to be created should be an administration organization vDC for use by the
administration organization. The allocation model is set to Pay as you go so as not to take resources from other
organization vDCs until they are needed.
Subsequent organization vDCs should be created to serve the organizations previously established. In selecting
the appropriate allocation model, the service definition and the organizations' workload use cases should be
taken into consideration.
Service Provider Considerations
The organization virtual datacenter allocation model maps directly to a corresponding vCenter Chargeback
billing model:
Pay as you go. Pricing can be set per VM, and a corresponding speed of a vCPU equivalent can be specified.
Billing is unpredictable as it is tied directly to actual usage.
Allocation. Consumers are allocated a baseline set of resources but have the ability to burst by tapping into
additional resources as needed, but are typically charged at higher rates for exceeding baseline usage. This
model will result in more variable billing but allows for the possibility of more closely aligning variable
workloads to their cost.
Reservation. Consumers are allocated and billed for a fixed container of resources, regardless of usage. This
model allows for predictable billing and level of service, but consumers may pay for a premium if they do not
consume all their allocated resources.
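The behavior of the three billing models above can be compared with a toy calculation. None of these rates come from the service definition; they only illustrate how the bill responds to usage under each model:

```python
# Toy comparison of the three vCenter Chargeback billing models.
# All prices are hypothetical.

def pay_as_you_go(vms_used: int, price_per_vm: float) -> float:
    """Billed purely on actual usage."""
    return vms_used * price_per_vm

def allocation(baseline_vms: int, vms_used: int, base_price: float,
               burst_price: float) -> float:
    """Baseline charged flat; usage above baseline billed at a higher rate."""
    burst = max(0, vms_used - baseline_vms)
    return baseline_vms * base_price + burst * burst_price

def reservation(reserved_vms: int, price_per_vm: float) -> float:
    """Fixed container of resources, billed regardless of usage."""
    return reserved_vms * price_per_vm

if __name__ == "__main__":
    # 12 VMs actually used against a 10-VM baseline/reservation
    print(pay_as_you_go(12, 50.0))         # 600.0 — tracks usage exactly
    print(allocation(10, 12, 40.0, 60.0))  # 520.0 — baseline plus burst premium
    print(reservation(10, 45.0))           # 450.0 — flat, even if usage varies
```

Note how only the reservation bill is fully predictable, matching the trade-offs described above.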
These allocation models also map directly to the service tiers found in the public cloud service definition. The
Basic VDC model will use the Pay-as-you-go allocation model since instances are only charged for the resources
they consume and there is no commitment required from the consumer. The Committed VDC model will use the
Allocation Pool model since the consumer is required to commit to a certain level of usage but is also allowed to
exceed that usage. The Dedicated VDC model will use the Reservation Pool model since this service tier requires
dedicated and guaranteed resources for the consumer.
The Service Definition for Public Cloud provides detailed and descriptive guidance on how much a provider
should charge for each service tier. Chargeback functionality is provided by VMware vCenter Chargeback, which
is integrated with VMware vCloud Director. You should follow the steps in the vCloud Chargeback Models to set
up the appropriate charging profiles for each of your service tiers. You can further reference the VMware vCenter
Chargeback User's Guide for information on how to customize the individual reports generated.
For further information, refer to the vCloud Chargeback Models Implementation Guide, which details how to set
up vCloud Director and vCenter Chargeback to accommodate instance-based pricing (pay as you go),
reservation-based pricing, and allocation-based pricing.
Private Cloud Considerations
The organization vDC allocation model used depends on the type of workloads to be expected.
Pay as you go. A transient environment where workloads are repeatedly deployed and undeployed, such as a
demonstration or training environment, would be suited for this model.
Allocation. Elastic workloads that have a steady state but during certain periods of time surge due to special
processing needs would be suited for this model.
Reservation. Since a fixed set of resources are guaranteed, infrastructure-type workloads that demand a
predictable level of service would run well using this model.
When an organization vDC is created in vCloud Director, vCenter Server automatically creates child resource pools
with the appropriate resource reservations and limits, under the resource pool representing the provider vDC.
As part of creating an organization vDC, a storage limit can be set on the amount of storage to draw from the
provider vDC backing the organization vDC. By default, this setting is left to unlimited. For the purpose of this
architecture there will be no limit on storage consumed by the vApps since we are providing static values for the
individual VM storage and we are also limiting the number of VMs in an organization.
An option to enable thin provisioning allows VMs to be provisioned with thin disks to conserve disk usage.
vSphere best practices apply in the use of thin-provisioned virtual disks. This feature can save substantial
amounts of storage and has very little performance impact on workloads in the vCloud infrastructure. It is
recommended to enable this feature when creating each organization. For more information about this feature,
please refer to the vCloud Director Administrator's Guide or the VMware Knowledge Base.
vCloud Director runs as a Java process. You can search for Java processes with the process status (ps)
command to make sure that the cells are running. If a java process is listed, the cell should be running;
otherwise the command below produces no output.
# ps -ef | grep java
vcloud   27721     1  0 Aug20 ?  00:16:01 /opt/vmware/cloud-director/jre/bin/java
-Xms512M -Xmx1024M -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/opt/vmware/cloud-director/logs
-Dservicemix.home=/opt/vmware/cloud-director -Dservicemix.base=/opt/vmware/cloud-director
-Djava.util.logging.config.file=/opt/vmware/cloud-director/etc/java.util.logging.properties
-Dorg.apache.servicemix.filemonitor.configDir=/opt/vmware/cloud-director/etc
-Dorg.apache.servicemix.filemonitor.monitorDir=/opt/vmware/cloud-director/deploy
-Dorg.apache.servicemix.filemonitor.generatedJarDir=/opt/vmware/cloud-director/data/generated-bundles
-Dorg.apache.servicemix.filemonitor.scanInterval=86400000 -Dservicemix.startLocalConsole=false
-Dservicemix.startRemoteShell=false -Dorg.ops4j.pax.logging.DefaultServiceLog.level=ERROR
-Dservicemix.name=root -Djava.awt.headless=true -DVCLOUD_HOME=/opt/vmware/cloud-director
-Djava.io.tmpdir=/opt/vmware/cloud-director/tmp -Djava.library.path=/opt/vmware/cloud-director
-Djava.net.preferIPv4Stack=true -Doracle.jdbc.defaultNChar=true
-Dlog4j.configuration=file:/opt/vmware/cloud-director/etc/log4j.properties
-jar /opt/vmware/cloud-director/system/org.eclipse.osgi-3.4.3.R34x_v20081215-1030.jar
-configuration /opt/vmware/cloud-director/etc
Running a tail command on vCloud Director's log files (cell.log, vcloud-container-debug.log, and
vcloud-container-info.log), located in /opt/vmware/cloud-director/logs, provides a wealth of information about
the execution and health of each individual cell.
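For interactive monitoring, tailing the newest entries works well. The snippet below demonstrates the pattern against a scratch file so that it is self-contained; on a live cell you would point tail at the files under /opt/vmware/cloud-director/logs instead, and the sample log lines here are illustrative rather than verbatim cell output.

```shell
# On a live cell: tail -f /opt/vmware/cloud-director/logs/cell.log
# Self-contained demo with made-up log lines:
LOG=$(mktemp)
printf '%s\n' \
  'Application startup begins ...' \
  'Successfully initialized session factory' \
  'Application startup complete' >> "$LOG"
tail -n 1 "$LOG"   # prints the most recent entry
rm -f "$LOG"
```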
For example, the following error message could appear in the vcloud-container-debug.log file:
2010-08-23 15:33:34,407 | ERROR | pool-jetty-6 | LdapProviderImpl
This entry reveals a problem with LDAP, which helps narrow the problem down to a specific component
(LDAP in this case). Searching for the string ERROR in log files such as vcloud-container-debug.log and
vcloud-container-info.log shows all the errors encountered by an individual cell at execution time.
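That search is easy to script with grep. The sketch below runs against a scratch file so that it is self-contained; on a real cell you would point it at the log files under /opt/vmware/cloud-director/logs, and the sample entries are illustrative, not verbatim cell output.

```shell
# On a real cell: grep ERROR /opt/vmware/cloud-director/logs/vcloud-container-*.log
# Self-contained demo with made-up log entries:
LOG=$(mktemp)
printf '%s\n' \
  '2010-08-23 15:33:30,101 | INFO  | pool-jetty-6 | SessionManager' \
  '2010-08-23 15:33:34,407 | ERROR | pool-jetty-6 | LdapProviderImpl' \
  > "$LOG"
grep -c ERROR "$LOG"   # prints 1 (number of error entries)
grep ERROR "$LOG"      # prints the matching lines themselves
rm -f "$LOG"
```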
In a multi-cell environment this is more challenging, because you must log into different servers to
monitor the health of all of the cells. For multi-cell environments, enable syslog collection to a
centralized logging server. Refer to the vCloud Director Administrator's Guide for instructions on how to
set up syslog redirection.
Analyzing errors from the log files is also possible from the vCloud Director Administrator portal. For detailed
instructions on accessing the log files in the Administrator portal, refer to the vCloud Director
Administrator's Guide.
(ps output on the host showing the /opt/vmware/vslad/vslad agent processes running.)
For more information on monitoring the vSphere components refer to the vSphere Resource Management Guide.
Apart from the Administrator UI and the vShield Manager vSphere Client plug-in, there is currently no external
mechanism for health monitoring of vShield Manager or vShield Edge devices.
For more detailed information on the monitoring aspects of vShield Manager and vShield Edge, refer to the
vShield Manager Administrator's Guide.
Items to monitor include the leases, quotas, and limits configured in vCloud Director, as well as the
underlying vSphere resources: CPU, memory, network static IP address pools, and storage free space.
Once logged in to vCloud Director as Administrator, the UI shows the availability and current status of both
virtual and pure virtual resources. Virtual resources include vCenter Servers, resource pools, hosts,
datastores, switches, and ports; pure virtual resources include vCloud cells, provider virtual datacenters
(Prov vDCs), organization virtual datacenters (Org vDCs), external networks, organization networks, and
network pools.
4.2 Logging
Logs of vCloud components can be analyzed for troubleshooting, auditing, and additional monitoring purposes.
As with vSphere, the use of a centralized logging server is recommended. The primary methods for remote
event notification include syslog, SNMP, and MOM (Windows). Refer to the Administrator's Guide for each
respective VMware product.
vCloud Director cells can be configured to send logs to a centralized server. The following configuration
files will need to be modified:
/opt/vmware/cloud-director/etc/global.properties
/opt/vmware/cloud-director/etc/responses.properties
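As an illustration, syslog export of audit messages is controlled by properties along the following lines in global.properties. The host value is a placeholder; verify the exact property names against the vCloud Director Administrator's Guide for your release.

```
# /opt/vmware/cloud-director/etc/global.properties (excerpt)
# Send audit events to a central syslog server (placeholder address):
audit.syslog.host=syslog.example.com
audit.syslog.port=514
```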
The following table shows the primary log files for each vCloud component, and whether remote logging is
supported.
COMPONENT                  LOG LOCATION                                                                REMOTE LOGGING?
vCloud Director            %VCLOUD%/logs/*                                                             Yes
                           /var/log/messages
                           /var/log/secure
vSphere ESXi               /var/log/vmware/vslad/installer.log
                           /var/log/vmware/vslad/vslad.log
                           /var/log/vmware/esxupdate.log
                           /var/log/vmware/esxcfg-boot.log
                           /var/log/vmkernel
                           /var/log/vmware/esxcfg-firewall.log
                           /var/log/vmware/vpx/vpxa.log
vCenter Server             Windows Logs                                                                No
vCenter Chargeback Server  Windows Logs                                                                No
                           %ProgramFiles%\VMware\VMware vCenter Chargeback\apache-tomcat-6.0.18\logs
                           %ProgramFiles%\VMware\VMware vCenter Chargeback\Apache2.2\logs
                           %ProgramFiles%\VMware\VMware vCenter Chargeback\DataCollector-Embedded\logs
vShield Manager                                                                                        No
vShield Edge                                                                                           Yes
The current public cloud service definition does not call out a requirement for setting up LDAP or Active
Directory integration, so it is up to the individual provider. This is also the case for an enterprise running a private
vCloud.
User access and privileges within vCloud Director are controlled through role-based access control (RBAC). For
additional information on permissions, roles, and default settings, refer to the VMware vCloud Director
Administration Guide.
Securing Workloads
Workloads in the vCloud environment are protected from a networking perspective through network visibility
(external or internal to an organization or vApp) and connection types (direct or NAT routed).
vShield Edge devices are deployed automatically by vCloud Director to facilitate routed network connections.
vShield Edge uses MAC encapsulation for NAT routing, which prevents any Layer 2 network information from
being seen by other organizations in the environment. vShield also provides firewall services, which can be
configured to block all inbound traffic to virtual machines connected to a public access organization
network.
For service providers, the Service Definition for Public Cloud specifies how the networking options should be set
up, which in turn takes into consideration network security requirements. Each of the organization networks is
connected to the shared public network through a routed connection.
To meet the requirements of the service definition, allow inbound access to virtual machines in the
organization through up to eight public IP addresses. The organization administrator is the user responsible
for making this configuration change. Once a vApp is created, and VMs are added to it and connected to the
public access organization network, each VM obtains a private IP address from the static IP pool previously
established. The organization administrator can then configure the firewall and the NAT external IP mapping
for the newly created VM and its private IP address using the Configure Services wizard.
For a private vCloud, network routing and firewall requirements will depend on the security policies of the
enterprise as they apply to the specific workloads, organizations, and the enterprise itself.
ITEM                  VCPU   MEMORY   STORAGE   NETWORKING
vCenter Server               8 GB     20 GB     100 MB
Oracle Database              16 GB    100 GB    1 GigE
vCloud Director cell         4 GB     10 GB     1 GigE
vCenter Chargeback           8 GB     30 GB     1 GigE
vShield Manager              4 GB     512 MB    100 MB
TOTAL                 11     40 GB    161 GB*   3 GigE*
For the table above, the Oracle database is shared between the vCenter Server, the vCloud Director cells,
and the vCenter Chargeback server. Separate database users and instances should be used for each component,
in line with VMware best practices.
In addition to the storage requirements above, an NFS volume must be mounted and shared by each
vCloud Director cell to facilitate uploads of vApps from cloud consumers. The size of this volume will vary
depending on how many concurrent uploads are in progress. Once an upload completes, the vApp is moved to
permanent storage on the datastores backing the organization's catalogs, and the data no longer resides on
the NFS volume. The recommended starting size for the NFS transfer volume is 250 GB. Monitor this volume and
increase its size if you see more concurrent or larger uploads in your environment.
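As an illustration, the transfer volume could be mounted on each cell with an /etc/fstab entry along the following lines. The server name and export path are placeholders; the mount point shown is the default transfer location on a vCloud Director cell.

```
# /etc/fstab entry on each vCloud Director cell (placeholder server and export):
nfs01.example.com:/export/vcd-transfer  /opt/vmware/cloud-director/data/transfer  nfs  rw  0 0
```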
                     TOTAL PERCENTAGE   TOTAL VMS
Pay-As-You-Go        50%                750
                     37.5%              563*
                     10%                150
                     2.5%               37*
TOTAL                100%               1,500
* Note that some total VM counts are rounded up or down due to percentages.
The service definition also calls out the distribution of VMs in the environment: 45% small, 35% medium,
15% large, and 5% extra large. The table below shows the total amount of memory, CPU, storage, and networking
based on the service definition assumptions and the total VM count from the public cloud service definition.
ITEM            PERCENT   VCPUS   MEMORY     STORAGE    NETWORKING
Small           45%       675     675 GB     40.5 TB    400 GB
Medium          35%       1,050   1,050 GB   31.5 TB    300 GB
Large           15%       900     900 GB     54 TB      400 GB
Extra Large     5%        600     600 GB     4.5 TB     200 GB
TOTAL (1,500)   100%      3,225   3,225 GB   130.5 TB   1,300 GB
These raw totals are substantial. Before determining your final sizing, refer to VMware best practices for
common consolidation ratios on these resources. The table below shows what the final numbers could look like
using typical consolidation ratios seen in field deployments.
RESOURCE   BEFORE        RATIO   AFTER
CPU        3,225 vCPUs   8:1     403 vCPUs
Memory     3,225 GB      1.6:1   2,016 GB
Storage    130.5 TB      2.5:1   52 TB
Network    1,300 GB      6:1     217 GB
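The AFTER column is simply each BEFORE value divided by its consolidation ratio and rounded to the nearest whole unit. A quick sketch of the arithmetic, using the values from the table above:

```shell
# Divide each raw total by its assumed consolidation ratio and round to nearest.
awk 'BEGIN {
  printf "CPU:     %d vCPUs\n", int(3225  / 8   + 0.5);  # 403
  printf "Memory:  %d GB\n",    int(3225  / 1.6 + 0.5);  # 2016
  printf "Storage: %d TB\n",    int(130.5 / 2.5 + 0.5);  # 52
  printf "Network: %d GB\n",    int(1300  / 6   + 0.5);  # 217
}'
```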
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed
at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies. Item No: VMW_11Q4_WP_Architecting_p30_A_R2