Goal of This Document
This document describes the Vblock and provides high-level design considerations for deploying one. A Vblock is an enterprise- and service provider-class infrastructure solution using VMware vSphere/vCenter on a Cisco Unified Computing System (UCS) connected to EMC CLARiiON CX4 Series storage platforms or Symmetrix V-Max Series arrays via a Cisco MDS 9506 Multilayer Director-class SAN switch or, optionally, an MDS 9222i Multiservice Modular Fibre Channel switch.
The target audience for this document includes sales engineers, field consultants, advanced services specialists, and customers who want to deploy a virtualized infrastructure using VMware vSphere/vCenter on Cisco UCS connected to EMC V-Max and CLARiiON storage products. The document also explores potential business benefits of interest to senior executives.
This document is intended to describe:
• The role of the Vblock within a data center
• The capabilities and benefits of the Vblock
• The components of the two types of Vblock: Vblock 1 and Vblock 2
This document also highlights the collaborative efforts of three partner companies—EMC, VMware, and Cisco—working together on a common goal of providing proven technology to customers.
© 2010 Cisco EMC VMware. All rights reserved.
IT is undergoing a transformation. The current "accidental architecture" of IT increases procurement costs, management costs, and complexity while making it difficult to meet customer service level agreements. This makes IT less responsive to the business and creates the perception of IT as a cost center.

IT is now moving towards a "private cloud" model, a new model for delivering IT as a service, whether that service is provided internally (IT today), externally (by a service provider), or in combination. This new model requires a new way of thinking about both the underlying technology and the way IT is delivered for customer success.

While the need for a new IT model has never been clearer, navigating the path to that model has never been more complicated. The benefits of private clouds are capturing the collective imagination of IT architects and IT consumers in organizations of all sizes around the world, yet the realities of outdated technologies, rampant incremental approaches, and the absence of a compelling end-state architecture are impeding adoption. By harnessing the power of virtualization, private clouds place considerable business benefits within reach. These include:
• Business enablement—Increased business agility and responsiveness to changing priorities; speed of deployment and the ability to address the scale of global operations with business innovation
• Service-based business models—Ability to operate IT as a service
• Facilities optimization—Lower energy usage and better (smaller) use of data center real estate
• IT budget savings—Efficient use of resources through consolidation and simplification
• Reduction in complexity—Moving away from fragmented, "accidental architectures" to integrated, optimized technology that lowers risk, increases speed, and produces predictable outcomes
• Flexibility—Ability of IT to gain responsiveness and scalability through federation to cloud service providers while maintaining enterprise-required policy and control
In the 1980s and 1990s, Moore's Law (1965) was effectively supplanted by an unwritten rule that everyone knew but seldom lamented loudly enough: enterprise IT doubles in complexity and total cost of ownership every five years, and IT gets more pinched at every pressure point. Enterprise IT solutions over the past 30 years have become more costly to analyze and design, procure, customize, integrate, inter-operate, scale, service, and maintain, due to the inherent complexity in each of these lifecycle stages.

Within the last decade, we have seen the rise of diverse inter-networks—variously called "fabrics," "grids," and, generically, the "cloud"—constructed on commodity hardware, heavily yet selectively service-oriented, with a scale of virtualized power never before contemplated, housed in massive data centers on- and off-premises. Amid the buzzword din of onshoring and offshoring, in-, out-, and co-sourcing, blades and RAIDs, LANs and SANs, massive scale and hand-held computing, virtualization (an abiding computing capability since early mainframe days) has met secure networking (around since the DARPA era), and the two, now matured, form the basis for the next wave of computing.

It is only in the past several years that the notion of "cloud computing"—infrastructure, software, or whatever the business needs, delivered as an IT service—has been taken seriously in its own right, championed by pioneers who have proved the model's viability even if on a limited basis. With enterprise-level credibility, enabled by the best players in the IT industry, the next wave of computing will be ushered in on terms that make sense to the business savvy.
Vblock Infrastructure Packages Reference Architecture
What Constitutes a Vblock?
Vblocks are pre-engineered, tested, and validated units of IT infrastructure with a defined performance, capacity, and availability Service Level Agreement (SLA). The promise of Vblocks is to deliver IT infrastructure in a new way and to accelerate organizations' migration to private clouds.

Vblocks grew out of an idea to simplify IT infrastructure acquisition, deployment, and operations, and removing choice is part of that simplification. To that end, the current form factors deliberately limit the scope to customize or remove components; for example, substituting components is not permitted, as doing so would break the tested-and-validated principle. While Vblocks are tightly defined to meet specific performance and availability bounds, their value lies in a combination of efficiency, control, and choice. Another guiding principle of Vblocks is the ability to expand the capacity of Vblock infrastructures, as the architecture is very flexible and extensible. The following sections define the Vblock configurations, with mandatory, recommended, and optional hardware and software.
Vblock—A New Way of Delivering IT to Business
Vblock Infrastructure Packages accelerate infrastructure virtualization and private cloud adoption:
• Integrated and tested units of virtualized infrastructure
• Best-of-breed virtualization, network, compute, storage, security, and management products
• Predictable performance and operational characteristics
• Reduced risk and compliance
  – Tested and validated solution with unified support and end-to-end vendor accountability
Customer benefits include:
• Simplified expansion and scaling
• Add storage or compute capacity as required
• Can connect to existing LAN switching infrastructure
• Graceful, non-disruptive expansion
• Self-contained SAN environment with a known, standardized platform and processes
• Enables later introduction of Fibre Channel over IP (FCIP), Storage Media Encryption (SME), and so on, for multi-pod deployments
• Enables scaling to multi-Vblock and multi-data center architectures
• Multi-tenant administration, role-based security, and strong user authentication
Vblock Design Principles
A data center is a collection of pooled "Vblocks" aggregated in "Zones." Each Vblock is:
• A unit of assembly that provides a set of services, at a known level, to target consumers
• Self-contained, though it may also use external shared services
• Optimized for the classes of services it is designed to provide
• Able to be clustered to provide availability or aggregated for scalability, while each Vblock remains viable on its own
• Isolated for faults and services—the failure of a Vblock will not impact the operation of other Vblocks (service level degradation may occur unless availability or continuity services are present)

Vblocks are architectures that are pre-tested, fully integrated, and scalable. They are characterized by:
• Repeatable "units" of construction based on matched performance, operational characteristics, and discrete quantities of power, space, and cooling
• Repeatable design patterns that facilitate rapid deployment, integration, and scalability
• Design from the "facilities to the workload," scaled for the highest efficiencies in virtualization and workload re-platforming
• An extensible management and orchestration model based on industry-standard tools, APIs, and methods
• Construction to contain, manage, and mitigate failure scenarios in hardware and software environments

Vblocks offer deterministic performance and predictable architecture:
• Predictable SLA—granular SLA measurement and assurance
• Deterministic space and weight—floor tiles become the unit of capacity planning
• Power and cooling—consistent power consumption and cooling (kWh/BTUs) per unit
• Pre-determined capacity and scalability—uniform workload distribution and mobility
• Deterministic fault and security isolation

Vblock benefits include:
• Accelerating the journey to pervasive virtualization and private cloud computing while lowering risk and operating expenses
• Ensuring security and minimizing risk with certification paths
• Supporting and managing SLAs:
  – Resource metering and reporting
  – Configuration and provisioning
  – Resource utilization
Vblock is a validated platform that enables seamless extension of the environment.

Vblock O/S and application support:
– Vblock accelerates virtualization of applications by standardizing IT infrastructure and IT operations
– Broad range of O/S support
– All current applications that work in a VMware environment also work in a Vblock environment
– Vblock-validated applications include:
  – SAP
  – VMware View 4
  – Oracle RAC
  – Exchange 2007
  – SharePoint and Web applications
Vblock is a scalable platform for building solutions:
• Modular architecture enables graceful scaling of the Vblock environment
• Consistent policy enforcement and IT operational processes
• Add capacity to an existing Vblock or add more Vblocks
• Mix-and-match Vblocks to meet specific application needs

Vblock Architecture Components

Figure 1 provides a high-level overview of the components in the Vblock architecture. The network layer, represented in the figure by the Cisco Nexus 7000, is not a Vblock component, and EMC Ionix is optional and available at additional cost.

Figure 1   Vblock Architecture Components
(Management: VMware vCenter, Cisco UCS Manager, EMC Ionix UIM, SMC or NaviSphere. Compute/Network: Cisco Nexus 1000V, VMware vSphere, Cisco UCS 5108 Blade Chassis, Cisco UCS 6100 Fabric Interconnect. Network: Cisco Nexus 7000. SAN: Cisco MDS 9506. Storage: CLARiiON CX4-480 or Symmetrix V-Max.)

Vblock 1 Components

Vblock 1 is a mid-sized configuration that provides a broad range of IT capabilities for organizations of all sizes. Typical use cases include shared services such as e-mail, file and print, and virtual desktops.

Vblock 1 components:
• Compute
  – 16-32 Cisco UCS B-series blades
  – 128-256 cores
  – 960-1920 GB memory
• Network
  – Cisco Nexus 1000V
  – UCS 6100 series fabric interconnects, which carry the network and storage (IP-based) traffic from the blades to the connected SAN and LAN
• Storage
  – EMC CLARiiON CX4-480
  – 38-64 TB capacity
  – Enterprise Flash Drives (EFD), FC, and SATA drives
  – iSCSI and SAN
  – Celerra NS-G2 (optional)
  – Cisco MDS 9506 (optionally MDS 9222i)
• VMware vSphere 4.0/vCenter 4.0
• Management
  – EMC Ionix Unified Infrastructure Manager (optional)
  – VMware vCenter
  – EMC Navisphere
  – EMC PowerPath/VE
  – Cisco UCS Manager
  – Cisco Fabric Manager

Vblock 2 Components

Vblock 2 is a high-end configuration that is extensible to meet the most demanding IT needs. It is optimized for performance to support high-intensity application environments for enterprises and service providers. Typical use cases include business-critical ERP, CRM systems, and the like.

Vblock 2 components:
• Compute
  – 32-64 Cisco UCS B-series blades
  – 256-512 cores
  – 3072-7144 GB memory
• Network
  – Cisco Nexus 1000V
  – UCS 6100 series fabric interconnects, which carry the network and storage (IP-based) traffic from the blades to the connected SAN and LAN
• Storage
  – EMC Symmetrix V-Max
  – 96-146 TB capacity
  – EFD, FC, and SATA drives
  – iSCSI and SAN
  – Celerra NS-G8 (optional)
  – Cisco MDS 9506
• VMware vSphere 4.0/vCenter 4.0
• Management
  – EMC Ionix Unified Infrastructure Manager (optional)
  – VMware vCenter
  – EMC Symmetrix Management Console
  – EMC PowerPath/VE
  – Cisco UCS Manager
  – Cisco Fabric Manager

Vblock Design and Configuration Details

Figure 2 provides a high-level topological view of Vblock components.
Figure 2   High-Level Topological View of Vblock
(EMC storage, either CLARiiON CX4-480 or Symmetrix V-Max, connects to dual Cisco MDS 9506 switches (SAN A and SAN B), which connect to the UCS 6100 series fabric interconnects. The fabric interconnects uplink to the LAN, carry management and UCS cluster links, and provide 4 x 10GE fabric links to the Cisco UCS fabric extenders in each UCS blade chassis.)

A Vblock consists of a minimum and maximum quantity of components that offer balanced I/O, bandwidth, and storage capacity relative to the compute and storage arrays offered. Each Vblock is a fully redundant, autonomous system with 1+1 or N+1 redundancy by default. The minimum and maximum configurations for Vblock 1 and Vblock 2 are listed in Table 1, Table 2, and Table 3.

Table 1   Minimum and Maximum Vblock Configurations

UCS Hardware                   Type 1 Min   Type 1 Max   Type 2 Min   Type 2 Max
UCS 5100 Chassis               2            4            4            8
UCS B-200 Series Blades        16           32           32           64
UCS 6120 Fabric Interconnect   2            2            –            –
UCS 6140 Fabric Interconnect   –            –            2            2

Table 2   Software

Software                                 Type 1   Type 2
VMware vSphere 4 Enterprise Plus Suite   Yes      Yes
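The blade ranges above lend themselves to a simple sizing check. The sketch below is illustrative only (not a vendor tool); it uses the published 16-32 and 32-64 blade ranges, and assumes up to eight blades per UCS 5100 chassis, per the UCS chassis description later in this document.

```python
import math

# Illustrative check of a proposed Vblock compute configuration against
# the published blade ranges. BLADES_PER_CHASSIS reflects the UCS 5100
# chassis, which supports up to eight blade servers.

BLADE_BOUNDS = {
    "vblock1": (16, 32),
    "vblock2": (32, 64),
}
BLADES_PER_CHASSIS = 8

def validate(vblock_type: str, blades: int) -> list[str]:
    """Return a list of problems; an empty list means the count is in bounds."""
    lo, hi = BLADE_BOUNDS[vblock_type]
    problems = []
    if not lo <= blades <= hi:
        problems.append(f"{blades} blades outside the {lo}-{hi} range for {vblock_type}")
    return problems

blades = 24
print(validate("vblock1", blades))             # []
print(math.ceil(blades / BLADES_PER_CHASSIS))  # 3 (chassis needed)
```

A configuration outside the range (say, 12 blades) returns a problem string instead of an empty list, signaling a custom implementation rather than a Vblock.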
Table 3   Network Hardware

Network Hardware           Type 1 Min   Type 1 Max   Type 2 Min   Type 2 Max
Nexus 1000V                Yes          Yes          Yes          Yes
MDS 9506 Director Switch   2            2            2            2

In Vblock 1, each UCS chassis contains B-200 blades, six (6) of which have 48 GB RAM and two (2) of which have 96 GB RAM. This provides good price/performance and supports some memory-intensive applications, such as in-memory databases, within the Vblock definition. In Vblock 2, all B-200 series blades are defined with 96 GB RAM by default because of the system's performance capabilities: it is more likely to run very dense VM or memory-intensive, mission-critical applications.

The amount of RAM per blade within either a Vblock 1 or a Vblock 2 may be adjusted if you have specific requirements within the definition of a Vblock; 32, 48, 72, and 96 GB RAM options can be specified. However, a mixture of RAM densities requires careful consideration of the operational environment, introduces some variance, and has density and performance impacts that need to be ascertained.

B-250 series modules were not tested, but will be a future option. If B-250 series modules are a requirement for memory densities greater than 96 GB per module, this may be accommodated within Vblock 1 and Vblock 2 once testing and validation have been completed. Note that because the B-250 is a full-slot module, it has density and performance impacts of its own, although the 50% reduction in CPUs per slot is expected to lessen them.

Within a Vblock 1, there are no hard disks on the B-200 series blades, as all boot services and storage are provided by the SAN. If local page memory is required for vSphere, a small hard drive may be installed. In Vblock 2, each B-200 series blade has 72 GB SATA drives for page memory purposes; if required, these may be removed to reduce power and cooling overhead, increase MTBF, or save costs.

It is also acceptable for operating systems and applications to run directly on the B-200 series blades. However, if the local disk is used for main storage or operating system storage, the system is not considered a Vblock and is a custom implementation at that point; doing so will reduce IOPs and potentially disk capacity, although the performance may be acceptable for small Vblock 1 implementations.

VMware vSphere 4 Enterprise Plus licenses are mandatory within all Vblock definitions (to enable the Cisco Nexus 1000V and EMC PowerPath/VE), and per-CPU licensing is included within the defined bill of materials. The Nexus 1000V and Enterprise Plus are mandatory components due to the inherent richness they offer in terms of policy control, security, segmentation, flexibility, and instrumentation. It should be noted that other hypervisors are not supported by Vblocks and invalidate the Vblock support agreement.

The MDS 9506 switches are recommended, but may optionally be exchanged for 9509s or 9513s to scale capacity, or reduced to an MDS 9222i if less density is required. Figure 4 illustrates the interconnection of the Cisco MDS 9222i in Vblock 1 and Figure 5 illustrates the interconnection of the Cisco MDS 9506 in Vblock 2. For more information on the MDS 9222i and MDS 9506, see Storage Area Network—Cisco MDS Fibre Channel Switch.

Each 61x0 has either 4 or 8 10GE/Unified Fabric uplinks to the aggregation layer (the aggregation layer is not part of the Vblock), provided by Nexus 7000 (new build-out) or Catalyst 6500 (upgrade to an existing data center) switches, and either 4 or 8 x 4G Fibre Channel connections to the SAN aggregation provided by a pair of MDS 9506 director-class switches (SAN A and B).
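The default Vblock 1 blade mix described above (six 48 GB blades and two 96 GB blades per chassis) can be checked against the published Vblock 1 memory range with a few lines of arithmetic:

```python
# Memory arithmetic for the default Vblock 1 blade mix: six B-200 blades
# at 48 GB RAM and two at 96 GB RAM per UCS chassis.

def chassis_memory_gb(mix: dict[int, int]) -> int:
    """mix maps RAM per blade (GB) -> number of such blades in the chassis."""
    return sum(gb * count for gb, count in mix.items())

default_vblock1_mix = {48: 6, 96: 2}
per_chassis = chassis_memory_gb(default_vblock1_mix)
print(per_chassis)                       # 480
# Two to four such chassis yield 960-1920 GB, matching the Vblock 1 definition.
print(2 * per_chassis, 4 * per_chassis)  # 960 1920
```

The same helper can evaluate a custom mix (for example, all-96 GB blades) when assessing the density and performance impacts mentioned above.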
Networking

None of the current Vblock definitions contain any form of network switch except for the MDS SAN switches; there is no provision for an intermediate layer of "access" switches at this time. The Cisco Nexus 7000 is not a Vblock component. For upstream connectivity, the UCS 61x0s are connected using either 4 x 10GE/Unified Fabric (Type 1) or 8 x 10GE/Unified Fabric (Type 2) connections, which equates to an oversubscription factor of 4:1.

If you require a Celerra NAS Gateway within the Vblock 1 (recommended), there are two possibilities:
• Connect the Celerra NAS Gateway directly to the Nexus 7000 aggregation layer (the Cisco Nexus 7000 is not a Vblock component)
• Use a local Nexus 50x0 switch to provide connectivity (the Cisco Nexus 50x0 is not a Vblock component)

Figure 3 illustrates the interconnection of the EMC Celerra in Vblock.

Figure 3   EMC Celerra in Vblock
(The EMC Celerra NS-G data movers connect over 1/10 Gb Ethernet to the 61x0 fabric interconnects serving the UCS 5100 blade chassis, and to the EMC storage SAN front-end ports, cache, and physical disks.)

For more information, see NFS Datastores and Native File Services—EMC Celerra Gateway Family.

Storage

Storage capacity has been tuned to match the I/O performance of the attached UCS systems. Additionally, some analysis of the likely underlying applications has been taken into account to characterize the user or VM densities that are likely for a given Vblock. These numbers are intended to provide guidance on typical densities; obviously, they are highly variable based upon your use cases and requirements.

The MDS 9000 series are necessary components, providing Fibre Channel connectivity between the storage arrays and the UCS 61x0 series Fabric Interconnects, and ultimately the UCS B-200 series blades.
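The 4:1 oversubscription factor quoted in the Networking discussion above can be reproduced with simple arithmetic. This sketch assumes the 4 x 10GE fabric links per chassis shown in Figure 2:

```python
# Reproducing the 4:1 oversubscription factor quoted for the 61x0 uplinks
# (a sketch; assumes 4 x 10GE fabric links per chassis, as in Figure 2).

def oversubscription(chassis: int, fabric_links_per_chassis: int,
                     uplinks: int, link_gbps: int = 10) -> float:
    """Ratio of server-facing bandwidth to northbound uplink bandwidth."""
    server_facing_gbps = chassis * fabric_links_per_chassis * link_gbps
    uplink_gbps = uplinks * link_gbps
    return server_facing_gbps / uplink_gbps

# A maximum Vblock 1 (four chassis) over a 4 x 10GE uplink bundle: 4:1
print(oversubscription(chassis=4, fabric_links_per_chassis=4, uplinks=4))  # 4.0
```

At the Vblock 1 minimum (two chassis over the same uplinks) the ratio drops to 2:1, which is why oversubscription is quoted for the maximum configuration.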
Figure 4 illustrates the interconnection of the EMC CLARiiON CX4-480 in Vblock 1 and Figure 5 illustrates the interconnection of the EMC Symmetrix V-Max in Vblock 2.

Figure 4   EMC CLARiiON CX4-480 in Vblock 1
(Dual Cisco MDS 9222i switches connect the 61x0 fabric interconnects and UCS 5100 blade chassis to CLARiiON service processors A and B: 8-16 FC front-end ports, 2-4 Gb iSCSI front-end ports, 16 GB cache, 105-180 physical disks.)
Figure 5   EMC Symmetrix V-Max in Vblock 2
(Dual Cisco MDS 9506 switches connect the 61x0 fabric interconnects and UCS 5100 blade chassis to the two-engine V-Max: 8-16 FC front-end ports (FA = Fibre Channel), 4-8 Gb iSCSI front-end ports (SE = iSCSI), 64-128 GB cache, 220-355 physical disks.)

Table 4 through Table 9 contain CLARiiON CX4-480 and Symmetrix V-Max system controller I/O and bandwidth capacities, as well as installed disks and other configuration information.

Table 4   Storage

                           CLARiiON CX4-480   Symmetrix V-Max   Recommended NAS Gateway (1)
# of Drives for Minimum    110                209               NS-G2
# of Drives for Maximum    184                359               NS-G8
Capacity (TB), Minimum     61                 —                 NS-G2
Capacity (TB), Maximum     97                 221               NS-G8

(1) Optional.

The CLARiiON system is configured with a mix of Flash, Fibre Channel, and SATA drives with N+1 spares redundancy. This means that although the minimum Vblock 1 density is some 61 TB of raw storage, 42 TB is usable when system spares and overheads are factored in.

Table 5   Vblock 1—CLARiiON CX4-480 Configuration

Vblock 1            # of Drives   Raw Capacity (GB)
Minimum
  Flash (400 GB)    6             2,400
  Fiber (450 GB)    83            37,350
  SATA (1 TB)       21            21,000
  Total             110           60,750
Maximum
  Flash (400 GB)    6             2,400
  Fiber (450 GB)    151           67,950
  SATA (1 TB)       27            27,000
  Total             184           97,350
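The Table 5 drive counts can be cross-checked against the quoted raw capacity with a short script. The drive counts below are as best recoverable from Table 5; drive sizes are in GB, with SATA taken as 1 TB:

```python
# Cross-checking Table 5 drive counts against the quoted ~61 TB raw
# minimum for Vblock 1 (counts as best recoverable from Table 5).

def raw_capacity_gb(drives: dict[int, int]) -> int:
    """drives maps drive size in GB -> number of drives of that size."""
    return sum(size * count for size, count in drives.items())

vblock1_minimum = {400: 6, 450: 83, 1000: 21}  # Flash, Fiber, SATA
print(sum(vblock1_minimum.values()))           # 110 drives
print(raw_capacity_gb(vblock1_minimum))        # 60750 GB, roughly 61 TB raw
```

The same arithmetic on the maximum mix (6 flash, 151 FC, 27 SATA) yields 184 drives and 97,350 GB, matching the ~97 TB maximum.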
Table 5 also lists estimated usable capacity, IOPS, and bandwidth (Mbps) for each drive mix; for example, the six enterprise flash drives alone contribute an estimated 15,000 IOPS in the minimum configuration, with the SATA drives adding roughly 900.

Table 6   Vblock 1—CLARiiON CX4-480

CLARiiON CX4-480                       Minimum   Maximum
Storage Processors                     2         2
iSCSI front-end ports (1 Gb)           4         12
Fibre Channel front-end ports (4 Gb)   8         16
Global Write Memory (cache, GB)        16        16
IOPs/MBs                               30,000

Within the Vblock definitions, a NAS Gateway, while optional, provides NAS file services. A Vblock 1 can support NAS with the provision that primary boot services are provided across the SAN; NAS access is recommended for vSphere, and it is highly recommended that a NAS Gateway or two are deployed for vSphere, with the exact number being ascertained during the Vblock planning phases. For Vblock 2, the characteristics of the system are such that it will host mission-critical applications that require Fibre Channel access to maintain performance.

Although the NAS Gateways have been tested, they have not been performance validated for a pure NAS environment, including CIFS for applications; further testing is required to ensure that boot (PXE) as well as file access can be supported in a balanced fashion. It should also be noted that UCS does not currently support iSCSI boot of physical servers (VMs can boot on iSCSI through vSphere), so for the interim this is neither a tested nor a validated solution.

Table 7 details the corresponding Symmetrix V-Max drive mixes for Vblock 2. The minimum configuration totals 209 drives, including nine 400 GB flash drives (3,600 GB raw); the maximum configuration totals 359 drives, including 240 Fibre Channel drives (108,000 GB raw) and 76 SATA drives (76,000 GB raw). As with Table 5, the table also gives estimated usable capacity, IOPS, and bandwidth for each mix.

Table 8   Symmetrix V-Max System Supported Drive Types

Drive Type           Rotational Speed   Capacity
4 Gb/s FC            15k                146 GB, 300 GB, 450 GB
4 Gb/s FC            10k                400 GB
SATA                 7.2k               1 TB
4 Gb/s Flash (SSD)   N/A                200 GB, 400 GB

Table 9   Vblock 2—Symmetrix V-Max

Symmetrix V-Max                        Minimum        Maximum (1)
Storage Processors (Directors)         2 (1 engine)   4 (2 engines)
iSCSI front-end ports (10 Gb)          4              8
Fibre Channel front-end ports (4 Gb)   8              16
Global Memory (cache, GB)              64             128
IOPs/MBs                               48,620/6,200

(1) Additional engines are possible beyond the base Vblock configuration.

Vblock Management

Within the Vblock there are several managed elements, some of which are managed by their respective element managers. These elements offer corresponding interfaces that provide an extensible, open management framework. The individual element managers and managed components are:
• VMware vCenter Server
• Cisco UCS Manager
• Cisco Fabric Manager
• EMC Symmetrix Management Console
• EMC Navisphere Manager

A Vblock element manager, EMC Ionix Unified Infrastructure Manager (UIM) (1), manages the configuration, provisioning, and compliance of a Vblock and of multiple mixed Vblocks. This accrues several benefits, as it provides a "single pane of glass" for systems configuration and integration, along with Vblock service catalogs and Vblock self-service portal capabilities. The Vblock management framework, showing relationships and interfaces, is shown in Figure 6.

(1) Optional and available at additional cost.

EMC PowerPath/VE (PP/VE) provides several benefits in terms of performance, availability, and operations, so the base PP/VE license is mandatory for Vblocks 1 and 2.

For more information on the EMC CLARiiON storage system, see http://www.emc.com/products/family/clariion-family.htm; for the Symmetrix storage system, see http://www.emc.com/products/family/symmetrix-family.htm.
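The element-manager relationships listed above can be captured as a simple lookup table. This is illustrative only; it is not a UIM or vendor API, and the domain names are hypothetical labels:

```python
# The Vblock element-manager relationships as a simple lookup
# (illustrative only; domain keys are hypothetical labels, not an API).

ELEMENT_MANAGERS = {
    "virtualization": "VMware vCenter Server",
    "compute": "Cisco UCS Manager",
    "san": "Cisco Fabric Manager",
    "v-max storage": "EMC Symmetrix Management Console",
    "clariion storage": "EMC Navisphere Manager",
}

def element_manager(domain: str) -> str:
    """Return the element manager responsible for a managed domain."""
    return ELEMENT_MANAGERS[domain]

print(element_manager("compute"))  # Cisco UCS Manager
```

In the Vblock model, EMC Ionix UIM sits above all of these element managers as the single point of integration for configuration, provisioning, and compliance.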
Figure 6   Vblock Management
(EMC Ionix Unified Infrastructure Manager (UIM) provides unified Vblock and multi-Vblock element management—IT provisioning portal, service profile catalog, policy-based management, unified provisioning, configuration and change, configuration compliance analysis, and infrastructure recovery (DR)—and manages one or more Vblocks. Stand-alone component management is provided by Cisco UCS Manager, EMC Symmetrix Console, and VMware vCenter. Enterprise management platforms receive configuration and compliance events and availability and performance events.)

It should be noted that Ionix UIM does not provide fault, performance monitoring, billing, or software lifecycle management capabilities. However, Vblock has an open management framework that allows an organization to integrate Vblock management with its choice of management tools should it so desire.

UIM can dramatically simplify Vblock deployment by abstracting the overall provisioning aspects of the Vblock, while offering granular access to individual components for troubleshooting and fault management. Through the abstractions offered by UIM, and by using UIM as a single point of integration, Vblock integration into IT service catalogs and workflow engines is simplified.

Vblock Qualification of Existing Environments

Many organizations have extensive EMC and VMware components within their data centers. However, simply adding a UCS system to such an environment does not constitute a Vblock, for a number of reasons. Before the existing equipment can be supported as a Vblock, these questions need to be addressed:
• Do the existing arrays meet the published system capacity for Vblock 1 and Vblock 2?
• What firmware/software versions are running within the infrastructure?
• Is vSphere 4 deployed?
• Which other hypervisors are in use: Xen, Hyper-V?
• What management packages are being used?
• What other equipment is accessing the storage arrays?

A plan would then be developed to remediate the environment to meet Vblock standards.
Each of these situations would need to be assessed on its relative merits and would require extensive audits. In practical terms, it may be simpler to deploy a new Vblock, migrate workloads first, and then migrate existing storage arrays to that infrastructure over time. However, this may not be possible or desirable, depending upon the complexity of the environment.

Expanding Vblock Infrastructures

One guiding principle of Vblocks is the ability to expand the capacity of Vblock infrastructures. The Vblock architecture is very flexible and extensible, and is architected to be easily expandable from a few hundred VMs/users to several thousand. As the organization requires more capacity, the capacity of the Vblock scales either as an aggregated pool, whereby any UCS blade can access any storage disks on the SAN, or as isolated silos. As Vblocks are added, this capacity may be aggregated (clustered) as a single pool of shared capacity or segmented into smaller isolated pools. This offers an organization the ability to configure Vblock infrastructure to achieve its compliance, security, policy, and fault isolation objectives using a single, flexible infrastructure.

Using Figure 7 as a reference, the first Vblock deployment may be a single Type 1. If additional capacity is required, the initial Vblock may be extended by adding another Type 1 and clustering the two systems to aggregate their capacity; for example, it is perfectly acceptable to aggregate two Vblock 1s to provide capacity for 6,000 VMs that share common storage capacity. If a Type 2 is added, its capacity may be segmented from the Type 1 storage and compute for regulatory, security, or operational reasons.

Figure 7   Vblock Expansion
(Expansion patterns: a Vblock 1 base extended with a Vblock 1 expansion; a Vblock 1 base with a Vblock 1 storage expansion and a Vblock compute expansion; and a Vblock 2 base deployed alongside a Vblock 1 base.)

In order to scale capacity within a Vblock, the initial Vblock configuration includes an MDS 9506 with a 24-port 2/4/8G Fibre Channel module. An expansion to the original Vblock simply connects the UCS 61x0 and CLARiiON or Symmetrix V-Max to the existing MDS interfaces. If additional capacity is required on the MDS 9506 switch, additional interface modules can be installed as necessary. As long as storage capacity is added in conjunction with compute capacity to maintain the balanced performance published for the Vblock, the system does not require any additional validation.
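The aggregation example above (two clustered Vblock 1s for some 6,000 VMs) implies a rough density of about 3,000 VMs per Vblock 1. The sketch below turns that into a first-cut expansion estimate; the density figure is a rough, workload-dependent assumption derived from that example, not a guarantee:

```python
import math

# First-cut expansion planning: two aggregated Vblock 1s ~ 6,000 VMs,
# i.e. roughly 3,000 VMs per Vblock 1 (rough, workload-dependent).

VMS_PER_VBLOCK1 = 3000

def vblock1s_needed(target_vms: int) -> int:
    """Number of Vblock 1s to aggregate for a target VM count."""
    return math.ceil(target_vms / VMS_PER_VBLOCK1)

print(vblock1s_needed(4500))  # 2
print(vblock1s_needed(7000))  # 3
```

As the surrounding text notes, the actual number should be validated against the storage and I/O balance of the specific workloads during Vblock planning.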
data mining. If compute is to be scaled to be in excess of the minimum or maximum storage capacity.). multichassis platform in which all resources participate in a unified management domain. 17 . lossless.html) is a family of line-rate. low-latency. it is recommended that only similar Vblocks are pooled so as to maintain the performance and availability SLA associated with that Vblock. Vblock Infrastructure Packages Reference Architecture © 2010 Cisco EMC VMware. is designed within open industry standard technologies and aims to reduce TCO and increase business agility. systemic problems from I/O or capacity may be introduced that need careful consideration. All rights reserved. 10-Gbps Ethernet and Fibre Channel over Ethernet interconnect switches. there is no real concern as the performance limitation are either at the system controller or compute node. the Vblock environment must be carefully considered. scalable. This requires services engagement to validate and certify before being accepted as a Vblock. optimized for virtual environments.Vblock Component Details If compute or storage needs to be added asynchronously.com/en/US/partner/products/ps10276/index. UCS Components The Cisco Unified Computing System is built from the following components: • Cisco UCS 6100 Series Fabric Interconnects (http://www. x86-architecture servers. network. The system is an integrated. This is easily achieved on the MDS 9500 director switches using Virtual SAN capabilities. and storage access. If storage is increased above that specified on a per Vblock maximum. In order to satisfy the performance needs of a Vblock. so this should not be a concern.cisco. etc. the performance of the systems controller and UCS system has been balanced. The system integrates a low-latency. In most cases. parametric execution. Some applications that may require this flexibility are high-performance compute environments (CFD. lossless 10 Gigabit Ethernet unified network fabric with enterprise-class. 
Vblock Component Details

This section contains more detailed descriptions of the main components of Vblock 1 and Vblock 2:

• Compute—Unified Computing System (UCS)
• Network—Cisco Nexus 1000V
• Storage
  – EMC CLARiiON CX4 Series
  – EMC Symmetrix V-Max Storage System
  – NFS Datastores and Native File Services—EMC Celerra Gateway Family
  – Storage Area Network—Cisco MDS Fibre Channel Switch
• Virtualization

Compute—Unified Computing System (UCS)

The Cisco Unified Computing System (UCS) is a next-generation data center platform that unites compute, network, and storage access. The platform, optimized for virtual environments, is designed with open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multichassis platform in which all resources participate in a unified management domain.
• Cisco UCS 5100 Series Blade Server Chassis (http://www.cisco.com/en/US/partner/products/ps10279/index.html) supports up to eight blade servers and up to two fabric extenders in a six rack unit (RU) enclosure.
• Cisco UCS 2100 Series Fabric Extenders (http://www.cisco.com/en/US/partner/products/ps10278/index.html) bring Unified Fabric into the blade-server chassis, providing up to four 10-Gbps connections each between blade servers and the fabric interconnect.
• Cisco UCS B-Series Blade Servers (http://www.cisco.com/en/US/partner/products/ps10280/index.html) adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization.
• Cisco UCS B-Series Network Adapters offer a range of options, including adapters optimized for virtualization, compatibility with existing driver stacks, or efficient, high-performance Ethernet.
• Cisco UCS Manager (http://www.cisco.com/en/US/partner/products/ps10281/index.html) provides centralized management capabilities for the Cisco Unified Computing System.

For more information, see: http://www.cisco.com/en/US/partner/netsol/ns944/index.html.

Table 10 summarizes the various components that constitute UCS.

Table 10  UCS System Components

UCS Manager — Embedded, manages entire system
UCS Fabric Interconnect — 20-port 10Gb FCoE; 40-port 10Gb FCoE
UCS Fabric Extender — Remote line card
UCS Blade Server Chassis — Flexible bay configurations
UCS Blade Server — Industry-standard architecture
UCS Virtual Adapters — Choice of multiple adapters

Figure 8 provides an overview of the components of the Cisco UCS.

Figure 8  Cisco Unified Computing System
[Diagram: two UCS 6100 Fabric Interconnects, each running Cisco UCS Manager, with a 4 x 10Gb Ethernet port channel to the IP aggregation layer and 4 x 4Gb Fibre Channel to the SAN; two UCS 5100 blade chassis, each with two fabric extenders (one 10Gb connection from each fabric extender to each blade) and eight UCS B200 M1 blades, over a unified fabric carrying 10Gb FCoE and 10Gb Ethernet.]
Cisco UCS and UCS Manager (UCSM)

The Cisco Unified Computing System™ (UCS) is a revolutionary new architecture for blade server computing. The Cisco UCS is a next-generation data center platform that unites compute, network, storage access, and virtualization into a cohesive system designed to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain. Managed as a single system, whether it has one server or 320 servers with thousands of virtual machines, the Cisco Unified Computing System decouples scale from complexity. The Cisco Unified Computing System accelerates the delivery of new services simply, reliably, and securely through end-to-end provisioning and migration support for both virtualized and non-virtualized systems.

UCS Manager

Data centers have become complex environments with a proliferation of management points. Most current blade systems have separate power and environmental management modules. I/O devices and their configuration, network configurations, firmware, and BIOS settings all must be configured manually to move software from one server to another, adding delays and introducing the possibility of errors in the process. Blade and rack-mount server firmware must be maintained and BIOS settings must be managed for consistency. As a result, data center environments have become more difficult and costly to maintain.

Change is the norm in data centers, but the combination of x86 server architectures and the older deployment paradigm makes change difficult:

• In fixed environments in which servers run OS and application software stacks, rehosting software on different servers as needed for scaling and load management is difficult to accomplish. Typically, these environments deploy fixed spare servers already configured to meet peak workload needs, adding cost and management complexity. Most of the time these servers are either idle or highly underutilized, raising both capital and operating costs.
• Virtualization offers significant benefits; however, virtual environments inherit all the drawbacks of fixed environments, and virtualization adds more complexity. From a network perspective, with traditional access layer switches, switches in blade servers, and software switches used in virtualization software all having separate feature sets and management paradigms, the access layer has fragmented. This fragmentation makes it difficult to track virtual machine movement and to apply network policies to virtual machines to protect security, while security and performance may be less than desired.

Cisco UCS 6100 Series Fabric Interconnects

A core part of the Cisco Unified Computing System, the Cisco UCS 6100 Series Fabric Interconnects provide both network connectivity and management capabilities to all attached blades and chassis. The interconnects provide the management and communication backbone for the Cisco UCS B-Series Blades and UCS 5100 Series Blade Server Chassis. All chassis, and therefore all blades, attached to the interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6100 Series provides both the LAN and SAN connectivity for all blades within its domain. The Cisco UCS 6100 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions, supporting per-virtual machine QoS, improved visibility, and I/O connectivity for both Ethernet NICs and Fibre Channel HBAs.
Typically deployed in redundant pairs, fabric interconnects provide uniform access to both networks and storage, eliminating the barriers to deploying a fully virtualized environment. Two models are available: the 20-port Cisco UCS 6120XP and the 40-port Cisco UCS 6140XP. Available expansion module options provide Fibre Channel and/or 10 Gigabit Ethernet uplink connectivity. Both models offer key features and benefits, including:

• High-performance Unified Fabric with line-rate, low-latency, lossless 10 Gigabit Ethernet and FCoE
• Centralized unified management with Cisco UCS Manager software
• Virtual machine optimized services with support for VN-Link technologies
• Efficient cooling and serviceability with front-to-back cooling, redundant front-plug fans and power supplies, and rear cabling

For more information on the Cisco UCS 6100 Series Fabric Interconnects, see: http://www.cisco.com/en/US/products/ps10276/index.html

Vblock Configuration and Design Considerations

• Vblock 1—6120 Fabric Interconnect
  – (20) 10 Gb fixed ports to blade chassis/aggregation layer
  – (4) 4 Gb ports to SAN fabric
• Vblock 2—6140 Fabric Interconnect
  – (40) 10 Gb fixed ports to blade chassis/aggregation layer
  – (8) 4 Gb ports to SAN fabric
• Always configured in pairs for availability and load balancing
• Predictable performance:
  – 4:1 network oversubscription
  – Balanced configuration

Cisco UCS 5100 Series Blade Server Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible architecture for current and future data center needs, while helping reduce total cost of ownership. The Cisco UCS 5108 Blade Server Chassis revolutionizes the use and deployment of blade-based systems. Cisco's first blade-server chassis offering, the Cisco UCS 5108 is six rack units (6RU) high and mounts in an industry-standard 19-inch rack. A chassis can accommodate up to either eight half-slot or four full-slot Cisco UCS B-Series Blade Servers, two redundant 2104XP Fabric Extenders, eight cooling fans, and four power supply units. The cooling fans and power supplies are hot-swappable and redundant. The chassis requires only two power supplies for normal operation; the additional power supplies are for redundancy. The highly efficient (in excess of 90%) power supplies, in conjunction with the simple chassis design that incorporates front-to-back cooling, make the UCS system very reliable and energy efficient.
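The oversubscription ratios quoted in this section can be reproduced with simple bandwidth arithmetic. A sketch, assuming the maximum Vblock 1 build of four chassis and the 4 x 10Gb aggregation port channel shown in Figure 8:

```python
# Sketch: reproducing the chassis and fabric-interconnect oversubscription
# ratios. Assumes a maximum Vblock 1 build: four chassis, eight blades per
# chassis, one 10Gb link per blade per fabric, four 10Gb fabric-extender
# uplinks per chassis per fabric, and a 4 x 10Gb port channel upstream.
blades_per_chassis = 8
fex_uplinks_per_chassis = 4
chassis_count = 4
aggregation_uplinks = 4
link_gb = 10

# Chassis level: blade-facing bandwidth vs. fabric-extender uplinks.
chassis_down = blades_per_chassis * link_gb          # 80 Gb per fabric
chassis_up = fex_uplinks_per_chassis * link_gb       # 40 Gb per chassis
print(f"chassis oversubscription: {chassis_down // chassis_up}:1")   # 2:1

# Interconnect level: chassis-facing bandwidth vs. aggregation uplinks.
fabric_down = chassis_count * chassis_up             # 160 Gb
fabric_up = aggregation_uplinks * link_gb            # 40 Gb
print(f"network oversubscription: {fabric_down // fabric_up}:1")     # 4:1
```

The 2:1 chassis figure matches the "40 Gb per chassis" design point, and the 4:1 figure matches the network oversubscription quoted for the fabric interconnects.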
By incorporating unified fabric and fabric-extender technology, the Cisco Unified Computing System enables the chassis to:

• Have fewer physical components
• Require no independent management
• Be more energy efficient than traditional blade-server chassis

This simplicity eliminates the need for dedicated chassis management and blade switches, reduces cabling, and allows scalability to 40 chassis without adding complexity. The Cisco UCS 5108 Blade Server Chassis is a critical component in delivering the simplicity and IT responsiveness of the data center as part of the Cisco Unified Computing System.

For more information on the Cisco UCS 5100 Series Blade Server Chassis, see: http://www.cisco.com/en/US/products/ps10279/index.html

Vblock Configuration and Design Considerations

• Vblock 1
  – 2 to 4 blade chassis
• Vblock 2
  – 4 to 8 blade chassis
• Availability:
  – Two Fabric Extenders per chassis
  – N+1 cooling and power
• Predictable performance:
  – 2:1 oversubscription—40 Gb per chassis
  – Balanced configuration
  – Distribute vHBA and vNIC between fabrics

Cisco UCS B-200 M1 Blade Server

The Cisco UCS B-200 M1 Blade Server balances simplicity, performance, and density for production-level virtualization and other mainstream data-center workloads. The server is a half-width, two-socket blade server with substantial throughput and 50 percent more industry-standard memory than previous-generation Intel Xeon two-socket servers. Features of the Cisco UCS B-200 M1 include:

• Up to two Intel® Xeon® 5500 Series processors, which automatically and intelligently adjust server performance according to application needs, increasing performance when needed and achieving substantial energy savings when not.
• Up to 96 GB of DDR3 memory in a half-width form factor, which serves to balance memory capacity and overall density for mainstream workloads.
• One dual-port mezzanine card for up to 20 Gbps of I/O per blade. Mezzanine card options include a Cisco UCS VIC M81KR Virtual Interface Card, a converged network adapter (Emulex or QLogic compatible), or a single 10Gb Ethernet adapter.
• Two optional Small Form Factor (SFF) Serial Attached SCSI (SAS) hard drives, available in 73GB 15K RPM and 146GB 10K RPM versions, with an LSI Logic 1064e controller and integrated RAID.

For more information on the Cisco UCS B200 M1 Blade Server, see: http://www.cisco.com/en/US/products/ps10299/index.html
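The blade counts, core counts, and memory totals quoted in the Vblock configuration summaries follow from these per-blade specifications. A sketch, assuming the documented Vblock 1 per-chassis mix of six 48 GB blades and two 96 GB blades:

```python
# Sketch: deriving Vblock 1 totals from per-blade specs. Eight half-width
# blades per chassis; dual quad-core Xeon 5500 gives 8 cores per blade.
blades_per_chassis = 8
cores_per_blade = 2 * 4
chassis_memory_gb = 6 * 48 + 2 * 96   # mixed 48 GB / 96 GB blades = 480 GB

for chassis in (2, 4):                # Vblock 1 spans 2 to 4 chassis
    blades = chassis * blades_per_chassis
    print(blades, blades * cores_per_blade, chassis * chassis_memory_gb)
# -> 16 128 960
#    32 256 1920
```

The same arithmetic with 96 GB in every blade reproduces the Vblock 2 range of 32-64 blades, 256-512 cores, and 3072-6144 GB of memory.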
Vblock Configuration and Design Considerations

• Vblock 1
  – 16-32 blades
  – 128-256 cores
  – 960-1920 GB memory
  – 6 blades/chassis with 48 GB
  – 2 blades/chassis with 96 GB
• Vblock 2
  – 32-64 blades
  – 256-512 cores
  – 3072-6144 GB memory
  – 96 GB per blade
  – (2) 73 GB internal HDD
• Availability:
  – N+1 blades per chassis
  – Trunk and Port Group configuration
• One dual-port Converged Network Adapter (Unified Network)
  – vNIC
  – vHBA
• Internal connections to both Fabric Extenders
• Predictable performance:
  – Dual quad-core Xeon® 5500 Series processors
  – Balanced configuration of network, memory, and compute
• Scalability and flexibility:
  – VLANs, trunks, and port groups

Network

Cisco Nexus 1000V

The Nexus 1000V (http://www.cisco.com/en/US/products/ps9902/index.html) is a software switch on a server that delivers Cisco VN-Link (http://www.cisco.com/en/US/netsol/ns894/index.html) services to virtual machines hosted on that server. It takes advantage of the VMware vSphere framework (http://www.vmware.com/products/cisco-nexus-1000V/index.html) to offer tight integration between server and network environments and help ensure consistent, policy-based network capabilities for all servers in the data center. It allows policies to move with a virtual machine during live migration, ensuring persistent network, security, and storage compliance, resulting in improved business continuance, performance management, and security compliance. It offers flexible collaboration between the server, network, security, and storage teams while supporting various organizational boundaries and individual team autonomy. Last but not least, it aligns management of the operational environment for virtual machines and physical server connectivity in the data center, reducing the total cost of ownership (TCO) by providing operational consistency and visibility throughout the network.

For more information, see: http://www.cisco.com/en/US/products/ps9902/index.html.

Storage

Storage components include:

• EMC CLARiiON CX4 Series
• EMC Symmetrix V-Max Storage System
• NFS Datastores and Native File Services—EMC Celerra Gateway Family
• Storage Area Network—Cisco MDS Fibre Channel Switch

EMC CLARiiON and Symmetrix

• Storage configurations are application-specific
• Logical device considerations
  – LUN size: consistent size based on application requirements
  – RAID protection: RAID 1, RAID 5, RAID 6
  – LUN aggregation using meta devices, for size and performance
  – Virtual provisioning: thin pool, thin devices/fully allocated
• Simplified storage provisioning
  – Storage tiers based on drive and protection
  – Storage templates
  – Storage policies
• Local and remote replication requirements
EMC CLARiiON CX4 Series

Figure 4 illustrates the interconnection of the EMC CLARiiON CX4-480 in Vblock 1. The EMC® CLARiiON® CX4 series delivers industry-leading innovation in midrange storage with the fourth-generation CLARiiON CX storage platform. The unique combination of flexible, scalable hardware design and advanced software capabilities enables EMC CLARiiON CX4 series systems, powered by Intel Xeon processors, to meet the growing and diverse needs of today's midsize and large enterprises. Through innovative technologies like Flash drives, UltraFlex™ technology, and CLARiiON Virtual Provisioning, customers can decrease costs and energy use and optimize availability and virtualization. Delivering up to twice the performance and scale of the previous CLARiiON generation, CLARiiON CX4 is the leading midrange storage solution to meet a full range of needs, from departmental applications to data-center-class business-critical systems.

The EMC CLARiiON CX4 model 480 supports up to 256 highly available, dual-connected hosts and can scale up to 480 disk drives for a maximum capacity of 939 TB. The CX4-480 supports Flash drives for maximum performance and comes pre-configured with Fibre Channel and iSCSI connectivity, allowing customers to choose the best connectivity for their specific applications.

EMC CLARiiON CX4 Technology Advancements

Enterprise Flash Drives—EMC-customized Flash drive technology provides low latency and high throughput to break the performance barriers of traditional disk technology. EMC is the first to bring Flash drives to midrange storage and expects the technology to become mainstream over the next few years while revolutionizing networked storage. Flash drives extend the storage tiering capabilities of CLARiiON by:

• Delivering 30 times the IOPS of a 15K rpm FC drive
• Consistently delivering less than 1 ms response times
• Requiring 98 percent less energy per I/O than 15K rpm Fibre Channel drives
• Weighing 58 percent less per TB than a typical Fibre Channel drive
• Providing better reliability due to no moving parts and faster RAID rebuilds

UltraFlex technology—The CLARiiON CX4 architecture features UltraFlex technology, a combination of a modular connectivity design and unique FLARE® operating environment software capabilities that deliver:

• Dual protocol support with FC and iSCSI as the base configuration on all models
• Easy, online expansion via hot-pluggable I/O modules
• Ability to easily add and/or upgrade I/O modules to accommodate future technologies as they become available (e.g., FCoE)

CLARiiON Virtual Provisioning—Allows CLARiiON users to present an application with more capacity than is physically allocated to it in the storage system. CLARiiON Virtual Provisioning can lower total cost of ownership and offers customers these benefits:

• Efficient tiering that improves capacity utilization and optimizes tiering capabilities across all drive types
• Ease of provisioning that simplifies and accelerates processes and delivers "just-in-time" capacity allocation and flexibility
• Comprehensive monitoring, alerts, and reporting for efficient capacity planning
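The virtual-provisioning behavior described above — presenting more capacity than is physically allocated — reduces to simple pool arithmetic. A sketch with made-up numbers; in practice these figures come from the array's own monitoring and reporting tools:

```python
# Sketch: thin-pool subscription vs. physically consumed capacity.
# All figures are illustrative, not CLARiiON-reported values.
pool_physical_tb = 10.0
thin_luns_tb = [4.0, 6.0, 8.0]     # capacity presented to hosts
consumed_tb = [1.2, 2.5, 3.0]      # capacity actually written

subscribed = sum(thin_luns_tb)     # 18.0 TB promised to applications
allocated = sum(consumed_tb)       # ~6.7 TB drawn from the pool
print(f"subscription ratio: {subscribed / pool_physical_tb:.1f}x")  # 1.8x
print(f"pool utilization: {allocated / pool_physical_tb:.0%}")      # 67%
```

Monitoring the gap between subscribed and allocated capacity is what enables the "just-in-time" capacity planning noted above.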
• Supports advanced capabilities including Virtual LUN, Navisphere QoS Manager, Navisphere Analyzer, and SnapView

Multi-core Intel Xeon processors, increased memory, and 64-bit FLARE—The CX4 boasts up to twice the performance of the previous generation and provides up to 2.5 times more processing power with multi-core Intel® Xeon® processors, twice the memory, twice the capacity scale (up to 960 drives), and twice the LUNs compared with the previous-generation CLARiiON. With the CLARiiON CX4, the FLARE operating environment has also been upgraded from a 32-bit to a 64-bit environment. This enhancement enables the scalability improvements and provides the foundation for more advanced software functionality such as Virtual Provisioning.

Optional low-power SATA II drives, adaptive cooling, and drive spin-down:

• Low-power SATA II drives deliver the highest density at the lowest cost and require 96 percent less energy per terabyte than 15K rpm Fibre Channel drives, and 32 percent less than traditional 7,200 rpm SATA drives.
• Adaptive cooling is a new feature that provides improved energy efficiency by dynamically adjusting cooling and airflow within the CX4 arrays based on system activity.
• Drive spin-down allows customers to set policies at the RAID group level to place inactive drives in sleep mode. Target applications include backup-to-disk, archiving, and test and development.

For more information, see: http://www.emc.com/products/detail/hardware/clariion-cx4-model-480.htm.

EMC Symmetrix V-Max Storage System

Figure 5 illustrates the interconnection of the EMC Symmetrix V-Max in Vblock 2. The EMC Symmetrix V-Max Series provides an extensive offering of new features and functionality for the next era of high-availability virtual data centers. The Symmetrix V-Max system is EMC's high-end storage array, purpose-built to deliver infrastructure services within the next-generation data center. With advanced levels of data protection and replication, the Symmetrix V-Max system is at the forefront of enterprise storage area network (SAN) technology. Built for reliability, availability, and scalability, the Symmetrix V-Max array has the speed, capacity, and efficiency to transparently optimize service levels without compromising its ability to deliver performance on demand.

Symmetrix V-Max uses specialized engines, each of which includes two redundant director modules providing parallel access and replicated copies of all critical data. Symmetrix V-Max's Enginuity operating system provides several advanced features, such as Auto-Provisioning Groups for simplification of storage management, Virtual Provisioning for ease of use and improved capacity utilization, and Virtual LUN technology for non-disruptive mobility between storage tiers. Symmetrix Fully Automated Storage Tiering (FAST) automatically and dynamically moves data across storage tiers, so that it is in the right place at the right time, simply by pooling storage resources, defining the policy, and applying it to an application. FAST enables applications to always remain optimized by eliminating trade-offs between capacity and performance. These capabilities are of greatest value for large virtualized server deployments such as VMware Virtual Data Centers. As a result, you are able to lower costs and deliver higher service levels at the same time.
All of the industry-leading features for business continuity and disaster recovery that have been the hallmarks of EMC Symmetrix storage arrays for over a decade continue in the Symmetrix V-Max system. These are further integrated into the VMware Virtual Infrastructure for disaster recovery with EMC's custom Site Recovery Adapter for VMware's Site Recovery Manager. EMC's new PowerPath/VE support for vSphere provides optimization of usage on all available paths between virtual machines and the storage they are using, as well as proactive failover management. Combined with the rich capabilities of EMC ControlCenter and EMC's Storage Viewer for vCenter, administrators are provided with end-to-end visibility and control of their virtual data center storage resources and usage.

Introduction

The Symmetrix V-Max system (see Figure 9) is a new enterprise-class storage array built on the strategy of simple, intelligent, modular storage. It incorporates a new high-performance fabric interconnect designed to meet the performance and scalability demands for enterprise storage within the most demanding virtual data center installations. The storage array seamlessly grows from an entry-level configuration with a single, highly available Symmetrix V-Max Engine and one storage bay into the world's largest storage system, with eight engines and 10 storage bays. The largest supported configuration is shown in Figure 10. When viewing Figure 10, refer to the following list, which indicates the range of configurations supported by the Symmetrix V-Max storage array:

• 2-16 director boards
• 48-2,400 disk drives
• Up to 2 PB usable capacity
• Up to 128 Fibre Channel ports
• Up to 64 FICON ports
• Up to 64 Gig-E/iSCSI ports

Figure 9  Symmetrix V-Max System
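The supported ranges above can be framed as a quick sanity check on a proposed configuration. A sketch, using the fact (detailed later in this section) that directors come two per engine:

```python
# Sketch: validate a proposed V-Max build against the ranges listed above.
def valid_vmax_config(engines, drives):
    directors = engines * 2                  # two directors per engine
    return 2 <= directors <= 16 and 48 <= drives <= 2400

print(valid_vmax_config(1, 48))      # entry-level single engine -> True
print(valid_vmax_config(8, 2400))    # maximum configuration -> True
print(valid_vmax_config(8, 3000))    # exceeds drive maximum -> False
```

Port counts (Fibre Channel, FICON, Gig-E/iSCSI) could be checked the same way against their respective maximums.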
Figure 10  Symmetrix V-Max System Features
[Diagram: fully configured multi-bay system; panel detail omitted.]

The enterprise-class deployments in a modern data center are expected to be always available, and the design of the Symmetrix V-Max storage array enables it to meet this stringent requirement. The replicated components in every Symmetrix V-Max configuration ensure that no single point of failure can bring the system down. Each Symmetrix V-Max Engine has its own redundant power supplies, cooling fans, SPS modules, and environmental modules. Furthermore, all configuration changes, hardware and software updates, and service procedures are designed to be performed online and non-disruptively. The hardware and software architecture of the Symmetrix V-Max storage array allows capacity and performance upgrades to be performed online with no impact to production applications. This ensures that customers can consolidate without compromising availability, performance, and functionality, while leveraging true pay-as-you-grow economics for high-growth storage environments.

The Symmetrix V-Max system can include two to 16 directors inside one to eight Symmetrix V-Max Engines. Each Symmetrix V-Max Engine has two directors that can offer up to eight host access ports each, therefore allowing up to 16 host access ports per Symmetrix V-Max Engine. In fact, the connectivity between the Symmetrix V-Max array engines provides direct connections from each director to every other director, creating a redundant and highly available Virtual Matrix. Figure 11 shows a schematic representation of a single Symmetrix V-Max storage engine.
Figure 11  Symmetrix V-Max Storage Engine
[Diagram: two integrated directors (A and B), each with front-end and back-end I/O modules, host and disk ports, quad-core CPU complexes, global memory, and a Virtual Matrix interface.]

The powerful, high-availability Symmetrix V-Max Engine provides the building block for all Symmetrix V-Max systems. It includes four quad-core Intel Xeon processors, 64-128 GB of Global Memory, 8-16 ports for front-end host access or Symmetrix Remote Data Facility channels using Fibre Channel, FICON, or Gigabit Ethernet, and 16 back-end ports connecting to up to 360 storage devices using 4Gb Fibre Channel, SATA, or Enterprise Flash Drives. EMC's Virtual Matrix interconnection fabric permits the connection of up to eight Symmetrix V-Max Engines to scale out total system resources and flexibly adapt to the most demanding virtual data center requirements.

Each of the two integrated directors in a Symmetrix V-Max Engine has three main parts: the back-end director, the front-end director, and the cache memory module. The back-end director consists of two back-end I/O modules with four logical directors that connect directly into the integrated director. The front-end director consists of two front-end I/O modules with four logical directors that are located in the corresponding I/O annex slots. The front-end I/O modules are connected to the director via the midplane.

The cache memory modules are located within each integrated director, which has eight available memory slots. Memory cards range from 2 to 8 GB, consequently allowing anywhere between 16 and 64 GB per integrated director. For added redundancy, the Symmetrix V-Max system uses mirrored cache. In the case of a single-engine configuration, the memory is mirrored inside the engine across the two integrated directors; in a multi-engine setup, memory is mirrored across engines. Figure 12 shows a schematic representation of a maximum Symmetrix V-Max configuration.
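The memory figures above can be tied together with a short calculation: eight slots per director populated with one card size, doubled across the engine's two directors, then halved for the mirrored copy. A sketch; treating mirroring as a simple halving of raw capacity is a simplification:

```python
# Sketch: per-director and per-engine memory from the quoted card sizes.
slots_per_director = 8
for card_gb in (2, 4, 8):                      # supported card sizes
    per_director = slots_per_director * card_gb
    engine_raw = per_director * 2              # two integrated directors
    usable = engine_raw // 2                   # one copy held as mirror
    print(card_gb, per_director, engine_raw, usable)
# -> 2 16 32 16
#    4 32 64 32
#    8 64 128 64
```

This reproduces the 16-64 GB per-director range; the 64-128 GB Global Memory range quoted for an engine corresponds to the larger card sizes.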
Figure 12  Fully Configured Symmetrix V-Max Storage System
[Diagram: eight V-Max engines interconnected director-to-director through the Virtual Matrix.]

The Symmetrix V-Max system supports the drive types in Table 8.

For additional information about utilizing VMware Virtual Infrastructure with EMC Symmetrix storage arrays, refer to Using EMC Symmetrix Storage in VMware Virtual Infrastructure Environments—TechBook, available at: http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf.

NFS Datastores and Native File Services—EMC Celerra Gateway Family

Figure 3 illustrates the interconnection of the EMC Celerra in Vblock. Performance bottlenecks, security issues, and the high cost of data protection and management associated with deploying file servers on general-purpose operating systems become non-issues with the EMC® Celerra® Gateway family. Each Celerra Gateway product—the NS-G2 or NS-G8—is a dedicated network server optimized for file access and advanced functionality in a scalable, easy-to-use package. Take advantage of simultaneous support for NFS and CIFS protocols by letting UNIX and Microsoft® Windows® clients share files using the DART (Data Access in Real Time) operating system's sophisticated file-locking mechanisms. Celerra Gateway platforms extend the value of existing EMC storage array technologies, delivering a comprehensive, consolidated storage solution that adds IP storage (NAS) in a centrally managed information storage system, enabling you to dynamically grow, share, and cost-effectively manage file systems with multi-protocol file access. Best-in-class EMC Symmetrix® and CLARiiON® back-end array technologies, combined with Celerra's impressive I/O system architecture, deliver industry-leading availability, scalability, performance, and ease of management to your business.
The high-end features offered with the Celerra Gateway platform enable entry-level data center consolidation, resulting in lower total cost of ownership (TCO) of your server and storage assets, while enabling you to grow your IP storage environment into the hundreds of TB from a single point of management. And you can improve performance over standard NAS by simply adding MPFS to your environment without application modification.

EMC Celerra Gateway platforms combine a NAS head and SAN storage for a flexible, cost-effective implementation that maximizes the utilization of your existing resources. Celerra Gateway platforms extend the value of existing EMC storage array technologies. Each Celerra Gateway product—the NS-G2 or NS-G8—is a dedicated network server optimized for file access and advanced functionality in a scalable, easy-to-use package. This approach offers the utmost in configuration options, including:
• One-to-eight X-Blade configurations
• Flash drives, Fibre Channel, SATA, and low-power SATA drive support
• Performance/availability mode in the entry-level NS-G2
• EMC Symmetrix or CLARiiON storage
• Native integration with Symmetrix and CLARiiON replication software, providing a single replication solution for all your SAN and IP storage disaster recovery requirements

EMC Celerra Gateway Platform System Elements

The Celerra Gateway is comprised of one or more autonomous servers called X-Blades, which connect via FC SAN to a CLARiiON or Symmetrix storage array. Each X-Blade consists of an Intel-based server with redundant data paths, power supplies, multiple Gigabit Ethernet ports, and optional multiple 10 Gigabit Ethernet optical ports, designed and optimized for high-performance, multi-protocol network file access. X-Blades run EMC's Data Access in Real Time (DART) operating system and control data movement from the disks to the network. All the X-Blades in a system are managed by the Control Station (two Control Stations for HA are supported on the NS-G8), which operates out of the data path and provides a single point of configuration management and administration, as well as handling X-Blade failover and maintenance support.

Vblock Configuration and Design Considerations

EMC Celerra is a file server for the Vblock (Celerra file services may be shared across multiple Vblocks):
• Gateway configuration sharing CLARiiON or Symmetrix storage
• May be shared across multiple Vblocks
• Vblock 1: NS-G2 – 2 Datamovers
• Vblock 2: NS-G8 – 2 to 8 Datamovers

For more information on the EMC Celerra NS-G2, see: http://www.emc.com/products/detail/hardware/celerra-ns-g2.htm
For more information on the EMC Celerra NS-G8, see: http://www.emc.com/products/detail/hardware/celerra-ns-g8.htm

Storage Area Network—Cisco MDS Fibre Channel Switch

Figure 4 illustrates the interconnection of the Cisco MDS 9222i in Vblock 1 and Figure 5 illustrates the interconnection of the Cisco MDS 9506 in Vblock 2.
Cisco MDS 9222i Multiservice Modular Switch

The Cisco MDS 9222i Multiservice Modular Switch delivers state-of-the-art multiprotocol and distributed multiservice convergence, offering:
• High-performance SAN extension and disaster recovery solutions
• Intelligent fabric services such as storage media encryption
• Cost-effective multiprotocol connectivity

Its compact form factor, expansion slot modularity, and advanced capabilities make the MDS 9222i an ideal solution for departmental and remote branch-office SANs requiring director-class features at a lower cost.

Product highlights:
• High-density Fibre Channel switch, scales up to 66 Fibre Channel ports
• Integrated hardware-based virtual fabric isolation with virtual SANs (VSANs) and Fibre Channel routing with inter-VSAN routing
• Remote SAN extension with high-performance Fibre Channel over IP (FCIP)
• Long distance over Fibre Channel with extended buffer-to-buffer credits
• Multiprotocol and mainframe support (Fibre Channel, FCIP, Small Computer System Interface over IP [iSCSI], and IBM Fiber Connection [FICON])
• IPv6 capable
• Platform for intelligent fabric applications such as storage media encryption
• In-Service Software Upgrade (ISSU)
• Comprehensive network security framework
• High-performance intelligent application with the combination of 16-port storage services node

For more information on the Cisco MDS 9222i, see: http://www.cisco.com/en/US/products/ps8420/index.html
For more information on the Cisco MDS 9200 Series Multilayer Switches, see: http://www.cisco.com/en/US/products/ps5988/index.html

Cisco MDS 9506 Multilayer Director

The Cisco® MDS 9506 Multilayer Director provides industry-leading availability, scalability, security, and management, and allows you to deploy high-performance storage-area networks (SANs) with the lowest total cost of ownership (TCO). Layering a rich set of intelligent features onto a high-performance, protocol-independent switch fabric, the Cisco MDS 9506 addresses the stringent requirements of large data center storage environments: uncompromising high availability, security, scalability, ease of management, and transparent integration of new technologies. Compatible with first, second, and third generation Cisco MDS 9000 Family switching modules, the Cisco MDS 9506 provides advanced functionality and unparalleled investment protection, allowing the use of any Cisco MDS 9000 Family switching module in this compact system. Supporting up to 192 Fibre Channel ports in a single chassis, and up to 1152 Fibre Channel ports in a single rack, the Cisco MDS 9506 is designed to meet the requirements of large data center storage environments.

The Cisco MDS 9506 offers the following benefits:
• Scalability and Availability—The Cisco MDS 9506 combines nondisruptive software upgrades, stateful process restart/failover, and full redundancy of all major components for best-in-class availability.
• Compact design—The Cisco MDS 9506 provides high port density in a small footprint, saving valuable data center floor space. The seven-rack-unit chassis allows up to six Cisco MDS 9506 Multilayer Directors in a standard rack, maximizing the number of available Fibre Channel ports.
• 1/2/4/8-Gbps and 10-Gbps Fibre Channel—Supports new 8-Gbps as well as existing 10-Gbps, 4-Gbps, and 2-Gbps MDS Fibre Channel switching modules.
• Multiprotocol—The multilayer architecture of the Cisco MDS 9000 Family enables a consistent feature set over a protocol-independent switch fabric. The Cisco MDS 9506 transparently integrates Fibre Channel, IBM Fiber Connection (FICON), Small Computer System Interface over IP (iSCSI), and Fibre Channel over IP (FCIP) in one system.
• Intelligent network services—Provides integrated support for VSAN technology, access control lists (ACLs) for hardware-based intelligent frame processing, and advanced traffic-management features such as Fibre Channel Congestion Control (FCC) and fabric-wide quality of service (QoS) to enable migration from SAN islands to enterprise-wide storage networks.
• Integrated hardware-based VSANs and Inter-VSAN Routing (IVR)—Enables deployment of large-scale multi-site and heterogeneous SAN topologies. Integration into port-level hardware allows any port within a system or fabric to be partitioned into any VSAN. It includes VSAN technology for hardware-enforced, isolated environments within a single physical fabric for secure sharing of physical infrastructure. Integrated hardware-based inter-VSAN routing provides line-rate routing between any ports within a system or fabric without the need for external routing appliances.
• Advanced FICON services—Supports 1/2/4-Gbps FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port ID virtualization for mainframe Linux partitions. CUP (Control Unit Port) support enables in-band management of Cisco MDS 9000 Family switches from the mainframe management console.
• Integrated Cisco Storage Media Encryption (SME) as a distributed fabric service—Supported on the Cisco MDS 18/4-Port Multiservice Module, Cisco SME encrypts data at rest on heterogeneous tape drives and virtual tape libraries (VTLs) in a SAN environment using secure IEEE standard Advanced Encryption Standard (AES) 256-bit algorithms. Cisco SME provisioning is integrated into the Cisco Fabric Manager; no additional software is required. Cisco SME key management can be provided by either the Cisco Key Management Center (KMC) or RSA Key Manager for the Datacenter from RSA, the Security Division of EMC. The Cisco MDS 18/4-Port Multiservice Module helps ensure ease of deployment, scalability, and high availability by using innovative technology to transparently offer Cisco SME capabilities to any device connected to the fabric without the need for reconfiguration or rewiring.
• Comprehensive security framework—Supports RADIUS and TACACS+, Fibre Channel Security Protocol (FC-SP), Secure File Transfer Protocol (SFTP), Secure Shell (SSH) Protocol, and Simple Network Management Protocol Version 3 (SNMPv3) implementing Advanced Encryption Standard (AES), as well as VSANs, hardware-enforced zoning, ACLs, and per-VSAN role-based access control.
• Sophisticated diagnostics—Provides intelligent diagnostics, protocol decoding, and network analysis tools, as well as integrated Call Home capability for added reliability, faster problem resolution, and reduced service costs.
• Open platform for intelligent storage applications—Provides the intelligent services necessary for hosting and/or accelerating storage applications such as network-hosted volume management, data migration, and backup.
• Flexibility and investment protection—Supports a mix of new, second, and first generation Cisco MDS 9000 Family modules, providing forward and backward compatibility and unparalleled investment protection.
• TCO driven design—The Cisco MDS 9506 offers advanced management tools for overall lowest TCO.
• Unified SAN management—The Cisco MDS 9000 Family includes built-in storage network management, with all features available through a command-line interface (CLI) or Cisco Fabric Manager, a centralized management tool that simplifies management of multiple switches and fabrics. Integration with third-party storage management platforms allows seamless interaction with existing management tools, further decreasing TCO.
• Cisco TrustSec Fibre Channel Link Encryption—Delivers transparent, hardware-based, line-rate encryption of Fibre Channel data between any Cisco MDS 9000 Family 8-Gbps modules.

For more information on the Cisco MDS 9506 Multilayer Director, see: http://www.cisco.com/en/US/products/ps5990/index.html
For more information on the Cisco MDS 9500 Series Multilayer Directors, see: http://www.cisco.com/en/US/products/hw/ps4159/ps4358/ps5395/index.html

Vblock 1 SAN Configuration
• (2) Cisco MDS 9506 (optionally 2 Cisco MDS 9222i)
– (8) 4 Gb N-ports to each Fabric Interconnect
– (4-8) 4 Gb N-ports to each CLARiiON Storage Processor

Vblock 2 SAN Configuration
• (2) Cisco MDS 9506
– (8) 4 Gb N-ports to each Fabric Interconnect
– (8-16) 4 Gb N-ports to each V-Max Storage Processor

Storage Design Considerations
• Balanced configuration
– Capacity, workload (IOPs/MBs)
• Availability
– Enterprise-class storage
– RAID protection
– Extensive remote replication capabilities using MirrorView and SRDF
• Predictable performance
– Large cache
– Tiered storage including Enterprise Flash Drives
• Ease of deployment and management
– Template-based provisioning
– Wizards
– Fully Automated Storage Tiering (FAST)
– Virtual Provisioning
– Local replication capability using SnapView and TimeFinder
Virtualization

VMware vSphere/vCenter
• VMware vSphere 4 is the virtualized infrastructure for the Vblock
– Virtualizes all application servers
– Provides VMware High Availability (HA) and Dynamic Resource Scheduling (DRS)
• Templates enable rapid provisioning

Vblock Core to VM Ratios

Table 11 Vblock Core to VM Ratios

                                                     # of VMs Based on Minimum   # of VMs Based on Maximum
                                                     UCS Configuration           UCS Configuration
Vblock 1  1:4 Core to VM Ratio (1920 MB memory/VM)   512                         1024
Vblock 1  1:16 Core to VM Ratio (480 MB memory/VM)   2048                        4096
Vblock 2  1:4 Core to VM Ratio (3072 MB memory/VM)   1024                        2048
Vblock 2  1:16 Core to VM Ratio (768 MB memory/VM)   4096                        8192

VMware vSphere and vCenter Server

VMware vSphere and vCenter Server offer the highest levels of availability and responsiveness for all applications and services. With VMware vSphere, the industry's most reliable platform for data center virtualization, you can optimize IT service delivery and deliver the highest levels of application service agreements with the lowest total cost per application workload by decoupling your business-critical applications from the underlying hardware for unprecedented flexibility and reliability (http://www.vmware.com/products/vsphere/).

VMware vCenter Server, formerly VMware VirtualCenter, centrally manages VMware vSphere environments, allowing IT administrators dramatically improved control over the virtual environment compared to other management platforms. VMware vCenter Server:
• Provides centralized control and visibility at every level of the virtual infrastructure.
• Unlocks the power of vSphere through proactive management.
• Is a scalable and extensible management platform with a broad partner ecosystem.

VMware vCenter Server provides a scalable and extensible platform that forms the foundation for virtualization management (http://www.vmware.com/solutions/virtualization-management/). For more information, see http://www.vmware.com/products/.
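The VM counts in Table 11 follow directly from the UCS core counts: the VM count is the total core count times the consolidation ratio. A minimal sketch of that arithmetic (the per-blade core count of 8 — two quad-core sockets per B200 M1 — and the chassis counts per configuration are assumptions drawn from the component descriptions elsewhere in this document, and this is an illustrative check, not a sizing tool):

```python
# Illustrative check of the core-to-VM arithmetic behind Table 11.
# Assumed: each UCS B200 M1 blade has 2 quad-core sockets (8 cores),
# and each 5108 chassis holds 8 blades.
CORES_PER_BLADE = 8
BLADES_PER_CHASSIS = 8

def vm_count(chassis, vms_per_core):
    """VMs supported at a given core-to-VM consolidation ratio."""
    cores = chassis * BLADES_PER_CHASSIS * CORES_PER_BLADE
    return cores * vms_per_core

# (chassis in minimum config, chassis in maximum config)
configs = {"Vblock 1": (2, 4), "Vblock 2": (4, 8)}

for name, (min_ch, max_ch) in configs.items():
    for ratio in (4, 16):  # 1:4 and 1:16 core-to-VM ratios
        print(f"{name} 1:{ratio} -> min {vm_count(min_ch, ratio)}, "
              f"max {vm_count(max_ch, ratio)}")
```

Under these assumptions the computed counts (512 through 8192) reproduce the figures in Table 11.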
Physical Architecture

Power, Cooling, and Space Requirements

Table 12 and Table 13 contain power and cooling specifications. Table 12 lists, for each Vblock component, the power draw (kVA), cooling load (BTU/hr), rack space (RU), weight, input voltage (VAC), frequency (Hz), circuit breaker rating (Amps), recommended AC power connections, and power connector type.

Table 12 Vblock Power and Cooling Specifications—1 (values recovered from the original table)

Cisco UCS Chassis (5108 Chassis and 8 x B200 M1 Blades): 12140 BTU/hr; 6 RU; 100-240 VAC; 50-60 Hz; 2 connections per chassis
Cisco UCS 6120XP Fabric Interconnect: 1536 BTU/hr; 1 RU; 100-240 VAC; 50-60 Hz
Cisco UCS 6140XP Fabric Interconnect: 2 RU; 100-240 VAC; 50-60 Hz
Cisco MDS 9222i: 2884 BTU/hr; 3 RU
Cisco MDS 9506: 12000 BTU/hr; 7 RU
Cisco Catalyst 6504-E: 12000 BTU/hr; 5 RU
Cisco ACE 4710 Load Balancer: 1560 BTU/hr; 1 RU
CLARiiON CX4-480 Service Processor (Pair) and Standby Power Supplies: 3 RU
CLARiiON CX4-480 DAE: 1450 BTU/hr; 3 RU
Symmetrix V-Max SE (1 System and 1 Storage Bay): 10.4 kVA; 34000 BTU/hr; 88 RU (2 bays); 4008 lb (1818 kg); 200-240 VAC; 50-60 Hz; 2 per bay; CS8365C (3-phase Delta-4 wire)
Symmetrix V-Max System Bay (4 Engine): 4.1 kVA; 13700 BTU/hr; 44 RU; 1830 lb (830 kg); 200-240 VAC; 50-60 Hz; 2 per bay; CS8365C (3-phase Delta-4 wire)
Symmetrix V-Max System Bay (8 Engine): 7.1 kVA; 19800 BTU/hr; 44 RU; 2144 lb (972.3 kg); 200-240 VAC; 50-60 Hz; 2 per bay; CS8365C (3-phase Delta-4 wire)
Symmetrix V-Max Storage Bay: 26300 BTU/hr; 44 RU; 2774 lb (1258.5 kg); 200-240 VAC; 50-60 Hz; 2 per bay; CS8365C (3-phase Delta-4 wire)
Celerra NS-G2: 0.355 kVA; 990 BTU/hr; 2 RU
Celerra NS-G8-4X-Blade System: 5500 BTU/hr; 9 RU; 228 lb (104 kg)
Celerra NS-G8-6X-Blade System: 8100 BTU/hr; 13 RU; 333 lb (151 kg)
Celerra NS-G8-8X-Blade System: 10600 BTU/hr; 17 RU; 438 lb (199 kg)

Configuration quantities (Table 12):
• Vblock 1 minimum: 2 UCS 6120XP Fabric Interconnects, 2 UCS Chassis, 2 MDS 9506, 2 Catalyst 6504, 2 ACE Load Balancers, 1 CLARiiON CX4-480 processor pair, 7 DAEs, 1 Celerra NS-G2
• Vblock 1 maximum: as above, with 4 UCS Chassis and 12 DAEs
• Vblock 2 minimum: 4 UCS Chassis; Vblock 2 maximum: 8 UCS Chassis (with UCS 6140XP Fabric Interconnects, MDS 9506, Catalyst 6504, ACE Load Balancers, V-Max storage, and Celerra NS-G8)

Notes:
1. 1 Rack Unit (RU) = 1.75 inches
2. 1 kilowatt = 3412.14163 BTU/hr
3. 1 Std Rack = 45 RU
4. 1 CLARiiON Rack = 40 RU
5. 1 V-Max Bay = 44 RU
Table 13 Vblock Power and Cooling Specifications—2 (per-rack; values recovered from the original table)

For each rack type—UCS Rack (2 x 6120XP and 2 x 5108 Chassis), UCS Rack (2 x 6120XP and 4 x 5108 Chassis), 2 x Cisco MDS 9506 and 2 x Catalyst 6504 Rack, V-Max System Bay, V-Max Storage Bay, CX4-480 (System), CX4-480 (Rack Enclosure Only), CX4-480 (DAE), Celerra NS-G2, and Celerra NS-G8—the table records the startup power surge, maximum and average power draw, maximum heat load, rack suitability, whether the rack can be powered from above or below (the current Symmetrix cannot), footprint of any non-rack devices, SAN and IP fiber (LC-LC) connectivity, any non-fiber (Ethernet) connectivity, recommended power strips (for example, two ServerTech CW-24V2L30M CDUs with 24 IEC320/C13 plugs, or L6-30A power strips with C13 receptacles fed from NEMA L6-30P), and colocation requirements (for example, disk devices on OM3 fiber must be attached or within 100 meters). Heat loads recovered from the table include:
• UCS Rack (2 x 6120XP and 2 x 5108 Chassis): 27352 BTU/hr
• UCS Rack (2 x 6120XP and 4 x 5108 Chassis): 51632 BTU/hr
• 2 x Cisco MDS 9506 and 2 x Catalyst 6504 Rack: 48000 BTU/hr
• V-Max System Bay: 13700 BTU/hr
• V-Max Storage Bay: 21800 BTU/hr
• CX4-480 (System): 6 kVA; 32175 BTU/hr
• CX4-480 (DAE): 1440 BTU/hr each; 3 kVA (7 DAEs) to 5.28 kVA (12 DAEs)
• Celerra NS-G2: 0.355 kVA; 990 BTU/hr
• Celerra NS-G8: 5500 BTU/hr to 10600 BTU/hr

A summary of power, cooling, and space requirements for Vblock 1 and Vblock 2 is shown in Table 14.

Table 14 Summary of Power, Cooling, and Space Requirements
• Vblock 1 Minimum Configuration: 22 kVA; Cooling 78132 BTU/hr; 66 Rack Units (RU); 2 Racks
• Vblock 1 Maximum Configuration: 29 kVA; Cooling 109662 BTU/hr; 69 Rack Units (RU); 3 Racks
• Vblock 2 Minimum Configuration: 32 kVA; Cooling 121536 BTU/hr; 70 Rack Units (RU); 5 Racks
• Vblock 2 Maximum Configuration: 45 kVA; Cooling 170096 BTU/hr; 112 Rack Units (RU); 5 Racks

Rack Layouts

Rack layouts are provided for:
• Vblock 1 Minimum Configuration—Rack Layout Front View
• Vblock 1 Maximum Configuration—Rack Layout Front View
• Vblock 2 Minimum Configuration—Rack Layout Front View
• Vblock 2 Maximum Configuration—Rack Layout Front View
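The conversion stated in the notes to Table 12 (1 kilowatt = 3412.14 BTU/hr) is the basis for estimating a cooling load from an electrical load. A hedged sketch of that estimate, using an assumed 0.95 power factor to go from kVA to kW (the vendor-published BTU/hr figures in Tables 12 through 14 were measured by the vendors and will not match this simple estimate exactly):

```python
# Rough cooling-load estimate from an electrical load in kVA.
# The 0.95 power factor is an assumption for illustration; the
# vendor-published BTU/hr values in Tables 12-14 take precedence.
BTU_PER_KW = 3412.14  # conversion from the notes to Table 12

def cooling_btu_per_hr(kva, power_factor=0.95):
    """Estimate the heat output (BTU/hr) of equipment drawing `kva`."""
    return kva * power_factor * BTU_PER_KW

# Example: the 22 kVA Vblock 1 minimum configuration
print(round(cooling_btu_per_hr(22)))  # -> 71314
```

The gap between this estimate and the published 78132 BTU/hr for the same configuration illustrates why the vendor figures, not the rule-of-thumb conversion, should drive facilities planning.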
Vblock 1 Minimum Configuration—Rack Layout Front View
• 2 UCS Chassis, 8 Blades each, 6 * 48 GB RAM + 2 * 96 GB RAM (Total 480 GB RAM), 2 * 73 GB Internal HDD
• 2 UCS 6120 Fabric Interconnect, 20 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel (optionally 2 MDS 9222i, 18 Ports 4 Gb Fibre Channel)
• CLARiiON CX4-480, 2 Controllers
• Celerra NS-G2, 2 Data Movers (optional)

Figure 13 Vblock 1 Minimum Configuration—Rack Layout Front View (two 42U racks: UCS 6120 Fabric Interconnects, UCS 5108 Chassis, and MDS; CLARiiON CX4-480 with DAEs and NS-G2)
Vblock 1 Maximum Configuration—Rack Layout Front View
• 4 UCS Chassis, 8 Blades, 6 * 48 GB RAM + 2 * 96 GB RAM (Total 1920 GB RAM), 2 * 73 GB Internal HDD
• 2 UCS 6120 Fabric Interconnect, 20 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel (optionally 2 MDS 9222i, 18 Ports 4 Gb Fibre Channel)
• CLARiiON CX4-480, 2 Controllers
• Celerra NS-G2, 2 Data Movers (optional)

Figure 14 Vblock 1 Maximum Configuration—Rack Layout Front View (three 42U racks: UCS 6120 Fabric Interconnects, UCS 5108 Chassis, MDS, NS-G2, and CLARiiON CX4-480 with DAEs)
Vblock 2 Minimum Configuration—Rack Layout Front View
• 4 UCS Chassis, 8 Blades, 96 GB RAM, 2 * 73 GB Disk Drives
• 2 UCS 6140 Fabric Interconnect, 40 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel
• V-Max, 2 Engines
• Celerra NS-G8, 2 to 8 Data Movers (optional)

Figure 15 Vblock 2 Minimum Configuration—Rack Layout Front View (five 42U racks: 2 UCS 6140 and 4 UCS 5108 Chassis; 2 MDS; 1 NAS Gateway; System Bay with 2 Engines; 2 Storage Bays with 13 DAEs each)
Vblock 2 Maximum Configuration—Rack Layout Front View
• 4 UCS Chassis, 8 Blades, 96 GB RAM, 2 * 73 GB Disk Drives
• 2 UCS 6140 Fabric Interconnect, 40 Fixed Ports, 8 Ports 4 Gb Fibre Channel
• 2 MDS 9506, 24 Ports 4 Gb Fibre Channel
• V-Max, 2 Engines
• Celerra NS-G8, 2 to 8 Data Movers (optional)

Figure 16 Vblock 2 Maximum Configuration—Rack Layout Front View (42U racks: UCS 6140 Fabric Interconnects and UCS 5108 Chassis; 1 NAS Gateway; 2 MDS; System Bay with 2 Engines; 4 Racks of Storage Bays with 52 DAEs)

References
• VMware View Reference Architecture: http://www.vmware.com/resources/techresources/1084
• VMware View 3.0: http://www.vmware.com/products/view/
• Cisco UCS: http://www.cisco.com/go/unifiedcomputing
• Cisco Data Center Solutions: http://www.cisco.com/go/datacenter
• Cisco Validated Designs: http://www.cisco.com/go/designzone
• EMC Symmetrix V-Max System: http://www.emc.com/products/detail/hardware/symmetrix-v-max.htm
• EMC Symmetrix V-Max System and VMware Virtual Infrastructure: http://www.emc.com/collateral/hardware/white-papers/h6209-symmetrix-v-max-vmware-virtual-infrastructure-wp.pdf
• Using EMC Symmetrix Storage in VMware Virtual Infrastructure: http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf

Cisco Systems, Inc., 170 West Tasman Drive, San Jose, CA 95134 USA. Tel: 408-526-4000 or 800-553-6387 (NETS). Fax: 408-527-0883. www.cisco.com
Copyright © 2010 Cisco Systems, Inc. All rights reserved. Cisco, the Cisco logo, and Cisco Systems are registered trademarks or trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document or Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company.

EMC Corporation, 176 South Street, Hopkinton, MA 01748 USA. Tel: 508-435-1000. www.emc.com
Copyright © 2010 EMC Corporation. All rights reserved. EMC, EMC2, Celerra, CLARiiON, Ionix, Navisphere, PowerPath, Symmetrix, V-Max, Virtual Matrix, and where information lives are registered trademarks or trademarks of EMC Corporation in the United States or other countries. All other trademarks used herein are the property of their respective owners. Published in the USA. P/N h6935

VMware, Inc., 3401 Hillview Ave, Palo Alto, CA 94304 USA. Tel: 650-427-5000 or 877-486-9273. Fax: 650-427-5001. www.vmware.com
Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.