At this point, you may be wondering how this magic can possibly
happen. The SDI needs a unified API to programmatically manage all
the compute, storage, and networking resources in the pools. In addition,
the management and orchestration software needs a detailed
understanding of the physical properties of all the hardware. And, of
course, the hardware needs to support programmable configuration and
be flexible enough to handle a wide range of scenarios. For example,
gear in the networking pool must support the dynamic configuration of
ports, bandwidth, and security policies.
Figure 1 below shows the front view of an HPE Synergy 12000 Frame
configured to illustrate the variety of different modules that are
available. Up to 12 two-socket blade servers, or a combination of data
storage modules and servers, can be installed in a single frame. The
various modules, numbered (1) - (8), are discussed in more detail in later
sections.
Figure 1. HPE Synergy 12000 Frame - Front View
The rear view of the HPE Synergy 12000 Frame is shown below in
Figure 2. Here, you can see examples of the various types of fabric
modules that are available. A frame has 6 interconnect bays that can be
configured for up to 3 fabrics (e.g., SAS, Ethernet, and Fibre Channel).
Each fabric typically is supported by a pair of redundant interconnect
modules.
Figure 2. HPE Synergy 12000 Frame - Rear View
In addition to the fabric modules, the rear view shows 10 fans and 6
power supplies. All 10 fans are required, along with at least 2 power
supplies. The exact number of power supplies depends on the
requirements of the installed modules, which can be determined by
consulting the HPE Power Advisor Online.
On the left side of Figure 2, notice that there are two frame link
modules. These provide management uplinks and support the multi-
frame ring architecture. Figure 3 shows a closeup view of a frame link
module.
Figure 3. HPE Synergy Frame Link Module
Each frame link module contains a MGMT port for connecting to the
management network and a LINK port for connecting multiple frames in
a ring topology. For redundancy, each frame contains two of these
modules.
Multi-Frame Architecture
A single frame can be managed individually simply by connecting the
MGMT port of both of the redundant frame link modules to the
management network. In this case, the LINK port is not used. Multiple
frames can be managed as a system in this way, with each individual
frame connected to the same subnet. However, this
configuration is discouraged because it does not take advantage of
important HPE Synergy management features such as uplink
consolidation and automatic discovery.
Figure 4, below, illustrates the preferred multi-frame architecture,
called a ring topology. The diagram shows 3 frames within a single rack
linked in a ring.
The frames are connected in a ring using the LINK ports on the frame
link modules. As illustrated by the blue lines in Figure 4, Frame 1
connects to Frame 2 which connects to Frame 3 which connects back to
Frame 1. Connectivity to the management network (green lines) is
provided by a single uplink using the MGMT port on Frame 1 with a
redundant uplink provided by Frame 3. Connectivity with the
management network and between the frames is managed automatically
by the frame link module and does not require user configuration.
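A minimal way to picture the ring wiring: each frame's LINK port connects to the next frame, and the chain closes back on the first frame. The helper below is illustrative only (it is not an HPE tool); it checks that a list of LINK-port connections forms a single closed ring:

```python
def is_single_ring(links: list[tuple[int, int]]) -> bool:
    """Return True if the LINK-port connections form one closed ring."""
    if not links:
        return False
    # Build an adjacency map; in a ring every frame has exactly two neighbours.
    neighbours: dict[int, set[int]] = {}
    for a, b in links:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    if any(len(n) != 2 for n in neighbours.values()):
        return False
    # Walk the ring; a single ring visits every frame exactly once.
    start = next(iter(neighbours))
    seen, prev, cur = {start}, None, start
    while True:
        nxt = next(n for n in neighbours[cur] if n != prev)
        if nxt == start:
            return len(seen) == len(neighbours)
        if nxt in seen:
            return False
        seen.add(nxt)
        prev, cur = cur, nxt

# The three-frame ring from Figure 4: 1 -> 2 -> 3 -> back to 1.
assert is_single_ring([(1, 2), (2, 3), (3, 1)])
```

In the real system this validation is unnecessary for the operator, since the frame link modules discover and manage the ring automatically; the sketch just makes the topology concrete.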
A Unified Interface
Designed to unify these disparate tools, HPE OneView was built from
the ground up to provide a single platform for infrastructure
management. While reducing complexity, HPE OneView improves
productivity and increases flexibility. With a unified interface, it
becomes easier for an individual staff member to manage compute,
storage, and network resources, rather than being siloed into managing a
single resource type. Often referred to as a “single pane of glass”, the
unified interface enables administrators to automate management and
maintenance tasks across all resources in the datacenter.
Dashboard
From the HPE OneView Global Dashboard, you can manage thousands
of devices from a single user interface. You can view traffic across the
entire network, or drill down to check on the IOPS being consumed by a
single VM. The dashboard provides a unified view of the health of
servers, profiles, storage systems, enclosures, etc. - at the logical and
physical level. In addition, HPE OneView proactively monitors the
health of your entire infrastructure, and alerts you to problems before
they result in downtime.
When the HPE Synergy Composer provides a specific server profile, the
HPE Image Streamer can read the specifications and create bootable
images for that server profile from the golden image. These bootable
images are then streamed onto compute modules to compose the
specified servers.
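Conceptually, the Image Streamer's job is a merge: start from the golden image and apply the per-profile specifics. The function and field names below are hypothetical, not the actual HPE Image Streamer API; this is only a sketch of the flow just described:

```python
# Illustrative sketch only: the data layout and field names are
# assumptions, not the real Image Streamer data model.
def build_bootable_image(golden_image: dict, profile: dict) -> dict:
    """Combine a golden image with profile-specific settings."""
    image = dict(golden_image)                    # start from the golden image
    image.update(profile.get("os_settings", {}))  # apply per-profile settings
    image["hostname"] = profile["name"]           # personalize for this server
    return image

golden = {"os": "linux", "version": "1.0"}
profile = {"name": "web-01", "os_settings": {"ip": "10.0.0.5"}}
image = build_bootable_image(golden, profile)
```

The resulting image would then be streamed to a compute module to compose the specified server.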
Compute Modules
An HPE Synergy Frame accommodates both two-socket and four-socket
compute modules. In Figure 1 from the previous post, the four available
compute modules are shown as items (2), (3), (5), and (6). Together,
these modules offer a variety of performance, scalability, and storage
options to power different types of workloads ranging from financial
applications to web infrastructure, to collaboration software, to VDI. All
of the available compute modules can be configured with or without
internal storage. Diskless configurations work with the HPE Synergy
D3940 Storage Module (described below) to provide a pool of direct
attached storage to all the compute modules within a frame.
Those are the four compute modules currently offered for the HPE
Synergy platform. Next, we look at the shared data storage options that
are available.
Data Storage
The HPE Synergy platform supports a variety of storage options, from
direct-attached SAS within the frame, to Fibre Channel-attached Tier 1
storage, to Ethernet-attached predictive flash solutions. There is support
for file, block, or object-based storage. No matter which options you
choose, all data storage within the HPE Synergy framework is
composable and managed by the HPE Synergy Composer with HPE
OneView.
Although DAS solutions based on the D3940 are great for many
workloads, many IT organizations require the performance, reliability,
and scalability of SAN-based solutions. For those larger-scale enterprise
applications requiring Tier-1 service, HPE 3PAR StoreServ Storage
integrates seamlessly into the HPE Synergy platform. HPE 3PAR
StoreServ flash arrays can scale up to 24 petabytes of usable capacity
per Synergy system and deliver more than three million IOPS with
sub-millisecond latency. But, best of all, within the HPE Synergy framework these 3PAR
arrays can be managed via HPE OneView through the HPE Synergy
Composer.
In this post, we look at the composable fabric, the various hardware
modules, and some external storage systems that can be integrated into
an HPE Synergy system.
When you add a new frame to an HPE Synergy system, and connect it to
the master frame using an HPE Synergy Fabric Interconnect Link
Module, the frame is automatically discovered. Because the new
connection is east/west directly with the master, the system scales with
no performance penalty on the existing workload.
HPE recommends that, as a best practice, these fabrics be used for the
following purposes:
Bays 1 and 4 – SAS storage fabric for in-frame storage modules
Bays 2 and 5 – Fibre Channel fabric for external SAN storage
Bays 3 and 6 – Ethernet fabric for inter-frame networking
Looking back at Figure 2 from the first post, you can see an example of
these best practices. In Figure 2, slots 1 and 4 contain redundant HPE
Synergy 12Gb SAS Connection Module devices. These provide the
fabric for the in-frame HPE Synergy D3940 Storage Module. Slots 2 and
5 contain redundant Brocade 16Gb Fibre Channel SAN switches. These
provide fabric for an external SAN storage system - perhaps based on
HPE StoreServ technology. Slots 3 and 6 contain an Interconnect Link
Module and Virtual Connect Module. These provide fabric for inter-
frame networking. The arrangement of modules is the same as shown in
Figure 13 for Frame 2, so Figure 2 from the first post illustrates a
satellite frame that can serve as a failover master.
The HPE Virtual Connect SE 40Gb F8 Module for HPE Synergy is the
master module supporting the composable infrastructure networking
fabric across frames. As described above, each multi-frame HPE
Synergy system needs a master frame with this module, and one of the
satellite frames also needs this module for redundancy.
HPE Virtual Connect is a proven and popular technology from HPE that
decouples the network addresses of the compute modules from external
networks so that changes in compute or network infrastructure do not
require complex coordination among LAN and SAN administrators.
HPE Virtual Connect is widely used in c7000 and other HPE
BladeSystem installations. It is also the default master module in HPE
Synergy environments.
Figure 15. HPE Virtual Connect SE 40Gb F8 Module for HPE Synergy
The HPE Virtual Connect master module works with HPE Synergy
Interconnect Link Modules in the satellite frames to provide intelligent
networking capabilities across an HPE Synergy system. There are two
versions of the Interconnect Link Module: a 10G and 20G. The 10G
version provides 12 x 10Gb Ethernet downlinks to the compute modules
in its host frame and can connect up to 4 satellite frames to a master
frame. Likewise, the 20G version provides 12 x 20Gb Ethernet
downlinks to the compute modules in its host frame and can connect up
to 2 satellite frames to a master frame.
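The two Interconnect Link Module variants differ only in per-downlink speed and in how many satellite frames they support. A small sketch of the arithmetic, using the figures from the paragraph above:

```python
# Figures taken from the text: 12 downlinks per module, at 10Gb or
# 20Gb each, plus the maximum satellite-frame count per version.
LINK_MODULES = {
    "10G": {"downlinks": 12, "speed_gb": 10, "max_satellites": 4},
    "20G": {"downlinks": 12, "speed_gb": 20, "max_satellites": 2},
}

def downlink_bandwidth_gb(version: str) -> int:
    """Total downlink bandwidth a module offers to its host frame."""
    m = LINK_MODULES[version]
    return m["downlinks"] * m["speed_gb"]

def max_frames(version: str) -> int:
    """Master frame plus the maximum number of satellite frames."""
    return 1 + LINK_MODULES[version]["max_satellites"]
```

So the 20G version doubles the per-frame downlink bandwidth (240Gb versus 120Gb) at the cost of a smaller maximum system size (3 frames versus 5, counting the master).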
Figure 16. HPE Synergy 40Gb F8 Switch Module
Like the HPE Virtual Connect SE 40Gb F8 Module, the HPE Synergy
40Gb F8 Switch Module supports the master/satellite frame architecture
through the HPE Interconnect Link Modules. It can also be monitored
by HPE OneView, although it is not controlled by the HPE Synergy
Composer.
Customers who want to maintain an existing network can use the HPE
Synergy 10Gb Pass-Thru Module to connect each compute module in a
frame to that network. This pass-through module allows one-to-one
connectivity between a compute module’s network adapters and a top-
of-rack Ethernet switch and requires a port for each compute module
connected to the top-of-rack switch.
HPE Synergy also supports the Brocade 16Gb Fibre Channel switch for
high-performance, low-latency networking with cut-through mode FC
SAN capabilities. This switch is ideal for financial services, trading
applications, medical imaging, and rendering.
Figure 20. Brocade 16Gb Fibre Channel SAN Switch for HPE Synergy
The HPE Synergy 12Gb SAS Connection Module supports the SAS
interconnection fabric for the HPE Synergy D3940 Storage Module (see
Figure 10 from the second post). This module can connect up to 10
compute modules in a frame with up to 40 SFF drive bays in the D3940.
The result is a storage fabric, managed by HPE Synergy Composer, that
can compose storage resources to meet the needs of a wide variety of
workloads.
Figure 21. HPE Synergy 12Gb SAS Connection Module
This nonblocking SAS fabric allows full utilization of flash storage and
can support up to two million IOPS across 10 compute modules. The full
architecture is shown below, from the SSDs in the D3940 Storage
Module, to the SAS Connection Modules, to the HPE Smart Array
P542D Controller connected to each of the compute modules.
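Composing storage over this fabric amounts to zoning drive bays to compute modules. The class below is a toy model, not the Composer API; it only illustrates the constraint that the D3940's 40 drive bays are divided among up to 10 compute modules without overlap:

```python
class SasZoning:
    """Toy model of zoning D3940 drive bays to compute modules."""
    DRIVE_BAYS = 40      # SFF drive bays in the D3940 (per the text)
    COMPUTE_SLOTS = 10   # compute modules reachable over the SAS fabric

    def __init__(self):
        self.assigned: dict[int, int] = {}  # drive bay -> compute module

    def assign(self, compute: int, bays: list[int]) -> None:
        """Zone a set of drive bays to one compute module."""
        if not 1 <= compute <= self.COMPUTE_SLOTS:
            raise ValueError("invalid compute module")
        for bay in bays:
            if not 1 <= bay <= self.DRIVE_BAYS:
                raise ValueError(f"bay {bay} out of range")
            if bay in self.assigned:
                raise ValueError(f"bay {bay} already zoned")
            self.assigned[bay] = compute

zoning = SasZoning()
zoning.assign(1, [1, 2, 3, 4])   # four drives to compute module 1
zoning.assign(2, [5, 6])         # two drives to compute module 2
```

In the real system, HPE Synergy Composer performs this composition automatically from a server profile; the sketch only shows what "composable storage" means at the zoning level.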
Downlinks
The HP interconnect modules have downlink and uplink ports. The
uplink ports are easy to spot: they are the physical ports on the module's
faceplate that can be connected to a switch or another device. The
downlink ports are less obvious: they run between the interconnects and
the blade bays through the enclosure midplane. For example, a c7000
chassis has 16 server bays, so an HP Flex-10 interconnect has 16
downlink ports, one for each blade. In the picture below of an HP VC
Flex-10 Enet Module, there are 8 uplink ports, which are visible, plus 16
downlink ports, which are not, for a total of 24 ports.
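The port count above is just the sum of the two kinds, using the c7000 figures from the text:

```python
# Figures from the Flex-10 example above: one downlink per server bay
# in a c7000 (16 bays), plus the 8 visible uplink ports.
downlinks = 16
uplinks = 8
total_ports = downlinks + uplinks  # 24 ports on the module
```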
Blade Mapping
Now that we’ve seen that each blade connects to the interconnects
via the downlink ports, let’s take a closer look at which NICs map
to which interconnect bay. HP Blades have two LAN on
Motherboard (LOM) ports as well as room for two mezzanine cards.
The mezzanine cards can contain a variety of different types of PCI
devices, but in many cases they are populated with either NICs or
HBAs.
The LOMs and Mezz Cards map in a specific order to the interconnect
bays.
LOM1 – Interconnect Bay 1
LOM2 – Interconnect Bay 2
Mezz1 – Interconnect Bay 3 (and 4 if it’s a dual port card)
Mezz2 – Interconnect Bay 5 (and 6 if it’s a dual port card, 7 and 8 if it’s
a quad port card)
The picture below should help to understand how the HP Blades map to
the interconnect bays. This example uses dual port mezzanine cards.
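The mapping rules above can be expressed as a small lookup. The function below is a sketch of the pattern the post describes, including the dual-port and quad-port mezzanine cases:

```python
def interconnect_bays(adapter: str, ports: int = 2) -> list[int]:
    """Interconnect bays a blade adapter maps to, per the rules above."""
    if adapter == "LOM1":
        return [1]
    if adapter == "LOM2":
        return [2]
    if adapter == "Mezz1":
        # Bay 3, plus bay 4 for a dual-port card.
        return [3] if ports == 1 else [3, 4]
    if adapter == "Mezz2":
        # Bay 5, plus 6 for dual-port, plus 7 and 8 for quad-port.
        if ports == 1:
            return [5]
        if ports == 2:
            return [5, 6]
        return [5, 6, 7, 8]
    raise ValueError(f"unknown adapter: {adapter}")
```

For example, a dual-port card in Mezz1 lands on bays 3 and 4, while a quad-port card in Mezz2 spans bays 5 through 8.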
LOM Ports with Flex-10
Something additional happens if you’ve got LOM FlexNICs along with
a Flex-10 Ethernet Module or FlexFabric interconnect module: you
can subdivide each LOM NIC into 4 logical NICs. Your
hypervisor or operating system will then see 8 NICs instead of the
original 2. This is an especially nice feature if you’re running
virtualization, as you now have plenty of network adapters for
vMotion, Fault Tolerance, production networks, and management
networks.
As you can see from the following screenshot, the LOM NICs will be
separated into 4 logical NICs labeled 1-a, 1-b … 2-d.
I should also mention that if the interconnect modules are FlexFabric,
LOM-1b and LOM-2b can be either an HBA or a NIC, your
choice.
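The subdivision described above can be enumerated directly: two LOM ports, each split into four FlexNICs labelled a through d. This sketch reproduces the naming convention from the screenshot (1-a through 2-d):

```python
def flexnic_names(lom_ports: int = 2, partitions: str = "abcd") -> list[str]:
    """Logical NIC names when each LOM port is split into FlexNICs."""
    return [f"{port}-{p}" for port in range(1, lom_ports + 1) for p in partitions]

# 8 logical NICs instead of the original 2 physical ports.
nics = flexnic_names()
```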
I know that these concepts seem fairly straightforward now, but for a
beginner this is very useful information for getting started with HP
Virtual Connect. I hope to have some more blog posts in the future
about configuring networking with Virtual Connect.
Introduction
HPE's Virtual Connect FlexFabric networking can take a while to wrap your head around, so I
thought I would take a moment to explain what HPE Virtual Connect is and share some
knowledge and resources that you may find useful.
What is Virtual Connect?
Virtual Connect is an HPE technology that lets you configure and customize
how each blade's network connections are virtualized and mapped.
Or as HPE describes,
"…a technology to simplify networking configuration for the server administrator using an HP
BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the
connections between the server and the LAN and SAN network infrastructure. It adds a hardware
abstraction layer that removes the direct coupling between them. Server administrators can
physically wire the uplinks from the enclosure to its network connections once, and then manage
the network addresses and uplink paths through Virtual Connect software."
In addition to Virtual Connect, you are sure to see the name FlexFabric. In short, FlexFabric is
a range of HPE networking hardware products, spanning blade NICs, ToR switches, and
blade chassis switch modules.
Components
Virtual Connect InterConnect Modules - The Interconnect modules plug directly into the
interconnect bays in the rear of the HP BladeSystem c Class enclosure. The modules connect to
the server blades through the enclosure midplane.[1] The interconnect bays are numbered 1 to
8.
FlexFabric Adapters - There are two types of FlexFabric adapter: FlexibleLOM and
Mezzanine. Each type installs into its corresponding connector on the blade system board.
Figure 1 - C7000 Overview.[2]
FlexNIC     Port  Bay  Network
LOM1:1a     1     1    Network A
LOM1:1b     1     1    Network B
LOM1:1c     1     1    Network C
LOM1:1d     1     1    Network D
LOM1:2a     2     2    Network A
LOM1:2b     2     2    Network B
LOM1:2c     2     2    Network C
LOM1:2d     2     2    Network D
With regards to the Mezzanine adapters, each port of the adapter is presented to the system/OS
as a single port (shown below):
Bandwidth Allocation
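One constraint worth sketching here: the FlexNICs carved from a single 10Gb port share that port's bandwidth, so the per-FlexNIC allocations cannot exceed 10Gb in total. The validator below is illustrative only, assuming the standard Flex-10 figures of at most 4 FlexNICs sharing a 10Gb physical port:

```python
def valid_allocation(flexnic_gb: list[float], port_gb: float = 10.0) -> bool:
    """Check FlexNIC bandwidth splits against the physical port's capacity."""
    return (
        len(flexnic_gb) <= 4                  # at most 4 FlexNICs per port
        and all(b >= 0 for b in flexnic_gb)
        and sum(flexnic_gb) <= port_gb        # shared 10Gb budget
    )

assert valid_allocation([4.0, 2.0, 2.0, 2.0])  # fits the 10Gb budget
assert not valid_allocation([6.0, 6.0])        # oversubscribed
```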