Server-to-network edge technologies: converged networks and virtual I/O

Technology brief

Table of contents
Executive Summary
What is the server-to-network edge?
Physical infrastructure challenges at the server-to-network edge
Solving server-to-network complexity with converged networks
    iSCSI
    Fibre Channel over Ethernet
Challenges at the server-to-network edge from virtual machines
    Virtual Ethernet Bridges
Solving complexity at the virtualized server-to-network edge with Edge Virtual Bridging
    Common EVB capabilities and benefits
    Virtual Ethernet Port Aggregator (VEPA)
    Multichannel technology
    VN-Tag
    Status of EVB and related standards
Summary
Appendix: Understanding SR-IOV and Virtual Connect Flex-10
For more information
Call to action

Executive Summary
This technology brief describes the “server-to-network edge” in a data center architecture and the technologies that affect it in both the physical and virtual infrastructure. The server-to-network edge is the link between the server and the first tier of the network infrastructure, and it is becoming an increasingly important area of the infrastructure. Within the data center, the server-to-network edge is the point within the topology that contains the most connections, switches, and ports. The most common types of networks used in enterprise data centers are Ethernet for LANs and Fibre Channel for SANs. Typically, these have different topologies, administrators, security, performance requirements, and management structures.

Converged networking technologies such as iSCSI, Fibre Channel over Ethernet (FCoE), and Converged Enhanced Ethernet (CEE) have the potential to simplify the networking infrastructure. CEE is the Ethernet implementation of the Data Center Bridging (DCB) protocols that are being developed by the IEEE for any 802 MAC layer network, and the terms CEE and DCB are commonly used interchangeably. For this technology brief, the term CEE refers to Ethernet protocols and Ethernet products that are DCB compliant. With 10 Gb Ethernet becoming more common, combining multiple network streams over a single fabric becomes more feasible.

HP recommends that customers implement converged networking first at the server-to-network edge. This area contains the most complexity and thus offers the most opportunity for cost benefit by simplifying and reducing the amount of infrastructure. By implementing first at the server-to-network edge, customers will also be able to maintain existing administrator roles, network topologies, and security requirements for their LAN and SAN infrastructure. Customers who want to implement FCoE as a converged networking fabric should be aware that some of the underlying standards are not finalized.
By implementing first at the server edge, customers will be able to take advantage of the standards that are in place while avoiding the risk of implementing technology that will have to be replaced or modified as standards solidify.

The server-to-network edge is also becoming increasingly complex due to the sprawl of virtual machines. Virtual machines add a new layer of virtual machine software and virtual networking that dramatically impacts the associated network. Challenges from virtual machines include the performance loss and management complexity of integrating software-based virtual switches, also referred to as Virtual Ethernet Bridges (VEBs), into existing network management. In the near future, hardware NICs will be available that use Single Root I/O Virtualization (SR-IOV) technology to move software-based virtual switch functionality into hardware. While hardware-based SR-IOV NICs will improve performance compared to traditional software-only virtual NICs, they do not solve the other challenges of management integration and limited management visibility into network traffic flows.

To solve these challenges, the IEEE 802.1 Work Group is creating a standard called Edge Virtual Bridging (EVB). The EVB standard is based on a technology known as Virtual Ethernet Port Aggregator (VEPA). Using VEPA, VM traffic within a physical server is always forwarded to an external switch and directed back to the same host when necessary. This process is known as a “hairpin turn,” as the traffic does a 180-degree turn. The hairpin turn gives the external switch visibility into, and policy control over, the virtual machine traffic flows, and it can be enabled in most existing switches with firmware upgrades and no hardware changes.

The HP approach to technologies at the server-to-network edge is based on using industry standards. This ensures that new technologies will work within existing customer environments and organizational roles, yet will preserve customer choice.
The HP goal for customers is to enable a simple migration to advanced technologies at the server-to-network edge without requiring an entire overhaul strategy for the data center.


What is the server-to-network edge?

In this technology brief, the server-to-network edge refers to the connection points between servers and the first layer of both local area network (LAN) and storage area network (SAN) switches. The most common networks used in enterprise data centers are Ethernet for LANs and Fibre Channel for SANs. This paper assumes that the reader is relatively familiar with Ethernet and Fibre Channel networks and with existing HP ProLiant offerings such as BladeSystem and Virtual Connect. Readers can see the “For more information” section and the Appendix for additional information about HP products.

Figure 1 illustrates a typical multi-tiered Ethernet infrastructure. Ethernet network architecture is often highly tree-like, can be over-subscribed, and relies on a structure in this manner:
• Centralized (core) switches at the top layer
• Aggregation (distribution) switches in the middle layer
• Access switches at the bottom, or server-to-network edge layer

Figure 1. Typical multi-tier Ethernet data center architecture (physical infrastructure). The dotted line around the Blade server/Top of rack (ToR) switch indicates an optional layer, depending on whether the interconnect modules replace the ToR or add a tier.

Figure 1 illustrates an architecture using both rack-based servers (such as HP ProLiant DL servers) and blade enclosures (such as HP BladeSystem c-Class enclosures) along with a variety of switches such as top-of-rack (ToR), end-of-row (EoR), distribution, and core switches:
• A single rack of servers may connect to top-of-rack switches (typically deployed in pairs for redundancy) that uplink to a redundant aggregation network layer.
• Several racks of servers may connect to end-of-row switches that consolidate the network connections at the edge before going into the aggregation network layer.
• Blade-based architectures may include server-to-network interconnect modules that are embedded within the enclosure. If the interconnect modules are switches, they may either replace the existing access switch or may add an extra tier. If the interconnect module is a pass-through, the infrastructure usually maintains the ToR switch.

Figure 2 illustrates the two most common SAN architectures:
• SAN core architecture, also referred to as a SAN island, in which many servers connect directly to a large SAN director switch
• Edge-core architecture, which uses SAN fabric or director switches before connecting to the SAN director core switch

Figure 2. Typical single or two-layer architecture of a Fibre Channel SAN (SAN island and edge-core). The dotted line around the Blade server/FC switch indicates an optional layer, depending on whether the blade enclosure uses interconnects that are embedded switches or pass-through modules.

Customers who want to maintain a SAN island approach with blade enclosures can use Fibre Channel pass-through modules to connect directly to the director switch. With an edge-core SAN architecture, SAN designers that have blade enclosures can use one of two approaches:
• Use pass-through modules in the enclosure while maintaining an external edge switch
• Replace an existing external Fibre Channel edge switch with an embedded blade switch

For some customers that can use non-Fibre Channel storage, iSCSI solves these challenges without requiring new Ethernet infrastructure. However, iSCSI technology is still evolving and has experienced a slow rate of adoption, especially for enterprise data centers that have large installed Fibre Channel and Ethernet networks. The iSCSI section included later in the paper discusses iSCSI in more detail.

Physical infrastructure challenges at the server-to-network edge

As illustrated in Figures 1 and 2, the server edge is the most physically complex part of the data center networks. Often, the server edge has the most connections, switches, and ports (especially in environments using only rack-based servers). For most enterprise data centers that use Fibre Channel SANs and Ethernet LANs, each network type has its own requirements:
• Unique network adapters for servers
• Unique switches
• Network management systems designed for each network
• Different organizations to manage the networks
• Unique security requirements
• Different topologies
• Different performance requirements

Each of these differences adds complexity and cost to the data center. Different topologies are one of the reasons that HP is focusing on the server-to-network edge. LAN and SAN architectures have different topologies: LAN architectures tend to have multilayered, tree-like topologies, while SAN architectures tend to remain flat. In addition, a SAN architecture differs from a LAN architecture in oversubscription ratios and cross-sectional bandwidth: a SAN is typically lightly oversubscribed by 2:1 to 8:1 (at the server edge), while a LAN architecture may be heavily oversubscribed 1 by 10:1 or 20:1. Administrators should be allowed to maintain similar processes, procedures, data center structures, and organizational governance roles while reaping the benefits of reduced infrastructure at the server-to-network edge.

1 Non-oversubscribed LAN topologies with large cross-sectional bandwidths are certainly possible, just not as common.
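An oversubscription ratio is simply the aggregate server-facing bandwidth divided by the aggregate uplink bandwidth. A minimal sketch of the arithmetic, using hypothetical port counts and speeds for a LAN ToR switch and a SAN edge switch:

```python
def oversubscription(down_ports: int, down_gbps: float,
                     up_ports: int, up_gbps: float) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Hypothetical LAN ToR switch: 48 x 1GbE server ports, 2 x 10GbE uplinks
lan = oversubscription(48, 1, 2, 10)    # 2.4:1

# Hypothetical SAN edge switch: 32 x 4Gb FC server ports, 8 x 4Gb ISLs
san = oversubscription(32, 4, 8, 4)     # 4:1

print(f"LAN {lan}:1, SAN {san}:1")
```

Real LAN designs often push this ratio far higher (the 10:1 or 20:1 figures above), which is acceptable for bursty LAN traffic but not for storage traffic.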

Solving server-to-network complexity with converged networks

Data center administrators can address complexity at the server-to-network edge by consolidating server I/O in various ways:
• Combining multiple lower-bandwidth connections into a single, higher-bandwidth connection
• Converging (combining) different network types (LAN and SAN) to reduce the amount of physical infrastructure required at the server edge

Administrators can already use HP Virtual Connect Flex-10 technology to aggregate multiple independent server traffic streams into a single 10Gb Ethernet (10GbE) connection. For instance, administrators can partition a 10GbE connection and replace multiple lower-bandwidth physical NIC ports with a single Flex-10 port. This reduces management requirements, reduces the amount of physical infrastructure (number of NICs and number of interconnect modules), and reduces power and operational costs.

The possibility of converging, or combining, data center networks has been proposed for some time. iSCSI, InfiniBand, and other protocols have been introduced with the promise of providing efficient transport on a single fabric for multiple traffic classes and data streams. The more common definition of converged networks refers to combining LAN and SAN protocols onto a single fabric. One of the goals of a converged network is to simplify and flatten the typical Ethernet topology by reducing the number of physical components. The benefits of a converged network include reduction of host/server adapters, cabling, switch ports, complexity, and costs. Simply stated, a converged network reduces the amount of cross-connect in the network, leading to other opportunities to simplify management and improve quality of service (QoS), and it enables simpler, more centralized management. Two technologies provide the most promise for use on a converged network: iSCSI and Fibre Channel over Ethernet (FCoE).

iSCSI

One of the reasons that the Fibre Channel protocol became prevalent is that it is a very lightweight, high-performance protocol that maps locally to the SCSI initiators and targets (Figure 3, far left). But Fibre Channel protocols are not routable and can be used only within a relatively limited area, such as within a single data center. The iSCSI protocol sought to improve on that by moving the SCSI packets along a typical Ethernet network using TCP/IP (Figure 3, far right). The iSCSI protocol serves the same purpose as Fibre Channel in building SANs, but iSCSI runs over the existing Ethernet infrastructure and thus avoids the cost, additional complexity, and compatibility issues associated with Fibre Channel SANs.

Figure 3. Comparison of storage protocol stacks. Each stack runs the SCSI layer beneath the operating systems and applications: native Fibre Channel carries it over FCP and FC, iSCSI carries it over TCP, IP, and Ethernet, and FCoE carries it over CEE.

An iSCSI SAN is typically composed of software or hardware initiators on the host server connected to an Ethernet network and some number of storage resources (targets). Initiators include both software- and hardware-based initiators incorporated on host bus adapters (HBAs) and NICs. The iSCSI stack at both ends of the path is used to encapsulate SCSI block commands into Ethernet packets for transmission over IP networks (Figure 4).

Figure 4. iSCSI is SCSI over TCP/IP
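The layering in Figures 3 and 4 can be sketched as nested encapsulation: a SCSI command descriptor block (CDB) is carried inside an iSCSI protocol data unit, which the TCP/IP stack (or an offload HBA) then carries over the network. A toy illustration, not a real initiator, using the READ(10) CDB and 48-byte Basic Header Segment layouts from the iSCSI specification (RFC 3720):

```python
import struct

def scsi_read10(lba: int, blocks: int) -> bytes:
    """10-byte SCSI READ(10) CDB: opcode 0x28, 4-byte LBA, 2-byte length."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def iscsi_encapsulate(cdb: bytes) -> bytes:
    """Toy iSCSI PDU: a 48-byte Basic Header Segment carrying the CDB."""
    bhs = bytearray(48)
    bhs[0] = 0x01                 # opcode: SCSI Command
    bhs[32:32 + len(cdb)] = cdb   # the CDB occupies bytes 32-47 of the BHS
    return bytes(bhs)

# On the wire, this PDU rides inside TCP/IP/Ethernet, added either by the
# OS network stack (software initiator) or by an iSCSI HBA (full offload).
pdu = iscsi_encapsulate(scsi_read10(lba=2048, blocks=8))
```

The point of the sketch is where the work lands: with a software initiator, the server CPU builds and checksums every layer below the PDU; with full iSCSI offload, the HBA does.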

The first generations of iSCSI had some limitations that minimized its adoption:
• Traditional software-based iSCSI initiators require the server CPU to process the TCP/IP protocols added onto the SCSI stack, which limits the ability to scale a network and impacts performance.
• iSCSI did not have a storage management system equivalent to the Fibre Channel SANs.
• Configuring iSCSI boot was a manual and cumbersome process; it was not scalable to large numbers of servers.

The newer generations of iSCSI technology solve these issues:
• High-performance adapters are now available that fully offload the protocol management to a hardware-based iSCSI HBA or NIC. This is called full iSCSI offload.
• Using 10Gb networks and 10Gb NICs with iSCSI SANs generates performance comparable to a Fibre Channel SAN operating at 8 Gb.
• New centralized iSCSI boot management tools provide mechanisms for greater scalability when deploying large numbers of servers.

With 10Gb-based iSCSI products, iSCSI becomes a more viable solution for converged networks for small-medium businesses as well as enterprise data centers. Because iSCSI administration requires the same skill set as a TCP/IP network, using an iSCSI network minimizes the SAN administration skills required, making iSCSI a good choice for green-field deployments or when there are smaller IT teams and limited budgets. Using iSCSI storage can also play a role in lower, non-business critical tiers in enterprise data centers. For customers with iSCSI-based storage targets and initiators, iSCSI can be incorporated today with their existing Ethernet infrastructure. Unlike FCoE, iSCSI does not require the new Converged Enhanced Ethernet (CEE) infrastructure (see the Fibre Channel over Ethernet section for more information). But if CEE is present, iSCSI will benefit from the QoS and bandwidth management it offers.

Fibre Channel over Ethernet

FCoE takes advantage of 10Gb Ethernet’s performance while maintaining compatibility with existing Fibre Channel protocols. FCoE encapsulates Fibre Channel frames inside of Ethernet frames (Figure 5). Because FCoE was designed with minimal changes to the Fibre Channel protocol, it must operate on Converged Enhanced Ethernet (CEE) to eliminate Ethernet frame loss under congestion conditions. 2 FCoE is a layer 2 (non-routable) protocol just like Fibre Channel, and it can be used only for short-haul communication within a data center.

2 Because FCoE is a lightweight encapsulation protocol, it lacks the reliable data transport of the TCP layer. Typical legacy Ethernet networks allow Ethernet frames to be dropped, typically under congestion situations, and rely on upper layer protocols such as TCP to provide end-to-end data recovery. It is possible to create a lossless Ethernet network using existing 802.3x mechanisms. However, if the network is carrying multiple traffic classes, the existing mechanisms can cause QoS issues.

Figure 5. Illustration of an FCoE packet

The traditional data center model uses multiple HBAs and NICs in each server to communicate with various networks. In a converged network, converged network adapters (CNAs) can be deployed in servers to handle both FC and CEE traffic, replacing a significant amount of the NIC, HBA, and cable infrastructure (Figure 6).

Figure 6. Converged network adapter (CNA) architecture

FCoE uses a gateway device (an Ethernet switch with CEE, legacy Ethernet, and legacy FC ports) to pass the encapsulated Fibre Channel frames between the server’s Converged Network Adapter and the Fibre Channel-attached storage targets.
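The encapsulation in Figure 5 can be sketched in a few lines: an FCoE frame is an ordinary Ethernet frame whose EtherType (0x8906) marks a payload consisting of a small FCoE header, the unmodified FC frame, and an end-of-frame trailer. A simplified illustration; the SOF/EOF code values are placeholders and padding/FCS handling is omitted:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an (already complete) FC frame in an Ethernet/FCoE envelope."""
    eth_hdr = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([0x2E])   # version/reserved + SOF code (illustrative)
    trailer = bytes([0x41]) + bytes(3)     # EOF code (illustrative) + reserved
    return eth_hdr + fcoe_hdr + fc_frame + trailer
```

Because the FC frame travels untouched, the gateway described above only has to strip the Ethernet envelope and forward the inner frame onto the native FC fabric; no protocol translation is involved.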

There are several advantages to FCoE:
• FCoE uses existing OS device drivers.
• FCoE uses the existing Fibre Channel security and management model with minor extensions for the FCoE gateway and Ethernet attributes used by FCoE.
• Storage targets that are provisioned and managed on a native FC SAN can be accessed transparently through an FCoE gateway.

However, there are also some challenges with FCoE:
• Must be deployed using a CEE network
• Requires converged network adapters and new Ethernet hardware between the servers and storage targets (to accommodate CEE)
• Is a non-routable protocol and can only be used within the data center
• Requires an FCoE gateway device to connect the CEE network to the legacy Fibre Channel SANs and storage
• Requires validation of a new fabric infrastructure that includes both the Ethernet and Fibre Channel

Converged Enhanced Ethernet (CEE)

An informal consortium of network vendors originally defined a set of enhancements to Ethernet to provide enhanced traffic management and lossless operations. This consortium originally coined the term Converged Enhanced Ethernet to describe this new technology. These proposals have now become a suite of proposed standards from the Data Center Bridging (DCB) task group within the IEEE 802.1 Work Group. The IEEE is defining the DCB standards so that they could apply to any IEEE 802 MAC layer network type, not just Ethernet. So administrators can consider CEE as being the application of the DCB draft standards to the Ethernet protocol. Quite commonly, the terms DCB and CEE are used interchangeably. For this technology brief, the term CEE refers to Ethernet protocols and Ethernet products that are DCB compliant.

There are four new technologies defined in the DCB draft standards:
• Priority-based Flow Control (PFC), 802.1Qbb – Ethernet flow control that discriminates between traffic types (such as LAN and SAN traffic) and allows the network to selectively pause different traffic classes
• Enhanced Transmission Selection (ETS), 802.1Qaz – formalizes the scheduling behavior of multiple traffic classes, including strict priority and minimum guaranteed bandwidth capabilities. This formal transmission handling should enable fair sharing of the link, better performance, and metering.
• Quantized Congestion Notification (QCN), 802.1Qau – supports end-to-end flow control in a switched LAN infrastructure and helps eliminate sustained, heavy congestion in an Ethernet fabric. QCN must be used in conjunction with PFC to completely avoid dropping packets and guarantee a lossless environment.
• Data Center Bridging Exchange Protocol (DCBX), 802.1Qaz – supports discovery and configuration of network devices that support the technologies described above (PFC, ETS, and QCN)

While three of the standards that define the protocols are nearing completion, the QCN standard is the most complex, least understood, and requires the most hardware support. QCN must be implemented in all components in the CEE data path (CNAs, switches, and so on) before the network can use QCN. The QCN standard will be critical in allowing IT architects to converge networks “deeper” into the distribution and core network layers. Also, it is important to note that QCN will require a hardware upgrade of almost all existing CEE components before it can be implemented in data centers.
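Of the four technologies, PFC is the easiest to picture concretely: it reuses the 802.3x MAC Control frame format (EtherType 0x8808) but replaces the single pause timer with a per-priority enable vector and eight timers, so one class (say, the priority carrying FCoE) can be paused while LAN traffic keeps flowing. A sketch of building such a frame, assuming the 802.1Qbb layout:

```python
import struct

MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101   # per-priority pause, vs. 0x0001 for classic 802.3x pause

def pfc_frame(src_mac: bytes, pause_quanta: dict[int, int]) -> bytes:
    """Build a PFC frame pausing the given priorities.

    pause_quanta maps priority (0-7) -> pause time in 512-bit-time quanta.
    """
    dst = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio        # bit set = this priority's timer is valid
        timers[prio] = quanta
    body = struct.pack(">HH8H", PFC_OPCODE, enable_vector, *timers)
    return dst + src_mac + struct.pack(">H", MAC_CONTROL_ETHERTYPE) + body

# Pause priority 3 (a common choice for FCoE traffic) for the maximum time:
frame = pfc_frame(bytes(6), {3: 0xFFFF})
```

This per-priority granularity is exactly what classic 802.3x pause lacks, and why the footnoted QoS issues arise when a single pause must stop all traffic classes at once.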

Transition to FCoE

HP advises that the transition to FCoE can be a graceful implementation with little disruption to existing network infrastructures if it is deployed first at the server-to-network edge and migrated further into the network over time. If administrators transition the server-to-network edge first to accommodate FCoE/CEE, this will maintain the existing architecture structure and management roles, retaining the existing SAN and LAN topologies. Updating the server-to-network edge offers the greatest benefit and simplification without disrupting the data center’s architectural paradigm, rather than needlessly changing the entire infrastructure. As noted in the previous section, the lack of fully finalized CEE standards is another reason HP views the server-to-network edge as the best opportunity for taking advantage of “converged network” benefits.

Administrators could also start by implementing FCoE only with those servers requiring access to FC SAN targets. Not all servers need access to FC SANs. In general, more of a data center’s assets use only LAN attach than use both LAN and SAN. CNAs should be used only with the servers that actually benefit from them.

Challenges at the server-to-network edge from virtual machines

While converged network technology addresses the complexity issues of proliferating physical server-to-network edge infrastructure, a different, but complementary, technology is required to solve the complexity of the server-to-network edge resulting from the growing use of virtual machines. This new layer of virtual machines and virtual switches within each “virtualized” physical server introduces new complexity at the server edge and dramatically impacts the associated network. Challenges from virtual machines include the issues of managing virtual machine sprawl (and the associated virtual networking) and the performance loss and management complexity of integrating software-based virtual switches with existing network management. These are significant challenges that are not fully addressed by any vendor today. HP is working with other industry leaders to develop standards that will simplify and solve these challenges. The following sections explain the types of virtualized infrastructure available today (or in the near future), the terms used to describe these infrastructure components, and the related standardization efforts.

Virtual Ethernet Bridges

The term Virtual Ethernet Bridge (VEB) is used by industry groups to describe network switches that are implemented within a virtualized server environment and support communication between virtual machines, the hypervisor, and external network switches. 3 In other words, a VEB is a virtual Ethernet switch. It can be an internal, private, virtual network between virtual machines (VMs) within a single physical server, or it can be used to connect VMs to the external network. A more formal definition: A Virtual Ethernet Bridge (VEB) is a frame relay service within a physical end station that supports local bridging between multiple virtual end stations and (optionally) the external bridging environment. Today, the most common implementations of VEBs are software-based virtual switches, or “vSwitches,” that are incorporated into all modern hypervisors. In the near future, VEB implementations will include hardware-based switch functionality built into NICs that implement the PCI Single Root I/O Virtualization (SR-IOV) standard. Hardware-based switch functionality will improve performance compared to software-based VEBs.

3 The term “bridge” is used because IEEE 802.1 standards use the generic term “bridge” to describe what are commonly known as switches.

Software-based VEBs – Virtual Switches

In a virtualized server, the hypervisor abstracts and shares physical NICs among multiple virtual machines, creating virtual NICs for each virtual machine. The hypervisor implements one or more software-based virtual switches that connect the virtual NICs to the physical NICs. For the vSwitch, the physical NIC acts as the uplink to the external network. When a virtual machine transmits traffic from its virtual NIC, a virtual switch uses its hypervisor-based VM/virtual NIC configuration information to forward the traffic in one of two ways (see Figure 7):
• If the destination is external to the physical server or on a different vSwitch, the virtual switch forwards traffic to the physical NIC.
• If the destination is internal to the physical server on the same virtual switch, the virtual switch forwards the traffic directly back to another virtual machine.

Data traffic received by a physical NIC is passed to a virtual switch that uses hypervisor-based VM/virtual NIC configuration information to forward traffic to the correct VMs.

Figure 7. Data flow through a Virtual Ethernet Bridge (VEB) implemented as a software-based virtual switch (vSwitch)

Using software-based virtual switches offers a number of advantages:
• Good performance between VMs – A software-based virtual switch typically uses only layer 2 switching and can forward internal VM-to-VM traffic directly, with bandwidth limited only by available CPU cycles, memory bus bandwidth, or user/hypervisor-configured bandwidth limits.
• Can be deployed without an external switch – Administrators can configure an internal network with no external connectivity, for example, to run a local network between a web server and a firewall running on separate VMs within the same physical server.
• Support a wide variety of external network environments – Software-based virtual switches are standards compliant and can work with any external network infrastructure.
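The two forwarding cases above amount to a lookup in a hypervisor-maintained table of virtual NIC addresses. A minimal sketch of a VEB's transmit path (the class structure is hypothetical; real vSwitches also handle VLANs, broadcast, and multicast):

```python
class VEB:
    """Toy software virtual switch: a local vNIC table plus one uplink."""

    def __init__(self, uplink):
        self.vnics = {}          # MAC address -> virtual NIC (local VM)
        self.uplink = uplink     # physical NIC acting as the uplink

    def attach(self, mac: str, vnic) -> None:
        self.vnics[mac] = vnic   # configured by the hypervisor, not learned

    def transmit(self, frame: dict) -> str:
        dst = frame["dst_mac"]
        if dst in self.vnics:
            # Internal case: VM-to-VM, switched entirely inside the host
            # and therefore invisible to the external network.
            self.vnics[dst].receive(frame)
            return "local"
        # External case: destination is outside this vSwitch.
        self.uplink.send(frame)
        return "uplink"
```

The "local" branch is what makes software vSwitches fast for VM-to-VM traffic, and it is also precisely the traffic that external monitoring and policy tools never see, which motivates the VEPA hairpin approach discussed later in the paper.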

Conversely, software-based virtual switches also have several disadvantages:
• Consume valuable CPU bandwidth – The higher the traffic load, the greater the number of CPU cycles required to move traffic through a software-based virtual switch, reducing the ability to support larger numbers of VMs in a physical server. This is especially true for traffic that flows between VMs and the external network.
• Lack network-based visibility – Virtual switches lack the standard monitoring capabilities of external network switches, such as flow analysis, advanced statistics, and remote diagnostics. VM-to-VM traffic within a physical server is not exposed to the external network, which again makes problem resolution difficult. When network outages or problems occur, identifying the root cause can be difficult in a virtual machine environment.
• Lack network policy enforcement – Modern external switches have many advanced features that are being used in the data center: port security, quality of service (QoS), access control lists (ACL), and so on. Software-based virtual switches often do not have, or have limited support for, these advanced features. Even if virtual switches support some of these features, their management and configuration is often inconsistent or incompatible with the features enabled on external networks. This limits the ability to create end-to-end network policies within a data center.
• Lack management scalability – As the number of virtualized servers begins to dramatically expand in a data center, the number of software-based virtual switches experiences the same dramatic expansion. Standard virtual switches must each be managed separately, with the management of virtual systems not being integrated into the management system for the external network. A new technology called “distributed virtual switches” was recently introduced that allows up to 64 virtual switches to be managed as a single device, but this only alleviates the symptoms of the management scalability problem without solving the problem of lack of management visibility outside of the virtualized server.

Hardware VEBs — SR-IOV enabled NICs

Moving VEB functionality into the NIC hardware improves the performance issues associated with software-based virtual switches. Single-Root I/O Virtualization (SR-IOV) is the technology that allows a VEB to be deployed into the NIC hardware. The PCI Special Interest Group (SIG) has developed SR-IOV and Multi-Root I/O Virtualization standards that allow multiple VMs running within one or more physical servers to natively share and directly access and control PCIe devices (commonly called “direct I/O”). (Multi-Root I/O Virtualization refers to direct sharing of PCI devices between guest operating systems on multiple servers.) The SR-IOV standard provides native I/O virtualization for PCIe devices that are shared on a single physical server. SR-IOV supports one or more physical functions (full-feature PCI functions) and one or more virtual functions (light-weight PCI functions focused primarily on data movement). A capability known as Alternative Routing-ID Interpretation (ARI) allows expansion to up to 256 PCIe physical or virtual functions within a single physical PCIe device.

NICs that implement SR-IOV allow the VM’s virtual NICs to bypass the hypervisor vSwitch by exposing the PCIe virtual (NIC) functions directly to the guest OS. By exposing the registers directly to the VM, the NIC reduces latency significantly from the VM to the external port. The hypervisor continues to allocate resources and handle exception conditions, but it is no longer required to perform routine data processing for traffic between the virtual machines and the NIC. Figure 8 illustrates the architecture of an SR-IOV enabled NIC.
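As a concrete point of reference, on a modern Linux host an SR-IOV capable NIC exposes its physical function in sysfs, and virtual functions are instantiated by writing a count to the `sriov_numvfs` attribute (the interface name here is hypothetical, and the sysfs root is parameterized only so the sketch can be exercised without hardware):

```python
from pathlib import Path

def enable_vfs(ifname: str, count: int,
               sysfs_root: str = "/sys/class/net") -> int:
    """Instantiate SR-IOV virtual functions on a NIC's physical function."""
    dev = Path(sysfs_root) / ifname / "device"
    total = int((dev / "sriov_totalvfs").read_text())   # hardware limit
    if count > total:
        raise ValueError(f"{ifname} supports at most {total} VFs")
    (dev / "sriov_numvfs").write_text("0")              # reset before resizing
    (dev / "sriov_numvfs").write_text(str(count))
    return count

# enable_vfs("eth2", 8)  # each VF then appears as its own PCIe function,
#                        # assignable to a guest for direct I/O
```

Each virtual function created this way is the light-weight PCI function described above: the guest's paravirtualized driver talks to it directly, while the physical function (and the hypervisor) retains configuration control.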

Figure 8. Virtual Ethernet Bridge (VEB) implemented as an SR-IOV enabled NIC

With direct I/O, software virtual switches are no longer part of the data path. There are benefits to deploying VEBs as hardware-based SR-IOV enabled NICs:

• Reduces CPU utilization relative to software-based virtual switch implementations.
• Increases network performance due to the direct I/O between a guest OS and the NIC.
• Supports up to 256 functions per NIC, significantly increasing the number of virtual networking functions for a single physical server.

While SR-IOV brings improvements over traditional software-only virtual NICs, there are still challenges with hardware-based SR-IOV NICs:

• SR-IOV requires the guest OS to have a paravirtualized driver to support direct I/O with the PCI virtual functions, as well as operating system support for SR-IOV, which is not available in market-leading hypervisors as of this writing.
• Lack of network-based visibility – SR-IOV NICs with direct I/O do not solve the network visibility problem. This is especially true for traffic flowing between virtual machines and the external networks.
• Lack of management scalability – There are still a large number of VEBs (typically one per NIC port) that must be managed independently from the external network infrastructure. In fact, the proliferation of VEBs to manage is even worse with VEBs implemented in SR-IOV enabled NICs: because of the limited resources of cost-effective NIC silicon, these devices typically have one VEB per port, whereas software-based virtual switches can operate multiple NICs and NIC ports per VEB.
• Lack of network policy enforcement – Common data flow patterns of software-based virtual switches and limited silicon resources in cost-effective SR-IOV NICs result in no advanced policy-enforcement features. SR-IOV NICs often have even fewer capabilities than software-based virtual switches.
• Increased flooding onto external networks – SR-IOV NICs often have small address tables and do not "learn," so they may increase the amount of flooding onto external networks.
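The 256-function scaling mentioned above comes from how ARI reinterprets the 16-bit PCIe Routing ID. The following Python sketch of the field packing is illustrative only (the helper function is an assumption of this document, not part of any PCI-SIG API); the bit widths follow the PCIe specification:

```python
def routing_id(bus, device, function, ari=False):
    """Pack a 16-bit PCIe Routing ID.

    Traditional: 8-bit bus, 5-bit device, 3-bit function (8 functions per device).
    With ARI:    8-bit bus, 8-bit function (device number implicitly 0),
                 which is how a single device scales to 256 functions.
    """
    if ari:
        assert device == 0 and 0 <= function < 256
        return (bus << 8) | function
    assert 0 <= device < 32 and 0 <= function < 8
    return (bus << 8) | (device << 3) | function

print(hex(routing_id(2, 3, 1)))               # traditional encoding
print(hex(routing_id(2, 0, 200, ari=True)))   # ARI encoding, function 200
```

Without ARI, the 3-bit function field caps a device at 8 functions; ARI spends the otherwise-unused device bits on the function number to reach 256.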

In summary, to obtain higher performance, the hypervisor-based software virtual switch is moving into hardware on the NIC using SR-IOV technology. Using NICs enabled with SR-IOV solves many of the performance issues, but does not solve the issues of management integration and limited visibility into network traffic flows. SR-IOV is an important technology for the future, and next-generation SR-IOV implementations that use Edge Virtual Bridging will provide a better solution for the problem of management visibility, as described in the following section.4

Solving complexity at the virtualized server-to-network edge with Edge Virtual Bridging

Realizing the common challenges with both software-based and hardware-based VEBs, the IEEE 802.1 Work Group is creating a new set of standards, IEEE 802.1Qbg, called Edge Virtual Bridging (EVB).5 These standards aim to resolve the network visibility, end-to-end policy enforcement, and management scalability issues faced by virtualized servers at the server-to-network edge.

EVB changes the way virtual bridging or switching is done to take advantage of the advanced features and scalable management of external LAN switch devices. Just like a VEB, an EVB can be implemented as a software-based virtual switch or in hardware-based SR-IOV NIC devices. Software- or hardware-based EVB operations can be optimized to make them simple and inexpensive to implement. With an EVB, the virtual NIC and virtual switch information is exposed to the external switches to achieve better visibility and manageability.

EVB augments the methods used by traditional VEBs to forward and process traffic so that the most advanced packet processing occurs in the external switches. Traffic is not sent directly between the virtual machines, as it would be with a VEB. Instead, traffic between VMs within a virtualized server travels to the external switch and back through a 180-degree turn, or "hairpin turn" (Figure 9, top traffic flow), while traffic from a VM to an external destination is handled just like in a VEB (Figure 9, bottom traffic flow). This capability provides greater management visibility and control. However, there is a performance cost associated with EVB because the traffic is moved deeper into the network, and the VM-to-VM traffic uses twice the bandwidth for a single packet – one transmission for outgoing and one for incoming traffic. EVB is not necessarily a replacement for VEB modes of operation, but will be offered as an alternative for applications and deployments that require the management benefits.

Two primary candidates have been put forward as EVB proposals:
• Virtual Ethernet Port Aggregator (VEPA) and Multichannel technology, created by an industry coalition led by HP
• VN-Tag technology, created by Cisco

Both proposals solve similar problems and have many common characteristics, but the implementation details and associated impact to data center infrastructure are quite different between the two. For more information about how they differ, see the Appendix.

Common EVB capabilities and benefits

The primary goals of EVB are to solve the network visibility, policy management, and management scalability issues at the virtualized server-to-network edge.

4 For existing servers, HP has implemented a similar method for aggregating virtual I/O with the HP Flex-10 technology.
5 EVB is defined as "the environment where physical end stations, containing multiple virtual end stations, all require the services of adjacent bridges forming a local area network." See the EVB tutorial on the 802.1 website at www.ieee802.org/1/files/public/docs2009/new-evb-thaler-evb-tutorial-0709-v09.pdf

Figure 9. Basic architecture and traffic flow of an Edge Virtual Bridge (EVB)

The hairpin turn allows an external switch to apply all of its advanced features to the VM-to-VM traffic flows, fully enforcing network policies. The external switch can detect an EVB-enabled port on a server and can exchange management information about the MAC addresses and VMs being deployed in the network. This brings management visibility and control to the VM level, rather than just to the physical NIC level as with today's networks. Using EVB-enabled ports therefore addresses the goal of network visibility, and managing the connectivity to numerous virtual servers from the external switch and network eases the issues with management scalability. IT architects will have the choice of VEB or EVB to match the needs of their application requirements with the design of their network infrastructure.

Most switches are not designed to forward traffic received on one port back out the same port (doing so breaks normal spanning tree rules). Thus, external switches must be modified to allow the hairpin turn if a port is attached to an EVB-enabled server. Most switches already support the hairpin turn using a mechanism similar to that used for wireless access points (WAPs), so no ASIC development or other new hardware is required for a network switch to support these hairpin turns. But rather than using the WAP negotiation mode, an EVB-specific negotiation mode will need to be added to the switch firmware.

Another change required by EVBs is the way in which multicast or broadcast traffic received from the external network is handled. Because traffic received by the EVB from the physical NIC can come from the hairpin turn within the external switch (that is, it was sourced from a VM within the virtualized server), traffic must be filtered to forbid a VM from receiving multicast or broadcast packets that it sent onto the network; receiving its own packets back would violate standard NIC and driver operation in many operating systems.

There are many benefits to using an EVB:
• Improves virtualized network I/O performance and eliminates the need for complex features in software-based virtual switches
• Allows the adjacent switch to incorporate the advanced management functions, thereby allowing the NIC to maintain low-cost circuitry
• Enables a consistent level of network policy enforcement by routing all network traffic through the adjacent switch, with its more complete policy-enforcement capabilities

• Provides visibility of VM-to-VM traffic to network management tools designed for physical switches
• Enables administrators to gain visibility and access to external switch features from the guest OS, such as packet processing (access control lists) and security features such as Dynamic Host Configuration Protocol (DHCP) guard, address resolution protocol (ARP) monitoring, source port filtering, and dynamic ARP protection and inspection
• Allows administrators to configure and manage ports in the same manner for VEB and VEPA ports
• Enhances monitoring capabilities, including access to statistics like NetFlow, sFlow, RMON, port mirroring, and so on

The disadvantage of EVB is that VM-to-VM traffic must flow to the external switch and then back again, thus consuming twice the communication bandwidth. This additional bandwidth is consumed on the network connections between the physical server and external switches.

Virtual Ethernet Port Aggregator (VEPA)

The IEEE 802.1 Work Group has agreed to base the IEEE 802.1Qbg EVB standard on VEPA technology because of its minimal impact and minimal changes to NICs, software switches, bridges, existing standards, and frame formats (which require no changes). VEPA does not require new tags and involves only slight modifications to VEB operation. VEPA continues to use MAC addresses and standard IEEE 802.1Q VLAN tags as the basis for frame forwarding, but changes the forwarding rules slightly according to the base EVB requirements, relying on the adjacent external switch to implement the hairpin turn. In doing so, VEPA achieves most of the goals envisioned for EVB without the excessive burden of a disruptive new architecture such as VN-Tags. VEPA is designed to incorporate and modify existing IEEE standards so that most existing NIC and switch products could implement VEPA with only a software upgrade. Furthermore, VEPA enables a discovery protocol, allowing external switches to discover ports that are operating in VEPA mode and to exchange information related to VEPA operation.

Software-based VEPA solutions can be implemented as simple upgrades to existing software virtual switches in hypervisors. In addition to software-based VEPA solutions, SR-IOV NICs can easily be updated to support the VEPA mode of operation. As a proof point, HP Labs, in conjunction with the University of California, developed a software prototype of a VEPA-enabled software switch.6 The prototype required only a few lines of code within the Linux bridge module, primarily in frame relay support. The prototype's performance, even though it was not fully optimized, showed VEPA to be 12% more efficient than the traditional software virtual switch in environments with advanced network features enabled.

There are many benefits from using VEPA:
• A completely open (industry-standard) architecture without proprietary attributes or formats
• Tag-less architecture that achieves better bandwidth than software-based virtual switches, with less overhead and lower latency (especially for small packet sizes)
• Easy to implement, often as a software upgrade
• Minimizes changes to NICs, thereby promoting low-cost solutions

6 Paul Congdon, Anna Fischer, and Prasant Mohapatra, "A Case for VEPA: Virtual Ethernet Port Aggregator," submitted for conference publication and under review as of this writing.
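The hairpin forwarding and reflection-filtering rules described above can be summarized in a toy model. The Python sketch below is purely illustrative (shortened MAC strings, no relation to the actual 802.1Qbg state machines or the Linux bridge prototype): frames from a VM always go out the uplink, and broadcasts arriving back from the hairpin turn are never delivered to their original sender.

```python
class Vepa:
    """Toy model of VEPA-style forwarding (illustrative only).

    - Frames from a VM's virtual port always go out the physical uplink,
      even when the destination is another local VM; the adjacent
      external switch performs the hairpin turn.
    - Frames arriving from the uplink are delivered by destination MAC.
      Broadcasts are filtered so the originating VM never receives its
      own transmission back.
    """

    def __init__(self, vm_macs):
        self.ports = dict(vm_macs)  # MAC -> local virtual port name

    def from_vm(self, frame):
        # No local VM-to-VM switching: everything is sent upstream.
        return ("uplink", frame)

    def from_uplink(self, frame):
        src, dst = frame["src"], frame["dst"]
        if dst == "ff:ff:ff:ff:ff:ff":
            # Deliver to every local port except the original sender.
            return [port for mac, port in self.ports.items() if mac != src]
        return [self.ports[dst]] if dst in self.ports else []

vepa = Vepa({"aa::01": "vm1", "aa::02": "vm2"})
print(vepa.from_vm({"src": "aa::01", "dst": "aa::02"})[0])
print(vepa.from_uplink({"src": "aa::01", "dst": "ff:ff:ff:ff:ff:ff"}))
```

The contrast with a VEB is the `from_vm` rule: a VEB would look up the destination locally and switch VM-to-VM traffic inside the server instead of sending it upstream.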

Multichannel technology

During the EVB standards development process, scenarios were identified in which VEPA could be enhanced with some form of standard tagging mechanism. Administrators may need certain virtualized applications to use VEB switches for their performance and other virtualized applications to use EVB (VEPA) switches for their network manageability, all in the same physical server. Many hypervisors also support directly mapping a physical NIC to a virtual machine, but then only that virtual machine may make use of the network connection/port, thus consuming valuable network connection resources. To address these scenarios, an optional "Multichannel" technology, complementary to VEPA, was proposed by HP and accepted by the IEEE 802.1 Work Group for inclusion in the IEEE 802.1Qbg EVB standard.

Multichannel allows the traffic on a physical network connection or port (like a NIC device) to be logically separated into multiple channels, as if they were independent, parallel connections to the external network. Each logical channel operates as an independent connection to the external network, and each can be assigned to any type of virtual switch (VEB, VEPA, and so on) or directly mapped to any virtual machine within the server. Multichannel uses existing Service VLAN tags ("S-Tags"), which were standardized in IEEE 802.1ad (commonly referred to as the "Provider Bridge" or "Q-in-Q" standard), and incorporates VLAN IDs in these extra tags to represent the logical channels of the physical network connection. Multichannel technology thus allows external physical switches to identify which virtual switch, or direct-mapped virtual machine, traffic is coming from.

This mechanism provides varied support:
• Multiple VEB and/or EVB (VEPA) virtual switches sharing the same physical network connection to external networks.
• Directly mapping a virtual machine to a physical network connection or port while allowing that connection to be shared by different types of virtual switches.
• Directly mapping a virtual machine that requires promiscuous mode operation (such as traffic monitors, firewalls, and virus detection software) to a logical channel on a network connection/port. Promiscuous mode lets a NIC forward all packets to the application, regardless of destination MAC addresses or tags. Allowing virtualized applications to operate in promiscuous mode lets the administrator use the external switch's more powerful promiscuous and mirroring capabilities to send the appropriate traffic to the applications, without overburdening the resources in the physical server.

In short, the Multichannel capability lets administrators establish multiple virtual switch types that share a physical network connection to the external networks. Figure 10 illustrates how Multichannel technology supports these capabilities within a virtualized server.
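As a concrete illustration of the S-Tag encoding that Multichannel builds on, the sketch below constructs a double-tagged (Q-in-Q) Ethernet frame. The TPID values 0x88A8 (802.1ad S-Tag) and 0x8100 (802.1Q C-Tag) are the standard ones; the MAC addresses, VLAN IDs, and helper functions are made up for illustration.

```python
import struct

TPID_STAG = 0x88A8  # IEEE 802.1ad Service tag ("Q-in-Q" outer tag)
TPID_CTAG = 0x8100  # IEEE 802.1Q Customer tag (inner tag)

def tag(tpid, pcp, dei, vid):
    # 16-bit TPID followed by 16-bit TCI: 3-bit PCP, 1-bit DEI, 12-bit VID
    return struct.pack("!HH", tpid, (pcp << 13) | (dei << 12) | vid)

def qinq_frame(dst, src, channel_vid, customer_vid, payload):
    """Frame with an outer S-Tag carrying the logical-channel ID and an
    inner C-Tag carrying the ordinary customer VLAN (illustrative)."""
    return (dst + src
            + tag(TPID_STAG, 0, 0, channel_vid)
            + tag(TPID_CTAG, 0, 0, customer_vid)
            + payload)

frame = qinq_frame(b"\xff" * 6, b"\x02" + b"\x00" * 5, 5, 100, b"data")
print(frame[12:14].hex())  # outer TPID sits right after the two MAC addresses
```

An external switch reading bytes 12-14 sees 0x88A8 and can map the outer VID (here, channel 5) to a particular virtual switch or direct-mapped VM inside the server.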

Figure 10. Multichannel technology supports VEB, VEPA, and direct mapped VMs on a single NIC

The optional Multichannel capability requires S-Tags and "Q-in-Q" operation to be supported in the NICs and external switches, and, in some cases, it may require hardware upgrades. However, Multichannel does not have to be enabled to take advantage of simple VEPA operation, which can be implemented in almost all current virtual and external physical switches. Multichannel merely enables more complex virtual network configurations in servers using virtual machines. Multichannel technology allows IT architects to match the needs of their application requirements with the design of their specific network infrastructure: VEB for performance of VM-to-VM traffic, VEPA/EVB for management visibility of the VM-to-VM traffic, sharing of physical NICs with direct mapped virtual machines, and optimized support for promiscuous mode applications.

VN-Tag

VN-Tag technology from Cisco specifies a new Ethernet frame tag format that, unlike the basic VEPA technology, is not leveraged from, or built on, any existing IEEE-defined tagging format. A VN-Tag specifies the virtual machine source and destination ports for the frame and identifies the frame's broadcast domain. The VN-Tag is inserted and stripped either by VN-Tag enabled software virtual switches or by a VN-Tag enabled SR-IOV NIC. External switches enabled with VN-Tag use the tag information to provide network visibility to virtual NICs or ports in the VMs and to apply network policies.

Although using VN-Tags to create an EVB solves the problems of management visibility, implementing VN-Tags has significant shortcomings:
• VN-Tags do not leverage other existing standard tagging formats (such as IEEE 802.1Q, IEEE 802.1ad, or IEEE 802.1X tags).
• Hardware changes to NIC and switch devices are required rather than simple software upgrades for existing network devices. In other words, using VN-Tags requires new network products (NICs, switches, and software) enabled with VN-Tags.

• EVB implemented with VN-Tag and traditional VEB cannot coexist in virtualized servers, as they can with VEPA and Multichannel technology.

M-Tag and port extension

One scenario is not addressed by VEPA and Multichannel technology: port extension. Port Extenders are physical switches that have limited functionality and are essentially managed as a line card of the upstream physical switch. Generally, the terms "Port Extenders" and "Fabric Extenders" are synonymous. Products such as the Cisco Nexus Fabric Extenders and Universal Computing System (UCS) Fabric Extenders are examples of Port Extenders. Port Extenders require tagged frames; they use the information in these new tags to map the physical ports on the Fabric Extenders as virtual ports on the upstream switches, to control how they forward packets to or from upstream Nexus or UCS switches, and to replicate broadcast or multicast traffic.

Initially, the port extension technology was considered as part of the EVB standardization effort. Cisco proposed VN-Tag technology to the IEEE 802.1 Work Group not only as a solution for EVB, but also for use as the port extension tags. However, the IEEE 802.1 Work Group decided that the EVB and port extension technologies should be developed as independent standards. The work group chose the VEPA technology proposal over the VN-Tag proposal as the base of the EVB standard because of its use of existing standards and the minimal impact and changes required to existing network products and devices. Multichannel technology is also being included as an optional capability in the EVB standard to address the cases in which a standard tagging mechanism would enhance basic VEPA implementations. As the issues with VN-Tags have been debated in the work group, Cisco has modified its port extension proposal to use a tag format called "M-Tags" for the frames communicated between the Port Extenders and their upstream controlling switches. The IEEE 802.1 Work Group accepted the M-Tag technology proposal as the base of the Port Extension standard. At the time of this writing, it is unclear whether these M-Tags will incorporate the VN-Tag format or other, more standard tag formats already defined in the IEEE.

Status of EVB and related standards

As of this writing, the project authorization request (PAR) for the IEEE 802.1Qbg EVB standard has been approved, and the formal standardization process is beginning. Likewise, the PAR for the IEEE 802.1Qbh Bridge Port Extension standard has been approved, and its formal standardization process is beginning. It should be noted that a number of products on the market today, or that will be introduced shortly, rely on technologies that have not been adopted by the IEEE 802.1 Work Group. Customers in the process of evaluating equipment should understand how a particular vendor's products will evolve to become compliant with the developing EVB and related standards.

Summary

The server-to-network edge is a complicated place in both the physical and virtual infrastructures. Emerging converged network (iSCSI, FCoE, CEE/DCB) and virtual I/O (EVB, VEPA, and VEB) technologies will have distinct but overlapping effects on data center infrastructure design. As the number of server deployments with virtualized I/O continues to increase, the ability to converge physical network architectures should interact seamlessly with virtualized I/O. HP expects that these two seemingly disparate technologies will become more and more intertwined as data center architectures move forward.

The goal of converged networks is to simplify the physical infrastructure through server I/O consolidation. FCoE, CEE, and current-generation iSCSI are standards to achieve the long-unrealized potential of a single unified fabric. By implementing converged fabrics at the server-to-network edge first, customers gain benefits from reducing the physical number of cables, adapters, and switches at the edge, where there is the most traffic congestion and the most associated network equipment requirements and costs. As the number of server deployments using virtual machines continues to increase, there are important benefits to beginning to implement a converged fabric at the edge of the server network.

Virtual networking technologies are becoming increasingly important (and complex) because of the growing virtual networking infrastructure supporting virtual machine deployment and management. Virtual I/O technologies can be implemented in the software space by sharing physical I/O among multiple virtual servers (vSwitches). They can also be implemented in hardware by abstracting and partitioning I/O among one or more physical servers, for example by consolidating physical NICs and switch ports. New virtual I/O technologies to consider should be based on industry standards like VEPA and VEB. Customers will have a choice when it comes to implementation – for performance (VEB) or for management ease (VEPA/EVB).

As the data center infrastructure changes with converged networking and virtual I/O, the nature of I/O buses and adapters will continue to change. HP is well positioned to navigate these changes because of the skill set and intellectual property the company holds in servers, networking, storage, and management. Customers should understand how these standards and technologies intertwine at the edge of the network. New technologies should work within existing customer frameworks and organizational roles, not disrupt them.

HP has been working to solve the problems of complexity at the server edge for some time. HP is not only involved in the development of standards to make the data center infrastructure better; it has already developed products that simplify the server-to-network edge by reducing cables and physical infrastructure (BladeSystem), making the management of network connectivity easier (Virtual Connect), and aggregating network connectivity (VC Flex-10). HP provides the mechanisms to manage a converged infrastructure by simplifying configuration and operations with these products and technologies:

• HP BladeSystem c-Class – eliminates server cables through the use of mid-plane technology between server blades and interconnect modules.
• HP Virtual Connect – simplifies connection management by putting an abstraction layer between the servers and their external networks, allowing administrators to wire LAN/SAN connections once and streamlining the interaction between server, networking, and storage administrative groups.
• HP Virtual Connect Flex-10 – consolidates Ethernet connections, allowing administrators to partition a single 10 Gb Ethernet port into multiple individual server NIC connections. Flex-10 helps organizations make more efficient use of available network resources and reduce infrastructure costs.
• HP BladeSystem Matrix – implements the HP Converged Infrastructure strategy by using a shared services model to integrate pools of compute, storage, and network resources. HP BladeSystem Matrix embodies HP's vision for data center management: management that is application-centric, with the ability to provision and manage the infrastructure to support business-critical applications across an enterprise. The Matrix management console, built on HP Insight Dynamics, combines automated provisioning, capacity planning, disaster recovery, and a self-service portal to simplify the way data center resources are managed.

With respect to FCoE in particular, HP plans to broaden Virtual Connect Flex-10 technology to provide solutions for converging different network protocols with its HP FlexFabric technology. HP plans to deliver the FlexFabric vision by converging the technology, management tools, and partner ecosystems of the HP ProCurve and Virtual Connect network portfolios into a virtualized fabric for the data center. HP FlexFabric technology will let an IT organization exploit the benefits of server, storage, and network virtualization going forward.

Through management technologies such as those built into the HP BladeSystem Matrix, customers can use consistent processes to configure, provision, and manage the infrastructure, regardless of the type of technologies at the server-to-network edge: FCoE, iSCSI, VEB virtual switches, or EVB virtual switch technologies. Customers can choose the converged infrastructure and virtualization technologies that best suit their needs, and manage them with consistent software tools across the data center.

Appendix: Understanding SR-IOV and Virtual Connect Flex-10

The following information is taken from "HP Flex-10 and SR-IOV—What is the difference?" ISS Technology Update, Volume 8, Number 3, April 2009: www.hp.com/servers/technology

Improving I/O performance

As virtual machine software enables higher efficiencies in CPU use, these same efficiency enablers place more overhead on physical assets. HP Virtual Connect Flex-10 technology and Single Root I/O Virtualization (SR-IOV) both share the goal of improving I/O efficiency without increasing the overhead burden on CPUs and network hardware, but they accomplish this goal through different approaches. This article explores the differences in architecture and implementation between Flex-10 and SR-IOV.

In the current I/O architecture, data communicated to and from a guest OS is routinely processed by the hypervisor. Hypervisors are essentially another operating system, providing CPU, memory, and I/O virtualization capabilities to accomplish resource management and data processing functions. Both outgoing and incoming data are transformed into a format that can be understood by a physical device driver, the data destination is determined, and the appropriate send or receive buffers are posted. All of this processing requires a great deal of data copying, and the entire process creates serious performance overhead. The data processing functionality places the hypervisor squarely in the main data path.

HP Virtual Connect Flex-10 technology

As described in the body of this technology brief, HP Virtual Connect Flex-10 technology is a hardware-based solution that enables users to partition a 10 gigabit Ethernet (10GbE) connection and control the bandwidth of each partition in increments of 100 Mb. Administrators can configure a single BladeSystem 10 Gb network port to represent multiple physical network interface controllers (NICs), also called FlexNICs, with a total bandwidth of 10 Gbps. These FlexNICs appear to the operating system (OS) as discrete NICs, each with its own driver. While the FlexNICs share the same physical port, traffic flow for each one is isolated with its own MAC address and virtual local area network (VLAN) tags between the FlexNIC and the VC Flex-10 interconnect module. Using the VC interface, an administrator can set and control the transmit bandwidth available to each FlexNIC. The initial Flex-10 offering is based on the original PCIe definition, which is limited to 8 PCI functions per device (4 FlexNICs per 10 Gb port on a dual-port device).

Single Root I/O Virtualization (SR-IOV)

The ability of SR-IOV to scale the number of functions is a major advantage. With SR-IOV, a capability called Alternative Route Identifiers (ARI) allows expansion to up to 256 PCIe functions. The scalability inherent in the SR-IOV architecture has the potential to increase server consolidation and performance. Another SR-IOV advantage is the prospect of performance gains achieved by removing the hypervisor's vSwitch from the main data path. With SR-IOV, the hypervisor is no longer required to process, route, and buffer outgoing and incoming packets. Instead, the SR-IOV architecture exposes the underlying hardware to the guest OS, allowing its virtual NIC to transfer data directly between the SR-IOV NIC hardware and the guest OS memory space. This removes the hypervisor from the main data path and eliminates a great deal of performance overhead. The hypervisor continues to allocate resources and handle exception conditions, but it is no longer required to perform routine data processing. Since SR-IOV is a hardware I/O implementation, it also uses hardware-based security and quality of service (QoS) features incorporated into the physical host server.
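The Flex-10 partitioning rules described above (at most 4 FlexNICs per 10 Gb port, bandwidth set in 100 Mb increments) can be captured in a small validation helper. This Python function is purely illustrative, not an HP tool, and it only checks the constraints stated in this brief:

```python
def validate_flex10_partition(mbps_per_flexnic):
    """Check a Flex-10-style split of one 10GbE port into FlexNICs.

    Illustrative rules, as described above: at most 4 FlexNICs per
    10 Gb port, each allocation a positive multiple of 100 Mb, and the
    total not exceeding the 10 000 Mb the physical port provides.
    """
    assert len(mbps_per_flexnic) <= 4, "max 4 FlexNICs per 10 Gb port"
    assert all(m > 0 and m % 100 == 0 for m in mbps_per_flexnic), \
        "transmit bandwidth is set in 100 Mb increments"
    assert sum(mbps_per_flexnic) <= 10_000, "cannot exceed the 10 Gb port"
    return True

print(validate_flex10_partition([4000, 3000, 2000, 1000]))
```

For example, splitting the port 4/3/2/1 Gb across four FlexNICs passes, while asking for 9 Gb plus 2 Gb fails the total-bandwidth check.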

Advantages of using Virtual Connect Flex-10

Flex-10 technology offers significant advantages over other 10 Gb devices that provide large bandwidth but do not provide segmentation. The ability to adjust transmit bandwidth by partitioning data flow makes 10GbE more cost effective and easier to manage, and it becomes easier to aggregate multiple 1 Gb data flows and fully utilize 10 Gb bandwidth. Significant infrastructure savings are also realized, since additional server NIC mezzanine cards and associated interconnect modules may not be needed. The fact that Flex-10 is hardware based means that multiple FlexNICs are added without the additional processor overhead or latency associated with server virtualization (virtual machines). See http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01608922/c01608922.pdf for more detailed information on Flex-10.

It is important to note that Flex-10 is an available HP technology for ProLiant BladeSystem servers, while SR-IOV is a released PCI-SIG specification at the beginning of the execution and adoption cycle. SR-IOV cannot operate in current environments without significant changes to I/O infrastructure and the introduction of new management software. Once accomplished, these changes in infrastructure and software would let SR-IOV data handling operate in a much more native and direct manner, reducing processing overhead and enabling highly scalable PCI functionality. For more information on the SR-IOV standard and industry support for it, go to the Peripheral Component Interconnect Special Interest Group (PCI-SIG) site: www.pcisig.com.
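As a practical aside on how SR-IOV adoption eventually played out: Linux kernels (3.8 and later, well after this brief was written) expose a standard sysfs interface for creating virtual functions on an SR-IOV capable NIC. The sketch below assumes that interface; the interface name "eth0" and the helper functions are illustrative, not part of any vendor tool.

```python
from pathlib import Path

def sriov_paths(ifname):
    """sysfs attributes exposed by an SR-IOV physical function on Linux 3.8+."""
    dev = Path("/sys/class/net") / ifname / "device"
    return dev / "sriov_totalvfs", dev / "sriov_numvfs"

def enable_vfs(ifname, count):
    """Create `count` virtual functions on `ifname`.

    Requires root and an SR-IOV capable NIC whose driver supports the
    sysfs interface; illustrative helper, not a kernel or HP tool.
    """
    total_path, num_path = sriov_paths(ifname)
    if not total_path.exists():
        raise RuntimeError(f"{ifname}: NIC or driver does not support SR-IOV")
    total = int(total_path.read_text())
    if count > total:
        raise ValueError(f"device supports at most {total} VFs")
    num_path.write_text(str(count))  # VFs then appear as new PCIe functions

# e.g. enable_vfs("eth0", 4)  # run on a host with an SR-IOV NIC, as root
```

Each created virtual function shows up as its own PCIe function (and network device) that can be passed through to a guest, which is exactly the direct I/O model described in this appendix.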

For more information

For additional information, refer to the resources listed below.

Resource description – Web address
• HP Industry Standard Server technology communications – www.hp.com/servers/technology
• HP Virtual Connect technology – www.hp.com/go/virtualconnect
• IEEE 802.1 Work Group website – www.ieee802.org/1/
• IEEE Data Center Bridging standards:
  – 802.1Qau, Quantized Congestion Notification – www.ieee802.org/1/pages/802.1au.html
  – 802.1Qbb, Priority-based Flow Control – www.ieee802.org/1/pages/802.1bb.html
  – 802.1Qaz, Data Center Bridging Exchange Protocol and Enhanced Transmission Selection – www.ieee802.org/1/pages/802.1az.html
• IEEE Edge Virtual Bridging standard, 802.1Qbg – www.ieee802.org/1/pages/802.1bg.html
• INCITS T11 Home Page (FCoE standard) – www.t11.org

Call to action

Send comments about this paper to TechCom@HP.com.
Follow us on Twitter: http://www.twitter.com/HPISSTechComm

© 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Linux is a U.S. registered trademark of Linus Torvalds.

TC100301TB, March 2010

