
IT@Intel White Paper

Intel IT
IT Best Practices
Data Center Solutions
May 2012

Using Converged Network Adapters for FCoE Network Unification

Executive Overview

Switching to FCoE deployment using dual-port Intel Ethernet X520 Server Adapters would yield a greater than 50-percent reduction in total network costs per server rack.

To unify local area network and storage area network infrastructure over a single fabric, Intel IT evaluated the price and performance advantages of switching to a Fibre Channel over Ethernet (FCoE) deployment using converged network adapters (CNAs). In comparing a CNA solution to a two-card solution, consisting of a 10 Gigabit Ethernet network interface card and a Fibre Channel (FC) host bus adapter (HBA), we determined that CNAs provide a technically sound, cost-effective solution for network unification in our virtualized environment.
Craig Pierce
System Engineer, Intel Architecture Systems Integration (IASI)
Sanjay Rungta
Senior Principal Engineer, Intel IT
Sreeram Sammeta
System Engineer, IASI
Terry Yoshii
Research Staff, Intel IT

Our evaluation revealed that switching to FCoE deployment using the dual-port 10-gigabit (Gb) Intel Ethernet Server Adapter X520 series would provide the following benefits:
• Reduces total network costs per server rack by more than 50 percent
• Delivers FC performance comparable to and, in some cases, exceeding the performance of an FC HBA

• Enables controllable prioritization for LAN or SAN traffic through network Quality of Service mechanisms that regulate contention for bandwidth
• Allows application loads to drive nearly 90-percent full bi-directional bandwidth utilization on FCoE or TCP/IP
• Provides an open solution that, taking advantage of native FCoE initiators in operating systems, offers an efficient and cost-effective alternative to more expensive proprietary CNAs that offload FC and FCoE processing onto the adapter

To realize these advantages, Intel IT plans to use FCoE configurations and dual-port 10-Gb Intel Ethernet X520 Server Adapters in new and replacement servers in our virtualized environment.


Contents

Executive Overview
Background
  How FCoE Works
Solution
  Converged Network Adapter Test Goals and Challenges
  Storage Performance Comparison: Throughput
  Storage Performance Comparison: Response Time
  Storage Performance Comparison: CPU Effectiveness
  Baseline Bandwidth Utilization Quality-of-Service Testing
  Quality-of-Service Testing
  Storage I/O Latency during Quality-of-Service Testing
Conclusion
Acronyms

IT@Intel
The IT@Intel program connects IT professionals around the world with their peers inside our organization, sharing lessons learned, methods, and strategies. Our goal is simple: share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/IT or contact your local Intel representative if you'd like to learn more.

Background

For years, many organizations have run two separate, parallel networks in their data center. To connect servers and clients, as well as connect to the Internet, they use an Ethernet-based local area network (LAN). For connecting servers to the storage area network (SAN) and the block storage arrays used for storing data, they use a Fibre Channel (FC)-based network.

In late 2010, Intel IT decided our existing 1-gigabit Ethernet (1GbE) network infrastructure was no longer adequate to meet Intel's rapidly growing business requirements and the resource demands of our increasingly virtualized environment. We needed a more cost-effective solution that unifies our fabric, provides equal or better performance, and enables traffic prioritization through quality of service (QoS) mechanisms. Our solution: a 10GbE infrastructure, which we deployed initially for connection to the local area network (LAN) and to host bus adapters (HBAs) that connect to our FC-based storage area network (SAN).
The recent development of Fibre Channel over Ethernet (FCoE), a storage protocol that enables FC communications to run directly over Ethernet, provides a way for companies to consolidate these networks into a single common network infrastructure. By unifying LANs and SANs, eliminating redundant switches, and reducing cabling and network interface card (NIC) counts, an FCoE server adapter, specifically the dual-port 10-Gb Intel Ethernet X520 Server Adapter, can:
• Reduce capital expenditures (CapEx)
• Cut power and cooling costs by reducing the number of components
• Simplify administration by reducing the number of devices that need to be managed

In addition, network unification using FCoE reduces operating expenditures. For a large IT organization such as Intel IT, this is a significant advantage. Our 87 data centers support a massive worldwide computing environment that houses approximately 90,000 servers.
The timing of FCoE's release is advantageous as well. We recently evaluated Intel's existing 1GbE network infrastructure and found it inadequate to meet Intel's rapidly growing business requirements and the increasing demands they place on data center resources.1 Among the trends that supported our transition to a 10GbE data center fabric design are:
• The escalating data handling demands created by increasing compute density in our Design computing domain and by large-scale virtualization in our Office and Enterprise computing domain
• The need to match network performance demands with the increasing gains in file server performance due to the latest high-performance Intel processors and clustering technologies. The network, not the file servers, was the limiting factor in supporting faster throughput.
• A 40-percent annual growth in Internet connection requirements, driving a need for greater bandwidth throughout the organization

When we found we could reduce our network total cost of ownership by as much as 18 to 25 percent, we began the transition to a 10GbE data center fabric design in 2011.
In our initial implementation, we used 10GbE
NICs to connect servers to the LAN. To
connect to the FC-based SAN, we used HBAs.
At the time, we viewed this as a temporary
solution until we could fully evaluate the use
of converged network adapters (CNAs) to
enable a unified network fabric through FCoE.

How FCoE Works

FCoE is essentially an extension of FC over a different link layer transport. FCoE maps FC over Ethernet while remaining independent of the Ethernet forwarding scheme. This enables FC to use 10GbE networks while preserving the FC protocol. The new data center bridging (DCB) specification also alleviates two previous concerns with carrying storage traffic over 10-Gb Ethernet: packet loss and latency.
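To make the encapsulation concrete, the sketch below models an FCoE frame in Python: a complete FC frame becomes the payload of an ordinary Ethernet frame carrying the FCoE EtherType (0x8906). This is an illustrative model only, not the data structures used by the adapter or by any operating system initiator.

from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE traffic

@dataclass
class FCFrame:
    """Simplified Fibre Channel frame: addressing fields plus payload."""
    source_id: int       # S_ID of the sending FC node
    destination_id: int  # D_ID of the receiving FC node
    payload: bytes       # SCSI command or data

@dataclass
class EthernetFrame:
    """Simplified Ethernet frame carrying an encapsulated FC frame."""
    dest_mac: str
    src_mac: str
    ethertype: int
    payload: FCFrame

def encapsulate(fc_frame: FCFrame, src_mac: str, dest_mac: str) -> EthernetFrame:
    # FCoE carries the FC frame as the Ethernet payload unchanged;
    # the FC protocol itself is preserved end to end.
    return EthernetFrame(dest_mac, src_mac, FCOE_ETHERTYPE, fc_frame)

# Example: wrap an FC frame for transport over a lossless 10GbE/DCB link.
frame = encapsulate(FCFrame(0x010203, 0x040506, b"SCSI READ"),
                    src_mac="aa:bb:cc:dd:ee:01", dest_mac="aa:bb:cc:dd:ee:02")
print(hex(frame.ethertype))  # 0x8906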
Servers connect to FCoE with CNAs, which combine both FC HBA and Ethernet NIC functionality on a single adapter card. Use of FCoE and CNAs consolidates network TCP/IP and SAN data traffic on a unified network.

1 We address in detail our work evaluating the advantages to Intel of upgrading to 10GbE network infrastructure in the IT@Intel white paper "Upgrading Data Center Network Architecture to 10 Gigabit Ethernet." Intel Corporation, January 2011.

Fibre Channel over Ethernet (FCoE)

One disadvantage of Fibre Channel (FC) as a network technology for storage area networks (SANs) is that it's incompatible with Ethernet, the dominant server networking technology. Although SANs and Ethernet networks perform substantively the same function, they use different technologies and thus are entirely separate physical networks, with separate switches, cabling, networking hardware (such as network interface cards and host bus adapters), and connections into each server. The expertise required to support each network is also different.

For Intel IT, a unified fabric using CNAs represents an important next step in the ongoing effort to make the conversion of our data center architecture to 10GbE connections as cost-efficient as possible.

Solution

Current industry standards combine server and storage networks over a single unified fabric. The most prominent of these is FCoE. This specification preserves the storage-specific benefits of FC and allows it to be transported over Ethernet in the enterprise. Under FCoE, the Ethernet and FC protocols are merged, enabling organizations to deploy a single, converged infrastructure carrying both SAN and server TCP/IP network traffic. FCoE preserves all FC constructs, providing reliable delivery, while preserving and interoperating with an organization's investment in FC SANs, equipment, tools, and training.

We examined two approaches to providing a converged adapter. One approach uses an open solution that takes advantage of native FCoE initiators in operating systems, enabling the adapter to work in a complementary way with the platform hardware and the operating system. This approach provides a cost-effective way to handle both FCoE and TCP/IP traffic (see Figure 1).

[Figure 1 diagram: the host stack (file system, multipath I/O, iSCSI, TCP/IP, device drivers, and the native operating system or Intel FCoE initiator and management) running over Intel Ethernet base drivers with data center bridging, and the adapter's FCoE offloads, traffic classes and queues, quality of service engine, classification and prioritization engine, and converged ports carrying both LAN and SAN traffic.]

Figure 1. This diagram shows a unified networking Fibre Channel over Ethernet (FCoE) configuration. The converged network adapter uses native FCoE initiators in the operating system to work with the platform hardware and operating system, handling FCoE traffic at a lower cost than a hardware-based solution.



Quality of Service (QoS)

In computer networking, QoS resource reservation control mechanisms prioritize different applications, users, or data flows to ensure a certain level of performance. For example, a particular bit rate, along with limits on delay, jitter, packet-dropping probability, and bit error rate, may be guaranteed for an application that is delay- or loss-sensitive, such as storage or video. For such applications, QoS guarantees are important when network capacity is insufficient for the concurrent data flows.

Average I/O Operations per Second (67% Read; 100% Random)

I/O Block Size    4 KB      8 KB      16 KB     32 KB     64 KB     128 KB
FCP               84,841    77,741    65,110    42,440    26,710    13,510
FCoE              75,045    70,051    63,491    51,724    34,414    18,272

Figure 2. Using a software driver to handle Fibre Channel over Ethernet, the dual-port 10-Gb Intel Ethernet X520 Server Adapter delivered 21.9 to 35.2 percent higher throughput running at 32-kilobyte (KB) to 128-KB block I/O sizes compared to a dual-port 8-Gb Fibre Channel host bus adapter.


The second approach uses custom hardware adapters with FC and FCoE (and in some cases TCP/IP) protocol processing embedded in the hardware. With this approach, the CNA converts FCoE traffic to FC packets in the hardware, and the CNA manufacturer provides its custom drivers, interfaces, and management software. The result is a more expensive adapter.

Intel IT decided to test the first approach, which is software driver-based, because it would provide a robust, scalable, high-performance server connectivity solution without the expensive, custom hardware. What's more, we knew that recent improvements in some of the latest hypervisors now enable them to work with FCoE traffic. This meant the software driver-based approach could be used in virtualized environments.

Converged Network Adapter Test Goals and Challenges

Recognizing cost as an important factor associated with an adapter that could ultimately be used on hundreds of servers, our goal was to see how a CNA using a software driver for FCoE processing would work for Intel's data centers. To find out, we put dual-port 10-Gb Intel Ethernet X520 Server Adapters, which support FCoE, in two Intel Xeon processor 5600 series-based servers and ran a variety of performance tests. We wanted to see if such a CNA could address the following challenges:
• Provide storage performance (throughput, response time, CPU effectiveness, and latency) comparable to an FC HBA
• Drive the network load and the storage load to their peaks when each is run in isolation
• Be regulated through appropriate QoS mechanisms (see sidebar) that assign higher or lower priority to either kind of traffic, network or storage. This is a critical capability because a converged network must be able to protect storage traffic from any other traffic. Today's SAN networks provide this type of isolation through a separate infrastructure.
• Serve as a cost-effective, technically viable solution for network unification in our virtualized environment

Storage Performance Comparison: Throughput

In the first test, the CNA using a software driver for FCoE processing had to deliver storage-processing performance equivalent or similar to that of the dual-port 8-Gb Fibre Channel HBAs we were replacing. To determine this, we ran a series of tests using different I/O block sizes, from 4 kilobytes (KB) to 128 KB, through both types of adapters. We deliberately avoided saturating the hosts, network, and storage arrays in these tests by specifying a cache-intensive load, with a 5-megabyte (MB) working set size, and by driving moderately high I/O from 16 workers to four logical unit numbers at a queue depth of 1 from a single host.
Tests showed negligible differences at the smaller block sizes. The Fibre Channel Protocol (FCP) performance of the dual-port 8-Gb FC HBA was 13.1 to 2.5 percent higher than the FCoE performance of the dual-port Intel Ethernet X520 Server Adapter running at 4-KB to 16-KB I/O block sizes (see Figure 2). More significant was the difference at the higher I/O block sizes. The FCoE performance of the dual-port 10-Gb Intel Ethernet X520 Server Adapter was 21.9 to 35.2 percent higher running at 32-KB to 128-KB I/O block sizes, block sizes typical of SQL Server* and various media applications. The major portion of this performance difference is attributable to the clocking rate difference between the dual-port 10-Gb Intel Ethernet X520 Server Adapter and the dual-port 8-Gb FC HBA (that is, more bits per second can be transmitted by a 10-Gb card than by an 8-Gb card). Additional performance advantages were derived from other efficiencies in the software stack.
We also measured bandwidth utilization in average megabytes per second (MBps). The results confirmed our findings from the throughput performance data in Figure 2. The dual-port 10-Gb Intel Ethernet X520 Server Adapter showed a negligible disadvantage on the smaller I/O block sizes, but provided greater throughput on the I/O block sizes over 16 KB. For instance, when processing 128-KB I/O blocks, the 10-Gb Intel Ethernet X520 Server Adapter performed read/write processing at a rate of 2,284 MBps compared to 1,688 MBps by the 8-Gb FC HBA.
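The bandwidth figures follow directly from the IOPS results in Figure 2: throughput in MBps is simply the I/O rate multiplied by the block size. A minimal check of the 128-KB numbers, using the IOPS values from Figure 2:

# Throughput (MBps) = IOPS x block size, with block size expressed in MB.
def throughput_mbps(iops: float, block_kb: float) -> float:
    return iops * block_kb / 1024  # 1 MB = 1024 KB

print(f"{throughput_mbps(18_272, 128):.0f} MBps")  # FCoE at 128 KB: ~2,284 MBps
print(f"{throughput_mbps(13_510, 128):.0f} MBps")  # FCP at 128 KB: ~1,689, reported as 1,688 MBps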

Storage Performance Comparison: Response Time

Another way to measure performance is response time in milliseconds. This is an important factor in application performance for I/O-intensive applications. Here again, we saw negligible differences at the smaller block sizes, with the FCP performance of the dual-port 8-Gb Fibre Channel HBA providing slightly faster response times at the 4-KB to 16-KB I/O block sizes (see Figure 3). At 32-KB I/O block sizes and greater, the software-driven FCoE processing of the dual-port 10-Gb Intel Ethernet X520 Server Adapter began to show significantly faster performance. In fact, at the 128-KB block size, it delivered a 35-percent faster response.

Storage Performance Comparison: CPU Effectiveness

One concern with using a CNA that takes a software driver-based approach to processing FCoE traffic is that it could consume more central processing unit (CPU) resources and effectively decrease application performance or reduce the consolidation ratio for server virtualization. Our tests showed this is not the case (see Figure 4). When we measured how many I/O operations per second are processed by each percent of a CPU, we found that the dual-port 10-Gb Intel Ethernet X520 Server Adapter actually performed better than the dual-port 8-Gb FC HBA. This data suggests that we could consolidate as many virtual machines (VMs) on a host (or perhaps even more) using the 10-Gb Intel Ethernet X520 Server Adapter as we do with the HBAs we're currently using.
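CPU effectiveness here is simply IOPS divided by the CPU utilization percentage measured during a run. The sketch below shows the metric and the relative advantage at 4-KB blocks computed from the effectiveness values in Figure 4; the raw IOPS and CPU figures in the example are hypothetical, since the paper reports only the resulting ratios.

# CPU effectiveness = IOPS processed per percent of CPU utilization.
def cpu_effectiveness(iops: float, cpu_util_percent: float) -> float:
    return iops / cpu_util_percent

# Hypothetical raw measurement, for illustration only (not from the paper):
# 92,120 IOPS at 40 percent CPU scores 2,303 IOPS per CPU percent.
print(cpu_effectiveness(92_120, 40.0))  # 2303.0

# Relative advantage of the FCoE software-driver path at 4-KB blocks,
# using the effectiveness values reported in Figure 4:
print(f"{100 * 2_424 / 2_303:.1f}%")  # ~105.3%, matching the FCoE/FCP row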

Baseline Bandwidth Utilization Quality-of-Service Testing

To perform the planned QoS testing, we next needed to ensure that our setup could drive the 10-Gb Intel Ethernet X520 Server Adapter with enough network and storage traffic to use more than 90 percent of the maximum aggregate bandwidth across both ports of the CNA.

Average Response Time per Command: 8-Gb FCP vs. 10GbE FCoE Performance (67% Read; 100% Random), in milliseconds

I/O Block Size    4 KB    8 KB    16 KB   32 KB   64 KB   128 KB
FCP               0.75    0.82    0.98    1.53    2.39    4.74
FCoE              0.85    0.91    1.01    1.24    1.86    3.50

Figure 3. Using a software driver to handle Fibre Channel over Ethernet, the dual-port 10-Gb Intel Ethernet X520 Server Adapter recorded increasingly better response times at the 32-KB to 128-KB I/O block sizes than the dual-port 8-Gb Fibre Channel host bus adapter running the Fibre Channel Protocol in its hardware.

CPU Effectiveness (67% Read; 100% Random), in I/O operations per second (IOPS) per percent of CPU utilization

I/O Block Size    4 KB      8 KB      16 KB     32 KB     64 KB     128 KB
FCP               2,303     2,281     2,177     1,938     1,618     1,291
FCoE              2,424     2,407     2,271     2,110     1,769     1,369
FCoE/FCP          105.3%    105.5%    104.3%    108.9%    109.3%    106%

Figure 4. In a measure of central processing unit (CPU) effectiveness, that is, how many I/O operations per second are processed per percent of CPU, the dual-port 10-Gb Intel Ethernet X520 Server Adapter using a software driver to handle Fibre Channel over Ethernet performed better than the dual-port 8-Gb Fibre Channel host bus adapter.



[Figure 5 chart: TCP/IP network load applied for QoS tests; maximum network Mbps and total CPU utilization over time; 2 x 10GbE ports, 512-KB I/O, 50% read/50% write.]

Figure 5. With our test setup, the 10-Gb Intel Ethernet X520 Server Adapter achieved approximately 77.3 percent of the theoretical maximum bandwidth on network traffic during the test period.

[Figure 6 chart: Fibre Channel over Ethernet (FCoE) storage load applied for QoS tests; maximum FCoE MBps and total CPU utilization over time; 2 x 10GbE ports, 512-KB I/O, 50% read/50% write.]

Figure 6. With our test setup, the 10-Gb Intel Ethernet X520 Server Adapter achieved approximately 80.5 percent of the theoretical maximum bandwidth on storage traffic during the test period.

The following TCP/IP network and FCoE workloads were applied as the baseline for the QoS tests:
• A TCP/IP network load of 77.3 percent of the possible maximum bandwidth between two hosts, generated by running IOmeter on a single VM on a standard Type 1 hypervisor and measured at one host (see Figure 5)
• An FCoE storage load of 80.5 percent of the possible maximum bandwidth, created by running IOmeter on a single VM on a standard Type 1 hypervisor through FCoE drivers to the CNA and then through a switch to a storage array. The FCoE load was measured at the host level (see Figure 6).
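For reference, these utilization percentages are relative to the CNA's theoretical maximum bi-directional bandwidth across both ports. A minimal sketch of that arithmetic, assuming the same 40,000-Mbps maximum cited for Figures 7 and 8; the measured totals shown are back-computed from the reported percentages for illustration.

# Theoretical maximum: 2 ports x 10 Gbps per port x 2 directions (full duplex).
ports, line_rate_mbps, directions = 2, 10_000, 2
max_bidirectional_mbps = ports * line_rate_mbps * directions  # 40,000 Mbps

def utilization(measured_mbps: float) -> float:
    return 100 * measured_mbps / max_bidirectional_mbps

# Back-computed illustrative loads: ~30,920 Mbps of TCP/IP traffic and
# ~32,200 Mbps (about 4,025 MBps) of FCoE traffic.
print(f"{utilization(30_920):.1f}%")  # 77.3% (network baseline)
print(f"{utilization(32_200):.1f}%")  # 80.5% (storage baseline)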

Quality-of-Service Testing

We ran QoS tests to ensure that storage traffic would always be guaranteed the bandwidth required for its performance. To perform these tests, we ran both the network and storage loads previously established in our baseline bandwidth utilization tests (Figures 5 and 6). By running both network and storage loads, we were able to consume over 90 percent of the CNA's maximum bandwidth.

We began with a QoS setting allocating 90 percent of the bandwidth of queue resources to storage (FCoE) traffic and 10 percent of the bandwidth of queue resources to network (TCP/IP) traffic through QoS configuration on the network switch. As we assigned an increasingly higher percentage of resource bandwidth to network traffic, we were able to demonstrate the ability to regulate each traffic type while maintaining greater than 90-percent maximum bandwidth utilization (see Figure 7).
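The switch-side QoS mechanism allocates minimum bandwidth shares to traffic classes, in the manner of the DCB enhanced transmission selection (ETS) scheme. The sketch below is a deliberately simplified, work-conserving model of that behavior, not the switch's actual scheduler, and it ignores the lossless flow control applied to FCoE, which is why the measured FCoE share in Figure 7 never falls as low as the configured minimum. The offered loads in the sweep are illustrative values.

def ets_allocate(guarantees: dict, offered: dict, capacity_mbps: float) -> dict:
    """Simplified ETS-style allocation: honor per-class minimum guarantees,
    then hand unused capacity to classes that still have demand."""
    granted = {c: min(offered[c], guarantees[c] * capacity_mbps) for c in offered}
    leftover = capacity_mbps - sum(granted.values())
    for c in granted:
        extra = min(offered[c] - granted[c], leftover)
        granted[c] += extra
        leftover -= extra
    return granted

# Sweep the same settings used in the test, with both classes offering
# more traffic than their guaranteed share.
for fcoe_pct in (0.9, 0.7, 0.5, 0.3, 0.1):
    shares = ets_allocate({"FCoE": fcoe_pct, "TCP/IP": 1 - fcoe_pct},
                          {"FCoE": 36_000, "TCP/IP": 36_000}, 40_000)
    print(f"{int(fcoe_pct * 100)}% FCoE guarantee -> {shares}")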
Figure 7 shows that storage traffic will always be protected from network congestion. Even when FCoE was limited to 10 percent of bandwidth in the QoS setting, it received 38 percent of the maximum bandwidth due to the no-drop characteristic of the queues assigned to FCoE on the network switches. To preserve the closed-loop congestion management within the FC SAN and host, rather than slowing source-target traffic to match the configuration, the system ensures that storage packets are not dropped. This mechanism brings a lossless delivery characteristic to Ethernet to guarantee the delivery of storage packets. Such protection from busy TCP/IP traffic is vital for converged network adoption.

Based on the loads applied in the QoS test, we reached a crossover point where traffic is utilized equally at a 20/80 setting: 20 percent of the bandwidth is allocated to storage traffic and 80 percent of the bandwidth is allocated to network traffic (see Figure 8).
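The crossover can also be estimated from the per-setting throughput reported in Figure 8: linear interpolation between the two settings where the FCoE and TCP/IP curves swap order lands in the neighborhood of the roughly 20/80 point cited above. A minimal sketch using the Figure 8 data:

# Measured Mbps at each QoS setting (percent of resources allocated to FCoE),
# taken from Figure 8.
settings = [90, 70, 50, 30, 10]
fcoe =     [32_215, 27_203, 22_859, 19_372, 15_429]
tcpip =    [ 3_127,  8_401, 12_934, 16_438, 20_424]

# Find the interval where the FCoE-minus-TCP/IP difference changes sign
# and interpolate linearly within it.
for i in range(len(settings) - 1):
    d0, d1 = fcoe[i] - tcpip[i], fcoe[i + 1] - tcpip[i + 1]
    if d0 >= 0 > d1:
        crossover = settings[i] + (settings[i + 1] - settings[i]) * d0 / (d0 - d1)
        print(f"Curves cross near {crossover:.0f}% FCoE / {100 - crossover:.0f}% TCP/IP")
        # Prints roughly 23/77 by straight-line interpolation, close to the
        # approximately 20/80 crossover reported in Figure 8.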

[Figure 7 chart: Effects of Changing Network Quality of Service Settings. Aggregate bandwidth utilization in Mbps for FCoE, TCP/IP, and total traffic at QoS settings from 90% FCoE/10% TCP/IP to 10% FCoE/90% TCP/IP; maximum theoretical bi-directional bandwidth = 40,000 Mbps; total utilization remained at about 90 percent of maximum.]

Figure 7. Intel IT testing demonstrated that applying quality-of-service mechanisms can effectively balance bandwidth allocation between LAN and SAN traffic during congestion when using the 10-Gb Intel Ethernet X520 Server Adapter.


Increasing Bandwidth and Performance with the Intel Xeon Processor E5 Family

The performance tests Intel IT performed on the dual-port 10-Gb Intel Ethernet X520 Server Adapters were conducted on Intel Xeon processor 5600 series-based servers. Intel IT is now in the process of introducing Intel Xeon processor E5 series-based servers. We expect significantly improved performance from these servers due to several innovative features of the Intel Xeon processor E5-2600 family.

This processor family is the first server processor family to integrate PCI Express* (PCIe) 3.0 I/O. PCIe* 3.0 is estimated to double the interconnect bandwidth (8 gigatransfers per second) over the PCIe 2.0 specification. To get the most out of storage and networking in particular, the Intel Xeon processor E5-2600 family improves I/O to reduce latency and adds bandwidth to reduce data transfer bottlenecks.
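The "roughly double" estimate follows from the higher transfer rate combined with the more efficient line encoding PCIe 3.0 uses. A quick per-lane calculation using standard PCIe parameters (these encoding details come from the PCIe specifications, not from this paper):

# Per-lane throughput in bits per second: transfer rate x encoding efficiency.
pcie2 = 5e9 * (8 / 10)     # PCIe 2.0: 5 GT/s with 8b/10b encoding
pcie3 = 8e9 * (128 / 130)  # PCIe 3.0: 8 GT/s with 128b/130b encoding

print(f"PCIe 2.0: {pcie2 / 8 / 1e6:.0f} MB/s per lane")  # ~500 MB/s
print(f"PCIe 3.0: {pcie3 / 8 / 1e6:.0f} MB/s per lane")  # ~985 MB/s
print(f"Speedup: {pcie3 / pcie2:.2f}x")                  # ~1.97x, roughly double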

Storage I/O Latency during Quality-of-Service Testing

Figure 9 shows how latency increased as we throttled storage traffic. Latency went from approximately 20 milliseconds at the 90/10 ratio up to approximately 43 milliseconds as we reached the crossover point. While this is a significant increase, it's important to remember that our test was artificially driving up latency using the QoS mechanism. In real-world conditions, this kind of increase would be seen only when there is considerable network and storage traffic congestion. In such a case, an increase in latency is an acceptable tradeoff for an organization that wants to prioritize one type of traffic over another.
Comparing the costs of Intel's current use of FC HBAs and standard Ethernet NICs with those of an FCoE deployment, Intel IT found that CNAs using software drivers to handle FCoE processing can reduce implementation costs per rack by more than 50 percent (see Table 1). Thus, software driver-based CNAs provide not just a migration path to a unified FCoE network infrastructure, but also significant CapEx savings. What's more, because current multi-core Intel processor-based platforms can easily sustain two ports of 10GbE in these environments at well under 10-percent CPU utilization, there is more than enough headroom for this approach.2

2 "Intel 10 GbE Adapter Performance Evaluation for FCoE and iSCSI," evaluation report prepared under contract to Intel Corporation. Intel, September 2010.

Conclusion

Based on our test results, we are implementing a unified network infrastructure using FCoE through dual-port 10-Gb Intel Ethernet X520 Server Adapters.

Our tests demonstrate that a software driver-based CNA solution is an excellent choice for reducing costs and infrastructure complexity as we unify LAN and SAN infrastructure over a single fabric. The switch to FCoE deployment using this server adapter provides a greater than 50-percent reduction in total network costs per server rack, while matching and in many cases exceeding the performance of the dual-port 8-Gb FC HBAs we want to replace.

Our evaluation reveals that using 10-Gb Intel Ethernet X520 Server Adapters as our CNA solution provides the following benefits:
• Storage performance (throughput, response time, CPU effectiveness, and latency) that is comparable to, and frequently exceeds, that of an FC HBA
• Application loads that drive nearly 90-percent full bi-directional bandwidth utilization on FCoE or TCP/IP networks
• Control of prioritization for LAN or SAN traffic through network QoS mechanisms, allowing regulation of contention for bandwidth
• Reliance on native FCoE initiators in common operating systems to handle this processing using the host processor, instead of expensive proprietary CNAs that offload FC and FCoE processing onto the adapter

To realize these advantages, we plan to use FCoE configurations and 10-Gb Intel Ethernet X520 Server Adapters, or similar CNAs, in new servers we add to our virtualized environment or for those we replace in older configurations.

Bandwidth Utilization Data: Total Aggregate Bandwidth Utilization in Mbps (maximum theoretical bi-directional bandwidth = 40,000 Mbps)

Allocated Bandwidth/QoS    90% FCoE/     70% FCoE/     50% FCoE/     30% FCoE/     10% FCoE/
Resources Percent          10% TCP/IP    30% TCP/IP    50% TCP/IP    70% TCP/IP    90% TCP/IP
FCoE                       32,215        27,203        22,859        19,372        15,429
TCP/IP                      3,127         8,401        12,934        16,438        20,424
Total                      35,342        35,604        35,793        35,810        35,852

Figure 8. Intel IT tests show LAN and SAN traffic are equally utilized at 20-percent FCoE and 80-percent TCP/IP (network) loads.


[Figure 9 chart: average guest latency in milliseconds for combined Fibre Channel over Ethernet (FCoE) and TCP/IP (network) traffic at QoS settings from 90% FCoE/10% TCP/IP to 10% FCoE/90% TCP/IP.]

Figure 9. Storage I/O latency increases as storage traffic is throttled through quality-of-service mechanisms, but only when there is network and disk congestion.
Table 1. Cost of an FCoE deployment using the dual-port 10-Gb Intel Ethernet X520 Server Adapter relative to a baseline determined by the cost of our existing 8-Gb Fibre Channel and 1GbE deployment

FCoE Deployment (10GbE) Costs as a Percentage of Existing 8-Gb FC and 1GbE Deployment Costs per Server
Total Network: 46%
Cable Costs: 62.5%
GbE Costs (8 ports): 12.5%
Costs for FCoE Switch (2 ports) and FCoE NIC, relative to Costs for Fibre Channel Switch (2 ports), NIC (2 quad ports), and HBA (2 ports): 71%

FCoE - Fibre Channel over Ethernet; HBA - host bus adapter; NIC - network interface card


For more information on Intel IT best practices, visit www.intel.com/it.

Acronyms
CapEx  capital expenditure
CNA    converged network adapter
DCB    data center bridging
FC     Fibre Channel
FCoE   Fibre Channel over Ethernet
FCP    Fibre Channel Protocol
Gb     gigabit
GbE    gigabit Ethernet
HBA    host bus adapter
IOPS   I/O operations per second
KB     kilobyte
LAN    local area network
Mbps   megabits per second
MBps   megabytes per second
NIC    network interface card
PCI    peripheral component interconnect
PCIe*  Peripheral Component Interconnect Express or PCI Express*
QoS    quality of service
SAN    storage area network
VM     virtual machine

Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as
measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other
sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and
on the performance of Intel products, reference www.intel.com/performance/resources/benchmark_limitations.htm or call (U.S.) 1-800-628-8686 or 1-916-356-3104.
This paper is for informational purposes only. THIS DOCUMENT IS PROVIDED AS IS WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF
MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL,
SPECIFICATION OR SAMPLE. Intel disclaims all liability, including liability for infringement of any patent, copyright, or other intellectual property rights, relating to use of
information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others.
Copyright 2012 Intel Corporation. All rights reserved. Printed in USA

Please Recycle

0512/ACHA/KC/PDF

326413-002US
