Converged Network Adapters Paper
Intel IT
IT Best Practices
Data Center Solutions
May 2012
To unify local area network and storage area network infrastructure over a single fabric, Intel IT evaluated the price and performance advantages of switching to Fibre Channel over Ethernet (FCoE) using converged network adapters (CNAs). In comparing a CNA solution to a two-card solution (a 10 Gigabit Ethernet network interface card plus a Fibre Channel (FC) host bus adapter (HBA)), we determined that CNAs provide a technically sound, cost-effective solution for network unification in our virtualized environment.
Our evaluation revealed that switching to an FCoE deployment using the dual-port 10-gigabit (Gb) Intel Ethernet Server Adapter X520 series offers the following benefits:
• Reduces total network costs per server rack by more than 50 percent
• Delivers FC performance comparable to, and in some cases exceeding, the performance of an FC HBA
Craig Pierce
System Engineer,
Intel Architecture Systems Integration (IASI)
Sanjay Rungta
Senior Principal Engineer, Intel IT
Sreeram Sammeta
System Engineer, IASI
Terry Yoshii
Research Staff, Intel IT
IT@Intel White Paper Using Converged Network Adapters for FCoE Network Unification
Contents
Background
  How FCoE Works
Solution
  Converged Network Adapter Test Goals and Challenges
  Storage Performance Comparison: Throughput
  Storage Performance Comparison: Response Time
  Storage Performance Comparison: CPU Effectiveness
  Baseline Bandwidth Utilization Testing
  Quality-of-Service Testing
  Storage I/O Latency during Quality-of-Service Testing
Conclusion
Acronyms
IT@Intel
The IT@Intel program connects IT professionals around the world with their peers inside our organization, sharing lessons learned, methods, and strategies. Our goal is simple: share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/IT or contact your local Intel representative if you'd like to learn more.
Solution
Current industry standards combine server and storage networks over a single unified fabric. The most prominent of these is FCoE. This specification preserves the storage-specific benefits of FC while allowing FC traffic to be transported over Ethernet in the enterprise. Under FCoE, the Ethernet and FC protocols are merged, enabling organizations to deploy a single, converged infrastructure carrying both SAN and server TCP/IP network traffic. FCoE preserves all FC constructs, providing reliable delivery, while protecting and interoperating with an organization's investment in FC SANs, equipment, tools, and training.
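The merging described above is encapsulation: a complete FC frame is carried as the payload of an Ethernet frame. The sketch below (not from the paper) illustrates the framing under the assumptions of the FC-BB-5 layout: EtherType 0x8906, a 14-byte FCoE header ending in a start-of-frame delimiter, and a 4-byte trailer holding the end-of-frame delimiter. The MAC addresses and the SOF/EOF code points shown are illustrative.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a simplified FCoE Ethernet frame.

    Layout (no 802.1Q tag, padding, or FCS shown):
      Ethernet header | FCoE header (version + reserved + SOF) |
      FC frame | EOF + reserved
    """
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: 4-bit version (0) plus 100 reserved bits, then a 1-byte SOF
    fcoe_hdr = bytes(13) + b"\x2e"   # 0x2E used here as an example SOF delimiter
    # FCoE trailer: 1-byte EOF delimiter plus 3 reserved bytes
    fcoe_trl = b"\x41" + bytes(3)    # 0x41 used here as an example EOF delimiter
    return eth_hdr + fcoe_hdr + fc_frame + fcoe_trl

# Placeholder addresses and a 28-byte dummy FC frame, for illustration only
frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",
                   b"\x02\x00\x00\x00\x00\x02",
                   bytes(28))
```

Because the FC frame travels intact inside the Ethernet payload, the FC constructs the paper mentions (exchanges, sequences, zoning) survive unchanged; only the physical transport differs.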
[Figure 1: diagram of the unified networking FCoE configuration. The host stack (file system, multipath I/O, iSCSI, TCP/IP, and the FCoE protocol over device drivers such as virtual Storport* and Open FC, with an FC host bus adapter API) is configured through the data center bridging exchange (DCBx) link protocol and FC settings. The adapter provides FCoE offloads, a classification and prioritization engine, separate FCoE and LAN MAC addresses on vLAN 1 and vLAN 2, and converged ports carrying priority-tagged packets and congestion messages.]
API application programming interface; CRC cyclic redundancy check; DDP direct data placement; FCoE Fibre Channel over Ethernet; iSCSI Internet Small Computer System Interface; LSO large segment offload; MAC media access control; RSS receive side scaling; Rx receive or incoming (data); Tx transmit or outgoing (data); vLAN virtual LAN
Figure 1. This diagram shows a unified networking Fibre Channel over Ethernet (FCoE) configuration. The converged network adapter uses native
FCoE initiators in the operating system to work with the platform hardware and operating system, handling FCoE traffic at a lower cost than a
hardware-based solution.
Figure 2. Storage throughput comparison, in I/O operations per second, by block size.

Block Size    FCP (8-Gb FC HBA)    FCoE (Intel X520)
4 KB                 84,841               75,045
8 KB                 77,741               70,051
16 KB                65,110               63,491
32 KB                42,440               51,724
64 KB                26,710               34,414
128 KB               13,510               18,272
Storage Performance Comparison: Throughput
In the first test, the CNA, which uses a software driver for FCoE processing, had to achieve storage-processing performance equivalent or similar to that of the dual-port 8-Gb Fibre Channel HBAs we were replacing. To determine this, we ran a series of tests using different I/O block sizes, from 4 kilobytes (KB) to 128 KB, through both types of adapters. We deliberately avoided saturating the hosts, network, and storage arrays in these tests by specifying a cache-intensive load (a 5-megabyte (MB) working set size) and by driving moderately high I/O from 16 workers to four logical unit numbers at a queue depth of 1 from a single host.
Tests showed negligible differences at the smaller block sizes. The Fibre Channel Protocol (FCP) performance of the dual-port 8-Gb FC HBA was 13.1 to 2.5 percent higher than the FCoE performance of the dual-port Intel Ethernet Server Adapter X520 at 4-KB to 16-KB I/O block sizes (see Figure 2). More significant was the difference at the higher I/O block sizes, where the FCoE performance of the dual-port adapter exceeded that of the FC HBA, by roughly 35 percent at the 128-KB block size.
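The percentage comparisons above can be reproduced from the Figure 2 throughput numbers. A quick sketch (values transcribed from the figure; each adapter's advantage is expressed relative to the other, as in the text):

```python
# IOPS by block size, transcribed from Figure 2
fcp  = {"4 KB": 84841, "8 KB": 77741, "16 KB": 65110,
        "32 KB": 42440, "64 KB": 26710, "128 KB": 13510}
fcoe = {"4 KB": 75045, "8 KB": 70051, "16 KB": 63491,
        "32 KB": 51724, "64 KB": 34414, "128 KB": 18272}

for size in fcp:
    if fcp[size] >= fcoe[size]:
        # Small blocks: the FC HBA leads (13.1% at 4 KB down to 2.5% at 16 KB)
        adv, label = (fcp[size] / fcoe[size] - 1) * 100, "FCP ahead by"
    else:
        # Large blocks: FCoE leads (rising to ~35% at 128 KB)
        adv, label = (fcoe[size] / fcp[size] - 1) * 100, "FCoE ahead by"
    print(f"{size:>7}: {label} {adv:.1f}%")
```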
Storage Performance Comparison: Response Time
Another way to measure performance is response time in milliseconds, an important factor for I/O-intensive applications. Here again, we saw negligible differences at the smaller block sizes, with the FCP performance of the dual-port 8-Gb Fibre Channel HBA providing slightly faster response times at the 4-KB to 16-KB I/O block sizes (see Figure 3). At 32-KB I/O block sizes and greater, the software driver-based FCoE processing of the dual-port 10-Gb Intel Ethernet Server Adapter X520 began to show significantly faster performance. In fact, at the 128-KB block size, it delivered 35 percent faster response.
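The "35 percent faster" figure follows from the Figure 3 response times when the FCoE time is taken as the baseline; a short sketch with the transcribed values:

```python
# Response times in milliseconds, transcribed from Figure 3 (4 KB ... 128 KB)
sizes = ["4 KB", "8 KB", "16 KB", "32 KB", "64 KB", "128 KB"]
fcp   = [0.75, 0.82, 0.98, 1.53, 2.39, 4.74]
fcoe  = [0.85, 0.91, 1.01, 1.24, 1.86, 3.50]

for size, f, e in zip(sizes, fcp, fcoe):
    # Positive values: the HBA's response time exceeds the CNA's by that percent
    print(f"{size:>7}: FCoE response-time advantage {(f - e) / e * 100:+.1f}%")
```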
Storage Performance Comparison: CPU Effectiveness
One concern with a CNA that uses a software driver-based approach for processing FCoE traffic is that it could consume more central processing unit (CPU) resources, effectively decreasing application performance or reducing the consolidation ratio for server virtualization. Our tests showed this is not the case (see Figure 4). When we measured how many I/O operations per second are processed per percent of CPU utilization, we found that the dual-port 10-Gb Intel Ethernet Server Adapter X520 actually performed better than the dual-port 8-Gb FC HBA. This data suggests that we could consolidate as many virtual machines (VMs) on a host (or perhaps even more) using the 10-Gb adapter as we do with the HBAs we're using today.
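The CPU-effectiveness ratios in Figure 4 can be recomputed directly from the per-adapter values; a brief sketch (values transcribed from the figure):

```python
# IOPS per percent CPU utilization, transcribed from Figure 4
# (67% read, 100% random; 4 KB ... 128 KB)
fcp  = [2303, 2281, 2177, 1938, 1618, 1291]
fcoe = [2424, 2407, 2271, 2110, 1769, 1369]

# FCoE delivers 104-109% of the HBA's IOPS per CPU percent at every block size
ratios = [e / f * 100 for e, f in zip(fcoe, fcp)]
print([f"{r:.1f}%" for r in ratios])
```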
Figure 3. Storage response time comparison, in milliseconds, by block size.

Block Size    FCP (ms)    FCoE (ms)
4 KB            0.75         0.85
8 KB            0.82         0.91
16 KB           0.98         1.01
32 KB           1.53         1.24
64 KB           2.39         1.86
128 KB          4.74         3.50
Figure 4. CPU effectiveness (I/O operations per second per percent of CPU utilization); 67% read, 100% random.

Block Size    FCP      FCoE     FCoE/FCP
4 KB          2,303    2,424    105.3%
8 KB          2,281    2,407    105.5%
16 KB         2,177    2,271    104.3%
32 KB         1,938    2,110    108.9%
64 KB         1,618    1,769    109.3%
128 KB        1,291    1,369    106.0%
[Figures 5 and 6: baseline bandwidth utilization charts plotting total throughput over time together with total CPU utilization percent; Mbps for the network load (scale to 30,000, CPU to 70%) and MBps for the storage load (scale to 5,000, CPU to 30%).]
Quality-of-Service Testing
We ran QoS tests to ensure that storage traffic would always be guaranteed the bandwidth required for its performance. To perform these tests, we ran both the network and storage loads previously established in our baseline bandwidth utilization tests (Figures 5 and 6). By running both loads together, we were able to consume over 90 percent of the CNA's maximum bandwidth.
Conclusion
Based on our test results, we are
implementing a unified network
infrastructure using FCoE through
dual-port 10-Gb Intel Ethernet
X520 Server Adapters.
Our tests demonstrate that a software driver-based CNA is an excellent solution for reducing cost and infrastructure complexity as we unify LAN and SAN infrastructure over a single fabric. Switching to an FCoE deployment using this server adapter provides a greater than 50-percent reduction in total network costs per server rack, while delivering performance comparable to, and in some cases exceeding, that of the dual-port 8-Gb FC HBAs we want to replace.
Figure 7. Quality-of-service test results: bandwidth in Mbps for each traffic mix. The FCoE and TCP/IP curves cross (equal utilization) at approximately 20% FCoE and 80% TCP/IP.

Traffic Mix              FCoE      TCP/IP    Total
90% FCoE / 10% TCP/IP    32,215     3,127    35,342
70% FCoE / 30% TCP/IP    27,203     8,401    35,604
50% FCoE / 50% TCP/IP    22,859    12,934    35,793
30% FCoE / 70% TCP/IP    19,372    16,438    35,810
10% FCoE / 90% TCP/IP    15,429    20,424    35,852
[Figure 8: storage I/O latency during quality-of-service testing for the five traffic mixes, from 90% FCoE / 10% TCP/IP to 10% FCoE / 90% TCP/IP.]
[Figure: network cost reduction per server rack; labeled values include Cable Costs, 46%, 62.5%, 12.5%, and 71%.]
Acronyms
CapEx  capital expenditure
CNA    converged network adapter
DCB    data center bridging
FC     Fibre Channel
FCoE   Fibre Channel over Ethernet
FCP    Fibre Channel Protocol
Gb     gigabit
GbE    gigabit Ethernet
HBA    host bus adapter
IOPS   I/O operations per second
KB     kilobytes
LAN    local area network
Mbps   megabits per second
MBps   megabytes per second
NIC    network interface card
PCI    Peripheral Component Interconnect
PCIe*  PCI Express*
QoS    quality of service
SAN    storage area network
VM     virtual machine
Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as
measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other
sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and
on the performance of Intel products, reference www.intel.com/performance/resources/benchmark_limitations.htm or call (U.S.) 1-800-628-8686 or 1-916-356-3104.
This paper is for informational purposes only. THIS DOCUMENT IS PROVIDED AS IS WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF
MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL,
SPECIFICATION OR SAMPLE. Intel disclaims all liability, including liability for infringement of any patent, copyright, or other intellectual property rights, relating to use of
information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others.
Copyright © 2012 Intel Corporation. All rights reserved. Printed in USA
Please Recycle
0512/ACHA/KC/PDF
326413-002US