
WAN Optimization Controller Technologies

Version 3.1

• Network and Deployment Topologies
• Storage and Replication
• FCIP Configurations
• WAN Optimization Controller Appliances

Vinay Jonnakuti
Chuan Liu
Eric Pun
Donald Robertson
Tom Zhao
Copyright © 2012-2014 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United
States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory documents for your product line, go to EMC Online Support
(https://support.emc.com).

Part number H8076.5



Contents

Preface.............................................................................................................................. 5

Chapter 1 Network and Deployment Topologies and Implementations
Overview............................................................................................ 12
Network topologies and implementations ................................... 13
Deployment topologies .................................................................... 15
Storage and replication application................................................ 17
Configuration settings............................................................... 17
Network topologies and implementations ............................ 18
Notes............................................................................................ 19

Chapter 2 FCIP Configurations


Brocade FCIP ..................................................................................... 22
Configuration settings............................................................... 22
Brocade FCIP Tunnel settings.................................................. 22
Rules and restrictions................................................................ 23
References ................................................................................... 24
Cisco FCIP .......................................................................................... 25
Configuration settings............................................................... 25
Notes............................................................................................ 26
Basic guidelines.......................................................................... 27
Rules and restrictions................................................................ 28
References ................................................................................... 28


Chapter 3 WAN Optimization Controllers


Riverbed Steelhead appliances ....................................................... 30
Overview .................................................................................... 30
Terminology ............................................................................... 31
Notes............................................................................................ 36
Features ....................................................................................... 36
Deployment topologies............................................................. 36
Failure modes supported ......................................................... 37
FCIP environment ..................................................................... 37
GigE environment ..................................................................... 39
References ................................................................................... 42
Riverbed Granite solution ............................................................... 43
Overview .................................................................................... 43
Features ....................................................................................... 45
Configuring Granite Core High Availability ........................ 47
Deployment topologies............................................................. 50
Configuring iSCSI settings on EMC storage.......................... 51
Configuring iSCSI initiator on Granite Core ......................... 52
Configuring iSCSI portal .......................................................... 53
Configuring LUNs..................................................................... 56
Configuring local LUNs ........................................................... 58
Adding Granite Edge appliances ............................................ 59
Configuring CHAP users ......................................................... 60
Confirming connection to the Granite Edge appliance ....... 61
References ................................................................................... 62
Silver Peak appliances...................................................................... 63
Overview .................................................................................... 63
Terminology ............................................................................... 64
Features ....................................................................................... 66
Deployment topologies............................................................. 67
Failure modes supported ......................................................... 67
FCIP environment ..................................................................... 67
GigE environment ..................................................................... 68
References ................................................................................... 69



Preface

This EMC Engineering TechBook provides a high-level overview of the
WAN Optimization Controller (WOC) appliance, including network and
deployment topologies, storage and replication application, FCIP
configurations, and WAN Optimization Controller appliances.
E-Lab would like to thank all the contributors to this document, including
EMC engineers, EMC field personnel, and partners. Your contributions are
invaluable.
As part of an effort to improve and enhance the performance and capabilities
of its product lines, EMC periodically releases revisions of its hardware and
software. Therefore, some functions described in this document may not be
supported by all versions of the software or hardware currently in use. For
the most up-to-date information on product features, refer to your product
release notes. If a product does not function properly or does not function as
described in this document, please contact your EMC representative.

Audience: This TechBook is intended for EMC field personnel, including
technology consultants, and for the storage architect, administrator,
and operator involved in acquiring, managing, operating, or designing a
networked storage environment that contains EMC and host devices.

EMC Support Matrix and E-Lab Interoperability Navigator: For the most
up-to-date information, always consult the EMC Support Matrix (ESM),
available through E-Lab Interoperability Navigator (ELN) at
http://elabnavigator.EMC.com.


All of the matrices, including the ESM (which does not include most
software), are subsets of the E-Lab Interoperability Navigator
database. Included under this tab are:
◆ The EMC Support Matrix, a complete guide to interoperable and
supportable configurations.
◆ Subset matrices for specific storage families, server families,
operating systems or software products.
◆ Host connectivity guides for complete, authoritative information
on how to configure hosts effectively for various storage
environments.
Consult the Internet Protocol pdf under the "Miscellaneous" heading
for EMC's policies and requirements for the EMC Support Matrix.

Related documentation: The following documents, including this one, are
available through the E-Lab Interoperability Navigator at
http://elabnavigator.EMC.com.
These documents are also available at the following location:
http://www.emc.com/products/interoperability/topology-resource-center.htm

◆ Backup and Recovery in a SAN TechBook
◆ Building Secure SANs TechBook
◆ Extended Distance Technologies TechBook
◆ Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB)
Concepts and Protocols TechBook
◆ Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB)
Case Studies TechBook
◆ Fibre Channel SAN Topologies TechBook
◆ iSCSI SAN Topologies TechBook
◆ Networked Storage Concepts and Protocols TechBook
◆ Networking for Storage Virtualization and RecoverPoint TechBook
◆ EMC Connectrix SAN Products Data Reference Manual
◆ Legacy SAN Technologies Reference Manual
◆ Non-EMC SAN Products Data Reference Manual
◆ EMC Symmetrix Remote Data Facility (SRDF) Connectivity Guide,
located on the E-Lab Interoperability Navigator at
http://elabnavigator.EMC.com.
◆ EMC Support Matrix, available through E-Lab Interoperability
Navigator at http://elabnavigator.EMC.com.
◆ RSA security solutions documentation, which can be found at
http://RSA.com > Content Library


EMC documentation and release notes can be found at EMC Online
Support (https://support.emc.com).
For vendor documentation, refer to the vendor’s website.

Authors of this TechBook: This TechBook was authored by Vinay Jonnakuti
and Eric Pun, along with other EMC engineers, EMC field personnel, and
partners.
Vinay Jonnakuti is a Sr. Corporate Systems Engineer in the Unified
Storage division of EMC focusing on VNX and VNXe products,
working on pre-sales deliverables including collateral, customer
presentations, customer beta testing and proof of concepts. Vinay has
been with EMC for over 6 years. Prior to his current position, Vinay
worked in EMC E-Lab leading the qualification and architecting of
solutions with WAN-Optimization appliances from various partners
with various replication technologies, including SRDF (GigE/FCIP),
SAN-Copy, MirrorView, VPLEX, and RecoverPoint. Vinay also
worked on Fibre Channel and iSCSI qualification on the VMAX
Storage arrays.
Chuan Liu is a Senior Systems Integration Engineer with more than 6
years of experience in the telecommunication industry. After joining
EMC, he worked in E-Lab qualifying IBM/HP/Cisco blade switches
and WAN Optimization products. Currently, Chuan focuses on
qualifying SRDF with FCIP/GigE technologies used in the setup of
different WAN Optimization products.
Eric Pun is a Senior Systems Integration Engineer and has been with
EMC for over 13 years. For the past several years, Eric has worked in
E-lab qualifying interoperability between Fibre Channel switched
hardware and distance extension products. The distance extension
technology includes DWDM, CWDM, OTN, FC-SONET, FC-GbE,
FC-SCTP, and WAN Optimization products. Eric has been a
contributor to various E-Lab documentation, including the SRDF
Connectivity Guide.
Donald Robertson is a Senior Systems Integration Engineer and has
held various engineering positions in the storage industry for over 18
years. As part of the EMC E-Lab team, Don leads the qualification
and architecting of solutions with WAN-Optimization appliances
from various partners using various replication technologies,
including SRDF (GigE/FCIP), VPLEX, RecoverPoint.


Tom Zhao is a Systems Engineer Team Lead with over 6 years of
experience in the IT industry, including over one year in storage at
EMC. Tom works in E-lab qualifying Symmetrix, RecoverPoint, WAN
Optimization, and cache-based products and solutions. Prior to EMC, Tom
focused on developing management and maintenance tools for x86 servers
and platforms.

Conventions used in this document: EMC uses the following conventions
for special notices:
Note: A note presents information that is important, but not hazard-related.

Typographical conventions
EMC uses the following type style conventions in this document.
Bold: Use for names of interface elements, such as names of windows,
dialog boxes, buttons, fields, tab names, key names, and menu paths
(what the user specifically selects or clicks)
Italic: Use for full titles of publications referenced in text
Monospace: Use for:
• System output, such as an error message or script
• System code
• Pathnames, filenames, prompts, and syntax
• Commands and options
Monospace italic: Use for variables.
Monospace bold: Use for user input.
[ ]: Square brackets enclose optional values
|: Vertical bar indicates alternate selections; the bar means "or"
{ }: Braces enclose content that the user must specify, such as x or y or z
...: Ellipses indicate nonessential information omitted from the example

Where to get help: EMC support, product, and licensing information can
be obtained as follows:

Note: To open a service request through the EMC Online Support site, you
must have a valid support agreement. Contact your EMC sales representative
for details about obtaining a valid support agreement or to answer any
questions about your account.


Product information
For documentation, release notes, software updates, or for
information about EMC products, licensing, and service, go to the
EMC Online Support site (registration required) at:
https://support.EMC.com

Technical support
EMC offers a variety of support options.
Support by Product — EMC offers consolidated, product-specific
information on the Web at:
https://support.EMC.com/products
The Support by Product web pages offer quick links to
Documentation, White Papers, Advisories (such as frequently used
Knowledgebase articles), and Downloads, as well as more dynamic
content, such as presentations, discussion, relevant Customer
Support Forum entries, and a link to EMC Live Chat.
EMC Live Chat — Open a Chat or instant message session with an
EMC Support Engineer.

eLicensing support
To activate your entitlements and obtain your Symmetrix license files,
visit the Service Center on https://support.EMC.com, as directed on
your License Authorization Code (LAC) letter e-mailed to you.
For help with missing or incorrect entitlements after activation (that
is, expected functionality remains unavailable because it is not
licensed), contact your EMC Account Representative or Authorized
Reseller.
For help with any errors applying license files through Solutions
Enabler, contact the EMC Customer Support Center.
If you are missing a LAC letter, or require further instructions on
activating your licenses through the Online Support site, contact
EMC's worldwide Licensing team at licensing@emc.com or call:
◆ North America, Latin America, APJK, Australia, New Zealand:
SVC4EMC (800-782-4362) and follow the voice prompts.
◆ EMEA: +353 (0) 21 4879862 and follow the voice prompts.


We'd like to hear from you!
Your suggestions will help us continue to improve the accuracy,
organization, and overall quality of the user publications. Send your
opinions of this document to:
techpubcomments@emc.com
Your feedback on our TechBooks is important to us! We want our
books to be as helpful and relevant as possible. Send us your
comments, opinions, and thoughts on this or any other TechBook to:
TechBooks@emc.com



Chapter 1 Network and Deployment Topologies and Implementations

This chapter provides the following information for the WAN
Optimization Controller (WOC) appliance:
◆ Overview ............................................................................................. 12
◆ Network topologies and implementations..................................... 13
◆ Deployment topologies ..................................................................... 15
◆ Storage and replication application................................................. 17


Overview
A WAN Optimization Controller (WOC) is an appliance that can be placed
in-line or out-of-path to reduce and optimize the data that is to be
transmitted over the LAN/MAN/WAN. These devices are designed to help
mitigate the effects of packet loss, network congestion, and latency
while reducing the overall amount of data to be transmitted over the
network.
In general, the technologies utilized in accomplishing this are
Transmission Control Protocol (TCP) acceleration, data de-duplication,
and compression. Additionally, features such as QoS, Forward Error
Correction (FEC), and encryption may also be available.
Network links and WAN circuits can have high latency and/or
packet loss as well as limited capacity. WAN Optimization
Controllers can be used to maximize the amount of data that can be
transmitted over a link. In some cases, these appliances may be a
necessity, depending on performance requirements.
WAN and data optimization can occur at varying layers of the OSI
stack, whether it be at the network and transport layer, the session,
presentation, and application layers, or just to the data (payload)
itself.


Network topologies and implementations


TCP was developed as a local area network (LAN) protocol.
However, with the advancement of the Internet it was expanded to be
used over the WAN. Over time TCP has been enhanced, but even
with these enhancements TCP is still not well-suited for WAN use for
many applications.
The primary factors that directly impact TCP's ability to be optimized
over the WAN are latency, packet loss, and the amount of bandwidth
to be utilized. It is these factors on which the layer 3/4 optimization
products focus. Many of these optimization products will
re-encapsulate the packets into UDP or their proprietary protocol,
while others may still use TCP, but optimize the connections between
a set of WAN Optimization Controllers at each end of the WAN.
While some products create tunnels to perform their peer-to-peer
connection between appliances for the optimized data, others may
just modify, or tag other aspects within the packet to ensure that the
far-end WOC captures the optimized traffic.
Optimization of the payload (data) within the packet focuses on the
reduction of actual payload as it passes over the network through the
use of data compression and/or data de-duplication engines (DDEs).
Compression is performed through the use of data compression
algorithms, while DDE uses large data pattern tables and associated
pointers (fingerprints). Large amounts of memory and/or hard-drive
storage can be used to store these pattern tables and pointers.
Identical tables are built in the optimization appliances on both sides
of the WAN, and as new traffic passes through the WOC, patterns are
matched and only the associated pointers are sent over the network
(versus resending the data). While the typical LZ compression ratio is
about 2:1, DDE ratios can range greatly, depending on many factors. In
general, the combination of these two technologies, DDE and
compression, will achieve around a 5:1 reduction level, and sometimes
much higher ratios.
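
To make the ratio arithmetic concrete, the following minimal Python
sketch (illustrative numbers only, not measurements from any product)
shows how a reduction ratio translates into effective pre-optimization
throughput over a fixed circuit:

def effective_throughput_mbps(link_mbps, reduction_ratio):
    # Application-level (pre-optimization) data rate a circuit can
    # carry for a given data-reduction ratio.
    return link_mbps * reduction_ratio

# A 100 Mb/s circuit: LZ alone (~2:1) versus LZ plus DDE (~5:1).
print(effective_throughput_mbps(100, 2.0))  # 200.0 Mb/s
print(effective_throughput_mbps(100, 5.0))  # 500.0 Mb/s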
Layer 4/7 optimization is what is called the "application" layer of
optimization. This area of optimization can take many approaches
that can vary widely, but are generally done through the use of
application-aware optimization engines. The actions taken by these
engines can result in benefits, including reductions in the number of
transactions that occur over the network or more efficient use of
bandwidth. It is also at this layer that TCP optimization occurs.


Overall, WAN optimizers can be aligned with customer networking best
practices, and it should be made clear to the customer that
applications using these devices can, and should, be prioritized based
on their WAN bandwidth/throughput requirements.


Deployment topologies
There are two basic topologies for deployment:
◆ In-path/in-line/bridge
◆ Out-of-path/routed
An in-path/in-line/bridge deployment, as shown in Figure 1, means
that the WAN Optimization Controller (WOC) is directly in the path
between the source and destination end points where all inbound
and outbound flows will pass through the WAN Optimization
Controllers. The WOC devices at each site are typically placed as close
as possible to the WAN circuit.

Figure 1 In-path/in-line/bridge topology

An out-of-path/routed deployment, as shown in Figure 2, means that the
WOC is not in the direct path between the source and destination end
points. The traffic must be routed/redirected to the WOC devices using
routing features such as WCCP, PBR, VRRP, etc.

Figure 2 Out-of-path/routed topology


◆ WCCPv2 (Web Cache Communication Protocol) is a content routing
protocol that provides a mechanism to redirect traffic in real-time.
WCCP also has built-in mechanisms to support load balancing, fault
tolerance, and scalability.
◆ PBR (Policy Based Routing) is a technique used to make routing
decisions based on policies or a combination of policies such as
packet size, protocol of the payload, source, destination, or other
network characteristics.
◆ VRRP (Virtual Router Redundancy Protocol) is a redundancy
protocol designed to increase the availability of a default gateway.
In the event of a power failure or WOC hardware or software failure,
it is necessary for the WOC to provide some level of action. The WOC
can either continue to allow data to pass through, unoptimized, or it
can block all traffic from flowing through it. The failure modes
typically offered by WAN optimizers are commonly referred to as:
◆ Fails-to-Wire
The appliance will behave as a crossover cable connecting the
Ethernet LAN switch directly to the WAN router and traffic will
continue to flow uninterrupted and unoptimized.
◆ Fails-Open / Fails-to-Block
The appliance will behave as an open port to the WAN router.
The WAN router will recognize that the link is down and will
begin forwarding traffic according to its routing tables.
Depending upon your deployment topology, you may determine that
one method may be better suited for your environment than the
other.


Storage and replication application


This section provides storage and replication application details for
EMC® products:
◆ Symmetrix®/VMAX™ SRDF®
◆ RecoverPoint
◆ SAN Copy™
◆ Celerra Replicator™
◆ MirrorView™

Configuration settings
Configuration settings are as follows:
◆ Compression on GigE (RE) port = Enabled

Note: For Riverbed Steelhead RiOS v6.1.1a or later and Silver Peak
NX-OS 4.4 or later, the compression setting should be Enabled on the
Symmetrix storage system. The WAN optimization appliances
automatically detect and disable compression on the Symmetrix system.
In the event the WAN optimization appliances go down or are removed,
the Symmetrix REs will re-enable compression and provide some level of
bandwidth reduction, although likely not to the level provided by the
WAN optimization appliances.

◆ SRDF Flow Control = Enabled

Note: In a GigE WAN optimization environment, use the following:

For Riverbed, use legacy flow control for 5876.229.145 and older ucode.
Use dynamic flow control for 5876.251.161 and later. Dynamic flow
control is only supported with Riverbed using RiOS 8.0.2 and later. Refer
to the WAN Optimization Controller table in the EMC Support Matrix for
support RiOS revisions. In some instances when there is packet loss,
legacy flow control may increase performance if customer requirements
are not being met.

For Silver Peak, dynamic flow control is the recommended flow control
setting.


Note: In a GigE WAN optimization environment, if legacy flow control is
used, set the JFC Windows buffer size to 2048 (0x800).

In an FCIP WAN optimization environment, if legacy flow control is
used, set the JFC Windows buffer size to 49K (0xC400).

When upgrading from 5876.229.145 and older ucode (where legacy flow
control should be set) to 5876.251.161 and later, it is recommended to
remain at legacy flow control.

◆ Disable the speed limit for transmit rate on GigE (RE) ports.


◆ GigE connection number
More connections bring more LAN throughput and a higher
WAN compression ratio for Riverbed Steelhead deployments.
Increasing the number of TCP connections through the number of
physical ports is the recommended approach. This approach is
beneficial because it is commonly configured in the field and also
adds CPU processing power. If additional TCP connections are
required, increase the number of TCP connections per DID.
However, be aware that too many TCP connections are not
desirable for a number of reasons. EMC recommends no more
than 32 TCP connections per group of meshed GigE links.

Network topologies and implementations


In general, it has been observed that optimization ratios are higher
with SRDF/A than with SRDF Adaptive Copy. There are many factors that
impact how much optimization will occur; therefore, results will vary.


Notes
Note the following:

Symmetrix configuration settings

Compression: Compression should always be enabled on the Symmetrix GigE
ports if the WAN optimization controller performs dedupe and has the
capability of dynamically disabling compression for the Symmetrix GigE
port. Riverbed Steelhead and Silver Peak WAN optimization controllers
support this feature. This ensures that dedupe can always be applied to
uncompressed data when a WAN optimization controller is present, yet
compression is still applied even if WAN optimization is bypassed.

SRDF Flow Control: SRDF Flow Control is enabled by default for
increased stability of the SRDF links. In some cases, further tuning of
SRDF flow control and related settings can be made to improve
performance. For more information, refer to “Storage and replication
application” on page 17 or contact your EMC Customer Service
representative.

Data reduction considerations
In general, it has been observed that optimization ratios are higher
with GigE ports on the GigE director as opposed to FCIP. There are many
factors that impact how much optimization will occur (for example, SRDF
mode or repeatability of data patterns); therefore, results will vary.



Chapter 2 FCIP Configurations

This chapter provides FCIP configuration information for:
◆ Brocade FCIP ...................................................................................... 22
◆ Cisco FCIP ........................................................................................... 25


Brocade FCIP
This section provides configuration information for Brocade FCIP.

Note: Support for Brocade FCIP with WAN Optimization Controllers is
limited. Please check the WAN Optimization Controller table in the EMC
Support Matrix for supported configurations. The EMC Support Matrix is
available at https://elabnavigator.emc.com.

Configuration settings
Configuration settings are as follows:
◆ FCIP Fastwrite = Enabled
◆ Compression = Disabled
◆ TCP Byte Streaming = Enabled
◆ Commit Rate or Max/Min settings = in Kb/s (Environment
dependent)
◆ Tape Pipelining = Disabled
◆ SACK = Enabled
◆ Min Retransmit Time = 100
◆ Keep-Alive Timeout = 10
◆ Max Re-Transmissions = 8

Brocade FCIP Tunnel settings
Consider the following:
◆ FCIP Fastwrite
This setting accelerates SCSI write I/Os over the FCIP tunnel. It
cannot be combined with FC Fastwrite: FCIP Fastwrite should be enabled
and FC Fastwrite should be disabled when using WAN Optimization
Controller (WOC) devices.
There are two different Fastwrites: FC Fastwrite and FCIP Fastwrite. FC
Fastwrite applies to FC ISLs, while FCIP Fastwrite (same FC protocol)
applies to FCIP tunnels.


◆ Compression
This simply compresses the data that flows over the FCIP tunnel.
This should be disabled when used with WAN Optimization
Controller (WOC) devices, thus allowing the WOC device to
perform the compression and data de-duplication.
◆ Commit Rate
This setting is environment dependent. This should be set in
accordance with the WAN Optimization vendor. Considerations
such as data-to-be-optimized, available WAN circuit size and
data-reduction ratio need to be taken into account.
◆ Adaptive Rate Limit (ARL)
Commit Rate is replaced by Minimum and Maximum rates since
newer installations have the ARL feature. When used with WAN
Optimization, the maximum is always set to port link speed.
Refer to the Brocade or WAN optimization vendor
documentation for more information.
◆ TCP Byte Streaming
This is a Brocade feature which allows a Brocade FCIP switch to
communicate with a third-party WAN Optimization Controller.
This feature supports an FCIP frame which has been split into a
maximum of 8 separate TCP segments. If the frame is split into
more than eight segments, it results in prematurely sending a
frame to the FCIP layer with an incorrect size and the FCIP tunnel
bounces.
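
As a rough illustration of the eight-segment limit (a sketch of the
arithmetic only, not Brocade's implementation; the frame size and MSS
below are assumed values), the segment count for a frame follows
directly from the TCP maximum segment size:

import math

def tcp_segment_count(frame_bytes, mss_bytes=1460):
    # Number of TCP segments a frame of the given size spans.
    return math.ceil(frame_bytes / mss_bytes)

# A maximum-size FC frame (2148 bytes) spans two 1460-byte segments,
# well within the limit; a frame would have to be split across more
# than eight segments to bounce the tunnel as described above.
segments = tcp_segment_count(2148)
print(segments, "segments; within limit:", segments <= 8)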

Rules and restrictions
Consider the following rules and restrictions when using TCP byte
streaming:
◆ Only one FCIP tunnel is allowed to be configured for a GigE port
that has TCP Byte Streaming configured.
◆ FCIP tunnel cannot have compression enabled.
◆ FCIP tunnel cannot have FC Fastwrite enabled.
◆ FCIP tunnel must have a committed rate set.
◆ Both sides of the FCIP tunnel must be identically configured.
◆ TCP byte streaming is not compatible with older FOS revisions,
which do not have the option available.


References
For further information, refer to https://support.emc.com and
http://www.brocade.com.
◆ EMC Connectrix B Series Fabric OS Administrator's Guide
◆ Brocade Fabric OS Administrator’s Guide


Cisco FCIP
This section provides configuration information for Cisco FCIP.

Configuration settings
Configuration settings are as follows:
◆ Max-Bandwidth = Environment dependent (Default = 1000 Mb)
◆ Min-Available-Bandwidth = Normally set to the WAN bandwidth divided
by the number of GigE links sharing that bandwidth (see the sketch
following this list).
For example, if WAN = 1 Gb and using 2 GigE ports, then the Min = 480
Mb; if using 4 GigE, then Min = 240 Mb.
◆ Estimated roundtrip time = Set to measured latency (round-trip
time - RTT) between MDS switches
◆ IP Compression = Disabled
◆ FCIP Write Acceleration = Enabled
◆ Tape Accelerator = Disabled
◆ Encryption = Disabled
◆ Min Re-Transmit Timer = 200 ms
◆ Max Re-Transmissions = 8
◆ Keep-Alive = 60
◆ SACK = Enabled
◆ Timestamp = Disabled
◆ PMTU = Enabled
◆ CWM = Enabled
◆ CWM Burst Size = 50 KB
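
The worked Min-Available-Bandwidth figures above come out slightly
below an even split of the WAN bandwidth, which suggests a small
derating for protocol overhead. A minimal Python sketch that reproduces
them, assuming a 4% derating (the exact factor is an assumption, not a
documented value):

def min_available_bandwidth_mb(wan_mb, gige_links, derate=0.96):
    # Per-link Min-Available-Bandwidth: the WAN bandwidth split across
    # the GigE links, derated for overhead (4% assumed here).
    return wan_mb / gige_links * derate

print(min_available_bandwidth_mb(1000, 2))  # 480.0 Mb
print(min_available_bandwidth_mb(1000, 4))  # 240.0 Mb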


Notes
Consider the following information for Cisco FCIP tunnel settings:
◆ Max-Bandwidth
The max-bandwidth-mbps parameter and the measured RTT together
determine the maximum window size (see the sketch following this list).
This should be configured to match the worst-case bandwidth available
on the physical link.
◆ Min-Available-Bandwidth
The min-available-bandwidth parameter and the measured RTT
together determine the threshold below which TCP aggressively
maintains a window size sufficient to transmit at minimum
available bandwidth. It is recommended that you adjust this to
50-80% of the Max-Bandwidth.
◆ Estimated Roundtrip-Time
This is the measured latency between the 2 MDS GigE interfaces.
The following MDS command can be used to measure the RTT:
FCIPMDS2(config)# do ips measure-rtt 10.20.5.71 interface GigabitEthernet1/1
Roundtrip time is 106 micro seconds (0.11 milli seconds)

Only configure the measured latency when there is no WAN optimization
appliance. When the MDS switch is connected to a WAN optimization
appliance, leave the roundtrip setting at its default (1000
microseconds in the Management Console, 1 ms in the CLI).
◆ FCIP Write Acceleration
Write Acceleration is used to help alleviate the effects of network
latency. It can work with Port-Channels only when the
Port-Channel is managed by Port-Channel protocol (PCP). FCIP
write acceleration can be enabled for multiple FCIP tunnels if the
tunnels are part of a dynamic Port-Channel configured with
channel mode active. FCIP write acceleration does not work if
multiple non-Port-Channel ISLs exist with equal weight between
the initiator and the target port.


◆ Min Re-Transmit Timer
This is the amount of time that TCP waits before retransmitting.
In environments where there may be high packet loss /
congestion, this number may need to be adjusted to 4x the
measured roundtrip-time. Ping may be used to measure the
round trip latency between the two MDS switches.
◆ Max Re-Transmissions
The maximum number of times that a packet is retransmitted
before the TCP connection is closed.
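
The window-size and retransmit guidance above reduces to two small
calculations. A minimal Python sketch with illustrative numbers (a
1000 Mb/s link at 20 ms measured RTT):

def max_window_bytes(bandwidth_mbps, rtt_ms):
    # Bandwidth-delay product: the TCP window needed to fill the link.
    return bandwidth_mbps * 1_000_000 / 8 * rtt_ms / 1000

def min_retransmit_ms(rtt_ms):
    # Retransmit floor suggested for lossy links: 4x the measured RTT.
    return 4 * rtt_ms

print(max_window_bytes(1000, 20))  # 2500000.0 bytes (about 2.4 MB)
print(min_retransmit_ms(20))       # 80 ms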

Basic guidelines
Consider the following guidelines when creating/utilizing multiple FCIP
interfaces/profiles:
◆ Gigabit Ethernet Interfaces support a single IP address.
◆ Every FCIP profile must be uniquely addressable by an IP
address and TCP port pair. Where FCIP profiles share a Gigabit
Ethernet interface, the FCIP profiles must use different TCP port
numbers.
◆ FCIP Interface defines the physical FCIP link (local GigE port). If
you add an FCIP Profile for TCP parameters and a local GigE IP
address plus peer (remote) IP address to the FCIP Interface, it
forms an FCIP Link or Tunnel. There are always two TCP
connections (control plus data) and you can add one additional
data TCP connection per FCIP link.
◆ EMC recommends three FCIP interfaces per GigE port for best
performance. More FCIP interfaces help improve SRDF link
stability when there is high latency and/or packet loss
(>100ms/0.5%, regardless of whether latency and packet drop
conditions exist together or only one exists). A dedicated FCIP
profile per FCIP link is recommended.


Rules and restrictions
Consider the following rules and restrictions when enabling FCIP Write
Acceleration:
◆ It can work with Port-Channels only when the Port-Channel is
managed by Port-Channel Protocol (PCP).
◆ FCIP write acceleration can be enabled for multiple FCIP tunnels
if the tunnels are part of a dynamic Port-Channel configured with
channel mode active.
◆ FCIP write acceleration does not work if multiple
non-Port-Channel ISLs exist with equal weight between the
initiator and the target port.
◆ Do not enable time stamp control on an FCIP interface with write
acceleration configured.
◆ Write acceleration cannot be used across FSPF equal cost paths in
FCIP deployments. Also, FCIP write acceleration can be used in
Port-Channels configured with channel mode active or
constructed with Port-Channel Protocol (PCP).

References
For further information, refer to the following documentation on
Cisco's website at http://www.cisco.com.
◆ Wide Area Application Services Configuration Guide
◆ Replication Acceleration Deployment Guide
◆ Q&A for WAAS Replication Accelerator Mode
◆ MDS 9000 Family CLI Configuration Guide



Chapter 3 WAN Optimization Controllers

This chapter provides information on the following WAN Optimization
Controller (WOC) appliances, along with Riverbed Granite, which is used
in conjunction with Steelhead:
◆ Riverbed Steelhead appliances ........................................................ 30
◆ Riverbed Granite solution................................................................. 43
◆ Silver Peak appliances ....................................................................... 63


Riverbed Steelhead appliances


This section provides information on the Riverbed Steelhead WAN
Optimization Controller and the Riverbed system. The following
topics are discussed:
◆ “Overview” on page 30
◆ “Terminology” on page 31
◆ “Notes” on page 36
◆ “Features” on page 36
◆ “Deployment topologies” on page 36
◆ “Failure modes supported” on page 37
◆ “FCIP environment” on page 37
◆ “GigE environment” on page 39
◆ “References” on page 42

Overview
RiOS is the software that powers Riverbed's Steelhead WAN Optimization
Controller. The optimization techniques RiOS utilizes are:
◆ Data Streamlining
◆ Transport Streamlining
◆ Application Streamlining
◆ Management Streamlining
RiOS uses a Riverbed proprietary algorithm called Scalable Data
Referencing (SDR) along with data compression when optimizing
data across the WAN. SDR breaks up TCP data streams into unique
data chunks that are stored in the hard disk (data store) of the device
running RiOS. Each data chunk is assigned a unique integer label
(reference) before it is sent to a peer RiOS device across the WAN.
When the same byte sequence is seen again in future transmissions
from clients or servers, the reference is sent across the WAN instead
of the raw data chunk. The peer RiOS device uses this reference to
find the original data chunk on its data store, and reconstruct the
original TCP data stream.
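
SDR itself is proprietary, but the reference mechanism described above
can be sketched in a few lines of Python: chunks are fingerprinted into
a store that both peers maintain, and chunks seen before travel as
references instead of raw data (real SDR uses its own chunking and
integer labels, so this is a conceptual sketch only):

import hashlib

store = {}  # fingerprint -> chunk; mirrored on both RiOS peers

def encode(stream, chunk_size=64):
    # Split a stream into chunks; send references for chunks seen before.
    tokens = []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        ref = hashlib.sha1(chunk).hexdigest()
        if ref in store:
            tokens.append(("ref", ref))          # label only, not the data
        else:
            store[ref] = chunk
            tokens.append(("raw", ref, chunk))   # first sight: data + label
    return tokens

def decode(tokens):
    # Peer side: rebuild the original stream from labels and raw chunks.
    out = b""
    for t in tokens:
        if t[0] == "raw":
            store[t[1]] = t[2]
            out += t[2]
        else:
            out += store[t[1]]
    return out

data = b"0123456789abcdef" * 64   # highly repetitive stream
assert decode(encode(data)) == data
# 16 identical 64-byte chunks: one raw chunk crosses the WAN, then
# 15 short references.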
After a data pattern is stored on the disk of a Steelhead appliance, it
can be leveraged for transfers to any other Steelhead appliance across

all applications being accelerated by Data Streamlining. Data
Streamlining also includes optional QoS enforcement. QoS enforcement
can be applied to both optimized and unoptimized traffic, both TCP and
UDP.
Steelhead appliances also use a generic latency optimization
technique called Transport Streamlining. Transport Streamlining uses
a set of standards and proprietary techniques to optimize TCP traffic
between Steelhead appliances. These techniques ensure that efficient
retransmission methods, such as TCP selective acknowledgements, are
used and that optimal TCP window sizes are used to minimize the impact
of latency and maximize throughput across WAN links.
Transport Streamlining ensures that there is always a one-to-one ratio
for active TCP connections between Steelhead appliances, and the
TCP connections to clients and servers. That is, Steelhead appliances
do not tunnel or perform multiplexing and de-multiplexing of data
across connections. This is true regardless of the WAN visibility mode
in use.

Terminology
Consider the following terminology when using Riverbed
configuration settings:
◆ Adaptive Compression — Detects LZ data compression
performance for a connection dynamically and turns it off (sets
the compression level to 0) momentarily if it is not achieving
optimal results. Improves end-to-end throughput over the LAN
by maximizing the WAN throughput. By default, this setting is
disabled.
◆ Adaptive Data Streamlining Mode SDR-M — RiOS uses a
Riverbed proprietary algorithm called Scalable Data Referencing
(SDR). SDR breaks up TCP data streams into unique data chunks
that are stored in the hard disk (data store) of the device running
RiOS. Each data chunk is assigned a unique integer label
(reference) before it is sent to a peer RiOS device across the WAN.
When the same byte sequence is seen again in future
transmissions from clients or servers, the reference is sent across
the WAN instead of the raw data chunk. The peer RiOS device
uses this reference to find the original data chunk on its data
store, and reconstruct the original TCP data stream. SDR-M
performs data reduction entirely in memory, which prevents the
Steelhead appliance from reading and writing to and from the
disk. Enabling this option can yield high LAN-side throughput because
it eliminates all disk latency. SDR-M is most efficient when used
between two identical high-end Steelhead appliance models; for example,
6050 - 6050. When used between two different Steelhead appliance
models, the smaller model limits the performance.

IMPORTANT
You cannot use peer data store synchronization with SDR-M. In
code stream 5.0.x, this must be set from the CLI by running:
"datastore anchor-select 1033" and then "restart clean."

◆ Compression Level — Specifies the relative trade-off of data


compression for LAN throughput speed. Generally, a lower
number provides faster throughput and slightly less data
reduction. Select a data store compression value of 1 (minimum
compression, uses less CPU) through 9 (maximum compression,
uses more CPU) from the drop-down list. The default value is 1.
Riverbed recommends setting the compression level to 1 in
high-throughput environments such as data center to data center
replication.
◆ Correct Addressing — Turns WAN visibility off. Correct
addressing uses Steelhead appliance IP addresses and port
numbers in the TCP/IP packet header fields for optimized traffic
in both directions across the WAN. This is the default setting.
Also see "WAN Visibility Mode" on page 35.
◆ Data Store Segment Replacement Policy — Specifies a
replacement algorithm that replaces the least recently used data
in the data store, which improves hit rates when the data in the
data store are not equally used. The default and recommended
setting is Riverbed LRU.
◆ Guaranteed Bandwidth % — Specify the minimum amount of
bandwidth (as a percentage) to guarantee to a traffic class when
there is bandwidth contention. All of the classes combined cannot
exceed 100%. During contention for bandwidth the class is
guaranteed the amount of bandwidth specified. The class receives
more bandwidth if there is unused bandwidth remaining.
◆ In-Path Rule Type/Auto-Discover — Uses the auto-discovery
process to determine if a remote Steelhead appliance is able to
optimize the connection attempting to be created by this SYN
packet. By default, auto-discover is applied to all IP addresses and
ports that are not secure, interactive, or default Riverbed ports.
Defining in-path rules modifies this default setting.
◆ Multi-Core Balancing — Enables multi-core balancing which
ensures better distribution of workload across all CPUs, thereby
maximizing throughput by keeping all CPUs busy. Core
balancing is useful when handling a small number of
high-throughput connections (approximately 25 or less). By
default, this setting is disabled. In the 5.0.x code stream, this
needs to be performed from the CLI by running: "datastore
traffic-load rule srcaddr all srcport 0 dstaddr all dstport "1748"
encode "med"."
◆ Neural Framing Mode — Neural framing enables the system to
select the optimal packet framing boundaries for SDR. Neural
framing creates a set of heuristics to intelligently determine the
optimal moment to flush TCP buffers. The system continuously
evaluates these heuristics and uses the optimal heuristic to
maximize the amount of buffered data transmitted in each flush,
while minimizing the amount of idle time that the data sits in the
buffer.
For different types of traffic, one algorithm might be better than
others. The considerations include: latency added to the
connection, compression, and SDR performance.
You can specify the following neural framing settings:
• Never — Never use the Nagle algorithm. All the data is
immediately encoded without waiting for timers to fire or
application buffers to fill past a specified threshold. Neural
heuristics are computed in this mode but are not used.
• Always — Always use the Nagle algorithm. All data is passed
to the codec which attempts to coalesce consume calls (if
needed) to achieve better fingerprinting. A timer (6 ms) backs
up the codec and causes leftover data to be consumed. Neural
heuristics are computed in this mode but are not used.
• TCP Hints — This is the default setting which is based on the
TCP hints. If data is received from a partial frame packet or a
packet with the TCP PUSH flag set, the encoder encodes the
data instead of immediately coalescing it. Neural heuristics
are computed in this mode but are not used.


• Dynamic — Dynamically adjust the Nagle parameters. In this
option, the system discerns the optimum algorithm for a
particular type of traffic and switches to the best algorithm
based on traffic characteristic changes.
◆ Optimization Policy — When configuring In-path Rules you have
the option of configuring the optimization policy. There are
multiple options that can be selected and it is recommended to set
this option to "Normal" for EMC replication protocols, such as
SRDF/A. The configurable options are as follows:
• Normal — Perform LZ compression and SDR
• SDR-Only — Perform SDR; do not perform LZ compression
• Compression-Only — Perform LZ compression; do not
perform SDR
• None — Do not perform SDR or LZ compression
◆ Queue - MXTCP — When creating QoS Classes you will need to
specify a queuing method. MXTCP has very different use cases
than the other queue parameters.
MXTCP also has secondary effects that you need to understand
before configuring, including:
• When optimized traffic is mapped into a QoS class with the
MXTCP queuing parameter, the TCP congestion control
mechanism for that traffic is altered on the Steelhead
appliance. The normal TCP behavior of reducing the
outbound sending rate when detecting congestion or packet
loss is disabled, and the outbound rate is made to match the
minimum guaranteed bandwidth configured on the QoS class.
• You can use MXTCP to achieve high-throughput rates even
when the physical medium carrying the traffic has high loss
rates. For example, MXTCP is commonly used for ensuring
high throughput on satellite connections where a
lower-layer-loss recovery technique is not in use.
• Another usage of MXTCP is to achieve high throughput over
high bandwidth, high-latency links, especially when
intermediate routers do not have properly tuned interface
buffers. Improperly tuned router buffers cause TCP to
perceive congestion in the network, resulting in unnecessarily
dropped packets, even when the network can support high
throughput rates.


IMPORTANT
Use caution when specifying MXTCP. The outbound rate for
the optimized traffic in the configured QoS class immediately
increases to the specified bandwidth, and does not decrease in
the presence of network congestion. The Steelhead appliance
always tries to transmit traffic at the specified rate.

If no QoS mechanism (either parent classes on the Steelhead
appliance, or another QoS mechanism in the WAN or WAN
infrastructure) is in use to protect other traffic, that other traffic
might be impacted by MXTCP not backing off to fairly share
bandwidth. When MXTCP is configured as the queue parameter for a
QoS class, the following parameters for that class are also affected:

Link share weight — Prior to RiOS 8.0.x, the link share weight
parameter has no effect on a QoS class configured with
MXTCP. With RiOS 8.0.x and later, Adaptive MXTCP will allow
the link share weight settings to function for MXTCP QoS
classes.

Upper limit — Prior to RiOS 8.0.x, the upper limit parameter has
no effect on a QoS class configured with MXTCP. With RiOS 8.0.x
and later, Adaptive MXTCP will allow the upper limit settings to
function for MXTCP QoS classes.

◆ Reset Existing Client Connections on Start-Up — Enables kickoff.
If you enable kickoff, connections that exist when the Steelhead
service is started and restarted are disconnected. When the
connections are retried they are optimized. If kickoff is enabled,
all connections that existed before the Steelhead appliance started
are reset.
◆ WAN Visibility Mode/CA — Enables WAN visibility, which
pertains to how packets traversing the WAN are addressed. RiOS
v5.0 or later offers three types of WAN visibility modes: correct
addressing, port transparency, and full address transparency. You
configure WAN visibility on the client-side Steelhead appliance
(where the connection is initiated). The server-side Steelhead
appliance must also support WAN visibility (RiOS v5.0 or later).
Also, see Correct Addressing on page 32.


Notes
Consider the following when using Riverbed configuration settings:
◆ LAN Send and Receive Buffer Size should be configured to 2 MB
◆ WAN Send and Receive Buffer Size is environment dependent and
should be calculated using the following formula:
WAN BW * RTT * 2 / 8 = xxxxxxx bytes
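
Worked through with illustrative numbers (a 155 Mb/s link at 50 ms RTT;
a minimal Python sketch of the arithmetic, not a product tool):

def wan_buffer_bytes(wan_mbps, rtt_ms):
    # WAN send/receive buffer: WAN BW * RTT * 2 / 8, in bytes
    # (twice the bandwidth-delay product).
    return int(wan_mbps * 1_000_000 * (rtt_ms / 1000) * 2 / 8)

# LAN buffers stay at the fixed 2 MB (2097152 bytes); the WAN buffers
# scale with the circuit.
print(wan_buffer_bytes(155, 50))  # 1937500 bytes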

Features
Features include:
◆ SDR (Scalable Data Referencing)
◆ Compression
◆ QoS (Quality of Service)
◆ Data / Transport / Application / Management Streamlining
◆ Encryption - IPsec

Deployment topologies
Deployment topologies include:
◆ In-Path
• Physical In-Path
◆ Virtual In-Path
• WCCPv2 (Web Cache Communication Protocol)
• PBR (Policy-Based-Routing)
◆ Out-of-Path
• Proxy
◆ Steelhead DX 8000, Steelhead CX 7055/5055/1555 and Steelhead
7050/6050/5050 appliances also support 10 GbE Fibre ports
◆ Virtual Steelheads are supported when deployed on VMware ESX or
ESXi servers. The virtual appliances can only be deployed in
out-of-path configurations.


Failure modes supported


The following failure modes are supported:
◆ Fail-to-wire
◆ Fail-to-block

FCIP environment
The following Riverbed configuration settings are recommended in an
FCIP environment:
◆ Configure > Networking > QoS Classification:
• QoS Classification and Enforcement = Enabled
• QoS Mode = Flat
• QoS Network Interface with WAN throughput = Enabled for
appropriate WAN interface and set available WAN Bandwidth
• QoS Class Latency Priority = Real Time
• QoS Class Guaranteed Bandwidth % = Environment
dependent
• QoS Class Link Share Weight = Environment dependent
• QoS Class Upper Bandwidth % = Environment dependent
• Queue = MXTCP
• QoS Rule Protocol = All
• QoS Rule Traffic Type = Optimized
• DSCP = All
• VLAN = All
◆ Configure > Optimization > General Service Settings:
• In-Path Support = Enabled
• Reset Existing Client Connections on Start-Up = Enabled
• Enable In-Path Optimizations on Interface In-Path_X_X for
appropriate In-Path interface
• In RiOS v5.5.3 CLI or later: “datastore codec multi-codec
encoder max-ackqlen 30"
• In RiOS v6.0.1a or later: "datastore codec multi-codec encoder
global-txn-max 128"
• In RiOS v6.0.1a or later: "datastore sdr-policy sdr-m"


• In RiOS v6.0.1a or later: "datastore codec multi-core-bal"
• In RiOS v6.0.1a or later: "datastore codec compression level 1"
◆ Configure > Optimization > In-Path Rules:
• Type = Auto Discovery
• Preoptimization Policy = None
• Optimization Policy = Normal
• Latency Optimization Policy = Normal
• Neural Framing Mode = Never
• WAN Visibility = Correct Addressing
• In RiOS v5.5.3 CLI or later for FCIP: “in-path always-probe
enable”
• In RiOS v5.5.3 CLI or later for FCIP: “in-path always-probe
port 3225”
• In RiOS v6.0.1a or later: "in-path always-probe port 0"
• In RiOS v6.0.1a or later: "tcp adv-win-scale -1"
• In RiOS v6.0.1a or later: "in-path kickoff-resume"
• In RiOS v6.0.1a or later: "protocol FCIP enable" for FCIP
• In RiOS v6.0.1a or later: "protocol srdf enable" for Symmetrix
DMX and VMAX
Or, in RiOS v 6.1.1.a or later, you can use the GUI as follows:
– Configure > Optimization > FCIP
- FCIP Settings
- Enable FCIP
- FCIP Ports: 3225, 3226, 3227, 3228
• In RiOS v6.0.1a or later: "protocol fcip rule src-ip 0.0.0.0 dst-ip
0.0.0.0 dif enable" for EMC Symmetrix VMAX™
Or, in RiOS v 6.1.1.a or later, you can use the GUI as follows:
– Rules > Add a New Rule
- Enable DIF if R1 and R2 are VMAX and hosts are Open
Systems or IBM iSeries (AS/400)
- DIF Data Block Size: 512 bytes (Open Systems) and 520
Bytes (IBM iSeries, AS/400)
- No DIF setting is required if mainframe hosts are in use
• In RiOS v6.0.1i or later: "sport splice-policy outer-rst-port port
3226" for Brocade FCIP only


◆ Configure > Optimization > Performance:


• High Speed TCP = Enabled
• LAN Send Buffer Size = 2097152
• LAN Receive Buffer Size = 2097152
• WAN Default Send Buffer Size = 2*BDP (BW * RTT * 2 / 8 =
xxxxxxx bytes)

Note: BDP = Bandwidth delay product.

• WAN Default Rcv Buffer Size = 2*BDP (BW * RTT * 2 / 8 =
xxxxxxx bytes)
• Data Store Segment Replacement Policy = Riverbed LRU
• Adaptive Data Streamlining Modes = SDR-Default

Note: The latest appliances that use an SSD-based data store will
achieve high throughput with standard SDR (SDR-Default). For appliances
with a legacy data store, use SDR-M.

• Compression Level = 1
• Adaptive Compression = Disabled
• Multi-Core Balancing = Enabled

Note: Multi-Core Balancing should be disabled if there are 16 or
greater data-bearing connections (i.e., exclusive of control
connections, such as those commonly established by FCIP gateways).

Note: The maximum latency (round-trip time) and packet drop supported
on Cisco FCIP links are 100 ms round trip and 0.5% packet drop. The
limit is the same regardless of whether the latency and packet drop
conditions exist together or only one of them exists. This limitation
only applies to the baseline (without WAN OPT appliances). With WAN OPT
appliances and proper configurations, RTT and packet loss can be
extended beyond that limitation. Up to 200 ms round trip and 1% packet
drop were qualified by EMC E-Lab.

GigE environment
The following are Riverbed configuration settings recommended in a
GigE environment:


In RiOS v6.1.1a or later, Steelheads automatically detect and disable
the Symmetrix VMAX and DMX compression by
default. Use show log from the Steelhead to verify that compression
on the VMAX/DMX has been disabled. The "Native Symmetrix RE
port compression detected: auto-disabling" message will display only
on the Steelhead adjacent to the Symmetrix (either the local or remote
side) that initiates the connection.
With Riverbed firmware v6.1.3a and above, the SRDF Selective
Optimization feature is supported for SRDF group-level optimization in
end-to-end GigE environments with VMAX arrays running EMC Enginuity
v5875 and later. Refer to the Riverbed Steelhead Deployment and CLI
Guide for further instructions.
◆ Configure > Networking > Outbound QoS (Advanced):
• QoS Classification and Enforcement = Enabled
• QoS Mode = Flat or hierarchical
QoS classes are configured in one of two different modes: flat
or hierarchical. The difference between the two modes
primarily consists of how QoS classes are created. In RiOS v8.0
or later, the Hierarchical mode is recommended.
• QoS Network Interface with WAN throughput = Enabled for
appropriate WAN interfaces and set to available WAN
Bandwidth
• QoS Class Latency Priority = Real Time
• QoS Class Guaranteed Bandwidth % = Environment
dependent
• QoS Class Link Share Weight = Environment dependent
• QoS Class Upper Bandwidth % = Environment dependent
• Queue = MXTCP
• QoS Rule Protocol = All
• QoS Rule Traffic Type = Optimized
• DSCP = Reflect
◆ Configure > Optimization > General Service Settings:
• In-Path Support = Enabled
• Reset Existing Client Connections on Start-Up = Enabled
• Enable In-Path Optimizations on Interface In-Path_X_X
• In RiOS v5.5.3 CLI and later: “datastore codec multi-codec
encoder max-ackqlen 30”
• In RiOS v6.0.1a CLI or later: "datastore codec multi-codec
encoder global-txn-max 128"
◆ Configure > Optimization > In-Path Rules:


• Type = Auto Discovery


• Preoptimization Policy = None
• Optimization Policy = Normal
• Latency Optimization Policy = Normal
• Cloud Acceleration = Auto
• Neural Framing Mode = Never
• WAN Visibility = Correct Addressing
• Auto Kickoff = Enabled
• In RiOS v5.5.3 CLI or later for GigE: “in-path always-probe
enable”
• In RiOS v5.5.3 CLI or later for GigE: “in-path always-probe
port 1748”
• In RiOS v5.0.5-DR CLI or later for GigE: “in-path asyn-srdf
always-probe enable”
• In RiOS v6.0.1a or later: "in-path always-probe port 0"
• In RiOS v6.0.1a or later: "tcp adv-win-scale -1"
• In RiOS v6.0.1a or later: "protocol srdf enable " for Symmetrix
DMX and VMAX
Or, in RiOS v 6.1.1.a or later, you can use the GUI as follows:
– Configure > Optimization > SRDF
– SRDF Settings
– Enable SRDF
– SRDF Ports: 1748
• In RiOS v6.0.1a or later: "protocol srdf rule src-ip 0.0.0.0 dst-ip
0.0.0.0 dif enable” for Symmetrix VMAX
Or, in RiOS v6.1.1.a or later, you can use the GUI as follows:
– Rules > Add a New Rule
– Enable DIF if R1 and R2 are VMAX and hosts are Open
Systems or IBM iSeries (AS/400)
– DIF Data Block Size: 512 bytes (Open Systems) and 520 bytes (IBM iSeries, AS/400)
◆ Configure > Optimization > Transport Settings:
• Transport Optimization = Standard TCP
• LAN Send Buffer Size = 2097152
• WAN Default Send Buffer Size = 2*BDP (BW * RTT * 2 / 8 =
xxxxxxx bytes)
◆ Configure > Optimization > Performance:
• WAN Default Rcv Buffer Size = 2*BDP (BW * RTT * 2 / 8 =
xxxxxxx bytes)
• Data Store Segment Replacement Policy = Riverbed LRU
• Adaptive Data Streamlining Modes = SDR-Default


Note: The latest appliances, which use an SSD-based data store, achieve high throughput with standard SDR (SDR-Default). For appliances with a legacy disk-based data store, use SDR-M.

• Compression Level = 1
• Adaptive Compression = Disabled
• Multi-Core Balancing = Enabled

Note: Multi-Core Balancing should be disabled if there are 16 or more data-bearing connections (that is, exclusive of control connections, such as those commonly established by FCIP gateways).

References
For more information about the Riverbed Steelhead WAN Optimization Controller and the Riverbed system, refer to the following documents, available through Riverbed's website at http://www.riverbed.com:
◆ Steelhead Appliance Deployment Guide
◆ Steelhead Appliance Installation and Configuration Guide
◆ Riverbed Command-Line Interface Reference Manual


Riverbed Granite solution


This section provides the following information on the Riverbed
Granite solution.
◆ “Overview” on page 43
◆ “Features” on page 45
◆ “Configuring Granite Core High Availability” on page 47
◆ “Deployment topologies” on page 50
◆ “Configuring iSCSI settings on EMC storage” on page 51
◆ “Configuring iSCSI initiator on Granite Core” on page 52
◆ “Configuring iSCSI portal” on page 53
◆ “Configuring LUNs” on page 56
◆ “Configuring local LUNs” on page 58
◆ “Adding Granite Edge appliances” on page 59
◆ “Configuring CHAP users” on page 60
◆ “Confirming connection to the Granite Edge appliance” on
page 61
◆ “References” on page 62

Overview
Riverbed Granite is a block storage optimization and consolidation
system. It consolidates all storage at the data center and creates
diskless branches. Granite is designed to enable edge server systems
to efficiently access storage arrays over the WAN as if they were
locally attached.
The Granite solution is deployed in conjunction with Steelhead
appliances and consists of two components:
◆ Granite Core — A physical or virtual appliance in the data center that mounts, from the back-end storage array, all the LUNs that need to be made available to applications and servers at remote locations. Granite Core makes those LUNs available across the WAN in the branch via the Granite Edge module on a Steelhead EX or a standalone Granite Edge appliance.


◆ Granite Edge — A module that runs on a Steelhead EX appliance in the branch office. It presents storage LUNs projected from the data center as local LUNs to applications and servers on the local branch network, and operates as a block cache to ensure local performance.
The Granite Edge appliance also connects to the blockstore, a persistent local cache of storage blocks. When the edge server requests blocks, they are served locally from the blockstore; if they are not present, they are requested from the data center LUN. Similarly, newly written blocks are spooled to the local cache, acknowledged to the edge server, and then asynchronously propagated to the data center.
Because each Granite Edge appliance implementation is linked to
a dedicated LUN at the data center, the blockstore is authoritative
for both reads and writes, and can tolerate WAN outages without
worrying about cache coherency.
Blocks are communicated between Granite Edge appliance and
Granite Core and the data center LUN via an internal protocol.
(Optionally, this traffic can be further optimized by Steelheads for
improved performance.)
◆ For SCSI writes — Granite Edge acknowledges all writes locally
to ensure high-speed ("local") write performance. Granite
maintains a write block journal and preserves block-write order
to keep data consistent in case of a power failure or WAN
outages.
◆ For SCSI reads — The Granite Edge cache is warmed with active
data blocks delivered by Granite Core in the data center, which
performs predictive prefetch to ensure required data is quickly
delivered. Alternatively, a LUN can be "pinned" to the edge cache
and prepopulated with all data from a data center LUN.
Granite initially populates the blockstore in several possible ways:
◆ Reactive prefetch — The system observes block requests, applies
heuristics based on these observations to intelligently predict the
blocks most likely to be requested in the near future, and then
requests those blocks from the data center LUN in advance.
◆ Policy-based prefetch — Configured policies identify, in advance, the set of blocks likely to be requested at a given edge site; those blocks are then requested from the data center LUN ahead of time.


◆ First request — Blocks are added to the blockstore when first requested. Because the first request is cold, it is subject to standard WAN latency.
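
To make the reactive approach concrete, the following minimal Python sketch illustrates a sequential-run prefetch heuristic of the kind described above. It is an illustration only, not Riverbed code; the class, the callback, and the window size are assumptions.

    # Illustrative sketch only -- not Riverbed's implementation.
    # Detects sequential block access and prefetches the blocks that follow.
    class ReactivePrefetcher:
        def __init__(self, fetch_from_datacenter, window=8):
            self.fetch = fetch_from_datacenter  # callable: block id -> data (assumed)
            self.window = window                # how far ahead to prefetch
            self.cache = {}                     # blockstore stand-in
            self.last_block = None

        def read(self, block_id):
            # Serve from the local cache when possible.
            if block_id not in self.cache:
                self.cache[block_id] = self.fetch(block_id)  # cold miss: WAN latency
            # Heuristic: a sequential run predicts the blocks that follow.
            if self.last_block is not None and block_id == self.last_block + 1:
                for nxt in range(block_id + 1, block_id + 1 + self.window):
                    if nxt not in self.cache:
                        self.cache[nxt] = self.fetch(nxt)
            self.last_block = block_id
            return self.cache[block_id]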
Figure 3 shows an example of a multi-site Granite deployment.

Figure 3 Riverbed Granite example

Features
This section briefly describes Riverbed Granite features.

Granite Prediction and Prefetch

To deliver high performance when accessing block storage over the WAN, Granite brings file system awareness to the block layer. File system awareness enables intelligent block prefetch that addresses both high latency and the inherently random nature of I/O at the block layer, accelerating block storage access across distance. Granite Core performs block-level prefetch from the back-end storage array and actively pushes blocks to the Granite Edge to keep its cache warm with a working set of data. To accomplish this prefetch, Granite first establishes file system context at the block layer. For Windows servers (physical or virtual), for instance, Granite Core traverses the NTFS Master File Table (MFT) to build a two-way map of the file system: blocks to files, and files to blocks. This map is used to determine what to prefetch. By intelligently inspecting block access requests from the application/host file system iSCSI initiator, Granite algorithms predict the next logical file system block or cluster of blocks to prefetch. This process provides seamless access to data center LUNs and ensures that operations like file access and large directory browsing are accelerated across a WAN.
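
A two-way map of this kind can be pictured with a small Python sketch. This is a conceptual illustration, not Riverbed's MFT traversal; the structure and names are assumptions.

    # Conceptual two-way file system map -- not Riverbed code.
    from collections import defaultdict

    class TwoWayMap:
        def __init__(self):
            self.file_to_blocks = defaultdict(list)  # file -> ordered block list
            self.block_to_file = {}                  # block -> owning file

        def add_extent(self, file_name, first_block, count):
            # Record that a file occupies a contiguous run of blocks.
            for b in range(first_block, first_block + count):
                self.file_to_blocks[file_name].append(b)
                self.block_to_file[b] = file_name

        def prefetch_candidates(self, block_id):
            # Given one requested block, return the rest of the owning
            # file's blocks -- the candidates for prefetch.
            f = self.block_to_file.get(block_id)
            if f is None:
                return []
            return [b for b in self.file_to_blocks[f] if b != block_id]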

Granite Edge Blockstore cache

To eliminate latency introduced by the WAN, the Granite appliance in the branch presents a write-back block cache, called the blockstore. Block writes by applications and hosts at the edge are acknowledged locally by the blockstore and then asynchronously flushed back to the data center. This enables application and file system initiator hosts in the branch to make forward progress without being impacted by WAN latency. As blocks are received, written to disk, and acknowledged, the written blocks are also journaled in write order to a log. This log file is used to maintain the block-write order to ensure data consistency in case of a crash or WAN outage. When the connection is restored, Granite Edge plays the blocks in logged write order to Granite Core, which commits the blocks to the physical LUN on the back-end storage array. The combination of block journaling and write-order preservation enables Granite Edge to continue serving write functions in the branch during a WAN disconnection.
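
The journaling behavior described above can be sketched as a write-back cache paired with a write-order journal. The following Python sketch is a simplified illustration under assumed interfaces, not Riverbed code; real persistence, crash safety, and the Granite Core protocol are omitted.

    # Simplified write-back blockstore with a write-order journal.
    # Illustration only; interfaces are assumptions.
    class Blockstore:
        def __init__(self, commit_to_core):
            self.commit = commit_to_core  # callable: (block id, data) -> None (assumed)
            self.cache = {}               # local block cache
            self.journal = []             # dirty blocks, preserved in write order

        def write(self, block_id, data):
            self.cache[block_id] = data             # acknowledged locally ("local" speed)
            self.journal.append((block_id, data))   # journaled in write order

        def flush(self):
            # When the WAN is available, replay the journal in logged
            # write order so the data center LUN stays consistent.
            while self.journal:
                block_id, data = self.journal.pop(0)
                self.commit(block_id, data)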

LUN pinning

A storage LUN provisioned via Granite can be deployed in two different modes: pinned and unpinned. Pinned mode caches 100% of the data blocks on the Steelhead EX appliance at the branch. This ensures that the contents of specified storage LUNs are maintained at the edge to support business operations in the event of a WAN outage. Unpinned mode maintains only a working set of the most frequently accessed blocks at the branch.

Disconnected operations

In some cases, the WAN connection may suffer an outage or may be unpredictable. That means branch-resident applications might behave unpredictably on a block cache miss that cannot be serviced from the data center LUN during a WAN outage. For such cases, Granite LUN pinning is recommended. Pinning ensures that the contents of an entire LUN are prefetched from the data center and prepopulated at the edge to guarantee a 100% hit rate on Granite Edge. In LUN-pinning mode, no reads are sent across the wire to the data center, although dirty blocks (changed or newly written blocks) are preserved in a persistent log and flushed to the data center when WAN connectivity is restored, ensuring consolidation and protection of newly created data. The result is high performance for applications during a WAN outage, while extending continuous data protection to the edge.

Boot over the WAN

VMware vSphere virtual server technology combined with Granite makes it possible to boot over the WAN, providing instant provisioning and fast recovery capabilities for edge locations. A bootable LUN in the data center is mapped to a host in a branch office. The host can be either a separate ESXi server or the Steelhead EX embedded VSP. Granite Core detects the LUN as a VMFS file system with an embedded NTFS file system virtual machine workload and, upon further inspection, learns the block sequence of both.
Once the boot process on the branch host starts, blocks for the Windows file server virtual machine are requested from across the WAN. Granite Core recognizes these requests, prefetches all of the required block clusters from the data center provisioned LUN, and pushes them to the Granite Edge appliance at the branch, ensuring local performance for the boot operation.

Configuring Granite Core High Availability


You can configure high availability between two Granite Core
appliances using the Management Console of either appliance.
When in a failover relationship, both appliances operate independently rather than either one being in standby mode. If either appliance fails, the failover peer manages the traffic of both appliances.
If you configure the current Granite Core appliance for failover with
another appliance, all storage configuration and storage report pages
include an additional feature that enables you to access and modify
settings for both the current appliance and the failover peer.
This feature appears below the page title and includes the text
"Device failover is enabled. You are currently viewing configuration
and reports for...." You can then select either Self (the current
appliance) or Peer from the drop-down list.


Figure 4 shows a sample storage configuration page with the feature enabled.

Figure 4 Sample storage configuration pages

You can configure two appliances for failover in the Failover Configuration page.

Note: For failover, Riverbed recommends connecting both failover peers directly with cables via two interfaces. If direct connection is not an option, Riverbed recommends that each failover connection use a different local interface and reach its peer IP address via a completely separate route.
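
To illustrate why two independent paths matter, the Python sketch below declares the peer failed only when it is unreachable on both paths, which avoids a false failover when a single route is down. It is a generic illustration; the addresses, port, and TCP-connect probe are assumptions, not the Granite Core heartbeat implementation.

    # Illustrative dual-path peer check -- not Granite Core's heartbeat.
    import socket

    PEER_PATHS = [("10.0.1.2", 7970), ("10.0.2.2", 7970)]  # hypothetical peer IPs/port

    def peer_alive(address, port, timeout=2.0):
        # Probe one path with a simple TCP connect.
        try:
            with socket.create_connection((address, port), timeout=timeout):
                return True
        except OSError:
            return False

    def peer_failed():
        # Declare failure only if the peer is unreachable on every
        # independent path; one dead route alone is not enough.
        return not any(peer_alive(a, p) for a, p in PEER_PATHS)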

To configure a peer appliance, complete the following steps:


1. Log in to the Management Console of one of the appliances to be
configured for high availability.


2. Choose Configure > Failover Configuration to display the Failover Configuration page.

3. Enter the peer IP address in the Peer IP field.


4. Specify the failover peer using the controls described in the
following table.

Component Description

Peer IP Address Specify the IP address of the peer appliance.

Local Interface Optionally, specify a local interface over which the heartbeat is monitored.

Second Peer IP Address Specify a second IP address of the peer appliance.

Second Local Interface Optionally, specify a second local interface over which the heartbeat is monitored.

Enable Failover Enables the new failover configuration.

After failover has been configured, you can access configuration settings for both appliances from either appliance.


Deployment topologies
Figure 5 illustrates a generic Granite deployment.

Figure 5 Granite deployment example

The basic system components are:


◆ Microsoft Windows Branch Server — The branch-side server that
accesses data from the Granite system instead of a local storage
device.
◆ Blockstore — The blockstore is a persistent local cache of storage
blocks. Because each Granite Edge appliance is linked to a
dedicated LUN at the data center, the blockstore is generally
authoritative for both reads and writes.
In the above diagram, the blockstore on the branch side is
synchronized with LUN1 at the data center.
◆ iSCSI Initiator — The iSCSI initiator is the branch-side client that
sends SCSI commands to the iSCSI target at the data center.
◆ Granite-enabled Steelhead EX appliance — Also referred to as a Granite Edge appliance, this branch-side component of the Granite system links the edge server to the blockstore, and links the blockstore to the iSCSI target and LUN at the data center. The Steelhead also provides general optimization services.
◆ Data Center Steelhead appliance — The data center-side
Steelhead peer for general optimization.


◆ Granite Core — The data center component of the Granite system. Granite Core manages block transfers between the LUN and the Granite Edge appliance.
◆ iSCSI Target — The data center-side server that communicates
with the branch-side iSCSI initiator.
◆ LUNs — Each Granite Edge appliance requires a dedicated LUN
in the data center storage configuration.

Configuring iSCSI settings on EMC storage


For instructions on how to configure iSCSI, refer to the iSCSI SAN
Topologies TechBook, available on the E-Lab Interoperability Navigator,
Topology Resource Center tab, at http://elabnavigator.EMC.com.
The Use Case Scenarios chapter describes the steps to configure iSCSI
storage on the EMC VNX and VMAX arrays. Follow the steps for a
Linux iSCSI host.
After the arrays are configured, the Granite Cores are configured
using the steps described next in “Configuring iSCSI initiator on
Granite Core” on page 52.


Configuring iSCSI initiator on Granite Core


To configure the iSCSI initiator, complete the following steps:
1. Choose Configure > Storage > iSCSI Configuration to display
the iSCSI Configuration page.

2. Under iSCSI Initiator Configuration, configure authentication using the controls described in the following table.

Control Description

Initiator Name Specify the name of the initiator to be configured.

Enable Header Digest Includes the header digest data in the iSCSI PDU.

Enable Data Digest Includes the data digest data in the iSCSI PDU.

Enable Mutual CHAP Authentication Enables CHAP (Challenge-Handshake Authentication Protocol) authentication.
If you select this option, an additional setting appears for specifying the mutual CHAP user. You can either select an existing user from the drop-down list or create a new CHAP user definition dynamically.
Note: CHAP authenticates a user or network host to an authenticating entity. CHAP provides protection against playback attack by the peer through the use of an incrementally changing identifier and a variable challenge value.

Apply Applies the changes to the running configuration.


Configuring iSCSI portal


To configure an iSCSI portal, complete the following steps:
1. Choose Configure > Storage > iSCSI Configuration to display
the iSCSI Configuration page.

2. Under iSCSI Portal Configuration, add or modify iSCSI portal configurations using the controls described in the following table.

Control Description

Add an iSCSI Portal Displays controls for configuring and adding a new
iSCSI portal.

IP Address Specify the IP address of the iSCSI portal.

Port Specify the port number of the iSCSI portal. The default is 3260.

Authentication Select an authentication method (None or CHAP) from the drop-down list.
Note: If you select CHAP, an additional field displays in which you can specify (or create) the CHAP username.

Add iSCSI Portal Adds the defined iSCSI portal to the running
configuration.


3. To view or modify portal settings, click the portal IP address in the list to access the following set of controls.

Control Description

Portal Settings Specify the following:
• Port - The port setting for the selected iSCSI portal.
• Authentication - Specify either None or CHAP from the drop-down list.
• Update iSCSI Portals - Updates the portal settings configuration.

Offline LUNs Click Offline LUNs to take offline all LUNs serviced by
this selected iSCSI portal.

4. To add a target to the newly configured portal:


a. Click the portal IP address in the list to expand the set of
controls.


b. Under Targets, add a target for the portal using the controls
described in the following table.

Control Description

Add a Target Displays controls for adding a target.

Target Name Enter the target name or choose from available targets.
Note: This field also enables you to rescan for available
targets.

Port Specify the port number of the target.

Snapshot Configuration From the drop-down list, select an existing snapshot configuration.
If the desired snapshot configuration does not appear on
the list, you can add a new one by clicking Add New
Snapshot Configuration. You will be prompted to
specify the following:
• Host Name or IP Address - Specify the IP address or
hostname of the storage array.
• Type - Select the type from the drop-down list.
• Username - Specify the username.
• Password/Confirm Password - Specify a password.
Retype the password in the Password Confirm text
field.
• Protocol - Specify either HTTP or HTTPS.
• Port - Specify the HTTP or HTTPS port number.

Note: The Protocol and Port fields are only activated if NetApp is selected as Type.

Add Target Adds the newly defined target to the current iSCSI portal
configuration.

5. To modify an existing target configuration:


a. Click the portal IP address in the list to expand the set of
controls.


b. Under Targets, click the target name in the Targets table to expand the target settings using the controls described in the following table.

Control Description

Target Settings Open this tab to modify the port and snapshot
configuration settings.
Optionally, you can add a new snapshot configuration
dynamically by clicking the Add New Snapshot
Configuration link adjacent to the setting field.

Offline LUNs Open this tab to access the Offline LUNs button.
Clicking this button takes offline all configured LUNs
serviced by the current target.
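
Before adding a portal, it can be useful to verify that the portal IP address is reachable on the iSCSI port. The following generic Python sketch performs only a TCP connect test against the default port 3260; it is not part of the Granite Core product, and the example address is hypothetical.

    # Generic reachability check for an iSCSI portal (default port 3260).
    import socket

    def portal_reachable(ip, port=3260, timeout=3.0):
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True   # something is listening on the iSCSI port
        except OSError:
            return False      # unreachable, filtered, or not listening

    # Example (hypothetical portal address):
    # print(portal_reachable("192.168.50.10"))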

Configuring LUNs
To configure an iSCSI LUN, complete the following steps.
1. Choose Configure > Storage > LUNs to display the LUNs page.


2. Configure the LUN using the controls described in the following table.

Control Description

Add an iSCSI LUN Displays controls for adding an iSCSI LUN to the current
configuration.

LUN Serial Number Select from the drop-down list of discovered LUNs.
The LUNs listed are shown using the following format: serial
number (portal/target).
Note: If the desired LUN does not appear, scroll to the bottom
of the list and select Rescan background storage for new LUNs.

LUN Alias Specify an alias for the LUN.

Add iSCSI LUN Adds the new LUN to the running configuration.

3. To modify an existing iSCSI LUN configuration, click the name in the LUN list to display additional controls.

Control Description

Details Displays online or offline status.
• Click Offline to take the LUN offline.
• Click Online to bring the LUN online.
Additionally, displays the following information about
the LUN:
• Connection status
• Locally Assigned LUN Serial
• Origin LUN Serial
• Origin Portal
• Origin Target
• Size (in MB)

Alias Displays the LUN alias. Optionally, you can modify the value and click Update Alias.

Edge Mapping Displays the Granite Edge appliance to which the LUN is mapped.
To unmap, click Unmap.

Failover Displays whether the LUN is configured for failover.
To enable or disable failover, click Enable or Disable.


MPIO Displays multipath information for the LUN.
Additionally, the MPIO policy can be changed from round-robin (default) to fixed-path.

Snapshots Displays the snapshot configurations for the selected iSCSI LUN.
Additionally, controls link to the settings for modifying and updating snapshots.

Pin/Prepop Displays the pin status (Pinned or Unpinned) and provides controls for changing the status.
When a LUN is pinned, the data is reserved and not subject to the normal blockstore eviction policies.
This tab also contains controls for enabling or disabling the prepopulation service and for configuring a prepopulation schedule.
Note: You can create a prepopulation schedule only when the pin status is set and updated to pinned.

Configuring local LUNs


To configure a local LUN, complete the following steps.
1. Choose Configure > Storage > LUNs to display the LUNs page.
2. Configure the LUN using the controls described in the following
table.

Control Description

Add a Local LUN Displays controls for adding a local LUN to the current configuration. Local LUNs consist of storage on the Granite Edge only; there is no corresponding LUN on the Granite Core.

Granite Edge Select a Granite Edge appliance from the drop-down list of configured appliances.

Size Specify the LUN size, in MB.

Alias Specify the alias for the LUN.

Add a Local LUN Adds the new LUN to the running configuration.


3. To modify an existing local LUN configuration, click the name in the LUN list to display additional controls.

Control Description

LUN Status Displays online or offline status.
• Click Offline to take the LUN offline.
• Click Online to bring the LUN online.

LUN Details Displays the following information about the LUN:
• VE-assigned serial number
• Granite Edge appliance
• Target

LUN Alias Displays the LUN alias, if applicable. Optionally, modify the
value and click Update Alias.

Adding Granite Edge appliances


To add or modify Granite Edge appliances, complete the following steps.
1. Choose Configure > Storage > Granite Edges to display the
Granite Edges page.


2. Configure the Granite Edge using the controls described in the following table.

Control Description

Add a Granite Edge Displays controls for adding a Granite Edge appliance to the
current configuration.

Granite Edge Identifier Specify the identifier for the Granite Edge appliance. This value must match the value configured on the Granite Edge appliance.
Note: Granite Edge identifiers are case-sensitive.

Blockstore encryption Changes the encryption used when writing data to the
blockstore.

Add Granite Edge Adds the new Granite Edge appliance to the running
configuration. The newly added appliance appears in the list.

3. To remove an existing Granite Edge configuration, click the trash icon in the Remove column.

Configuring CHAP users


You can configure CHAP users in the CHAP Users page.

Note: You can also configure CHAP users dynamically in the iSCSI
Configuration page.

To configure CHAP users, complete the following steps.


1. Choose Configure > Storage > CHAP Users to display the CHAP
Users page.


2. Add new CHAP users using the controls described in the following table.

Control Description

Add a CHAP User Displays controls for adding a new CHAP user to the running
configuration.

Username Specify a CHAP username.

Password/Confirm Password Specify and confirm a password for the new CHAP user.

Add CHAP User Adds the new CHAP user to the running configuration.

3. To modify an existing CHAP user configuration, click the username in the User table to expand a set of additional controls. New CHAP users are enabled by default.
4. To disable a CHAP user:
a. Click the username to expand the set of additional controls.
b. Clear the Enable check box.
c. Click Update CHAP User.
5. To change the user password, enter and confirm the new password and click Update CHAP User.
6. To remove an existing CHAP user configuration, click the trash
icon in the Delete column.
7. Click Save to save your settings permanently.
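
For background, the CHAP exchange these users participate in computes its response as the MD5 hash of the identifier, the shared secret, and the challenge (per RFC 1994). The Python sketch below shows that computation only; it is generic and not specific to Granite, and the example values are illustrative.

    # CHAP response per RFC 1994: MD5(identifier || secret || challenge).
    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # The identifier changes incrementally and the challenge varies,
        # which is what protects against playback attacks.
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # Example with illustrative values:
    challenge = os.urandom(16)
    resp = chap_response(1, b"example-secret", challenge)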

Confirming connection to the Granite Edge appliance


This section describes how to confirm that the Granite Edge
appliance is communicating with the newly configured Granite Core
appliance.
To confirm connection, complete the following steps:
1. Log in to the Management Console of the Granite Edge appliance.
2. Choose Configure > Granite > Granite Storage to go to the
Granite Storage page.


If the connection was successful, the page displays connection details, including the iSCSI target configuration and LUN information.

References
For more information, refer to the following documents, available through Riverbed's website at http://www.riverbed.com:
◆ Granite Core Deployment Guide
◆ Granite Core Installation and Configuration Guide
◆ Riverbed Branch Office Infrastructure for EMC Storage Systems
(Reference Architecture)


Silver Peak appliances


This section provides information on Silver Peak WAN optimization controller appliances. The following topics are discussed:
◆ “Overview” on page 63
◆ “Terminology” on page 64
◆ “Features” on page 66
◆ “Deployment topologies” on page 67
◆ “Failure modes supported” on page 67
◆ “FCIP environment” on page 67
◆ “GigE environment” on page 68
◆ “References” on page 69

Overview
Silver Peak appliances are interconnected by tunnels, which transport optimized traffic flows. Policies control how the appliance filters LAN-side packets into flows and whether an individual flow is:
◆ directed to a tunnel, shaped, and optimized;
◆ processed as shaped, pass-through (unoptimized) traffic;
◆ processed as unshaped, pass-through (unoptimized) traffic;
◆ continued to the next applicable Route Policy entry if a tunnel goes down; or
◆ dropped.
The appliance manager has separate policies for routing,
optimization, and QoS functions. These policies prescribe how the
appliance handles the LAN packets it receives.
The optimization policy uses optimization techniques to improve the
performance of applications across the WAN. Optimization policy
actions include network memory, payload compression, and TCP
acceleration.
Silver Peak ensures network integrity by using QoS management,
Forward Error Correction, and Packet Order Correction. When
Adaptive Forward Error Correction (FEC) is enabled, the appliance
introduces a parity packet, which helps detect and correct single-packet loss within a stream of packets, reducing the need for retransmissions. Silver Peak can dynamically adjust how often this parity packet is introduced in response to changing link conditions. This can help maximize error correction while minimizing overhead.
To avoid retransmissions that occur when packets arrive out of order, Silver Peak appliances use Packet Order Correction (POC) to resequence packets on the far end of a WAN link, as needed.
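
As a rough illustration of the parity-packet idea, the Python sketch below XORs a group of packets into a single parity packet and rebuilds one lost packet from the survivors. It assumes equal-length packets and omits real FEC framing and Silver Peak's adaptive ratio logic.

    # Toy XOR-parity FEC: one parity packet per group recovers one loss.
    # Illustration only -- not Silver Peak's implementation.
    def make_parity(packets):
        parity = bytearray(len(packets[0]))  # assumes equal-length packets
        for pkt in packets:
            for i, byte in enumerate(pkt):
                parity[i] ^= byte
        return bytes(parity)

    def recover(received, parity):
        # XOR of the surviving packets and the parity packet
        # reconstructs the single missing packet.
        missing = bytearray(parity)
        for pkt in received:
            for i, byte in enumerate(pkt):
                missing[i] ^= byte
        return bytes(missing)

    group = [b"AAAA", b"BBBB", b"CCCC", b"DDDD", b"EEEE"]  # e.g., FEC ratio 1:5
    p = make_parity(group)
    assert recover(group[:2] + group[3:], p) == group[2]   # lost packet rebuilt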

Terminology
Consider the following terminology when using Silver Peak
configuration settings:
◆ Coalescing ON — Enables/disables packet coalescing. Packet
coalescing transmits smaller packets in groups of larger packets,
thereby increasing performance and helping to overcome the
effects of latency.
◆ Coalesce Wait — Timer (in milliseconds) used to determine the
amount of time to wait before transmitting coalesced packets.
◆ Compression — Reduces the bandwidth consumed by traffic
traversing the WAN. Payload compression is used in conjunction
with network memory to provide compression on "first pass"
data.
◆ Congestion Control — Techniques used by Silver Peak to manage congestion scenarios across a WAN. Configuration options are standard, optimized, and auto. Standard uses standard TCP congestion control. Optimized congestion control is the most aggressive mode and should be used only in environments with point-to-point connections dedicated to a single application. Auto congestion control aims to improve throughput over standard congestion control, but may not be suitable for all environments.
◆ FEC / FEC Ratio — Technique used by Silver Peak to recover from packet loss without the need for packet retransmissions. Loss is corrected on the Silver Peak appliance, resulting in higher throughput during the data transmission.
◆ IP Header Compression — Enables/disables compression of the
IP header in order to reduce the packet size. Header compression
can provide additional bandwidth gains by reducing packet
header information using specialized compression algorithms.


◆ Mode — Refers to the Silver Peak tunnel configuration. The default setting is GRE; the alternative option is UDP.
◆ MTU (Maximum Transmission Unit) — The size, in bytes, of the
largest PDU that a given layer of a communications protocol can
pass onwards.
◆ Network Memory — Silver Peak's implementation of real-time data reduction of network traffic. This de-duplication technology is used to inspect all inbound and outbound WAN traffic, storing a local instance of the data on each appliance. The NX Series appliance compares real-time traffic streams with patterns stored using Network Memory. If a match exists, a short reference pointer is sent to the remote Silver Peak appliance, instructing it to deliver the traffic pattern from its local instance. Repetitive data is never sent across the WAN. If the content is modified, the Silver Peak appliance detects the change at the byte level and updates the network's memory. Only the modifications are sent across the WAN. These are combined with the original content by NX Series appliances at the destination location. (A simplified sketch of this de-duplication approach follows this terminology list.)
Currently, it is recommended to enable network memory and set
the network memory mode to 1. Mode 1 is referred to as "low
latency mode" and enables network memory to better balance
data reduction versus high throughput. While network memory
can be enabled from the GUI, configuring it for mode 1 must be
performed through the CLI.
◆ Payload Compression — Uses algorithms to identify relatively
short byte sequences that are repeated frequently over time.
These sequences are then replaced with shorter segments of code
to reduce the size of transmitted data. Simple algorithms can find
repeated bytes within a single packet; more sophisticated
algorithms can find duplication across packets and even across
flows.
◆ Reorder Wait — Time (in milliseconds) that the Silver Peak
appliances will wait to reorder packets. This is a dynamic value
that will change based on line conditions. Recommendation is to
leave this as the default for SRDF traffic.
◆ RTP Header Compression — Used to compress the size of the
RTP protocol packet header used in Voice over IP
communications. Header compression can provide additional
bandwidth gains by reducing packet header information using
specialized compression algorithms.


◆ TCP Acceleration — Refers to several techniques used by Silver Peak to accelerate the TCP protocol. TCP acceleration uses techniques such as selective acknowledgement, window scaling, and transaction size adjustment to compensate for poor performance on high-latency links.
◆ Tunnel Auto Max BW — Allows the Silver Peak to automatically
determine the maximum bandwidth available. Recommendation
is to disable this in SRDF environments.
◆ Tunnel Max BW — For manually configuring the maximum
bandwidth accessible to the Silver Peak. This is recommended in
SRDF environments where bandwidth values are known. This is
a static configuration.
◆ Tunnel Min BW — For manually configuring the minimum bandwidth for the tunnel. This does not need to be set for proper operation. This is a static configuration. A value of 32 kbps, which is the default, is recommended.
◆ WAN Bandwidth — Applies to the WAN side of the appliance
and should be set to the amount of bandwidth to be made
available to the appliance on the WAN side. Inputting a value
also configures the tunnel max bandwidth configuration variable.
◆ Windows Scaling — Used to overcome the effects of latency on single-flow throughput in a TCP network. The window-scale factor multiplies the standard TCP window of 64 KB by 2 to the power of the window-scale; the default window-scale of 6 therefore allows windows of up to 64 KB * 2^6 = 4 MB.
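
The reference-pointer behavior described under Network Memory can be sketched as send-side de-duplication over fingerprinted chunks. The following Python sketch is a generic illustration with fixed-size chunks and SHA-256 fingerprints; Silver Peak's actual chunking, matching, and byte-level delta logic are proprietary and not shown.

    # Generic send-side de-duplication sketch -- not Silver Peak's implementation.
    import hashlib

    CHUNK = 1024  # fixed-size chunks for illustration
    seen = {}     # fingerprint -> chunk assumed already held by the remote peer

    def encode(stream):
        # Emit ("ref", fingerprint) for repeated chunks and
        # ("raw", data) for first-pass data.
        out = []
        for i in range(0, len(stream), CHUNK):
            chunk = stream[i:i + CHUNK]
            fp = hashlib.sha256(chunk).digest()
            if fp in seen:
                out.append(("ref", fp))     # short pointer instead of payload
            else:
                seen[fp] = chunk
                out.append(("raw", chunk))  # sent once, stored on both sides
        return out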

Features
Features include:
◆ Compression (payload and header)
◆ Network memory (data-deduplication)
◆ TCP acceleration
◆ QoS (Quality of Service)
◆ FEC (Forward Error Correction)
◆ POC (Packet Order Correction)
◆ Encryption - IPsec


Deployment topologies
Deployment topologies include:
◆ In-line (bridge mode)
• In-line
◆ Out-of-path (router)
• Out-of-path with Policy-Based-Routing (PBR) redirection
• Out-of-path with Web Cache Coordination Protocol
(WCCPv2)
• Out-of-path with VRRP peering to WAN router
• Out-of-path with Policy-Based-Routing (PBR) and VRRP
redundant Silver Peak appliances
• Out-of-path with Web Cache Coordination Protocol (WCCP)
redundant Silver Peak appliances
◆ Silver Peak appliances can be deployed only in out-of-path (router) mode when using 10 Gb Ethernet fibre data ports, because the optical interfaces do not fail to wire
◆ The Silver Peak NX-8700, NX-9700, and NX-10000 appliances
support 10 Gb Ethernet Fibre data ports
◆ The Silver Peak VX and VRX virtual appliances are supported when deployed on VMware ESX or ESXi servers. The virtual appliances can only be deployed in out-of-path configurations.

Failure modes supported


The following failure modes are supported:
• Fail-to-wire
• Fail-open

FCIP environment
The following Silver Peak configuration settings are recommended in
an FCIP environment:
◆ WAN Bandwidth = (Environment dependent)
◆ Tunnel Auto Max BW = Disabled (Unchecked)


◆ Tunnel Max BW = in Kb/s (Environment dependent)


◆ Tunnel Min BW = 32 Kb/s
◆ Reorder Wait = 100 ms
◆ MTU = 1500 (For 3.1 code and higher, maximum MTU = 2500)
◆ Mode = GRE
◆ Network Memory = Enabled
◆ Compression = Enabled
◆ TCP Acceleration = Enabled
◆ CIFS Acceleration = Disabled
◆ FEC = Enabled
◆ FEC Ratio = 1:5 (Recommended)
◆ Windows Scale Factor = 8
◆ Congestion Control = Optimized
◆ IP Header Compression = Enabled
◆ RTP Header Compression = Enabled
◆ Coalescing On = Yes
◆ Coalesce Wait = 0 ms
◆ From the CLI run: "system network-memory mode 1"

Note: The maximum latency (round-trip time) and packet drop supported on Cisco FCIP links are 100 ms round trip and 0.5% packet drop. The limit is the same regardless of whether the latency and packet-drop conditions exist together or only one of them exists. This limitation applies only to the baseline (without WAN optimization appliances). With WAN optimization appliances and proper configuration, RTT and packet loss can be extended beyond that limitation; up to 200 ms round trip and 1% packet drop were qualified by EMC E-Lab.

GigE environment
The following Silver Peak configuration settings are recommended in
a GigE environment:
◆ WAN Bandwidth = (Environment dependent)
◆ Tunnel Auto Max BW = Disabled (Unchecked)
◆ Tunnel Max BW = in Kb/s (Environment dependent)


◆ Tunnel Min BW = 32 Kb/s


◆ Reorder Wait = 100 ms
◆ MTU = 1500
◆ Mode = GRE
◆ Network Memory = Enabled
◆ Compression = Enabled
◆ TCP Acceleration = Enabled
◆ CIFS Acceleration = Disabled
◆ FEC = Enabled
◆ FEC Ratio = 1:5 (Recommended)
◆ Windows Scale Factor = 8
◆ Congestion Control = Optimized
◆ IP Header Compression = Enabled
◆ RTP Header Compression = Enabled
◆ Coalescing On = Yes
◆ Coalesce Wait = 0 ms
◆ From the CLI run: "system network-memory mode 1"

References
For more information about Silver Peak appliances, refer to the following documents, available on the Silver Peak website at http://www.silver-peak.com:
◆ Silver Peak Command Line Interface Reference Guide
◆ Silver Peak Network Deployment Guide
