Docu33925 TechBook WAN Optimization Controller Technologies
Version 3.1
Vinay Jonnakuti
Chuan Liu
Eric Pun
Donald Robertson
Tom Zhao
Copyright © 2012-2014 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United
States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support
(https://support.emc.com).
EMC Support Matrix and E-Lab Interoperability Navigator
For the most up-to-date information, always consult the EMC Support Matrix (ESM), available through the E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com.
All of the matrices, including the ESM (which does not include most
software), are subsets of the E-Lab Interoperability Navigator
database. Included under this tab are:
◆ The EMC Support Matrix, a complete guide to interoperable, and
supportable, configurations.
◆ Subset matrices for specific storage families, server families,
operating systems or software products.
◆ Host connectivity guides for complete, authoritative information
on how to configure hosts effectively for various storage
environments.
Consult the Internet Protocol pdf under the "Miscellaneous" heading
for EMC's policies and requirements for the EMC Support Matrix.
Related documentation
The following documents, including this one, are available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.
These documents are also available at the following location:
http://www.emc.com/products/interoperability/topology-resource-center.htm
Authors of this TechBook
This TechBook was authored by Vinay Jonnakuti and Eric Pun, along with other EMC engineers, EMC field personnel, and partners.
Vinay Jonnakuti is a Sr. Corporate Systems Engineer in the Unified
Storage division of EMC focusing on VNX and VNXe products,
working on pre-sales deliverables including collateral, customer
presentations, customer beta testing and proof of concepts. Vinay has
been with EMC for over 6 years. Prior to his current position, Vinay
worked in EMC E-Lab leading the qualification and architecting of
solutions with WAN-Optimization appliances from various partners
with various replication technologies, including SRDF (GigE/FCIP),
SAN-Copy, MirrorView, VPLEX, and RecoverPoint. Vinay also
worked on Fibre Channel and iSCSI qualification on the VMAX
Storage arrays.
Chuan Liu is a Senior Systems Integration Engineer with more than 6
years of experience in the telecommunication industry. After joining
EMC, he worked in E-Lab qualifying IBM/HP/Cisco blade switches
and WAN Optimization products. Currently, Chuan focuses on
qualifying SRDF with FCIP/GigE technologies used in the setup of
different WAN Optimization products.
Eric Pun is a Senior Systems Integration Engineer and has been with
EMC for over 13 years. For the past several years, Eric has worked in
E-lab qualifying interoperability between Fibre Channel switched
hardware and distance extension products. The distance extension
technology includes DWDM, CWDM, OTN, FC-SONET, FC-GbE,
FC-SCTP, and WAN Optimization products. Eric has been a
contributor to various E-Lab documentation, including the SRDF
Connectivity Guide.
Donald Robertson is a Senior Systems Integration Engineer and has
held various engineering positions in the storage industry for over 18
years. As part of the EMC E-Lab team, Don leads the qualification
and architecting of solutions with WAN-Optimization appliances
from various partners using various replication technologies,
including SRDF (GigE/FCIP), VPLEX, and RecoverPoint.
Conventions used in this document
EMC uses the following conventions for special notices:
Note: A note presents information that is important, but not hazard-related.
Typographical conventions
EMC uses the following type style conventions in this document.
Bold — Use for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)
Italic — Use for full titles of publications referenced in text
Monospace — Use for:
• System output, such as an error message or script
• System code
• Pathnames, filenames, prompts, and syntax
• Commands and options
Monospace italic — Use for variables.
Monospace bold — Use for user input.
[ ] — Square brackets enclose optional values
| — Vertical bar indicates alternate selections; the bar means "or"
{ } — Braces enclose content that the user must specify, such as x or y or z
... — Ellipses indicate nonessential information omitted from the example
Where to get help
EMC support, product, and licensing information can be obtained as follows:
Note: To open a service request through the EMC Online Support site, you
must have a valid support agreement. Contact your EMC sales representative
for details about obtaining a valid support agreement or to answer any
questions about your account.
Product information
For documentation, release notes, software updates, or for
information about EMC products, licensing, and service, go to the
EMC Online Support site (registration required) at:
https://support.EMC.com
Technical support
EMC offers a variety of support options.
Support by Product — EMC offers consolidated, product-specific
information on the Web at:
https://support.EMC.com/products
The Support by Product web pages offer quick links to
Documentation, White Papers, Advisories (such as frequently used
Knowledgebase articles), and Downloads, as well as more dynamic
content, such as presentations, discussion, relevant Customer
Support Forum entries, and a link to EMC Live Chat.
EMC Live Chat — Open a Chat or instant message session with an
EMC Support Engineer.
eLicensing support
To activate your entitlements and obtain your Symmetrix license files,
visit the Service Center on https://support.EMC.com, as directed on
your License Authorization Code (LAC) letter e-mailed to you.
For help with missing or incorrect entitlements after activation (that
is, expected functionality remains unavailable because it is not
licensed), contact your EMC Account Representative or Authorized
Reseller.
For help with any errors applying license files through Solutions
Enabler, contact the EMC Customer Support Center.
If you are missing a LAC letter, or require further instructions on
activating your licenses through the Online Support site, contact
EMC's worldwide Licensing team at licensing@emc.com or call:
◆ North America, Latin America, APJK, Australia, New Zealand:
SVC4EMC (800-782-4362) and follow the voice prompts.
◆ EMEA: +353 (0) 21 4879862 and follow the voice prompts.
Overview
A WAN Optimization Controller (WOC) is an appliance that can be
placed In-line or Out-of-Path to reduce and optimize the data that is
to be transmitted over the LAN/MAN/WAN. These devices are
designed to help mitigate the effects of packet loss, network
congestion, and latency while reducing the overall amount of data to
be transmitted over the network.
In general, the technologies utilized in accomplishing this are
Transmission Control Protocol (TCP) acceleration,
data-deduplication, and compression. Additionally, features such as
QoS, Forward Error Correction (FEC), and Encryption may also be
available.
Network links and WAN circuits can have high latency and/or
packet loss as well as limited capacity. WAN Optimization
Controllers can be used to maximize the amount of data that can be
transmitted over a link. In some cases, these appliances may be a
necessity, depending on performance requirements.
WAN and data optimization can occur at varying layers of the OSI
stack, whether it be at the network and transport layer, the session,
presentation, and application layers, or just to the data (payload)
itself.
Deployment topologies
There are two basic topologies for deployment:
◆ In-path/in-line/bridge
◆ Out-of-path/routed
An in-path/in-line/bridge deployment, as shown in Figure 1, means
that the WAN Optimization Controller (WOC) is directly in the path
between the source and destination end points where all inbound
and outbound flows will pass through the WAN Optimization
Controllers. The WOC devices at each site are typically placed as close as possible to the WAN circuit.
Configuration settings
Configuration settings are as follows:
◆ Compression on GigE (RE) port = Enabled
Note: For Riverbed Steelhead RiOS v6.1.1a or later and Silver Peak
NX-OS 4.4 or later, the compression setting should be Enabled on the
Symmetrix storage system. The WAN optimization appliances
automatically detect and disable compression on the Symmetrix system.
In the event the WAN optimization appliances go down or are removed,
the Symmetrix REs will re-enable compression and provide some level of
bandwidth reduction, although likely not to the level provided by the
WAN optimization appliances.
For Riverbed, use legacy flow control for 5876.229.145 and older ucode.
Use dynamic flow control for 5876.251.161 and later. Dynamic flow
control is only supported with Riverbed using RiOS 8.0.2 and later. Refer
to the WAN Optimization Controller table in the EMC Support Matrix for supported RiOS revisions. In some instances when there is packet loss,
legacy flow control may increase performance if customer requirements
are not being met.
For Silver Peak, dynamic flow control is the recommended flow control
setting.
When upgrading from 5876.229.145 and older ucode (where legacy flow
control should be set) to 5876.251.161 and later, it is recommended to
remain at legacy flow control.
Notes
Note the following:
SRDF Flow Control
SRDF Flow Control is enabled by default for increased stability of the
SRDF links. In some cases, further tuning of SRDF flow control and
related settings can be made to improve performance. For more
information, refer to “Storage and replication application” on page 17
or contact your EMC Customer Service representative.
FCIP Configurations
Brocade FCIP
This section provides configuration information for Brocade FCIP.
Configuration settings
Configuration settings are as follows:
◆ FCIP Fastwrite = Enabled
◆ Compression = Disabled
◆ TCP Byte Streaming = Enabled
◆ Commit Rate or Max/Min settings = in Kb/s (Environment
dependent)
◆ Tape Pipelining = Disabled
◆ SACK = Enabled
◆ Min Retransmit Time = 100
◆ Keep-Alive Timeout = 10
◆ Max Re-Transmissions = 8
◆ Compression
This simply compresses the data that flows over the FCIP tunnel.
This should be disabled when using with WAN Optimization
Controller (WOC) devices, thus allowing the WOC device to
perform the compression and data de-duplication.
◆ Commit Rate
This setting is environment dependent. This should be set in
accordance with the WAN Optimization vendor. Considerations
such as data-to-be-optimized, available WAN circuit size and
data-reduction ratio need to be taken into account.
◆ Adaptive Rate Limit (ARL)
Commit Rate is replaced by Minimum and Maximum rates since
newer installations have the ARL feature. When used with WAN
Optimization, the maximum is always set to port link speed.
Refer to the Brocade or WAN optimization vendor
documentation for more information.
◆ TCP Byte Streaming
This is a Brocade feature which allows a Brocade FCIP switch to
communicate with a third-party WAN Optimization Controller.
This feature supports an FCIP frame which has been split into a
maximum of 8 separate TCP segments. If the frame is split into
more than eight segments, it results in prematurely sending a
frame to the FCIP layer with an incorrect size and the FCIP tunnel
bounces.
References
For further information, refer to https://support.emc.com and
http://www.brocade.com.
◆ EMC Connectrix B Series Fabric OS Administrator's Guide
◆ Brocade Fabric OS Administrator’s Guide
Cisco FCIP
This section provides configuration information for Cisco FCIP.
Configuration settings
Configuration settings are as follows:
◆ Max-Bandwidth = Environment dependent (Default = 1000 Kb)
◆ Min-Available-Bandwidth = Normally set to WAN bandwidth /
number of GigE links using that bandwidth.
For example, if WAN = 1 Gb and using 2 GigE ports, then the Min
= 480 Mb; if using 4 GigE, then Min = 240 Mb.
◆ Estimated roundtrip time = Set to measured latency (round-trip
time - RTT) between MDS switches
◆ IP Compression = Disabled
◆ FCIP Write Acceleration = Enabled
◆ Tape Accelerator = Disabled
◆ Encryption = Disabled
◆ Min Re-Transmit Timer = 200 ms
◆ Max Re-Transmissions = 8
◆ Keep-Alive = 60
◆ SACK = Enabled
◆ Timestamp = Disabled
◆ PMTU = Enabled
◆ CWM = Enabled
◆ CWM Burst Size = 50 KB
Notes
Consider the following information for Cisco FCIP tunnel settings:
◆ Max-Bandwidth
The max-bandwidth-mbps parameter and the measured RTT
together determine the maximum window size. This should be
configured to match the worst-case bandwidth available on the
physical link.
◆ Min-Available-Bandwidth
The min-available-bandwidth parameter and the measured RTT
together determine the threshold below which TCP aggressively
maintains a window size sufficient to transmit at minimum
available bandwidth. It is recommended that you adjust this to
50-80% of the Max-Bandwidth.
◆ Estimated Roundtrip-Time
This is the measured latency between the 2 MDS GigE interfaces.
The following MDS command can be used to measure the RTT:
FCIPMDS2(config)# do ips measure-rtt 10.20.5.71 interface GigabitEthernet1/1
Roundtrip time is 106 micro seconds (0.11 milli seconds)
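As the notes above describe, the configured bandwidth and the measured RTT together determine the maximum window size. A minimal sketch of that bandwidth-delay-product arithmetic (the function name and example figures are illustrative, not from the MDS documentation):

```python
def max_window_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to keep the link full.

    bandwidth_mbps -- configured Max-Bandwidth, in Mb/s
    rtt_ms         -- measured round-trip time, in milliseconds
    """
    bits_in_flight = bandwidth_mbps * 1_000_000 * (rtt_ms / 1000.0)
    return int(bits_in_flight / 8)

# Hypothetical example: a 1000 Mb/s tunnel with 20 ms RTT
# needs roughly 2.5 MB of window to stay full.
print(max_window_bytes(1000, 20))  # 2500000
```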
Basic guidelines
Consider the following guidelines when creating/utilizing multiple
FCIP interfaces/profiles:
◆ Gigabit Ethernet Interfaces support a single IP address.
◆ Every FCIP profile must be uniquely addressable by an IP
address and TCP port pair. Where FCIP profiles share a Gigabit
Ethernet interface, the FCIP profiles must use different TCP port
numbers.
◆ FCIP Interface defines the physical FCIP link (local GigE port). If
you add an FCIP Profile for TCP parameters and a local GigE IP
address plus peer (remote) IP address to the FCIP Interface, it
forms an FCIP Link or Tunnel. There are always two TCP
connections (control plus data) and you can add one additional
data TCP connection per FCIP link.
◆ EMC recommends three FCIP interfaces per GigE port for best
performance. More FCIP interfaces help improve SRDF link
stability when there is high latency and/or packet loss
(>100ms/0.5%, regardless of whether latency and packet drop
conditions exist together or only one exists). A dedicated FCIP
profile per FCIP link is recommended.
References
For further information, refer to the following documentation on
Cisco's website at http://www.cisco.com.
◆ Wide Area Application Services Configuration Guide
◆ Replication Acceleration Deployment Guide
◆ Q&A for WAAS Replication Accelerator Mode
◆ MDS 9000 Family CLI Configuration Guide
WAN Optimization Controllers
Overview
RiOS is the software that powers Riverbed's Steelhead WAN
Optimization Controller. The optimization techniques RiOS utilizes
are:
◆ Data Streamlining
◆ Transport Streamlining
◆ Application Streamlining, and
◆ Management Streamlining
RiOS uses a Riverbed proprietary algorithm called Scalable Data
Referencing (SDR) along with data compression when optimizing
data across the WAN. SDR breaks up TCP data streams into unique
data chunks that are stored in the hard disk (data store) of the device
running RiOS. Each data chunk is assigned a unique integer label
(reference) before it is sent to a peer RiOS device across the WAN.
When the same byte sequence is seen again in future transmissions
from clients or servers, the reference is sent across the WAN instead
of the raw data chunk. The peer RiOS device uses this reference to
find the original data chunk on its data store, and reconstruct the
original TCP data stream.
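SDR itself is proprietary, so the following toy sketch only illustrates the general reference-based scheme described above; fixed-size chunks and a simple in-memory dictionary stand in for Riverbed's actual chunking and data store:

```python
class RefCodec:
    """Toy reference-based deduplication, loosely in the spirit of SDR."""

    def __init__(self, chunk_size: int = 8):
        self.chunk_size = chunk_size
        self.by_chunk = {}   # chunk bytes -> integer reference
        self.by_ref = []     # integer reference -> chunk bytes

    def encode(self, data: bytes):
        """Split data into chunks; send raw bytes once, references thereafter."""
        out = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            if chunk in self.by_chunk:
                out.append(("ref", self.by_chunk[chunk]))
            else:
                ref = len(self.by_ref)
                self.by_chunk[chunk] = ref
                self.by_ref.append(chunk)
                out.append(("raw", chunk))
        return out

    def decode(self, items):
        """Peer side: rebuild the stream, learning new chunks as they arrive."""
        out = bytearray()
        for kind, value in items:
            if kind == "raw":
                self.by_chunk[value] = len(self.by_ref)
                self.by_ref.append(value)
                out += value
            else:
                out += self.by_ref[value]
        return bytes(out)

# One sender/receiver pair; repeated byte sequences travel as references.
sender, receiver = RefCodec(), RefCodec()
wire = sender.encode(b"ABCDEFGHABCDEFGH")
print(receiver.decode(wire) == b"ABCDEFGHABCDEFGH")  # True
```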
After a data pattern is stored on the disk of a Steelhead appliance, it can be leveraged for transfers to any other Steelhead appliance across the WAN.
Terminology
Consider the following terminology when using Riverbed
configuration settings:
◆ Adaptive Compression — Detects LZ data compression
performance for a connection dynamically and turns it off (sets
the compression level to 0) momentarily if it is not achieving
optimal results. Improves end-to-end throughput over the LAN
by maximizing the WAN throughput. By default, this setting is
disabled.
◆ Adaptive Data Streamlining Mode SDR-M — RiOS uses a
Riverbed proprietary algorithm called Scalable Data Referencing
(SDR). SDR breaks up TCP data streams into unique data chunks
that are stored in the hard disk (data store) of the device running
RiOS. Each data chunk is assigned a unique integer label
(reference) before it is sent to a peer RiOS device across the WAN.
When the same byte sequence is seen again in future
transmissions from clients or servers, the reference is sent across
the WAN instead of the raw data chunk. The peer RiOS device
uses this reference to find the original data chunk on its data
store, and reconstruct the original TCP data stream. SDR-M
performs data reduction entirely in memory, which prevents the
Steelhead appliance from reading from and writing to the disk.
IMPORTANT
You cannot use peer data store synchronization with SDR-M. In
code stream 5.0.x, this must be set from the CLI by running:
"datastore anchor-select 1033" and then "restart clean."
IMPORTANT
Use caution when specifying MXTCP. The outbound rate for
the optimized traffic in the configured QoS class immediately
increases to the specified bandwidth, and does not decrease in
the presence of network congestion. The Steelhead appliance
always tries to transmit traffic at the specified rate.
Link share weight — Prior to RiOS 8.0.x, the link share weight
parameter has no effect on a QoS class configured with
MXTCP. With RiOS 8.0.x and later, Adaptive MXTCP will allow
the link share weight settings to function for MXTCP QoS
classes.
Notes
Consider the following when using Riverbed configuration settings:
◆ LAN Send and Receive Buffer Size should be configured to 2 MB
◆ WAN Send and Receive Buffer Size is environment dependent
and should be configured with the result utilizing the following
formula:
WAN BW * RTT * 2 / 8 = xxxxxxx bytes
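The formula is the bandwidth-delay product doubled. A small sketch with hypothetical link figures (not taken from Riverbed documentation):

```python
def wan_buffer_bytes(wan_bw_bps: float, rtt_s: float) -> int:
    """WAN BW * RTT * 2 / 8, per the sizing formula above.

    wan_bw_bps -- WAN bandwidth, in bits per second
    rtt_s      -- round-trip time, in seconds
    """
    return int(wan_bw_bps * rtt_s * 2 / 8)

# Hypothetical example: 100 Mb/s WAN with 80 ms RTT.
print(wan_buffer_bytes(100_000_000, 0.080))  # 2000000
```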
Features
Features include:
◆ SDR (Scalable Data Referencing)
◆ Compression
◆ QoS (Quality of Service)
◆ Data / Transport / Application / Management Streamlining
◆ Encryption - IPsec
Deployment topologies
Deployment topologies include:
◆ In-Path
• Physical In-Path
◆ Virtual In-Path
• WCCPv2 (Web Cache Coordination Protocol)
• PBR (Policy-Based-Routing)
◆ Out-of-Path
• Proxy
◆ Steelhead DX 8000, Steelhead CX 7055/5055/1555 and Steelhead
7050/6050/5050 appliances also support 10 GbE Fibre ports
◆ The virtual Steelheads are supported when deployed on VMware ESX or ESXi servers. The virtual appliances can only be deployed in out-of-path configurations.
FCIP environment
The following Riverbed configuration settings are recommended in a
FCIP environment:
◆ Configure > Networking > QoS Classification:
• QoS Classification and Enforcement = Enabled
• QoS Mode = Flat
• QoS Network Interface with WAN throughput = Enabled for
appropriate WAN interface and set available WAN Bandwidth
• QoS Class Latency Priority = Real Time
• QoS Class Guaranteed Bandwidth % = Environment
dependent
• QoS Class Link Share Weight = Environment dependent
• QoS Class Upper Bandwidth % = Environment dependent
• Queue = MXTCP
• QoS Rule Protocol = All
• QoS Rule Traffic Type = Optimized
• DSCP = All
• VLAN = All
◆ Configure > Optimization > General Service Settings:
• In-Path Support = Enabled
• Reset Existing Client Connections on Start-Up = Enabled
• Enable In-Path Optimizations on Interface In-Path_X_X for
appropriate In-Path interface
• In RiOS v5.5.3 CLI or later: "datastore codec multi-codec encoder max-ackqlen 30"
• In RiOS v6.0.1a or later: "datastore codec multi-codec encoder
global-txn-max 128"
• In RiOS v6.0.1a or later: "datastore sdr-policy sdr-m"
Note: The latest appliances, which use an SSD-based data store, achieve high throughput with standard SDR (SDR-Default). For appliances with a legacy disk-based data store, use SDR-M.
• Compression Level = 1
• Adaptive Compression = Disabled
• Multi-Core Balancing = Enabled
GigE environment
The following are Riverbed configuration settings recommended in a
GigE environment:
Note: The latest appliances, which use an SSD-based data store, achieve high throughput with standard SDR (SDR-Default). For appliances with a legacy disk-based data store, use SDR-M.
• Compression Level = 1
• Adaptive Compression = Disabled
• Multi-Core Balancing = Enabled
References
For more information about the Riverbed Steelhead WAN
Optimization Controller and the Riverbed system, refer to Riverbed's
website at http://www.riverbed.com.
◆ Steelhead Appliance Deployment Guide
◆ Steelhead Appliance Installation and Configuration Guide
◆ Riverbed Command-Line Interface Reference Manual
Overview
Riverbed Granite is a block storage optimization and consolidation
system. It consolidates all storage at the data center and creates
diskless branches. Granite is designed to enable edge server systems
to efficiently access storage arrays over the WAN as if they were
locally attached.
The Granite solution is deployed in conjunction with Steelhead
appliances and consists of two components:
◆ Granite Core — A physical or virtual appliance in the data center that mounts, from the back-end storage array, all the LUNs that need to be made available to applications and servers at a remote location. Granite Core makes those LUNs available across the WAN in the branch via the Granite Edge module on a Steelhead EX or a standalone Granite Edge appliance.
Features
This section briefly describes Riverbed Granite features.
Granite Edge Blockstore cache
To eliminate latency introduced by the WAN, the Granite appliance in the branch presents a write-back block cache, called the blockstore.
Block writes by applications and hosts at the edge are acknowledged
locally by the blockstore and then asynchronously flushed back to the
data center. This enables application and file system initiator hosts in
the branch to make forward progress without being impacted by
WAN latency. As blocks are received, written to disk and
acknowledged, the written blocks are also journaled in write order to
a log. This log file is used to maintain the block-write order to ensure
data consistency in case of a crash or WAN outage. When the
connection is restored, Granite Edge plays the blocks in logged
write-order to Granite Core, which commits the blocks to the physical
LUN on the back-end storage array. The combination of block
journaling and write-order preservation enables Granite Edge to
continue serving write functions in the branch during a WAN
disconnection.
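The behavior described above can be sketched as a toy write-back cache with a write-order journal (illustrative only; the real blockstore is disk-backed and the flush back to the data center happens asynchronously):

```python
import collections

class Blockstore:
    """Toy write-back block cache with a write-order journal.

    The dict 'backing_lun' stands in for the data-center LUN, and
    'flush' stands in for replaying journaled writes after a WAN outage.
    """

    def __init__(self, backing_lun: dict):
        self.backing_lun = backing_lun
        self.cache = {}                     # block number -> data
        self.journal = collections.deque()  # (block, data) in write order

    def write(self, block: int, data: bytes):
        """Acknowledge locally; journal the write for later replay."""
        self.cache[block] = data
        self.journal.append((block, data))

    def read(self, block: int) -> bytes:
        """Serve from cache; fall back to the backing LUN on a miss."""
        if block in self.cache:
            return self.cache[block]
        return self.backing_lun[block]

    def flush(self):
        """On WAN restore: replay journaled writes in logged order."""
        while self.journal:
            block, data = self.journal.popleft()
            self.backing_lun[block] = data
```

Writes are acknowledged from the cache immediately, while the backing LUN only sees them (in order) once `flush` runs, which is the property that preserves data consistency across an outage.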
LUN pinning
A storage LUN provisioned via Granite can be deployed in two
different modes, pinned and unpinned. Pinned mode caches 100% of
the data blocks on the Steelhead EX appliance at the branch. This
ensures that the contents of specified storage LUNs are maintained at
the edge to support business operations in the event of a WAN
outage. Unpinned mode maintains only a working set of the most
frequently accessed blocks at the branch.
Disconnected operations
In some cases, the WAN connection may suffer an outage or may be unpredictable. That means that branch-resident applications might
behave unpredictably in the case of a block cache miss which cannot
be serviced from the data center LUN due to the WAN outage. For
such cases, Granite LUN pinning is recommended. This will ensure
that the contents of an entire LUN are prefetched from the data center
and prepopulated at the edge to ensure a 100% hit rate on Granite
Edge. In LUN-pinning mode, no reads are sent across the wire to the
data center, although dirty blocks (changed/newly-written blocks)
are preserved in a persistent log and flushed to the data center when
WAN connectivity is restored, ensuring consolidation and protection.
Boot over the WAN
VMware vSphere virtual server technology combined with Granite
now makes it possible to boot over the WAN to provide instant
provisioning and fast recovery capabilities for edge locations. A
bootable LUN in the data center is mapped to a host in a branch
office. The host can either be a separate ESXi server or the Steelhead
EX embedded VSP. Granite Core detects the LUN as a VMFS file system containing a virtual machine workload with an embedded NTFS file system and, upon further inspection, learns the block sequence of both.
Once the boot process on the branch host starts, blocks for the
Windows file server virtual machine are requested from across the
WAN. Granite Core recognizes these requests and prefetches all of
the required block clusters from the data center provisioned LUN and
pushes them to the Granite Edge appliance at the branch, ensuring
local performance for the boot operation.
Second Local Interface — Optionally, specify a second local interface over which the heartbeat is monitored.
Deployment topologies
Figure 5 illustrates a generic Granite deployment.
Enable Header Digest — Includes the header digest data in the iSCSI PDU.
Enable Data Digest — Includes the data digest data in the iSCSI PDU.
Add an iSCSI Portal — Displays controls for configuring and adding a new iSCSI portal.
Add iSCSI Portal — Adds the defined iSCSI portal to the running configuration.
Offline LUNs — Click Offline LUNs to take offline all LUNs serviced by this selected iSCSI portal.
b. Under Targets, add a target for the portal using the controls
described in the following table.
Target Name — Enter the target name or choose from available targets. Note: This field also enables you to rescan for available targets.
Add Target — Adds the newly defined target to the current iSCSI portal configuration.
Target Settings — Open this tab to modify the port and snapshot configuration settings. Optionally, you can add a new snapshot configuration dynamically by clicking the Add New Snapshot Configuration link adjacent to the setting field.
Offline LUNs — Open this tab to access the Offline LUNs button. Clicking this button takes offline all configured LUNs serviced by the current target.
Configuring LUNs
To configure an iSCSI LUN, complete the following steps.
1. Choose Configure > Storage > LUNs to display the LUNs page.
Add an iSCSI LUN — Displays controls for adding an iSCSI LUN to the current configuration.
LUN Serial Number — Select from the drop-down list of discovered LUNs. The LUNs listed are shown using the following format: serial number (portal/target). Note: If the desired LUN does not appear, scroll to the bottom of the list and select Rescan background storage for new LUNs.
Add iSCSI LUN — Adds the new LUN to the running configuration.
Add a Local LUN — Displays controls for adding a local LUN to the current configuration. Local LUNs consist of storage on the Granite Edge only; there is no corresponding LUN on the Granite Core.
Granite Edge — Select a LUN from the drop-down list. This list displays configured Granite Edge appliances.
Add a Local LUN — Adds the new LUN to the running configuration.
LUN Alias — Displays the LUN alias, if applicable. Optionally, modify the value and click Update Alias.
Add a Granite Edge — Displays controls for adding a Granite Edge appliance to the current configuration.
Granite Edge Identifier — Specify the identifier for the Granite Edge appliance. This value must match the same value configured on the Granite Edge appliance. Note: Granite Edge identifiers are case-sensitive.
Blockstore encryption — Changes the encryption used when writing data to the blockstore.
Add Granite Edge — Adds the new Granite Edge appliance to the running configuration. The newly added appliance appears in the list.
Note: You can also configure CHAP users dynamically in the iSCSI
Configuration page.
Add a CHAP User — Displays controls for adding a new CHAP user to the running configuration.
Password/Confirm Password — Specify and confirm a password for the new CHAP user.
Add CHAP User — Adds the new CHAP user to the running configuration.
References
For more information, refer to Riverbed's website at
http://www.riverbed.com.
◆ Granite Core Deployment Guide
◆ Granite Core Installation and Configuration Guide
◆ Riverbed Branch Office Infrastructure for EMC Storage Systems
(Reference Architecture)
Overview
Silver Peak appliances are interconnected by tunnels, which transport optimized traffic flows. Policies control how the appliance filters LAN-side packets into flows and whether an individual flow is:
◆ directed to a tunnel, shaped, and optimized;
◆ processed as shaped, pass-through (unoptimized) traffic;
◆ processed as unshaped, pass-through (unoptimized) traffic;
◆ continued to the next applicable Route Policy entry if a tunnel goes down; or
◆ dropped.
The appliance manager has separate policies for routing,
optimization, and QoS functions. These policies prescribe how the
appliance handles the LAN packets it receives.
The optimization policy uses optimization techniques to improve the
performance of applications across the WAN. Optimization policy
actions include network memory, payload compression, and TCP
acceleration.
Silver Peak ensures network integrity by using QoS management,
Forward Error Correction, and Packet Order Correction. When
Adaptive Forward Error Correction (FEC) is enabled, the appliance introduces a parity packet, which helps detect and correct packet loss.
Terminology
Consider the following terminology when using Silver Peak
configuration settings:
◆ Coalescing ON — Enables/disables packet coalescing. Packet
coalescing transmits smaller packets in groups of larger packets,
thereby increasing performance and helping to overcome the
effects of latency.
◆ Coalesce Wait — Timer (in milliseconds) used to determine the
amount of time to wait before transmitting coalesced packets.
◆ Compression — Reduces the bandwidth consumed by traffic
traversing the WAN. Payload compression is used in conjunction
with network memory to provide compression on "first pass"
data.
◆ Congestion Control — Techniques used by Silver Peak to manage
congestion scenarios across a WAN. Configuration options are
standard, optimized, and auto. Standard uses standard TCP
congestion control. Optimized congestion control is the most
aggressive mode of congestion control and should only be used in
environments with point-to-point connections for a dedicated to
single application. Auto congestion control aims to improve
throughput over standard congestion control, but may not be
suitable for all environments.
◆ FEC / FEC Ratio — Technique used by Silver Peak to recover
from packet loss without the need for packet retransmissions.
Hence, loss is corrected on the Silver Peak appliance resulting in
higher throughput during the data transmission.
◆ IP Header Compression — Enables/disables compression of the
IP header in order to reduce the packet size. Header compression
can provide additional bandwidth gains by reducing packet
header information using specialized compression algorithms.
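Silver Peak's adaptive FEC implementation is proprietary; the sketch below only shows the general single-parity idea it is based on, where one XOR parity packet per group lets the receiver rebuild any one lost packet without retransmission:

```python
def parity_packet(packets):
    """XOR a group of equal-length packets into one parity packet."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Rebuild the single missing packet from the survivors plus parity."""
    return parity_packet(list(received) + [parity])

group = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
p = parity_packet(group)
# Suppose the middle packet is lost in transit:
rebuilt = recover([group[0], group[2]], p)
print(rebuilt == group[1])  # True
```

The FEC ratio setting mentioned above corresponds to the group size here: more data packets per parity packet means less overhead but less tolerance for loss.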
Features
Features include:
◆ Compression (payload and header)
◆ Network memory (data-deduplication)
◆ TCP acceleration
◆ QoS (Quality of Service)
◆ FEC (Forward Error Correction)
◆ POC (Packet Order Correction)
◆ Encryption - IPsec
Deployment topologies
Deployment topologies include:
◆ In-line (bridge mode)
• In-line
◆ Out-of-path (router)
• Out-of-path with Policy-Based-Routing (PBR) redirection
• Out-of-path with Web Cache Coordination Protocol
(WCCPv2)
• Out-of-path with VRRP peering to WAN router
• Out-of-path with Policy-Based-Routing (PBR) and VRRP
redundant Silver Peak appliances
• Out-of-path with Web Cache Coordination Protocol (WCCP)
redundant Silver Peak appliances
◆ The Silver Peak appliances can only be deployed in out-of-path (router) mode when using 10 Gb Ethernet Fibre data ports, because the optical interfaces do not fail to wire
◆ The Silver Peak NX-8700, NX-9700, and NX-10000 appliances
support 10 Gb Ethernet Fibre data ports
◆ The Silver Peak VX and VRX virtual appliances are supported when deployed on VMware ESX or ESXi servers. The virtual appliances can only be deployed in out-of-path configurations.
FCIP environment
The following Silver Peak configuration settings are recommended in
an FCIP environment:
◆ WAN Bandwidth = (Environment dependent)
◆ Tunnel Auto Max BW = Disabled (Unchecked)
GigE environment
The following Silver Peak configuration settings are recommended in
a GigE environment:
◆ WAN Bandwidth = (Environment dependent)
◆ Tunnel Auto Max BW = Disabled (Unchecked)
◆ Tunnel Max BW = in Kbps (Environment dependent)
References
For more information about Silver Peak appliances, refer to the Silver
Peak website at http://www.silver-peak.com.
◆ Silver Peak Command Line Interface Reference Guide
◆ Silver Peak Network Deployment Guide