Module 2: Introduction to WAN Optimization
WANZ200 Optimization Essentials

riverbed
SteelHead 9.2, SteelCentral Controller 9.2, SteelHead Mobile 4.8
June 2016, Rev. K

Copyright © 2006-2016 Riverbed Technology. Published by Professional Services. This document contains confidential and proprietary information. TRAINING USE ONLY.

Agenda
= Application Performance Considerations
= How Root Causes Affect Application Performance
= Data Streamlining
= Transport Streamlining
= Application Streamlining
= Components of a Complete WAN Optimization Solution

Objectives
After completing this module, you will be able to:
= Explain how SteelHeads deliver improved application performance
= Identify three SteelHead streamlining (optimizing) methodologies in RiOS
= Describe the SteelHead-to-SteelHead foundational concepts

Application Performance Considerations

Why WAN Optimization?
= Application Acceleration
  - Dramatically faster response times
  - Improved user productivity
  - Hundreds of apps, from Citrix to SharePoint
= Consolidation & Virtualization
  - Consolidate servers from branch to datacenter
  - Consolidate servers at branch
  - LAN-like performance access
= Disaster Recovery
  - Replicate and recover in less time
  - Achieve RTO and RPO objectives
= Bandwidth Optimization
  - Do more with less
  - Reduce WAN traffic by 60-95%

Making the WAN feel more like a LAN enables an array of business-critical IT initiatives.

SteelHeads: Overcoming Layers of Inefficiencies

= Inherent TCP/IP protocol chattiness, connection-oriented latency, service provider challenges, and adverse environments: Transport Streamlining (Virtual Window Expansion (VWE), Window Scaling (RFC 1323), low-speed and high-speed TCP optimizations)
= Costly branch IT servicing, 'islands of storage' liabilities, and outright WAN outages: Hyper-Converged Edge (SteelFusion Edge & SteelFusion Core, Virtual Services Platform (VSP))

We can readily liken our layered optimization structure to the OSI model. Moving bottom to top, we see the SteelHead's ability to help with:
- Layer 1 issues regarding WAN failure and basic bit-stream minimization, also known as Scalable Data Referencing (SDR), then
- Layer 4 transport protocol optimizations, and finally
- Layer 7 application-specific optimizations

Note that these optimizations are independent of one another, so it is possible to perform SDR and not Transaction Prediction, or vice versa, to best optimize each individual flow.

Virtual Window Expansion: a well-known method for improving TCP throughput is the use of larger windows in order to increase the number of bytes that can be "in flight" without being acknowledged.
Although window scaling is available in most client and server TCP implementations, it is often challenging to configure correctly. In many Windows versions, correctly configuring window scaling requires esoteric knowledge of relevant settings and a willingness to edit the Windows registry, requirements that place window scaling out of reach for many organizations. Even with the appropriate knowledge and skill set, making these changes on every server in a large enterprise may require large amounts of administrative overhead and may not be a very scalable approach. RiOS enables automatic window scaling across the WAN without requiring the user to make any changes to clients, servers, or the routing infrastructure.

SteelHeads: A Most Clever & Compatible Solution

= Each layer presents its own inherent challenges, compounded by insufficient WAN bandwidth
= The solution must 'see' into each layer natively
= Deliver fast assessment and delivery of application-specific optimizations
= Application- and transport-layer optimizations reduce costly round-trip counts and times
= Reduced round-trip times + lower payload = incredible application performance improvements

End-to-end application performance achieved!

Two key elements of network performance are bandwidth and latency. Network bandwidth can be optimized through intelligent streamlining of the data and prioritization of critical and real-time traffic. Network latency, on the other hand, is caused by protocol and application chattiness, which has to be streamlined at the protocol and application layers.
RiOS: Underlying Framework

= Transparent deployment
  - Maintains client/server interaction with no changes
  - SteelHead appliances auto-discover each other
= Optimization is controlled via rules
  - Traffic is optimized by default
  - VoIP and video can be "passed through" with no degradation
= Provides data / transport / application streamlining

Riverbed SteelHead products are designed to cause no disruption to your current network upon deployment. The products can be transparently deployed and will maintain the flow of all data on the network. They do so by maintaining existing TCP connections and creating their own TCP session across the WAN. This allows clients and servers to communicate just as they always have, while SteelHead appliances transparently optimize and accelerate communications.

RiOS operates as a transparent TCP proxy. During TCP connection setup with Riverbed SteelHead appliances, as well as Virtual SteelHead, RiOS implements the logical, single, end-to-end TCP connection via three associated, back-to-back TCP connections. The TCP proxy connections are established in a one-to-one ratio with no encapsulation, traffic mixing, or tunnel configuration. The two "outer" connections seen by the client or server look the same as an un-optimized single connection, while the "inner" connection is invisible to client and server and allows RiOS to perform a variety of performance improvements for transmissions across the WAN. This design allows RiOS-powered products to optimize transfers across the WAN with no disruption or reconfiguration of clients, servers, or routers.

How Root Causes Affect Application Performance
Application Protocol Inefficiency and Latency

= High "application turns" means chatty
= Chatty application designs seem slow in high-latency environments

Example: over a 10 Mbps WAN between a branch office and the New York datacenter, an exchange with a low application-turn count feels fast ("Wow! The network is fast!"), while the same WAN with a high application-turn count feels slow ("The network is slow!"), because each user input triggers a long series of request/response exchanges.

Each change in direction is called an application turn because the application changes the direction of data flow. Applications with many application turns are generally considered chatty and are sensitive to network delay. The sensitivity occurs because each message must be received at a tier before the corresponding response can be sent; as a result, each message is affected by network latency. Although network latency between client and application server can be less than 1 ms, the aggregated delay can be significant due to the number of application turns. Application turn reduction should not be confused with TCP turn reduction, which occurs as part of TCP optimization.

Latency due to distance may go unnoticed for applications with few client-server software interactions (or turns), but it poses a problem for applications requiring many interactions, such as those based on "chatty" application protocols like Microsoft's Common Internet File System (CIFS) file-sharing protocol, and Outlook, which uses Microsoft's Messaging Application Programming Interface (MAPI) email protocol for Exchange. Each additional turn requires packets to traverse the user-to-server distance twice, once in each direction.
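The turn penalty described above can be made concrete with a back-of-the-envelope model: total transfer time is roughly serialization time plus one WAN round trip per application turn. This is an illustrative sketch with assumed figures, not Riverbed's sizing math:

```python
# Back-of-the-envelope model: each application turn costs one WAN round trip,
# on top of the time needed to push the payload through the link.

def transfer_time(app_turns, rtt_s, payload_bytes, bandwidth_bps):
    """Seconds to move a payload: turn cost plus serialization time."""
    return app_turns * rtt_s + (payload_bytes * 8) / bandwidth_bps

# Assumed scenario: 10 MB file over a 10 Mbps WAN with a 100 ms round-trip time.
payload, bw, rtt = 10 * 1024 * 1024, 10_000_000, 0.100

few_turns = transfer_time(10, rtt, payload, bw)     # FTP-like, few turns
many_turns = transfer_time(3000, rtt, payload, bw)  # chatty CIFS-like protocol

print(f"few turns:  {few_turns:.1f} s")   # turn cost is negligible
print(f"many turns: {many_turns:.1f} s")  # turn cost dominates the transfer
```

With the same link and the same payload, the chatty protocol spends roughly 300 seconds just waiting on round trips, which is why reducing the turn count, rather than adding bandwidth, is the fix for latency-bound applications.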
Wireshark Chatty Comparison: FTP vs. CIFS

Wireshark graph of FTP commands versus SMB (CIFS) commands:
- A same-size file transfer shows CIFS is more chatty (X axis is time)
- Filter using smb2 or ftp (Y axis is number of protocol commands)

Distributed Computing Problems

Networking problems:
- Not enough bandwidth
- Slow response times
- Applications not prioritized

Application problems (web, email, FTP, Notes, etc.):
- Applications too slow to use
- Mobile access needed

Storage problems (tape storage, file servers, webmail servers):
- Data sprawl
- Islands of storage
- Backup and replication
- Meeting SLAs
- Compliance worries

Most companies use bandwidth as the scapegoat for all of the problems that remote offices have with application performance over the WAN. Poor performance, however, is the result of three equally important problems, and solving just one of them will not truly solve the problem. Networking bandwidth and latency slow data access. Storage problems result in excess capacity utilization for backup, high management costs, and limited access to data in other offices. Applications that are not optimized for use over the WAN operate slowly even in a high-capacity environment.

Data Streamlining
Scalable Data Referencing (SDR)

The slide shows text represented in binary, chunked and replaced by references: five 1st-level references (Ref[9234], Ref[55k1], Ref[816378], Ref[4u244], Ref[j8s]), two 2nd-level references (Ref[vs5q6], Ref[qk7j9]), and one 3rd-level reference (Ref[w7a2]). 16-byte references communicate megabytes of existing data (128-byte average chunk size). Data Streamlining = SDR + LZ.

Data Streamlining (using SDR) uses a proprietary algorithm to break up the data into small 'chunks', then references and stores these data chunks on the SteelHead. The first time a data stream is seen by the SteelHead, all the data, and references to the data, are compressed and sent to the remote SteelHead (a "cold send"). The cold send is still optimized with the TCP and application optimizations, which are discussed later. The next time any of these file-independent chunks of the original data stream is requested, in either direction, a SteelHead needs only to send the references to those chunks to the remote SteelHead. If any portions of the accompanying data stream have not yet been seen (perhaps they represent modified or changed portions of a file, for example), the SteelHead simply includes the chunks of data that have not yet been seen along with the references associated with the data. The file is then reconstructed exactly as issued from the server using a combination of the changed and unchanged data. Data is never delivered to the destination that wasn't sent by the source.
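The cold/warm behavior described above can be sketched in a few lines. This is only an illustration: real SDR uses variable-size, content-defined chunks of 8-512 bytes and a disk-backed datastore, and its actual algorithm is proprietary; here we use fixed-size chunks, an in-memory dict, and MD5 digests standing in for the 16-byte references.

```python
import hashlib

CHUNK = 128  # the slide cites a ~128-byte average chunk size

def encode(data: bytes, store: dict):
    """Return what would cross the WAN: bare refs for known chunks,
    (ref, chunk) pairs for never-before-seen chunks."""
    wire = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        ref = hashlib.md5(chunk).digest()      # a 16-byte reference
        if ref in store:
            wire.append(("ref", ref))          # warm: reference only
        else:
            store[ref] = chunk
            wire.append(("new", ref, chunk))   # cold: reference + data
    return wire

def decode(wire, store: dict) -> bytes:
    """The remote peer rebuilds the stream from its own datastore."""
    out = bytearray()
    for item in wire:
        if item[0] == "new":
            _, ref, chunk = item
            store[ref] = chunk                 # learn the new chunk
            out += chunk
        else:
            out += store[item[1]]              # expand the reference
    return bytes(out)

sender, receiver = {}, {}
doc = b"hello wan optimization " * 100

cold = encode(doc, sender)   # first transfer: carries the raw chunks
warm = encode(doc, sender)   # repeat transfer: 16-byte references only
assert decode(cold, receiver) == doc
assert all(item[0] == "ref" for item in warm)
```

On the first transfer the wire carries every chunk; on the repeat transfer it carries only 16-byte references, which is the warm-transfer saving the notes describe.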
SDR in Detail

The slide shows 16-byte reference numbers assigned to byte-pattern chunks of 8 to 512 bytes each.

The various documents that are retrieved by the users (from a server) are intercepted on the LAN interface of the SSH (server-side SteelHead), initiating the SDR process. Implementing a randomizer engine, SDR creates 'chunks' of unique byte patterns from the incoming data stream (OUTER CHANNEL) and writes them to the Segstore partition (a.k.a. Datastore partition). These chunks are sized from 8 to 512 bytes each. SDR then assigns a unique 16-byte reference number to each byte pattern; these are 1st-level reference numbers that are then linked directly (and permanently) to their respective byte patterns. In this slide, the reference numbers r1 through r12 are 1st-level 16-byte reference numbers. Once this initial SDR process is complete, the SSH sends the byte patterns and their assigned 1st-level reference numbers out the WAN interface; this is a cold transfer, carried between the client-side and server-side SteelHeads (INNER CHANNEL).

Should this SSH intercept subsequent LAN-sourced data streams containing previously processed patterns of zeroes and ones (from the same or similar files), SDR will NOT 'chunk' or write this same data to disk. Instead, SDR directs the SSH to send across the WAN just the 16-byte 1st-level reference numbers to which the original byte patterns had been assigned; no actual byte patterns traverse the WAN, substantially reducing WAN load. These subsequent transfers are known as warm transfers and constitute a major component of SteelHead Bandwidth (or Data) Streamlining.
SDR in Detail (continued)

The 'scalable' aspect of SDR begins with its ability to 'recognize' repeated warm transfers between previously peered SteelHeads, essentially keeping track of the most popular 1st-level reference numbers shuttled across the INNER CHANNEL. For these 'popular' reference numbers, SDR generates 2nd-level 16-byte reference numbers to represent the aggregate of 1st-level reference numbers and their assigned byte patterns. The reference numbers r13 through r16 are 2nd-level 16-byte reference numbers. Should the 2nd-level 16-byte reference numbers be deemed 'popular' by SDR during its normal operations, SDR will generate 3rd-level 16-byte reference numbers to represent the aggregate of 2nd-level reference numbers (which represent the aggregate of 1st-level reference numbers and their assigned byte patterns). The reference numbers r17 and r18 are 3rd-level 16-byte reference numbers. Should the 3rd-level 16-byte reference numbers be deemed 'popular', SDR will generate 4th-level 16-byte reference numbers to represent the aggregate of 3rd-level reference numbers (which represent the aggregate of 2nd-level reference numbers, which, in turn, represent the aggregate of 1st-level reference numbers and their assigned byte patterns). The reference number r19 is a 4th-level 16-byte reference number. By design, SDR will not go beyond 4th-level references.

Using this slide as an example of the effectiveness of SDR's Bandwidth Streamlining, consider that the single 16-byte reference number r19 represents the entire document. NOTE: More insight into the effectiveness of SDR can be found in several Optimization reports.
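The four-level hierarchy can be mimicked in a toy sketch. Assumptions here are ours, not Riverbed's: a fixed fan-out of four references per higher-level reference, MD5 digests standing in for the 16-byte reference numbers, and no popularity tracking (the real aggregation criteria are proprietary).

```python
import hashlib

FANOUT = 4  # refs aggregated per higher-level ref (illustrative assumption)

def aggregate(refs, table):
    """Name each run of FANOUT refs with one higher-level 16-byte ref."""
    out = []
    for i in range(0, len(refs), FANOUT):
        group = refs[i:i + FANOUT]
        parent = hashlib.md5(b"".join(group)).digest()  # 16-byte digest
        table[parent] = group                           # peer can expand it later
        out.append(parent)
    return out

def build_levels(level1_refs, table, max_level=4):
    """Stack levels until one ref remains or the 4th level is reached."""
    levels = [level1_refs]
    while len(levels) < max_level and len(levels[-1]) > 1:
        levels.append(aggregate(levels[-1], table))
    return levels

def expand(ref, table):
    """Recursively resolve any ref back down to 1st-level refs."""
    if ref not in table:
        return [ref]
    return [leaf for child in table[ref] for leaf in expand(child, table)]

table = {}
level1 = [hashlib.md5(bytes([i])).digest() for i in range(64)]  # 64 1st-level refs
levels = build_levels(level1, table)

print([len(lvl) for lvl in levels])            # 64 -> 16 -> 4 -> 1
assert expand(levels[-1][0], table) == level1  # one 16-byte ref names them all
```

Here a single 16-byte reference ultimately stands for all 64 first-level references, mirroring how r19 represents the whole document in the slide.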
RiOS: SDR Benefit

60-98% reduction in bandwidth between the branch office and the New York datacenter.

By significantly reducing the size of information being sent over the WAN, we can achieve 60-98% data reduction, resulting in more capacity in already limited bandwidth pipes. Since there is less data traveling over the WAN, users see greatly improved file transfer times.

RiOS: Bandwidth Streamlining

60-98% reduction over time in WAN utilization:
- Requests go from the client to the server
- The SteelHead auto-intercepts the request, segments the data, and LZ-compresses it
- Only new bytes are LZ-compressed and sent over the WAN
- 16-byte references communicate gigabytes of existing data
- The remote SteelHead reconstructs the data and delivers it to the client

Lempel-Ziv Compression: a variant of this universal algorithm for sequential data compression is leveraged. "The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source." (abstract from "A Universal Algorithm for Sequential Data Compression" by Jacob Ziv, Fellow, IEEE, and Abraham Lempel, Member, IEEE)
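The LZ half of Data Streamlining (= SDR + LZ) can be demonstrated with Python's zlib, whose DEFLATE format combines LZ77 with Huffman coding. Treat this as a stand-in: RiOS's exact LZ variant and settings are not public.

```python
import zlib

# Highly repetitive traffic, of the kind a WAN link often carries.
text = b"GET /reports/q1.pdf HTTP/1.1\r\nHost: files.example.com\r\n\r\n" * 50

compressed = zlib.compress(text, level=6)  # DEFLATE: LZ77 + Huffman coding

print(f"{len(text)} bytes -> {len(compressed)} bytes "
      f"({1 - len(compressed) / len(text):.0%} reduction)")

assert zlib.decompress(compressed) == text  # compression is lossless
```

Redundant byte streams shrink dramatically; combined with SDR's reference substitution, this is what yields the 60-98% WAN reduction quoted above.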
RiOS: (Virtual) Window Scaling

= Larger windows improve TCP throughput
= The maximum amount of data per round trip increases, raising net throughput when the TCP window is the bottleneck
= Challenging to configure
= RiOS enables automatic window scaling across the WAN

A well-known method for improving TCP throughput is the use of larger windows in order to increase the number of bytes that can be "in flight" without being acknowledged. By increasing the window size, the maximum amount of data per round trip goes up, increasing the net throughput when the TCP window is the bottleneck. Although window scaling is available in most client and server TCP implementations, it is often challenging to configure correctly. In many Windows versions, correctly configuring window scaling requires esoteric knowledge of relevant settings and a willingness to edit the Windows registry, requirements that place window scaling out of reach for many organizations. Even with the appropriate knowledge and skill set, making these changes on every server in a large enterprise may require large amounts of administrative overhead and may not be a very scalable approach. RiOS enables automatic window scaling across the WAN without requiring the user to make any changes to clients, servers, or the routing infrastructure.

Beyond simple window scaling, however, is the software's ability to virtually expand TCP windows and enable capacity that is hundreds of times greater than basic TCP payloads. As a TCP proxy, RiOS effectively repacks TCP payloads with a mixture of data and references to data. As noted in the data streamlining section, recognized data that would have been transported is instead replaced by a reference, which can represent a very large amount of data. In this manner, RiOS virtually expands the TCP frame, often by a factor of several hundred or more.
This Virtual Window Expansion (VWE) dramatically reduces the number of round trips needed to deliver a given amount of data.

RiOS: Congestion Algorithms

Advanced TCP acceleration:
= High-Speed TCP (HS-TCP): "fill the pipe" for OC-12 and larger connections
= Max-Speed TCP (MX-TCP): for lossy network connections
= Connection pooling: eliminate 50% of the overhead for small, short-lived connections
= Adaptive congestion windows: adapt transfer parameters based on network characteristics
= Limited and fast retransmits: ensure priority handling for packet resends
= Application-aware transport optimization: Oracle Forms traffic in socket (native) and HTTP modes

For high-bandwidth WAN links (also known as "Long Fat Networks" or "LFNs"), components of Transport Streamlining known as High-Speed TCP (HS-TCP) and Max-Speed TCP (MX-TCP) may be activated, enabling greater bandwidth utilization and providing the capability to "fill the pipe" more effectively. MX-TCP also helps when dealing with lossy network connections.

Adaptive congestion windows use a standards-based enhancement of TCP's native congestion window negotiation. This allows for the optimal transmission scenario given a customer's network.

Limited and fast retransmits: using RFCs 3042 and 2582, the SteelHead can recover quickly in the event of a lost packet, avoiding a costly retransmission timeout. RiOS treats packet resends with priority over other packets using the limited and fast retransmits feature.
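The window-scaling discussion above rests on a classic bound: a TCP connection cannot move more than one window of data per round trip, so throughput is capped at window size / RTT regardless of link speed. A quick calculation with assumed example figures:

```python
# Throughput ceiling imposed by the TCP window: window / RTT.

def max_tcp_throughput_bps(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s

rtt = 0.100  # assumed 100 ms WAN round trip

classic = max_tcp_throughput_bps(65_535, rtt)          # no scaling: 64 KB cap
scaled = max_tcp_throughput_bps(8 * 1024 * 1024, rtt)  # RFC 1323 scaled 8 MB window

print(f"64 KB window: {classic / 1e6:.1f} Mbps")  # ~5.2 Mbps, even on a 1 Gbps link
print(f"8 MB window:  {scaled / 1e6:.1f} Mbps")
```

This is why an unscaled connection crawls on a long fat network no matter how big the pipe is, and why Virtual Window Expansion, which lets a 16-byte reference stand for far more data than the frame physically carries, multiplies the effective window even further.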
Connection Pooling

= Minimizes the time for optimized connection setup
= The three-way TCP handshake is not required to finish across the WAN
= The SteelHead uses a TCP connection from its "pool" of connections

Transport Streamlining:
= One-to-one ratio for active TCP connections between SteelHeads and TCP connections to clients and servers
= SteelHeads do not tunnel, multiplex, or demultiplex inner connections

Connection pooling adds a benefit to transport streamlining by minimizing the time it takes to set up an optimized connection. Some application protocols, such as HTTP, use many rapidly created, short-lived TCP connections. To optimize these protocols, SteelHeads create pools of idle TCP connections. When a client tries to create a new connection to a previously visited server, the SteelHead uses a TCP connection from its pool of connections, so the client and the SteelHead do not have to wait for a three-way TCP handshake to finish across the WAN. This feature is called connection pooling. Connection pooling is available only for connections using the correct addressing WAN visibility mode.

Transport streamlining ensures that there is always a one-to-one ratio for active TCP connections between SteelHeads and the TCP connections to clients and servers. Regardless of the WAN visibility mode in use, SteelHeads do not tunnel or perform multiplexing and demultiplexing of data across connections.

Jumbo Frames

**Note: All devices, including switches and routers, between SteelHead devices must be configured to use the same jumbo frame size.
In order to increase overall security, minimize congestion, minimize latency, and simplify the overall configuration of your storage infrastructure, it is recommended to segregate storage traffic from regular LAN traffic. Place storage traffic on its own physically separated network (or VLAN) that is routed separately from the main network. If jumbo frames are supported in your network infrastructure, it is recommended to use them between the SteelFusion Core appliance and the storage array. Jumbo frames allow more data to be transferred with each Ethernet transaction and reduce the number of frames. The larger frame size reduces the overhead on both the SteelFusion appliance and the storage device, providing the best performance for large transfer sizes. **Note: All devices, including switches and routers, between SteelFusion Core and the storage array must be configured to use the same jumbo frame size.

Congestion Algorithms

SteelHead supports several TCP congestion avoidance algorithms:
= Standard TCP: back off on packet drops and slow-start to ramp bandwidth back up
= Bandwidth estimation: dynamically adjust rates based on round-trip time
= HighSpeed TCP (HS-TCP): back off less, restart faster; good for 50+ Mbps
= MX-TCP: blast traffic at a set rate (not a congestion algorithm!)
= Rate pacing: MX-TCP when there is no congestion, one of the above algorithms otherwise
= SkipWare
  - SCPS: modified slow start for low-loss, high-latency (satellite) environments
  - SCPS error tolerance

Standard TCP: a standards-based implementation of TCP and the default setting in the SteelHead. Standard TCP is a WAN-friendly TCP stack.
Bandwidth estimation: the delay-based algorithm that incorporates many of the features of standard TCP and includes calculation of RTT and bytes acknowledged. This additional calculation avoids the multiplicative decrease in rate seen in other TCP algorithms in the presence of packet loss.

HighSpeed TCP (HS-TCP): efficient in long fat networks (LFNs) in which you have large WAN circuits (50 Mbps and above) over long distances. Typically, you use HS-TCP when you have a few long-lived replication or backup flows. HS-TCP is designed for high-bandwidth, high-delay networks that have a low rate of packet loss due to corruption (bit errors). HS-TCP has an advantage over standard TCP for LFNs: standard TCP backs off (slows the transmission rate) in the presence of packet loss, causing connections to underuse the bandwidth.

MX-TCP: ideal for dedicated links, or to compensate for poor link quality (propagation issues, noise, and so on) or packet drops due to network congestion. The objective of MX-TCP is to achieve maximum TCP throughput.

Rate pacing: a combination of MX-TCP and a TCP congestion avoidance algorithm. Rate pacing leverages the rate configured for an MX-TCP QoS class to minimize buffer delays, but can adjust to the presence of loss due to network congestion.

SkipWare Space Communications Protocol Standards (SCPS) per connection: for satellite links with little or no packet drops due to corruption.

SkipWare

Satellite communications are subjected to a harsh array of environmental and operational conditions that can drastically impair the performance of the network.
Weather effects, interference, blockage, and high bandwidth-delay products reduce bandwidth efficiency and degrade throughput, jeopardizing the warfighter's mission-critical communications.

Previous approaches to this problem were either standalone SCPS implementations, such as SkipWare® by Global Protocols, or WAN optimization solutions such as Riverbed® SteelHead® appliances. While WAN optimization solutions offer significant performance improvements for satellite WANs, many are not SCPS compliant and therefore lack the degree of interoperability desired by some customers. Similarly, standalone SCPS implementations offer interoperability and a standards-based approach, but are limited in their acceleration capabilities. Specifically, the combined solution enables organizations to:
- Accelerate applications over a satellite WAN anywhere from 5x to 50x, and up to 100x in some cases, ensuring that warfighters have timely access to mission-critical information
- Reduce bandwidth usage by 60-95% by removing redundant data from the WAN
- Optimize connectivity with any SkipWare-based military network
- Maintain full interoperability with any other SCPS-based network

Application Streamlining

RiOS: Application Streamlining

= The SteelHead on each side completes transactions locally
= Removes round trips from the WAN

Application Streamlining enables the server-side SteelHead to act as a client and request data from the application server.
This allows the SteelHeads to essentially "feed the WAN" at a very rapid rate, eliminating the otherwise-required long waits for those many transactions to traverse the WAN. Containing these transactions to the LAN leverages the LAN's typical millisecond response rates. This limits the application turn count by gathering most content into a single transaction over long network distances; the effect in the performance equation is to reduce the turn count. RiOS accomplishes this by predicting transactions based on application knowledge and past behavior, reconstructing the application-level interactions on both the client and server ends, all while preserving client-server protocol semantics.

RiOS: Application Protocols

= MAPI* / eMAPI: Microsoft Exchange with MAPI / encrypted MAPI
= MS-SQL: database-driven applications
= Citrix

* Latency optimizations that are enabled by default within RiOS

Riverbed has the greatest number of application-specific optimizations of any vendor, providing the best possible performance across the broadest array of applications that enterprises care about the most.

Components of a Complete WAN Optimization Solution

A Complete Optimization Solution

A complete optimization solution brings more to the table than solely WAN optimization appliances.
It also includes different manifestations, or models, of the appliances to fit a variety of network topology and infrastructure needs. The appliances should have a comprehensive management platform and ideally should include interactivity with a mobile solution deployed directly on laptops and/or workstations. Riverbed brings all this to bear, and more.

Knowledge Check

1. The Riverbed SteelHead devices use TCP tunneling to transfer optimized traffic. True or false?
   a. True
   b. False
2. Which of the following protocols are enabled for latency optimization on Riverbed SteelHead appliances by default?
   a. MAPI
   b. HTTP
   c. Citrix
   d. CIFS (SMB1)
   e. Encrypted MAPI (eMAPI)
3. Of the following devices, which ones perform the SDR function?
   a. SteelHead Mobile
   b. Virtual SteelHead
   c. Interceptor
   d. SteelHead Mobile Controller

INSTRUCTOR NOTE: Animations set on each of the green highlight answer boxes.

1. Riverbed SteelHeads use a TCP proxy mechanism to transfer optimized traffic across the WAN. Tunneling implies that packet encapsulation takes place, which the SteelHeads don't perform. (B. False)
2. MAPI, HTTP, and CIFS v1 are enabled by default. Encrypted MAPI (eMAPI) and Citrix require additional configuration before enabling. (A, B, and D)
3. Both SteelHead Mobile and Virtual SteelHead perform Scalable Data Referencing (SDR). The Interceptor is a load balancer and the SteelHead Mobile Controller manages SteelHead Mobile endpoints, so they don't participate in SDR. (A and B)
Summary

You should now be able to:
= Explain how SteelHeads deliver improved application performance
= Identify three SteelHead streamlining (optimizing) methodologies in RiOS
= Describe the SteelHead-to-SteelHead foundational concepts

Thank You
