Foreword

The Cisco IWAN solution helps businesses achieve their goals, and this book will help
IT departments get the most out of these solutions. The book describes IWAN and its
implementation in an easy-to-understand format that will allow network professionals to
take full advantage of this solution in their environments. In doing so, it will allow those
IT professionals to deliver tremendous business value to their organizations. At Cisco,
we believe that technology can truly help businesses define their strategy and value in
the market. And we believe that IT can help deliver that value through speed, agility, and
responsiveness to their customers and their businesses.

Michael Koons

VP Systems Engineering and Technology,

Cisco Systems

Introduction
The Cisco Intelligent WAN (IWAN) enables organizations to deliver an uncompromised
experience over any WAN transport. With the Cisco IWAN architecture, organizations
can provide more bandwidth to their branch office connections using cost-effective
WAN transports without affecting performance, security, or reliability.

The authors’ goal was to provide a self-study book that explains the technologies used in the IWAN architecture in enough depth for the reader to successfully deploy them. Concepts are explained in a modular structure so that the reader
can learn the logic and configuration associated with a specific feature. The authors
provide real-world use cases that will influence the design of your IWAN network.

Knowledge learned from this book can be used for deploying IWAN via CLI or other
Cisco management tools such as Cisco Prime Infrastructure or Application Policy
Infrastructure Controller Enterprise Module (APIC-EM).

Who Should Read This Book?


This book is for network engineers, architects, and consultants who want to learn more about WANs, the Cisco IWAN architecture, and the technical components that increase the effectiveness of the WAN. Readers should have a fundamental
understanding of IP routing.

How This Book Is Organized


Although this book can be read cover to cover, it is designed to be flexible and allow
you to easily move between chapters and sections of chapters so that you can focus on
just the material that you need.

Part I of the book provides an overview of the evolution of the WAN.

■ Chapter 1, “Evolution of the WAN”: This chapter explains the reasons for increased
demand on the WAN and why the WAN has become more critical to businesses in
any market vertical. The chapter provides an introduction to Cisco Intelligent WAN
(IWAN) and how it enhances user experiences while lowering operational costs.

Part II of the book explains transport independence through the deployment of Dynamic
Multipoint VPN (DMVPN).

■ Chapter 2, “Transport Independence”: This chapter explains the history of WAN
technologies and the current technologies available to network architects. Dynamic
Multipoint VPN (DMVPN) is explained along with the benefits that it provides over
other VPN technologies.

■ Chapter 3, “Dynamic Multipoint VPN”: This chapter explains the basic concepts
of DMVPN and walks the user from a simple topology to a dual-hub, dual-cloud
topology. The chapter explains the interaction that NHRP has with DMVPN because
that is a vital component of the routing architecture.

■ Chapter 4, “Intelligent WAN (IWAN) Routing”: This chapter explains why EIGRP
and BGP are selected for the IWAN routing protocols and how to configure them.
In addition to explaining the logic for the routing protocol configuration, multicast
routing is explained.

■ Chapter 5, “Securing DMVPN Tunnels and Routers”: This chapter examines the
vulnerabilities of a network and the steps that can be taken to secure the WAN.
It explains IPsec DMVPN tunnel protection using pre-shared keys and PKI. In addition, the router is hardened through the
deployment of Zone-Based Firewall (ZBFW) and Control Plane Policing (CoPP).

Part III of the book explains how to deploy intelligent routing in the WAN.

■ Chapter 6, “Application Recognition”: This chapter examines how an application
can be identified through the use of traditional ports and through deep packet
inspection. Application classification is essential for proper QoS policies and
intelligent routing policies.

■ Chapter 7, “Introduction to Performance Routing (PfR)”: This chapter discusses
the need for intelligent routing and briefly traces the evolution of Cisco Performance Routing
(PfR). The chapter also explains vital concepts involving master controllers (MCs)
and border routers (BRs) and how they operate in PfR version 3.

■ Chapter 8, “PfR Provisioning”: This chapter explains how PfRv3 can be configured
and deployed in a topology.

■ Chapter 9, “PfR Monitoring”: This chapter explains how PfR can be examined to
verify that it is operating optimally.

■ Chapter 10, “Application Visibility”: This chapter discusses how PfR can view and collect application performance data on the WAN.

Part IV of the book discusses and explains how application optimization integrates into
the IWAN architecture.

■ Chapter 11, “Introduction to Application Optimization”: This chapter covers the
fundamentals of application optimization and how it can accelerate application
responsiveness while reducing demand on the current WAN.

■ Chapter 12, “Cisco Wide Area Application Services (WAAS)”: This chapter
explains the Cisco WAAS architecture and the methods by which it can be inserted into a
network. In addition, it explains how the environment can be sized appropriately for
current and future capacity.

■ Chapter 13, “Deploying Application Optimizations”: This chapter explains how the
various components of WAAS can be configured for the IWAN architecture.

Part V of the book explains the specific aspects of QoS for the WAN.

■ Chapter 14, “Intelligent WAN Quality of Service (QoS)”: This chapter explains
NBAR-based QoS policies, Per-Tunnel QoS policy, and other changes that should be
made to accommodate the IWAN architecture.

Part VI of the book discusses direct Internet access and how it can reduce operational
costs while maintaining a consistent security policy.

■ Chapter 15, “Direct Internet Access (DIA)”: This chapter explains how direct
Internet access can save operational costs while providing additional services at
branch sites. The chapter explains how ZBFW or Cisco Cloud Web Security can be
deployed to provide a consistent security policy to branch network users.

Part VII of the book explains how IWAN can be deployed.

■ Chapter 16, “Deploying Cisco Intelligent WAN”: This chapter provides an overview
of the steps needed to successfully migrate an existing WAN to Cisco Intelligent WAN.

The book ends with a closing perspective on the future of the Cisco software-defined
WAN (SD-WAN) and the management tools that are being released by Cisco.

Learning in a Lab Environment


This book covers new features and concepts that should be tested in a lab environment
first. Cisco VIRL (Virtual Internet Routing Lab) provides a scalable, extensible network
design and simulation environment that includes several Cisco Network Operating System
virtual machines (IOSv, IOS-XRv, CSR 1000V, NX-OSv, IOSvL2, and ASAv) and has the
ability to integrate with third-party vendor virtual machines or external network devices.

The authors will be releasing a VIRL topology file so that readers can learn the
technologies as they are explained in the book. More information about VIRL can be
found at http://virl.cisco.com.

Additional Reading
The authors tried to keep the size of the book manageable while providing only the necessary information about the topics involved. Readers who require additional
reference material may find the following books to be a great supplementary resource
for the topics in this book:

■ Bollapragada, Vijay, Mohamed Khalid, and Scott Wainner. IPSec VPN Design.
Indianapolis: Cisco Press, 2005. Print.

■ Edgeworth, Brad, Aaron Foss, and Ramiro Garza Rios. IP Routing on Cisco IOS,
IOS XE, and IOS XR. Indianapolis: Cisco Press, 2014. Print.

■ Karamanian, Andre, Srinivas Tenneti, and Francois Dessart. PKI Uncovered:
Certificate-Based Security Solutions for Next-Generation Networks. Indianapolis:
Cisco Press, 2011. Print.

■ Seils, Zach, Joel Christner, and Nancy Jin. Deploying Cisco Wide Area Application
Services. Indianapolis: Cisco Press, 2008. Print.

■ Szigeti, Tim, Robert Barton, Christina Hattingh, and Kenneth Briley Jr. End-to-End
QoS Network Design: Quality of Service for Rich-Media & Cloud Networks,
Second Edition. Indianapolis: Cisco Press, 2013. Print.
Chapter 1

Evolution of the WAN

This chapter covers the following topics:

■ WAN connectivity

■ Increasing demands on enterprise WANs

■ Quality of service for the WAN

■ Branch Internet connectivity and security

■ Cisco Intelligent WAN

A router’s primary job is to provide connectivity between networks. Designing and
maintaining a LAN is straightforward because equipment selection, network design,
and the ability to install or modify cabling are directly under the control of the network
engineers.

WANs provide connectivity between multiple LANs that are spread across a broad area.
Designing and supporting a WAN add complexity because of the variety of network
transports, associated limitations, design choices, and costs of each WAN technology.

WAN Connectivity
WAN connectivity uses a variety of technologies, but the predominant methods come
from service providers (SPs) with three primary solutions: leased circuits, Internet, and
Multiprotocol Label Switching (MPLS) VPNs.

Leased Circuits
The cost to secure land rights and to purchase and install cables between two locations
can present a financial barrier to most companies. Service providers can deliver
dedicated circuits between two locations at a specific cost. Leased circuits can provide
high-bandwidth and secure connectivity. Regardless of link utilization, leased lines
provide guaranteed bandwidth between two locations because the circuits are dedicated
to a specific customer.

Internet
The Internet was originally created to meet the needs of the U.S. Department of Defense for communication that could continue even if a network segment were destroyed. The Internet’s architecture has evolved so that it now supports IP (IPv4 and IPv6) and consists of a global public network connecting multiple SPs. A key benefit of using the Internet as a WAN transport is that the two locations do not have to use the same SP; a company can easily establish connectivity between sites that use different SPs.

When a company purchases Internet connectivity, bandwidth is guaranteed only to
networks under the control of the same SP. If the path between networks crosses multiple
SPs, bandwidth is not guaranteed because the peering link can be oversubscribed
depending upon the peering agreement between SPs. Bandwidth for peering links is
typically smaller than the bandwidth of the native SP network. At times congestion may
occur on the peering link, adding delay or packet loss as packets traverse the peering link.

Figure 1-1 illustrates a sample topology in which bandwidth contention can occur
on peering links. AS100 guarantees 1 Gbps of connectivity to R1 and 10 Gbps of
connectivity to R3. AS200 guarantees 10 Gbps of connectivity to R4, and AS300
guarantees 1 Gbps of connectivity to R2. AS100 and AS200 peer with a 10 Gbps circuit,
and AS200 peers with AS300 over two 10 Gbps circuits. With normal traffic flows, R1 can communicate with R2 at 1 Gbps. However, if R3 is transmitting 10 Gbps of
data to R4, 11 Gbps of traffic must travel across the 10 Gbps circuit into AS200. Because
the peering links are not dedicated to a specific customer, some traffic is delayed or
dropped because of oversubscription of the 10 Gbps link. Bandwidth or latency cannot
be guaranteed when packets travel across peering links.

Figure 1-1 Bandwidth Is Not Guaranteed on the Internet



Quality of service (QoS) is based on granting preference to one type of network traffic
over another. QoS design is based on trust boundaries, classification, and prioritization.
Because the Internet is composed of multiple SPs, the trust boundary continually changes.
Internet SPs trust and prioritize only network traffic that originates from their devices.
QoS is considered best effort when the Internet is used as a transport. Organizations that require end-to-end QoS guarantees may therefore deem the Internet an unacceptable transport.

Multiprotocol Label Switching VPNs (MPLS VPNs)


Service providers use MPLS to provide a scalable peer-to-peer architecture in which packets are tunneled dynamically from SP router to SP router without any need to look into the packets’ contents. Such networks forward traffic based upon the outermost label of the packet rather than the packet’s header or payload. As packets cross the core of the network, the source and destination IP addresses are not checked as long as a destination label exists in the packet. Only the SP’s provider edge (PE) routers need to know how to forward unlabeled packets toward the customer router.

MPLS VPNs can carry customer traffic in one of two ways, depending upon the customer’s requirements:

■ Layer 2 VPN (L2VPN): The SP provides connectivity to customer routers by
creating a virtual circuit between the nodes. The SP emulates a cable or network
switch and does not exchange any network information with the customer routers.

■ Layer 3 VPN (L3VPN): The SP routers create a virtual context, known as a Virtual Routing and Forwarding (VRF) instance, for each customer. Each VRF allows a router to maintain a separate routing and forwarding table for every VPN network it carries. The SP communicates and exchanges routes with the customer edge (CE) routers. L3VPN exchanges IPv4 and IPv6 packets between PE routers. (A minimal VRF configuration sketch follows this list.)
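
To make the VRF concept concrete, the following is a minimal, hypothetical PE-side sketch in Cisco IOS syntax; the VRF name, route distinguisher, route targets, and addresses are placeholders rather than values taken from this book:

vrf definition CUSTOMER-A
 rd 65000:100
 address-family ipv4
  route-target export 65000:100
  route-target import 65000:100
 exit-address-family
!
interface GigabitEthernet0/1
 description Link to the Customer A CE router
 vrf forwarding CUSTOMER-A
 ip address 203.0.113.1 255.255.255.252

Routes learned on GigabitEthernet0/1 are installed only in the CUSTOMER-A table, keeping them separate from every other customer VPN on the same PE router.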

The SPs own all the network components in an MPLS VPN network and can guarantee
specific QoS levels to the customer. They price their services based on service-level
agreements (SLAs) that specify bandwidth, QoS, end-to-end latency, uptime, and
additional guarantees. The price of the connectivity typically rises with the demands of the SLA, offsetting the additional capacity and redundancy required in the SP’s infrastructure.

Increasing Demands on Enterprise WANs


WAN traffic patterns have evolved since the 1990s. At first, a majority of network traffic
remained on the LAN because people with similar job functions were grouped together
in a building. File sharing and email were largely localized, so WAN links typically carried data between email servers or traffic from users accessing the corporate intranet. Over time, WAN circuits have seen an increase in network traffic, as explained in the following
sections.

Server Virtualization and Consolidation


Server CPUs have become faster and faster, allowing servers to do more processing.
IT departments realized that consolidating file or email servers consumed fewer resources
(power, network, servers, and staff) and lowered operational costs. Server consolidation
reached a new height with the introduction of x86 server virtualization. Companies
virtualized physical servers into virtual machines. An unintended consequence of server
consolidation was that WAN utilization increased because servers were located at data
centers (DCs), not in branch offices.

Cloud-Based Services
An organization’s IT department is responsible for maintaining business applications such
as word processing, email, and e-commerce. Application sponsors must work with IT to cover the costs of staffing, infrastructure (network, workstations, and servers), day-to-day operations, architecture, and disaster recovery.

Cloud-based providers such as SalesForce.com, Amazon, Microsoft, and Google have emerged. Cloud SPs assume responsibility for the cost of disaster
recovery, licensing, staff, and hardware while providing flexibility and lower costs to
their customers. The cost of a cloud-based solution can be spread across the length of
the contract. Changing vendors in a cloud-based model does not have the same financial
impact as implementing an application with in-house resources.

Connectivity to cloud providers is established with dedicated circuits or through Internet
portals. Some companies prefer a dedicated circuit because they manage the security
aspect of the application at the point of attachment. However, providing connectivity
through the Internet gives employees the same experience whether they are in the office
or working remotely.

Collaboration Services
Enterprise organizations historically maintained a network for voice and a network for
computer data. Phone calls between cities were classified as long distance, allowing
telephone companies to charge the party initiating the call on a per-minute basis.

By consolidating phone calls onto the data network using voice over IP (VoIP), organiza-
tions were able to reduce their operating costs. Companies did not have to maintain both
voice and data circuits between sites. Legacy private branch exchanges (PBXs) no longer
needed to be maintained at all the sites, and calls between users in different sites used the
WAN circuit instead of incurring per-minute long-distance charges.

Expanding upon the concepts of VoIP, collaboration tools such as Cisco WebEx now
provide virtual meeting capability by combining voice, computer screen sharing, and
interactive webcam video. These tools allow employees to meet with other employees,
meet with customers, or provide training seminars without requiring attendees to be in
the same geographic location. WebEx provides a significant reduction in operating costs
because travel is no longer required. Management has realized the benefits of WebEx
but has found video conferencing or Cisco TelePresence even more effective. These tools
provide immersive face-to-face interaction, involving all participants in the meeting,
thereby increasing the attention of all attendees. Decisions are made faster because of the
reduced delay, and people are more likely to interact and share information with others
over video.

Voice and video network traffic requires prioritization on a network. Voice traffic is
sensitive to latency between endpoints, which should be less than 150 ms one way.
Video traffic is more tolerant of latency than voice. Latency by itself causes a delay before the voice is heard, turning a phone call (two-way audio) into something more like a CB radio exchange (one speaker at a time).
While this is annoying, people can still communicate. Jitter is the varying delay between
packets as they arrive in a network and can cause gaps in the playback of voice or video
streams. If packet loss, jitter, or latency is too high, users can become frustrated with
choppy/distorted audio, video tiling, or one-way phone calls that drastically reduce
the effectiveness of these technologies.

Bring Your Own Device (BYOD)


In 2010, employees began to use their personal computers, smartphones, and tablets
for work. This trend is known as bring your own device (BYOD). Companies allowed employees to bring their own devices because they anticipated gains in productivity, cost savings, and employee satisfaction.

However, because these devices are not centrally managed, corporations must take steps
to ensure that their intellectual property is not compromised. Properly designed networks
ensure that BYOD devices are separated from corporate-managed devices.

Smartphones and tablets for BYOD contain a variety of applications. Some may be used
for work, but others are not. Application updates average 2 MB to 25 MB in size; some operating system updates are 150 MB to 750 MB. When users update multiple applications or the operating system (OS) on their devices, the downloads consume network bandwidth that business-related applications could otherwise use.

Note Some users connect their smartphones and tablets to corporate networks purely to
avoid data usage fees associated with their wireless carrier contracts.

Guest Internet Access


Many organizations offer guest networks for multiple reasons, including convenience and
security:

■ Convenience: Enterprises commonly provide their vendors, partners, and visitors with Internet access as a convenience. This connectivity gives visitors access to their own company’s network for email, VPN access to files, or a lab environment, making meetings and projects more productive.

■ Security: Separating the secured corporate resources (workstations, servers, and
so on) from unmanaged devices creates a security boundary. If an unmanaged device
becomes compromised because of malware or a virus, it cannot communicate with
corporate devices.

Quality of Service for the WAN


Network users expect timely responsiveness from their network applications. Most LAN
environments provide gigabit connectivity to desktops, with adequate links between
network devices to prevent link saturation. Network engineers deploy QoS policies to grant preference to one type of network traffic over another. Although QoS
policies should be deployed everywhere in a network, they are a vital component of any
WAN edge design, where bandwidth is often limited because of cost and/or availability.

Media applications (voice and/or video) are sensitive to delay and packet loss and are
often granted the highest priority in QoS policies. Typically, non-business-related traffic
(Internet) is assigned the lowest QoS priority (best effort). All other business-related
traffic is categorized and assigned an appropriate QoS priority and bandwidth based
upon the business justification.
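
As a rough illustration (not a recommended IWAN policy; the class names, DSCP values, and bandwidth percentages are placeholders), a WAN-edge queuing policy in Cisco IOS might look like this:

class-map match-any VOICE
 match dscp ef
class-map match-any INTERACTIVE-VIDEO
 match dscp af41
!
policy-map WAN-EDGE
 class VOICE
  priority percent 10
 class INTERACTIVE-VIDEO
  bandwidth remaining percent 30
 class class-default
  fair-queue
  random-detect dscp-based
!
interface GigabitEthernet0/0
 service-policy output WAN-EDGE

Voice receives a strict-priority queue, business video receives a guaranteed share of the remaining bandwidth, and everything else, including general Internet traffic, falls into class-default as best effort.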

A vital component of QoS is the classification of network traffic according to the
packet’s header information. Typically traffic is classified by class maps, which use a
combination of protocol (TCP/UDP) and communication ports. Application developers
have encountered issues with traffic passing through corporate firewalls on nonstandard
ports or protocols. They have found methods to tunnel their application traffic over
port 80, allowing instant messaging (IM), web conferencing, voice, and a variety of other
applications to be embedded in HTTP. In essence, HTTP has become the new TCP.

HTTP is not sensitive to latency or packet loss and relies on TCP to detect lost packets and retransmit them. Network engineers might assume that all web-browsing traffic can be marked as best effort because it uses HTTP, but doing so also marks the applications nested inside HTTP incorrectly.

Deep packet inspection is the process of looking at the packet header and payload to
determine the actual application for that packet. Packets that use HTTP or HTTPS header
information should use deep packet inspection to accurately classify the application for
proper QoS marking. Providing proper network traffic classification ensures that the
network engineers can deploy QoS properly for every application.
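
For example, here is a hedged sketch of the difference between port-based matching and NBAR deep packet inspection in Cisco IOS; the class names are illustrative, and the application names that NBAR can match depend on the NBAR2 protocol pack loaded on the router:

! Port-based classification: anything on TCP 80 or 443 looks the same
ip access-list extended WEB-PORTS
 permit tcp any any eq 80
 permit tcp any any eq 443
class-map match-any PORT-BASED-WEB
 match access-group name WEB-PORTS
!
! NBAR classification: the router inspects the payload to identify the application
class-map match-any COLLABORATION
 match protocol webex-meeting
class-map match-any GENERAL-WEB
 match protocol http

With the port-based class, WebEx traffic tunneled over port 80 or 443 is indistinguishable from recreational browsing; the NBAR-based classes let the QoS policy treat the two differently.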

Branch Internet Connectivity and Security


The Internet provides a wealth of knowledge and new methods of exchanging
information with others. Businesses host web servers known as e-commerce servers to
provide company information or allow customers to shop online. Just as with any aspect
of society, criminals try to obtain data illegally for personal gain or blackmail. Security is
deployed in a layered approach to provide effective solutions to this problem.

Firewalls restrict network traffic to e-commerce servers by specifying explicit destination
IP addresses, protocols, and ports. Email servers scan email messages for viruses and
phishing attempts. Hackers have become successful at inserting viruses and malware
into well-known and respected websites. Content-filtering servers can restrict access to websites based on domain classification and can dynamically scan websites for
malicious content.

Internet access is provided to the branch with either a centralized or a distributed model.
Both models are explained in the following sections.

Centralized Internet Access


In the centralized Internet access model, one centralized or regional site provides Internet
connectivity. This model simplifies the management of Internet security policy and
device configuration because network traffic flows through a minimal number of access
points. This reduces the size of the security infrastructure and its associated maintenance
costs.

The downside of the centralized model is that all network traffic from remote locations
to the Internet is also backhauled across the WAN circuit. This can cause congestion on
the enterprise WAN and centralized Internet access circuits during peak usage periods
unless the Internet circuit contains sufficient bandwidth for all sites and the WAN
circuits are sized to accommodate internal network traffic as well as the backhauled
Internet traffic. Although Internet circuits have a low cost, the backhauled network
traffic travels on more expensive WAN circuits. In addition, backhauling Internet traffic
may add latency between the clients and servers on the Internet. The added latency affects recreational web browsing as well as access to corporate cloud-based applications.

Figure 1-2 illustrates the centralized Internet model. All Internet traffic from R2 or R3
must cross the WAN circuit where it is forwarded out through the headquarters Internet
connection.

Figure 1-2 Centralized Internet Connectivity Model



Distributed Internet Access


In the distributed Internet access model, Internet access is available at all sites. Access
to the Internet is more responsive for users in the branch, and WAN circuits carry only
internal network traffic. Figure 1-3 illustrates the distributed Internet model. R2 and
R3 are branch routers that can provide access to the Internet without having to traverse
the WAN links. R2 and R3 route packets to the Internet out of their Internet circuits,
reducing the need to backhaul Internet traffic across costly WAN circuits.

Figure 1-3 Distributed Internet Connectivity Model

This model requires that the security policy be consistent at all sites, and that appropriate
devices be located at each site to enforce those policies. These requirements can be a
burden to some companies’ network and/or security teams.
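
As a rough sketch of the distributed model (interface names and addresses are placeholders, and a production deployment would add the firewall and web security controls discussed later in this book), a branch router can translate and forward local Internet traffic directly out its own Internet circuit:

interface GigabitEthernet0/1
 description Local branch Internet circuit
 ip address 198.51.100.2 255.255.255.252
 ip nat outside
!
interface GigabitEthernet0/2
 description Branch LAN
 ip address 10.2.1.1 255.255.255.0
 ip nat inside
!
ip access-list standard BRANCH-LAN
 permit 10.2.0.0 0.0.255.255
!
ip nat inside source list BRANCH-LAN interface GigabitEthernet0/1 overload
ip route 0.0.0.0 0.0.0.0 198.51.100.1

Intranet destinations continue to follow the more specific routes learned over the WAN, while Internet-bound traffic is translated and sent straight out the local circuit.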

Cisco Intelligent WAN


Cisco Intelligent WAN (IWAN) architecture provides organizations with the capability
to supply more usable WAN bandwidth at a lower cost without sacrificing performance,
security, or reliability. Cisco IWAN is based upon four pillars: transport independence,
intelligent path control, application optimization, and secure connectivity.

Transport Independence
Cisco IWAN uses Dynamic Multipoint VPN (DMVPN) to provide transport independence via overlay routing. Overlay routing provides a level of abstraction that simplifies the control plane for any WAN transport: organizations can deploy a consistent routing design across any transport, gain better traffic control and load sharing, and remove barriers to equal-cost multipathing (ECMP). Because the overlay provides transport independence, a customer can select any WAN technology: MPLS VPN (L2 or L3), metro Ethernet, direct Internet, broadband, cellular 3G/4G/LTE, or high-speed radios. Transport independence makes it easy to mix and
match transport options or change SPs to meet business requirements.
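
For a sense of what the overlay looks like, here is a minimal, hypothetical DMVPN spoke tunnel in Cisco IOS; the addresses, NHRP network ID, and interface names are placeholders, and Chapter 3 covers the real design choices:

interface Tunnel100
 description DMVPN overlay; the underlying transport can change without touching this design
 ip address 10.0.100.11 255.255.255.0
 ip nhrp network-id 100
 ip nhrp nhs 10.0.100.1 nbma 192.0.2.1 multicast
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 100

The routing protocol peers across the 10.0.100.0/24 overlay, so swapping the underlay (for example, temporarily pointing the tunnel source at a cellular interface) does not change the overlay routing design.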

For example, a new branch office requires network connectivity. Installing a physical
circuit can take an SP six to twelve weeks to provision after the order is placed. If the order
is not placed soon enough or complications are encountered, WAN connectivity for
the branch is delayed. Cisco IWAN’s transport independence allows the temporary use
of a cellular modem until the physical circuit is installed without requiring changes to
the router’s routing protocol configuration, because DMVPN resides over the top of the
cellular transport. Changing transports does not impact the overlay routing design.

Intelligent Path Control


Routers forward packets based upon destination address, and the methodology for path
calculation varies from routing protocol to routing protocol. Routing protocols do not
take into consideration packet loss, delay, jitter, or link utilization during path calculation,
which can lead to using an unsuitable path for an application. Technologies such as IP
SLAs can measure the path’s end-to-end characteristics but do not modify the path
selected by the routing protocol.
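
As a hedged illustration of that kind of measurement (the target address, port, and timers are placeholders), an IP SLA probe can report characteristics such as jitter and loss without influencing the routing decision:

! On the far-end router that answers the probes
ip sla responder
!
! On the measuring router
ip sla 10
 udp-jitter 10.0.100.1 16384 codec g711ulaw
 frequency 60
ip sla schedule 10 life forever start-time now

The results (viewed with show ip sla statistics 10) describe latency, jitter, and loss for the path, but the routing protocol keeps forwarding traffic the same way; PfR is what turns measurements like these into path decisions.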

Performance Routing (PfR) provides intelligent path control on an application basis.
It monitors application performance on a traffic class basis and can forward packets
on the best path for that application. In the event that a path becomes unacceptable,
PfR can switch the path for that application until the original path is within application
specifications again. In essence, PfR ensures that the path taken meets the requirements
set for that application.

PfR has been enhanced multiple times for Cisco intelligent path control, integrating
with DMVPN and making it a vital component of the IWAN architecture. It provides
improved application monitoring, faster convergence, simple centralized configuration,
service orchestration capability, automatic discovery, and single-touch provisioning.

Providing a highly available network requires elimination of single points of failure
(SPoFs) to accommodate hardware failure and other failures in the SP infrastructure. In
addition to redundancy, the second circuit can provide additional bandwidth with the use
of transport independence and PfR. This can reduce WAN operating expenses in any of
the IWAN deployment models.

Figure 1-4 depicts a topology that provides R1 connectivity to R5 across two different
paths. The routing protocol on R1 and R5 has identified DMVPN tunnel 100 as the best path, so they send their VoIP traffic across that tunnel. R1 uses the same tunnel for transferring files. The total amount of network traffic exceeds tunnel 100’s bandwidth capacity. The QoS policies on the tunnel ensure that the VoIP traffic is not impacted, but the file transfer traffic is. With intelligent path control, DMVPN tunnel 200 could be used to transfer the files instead.

Figure 1-4 Path Optimizations with Intelligent Path Control

PfR overcomes scenarios like the one described previously. With PfR, R1 can send VoIP
traffic across DMVPN tunnel 100 and send file transfer traffic toward DMVPN tunnel
200. PfR allows both DMVPN tunnels to be used while still supporting application
requirements and not dropping packets.

Note Some network engineers might correlate PfR with MPLS traffic engineering (TE).
MPLS TE supports the capability to send specific marked QoS traffic down different TE
tunnels but lacks the granularity that PfR provides for identifying an application.

Application Optimization
Most users assume that application responsiveness across a WAN is directly related
to available bandwidth on the network link. This is an incorrect assumption because
application responsiveness directly correlates to the following variables: bandwidth, path
latency, congestion, and application behavior.

Most applications do not take network characteristics into account and rely on
underlying protocols like TCP for communicating between computers. Applications
are typically designed for LAN environments, where high-speed links rarely experience congestion, and many of these applications are “chatty.” Chatty applications transmit multiple packets in a back-and-forth manner, requiring an acknowledgment in each direction.
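
To put an illustrative (not from this text) number on it: a chatty application that requires 200 application-layer round trips to open a document spends 200 × 70 ms = 14 seconds waiting on a WAN path with a 70 ms round-trip time, no matter how much bandwidth is available. Application optimization, covered in Part IV, targets exactly this behavior.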
