
Cisco Systems Advanced Services

High Level Design Template

Version 0.1

Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 526-4100
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED
WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED
WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The following information is for FCC compliance of Class A devices: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15
of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment
generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications.
Operation of this equipment in a residential area is likely to cause harmful interference, in which case users will be required to correct the interference at their own expense.

The following information is for FCC compliance of Class B devices: The equipment described in this manual generates and may radiate radio-frequency energy. If it is not installed in
accordance with Cisco’s installation instructions, it may cause interference with radio and television reception. This equipment has been tested and found to comply with the limits for a
Class B digital device in accordance with the specifications in part 15 of the FCC rules. These specifications are designed to provide reasonable protection against such interference in a
residential installation. However, there is no guarantee that interference will not occur in a particular installation.

You can determine whether your equipment is causing interference by turning it off. If the interference stops, it was probably caused by the Cisco equipment or one of its peripheral
devices. If the equipment causes interference to radio or television reception, try to correct the interference by using one or more of the following measures:

Turn the television or radio antenna until the interference stops.

Move the equipment to one side or the other of the television or radio.

Move the equipment farther away from the television or radio.

Plug the equipment into an outlet that is on a different circuit from the television or radio. (That is, make certain the equipment and the television or radio are on circuits controlled by
different circuit breakers or fuses.)

Modifications to this product not authorized by Cisco Systems, Inc. could void the FCC approval and negate your authority to operate the product.

The following third-party software may be included with your product and will be subject to the software license agreement:

CiscoWorks software and documentation are based in part on HP OpenView under license from the Hewlett-Packard Company. HP OpenView is a trademark of the Hewlett-Packard
Company. Copyright © 1992, 1993 Hewlett-Packard Company.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public domain version
of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

Network Time Protocol (NTP). Copyright © 1992, David L. Mills. The University of Delaware makes no representations about the suitability of this software for any purpose.

Point-to-Point Protocol. Copyright © 1989, Carnegie-Mellon University. All rights reserved. The name of the University may not be used to endorse or promote products derived from
this software without specific prior written permission.

The Cisco implementation of TN3270 is an adaptation of the TN3270, curses, and termcap programs developed by the University of California, Berkeley (UCB) as part of the UCB’s
public domain version of the UNIX operating system. All rights reserved. Copyright © 1981-1988, Regents of the University of California.

Cisco incorporates Fastmac and TrueView software and the RingRunner chip in some Token Ring products. Fastmac software is licensed to Cisco by Madge Networks Limited, and the
RingRunner chip is licensed to Cisco by Madge NV. Fastmac, RingRunner, and TrueView are trademarks and in some jurisdictions registered trademarks of Madge Networks Limited.
Copyright  1995, Madge Networks Limited. All rights reserved.

Xremote is a trademark of Network Computing Devices, Inc. Copyright © 1989, Network Computing Devices, Inc., Mountain View, California. NCD makes no representations about the
suitability of this software for any purpose.

The X Window System is a trademark of the X Consortium, Cambridge, Massachusetts. All rights reserved.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH ALL
FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE
PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS
SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

AccessPath, AtmDirector, Browse with Me, CCDE, CCIP, CCSI, CD-PAC, CiscoLink, the Cisco NetWorks logo, the Cisco Powered Network logo, Cisco Systems Networking Academy,
Fast Step, Follow Me Browsing, FormShare, FrameShare, GigaStack, IGX, Internet Quotient, IP/VC, iQ Breakthrough, iQ Expertise, iQ FastTrack, the iQ logo, iQ Net Readiness
Scorecard, MGX, the Networkers logo, Packet, RateMUX, ScriptBuilder, ScriptShare, SlideCast, SMARTnet, TransPath, Unity, Voice LAN, Wavelength Router, and WebViewer are
trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, Discover All That’s Possible, and Empowering the Internet Generation, are service marks of
Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA, CCNP, Cisco, the Cisco Certified Internetwork Expert Logo, Cisco IOS, the Cisco IOS logo,
Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Enterprise/Solver, EtherChannel, EtherSwitch, FastHub, FastSwitch, IOS, IP/TV, LightStream, MICA, Network Registrar,
PIX, Post-Routing, Pre-Routing, Registrar, StrataView Plus, Stratm, SwitchProbe, TeleRouter, and VCO are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S.
and certain other countries.

All other trademarks mentioned in this document or Web site are the property of their respective owners. The use of the word partner does not imply a partnership relationship between
Cisco and any other company. (0105R)

INTELLECTUAL PROPERTY RIGHTS:


THIS DOCUMENT CONTAINS VALUABLE TRADE SECRETS AND CONFIDENTIAL INFORMATION OF CISCO SYSTEMS, INC. AND ITS SUPPLIERS, AND SHALL NOT
BE DISCLOSED TO ANY PERSON, ORGANIZATION, OR ENTITY UNLESS SUCH DISCLOSURE IS SUBJECT TO THE PROVISIONS OF A WRITTEN NON-DISCLOSURE
AND PROPRIETARY RIGHTS AGREEMENT OR INTELLECTUAL PROPERTY LICENSE AGREEMENT APPROVED BY CISCO SYSTEMS, INC. THE DISTRIBUTION OF
THIS DOCUMENT DOES NOT GRANT ANY LICENSE IN OR RIGHTS, IN WHOLE OR IN PART, TO THE CONTENT, THE PRODUCT(S), TECHNOLOGY OR
INTELLECTUAL PROPERTY DESCRIBED HEREIN.

High Level Design Template


Copyright  2001-2, Cisco Systems, Inc.
All rights reserved.
COMMERCIAL IN CONFIDENCE.

Contents

Contents........................................................................................................................................ 3

Tables............................................................................................................................................ 8

Figures......................................................................................................................................... 10

Document Control...................................................................................................................... 12
History.................................................................................................................................... 12
Review.................................................................................................................................... 13
Design Acceptance............................................................................................................... 14

About This Design Document................................................................................................... 15


Document Purpose................................................................................................................ 15
Scope...................................................................................................................................... 15
Document Usage Guidelines................................................................................................ 15
Assumptions and Caveats.................................................................................................... 16
Related Documents............................................................................................................... 16

Network Overview...................................................................................................................... 17
Network Topology................................................................................................................. 17
WAN Overview..................................................................................................................................17
Network Infrastructure.......................................................................................................... 17
Core.....................................................................................................................................................17
Edge ...................................................................................................................................................17
Access.................................................................................................................................................17
Traffic Flow and Characteristics........................................................................... 18
Existing Services and SLAs................................................................................................. 18

Proposed Network Architecture................................................................................................ 19


Design Considerations......................................................................................................... 19
MPLS Network Architecture..............................................................................................................19
Quality-of-Service...............................................................................................................................20
MPLS/VPN Services..........................................................................................................................20

Deployment Guidelines.............................................................................................................. 22
Physical Network Design...................................................................................................... 22
Layer-2 Transport Media....................................................................................................................22
Central Office Bratislava [BA]...........................................................................................................23
PoPs.....................................................................................................................................................29
Hardware/Software Release Table......................................................................................................29

Logical Network Design............................................................................................................. 30


IGP Routing – (OSPF or ISIS)............................................................................................... 30
The Role of OSPF in <Customer name> MPLS network..................................................................30
OSPF Areas.........................................................................................................................................31
OSPF Authentication..........................................................................................................................31
OSPF Area Summarization.................................................................................................................31
OSPF Costs.........................................................................................................................................31
Designated and Backup Designated Routers......................................................................................31
Default Routes....................................................................................................................................32
OSPF Convergence.............................................................................................................................32
OSPF Deployment Recommendations Summary for the <CUSTOMER NAME> network.............33
Backbone Routing and Label Distribution Protocols.........................................................33
Cisco Express Forwarding (CEF) Switching......................................................................................33
Label Distribution Protocol (LDP).....................................................................................................34

Network Services........................................................................................................................ 36
MPLS/VPN Services.............................................................................................................. 36
MPLS-VPN.........................................................................................................................................36
MP-iBGP4 (Multi-protocol iBGP) Implementation...........................................................................37
Guidelines for Creating VRF Definitions.............................................................................42
VPN Route Target Communities........................................................................................................44
VPN Topologies..................................................................................................................... 45
Full Mesh............................................................................................................................................45
Hub and Spoke....................................................................................................................................45
Extranets.............................................................................................................................45
Customers with Unique Addresses...............................................................................................45
Customers with Overlapping Addresses.......................................................................................45
Extranet NAT at a Common Service Point...................................................................................46
Extranet NAT at Customer Edge..................................................................................................46
Controlling route exports in extranets...........................................................................................46


PE-CE Routing Implementation................................................................................................. 47


Connectivity via Static Routing............................................................................................ 47


RIPv2 configuration (PE to CE)............................................................................................ 48
EBGP configuration (PE to CE)............................................................................................ 48
Configuration at the PE.......................................................................................................................48
Controlling number of VRF routes.....................................................................................................50


Additional MPLS VPN Services................................................................................................. 52


Internet Access for MPLS/VPN customers..........................................................................52
Separate CEs for Internet Access and VPN Access............................................................................53
Low-cost Internet Access (1CE + one/two access links)....................................................................53
Shared vrf-aware services.................................................................................................... 55
Network Address Translation for MPLS/VPN customers..................................................................55
Connecting Downstream ISPs to PE routers......................................................................................56
Remote Access (ASWAN/Security, Dial, DSL, Cable)........................................................56
Wireless.................................................................................................................................. 56
VOIP........................................................................................................................................ 56
Inter-AS/CsC.......................................................................................................................... 56


Traffic Engineering and Fast Reroute Technology Overview.................................................57


Traffic Engineering Basics................................................................................................... 57
Traffic Trunk Attributes......................................................................................................... 59
Bandwidth...........................................................................................................................................59
Path Selection Policy..........................................................................................................................59
Resource Class Affinity......................................................................................................................59
Adaptability.........................................................................................................................................59
Resilience............................................................................................................................................59
Priority................................................................................................................................................60
Resource Attributes.............................................................................................................. 60
Available Bandwidth..........................................................................................................................60
Resource Class....................................................................................................................................60
Path Selection........................................................................................................................ 60
Path Setup ............................................................................................................................. 61
Link Protection (FRR) Basics.............................................................................................. 62
Increased Reliability for IP Services...................................................................................................64
High Scalability Solution....................................................................................................................64

TE/TE-FRR Design ..................................................................................................................... 65


Deciding on the tunnel topology and tunnel types............................................................65
How to Route Traffic Into TE Tunnels..................................................................................65


Policy Based Routing.........................................................................................................................65


Static Routing Into Tunnels................................................................................................................65
Auto-Route.........................................................................................................................................66
Forwarding Adjacency.......................................................................................................................68
Using Directed LDP Sessions.............................................................................................. 69
Number of Protected Prefixes.............................................................................................. 70

“3” Implementation Of TE-FRR................................................................................................. 72


“3” Network Architecture...................................................................................................... 72
Introduction.........................................................................................................................................72
TE-FRR Design..................................................................................................................................73
Primary Tunnels............................................................................................................................74
Backup Tunnels.............................................................................................................................74
Sample configurations........................................................................................................................76
Generic Global Commands...........................................................................................................76
Birmingham P Router....................................................................................................................76
Quality of Service.................................................................................................................. 79
Introduction.........................................................................................................................................79
Differentiated Services Model – Introduction....................................................................................80
DiffServ Aware TE.............................................................................................................................91
ST QoS design – An Overview...........................................................................................................91
CE-to-PE QoS mechanisms (applied on the CE) – PPP or HDLC.....................................................93
CE-to-PE QoS mechanisms (applied on the PE) – PPP or HDLC...................................................111
PE-to-P QoS mechanisms (applied on the PE).................................................................................113
PE-P, P-P and P-PE QoS mechanisms (applied on the P)................................................................115
PE to CE QoS mechanisms (applied on the PE)...............................................................................121
QoS mechanisms on ATM PVCs (applied on the CE and PE).........................................................122

High Availability........................................................................................................................ 125


Security .................................................................................................................................... 126


Password Management.....................................................................................................................126
Console Ports....................................................................................................................................127
Controlling TTY’s............................................................................................................................127
Controlling VTYs and Ensuring VTY Availability..........................................................................127
Logging.............................................................................................................................................128
Anti-spoofing....................................................................................................................................129
Controlling Directed Broadcasts.......................................................................................................131


IP Source Routing.............................................................................................................................131
ICMP Redirects.................................................................................................................................132
CDP...................................................................................................................................................132
NTP...................................................................................................................................................132


Network Management............................................................................................................... 134


Appendix I................................................................................................................................. 135

Appendix II................................................................................................................................ 136

Tables

Table 1 Revision History............................................................................................................ 12

Table 2 Revision Review............................................................................................................ 13

Table 3 Software Release Table................................................................................................ 29

Table 4 OSPF Timer Default Values......................................................................................... 33

Table 5 Tunnel Provisioning................................................................................................ 75

Table 6 Class-Selector PHBs..................................................................................................... 83

Table 7 Serialisation delay [ms] as function of link speed and packet size..........................86

Table 8 Recommended fragment size.......................................................................................88

Table 9 The components of the end-to-end delay model........................................................89

Table 10 CoS Mechanisms Overview........................................................................................ 92

Table 11 NB and EB settings..................................................................................................... 99

Table 12 WRED Settings for Business Class.........................................................................106

Table 13 WRED Settings for Streaming Class.......................................................................106

Table 14 WRED Settings for Standard Class.........................................................................107

Table 15 WRED - exponential weighting constant.................................................................109

Table 16 MDRR weights........................................................................................................... 117

Table 17 WRED Settings for Business Class (ENG-2 GSR) ..................................120

Table 18 WRED Settings for Streaming Class (ENG-2 GSR)..................................120

Table 19 WRED Settings for Standard Class (ENG-2 GSR)....................................120

Table 20 ATM Overhead........................................................................................................... 123

Table 21 LLQ bandwidths and ATM........................................................................................ 123


Figures

Figure 1 <Company’s name> network – WAN topology..........................................................17

Figure 2 Architecture of CO BA................................................................................................. 24

Figure 3 HW configuration of ba2-igw-2...................................................................................25

Figure 4 HW configuration of ba1-igw-1...................................................................................26

Figure 5 HW configuration of ba-six-1......................................................................................26

Figure 6 Layer 2 Frame with 2 MPLS Labels...........................................................................34

Figure 7 MP-BGP VPN Route Distribution example.................................................................37

Figure 8 VPN route distribution using partitioned RRs...........................................................38

Figure 9 Route Reflector Redundancy in the <Customer Name> Networks.........................39

Figure 10 Redundant Route Reflectors with same cluster-id................................................41

Figure 11 Unique RD per each VPN.......................................................................................... 43

Figure 12 Unique RD per site for each VPN.............................................................................44

Figure 13 PE-CE eBGP with unique AS....................................................................................48

Figure 14 PE-CE eBGP with single network wide AS..............................................................49

Figure 15 Internet Access from a VPN using separate CEs....................................................53

Figure 16 Internet Access from a VPN – Single CE (two links in CEred, single link on
CEblue)........................................................................................................................................ 55

Figure 17 NAT in CE router........................................................................................................ 56

Figure 18 - Traffic Engineering Mechanisms...........................................................................58

Figure 19 - Traffic Engineering Path Setup..............................................................................61

Figure 20 - TE FRR Example...................................................................................................... 63

Figure 21 - Topology Without Tunnels.....................................................................................66



Figure 22 - R1 Routing Table – No MPLS TE............................................................................67

Figure 23 – Topology With TE Tunnels....................................................................................67

Figure 24 - R1 Routing Table With Autoroute Announce........................................................67

Figure 25 - Forwarding Adjacency Topology...........................................................................68

Figure 26 - "3" Core Network Architecture...............................................................................73

Figure 27 - Illustration of Primary and Backup TE Tunnels....................................................74

Figure 28 Various interpretations of the TOS field..................................................................81

Figure 29 DSCP Interpretation.................................................................................................. 84

Figure 30 Adaptive jitter buffer.................................................................................................. 87

Figure 31 - Call admission control............................................................................................ 87

Figure 32 LFI to reduce frame delay and jitter.........................................................................88

Figure 33 Overview of end-to-end delay segments.................................................................90

Figure 34 DSCP to EXP mapping.............................................................................................. 91

Figure 35 DSCP / MPLS Headers............................................................................................... 91

Figure 36 QoS mechanisms overview......................................................................................93

Figure 37 In/Out-contract Marking and Policing (example for Business class) ...................97

Figure 38 CAR based In/Out-contract Marking and Policing .................................................98

Figure 39 WRED Algorithm...................................................................................................... 103

Document Control

Authors:
Change Authority:
Reference Number: EDCS-xxxx

History
Table 1 Revision History

Version No.    Issue Date    Status    Reason for Change

Review
Table 2 Revision Review

Reviewer’s Details Version No. Date


<Name> <Version number> <dd-mmm-yyyy>
<Organisation>

Change Forecast: Medium

This document will be kept under revision control.


Design Acceptance
The signatories below confirm that the design meets the requirements specified. The design is subject to
change during or following staging.

CISCO SYSTEMS Customer Name

By:__________________________________ By:_____________________________________

Name: Name:

Title: Title:

Date:________________________________ Date:___________________________________

About This Design Document

Document Purpose
The purpose of this document is to outline the Cisco Systems recommended High Level Design (HLD) for
<Company’s name and Project Name>. It recommends an architecture for the customer based on the
requirements outlined in the CRD and subsequent meetings with the customer.

The document is split into the following main sections:


• Current Architecture and Network Design
• Planned Services
• Proposed Design and Architecture
• OSS
Note: The above sections may change depending on the customer’s needs.
The document provides sufficient detail to derive the device configurations that will be documented in the
Network Implementation Plan. Some parameters may be determined during network deployment.

Scope
Please refer to Statement of Work documents for exact definition of project deliverables.

Document Usage Guidelines


The document is intended to provide the customer with a recommended architecture that fulfills the
requirements outlined by the customer. The proposed architecture keeps in mind the existing
deployment as well as the future growth of the network. The HLD does not delve into configuration-level
details or any scalability/performance numbers.
As long as the High Level Design document is in a draft format, it is susceptible to modifications and
additions initiated by Cisco Systems or by the customer.

After acceptance of the HLD by the customer, the HLD document is still a living document that will
be updated by experiences gained throughout the deployment and testing phases.

Assumptions and Caveats

• Based on the input from the CRDR and SOW, write down the necessary assumptions and caveats.
• It is assumed the reader is familiar with the <Customer Name> service requirements. Furthermore it is
also assumed the reader is familiar with Cisco IOS and has a basic understanding of the network and
technologies that will be used to fulfil the customer requirements.

Related Documents
Write down the links to the CRD, CRDR, and SOW.

Network Overview

Describe the kind of customer and their core business. Also explain their current architecture at a high
level, with more details in the following section. This information can be collected from the CRD and
CRDR.

Network Topology

WAN Overview
The following figure is provided for illustrative purposes and depicts a high-level view of the <Company’s
name> network. The picture is simplified for easier understanding of the WAN topology.
Figure 1 <Company’s name> network – WAN topology

Network Infrastructure

Core
If possible, provide the details of the current core network: the platforms used, the kind of links, the routing
protocol, etc.

Edge
The following types of devices are installed in <Company’s name> network as Provider Edge (PE)
routers:

Access
Customer Edge (CE) routers are classical IPv4 routers and will interconnect customer sites with PE
routers via leased lines (as described in the chapter below). CE routers usually reside on customer premises.
CE routers can be managed by <Company’s name> or by the customer.
Traffic Flow and Characteristics
In this section, explain and characterize the customer traffic. Explain the load that the major PoPs (or even
links) are carrying, and what additional traffic is expected with the new deployment.

Existing Services and SLAs


In this section, explain the current services that the customer is offering and the SLAs, if any, associated
with these services.

Proposed Network Architecture

In this chapter we highlight the architecture that is being proposed for the customer based on the
requirements listed in the CRD and subsequent meetings with the customer.

Design Considerations
This chapter summarizes the design objectives that have been followed throughout the HLD, and the design
rules we have applied to meet these objectives. The following is the list of these objectives as dictated by the
customer.

MPLS Network Architecture


The following are just some examples. Customize this section based on your customer requirements.
• Fast convergence
Fast convergence and network stability are two orthogonal components in any network design.
Accurately measuring and interpreting the convergence time in complex topologies is difficult, as many
factors are involved. Improving the overall convergence by tuning the relevant parameters is a complex
task that requires in-depth analysis of all side effects (e.g. CPU utilization).
We recommend tuning the convergence of routing protocols in a separate project. The scope of that
project should be exclusively the optimization of convergence in the <Customer Name> MPLS network,
by introducing new features (e.g. Traffic Engineering Fast Reroute) and tuning of routing protocol
timers (see the OSPF Convergence chapter).

• Network Stability and Scalability


Any routing protocol scales well if the routing information is stable. Stability of the backbone IGP
was one of the main concerns in the former ST network. For this reason the following changes have
been made during previous project phases:
o Offload of any customer routes from backbone OSPF into BGP.
o Subnet aggregation of unstable leased-line connections and redistribution as static routes into
BGP.
o Subnet aggregation of the dial-up customers with fixed addresses on the VPDN tunnel
concentrator, and redistribution as static routes into BGP.

• Network resilience
The <Customer Name> MPLS network has been designed for high availability. The physical and logical design
ensures that a primary and a backup path exist between any two core routers. Core routers are equipped
with primary and backup route processors. Resilient connections between regional PoPs and core
locations will be rolled out in project phase II.
• Network security
Cisco has implemented best-practice security mechanisms on <Customer Name> routers to protect
the network. Customer security and managed firewall services were not in the scope of any Cisco project.

• Simplicity
The <Customer Name> MPLS network design is clean and simple to understand. Any feature or design
element that would increase network complexity - but has a limited overall benefit - has been avoided.
ST has decided to clean up the existing IP addressing scheme and migrate from a multi-area OSPF to a
single-area design.

• MPLS
LDP has been chosen for label distribution in the <Customer Name> MPLS network. LDP is enabled on
all core links (P-P, P-PE, P-RR, P-iGW), as sketched below.
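The following is a minimal configuration sketch of how CEF and LDP could be enabled on a core link. The interface name and addressing are placeholders and are not taken from this design; the exact interfaces and addresses are defined in the LLD.

! CEF switching is a prerequisite for MPLS forwarding
ip cef
! use LDP (rather than TDP) as the label distribution protocol
mpls label protocol ldp
!
interface POS0/0
 description Core link towards P router (placeholder)
 ip address 10.0.0.1 255.255.255.252
 ! enable MPLS/LDP label switching on this core link
 mpls ip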

Quality-of-Service
• Traffic prioritisation
The following Classes of Service are implemented in the <Customer Network> network: Voice,
Streaming, Business, Standard. Each class has different QoS attributes and guaranteed (configured)
bandwidth that cannot be utilised by any other class during congestion periods.

Backbone links must be provisioned with sufficient capacity for each of the classes!

• Flexibility
Modular QoS CLI allows to map traffic flows of <Customer Name> customers in one of the Classes of
Service. Such classification and marking is extremely flexible (different customers can map different
applications in any of the classes), but requires the understanding of traffic profiles (e.g. SMTP or any
other data traffic must not be mixed with delay-sensitive VoIP packets).

• Scaleable implementation
The customer-specific QoS configuration is implemented on CE routers – QoS configuration template
on PE and P devices will remain stable and the same for all ST customers. VPNSC shall be used for
accurate provisioning of QoS parameters on access (PE-CE) connections.
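As an illustration of the Modular QoS CLI mentioned above, the sketch below classifies traffic into the Voice and Business classes and reserves bandwidth for them. The DSCP values, percentages and interface are assumptions for illustration only; the actual values belong in the Quality of Service chapter.

! classification: assumed DSCP markings for two of the classes
class-map match-any Voice
 match ip dscp ef
class-map match-any Business
 match ip dscp af31 af32
!
policy-map CE-EGRESS
 class Voice
  ! strict-priority queue for voice traffic (placeholder percentage)
  priority percent 20
 class Business
  ! guaranteed share during congestion (placeholder percentage)
  bandwidth percent 40
 class class-default
  fair-queue
!
interface Serial0/0
 ! applied in the outbound direction on the CE-to-PE access link
 service-policy output CE-EGRESS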

MPLS/VPN Services
• Flexible and scalable managed IP VPN service
Achieved through MPLS technology, properly applied MPLS/VPN functionality and VPNSC
management system.

• Service resilience
Resilient MPLS backbone, redundant route reflectors and the possibility of fully resilient connectivity
scenarios on access-layer (2CE-2PE) in all PoPs, are necessary building blocks for high availability
MPLS/VPN service.

• End-to-end Quality of Service


Achieved through the use of various Diffserv mechanisms: classification, marking, policing, queuing
and dropping. QoS is implemented on access layer and in the backbone.

• Internet Access for MPLS/VPN customers
Internet access from the MPLS/VPN is provided for customers with such a requirement. For security
reasons we recommend implementing the Internet connection only through a dedicated CE router and a
dedicated access-layer circuit (see the Internet Access for MPLS/VPN customers chapter for a detailed description).

• Security
Assuming that the MPLS core is secure, the MPLS/VPN solution offers the same level of security as
traditional Layer-2 VPN networks.

Deployment Guidelines

This chapter provides high-level deployment guidelines for the architecture proposed in the previous
chapter. It does not go into configuration-level details, but enough technical information is provided so
that configuration details can be derived in the LLD.

Physical Network Design


In this section, give high-level detail of Layer 1: for example, whether the backbone would be OC-192
or Gigabit Ethernet. You might also need to talk about the optical
infrastructure.

Layer-2 Transport Media


The following summarizes the physical layer transport in ST MPLS network.

• Inter-site connectivity
o POS STM-16 over DWDM is used to interconnect:
Core COs BA, BB and KE
7304 SIX router and P router in BA.
o POS STM-1 over SDH interconnects regional PoPs (PEs) with core COs (Ps)
• Intra-site connectivity
o Back-to-back POS STM-16 is used for connectivity between P and collocated IGW routers
in BA PoP
o Back-to-back GE connections are deployed for the following device pairs:
10008 (PE) -12410 (P) ; Bratislava, Banska Bystrica, Kosice
7600 (PE) -12410 (P) ; Bratislava
ERX (DSL) -12410 (P) ; Bratislava
10008 (PE) -12012 (P) ; Banska Bystrica
o Back-to-back POS STM-1 is used for the following links:
12406 (iGW) -12008 (iGW) ; Bratislava (2 x STM-1)
(PE) - (P) ; Any other collocated PEs in central offices
7204VXR (RR) -12xxx (P) ; BA & BB

o Switched FE connections are used to connect existing ST IP (CE, NAS) routers that are
cascaded behind new PEs. Layer 2 switches (Cisco 4503/3550-24) are used for port aggregation.

Central Office Bratislava [BA]


The chapters below depict the architecture of the three central offices and a typical implementation of a
regional PoP in ST MPLS network.

The Central Office in Bratislava is a major hub in the ST network because of the large concentration of the
customer base in that area. For this reason, ST decided to build the BA CO resiliently and with powerful routers. The
devices in the BA CO can be logically grouped into four layers: Peering, Core, Aggregation and Access.

Two Firesections
The Bratislava CO will be divided into two physically separated firesections (1 and 2). The main components
of the peering, core and aggregation layers will be deployed in different locations to achieve network resilience
in case of a major disaster. Customers can be dual-homed to PE routers in both firesections in order to
provide them with maximum redundancy.
The architecture of the BA CO is depicted in the following figure.

Figure 2 Architecture of CO BA

Peering Layer

Internet Gateway Routers (ba1-igw-1, ba2-igw-2)


Two routers (Cisco 12406 and Cisco 12008) will be installed in BA CO for IP connectivity between ST
MPLS network and:
• Downstream ISPs (e.g. a local ASP - Application Service Provider) that pay for transit service to ST –
these are in fact customers of ST.
• Upstream ISPs (e.g. Deutsche Telekom, UTA) that provide global Internet reachability for customers
of ST.

Each iGW will have a POS STM-16 back-to-back connection with a different P router. Interconnections
with ISPs can be either POS STM-1 or E3 leased lines.

Both iGWs are equipped with powerful route processors (primary and redundant) that can handle large
number of BGP sessions, and will have installed sufficient amount of memory to carry one or more copies
of full Internet routing table.

Back-to-back links between iGW routers

Both iGWs will inject a BGP default route towards the PE routers. A PE router will select the default route
based on the IGP distance to the originating iGW, and eventually send all Internet traffic to the closest iGW.
However, this iGW may not be the best exit point for a given Internet destination, so the packets would
have to be re-routed to the neighbouring iGW to be delivered to the upstream ISP. For this reason, two
POS STM-1 back-to-back links are installed between the iGWs. No other traffic (e.g. packets between two ST
PoPs) passes over these two links.

An alternative solution would be to download the full Internet routing table to every PE router, which can in turn
deliver the Internet traffic to the right iGW. This would result in more optimal traffic flows across the ST core,
and enable a “distributed” peering system, with the possibility of connecting ISP circuits in any PoP. Assuming
that BGP dampening is enabled on the border routers, and that the number of routes that can be accepted from
any ISP is limited1, the major drawback is the memory requirement (min. 256 MB) on all PE routers due to the
large number of routes in the global Internet routing table.
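The following is a minimal sketch of the two BGP neighbour settings discussed above: default-route origination from an iGW towards the rest of the network, and a prefix limit on an eBGP peering. The AS numbers, neighbour addresses and prefix limit are placeholders, not values from this design.

! placeholder ASN for the ST network
router bgp 65000
 ! advertise only a default route to an iBGP neighbour (e.g. a route reflector or PE)
 neighbor 10.255.255.1 remote-as 65000
 neighbor 10.255.255.1 default-originate
 !
 ! limit the number of prefixes accepted from an eBGP peering partner
 neighbor 192.0.2.1 remote-as 64512
 neighbor 192.0.2.1 maximum-prefix 1000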

Figure 3 HW configuration of ba2-igw-2

(Chassis layout diagram: Cisco 12406 with a POS STM-16 uplink to the P router, POS STM-1 and E3 line cards towards upstream/downstream ISPs, redundant GRPs, and CSC/SFC and alarm cards.)

1
It is a good practice to define the maximum number of prefixes that can be accepted from any eBGP peer. This is for
example to prevent the situation where a peering partner at SIX advertises the full Internet routes to ST.
Figure 4 HW configuration of ba1-igw-1
(Chassis layout diagram: Cisco 12008 with a POS STM-16 uplink to the P router, POS STM-1 line cards towards upstream/downstream ISPs, redundant GRPs, and CSC cards.)

SIX Internet Gateway Router (ba-six-1)


One 7304 router (ba-six-1) is collocated at the SIX premises, for mutual and free-of-charge exchange of
customer traffic between peering partners at the <Customer> Exchange Point. The ba-six-1 router is attached to the
SIX switch with a GE interface, and interconnected with the ST core router ba2-p-2 via a POS STM-16
connection.

Figure 5 HW configuration of ba-six-1

(Chassis layout diagram: Cisco 7304 with an NSE-100, a POS STM-16 uplink to the P router and a GE interface towards the peering partners at SIX.)

Resiliency in Peering Layer


It is recommended to terminate at least one upstream ISP connection on each iGW.
This will protect against failures of a single peering circuit, and/or a major failure of a single peering router
(either ST’s or that of an upstream ISP). Having two redundant Internet connections on separate routers
will also permit software and hardware upgrades on the iGWs without long downtimes.2

The two iGWs distribute BGP routes (default route and full Internet table if required) to other BGP
neighbors in ST network via two redundant route reflectors.

The Internet connectivity scheme with physically separated iGW routers protects against a failure or
major disaster in one of the Bratislava firesections. Internet connectivity will remain available through the
backup upstream ISP in the other firesection.

There is currently a single router installed at the SIX premises. If this router or the link between ba2-p-1 and the
SIX router fails, the direct connectivity with SIX participants will be lost. Nevertheless, this does not represent
a single point of failure, because the peering partners’ networks can still be reached across the upstream ISPs
during the failure3.

Core Layer
Explain how the core would be designed. State what platforms would be used.

Resiliency in Core Layer


Discuss how resiliency would be provided in the core

Aggregation Layer
Explain how the aggregation layer would be designed. State what platforms would be used.

Resiliency in Aggregation Layer


Discuss how resiliency would be provided in the aggregation layer

Access Layer
Explain how an access layer would be designed. You need to tell what platforms would be used.

Resiliency in Access Layer


Discuss how resiliency would be provided in the access layer

2
A short downtime will occur because of eBGP convergence throughout the Internet.
3
Most likely this would introduce higher RTT and jitter, and increased load on generally very expensive transit
connections with upstream ISPs.
Route Reflectors
Explain the design for the route reflectors and also what platforms would be used.

PoPs
Explain with diagrams what a typical PoP would look like in the <Customer Name> network.

Hardware/Software Release Table


The following table summarizes the IOS releases for different platforms in <Customer Name> MPLS
network.

Table 3 Software Release Table

Device    Role in <Customer Name> MPLS network    IOS release    Image Name

Logical Network Design

IGP Routing – (OSPF or ISIS)


The content in this section focuses on OSPF, but similar content needs to be developed for IS-IS if IS-IS is to
be used in the network.

OSPF is a link-state routing protocol. It is called as such because it sends link-state advertisements (LSAs)
to all the routers within the same hierarchical area. All OSPF routing information is passed within these
LSAs. After the routers receive that information, they run the SPF algorithm to calculate the shortest path to
each destination.

When an OSPF router powers up, all the routing protocol data structures are initialised and the process then
waits for the interfaces to become functional. Once the interfaces are functioning, the devices use the OSPF hello
protocol to establish neighbour relationships. Once the hello exchange has finished and the neighbour relationship
is established, hello packets are used as keepalives to identify which devices are active. When the link-state
databases of two neighbours are synchronised, they are said to be adjacent. Distribution of routing
information is only performed between adjacent routers.

Each router sends LSAs periodically and also when its state changes. The OSPF database contents
are compared with the received LSAs to identify possible topology changes.

The Role of OSPF in <Customer name> MPLS network


The <Customer name> MPLS network requires an underlying Interior Gateway Protocol (IGP) to be
enabled for the following reasons:
• BGP next-hop reachability.
• Fast convergence after failure of backbone node or core link.
• Shortest path routing across <Customer name> backbone.

<Customer name> is already running OSPF in its current network. Therefore <Customer name> is very
familiar with OSPF operation, and has gained a lot of experience in OSPF troubleshooting. <Customer
name> has therefore requested to preserve OSPF as the IGP in the MPLS network. The choice of
OSPF is a very good one as it is standardised, scales well and converges quickly.
If the customer was not running OSPF (or IS-IS), or if there are other reasons for the OSPF (or IS-IS)
deployment, explain those reasons here.

OSPF Areas
Single-area or multi-area OSPF would be implemented in the <Customer Network> network. This
decision is based on:
• Give reasons here. Also discuss in this section how you are going to number the OSPF areas.

OSPF Authentication
It is possible to authenticate the OSPF packets such that routers can participate in routing
domains based on predefined passwords. By default, a router uses a Null authentication, which
means that routing exchanges over a network are not authenticated. Two other authentication
methods exist: Simple password authentication and Message Digest authentication (md5).
Authentication does not need to be configured, but we strongly recommend it for security purposes.
We recommend MD5 as the authentication method since it provides higher
security than the plain-text authentication method.
Message Digest Authentication is a cryptographic authentication. A key (password) and key-id
are configured on each router. The router uses an algorithm based on the OSPF packet, the key,
and the key-id to generate a “message digest” that gets appended to the packet. Unlike the
simple authentication, the key is not exchanged over the wire. A non-decreasing sequence
number is also included in each OSPF packet to protect against replay attacks.
This method also allows for uninterrupted transitions between keys. This is helpful for
administrators who wish to change the OSPF password without disrupting communication. If an
interface is configured with a new key, the router will send multiple copies of the same packet,
each authenticated by different keys. The router will stop sending duplicate packets once it
detects that all of its neighbors have adopted the new key. Following are the commands used
for message digest authentication:
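The exact commands depend on the platform and IOS release; the following is an illustrative sketch only, in which the process number, interface, key-id and key are placeholders:

router ospf 1
 area 0 authentication message-digest
!
interface GigabitEthernet0/0
 ip ospf message-digest-key 1 md5 <key>

The same key-id and key must be configured on both ends of each link.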

OSPF Area Summarization


Without going into the details of summarization plans just lay out the guidelines for summarization that
would be followed in the LLD

OSPF Costs
• Explain any considerations kept in mind when deciding OSPF costs.

Designated and Backup Designated Routers


On a multi-access media such as Ethernet it is a good idea to force the designated router and backup
designated router to be routers that have more memory and greater processing power than the other routers
in the area. Under the default election scheme, each router has the default priority of 1; therefore the router
with the highest router ID (i.e. loopback IP address) becomes the designated router for the segment.

It is not mandatory to enforce DR selection on multi-access media segments with just two OSPF
speakers (e.g. GigE PE-P uplinks). Therefore these kinds of Fast/Gigabit Ethernet interfaces in the <Customer
Network> network are defined as OSPF point-to-point links to prevent the election of DR/BDR routers.
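As an illustration (the interface name is a placeholder), the network type would be changed on each such interface as follows:

interface GigabitEthernet0/1
 description PE-P uplink
 ip ospf network point-to-point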

Default Routes
If there are any default routes, explain how and where they are being injected.

OSPF Convergence
Resiliency and redundancy to circuit failure is provided by the convergence capabilities of OSPF
at layer 3. There are two components to OSPF routing convergence: detection of topology
changes and recalculation of routes.

Detection of topology changes is supported in two ways by OSPF. The first, and quickest, is a
failure or change of status on the physical interface, such as Loss of Carrier. The second is a
timeout of the OSPF hello timer. An OSPF neighbor is deemed to have failed if the time to wait
for a hello packet exceeds the dead timer, which defaults to four times the value of the hello
timer. On a Serial, Fast Ethernet or Gigabit Ethernet interface, the default hello timer is set to 10
seconds; therefore the dead timer is 40 seconds.

Recalculation of routes is done by each router after a failure has been detected. A link-state
advertisement (LSA) is sent to all routers in the OSPF area to signal a change in topology. This
causes all routers to recalculate all of their routes using the Dijkstra (SPF) algorithm. This is a
CPU intensive task, and a large network, with unreliable links, could cause a CPU overload.

When a link goes down and Layer 2 is not able to detect the failure, convergence in the core can
be improved by decreasing the value of the hello timer. The timer should not be set too low, as
this may cause phantom failures and hence unnecessary topology recalculations.

Remember that these timers are used to detect failures that are not at the physical level. For
example, carrier still exists but there is some sort of failure in the intermediate network.

Once a topology change has been detected, an LSA is generated and flooded to the rest of the devices
in the network. Recalculation of the routes will not occur until the SPF timer has expired. The
default value of this timer is 5 seconds. An SPF hold time is also used to delay consecutive SPF
calculations (to give the router some breathing space). The default for this value is 10 seconds. As
a result, the minimum time for the routes to converge in case of failure is always going to be more
than 5 seconds unless the SPF timers are tuned using OSPF throttle timers. With throttle timers it is
possible to schedule an SPF run right after flooding the LSA information, but this can potentially
cause instabilities in the network; for example, even flash congestion of a very short
duration could cause the link to be declared down and trigger an SPF run.

These timers will be left alone in the initial implementation, especially because in the next phase
of this project MPLS Traffic Engineering with the Fast Reroute (FRR) capability will be deployed.
Once FRR is implemented, tweaking the OSPF timers becomes less of a concern.

A keepalive timer is also associated with the interface; it will detect failures at a level lower
than OSPF. The default for this timer is 10 seconds; again this will be left at the default initially.

In the initial deployment of the Core network, all timers will be left at their default values as
shown below. These could be slowly lowered and behavior of the network monitored if faster
convergence is required.
If the timers are not left at their defaults, explain why they are being changed and also give the
values used and the configurations.

Table 4 OSPF Timer Default Values

Timer Default Value


ip ospf dead-interval 4 x hello interval (40 sec)

ip ospf hello-interval 10 sec

ip ospf retransmit-interval 5 sec

ip ospf transmit-delay 1 sec

timers throttle spf <spf-start> <spf-hold> <spf-max-wait> 5000 msec 10000 msec 10000 msec
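Should faster convergence be required later, the SPF throttle timers could be tuned under the OSPF process. The values below are purely illustrative and are not a recommendation for the <Customer Name> network:

router ospf 1
 timers throttle spf 10 100 5000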

OSPF Deployment Recommendations Summary for the <CUSTOMER NAME> network

List any additional recommendations for OSPF and summarize the main points.

Backbone Routing and Label Distribution


Protocols
Three protocols are required in the core to provide a functional MPLS network: OSPF,
LDP, and MP-BGP. OSPF provides IP connectivity amongst the various end points and has
already been discussed in the previous section. LDP is needed to distribute the necessary label
information required to establish the label switched paths between the PE routers. MPLS in
general depends on the IGP (in this case OSPF), along with Cisco Express Forwarding, to create the
necessary forwarding table. CEF, LDP, and MP-BGP will be explored in greater detail in this
section. Lastly, MP-BGP is needed to exchange the VPN routing information between VPN
customer sites. On the PE routers, VPN Customer routes are kept in separate routing tables,
known as VPN Routing and Forwarding tables (VRFs). Routes in the global routing table are not
reachable by routes in VRFs or vice versa.

Cisco Express Forwarding (CEF) Switching


Cisco Express Forwarding (CEF) is an advanced Layer 3 IP switching technology. CEF optimises
network performance and scalability for networks with large and dynamic traffic patterns by
essentially distilling the routing information into a forwarding database known as the Forwarding
Information Base (FIB). CEF switching is a prerequisite for MPLS to function properly. Therefore
CEF must be configured on all the PE and P devices in the <CUSTOMER NAME> network.
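Enabling CEF is a single global command; on platforms that support distributed switching, the distributed form would be used. For example:

ip cef
! on platforms with distributed forwarding:
ip cef distributed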

Label Distribution Protocol (LDP)


LDP is responsible for distributing the labels for the IP destination prefixes in the MPLS network. Labels
are assigned to every IGP-learnt prefix that is in the global routing table. This global routing
table is created and maintained by the IGP, which in the <CUSTOMER NAME> network is OSPF.
Essentially all IP destination prefixes will be either loopback or circuit interface
addresses. No customer IP addresses will be maintained in the global routing table.
The core P routers only have an understanding of labels that are associated with IP destinations
in the OSPF internal routing table. They have no knowledge of labels related to routes in
customer VPNs, as these are created and distributed by the MP-BGP process between PEs only.
Therefore, in the P network, a labelled IP packet is switched to the next hop based only on the
outer label (the one allocated by LDP from the global routing table) until it finally reaches its
destination. In an MPLS-VPN network, this final destination will always be the egress PE that
originated the VPN route.

Interface MTU size


As mentioned earlier, two levels of labels are needed to deliver MPLS-VPN services. The first
level label is distributed by the LDP protocol, whilst the second level label is created by MP-BGP
for VPN distribution as discussed in the next section.

When these two labels are placed into the frame they increase the frame size by 8 bytes (4
bytes per label). This can be problematic, particularly on Ethernet interfaces, which have a
default Maximum Transmission Unit of 1500 bytes; larger frames will be dropped if the packets
arrive with the do-not-fragment bit set. Note that with plain Ethernet encapsulation the actual
Layer 2 frame size is 1518, while with dot1q encapsulation the actual Layer 2 frame size is 1522.
With two-label imposition, the actual Layer 2 frame size becomes 1526 (or 1530 with
dot1q encapsulation) as shown in Figure 6.

Figure 6 Layer 2 Frame with 2 MPLS Labels

However, it is possible to increase the MPLS MTU on an interface to accommodate the switching
of packets bigger than 1500 bytes. The default MTU on Serial and POS interfaces is 4470 bytes, so a
frame increase of 8 bytes is not a big concern on these interfaces. This will allow an MPLS frame
with up to 4 labels (16 bytes) over the link. If any Ethernet switches are added into the core
carrying MPLS frames, they must also have their MTU increased.
The following note is just an example; similar reasoning needs to be provided.

Note: 4 labels have been allowed to cater for future services on the network such as traffic
engineering & FRR etc. In general, each additional service may require an increase in the label
stack from 2 to something greater.
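Purely as an illustration (the interface name is a placeholder, and any intermediate Layer 2 devices must support the larger frames), the MPLS MTU on a core-facing Gigabit Ethernet interface could be raised to carry four labels:

interface GigabitEthernet0/2
 mpls mtu 1516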

LDP Design Recommendations
In this section, list all the design recommendations and configurations related to the customer's network.
The following are just examples and may or may not apply to your customer; an illustrative configuration
sketch follows the list.
• For proper operation of MPLS, LDP chooses an IP address as its router-id. It is important that
the IP address chosen as the router-id is routable, otherwise LDP will not be able to form the neighbor
relationship with the adjacent nodes.

• It is recommended to enable logging of LDP neighbor state changes.


• As with OSPF, MD5-based authentication could be enabled on each link where LDP will
be used, to prevent DoS attacks and to help catch configuration errors.
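A minimal configuration sketch covering these recommendations is shown below; interface names, addresses and the password are placeholders, and the exact commands should be verified against the chosen IOS release:

mpls label protocol ldp
mpls ldp router-id Loopback0 force
mpls ldp logging neighbor-changes
mpls ldp neighbor 10.0.0.2 password <ldp-password>
!
interface GigabitEthernet0/2
 mpls ip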

Network Services

MPLS/VPN Services
This section describes how the VPN services will be offered by <Customer Name> using the MPLS-VPN
concept.

MPLS-VPN
In MPLS VPN terminology, the term PE (Provider Edge) refers to the provider edge router to which
the CE (Customer Edge) connects and where the VPNs are created. Each VPN is associated with one
or more VPN routing / forwarding instances (VRFs). A VRF consists of an IP routing table, a
derived Cisco Express Forwarding (CEF) table, a set of interfaces that use the forwarding table,
and a set of rules and routing protocol parameters that control the information that is included
into the routing table.
A one-to-one relationship does not necessarily exist between customer sites and VPNs. A given
site can be a member of multiple VPNs. A customer site's VRF contains all the routes available
to the site from the VPNs of which it is a member.
Packet forwarding information is stored in the IP routing table and the CEF table for each VRF. A
separate set of routing and CEF tables is maintained for each VRF. These tables prevent
information from being forwarded outside a VPN, and also prevent packets that are outside a
VPN from being forwarded to a router within the VPN.
All MPLS VPN configurations are done at the PE router. The rest of the network merely switches
labels and is not aware of the VPN structure or logical separation of customers. The core
network is referred to as the P network in an MPLS VPN.
In order to enable MPLS VPN there are several implementation steps:
• MP-iBGP Implementation
• VPN Routing & Forwarding Table Definitions
• PE to CE Routing Definition
The following sections discuss each of these areas in more detail and provide recommendations and design guidelines for
<CUSTOMER NAME> network.
MP-iBGP4 (Multi-protocol iBGP) Implementation
BGP is one of the vital components in enabling MPLS VPN Service. It is used to propagate VPN
routing information. Various BGP attributes and extensions are used to distribute the VPN
routes.

Distribution of VPN Routing Information

A service provider edge (PE) router can learn an IP prefix from a customer edge (CE) router via
either static configuration or dynamic routing protocols. In the most basic configuration, static routes can be
configured on both the CE and PE routers. Alternatively, dynamic routing
protocols, including BGP, RIP, EIGRP or OSPF, can be used to share IP prefix information between
the provider and customer networks.
The IP prefix is a member of the IPv4 address family. After it learns the IP prefix, the PE
converts it into a VPN-IPv4 prefix by combining it with an 8-byte route distinguisher (RD). The
generated prefix is a member of the VPN-IPv4 address family. It serves to uniquely identify the
customer address, even if the customer site is using globally non-unique (unregistered private)
IP addresses.
The route distinguisher used to generate the VPN-IPv4 prefix is specified by a configuration
command associated with the VRF on the PE router.
BGP distributes reachability information for VPN-IPv4 prefixes for each VPN. Since these are not
IPv4 addresses, BGP provides Multi-Protocol extensions (see RFC 2283, Multiprotocol Extensions
for BGP-4) which defines support for address families other than IPv4 and allows the distribution
of these VPN-IPv4 routes. It propagates VPNv4 reachability information among the PE (or RR)
routers only. The reachability information for a given VPN is propagated only to other members
of that VPN. The BGP multi-protocol extensions identify the valid recipients for VPN routing
information. All the members of the VPN learn routes from the other members, enabling them to
communicate with each other. The entire operation of distributing the VPN routes is illustrated
in Figure 7, MP-BGP VPN Route Distribution example.

Customize the following figure to use customer naming convention

Figure 7 MP-BGP VPN Route Distribution example

BGP communication takes place at two levels: within IP domains, known as autonomous
systems (interior BGP or IBGP) and between autonomous systems (external BGP or EBGP). PE-
PE or PE-RR (route reflector) sessions are IBGP sessions, and PE-CE sessions are EBGP sessions.
In addition, a PE router binds a label to each customer prefix learned from a CE router and
includes the label in the network reachability information for the prefix that it advertises to
other PE routers. When a PE router forwards a packet received from a CE router across the
provider network it labels the packet with the label learned from the destination PE router.
When the destination PE router receives the labelled packet, it performs a lookup in the
corresponding VRF, pops the label and uses it to direct the packet to the correct CE router.
Use of VPNv4 Route Reflectors


In this section, discuss the Route Reflector design that would be implemented as part of the proposed
architecture. Below is some content as an example.
The ability of route reflectors to adequately cater for all PEs in the network is a function of the
number of VPNv4 routes the RR has to hold, the number of PE peerings and the frequency of
churn (VPNv4 routes being advertised and withdrawn). When RRs are used to peer the PEs in an
MPLS/BGP network, the RRs will hold all the VPNv4 routes advertised by all the PEs. In other
words, every route belonging to each customer network must be held on the RR for distribution
to all other PEs or RRs. Scalability problems could arise if the number of VPN routes were very
large, as the RRs could potentially exhaust their memory resources.
Figure 8, VPN route distribution using partitioned RRs, shows one possible solution to
address this problem and make the MPLS VPN deployment scalable: use route reflectors, but
partition them in such a way that each partition carries routes for only a subset of the VPNs
provided by the <Customer Name> network. Thus, no single route reflector is required to
maintain all routes for all the VPNs.

The figure below is for a particular customer. In your HLD you should use naming convention
used by your customer

Figure 8 VPN route distribution using partitioned RRs

The mechanism for partitioning RRs is via the route-target, using a BGP command called bgp rr-
group. With this command, each RR will only hold routes that match the specified route-targets.
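The sketch below is only indicative of how such a partition might be expressed; the rr-group references an extended community list of the route-targets that this RR should retain, and the syntax and availability should be verified against the IOS release selected for the route reflectors:

ip extcommunity-list 10 permit rt 23756:100
ip extcommunity-list 10 permit rt 23756:200
!
router bgp 23756
 address-family vpnv4
  bgp rr-group 10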

If RRs are to be partitioned, several design issues must be considered in the <CUSTOMER NAME> network:

Location of Route Reflectors – ideally, reflectors should be deployed in different
physical locations so that a single failure would not impact operations.

Partitioning of Route Reflectors


In this section, discuss if and why we need partitioning of RRs, and also give some design
guidelines so that the RR configurations can be derived in the LLD.

Route Reflector redundancy – There would need to be at least two route reflectors holding
the same information so that, in the event of a failure of one, the other can still provide VPN route
information, as shown in Figure 9, Route Reflector Redundancy in the <Customer Name>
Networks.

The figure below is for a particular customer. In your HLD you should use naming convention
used by your customer

Figure 9 Route Reflector Redundancy in the <Customer Name> Networks

There are a total of <Put the actual number of RRs here> RRs in the <Customer Name> network.
Each RR is a <Equipment name with the memory>. We recommend deploying the RRs partitioned
into two groups (this may change with some customers) with two RRs in each group in the
<Customer Name> network. Each group of RRs can be assigned to serve a few regions, or the partitioning
can be based on the route-targets that each RR will serve in the <Customer Name> network.
This way each group of RRs will serve only a certain number of VPN customers and carries only
a subset of routes instead of carrying the routes for all the customers. The PE routers would then
connect to the two RRs in the corresponding group for the VPN information they require, which
would cut down the overhead of each RR holding all routes and distributing all VPN routes to all
peers. Doing this would provide <CUSTOMER NAME> with a scalable solution as the network
grows. Alternatively, a full mesh can be created between the route reflectors if partitioning is not desired
at this time.
In addition, it is recommended to configure the two route reflectors within each group with
different cluster IDs, which otherwise may create issues if the iBGP sessions between PE and RR
fail.
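For example, the two RRs of a group would simply be configured with distinct cluster IDs (the values are placeholders):

! RR1
router bgp 23756
 bgp cluster-id 0.0.0.1
! RR2
router bgp 23756
 bgp cluster-id 0.0.0.2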

The figure below is for a particular customer. In your HLD you should use naming convention
used by your customer

Figure 10 Redundant Route Reflectors with same cluster-id.

The paragraph below is for a particular customer. In your HLD you should use naming convention
used by your customer

In the above example, if the iBGP sessions between the JRC edge router and RR2 and between the KMR edge router and RR1 fail, the
VPNv4 routes received by RR1 will be forwarded to RR2 but the updates will be rejected due to the same cluster ID. It is very
unlikely that such a double failure will occur in the network, but as a best practice it is recommended to place the two RRs in
different clusters. By default, the RR cluster ID is chosen as the BGP router ID, but it is configurable.

Autonomous System Number


An autonomous system (AS) number is required for MP-BGP peerings. By convention this value
is also used in Route Distinguishers to create VPN-IPv4 addresses and in route-targets, although it
is not necessary for them to be the same as the AS number. <CUSTOMER NAME> will be using <Customer’s
AS number>.

MP-iBGP Authentication
The Cisco implementation of BGP allows for MD5 authentication between BGP peers. This
authentication provides some protection against accidental or malicious BGP peering in the
network. It is possible to configure a unique password for every peer; however, this may be
administratively difficult to manage, particularly for eBGP links. Hence, a single password for all
internal peerings is recommended.
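For example (the password is a placeholder):

router bgp 23756
 neighbor 10.0.0.11 password <ibgp-password>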

Use of BGP Peer-groups


BGP peer-groups provide a way to group individual BGP peers with common policies, to enable efficient update
calculation and to simplify configuration. This method of grouping neighbors together for BGP update message
generation reduces the amount of system processing resources needed to process the routing table. This method,
however, has the following limitations:
• All neighbors that shared the same peer-group configuration also had to share the
same outbound routing policies.

• All neighbors had to belong to the same peer-group and address family; neighbors
configured in different peer-groups could not belong to different address families.
These limitations existed to balance optimal update generation and replication against
peer-group configuration. These limitations also caused the network operator to
configure smaller peer-groups, which reduced the efficiency of update message
generation.
The introduction of the BGP Dynamic Update Peer-Groups feature separates BGP update
generation from peer-group configuration. The BGP Dynamic Update Peer-Groups feature
introduces an algorithm that dynamically calculates BGP update-group membership
based on outbound routing policies. This feature does not require any configuration by
the network operator. Optimal BGP update message generation occurs automatically and
independently. BGP neighbor configuration is no longer restricted by outbound routing
policies, and update-groups can belong to different address families.
As dynamic peer-groups take care of the update generation, simplification of the configuration can be achieved
using either standard peer-group configuration or peer-templates. We therefore recommend the dynamic peer-
groups (for update generation efficiency) and standard peer-group configuration for the <CUSTOMER NAME>
network for the MPLS VPN deployment.
You need to make sure you clearly articulate what is being recommended for this specific customer
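A simplified sketch of a standard peer-group carrying the common session parameters towards the route reflectors is shown below; names and addresses are placeholders, and peer-templates could be used instead:

router bgp 23756
 neighbor RR-PEERS peer-group
 neighbor RR-PEERS remote-as 23756
 neighbor RR-PEERS update-source Loopback0
 neighbor 10.0.0.11 peer-group RR-PEERS
 neighbor 10.0.0.12 peer-group RR-PEERS
 !
 address-family vpnv4
  neighbor 10.0.0.11 activate
  neighbor 10.0.0.11 send-community extended
  neighbor 10.0.0.12 activate
  neighbor 10.0.0.12 send-community extended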

Use of Path MTU discovery


Every TCP session has a limit in terms of how much data it can transport in a single packet. This limit is defined as the
Maximum Segment Size (MSS) and is 536 bytes by default on the PE-routers. This means TCP will take all of the data
in a transmit queue and break it up into 536 byte chunks before passing packets down to the IP layer. Using a MSS of
536 bytes ensures that the packet will not be fragmented before it gets to its destination because most links have a MTU
of at least 1500 bytes.

The problem is that using such a small MSS value creates a large amount of TCP/IP overhead, especially when TCP has
a lot of data to transport like it does with BGP in the MPLS VPN environment. The solution is to dynamically determine
how large the MSS value can be without creating packets that will need to be fragmented. This is accomplished by
enabling "ip tcp path-mtu-discovery" (a.k.a. PMTU). PMTU allows TCP to determine the smallest MTU size among all
links between the ends of a TCP session. TCP will then use this MTU value minus room for the IP and TCP headers, as
the MSS for the session. If a TCP session only traverses Ethernet segments then the MSS will be 1460 bytes. If it only
traverses POS segments then the MSS will be 4430 bytes. The increase in MSS from 536 to 1460 or 4430 bytes reduces
TCP/IP overhead, which helps BGP converge faster.

Guidelines for Creating VRF Definitions


In this section we won't delve into the configuration details of the VRF definitions, but rather we'll
provide some recommendations to keep in mind while doing the actual configurations.

Some sample recommendations are provided below. Please customize them according to your
customer's needs.

Route-distinguisher Allocation schemes and Recommendations


There are three different approaches to allocate route-distinguishers for a given VPN in the MPLS VPN network.

Approach #1 - Unique RD for each VPN

A unique RD value can be assigned for each VPN. For example, if there are three sites belonging
to customer A connected to three different PEs, the same RD value, e.g. <Customer AS #>:100,
can be assigned at each location as shown in Figure 11. Though this looks like a simple and
straightforward approach, this option prevents offering load sharing to the
VPN client in the presence of route reflectors, which is the case in the <CUSTOMER NAME>
network. If load sharing is not a requirement, then this scheme may be useful (as it reduces the
memory requirements on the PE routers).

The figure below is for a particular customer. In your HLD you should use naming convention
used by your customer

Figure 11 Unique RD per each VPN

Approach#2 - Unique RD per PE for each VPN

An alternative to the first approach is to assign a unique RD per PE for each VPN. In other words,
for a given VPN, a unique RD value will be assigned on each PE. This is illustrated in Figure
12. Note that, with this approach, routes received from multiple interfaces belonging to the
same VPN on a particular PE will share the same RD value; however, each PE will assign a
unique RD. The main advantage of this approach is that it allows iBGP load balancing. The
drawback of this scheme is that extra memory is required to hold the additional paths on the
PE routers. This is the recommended scheme where route reflectors are deployed.

The figure below is for a particular customer. In your HLD you should use naming convention
used by your customer

Figure 12 Unique RD per site for each VPN

Approach# 3 - Unique RD per PE per interface for each VPN


Approaches 1 and 2 can be used for simple or overlapping VPNs requiring any-to-any connectivity. However,
implementing topologies such as hub and spoke is not easy using approach 1 or 2. For central-services or hub-and-spoke
topologies, a PE may have more than one interface belonging to the same VPN, but the connectivity requirement on one
interface is different from that on the other interfaces. Approach 3 offers a solution to this problem by assigning a unique RD for
each VRF per interface. The main advantage of this approach is that it uniquely identifies the site that has originated a route
and enables the implementation of complex topologies. However, this capability comes at a relatively higher cost in
terms of memory consumption and the number of VRFs to be configured. Because of these issues, this method is not
recommended for simple VPNs. Moreover, BGP communities and Site-of-Origin (SOO) may be used to identify where a
particular route originated. This scheme is only recommended for hub-and-spoke scenarios where multiple spoke sites are
connected to the same PE router.

For the <CUSTOMER NAME> network, we recommend using scheme <Put here the scheme number and why it is being
used>.

VPN Route Target Communities


The distribution of VPN routing information is controlled through the use of VPN route target
communities, implemented by border gateway protocol (BGP) extended communities.
Distribution of VPN routing information works as follows:

When a VPN route learned from a CE router is injected into BGP, a list of VPN route-target
extended community attributes is associated with it. Typically the list of route-target
community values is taken from an export list of route-targets associated with the VRF from which
the route was learned.
An import list of route target extended communities is associated with each VRF. The import
list defines route target extended community attributes a route must have for the route to be
imported into the VRF. For example, if the import list for a particular VRF includes route target
communities A, B, and C, then any VPN route that carries any of those route target extended
communities --- A, B, or C --- is imported into the VRF.
The import and export values for route-targets can match the RD value of the VRF, although
they do not need to be the same. The RD uniquely identifies customer IPv4 routes, while the route-
targets define the import and export policies for routes into and out of the VRF. Using the same route-
target and RD values simplifies configuration and management.
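As an illustration, a VRF for a simple any-to-any VPN could be defined as follows; the names and values are placeholders, and the RD scheme should follow whichever approach is selected above:

ip vrf VPN_CUSTOMER_A
 rd 23756:100
 route-target export 23756:100
 route-target import 23756:100
!
interface GigabitEthernet0/3.100
 encapsulation dot1Q 100
 ip vrf forwarding VPN_CUSTOMER_A
 ip address 192.168.1.1 255.255.255.252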


VPN Topologies

Full Mesh
An Intranet VPN is the simplest way of deploying a VPN using MPLS. It essentially consists of all sites
of the same customer directly peering with each other. From the customer's perspective, all of its sites
appear one hop away from each other. In reality a customer's IP packet may transit more than one core
node, though the customer will not see this.
Each of the sites exchanges VRF routes directly with its peers. Note that only routes that originate from
that VRF are exchanged. The result is that the customer's VRF table in each PE holds an identical set of
routes, and each customer route is reachable via the next-hop PE.

Hub and Spoke


One of the advantages of MPLS VPNs is the full peering that is available between customer sites.
However this is not always the ideal situation for some customers who may require a hub and spoke
topology where all traffic between spokes must pass through the hub. The hub will have the knowledge
of every destination, whilst the spoke will send traffic to the hub site for any destination. The hub
therefore is the central transit point between spoke sites. It can then control access between spoke sites.
Hub and spoke topologies require a special configuration. The hub site requires two connections
(sub-interfaces) to the PE: one will be used to import all spoke routes into the hub, while the other will be
used to export hub routes back to the spokes.
This simple example concentrates on using dynamic routing for the distribution of all routing
information. Static routing could also be used just as effectively, by placing a default route at the hub
which would then be imported into all the spoke VRFs. Each spoke VRF would then only need a single
route to get to the hub.

Extranets

Customers with Unique Addresses


The creation of an Extranet is simply a matter of importing/exporting routes between the VRFs of two
or more customers. If IP address overlap between customers is not an issue, that is, the IP address space
is unique between customers, then routes can be imported directly between the VPN_<CUSTOMER>
VRF tables.

Customers with Overlapping Addresses


If customers wishing to participate in an Extranet share the same address space, or there is the possibility
that at some stage new Extranet members will cause addressing problems, then address translation to
unique addresses (provided by the service provider) must occur before traffic is allowed into the
Extranet.

Extranet NAT at a Common Service Point
NAT can be done at a central point managed by the Service Provider. Each customer will have a physically
separate NAT gateway which is connected to a VRF in their respective Intranet VPNs. The VRF
connected to the NAT gateway will have the routes of the translated addresses from the other customer
injected into it. So a route is injected into the VRF of Customer 2 Site B and a separate route is injected
into the VRF of Customer 1 Site A. This way each of the customers can participate in the Extranet via
the two NAT gateways. The NAT gateways could also be firewalls with a NAT function, so that
additional security could be provided between the Extranet customers.

Extranet NAT at Customer Edge


NAT can also be done at the customer edge. The example used here is that the CE can only connect to
the PE using a single 10BaseT/FL interface, so Extranet/NAT and Intranet/non-NAT traffic must travel
over the same interface to conserve hardware resources at the PE. In most situations the PE/CE
connection would be over a physical interface of some sort which could support sub-interfaces (NAT and
non-NAT).
If the CEs were owned by the customer, then the customer would be responsible for creating the translations on
the interfaces going to the Extranet VRF, and for agreeing on the addresses to be used. The Service Provider
would be responsible for creating the VRFs and injecting the translated routes, if static routing was
being used. A more desirable situation would be for the Service Provider to provide a managed router service.
This would mean the Service Provider would have control all the way to the CE, and could provide an
end-to-end managed NAT service between the CEs. Both customers' VRF tables will have the translated
routes injected into them so that packets can be routed in the Extranet. A special route-map and virtual
interface on each of the CE NAT routers prevent any translation occurring for traffic destined to their
own Intranets. Intranet traffic would be classified as any packet with a destination address in the
customer intranet space.
Since translations are being done at both sites into the Extranet, static NAT translations would be
required for each host address that requires Extranet communication. If dynamic translation were done,
there would be no way of knowing what NAT address was allocated to each inside host.

Controlling route exports in extranets


Route-maps are very useful if you want to avoid populating the customer VRFs with unnecessary
Extranet routes. This assists in conserving memory and provides a basic form of security (no route, no
access). Each customer VRF will have its standard route-targets to import/export routes for its
Intranet. These are the first two route-target commands shown in the configuration for each VRF.
Next, each VRF has an export map defined. This export map will set a specific route-target value
(referred to as an extended community attribute in BGP) for the Extranet route defined. By using
route-targets, we can selectively import only the routes the CE needs to participate in an Extranet.
Individual host addresses could also be explicitly specified and exported using route-maps.
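A hypothetical sketch of such an export map is shown below; the VRF name, prefix and route-target values are placeholders:

ip vrf VPN_CUSTOMER_A
 rd 23756:100
 route-target export 23756:100
 route-target import 23756:100
 route-target import 23756:900
 export map EXTRANET-EXPORT
!
access-list 10 permit 192.168.10.0 0.0.0.255
!
route-map EXTRANET-EXPORT permit 10
 match ip address 10
 set extcommunity rt 23756:900 additive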

PE-CE Routing Implementation

A VPN Routing and Forwarding table (VRF) is associated with each CE interface on the PE and contains the routing
information associated with that site. A PE-CE routing protocol is necessary so that the PE table can be populated with
the customer’s routes.

The following routing protocols are available to operate between the PE and CE in a MPLS VPN environment;
• Static
• RIPv2
• eBGP
• OSPF
• EIGRP

BGP, RIP, and EIGRP protocols have been modified to understand VRF tables by the use of a feature called address
families. Address families define the VRF contexts that the routing protocol will operate in.

Note that the routing protocol that operates between the PE and CE is independent of any IGP that may run inside the VPN
customer's network. Routes learnt at the local VPN site by the customer IGP will be redistributed into the PE-CE routing
protocol to populate the VRF. It is important to understand that no special MPLS configuration is needed at the
Customer Edge; only standard IOS routing commands are required.

<CUSTOMER NAME> is planning to use <Put the name of routing protocols that the customer would use> for the PE-
CE routing protocols

Connectivity via Static Routing


It is recommended to use static routing if the customer is small or is a stub site and the IP addressing of devices is
unlikely to change. The CE router would have a default route pointing in the direction of the MPLS cloud, whilst the PE
would require a similar static route inserted into the appropriate VRF table associated with the customer interface. In
order to tell the remote VPN sites (or PE routers) about the local VPN routes, these customer-specific VRF static routes on
each PE router need to be redistributed into the customer-specific BGP address family.
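An illustrative sketch, in which the prefix, next-hop and VRF name are placeholders:

ip route vrf VPN_CUSTOMER_A 192.168.10.0 255.255.255.0 <CE WAN address>
!
router bgp 23756
 address-family ipv4 vrf VPN_CUSTOMER_A
  redistribute static
  redistribute connected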

Routing Stability
With static routing, if the PE-CE link fails, the static route associated with the interface will be removed from the routing
table. In the case of the PE, this will cause an MP-iBGP routing update to be forwarded to all other PE peers. To prevent
such behavior, the keyword “permanent” can be appended when configuring the static route. This will cause the static
route to remain in the routing table regardless of the interface status. This obviously reduces the BGP update messages
and improves VPN route convergence; however, the improvement comes at the cost of unnecessary backbone
bandwidth utilization, because packets will be forwarded through the core all the way to the remote PE and
only then dropped if the directly connected link is down.

RIPv2 configuration (PE to CE)


RIPv2 could be used as a PE-CE routing protocol and is included here as an example. RIPv2 is a distance vector protocol
and will periodically send the whole routing table to each neighbor to maintain synchronisation of routes. For managed
CE routers, dual homing, or more sophisticated routing policies, eBGP should be used as the PE-CE protocol.

EBGP configuration (PE to CE)


eBGP would be the most appropriate protocol to use if the CE is dual-homed to multiple PEs or extensive
policy routing features are needed. By using eBGP between the PE and CE, routing loops can be avoided using various
mechanisms within BGP. No routing information is lost, as BGP (either eBGP or MP-iBGP) is used along all the paths,
i.e. between PE-CE and PE-PE. <CUSTOMER NAME> is planning to use eBGP to inject routes from larger customers.
eBGP also has the ability to automatically prevent routing loops.

Configuration at the PE

Unique AS per customer site


The example in this section shows the BGP configuration for connecting CEs from one customer, each of which uses a
unique AS.
Figure 13 shows a number of CE networks, each with a different AS number. Therefore, if the network at CE A wished to talk to
the network at CE B, it would have to pass via the MPLS-VPN core and the AS_PATH followed would be 23756 65001.
The AS number 23756 will appear in the AS_PATH as the CE packet transits the <CUSTOMER NAME> core.

In this scenario, if one (or more) of the CEs were dual-homed, routing loops would be avoided due to the standard AS-
path check done on routes received by the CE from the PE.

The figure below and the above last two paragraphs are for a particular customer. In your HLD
you should use naming convention used by your customer

Figure 13 PE-CE eBGP with unique AS

The eBGP configuration for the PE-CE link is shown in the following diagram.

Single AS for all customer sites


There may be occasions where customers wish to use the same AS number at all their sites. This would be typical in an
existing BGP customer network where the customer is migrating to an MPLS-VPN network and does not want to have to
change their BGP configurations.

The figure below is for a particular customer. In your HLD you should use naming convention
used by your customer

Figure 14 PE-CE eBGP with single network wide AS

As shown in Figure 14, CE B rejects the routes coming from CE A when it sees its own AS number in the BGP AS
path. This is the standard BGP loop-prevention mechanism. As a result, CE B will not be able to communicate with CE A.

AS-Override
To solve this problem, the PE can be instructed to override the customer's AS number before forwarding the BGP update
to the customer. This can be achieved by using the BGP neighbor "as-override" configuration command (an illustrative
configuration follows the list below). With as-override configured, the PE does the following:

• If the last ASN in the AS_PATH is equal to the neighboring one, it is replaced by the
provider ASN

• If the last ASN has multiple occurrences (due to AS_PATH prepending), all the occurrences are
replaced with the provider ASN value

• After this operation, normal eBGP operation will occur and the provider AS will be added
to the AS_PATH
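An illustrative sketch of as-override applied to a CE neighbor within a VRF address family (addresses, AS numbers and the VRF name are placeholders):

router bgp 23756
 address-family ipv4 vrf VPN_CUSTOMER_A
  neighbor 192.168.1.2 remote-as 65001
  neighbor 192.168.1.2 activate
  neighbor 192.168.1.2 as-override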

Site-of-Origin
By enabling the as-override feature, loop detection using the AS_PATH is disabled. This obviously will cause problems if
the CE is dual-homed, as is the case for CE B in Figure 14. A BGP extended community attribute, referred to as the Site-
of-Origin (SOO), addresses this issue.

The SOO prevents routing loops when a site is multi-homed and the as-override feature is also being used. This is
achieved by identifying each customer site with a unique SOO. The SOO, similar to the route-target, is a BGP extended
community and is denoted in the same format as a route-target.

All routes originating from a customer site are identified with a SOO by the eBGP process on ingress to the PE. If those
routes for some reason end up back at the originating PE, they will not be re-advertised to the CE as the SOO will match
that of the site.

Note that a site may consist of many routers, each containing the same routing information. If several of these routers are
connected to the MPLS-VPN backbone as CEs, they will still use the same SOO. Only when the sites are different will
a different SOO be used.
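One common way of applying the SOO on ingress from the CE is via an inbound route-map, as sketched below; the values are placeholders, and some IOS releases also offer a per-neighbor SOO option:

route-map SET-SOO permit 10
 set extcommunity soo 23756:2001
!
router bgp 23756
 address-family ipv4 vrf VPN_CUSTOMER_A
  neighbor 192.168.1.2 route-map SET-SOO in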

Routing Stability
The eBGP route dampening feature can control flapping routes from the CE. The VRF "maximum routes" command
described in the following section and the BGP "neighbor x.x.x.x maximum-prefix" command allow limiting the
number of routes installed in the VRF and redistributed into MP-iBGP.

Controlling number of VRF routes


It is possible that an excessive number of routes gets distributed into the VRF due to some problem in the customer
network. In the MPLS VPN network, multiple customers connect to the same provider edge router; therefore it is very
important to protect resources such as memory and CPU on the PE routers.

If the PE-CE protocol is BGP, the number of routes received from the CE can be controlled at each site by using the
maximum-prefix command.
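Both mechanisms are sketched below; the limits and names are placeholders that must be sized per customer:

ip vrf VPN_CUSTOMER_A
 maximum routes 1000 80
!
router bgp 23756
 address-family ipv4 vrf VPN_CUSTOMER_A
  neighbor 192.168.1.2 maximum-prefix 500 80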

Additional MPLS VPN Services

Internet Access for MPLS/VPN customers

There are two basic design models for combining Internet access with MPLS/VPN services.
• Internet access is offered through global routing on the PE routers. There are two implementation
options.
o The first is to implement a packet-leaking shortcut between a VRF and the global routing
table. This option has a number of drawbacks and must be avoided.
o The second implementation option is to use separate physical or logical interfaces for VPN
and for Internet access. The physical or logical interface meant for Internet access will be placed
in the global routing table. Ideally, the Internet interface (also called the IPv4 link) will be
implemented on a separate CE router, which permits placing the firewall at the customer site.
• Internet access is offered through yet another VPN. This is called the Internet VPN (with an associated
Internet VRF). This solution has the advantage that the provider's backbone is isolated from the
Internet, resulting in improved security. A drawback is that full Internet routing cannot be implemented
because of scalability problems; this is therefore not the recommended solution for <Customer Name>.

Separate CEs for Internet Access and VPN Access
From the point of view of the VPN customer, the "separate CE" design model maps ideally onto the
situation where the VPN customer wants centralised and firewalled access to the Internet. The customer-
managed firewall can provide NAT services between the private VPN addressing and the public Internet
addressing. The central customer-site firewall gives the customer the ability to control security and Internet
service policies. A drawback is that all the Internet traffic must flow through a central site.

For example, a large bank with hundreds of branches would not want to implement Internet access directly
from each of the branches, as this would imply management of strict security policies at every site
(difficult and expensive). The centralised FW approach with two CE routers is a more appropriate solution.

The fig below is for a particular customer. In your HLD you should use customer specific figs

Figure 15 Internet Access from a VPN using separate CEs
(Figure: regional sites in the MPLS network forward Internet-bound traffic to the central site; a default route is
injected into the VPN towards the central site, where CE1 connects to the VRF_RED (VPNv4) interface on the PE,
a firewall sits between CE1 and CE2, and CE2 connects to the global routing table (IPv4) interface towards PE3
and the Internet.)

It is worth mentioning that a default static route will be injected into the VPN and used by the regional sites, but the
default route cannot be used for VPN traffic at the central site. In the drawing above, CE2 will be
configured with a default route pointing to PE3 via the IPv4 interface. For this reason, CE1 (and CE2)
have to have all the VPN routes in the routing table.

The central site shall learn the VPN routes dynamically with BGP4 or RIPv2 between CE1 and PE3. This is the
recommended approach as it allows greater flexibility and redundancy. For example, the customer may want
to implement two VPN CEs in the central site to improve service availability.

In case of a small number of regional prefixes, or if all regional prefixes can be summarized in a single
aggregate route, a static route can be implemented from CE1 to PE3 for VPN traffic.

Low-cost Internet Access (1CE + one/two access links)


The low-cost solution described in this chapter is not as secure as the one with two CE routers and a firewall
at the customer site. The low-cost solution can therefore become very expensive if the security is
compromised and an intruder gains access to the customer's VPN. Customers with sensitive data shall
subscribe to the secure Internet access option from their VPN.

Therefore, the single-CE design for Internet&VPN access shall not be recommended to <Customer
Name> customers!

Two options exist to provide Internet connectivity from a single CE router:


• Single access-layer connection for Internet and VPN traffic, and packet leaking on the PE.
• Two logical PVCs or two physical connections between the CE and PE; one for VPN traffic (VPNv4
link) and one for Internet traffic (IPv4 link).

Single link option


The option with a single link for VPN and Internet traffic represents a serious risk for that VPN because of the
"shortcut" that has to be created between the global routing table on the PE (i.e. the Internet) and the VRF.
No security mechanisms (e.g. packet filtering) are available on this shortcut. CE_Blue in Figure 16 below
depicts this situation.

Packet leaking between a VRF and the global routing table is implemented with two IOS mechanisms (an
illustrative sketch follows the list):
• A static route with a global next-hop can be configured in a VRF. Packets following this static route
will end up in the global address space at the next-hop router. Traffic originated at a customer site can
thus be forwarded into the Internet.
• A global static route can be defined pointing to a connected interface which belongs to a VRF. This
static route is further redistributed into the IGP or BGP. Packets originated in the global address space will
follow this route (in the global routing table) and will eventually be forwarded toward a CE router.
Traffic originating in the Internet can thus be forwarded to the CE router.
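For illustration, the two mechanisms could look like the following (prefix, mask, next-hop and interface are placeholders):

! VRF-to-global: default route whose next-hop is resolved in the global table
ip route vrf BLUE 0.0.0.0 0.0.0.0 <global next-hop> global
!
! Global-to-VRF: global static route pointing to a connected VRF interface
ip route <customer prefix> <mask> GigabitEthernet0/3.200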

Since the default route in the VPN points to the Internet, no additional default routing can be used in the
customer VPN. In addition, when a customer site loses connectivity to the MPLS/VPN backbone,
packets from other sites destined for the failed VPN site will be leaked to the Internet. This is another
major security issue. In general, this option is also fairly complex to implement.

VPNv4 and IPv4 links


The two links between CE and PE can be implemented as two separate physical circuits (e.g. two E1
circuits) or as two logical connections, for example ATM PVCs. The IPv4 link will terminate in the
global routing table on the PE router; the VPNv4 link will be assigned to the customer's VPN.

A static default route will be configured on the CE for Internet access, pointing towards the PE via the IPv4
link. VPN routes will in most cases be advertised to the CE with a dynamic routing protocol (eBGP, RIPv2),
but can be statically configured on the CE if the number of prefixes is small.

The single-CE solution implemented with separate links for VPN and Internet traffic allows configuring
packet filtering on the IPv4 link on the CE router, but does not offer logical separation of the two security zones
(MPLS/VPN and Internet) with a firewall. It is mandatory to define strict packet-filtering rules in both
directions: to and from the Internet. The outbound filter must, for example, prevent VPN packets from being leaked to
the Internet (via the default route) when the VPNv4 connection fails. The inbound filter must clearly define the list of
hosts and applications that can be reached from the global Internet. It is up to the customer and the service
provider (<Customer Name>) to define and implement the desired security policy (i.e. packet filters) on a
managed CE router.

If the customer uses private IP addresses, NAT would have to be implemented on the IPv4 link. Please
note that static “one-to-one” translation is needed only for Internet servers, whereas the clients can be
dynamically translated in a pool of IP addresses in a PAT-like mode.

Figure 16 Internet Access from a VPN – Single CE (two links on CE RED, single link on CE BLUE)
(Figure: CE RED uses two links to the PE – a VPNv4 link into vrf_red and an IPv4 link into the global routing
table; CE BLUE uses a single link into vrf_blue, with packet leaking configured on the PE, e.g.
"ip route vrf BLUE 0.0.0.0 0.0.0.0 <PE_loopb> global". The PE runs MP-BGP towards the rest of the network.)

In the end explain which option is being used

Shared vrf-aware services

Network Address Translation for MPLS/VPN customers


The following configuration template can be used on the customer's CE router in case of private IP addressing
at the customer site. The example below shows two types of NAT translations:
• Static one-to-one translation for servers at the customer site that must be reachable from the Internet
• Dynamic NAT in overload mode (PAT) for PC clients.

Please note that NAT is only required on IPv4 link.
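A minimal sketch of the two translation types on the CE is shown below; addresses and interface names are placeholders loosely based on the figure that follows:

! Static one-to-one translation for a server reachable from the Internet
ip nat inside source static 10.10.10.5 171.68.1.1
!
! Dynamic PAT for the client PCs
access-list 10 permit 10.10.10.0 0.0.0.255
ip nat inside source list 10 interface Serial0 overload
!
interface Ethernet0
 ip nat inside
!
interface Serial0
 description IPv4 link towards the PE
 ip nat outside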

The fig below is for a particular customer. In your HLD you should use customer specific figs

Figure 17 NAT in CE router
(Figure: the CE performs a static NAT translation 10.10.10.5 <-> 171.68.1.1 for a web server on its Ethernet LAN
(10.10.10.x/24) and dynamic NAT in overload mode for the PC clients. On the PE, the VRF routes 10.10.10.0/24
towards the CE via the VPNv4 link (S1), while the global routing table routes 171.68.1.1/32 towards the CE via
the IPv4 link (S0).)

Connecting Downstream ISPs to PE routers


Internet customers that require the full Internet routing table (e.g. a downstream ISP or a multi-homed customer)
to implement a primary/backup or any other inter-domain routing policy will in most cases be attached
to the two iGW routers. If there is a need to connect such a customer in a regional PoP, <Customer
Name> will install a PE router with sufficient memory and CPU power to hold the full Internet routes.
Otherwise, the customer would have to establish two eBGP sessions: one with the iGW to download the full Internet
routes and another with the PE router to advertise its customers' routes. This is required because the next-hop-
self feature is systematically applied on the PE-RR and iGW-RR BGP neighborships.

Remote Access (ASWAN/Security, Dial, DSL, Cable)

Wireless

VOIP

Inter-AS/CsC

Traffic Engineering and Fast Reroute
Technology Overview

Traffic Engineering Basics

Traffic Engineering is a powerful MPLS-based tool, which can be used not only to reduce cost for
service providers (SPs) but to generate new revenues as well. One of the key functions of Traffic
Engineering is to maximize the utilization of network resources. By making the SP’s network more
efficient, Traffic Engineering reduces the cost of the network. Another function of Traffic
Engineering is restoration. While delivering the same level of protection as SONET APS, Traffic
Engineering restoration is more flexible and less costly. With Traffic Engineering, the SP may
choose to protect only the set of links that are most vital to the entire network, and only the traffic
which requires low loss probability. Traffic Engineering restoration increases the reliability of the
SP’s network and improves the quality of the SP’s service. Alternatively, the SP may sell Traffic
Engineering restoration as a premium service. Traffic Engineering helps the SP generate new
revenue because it enables the SP to offer new services.

First, we introduce the concept of traffic trunks. Traffic trunks are aggregated micro-flows 4 that share a
common path. In the context of this document, a "common path" does not refer to the end-to-end path of
the flows, but a portion of the end-to-end path within the service provider's network. Typically, the
common path originates from the ingress of the service provider's wide area network to the egress of the
service provider's wide area network. For example, all traffic originating from an IP address in San Jose
and destined for an address in New York City may constitute a traffic trunk, and all traffic between an
address in Palo Alto and an address in Washington D.C. another.

Optionally, we may require that all packets within a traffic trunk have the same class of service. For
example, all ftp and telnet (priority 1) traffic between San Francisco and New York City may be
considered a trunk, and all VoIP (priority 5) traffic between San Francisco and New York City another
one.

4. A micro-flow refers to the packets travelling from a source to a destination using the same transport protocol and the
same port number. For example, an ftp session between two IP hosts constitutes two micro-flows, one from the client to
the server, and the other from the server to the client.
Traffic Engineering creates one or more explicit paths with bandwidth assurances for each traffic trunk. It
takes into consideration the policy constraints associated with the traffic trunks, and the physical network
resources, as well as the topology of the network. This way, packets are no longer routed just based on
destination, but also based on resource availability, and policy. The following section describes the
operation of Traffic Engineering.

Figure 18 illustrates the operation of Traffic Engineering. Each step shown in the diagram is explained
below.

(Figure: traffic statistics feed the creation or update of a traffic model using off-line optimization tools; resource
attributes (input as resource constraints throughout the network) and trunk attributes (the traffic model input at the
head-ends of traffic trunks), together with topology and resource information, drive path selection at the traffic
trunk head-ends; the resulting explicit routes are subject to path admission, reservation and/or LSP creation via
extended RSVP, followed by path maintenance.)
Figure 18 - Traffic Engineering Mechanisms

The network operator must create a traffic model. Based on statistics collected from the routers, as well as
administrative policies, the network operator needs to identify the traffic trunks within the network, and
decide how these traffic trunks should be routed. The operator can use an off-line tool to optimize the
traffic model. This does not mean that the operator is required to use the off-line tool to determine the
routes for all traffic trunks. Typically, the operator identifies a full mesh of traffic trunks but
administratively routes only the "top" N traffic trunks. On-line procedures are used for the rest of the
trunks, as well as to handle failure situations. Traffic trunks could also be forwarded along routes
computed by conventional IGP.

The router uses RSVP to set up Label Switching Paths (LSPs) and to reserve bandwidth at each hop along
the LSPs. During the LSP setup process, any router within the network must perform admission control
and/or preemption to ensure that resources are available to honor the reservation. After the paths are set
up, the head-end routers forward the packets belonging to traffic trunks by placing them into the
appropriate LSPs.
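Purely as an illustration of the building blocks involved (not a design recommendation for the <Customer Name> network), a head-end TE tunnel in IOS typically combines elements similar to the following; addresses, interface names and bandwidth values are placeholders:

mpls traffic-eng tunnels
!
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
!
interface POS1/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 100000
!
interface Tunnel10
 ip unnumbered Loopback0
 tunnel destination 10.0.0.21
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng priority 5 5
 tunnel mpls traffic-eng bandwidth 20000
 tunnel mpls traffic-eng path-option 10 dynamic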

The following section breaks down Traffic Engineering into components and describes each component.

Traffic Trunk Attributes

Traffic trunk attributes allow the network operator to describe the characteristics of traffic trunks. They
must be granular enough to account for the different types of packets traversing the network, and detailed
enough to specify the desired behaviour in failure situations. There are six traffic trunk attributes and each
is described below.

Bandwidth

This attribute specifies the amount of bandwidth the traffic trunk requires.

Path Selection Policy

This attribute gives the network operator the option to specify the order in which the head-end routers
should select explicit paths for traffic trunks. Explicit paths may be either administratively specified or
dynamically computed.

Resource Class Affinity

This attribute is used to allow the network operator to apply path selection policies by administratively
including or excluding network links. As will be shown later, each link on the network may be assigned a
resource class as one of the resource attributes. Resource class affinity specifies whether to include or
exclude links with resource classes in the path selection process. It takes the form of the tuple <resource
class mask, resource affinity>. The "resource class mask" attribute indicates which bits in the resource
class need to be inspected, and the "resource affinity" attribute indicates whether to explicitly include or
explicitly exclude the links.

Adaptability

This attribute indicates whether the traffic trunk should be re-optimized. The re-optimization procedure is
discussed in a later section.

Resilience

This attribute specifies the desired behavior under fault conditions, i.e., the path carrying the traffic trunk
no longer exists due to either network failures or preemption. Traffic Engineering's restoration operation is
discussed in a later section.

Priority

Priority is the mechanism by which the operator controls access to resources when resources are under
contention, and it is required in order to place all traffic trunks. Another important application of the
priority mechanism is supporting multiple classes of service. Two types of priority are assigned to each
traffic trunk: holding priority and setup priority. Holding priority determines whether the traffic trunk has
the right to hold a resource reservation when other traffic trunks attempt to take away its existing
reservation. Setup priority determines whether the traffic trunk has the right to take over the resources
already reserved by other traffic trunks.
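
As an illustration, these trunk attributes correspond to per-tunnel commands on the head-end router. The following is a minimal sketch; the interface number, destination and values are examples only and are not taken from the design described later in this document:

interface Tunnel10
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.9
 ! bandwidth attribute (in kbps)
 tunnel mpls traffic-eng bandwidth 20000
 ! setup and holding priority (0 = highest, 7 = lowest)
 tunnel mpls traffic-eng priority 3 3
 ! path selection policy: prefer an administratively specified path, fall back to dynamic
 tunnel mpls traffic-eng path-option 10 explicit name PREFERRED-PATH
 tunnel mpls traffic-eng path-option 20 dynamic
 ! resilience: request fast-reroute protection for this trunk
 tunnel mpls traffic-eng fast-reroute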

Resource Attributes

Resource attributes are used to describe the network links used for path calculations. There are three
resource attributes, each of which is described below.

Available Bandwidth

This attribute describes the amount of bandwidth available at each setup priority. Note that the available
bandwidth for a higher setup priority is always at least as large as that for a lower setup priority. This attribute
need not reflect the actual available bandwidth; in some cases, the network operator may
oversubscribe a link by assigning a value that is larger than the actual bandwidth, e.g., 49.5 Mbps for a
DS-3 link.

Resource Class

This attribute indicates the resource class of a link. Recall that the trunk attribute, resource class affinity,
is used to allow the operator to administratively include or exclude links in path calculations. This
capability is achieved by matching the resource class attribute of links with resource class affinity of traffic
trunks. The resource class is a 32-bit value. The resource class affinity contains a 32-bit resource affinity
attribute and an associated 32-bit resource class mask.
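
For example (the flag values below are purely illustrative), a link is coloured with an attribute-flags value on its interface, and each tunnel head-end states which coloured links it may or may not use via its affinity value and mask:

! colour the link: set bit 0 of its 32-bit resource class
interface POS1/0
 mpls traffic-eng attribute-flags 0x1
!
! on the tunnel head-end: inspect bit 0 (mask 0x1) and require it to be 0,
! i.e. explicitly exclude links carrying this resource class
interface Tunnel10
 tunnel mpls traffic-eng affinity 0x0 mask 0x1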

Path Selection

Path selection for a traffic trunk takes place at the head-end routers of traffic trunks. Using extended
IS-IS/OSPF, the edge routers have knowledge of both network topology and link resources. For each traffic
trunk, the router starts from the destination of the trunk and attempts to find the shortest path toward the
source (i.e., using the shortest path first (SPF) algorithm). The SPF calculation does not consider links
which are explicitly excluded by the resource class affinities of the trunk, nor links which have
insufficient bandwidth. The output of the path selection process is an explicit route consisting of a
sequence of label switching routers. This path is used as the input to the path setup procedure.

Path Setup

Path setup is initiated by the head-end routers. RSVP5 is the protocol which establishes the forwarding
state along the path computed in the path selection process. The head-end router sends a PATH message
for each traffic trunk it originates. The PATH message carries the explicit route computed for this traffic
trunk. As a result the PATH message always follows this explicit route. Each intermediate router along
the path performs trunk admission control after receiving the PATH message. Once the router at the end
of the path receives the PATH message, it sends a RESV message in the reverse direction towards the
head-end of the traffic trunk. As the RESV message flows toward the sender, each intermediate node
reserves bandwidth and allocates labels for the trunk. Thus when the RESV message reaches the sender,
the LSP is already established.

The following diagram is an example of the path setup procedure.

[Figure content: setup of an LSP from R1 to R9 along the explicit route R1->R2->R6->R7->R4->R9. The Path message carries the explicit route from the head to the tail; the Resv message flows back along the same hops, communicating the allocated labels (49, 32, 17, 22 and Pop in the figure) upstream and reserving bandwidth on each link.]

Figure 19 - Traffic Engineering Path Setup

Once you have decided to set up an LSP for a tunnel, you do so using RSVP with certain extensions to
support this feature. In RSVP, the forward leg of the signaling exchange is the Path message, and the
reverse leg is the Reservation message. One of the extensions allows the Path message to carry
the source route in a new object. Resources are actually allocated on the reverse leg with the Reservation
message. In addition to bandwidth, which is an existing RSVP resource, there are extensions so that labels
can be allocated and transmitted in the reverse direction on the Reservation message.

5 Note that the usage of RSVP in Traffic Engineering deviates from the original design goal of RSVP. Extensions to
RSVP and the justification for using RSVP are discussed in a later section.
In Figure 19 we are establishing a tunnel from R1 to R9 along the path shown. That path is
included in the Path message generated by R1, and it directs the setup along the arrows from
the head of the tunnel to the tail.

In the reverse direction, the reservation message flows back on whatever series of hops was
established by the path. At each hop the tag from the hop closer to the tail is received and
programmed into the MPLS forwarding table. A new tag is allocated, and that new tag or label is
sent upstream towards the head until eventually we get back to the head and the head knows that to
send traffic down the tunnel, it should use label 49.

One feature of interest about the resulting LSP and about the MPLS tunnels under IOS in general is
that they’re unidirectional. Traffic flows from the head to the tail, but there’s no automatic reverse
direction. So you couldn’t for instance run an adjacency over one of these MPLS tunnels because the
traffic’s one way.

Link Protection (FRR) Basics

Regular MPLS traffic engineering automatically establishes and maintains label-switched paths
(LSPs) across the backbone using Resource Reservation Protocol (RSVP). The path used by a given
LSP at any point in time is based upon the LSP resource requirements and available network
resources such as bandwidth.

Available resources are flooded via extensions to a link-state based Interior Gateway Protocol (IGP), such
as IS-IS or OSPF.

Paths for LSPs are calculated at the LSP headend. Under failure conditions, the headend determines a new
route for the LSP. Recovery at the headend provides for the optimal use of resources. However, due to
messaging delays, the headend cannot recover as fast as possible by making a repair at the point of failure.

Fast Reroute provides link protection to LSPs. This enables all traffic carried by LSPs that traverse a failed
link to be rerouted around the failure. The reroute decision is completely controlled locally by the router
interfacing the failed link. The headend of the tunnel is also notified of the link failure through the IGP or
through RSVP; the headend then attempts to establish a new LSP that bypasses the failure.

Local reroute prevents any further packet loss caused by the failed link. This gives the headend of the
tunnel time to re-establish the tunnel along a new, optimal route. If the headend still cannot find another
path to take, it will continue using the backup tunnel.
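
As a minimal sketch of the pieces involved (interface and tunnel numbers are placeholders; full configurations for the "3" network appear later in this document), link protection requires three elements:

! 1. the primary tunnel head-end requests protection
interface Tunnel1
 tunnel mpls traffic-eng fast-reroute
!
! 2. a backup tunnel is provisioned around the protected link on an explicit path
interface Tunnel2001
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination <router-id of the next-hop router>
 tunnel mpls traffic-eng path-option 1 explicit name AVOID-PROTECTED-LINK
!
! 3. the protected interface on the point of local repair references the backup tunnel
interface POS3/0
 mpls traffic-eng backup-path Tunnel2001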

Figure 20 - TE FRR Example

[Figure content: the primary tunnel R1-R2-R4-R9 uses labels 37 (R1 to R2), 14 (R2 to R4) and Pop at R4; the backup tunnel R2-R6-R7-R4 uses labels 17, 22 and Pop. After the failure, the label stack along R1-R2-R6-R7-R4-R9 is 37, then 17 over 14, then 22 over 14, then 14, then none.]

The example in Figure 20 illustrates how Fast Reroute link protection is used to protect traffic carried in a
TE tunnel between devices R1 and R9 as it traverses the mid-point link between devices R2 and R4. [The
TE tunnel from R1 to R9 is considered to be the primary tunnel and is defined by labels 37, 14, and Pop.]
To protect the R2-R4 link, you create a backup tunnel that runs from R2 to R4 by way of R6 and R7. This
backup tunnel is defined by labels 17, 22, and Pop.

When R2 is notified that the link between it and R4 is no longer available, it simply forwards traffic
destined for R4 through the backup tunnel. That is accomplished by pushing label 17 onto packets destined
to R4 after the normal swap operation (which replaces label 37 with label 14) has been performed. Pushing
label 17 onto packets forwards them along the backup tunnel, thereby routing traffic around the failed link.
The decision to reroute packets from the primary tunnel to the backup tunnel is made solely by R2 upon
detection of link failure.

The Fast Reroute feature has two noticeable benefits.

• Increased reliability and minimal traffic loss for IP services during link failure.

• High scalability inherent in its design.

Increased Reliability for IP Services

MPLS traffic engineering with Fast Reroute provides failover times that match the capabilities of
SONET link restoration. This gives a very high degree of resiliency for IP traffic that flows over
a service provider's backbone, leading to more robust IP services and higher end-customer
satisfaction.

High Scalability Solution

The Fast Reroute feature achieves a high degree of scalability by supporting the mapping of all
primary tunnels that traverse a link onto a single backup tunnel. This capability bounds the growth
of backup tunnels to the number of links in the backbone rather than the number of TE tunnels that
run across the backbone.

TE/TE-FRR Design

Deciding on the tunnel topology and tunnel types

How to Route Traffic Into TE Tunnels

Policy Based Routing

You can use PBR to send traffic down a TE tunnel. However, you cannot apply policy routing to an MPLS-
VPN interface, as the hardware and IOS software for the VRF interface are not PBR aware. This
enhancement may be added in future line cards and IOS software.

For a normal IPv4 interface, you simply set the outgoing interface in the route map to the tunnel interface.

RtrA(config)#int s0
RtrA(config-if)#ip policy route-map set-tunnel

RtrA(config)#route-map set-tunnel
RtrA(config-route-map)#match ip address 101
RtrA(config-route-map)#set interface Tunnel1
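
Access list 101 referenced in the route map is not shown above; a purely hypothetical example that would steer HTTP traffic into Tunnel1 could be:

access-list 101 permit tcp any any eq www

Traffic that does not match the route map is not policy-routed and continues to be forwarded using the normal routing table.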

Static Routing Into Tunnels


You can manually send traffic down specific TE tunnels using static routes. In this case the destination
interface is the tunnel interface. This is the simplest method of “steering” traffic into a tunnel and many
service providers use this method in relatively simple topologies.
However, this method clearly does not scale in larger, more complex topologies and can be prone to
routing loops unless careful provisioning is adhered to.

An example syntax is:

ip route X.X.X.X 255.255.255.255 Tunnel1 (where X.X.X.X is the IP destination)

Auto-Route
Cisco IOS MPLS Autoroute Announce installs the routes announced by the tail-end router and its
downstream routers into the routing table (forwarding table) of the head-end router as directly reachable
through the tunnel.

The constraint-based routing algorithm allows MPLS TE to establish a Label Switched Path (LSP) from the
head-end to the tail-end node. By default, those paths are not announced to the IGP routing protocol.
Hence, any prefixes/networks announced by the tail-end router and its downstream routers would not be
"visible" through those paths.

For every MPLS TE tunnel configured with Autoroute Announce, the link state IGP will install the routes
announced by the tail-end router and its downstream routers into the RIB. Therefore, all the traffic directed
to prefixes topologically behind the tunnel head-end is pushed onto the tunnel.
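
Enabling the feature is a single command on the tunnel interface at the head-end, for example:

interface Tunnel1
 tunnel mpls traffic-eng autoroute announce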

To have a better understanding of this feature, consider an example with and without Autoroute Announce
enabled.
Consider the topology of Figure 21. For the sake of simplicity, assume that Ri's loopback address is i.i.i.i.

Figure 21 - Topology Without Tunnels

The corresponding routing table on Router R1 with normal IGP and no MPLS TE looks like the following.

Figure 22 - R1 Routing Table – No MPLS TE

Considering the same topology as in Figure 21, now let us introduce two MPLS Traffic Engineering tunnels,
T1 and T2 (shown in Figure 23). Tunnel T1 originates in R1 and its tail end is R4. Tunnel T2 originates in
R1 and its tail end is R5.

MPLS TE Autoroute Announce will be enabled on the two tunnels. The resulting R1 routing table entries are
given in Figure 24.

Figure 23 – Topology With TE Tunnels

Figure 24 - R1 Routing Table With Autoroute Announce

The routing tables (Figure 22 and Figure 24) demonstrate that, with MPLS TE Autoroute Announce, R4 and R5
are directly reachable through tunnels T1 and T2 respectively. Similarly, R8 is now reachable through
tunnel T1 via R4 instead of the "physical" connection.

Without Cisco MPLS TE Autoroute Announce, even though tunnel T1 is up, the route to R8 is via the
"physical" connection (as in Figure 22).

Forwarding Adjacency
The MPLS TE Forwarding Adjacency feature allows a network administrator to handle a traffic
engineering label-switched path (LSP) tunnel as a link in an Interior Gateway Protocol (IGP) network
based on the Shortest Path First (SPF) algorithm. A forwarding adjacency can be created between routers
regardless of their location in the network; the routers can be located multiple hops from
each other, as shown in Figure 25.

Figure 25 - Forwarding Adjacency Topology

As a result, a TE tunnel is advertised as a link in an IGP network with the link's cost associated with it.

Routers outside of the TE domain see the TE tunnel and use it to compute the shortest path for routing
traffic throughout the network.
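
As a minimal sketch (tunnel numbers and destinations are placeholders), the feature is enabled per tunnel and, per the restrictions listed below, a tunnel must be configured in each direction:

! on head-end A
interface Tunnel100
 tunnel destination <loopback of router B>
 tunnel mpls traffic-eng forwarding-adjacency
!
! on head-end B (reverse-direction tunnel)
interface Tunnel100
 tunnel destination <loopback of router A>
 tunnel mpls traffic-eng forwarding-adjacency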

Benefits
• TE tunnel interfaces advertised for SPF: TE tunnel interfaces are advertised in the IGP network just like
any other links. Routers can then use these advertisements in their IGPs to compute the SPF even if they
are not the head end of any TE tunnels.

Restrictions
• Using the MPLS TE Forwarding Adjacency feature increases the size of the IGP database by
advertising a TE tunnel as a link.

• The MPLS TE Forwarding Adjacency feature is supported by Intermediate System-to-
Intermediate System (IS-IS). Open Shortest Path First (OSPF) support will be available in a
future release.
• When the MPLS TE Forwarding Adjacency feature is enabled on a TE tunnel, the link is
advertised in the IGP network as a Type Length Value (TLV) 22 without any TE sub-TLV.
• MPLS TE forwarding adjacency tunnels must be configured bidirectionally.
• Do not use the tunnel mpls traffic-eng autoroute announce statement in your configuration when
you are using forwarding adjacency.

Using Directed LDP Sessions

If you are using TE in conjunction with RFC 2547 L3 VPNs, then an extra configuration step may be
needed on the primary tunnel interface.

When the TE tunnel is terminated on the egress PE, the MPLS VPN and the TE work together without any
additional configuration.

When the TE tunnel is terminated on any P routers (before the PE in the core), the MPLS VPN traffic
forwarding fails because packets arrive with VPN labels as the outer labels, which are not in the LFIBs of
these devices. Therefore, these intermediate routers are not able to forward packets to the final destination,
the VPN customer network. In such a case, LDP/TDP should be enabled on the TE tunnel to solve the
problem.

Below is an example of the extra configuration step required:


P1#show run int tu0

interface Tunnel0
ip unnumbered Loopback0
no ip directed-broadcast
ip route-cache distributed
! the next command enables TDP/LDP on the tunnel interface
 tag-switching ip
tunnel destination 10.5.5.5
tunnel mode mpls traffic-eng
tunnel mpls traffic-eng autoroute announce
tunnel mpls traffic-eng path-option 10 dynamic
end
!

Number of Protected Prefixes
If a customer has hundreds of prefixes in the FRR database, they may wish to prioritise the order in which
prefixes are rewritten. In this way certain prefixes can be configured to be rewritten first on FRR
switchover, to help ensure SLAs are met.

The MPLS TE-FRR Prefix Ordering Using an ACL feature allows you to prioritize the FRR database
according to a single ACL ID. This feature was introduced in IOS 12.0(17)ST7.

The ACL ID can contain many networks and hosts. A match in the ACL simply gives precedence to the
prefix and places this prefix earlier in the database to provide faster switchover time in the event of a
failure.

Benefits

• FRR Database Sorting. This feature adds a modified software sorting function for the FRR
database based on the existence of a configured ACL. As a result, matching prefixes receive
higher priority during a failure and fewer packets are lost.

Restrictions

• This feature is limited to FRR functionality and the order of the failed-over routing prefixes.

• This feature does not add, delete, or modify the routing prefixes in the FRR database; it just
resorts them.

The following command output shows the FRR database before it is reordered:

Router# show mpls traffic-eng fast-reroute database

Tunnel head fast reroute information:


Prefix Tunnel In-label Out intf/label FRR intf/label Status
10.0.6.1/32 Tu3 12307 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.7.1/32 Tu3 12306 PO1/0:12305 Tu10:tag-implicit ready
10.0.8.1/32 Tu3 12304 PO1/0:12304 Tu10:tag-implicit ready
10.0.0.36/30 Tu3 12314 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.0.40/30 Tu3 12312 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.0.48/30 Tu3 12316 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.0.52/30 Tu3 12317 PO1/0:12307 Tu10:tag-implicit ready
10.0.0.60/30 Tu3 12315 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.0.64/30 Tu3 12318 PO1/0:12308 Tu10:tag-implicit ready

In the following command output, the last prefix, which is 10.0.0.64/30, is placed
first in the FRR database:

Router# configure terminal

Router(config)# access-list 1 permit 10.0.0.64 0.0.0.3

In the following command output, the ACL is applied globally:

Router(config)# mpls traffic-eng fast-reroute acl 1

In the following command output, the 10.0.0.64/30 prefix has been reordered and
now appears first in the FRR database:

Router# show mpls traffic-eng fast-reroute database

Tunnel head fast reroute information:Acl in use 1

Prefix Tunnel In-label Out intf/label FRR intf/label Status


10.0.0.64/30 Tu3 12318 PO1/0:12308 Tu10:tag-implicit ready
10.0.6.1/32 Tu3 12307 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.7.1/32 Tu3 12306 PO1/0:12305 Tu10:tag-implicit ready
10.0.8.1/32 Tu3 12304 PO1/0:12304 Tu10:tag-implicit ready
10.0.0.36/30 Tu3 12314 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.0.40/30 Tu3 12312 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.0.48/30 Tu3 12316 PO1/0:Pop tag Tu10:tag-implicit ready
10.0.0.52/30 Tu3 12317 PO1/0:12307 Tu10:tag-implicit ready
10.0.0.60/30 Tu3 12315 PO1/0:Pop tag Tu10:tag-implicit ready
LSP midpoint frr information:
LSP identifier In-label Out intf/label FRR intf/label Status

“3” Implementation Of TE-FRR

“3” Network Architecture

Introduction
The core network of “3” is illustrated in Figure 26 below. It consists of 3 major POPs deployed in major
cities within the U.K.
Figure 26 - "3" Core Network Architecture

The core network is built entirely out of Cisco 12400-series (GSR) routers, with 7200s used as Route Reflectors. The network
utilises MPLS-VPN L3 (RFC 2547).

Cisco 12416s are used as core switching routers and interface to a Nortel Optera DWDM network for
Optical Transport. OC-192 POS linecards are used to build a 10G network infrastructure and these nodes
are used as “P” devices in the context of the MPLS-VPN.

Cisco 12410s are used as edge routers (PE) and are inter-connected via OC-48 POS linecards to the P
routers within the POP. VPN interfaces are present on the GigE cards within these routers. Initially
Trident (3 x GigE) linecards were used and later these were swapped out for the new Tango (10 x GigE)
linecards.

The design uses a wide range of PE-CE connection models for various VPNs:

• Static
• Connected
• OSPF

TE-FRR Design

In the design it was decided to protect only the core OC-192 POS (inter-POP) links, as these had the
greatest chance of failure compared to the intra-POP links. TE-FRR provides a very cost-effective
mechanism of link protection compared to SONET APS.

In the design, IP traffic will be protected in the core by Fast Re-Route (FRR) link protection for sub-50 ms
performance. Traffic Engineering aims to optimize network resource usage by directing traffic onto
LSP tunnels established according to criteria other than the lowest cost or fewest hops which existing routing
protocols use today. For example, to minimize congestion and maximize performance, an ISP might want
all traffic destined for a particular network to use the path with maximum bandwidth.
Fast restoration is possible within 50 milliseconds. This is because no signaling is required, the backup
tunnel is already in place, and the ingress to the back-up tunnel can be co-located on the device that detects
the failure. Protection and restoration span is flexible. Backup LSP tunnels can be set up to protect
individual links.
MPLS-TE FRR will be used to protect all the OC-192 POS links between the 3 x GSRs in the test network.
In the event of a link failure, the backup FRR tunnels will provide an immediate local path around the
failure until the primary tunnel has re-optimised.

Primary Tunnels
The design therefore has a number of 1-hop primary tunnels running between the POPs, making a total
of six primary tunnels. Each primary tunnel is dynamically routed to the TE loopback address
of each of its two neighbouring POPs.

Initially auto-route was used as the mechanism for injecting traffic into the tunnels; however, this was
replaced with “Forwarding Adjacency” during system testing due to unexpected traffic loss (see Sec
XXX).

It is important to note that, because 1-hop tunnels are used, the tunnel head-end is also the point
of local repair (PLR), so after an FRR operation the primary tunnel will re-route across the 2-hop path.
This happens after the fast re-write operation.

Backup Tunnels
Each protected link has a 2-hop backup tunnel provisioned as the alternate path used when FRR link protection is invoked.
Each backup tunnel is explicitly configured to go via the alternate POP to reach the original POP
destination. Figure 27 gives an example of the tunnel provisioning.

Explicit backup tunnel configuration is the sensible choice here, since each backup tunnel must be provisioned
to cross a specific 2-hop path.

[Figure content: the three POP GSRs (Hemel GSR1, Manchester GSR2, Birmingham GSR3). The primary links are used by the primary tunnels and backed up by FRR; the FRR backup tunnels run via the alternative STM4 interface through the third POP.]

Figure 27 - Illustration of Primary and Backup TE Tunnels


Source Router   Description     Tunnel Number   Explicit/Dynamic    Final Destination

GSR1            Primary 1-2     1               Dynamic             GSR2
GSR1            Primary 1-3     2               Dynamic             GSR3
GSR1            Backup of 1-2   11              Explicit via GSR3   GSR2
GSR1            Backup of 1-3   12              Explicit via GSR2   GSR3
GSR2            Primary 2-1     1               Dynamic             GSR1
GSR2            Primary 2-3     2               Dynamic             GSR3
GSR2            Backup of 2-1   11              Explicit via GSR3   GSR1
GSR2            Backup of 2-3   12              Explicit via GSR1   GSR3
GSR3            Primary 3-1     1               Dynamic             GSR1
GSR3            Primary 3-2     2               Dynamic             GSR2
GSR3            Backup of 3-1   11              Explicit via GSR2   GSR1
GSR3            Backup of 3-2   12              Explicit via GSR1   GSR2

Table 5 Tunnel Provisioning

All Primary TE tunnel parameters will be as follows:


· IP Unnumbered to Loopback 0
· Path option - Dynamic
· Autoroute announce
· Priority 5 5
· Bandwidth 1
· Fast Re-Route enabled

All FRR backup TE tunnel parameters will be as follows:

· IP Unnumbered to Loopback 0
· Path option - Explicit path
· Priority 0 0
· Bandwidth 0

POS interface specifics:


· Enable AIS alarm when interface shutdown
· IP RSVP bandwidth to match link speed

Sample configurations

Generic Global Commands

mpls traffic-eng tunnels


no tag-switching ip propagate-ttl forwarded
tag-switching tdp router-id Loopback0

router isis
passive-interface Loopback0
mpls traffic-eng router-id Loopback0
mpls traffic-eng level-2
net 49.4401.1720.3125.0254.00
is-type level-2-only
domain-password vlPhuj8p5
metric-style wide level-2
max-lsp-lifetime 65535
lsp-refresh-interval 65000
no hello padding
log-adjacency-changes

Birmingham P Router
interface Tunnel1001
description from bm0gsr01 tunnel1001 to hh0gsr01 tunnel1002, Primary
ip unnumbered Loopback0
no ip directed-broadcast
mpls label protocol tdp
tag-switching ip

tunnel destination 172.31.252.254
tunnel mode mpls traffic-eng
tunnel mpls traffic-eng autoroute announce
tunnel mpls traffic-eng forwarding-adjacency
tunnel mpls traffic-eng priority 5 5
tunnel mpls traffic-eng bandwidth 1
tunnel mpls traffic-eng path-option 1 dynamic
tunnel mpls traffic-eng record-route
tunnel mpls traffic-eng fast-reroute

interface Tunnel1002
description from bmgsr01 tunnel1002 to mr0gsr01 tunnel1002, Primary
ip unnumbered Loopback0
no ip directed-broadcast
mpls label protocol tdp
tag-switching ip
tunnel destination 172.31.248.254
tunnel mode mpls traffic-eng
tunnel mpls traffic-eng autoroute announce
tunnel mpls traffic-eng forwarding-adjacency
tunnel mpls traffic-eng priority 5 5
tunnel mpls traffic-eng bandwidth 1
tunnel mpls traffic-eng path-option 1 dynamic
tunnel mpls traffic-eng record-route
tunnel mpls traffic-eng fast-reroute
!
interface Tunnel2001
description from bm0gsr01 tunnel2001 via mr0gsr01 to hh0gsr01 tunnel2002, Backup of pos3/0
ip unnumbered Loopback0
no ip directed-broadcast
tunnel destination 172.31.252.254
tunnel mode mpls traffic-eng
tunnel mpls traffic-eng priority 0 0
tunnel mpls traffic-eng path-option 1 explicit name backup-to-hh01-via-mr01
tunnel mpls traffic-eng record-route
!
interface Tunnel2002
description from bm0gsr01 tunnel2002 via hh0gsr01 to mr0gsr01 tunnel2002, Backup of pos12/0
ip unnumbered Loopback0
no ip directed-broadcast
tunnel destination 172.31.248.254
tunnel mode mpls traffic-eng

tunnel mpls traffic-eng priority 0 0
tunnel mpls traffic-eng path-option 1 explicit name backup-to-mr01-via-hh01
tunnel mpls traffic-eng record-route

interface POS3/0
description from bm0gsr01 pos 3/0 to hh0gsr01 pos 12/0 STM-64
ip address 172.31.254.6 255.255.255.252
no ip directed-broadcast
no ip proxy-arp
ip router isis
encapsulation ppp
carrier-delay msec 0
mpls label protocol tdp
mpls traffic-eng tunnels
mpls traffic-eng backup-path Tunnel2001
tag-switching ip
no peer neighbor-route
crc 32
clock source internal
pos ais-shut
pos framing sdh
pos report lrdi
pos flag s1s0 2
tx-cos STM64-TX
no cdp enable
isis circuit-type level-2-only
isis metric 100 level-2
isis password vlPhuj8p5 level-2
ip rsvp bandwidth 10000000 10000000

interface POS12/0
description from bm0gsr01 pos 12/0 to mr0gsr01 pos 12/0 STM-64
ip address 172.31.254.17 255.255.255.252
no ip directed-broadcast
no ip proxy-arp
ip router isis
encapsulation ppp
carrier-delay msec 0
mpls label protocol tdp
mpls traffic-eng tunnels
mpls traffic-eng backup-path Tunnel2002
tag-switching ip

no peer neighbor-route
crc 32
clock source internal
pos ais-shut
pos framing sdh
pos report lrdi
pos flag s1s0 2
tx-cos STM64-TX
no cdp enable
isis circuit-type level-2-only
isis metric 100 level-2
isis password vlPhuj8p5 level-2
ip rsvp bandwidth 10000000 10000000

ip explicit-path name backup-to-hh01-via-mr01 enable


next-address 172.31.254.18
next-address 172.31.254.1
!
ip explicit-path name backup-to-mr01-via-hh01 enable
next-address 172.31.254.5
next-address 172.31.254.2

• The configurations are in principle identical for Hemel and Manchester apart from the IP addresses.

Quality of Service

Introduction
In order to fulfil ST requirements of having four distinct classes of service, each with their specific service
characteristics, QoS mechanisms are deployed on the access layer and backbone links. The following
section describes the technical implementation and features that form the basis for a set of new innovative
products.

Scalability and stability are the main criteria for any extension of the network. It is absolutely necessary to
aggregate IP streams with identical flow characteristics. The expression used for this solution is “service
classes”. Dedicated handling of single streams is only meaningful in special cases where high bandwidths
are involved, and there are no plans for this to be introduced in the first instance.

From the technical point of view, the number of service classes should be strictly limited. This does not
restrict the construction of various commercial products on top of it. Service level agreements (SLAs) form the
definition interface for the service that will be delivered to the customer by ST. Parameters should describe
a probability for a certain service and will be reported on a per-class basis.

For the ST MPLS backbone network, a robust solution aligned with the basic ideas of the IETF's DiffServ approach
appears practicable at present. With respect to the intended MPLS solution, a maximum of 8
code points per path can be supported. These are distinguished using the three experimental bits of the
MPLS shim header. Because DiffServ is based on relative priorities, a large proportion of best-effort background
traffic is required to produce efficient, high-quality service classes. The strength of a large IP backbone
network is that high-priority and low-priority traffic is merged on a single network
platform. This results in synergy that permits optimum resource utilisation. The bundling of many different
traffic streams (statistical multiplexing) smooths individual bursts.

Differentiated Services Model – Introduction


This section is intended as an introduction to the Differentiated Services (DiffServ) reference model.

DiffServ is a model by which traffic is treated by intermediate systems with relative priorities based
on the type of service (ToS) or Differentiated Services Code Point (DSCP) field. Defined in RFCs 2474
and 2475, the DiffServ standard supersedes the original specification for defining packet priority described
in RFC 791.

The DiffServ standard proposes a new way of interpreting a field that has always been part of an IP
packet. In the DiffServ standard, the ToS field is renamed the Differentiated Services Code Point
(DSCP) field and given a new meaning. The DiffServ standard increases the number of definable
priority levels by re-allocating bits of an IP packet for priority marking.

As per RFC 791, the ToS field occupies one entire byte (eight bits) of an IP packet. Precedence refers to
the three most significant bits of the ToS field, that is, [XXX]XXXXX. There may be some confusion
because RFC 1349 defines a new 4-bit ToS field, XXX[XXXX]X, as shown in the following figure.

Figure 28 Various interpretations of the TOS field

The three most significant bits of the RFC-791 ToS field - the precedence bits - define the IP packet
priority or importance.

XXX00000 Bits 0,1,2 = Precedence, where:


111 = Network Control = Precedence 7
110 = Internetwork Control = Precedence 6
101 = CRITIC/ECP = Precedence 5
100 = Flash Override = Precedence 4
011 = Flash = Precedence 3
010 = Immediate = Precedence 2
001 = Priority = Precedence 1
000 = Routine = Precedence 0

The four bits of the RFC-1349 TOS are used in IOS configuration and have the following semantics:

000XXXX0 Bits 3, 4, 5, 6:
1000 = Minimize delay
0100 = Maximize throughput
0010 = Maximize reliability
0001 = Minimize monetary cost
0000 = Normal service

0000000X Bit 7: Reserved for future use

This one-byte ToS field has been almost completely unused since it was proposed almost 20 years ago.
Only in the last few years have Cisco and other router companies begun utilising the Precedence bits for
making forwarding decisions.

The DiffServ standard follows a similar scheme to RFC 791, but utilises more bits for setting priority. The
new standard maintains backward compatibility with RFC 791 implementations, but allows more efficient
use of bits 3, 4, and 5. (Bits 6 and 7 will still be reserved for future development.) With the additional 3
bits, there are now a total of 64 code points instead of the previous 8 precedence classes.

RFC 2475 defines Per Hop Behaviour (PHB) as the externally observable forwarding behaviour applied at
a DiffServ-compliant node to a DiffServ Behaviour Aggregate (BA).

With the ability of the system to mark packets according to DSCP setting, collections of packets with the
same DSCP setting and sent in a particular direction can be grouped into a BA. Packets from multiple
sources or applications can belong to the same BA.

In other words, a PHB refers to the packet scheduling, queuing, policing, or shaping behaviour of a node
on any given packet belonging to a BA, as configured by a service level agreement (SLA) or a policy map.

The following sections describe the four available standard PHBs:


• Default PHB (as defined in RFC 2474)
• Class-Selector PHB (as defined in RFC 2474)
• Assured Forwarding (AFxy) PHB (as defined in RFC 2597)
• Expedited Forwarding (EF) PHB (as defined in RFC 2598)

Default PHB
The default PHB essentially specifies that a packet marked with a DSCP value of 000000 (recommended)
receives the traditional best-effort service from a DS-compliant node (that is, a network node that complies
with all of the core DiffServ requirements). Also, if a packet arrives at a DS-compliant node, and the DSCP
value is not mapped to any other PHB, the packet will get mapped to the default PHB.

For more information about default PHB, refer to RFC 2474, Definition of the Differentiated Services
Field in IPv4 and IPv6 Headers.

Class-Selector PHB:
To preserve backward-compatibility with any IP Precedence scheme currently in use on the network,
DiffServ has defined a DSCP value in the form xxx000, where x is either 0 or 1. These DSCP values are
called Class-Selector Code Points. (The DSCP value for a packet with default PHB 000000 is also called
the Class-Selector Code Point.)

The PHB associated with a Class-Selector Code Point is a Class-Selector PHB. These Class-Selector PHBs
retain most of the forwarding behaviour of nodes that implement IP Precedence-based classification and
forwarding.

For example, packets with a DSCP value of 110000 (the equivalent of the IP Precedence-based value of
110) have preferential forwarding treatment (for scheduling, queuing, and so on), as compared to packets
with a DSCP value of 100000 (the equivalent of the IP Precedence-based value of 100). These Class-
Selector PHBs ensure that DS-compliant nodes can coexist with IP Precedence-based nodes.

The DiffServ standard utilises the same precedence bits (the most significant bits: 0, 1, and 2) for priority
setting, but further clarifies their functions/definitions, plus offers finer priority granularity through use of
the next three bits in the ToS field. DiffServ reorganises (and renames) the precedence levels (still defined
by the three most significant bits of the ToS field) into the following categories:

Table 6 Class-Selector PHBs

Precedence 7 Stays the same (link layer and routing protocol keep alive)

Precedence 6 Stays the same (used for IP routing protocols)

Precedence 5 Class 5

Precedence 4 Class 4

Precedence 3 Class 3

Precedence 2 Class 2

Precedence 1 Class 1

Precedence 0 Best effort

For more information about class-selector PHB, refer to RFC 2474, Definition of the Differentiated
Services Field in IPv4 and IPv6 Headers.

Assured Forwarding PHB


Assured Forwarding PHB is nearly equivalent to Controlled Load Service available in the integrated
services model. AFxy PHB defines a method by which BAs can be given different forwarding assurances.

For example, network traffic can be divided into the following classes:
• Gold: Traffic in this category is allocated 50 percent of the available bandwidth.
• Silver: Traffic in this category is allocated 30 percent of the available bandwidth.
• Bronze: Traffic in this category is allocated 20 percent of the available bandwidth.

Further, the AFxy PHB defines four AF classes: AF1, AF2, AF3, and AF4. Each class is assigned a
specific amount of buffer space and interface bandwidth, according to the SLA with the service provider or
policy map.

Within each AF class, you can specify three drop precedence (dP) values: 1, 2, and 3. Assured Forwarding
PHB can be expressed as shown in the following example:
AFxy

In this example, x represents the AF class number (1, 2, 3, or 4) and y represents the dP value (1, 2, or 3)
within the AFx class. In instances of network traffic congestion, if packets in a particular AF class (for
example, AF1) need to be dropped, packets in the AF1 class will be dropped according to the following
guideline:
dP(AFx1) <= dP(AFx2) <= dP(AFx3)

where dP(AFxy) is the probability that packets of the AFxy class will be dropped. In other words, y
denotes the dP within an AFx class. The dP method penalises traffic flows within a particular BA that
exceed the assigned bandwidth. Packets in these offending flows could be re-marked by a policer to a
higher drop precedence.

Bits 3 and 4 of the DiffServ field allow further priority granularity through the specification of a packet drop
probability for any of the defined classes. Collectively, Classes 1-4 are referred to as Assured Forwarding
(AF). Figure 29 illustrates the DSCP coding for specifying the priority level (class) plus the drop
percentage. (Bits 0, 1, and 2 define the class; bits 3 and 4 specify the drop percentage; bit 5 is always 0.)

Using this system, a device would first prioritise traffic by class, then differentiate and prioritise same-class
traffic by considering the drop percentage. It is important to note that this standard has not specified a
precise definition of "low," "medium," and "high" drop percentages. Additionally, not all devices will
recognise the DiffServ bit 3 and 4 settings. Remember also that even when the settings are recognised, they
do not necessarily trigger the same forwarding action to be taken by each type of device on the network;
each device will implement its own response in relation to the packet priorities it detects. The DiffServ
standard is meant to allow a finer granularity of priority setting for the applications and devices that can
make use of it, but it does not specify interpretation (that is, the action to be taken).

Expedited Forwarding PHB


Resource Reservation Protocol (RSVP), a component of the integrated services model, provides a
Guaranteed Bandwidth Service. Applications such as Voice over IP (VoIP), video, and online trading
programs require this kind of robust service. The EF PHB, a key ingredient of DiffServ, supplies this kind
of robust service by providing low loss, low latency, low jitter, and assured bandwidth service.

EF PHB is ideally suited for applications such as VoIP that require low bandwidth, guaranteed bandwidth,
low delay, and low jitter. The recommended DSCP value for EF PHB is 101110.

For more information about EF PHB, refer to RFC 2598, An Expedited Forwarding PHB.

Figure 29 DSCP Interpretation

                         Class 0   Class 1   Class 2   Class 3   Class 4   Reserved   Routing   Routing
                         Prec 0    Prec 1    Prec 2    Prec 3    Prec 4    Prec 5     Prec 6    Prec 7

Class-Selector PHBs      000 000   001 000   010 000   011 000   100 000   101 000    110 000   111 000
                         BE PHB    CS PHB    CS PHB    CS PHB    CS PHB    CS PHB     CS PHB    CS PHB
                         DSCP 0    DSCP 8    DSCP 16   DSCP 24   DSCP 32   DSCP 40    DSCP 48   DSCP 56

Low Drop Precedence      000 010   001 010   010 010   011 010   100 010   101 010    110 010   111 010
                                   AF11      AF21      AF31      AF41
                                   DSCP 10   DSCP 18   DSCP 26   DSCP 34

Medium Drop Precedence   000 100   001 100   010 100   011 100   100 100   101 100    110 100   111 100
                                   AF12      AF22      AF32      AF42
                                   DSCP 12   DSCP 20   DSCP 28   DSCP 36

High Drop Precedence     000 110   001 110   010 110   011 110   100 110   101 110    110 110   111 110
                                   AF13      AF23      AF33      AF43      EF PHB
                                   DSCP 14   DSCP 22   DSCP 30   DSCP 38   DSCP 46

Unused                   all codepoints of the form xxx 001, xxx 011, xxx 101 and xxx 111

QoS and VoIP


Voice quality is directly affected by two major factors:
• Lost packets
• Delayed packets

Packet loss causes voice clipping and skips. The industry-standard codec algorithms used in Cisco Digital
Signal Processors (DSPs) can correct for up to 30 ms of lost voice. Cisco Voice over IP (VoIP) technology
uses 20-ms samples of voice payload per VoIP packet. Therefore, for the codec correction algorithms to be
effective, no more than a single consecutive packet can be lost at any given time.

Packet delay can cause either voice quality degradation due to the end-to-end voice latency or packet loss if
the delay is variable. If the end-to-end voice latency becomes too long (250 ms, for example), the
conversation begins to sound like two parties talking on a CB radio. If the delay is variable, there is a risk
of jitter buffer overruns at the receiving end. Eliminating drops and delays is even more imperative when
including fax and modem traffic over IP networks. If packets are lost during fax or modem transmissions,
the modems are forced to "retrain" to synchronize again. By examining the causes of packet loss and delay,
we can gain an understanding of why Quality of Service (QoS) is needed.

Network congestion can lead to both packet drops and variable packet delays. Voice packet drops from
network congestion are usually caused by full transmit buffers on the egress interfaces somewhere in the
network. As links or connections approach 100% utilization, the queues servicing those connections
become full. When a queue is full, new packets attempting to enter the queue are discarded.

Because network congestion is typically sporadic, delays from congestion tend to be variable in nature.
Egress interface queue wait times or large serialization delays cause variable delays of this type. Both of
these factors are discussed in the next section, "Delay and Jitter".

Delay is the time it takes for a packet to reach the receiving endpoint after being transmitted from the
sending endpoint. This time is termed the "end-to-end delay” and it consists of two components: fixed
network delay and variable network delay. Jitter is the delta, or difference, in the total end-to-end delay
values of two voice packets in the voice flow.

Fixed network delay should be examined during the initial design of the VoIP network. The International
Telecommunications Union (ITU) standard G.114 states that a one-way delay budget of 150 ms is
acceptable for high voice quality. Research at Cisco has shown that there is a negligible difference in voice
quality scores using networks built with 200-ms delay budgets. Examples of fixed network delay include
the propagation delay of signals between the sending and receiving endpoints, voice encoding delay, and
the voice packetization time for various VoIP codecs. Propagation delay calculations work out to almost
0.0063 ms/km. The G.729A codec, for example, has a 25 ms encoding delay value (two 10 ms frames + 5
ms look-ahead) and an additional 20 ms of packetization delay.
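
As an illustration only (the 3,000 km path length is an assumed figure, not an ST design value), these components give a rough fixed-delay budget of:

Fixed_Delay ≈ Propagation + Encoding + Packetization
            ≈ (3000 km * 0.0063 ms/km) + 25 ms + 20 ms
            ≈ 19 + 25 + 20 = 64 ms

leaving roughly 86 ms of a 150-ms budget (or 136 ms of a 200-ms budget) for the variable queuing, serialization and jitter-buffer components discussed below.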

Congested egress queues and serialization delays on network interfaces can cause variable packet delays.
Without Priority or Low-Latency Queuing (LLQ), queuing delay times equal serialization delay times as
link utilization approaches 100%. Serialization delay is a constant function of link speed and packet size.
As shown in Table 7, the larger the packet and the slower the link clocking speed, the greater the
serialization delay. While this is a known ratio, it can be considered variable because a larger data packet
can enter the egress queue before a voice packet at any time.

If the voice packet must wait for the data packet to serialize, the delay incurred by the voice packet is its
own serialization delay plus the serialization delay of the data packet in front of it. Using Link
Fragmentation and Interleave (LFI) techniques, serialization delay can be configured to be a constant delay
value.

Table 7 Serialisation delay [ms] as a function of link speed and packet size

Link speed \ packet size   64 bytes    128 bytes   256 bytes   512 bytes   1024 bytes   1500 bytes
56 kbps                    9           18          36          72          144          214
64 kbps                    8           16          32          64          128          187
128 kbps                   4           8           16          32          64           93
256 kbps                   2           4           8           16          32           46
512 kbps                   1           2           4           8           16           23
2048 kbps (E1)             0,25        0,5         1           2           4            5,8
34 Mbps (E3)               0,015       0,03        0,06        0,12        0,24         0,35
155 Mbps (STM-1)           3,3*10-3    0,006       0,013       0,026       0,052        0,077
622 Mbps (STM-4)           0,82*10-3   1,6*10-3    3,3*10-3    6,6*10-3    0,013        0,019
2.5 Gbps (STM-16)          0,2*10-3    0,4*10-3    0,82*10-3   1,6*10-3    3,3*10-3     4,8*10-3

Because network congestion can be encountered at any time within a network, buffers can fill
instantaneously. This instantaneous buffer utilization can lead to a difference in delay times between
packets in the same voice stream. This difference, called jitter, is the variation between when a packet is
expected to arrive and when it actually is received. To compensate for these delay variations between voice
packets in a conversation, VoIP endpoints use jitter buffers to turn the delay variations into a constant
value so that voice can be played out smoothly.

Cisco VoIP endpoints use DSP algorithms that have an adaptive jitter buffer between 20 and 50 ms, as
illustrated in the following picture. The actual size of the buffer varies between 20 and 50 ms based on the
expected voice packet network delay. These algorithms examine the timestamps in the Real-time Transport
Protocol (RTP) header of the voice packets, calculate the expected delay, and adjust the jitter buffer size
accordingly. When this adaptive jitter buffer is configured, a 10-ms portion of "extra" buffer is configured
for variable packet delays. For example, if a stream of packets is entering the jitter buffer with RTP
timestamps indicating 23 ms of encountered network jitter, the receiving VoIP jitter buffer is sized at a
maximum of 33 ms. If a packet's jitter is greater than 10 ms above the expected 23-ms delay variation (23
+ 10 = 33 ms of dynamically allocated adaptive jitter buffer space), the packet is dropped.

Figure 30 Adaptive jitter buffer

Voice quality is only as good as the quality of the weakest network link. Packet loss, delay, and delay
variation all contribute to degraded voice quality. In addition, because network congestion (or more
accurately, instantaneous buffer congestion) can occur at any time in any portion of the network, network
quality is an end-to-end design issue.

Call admission control is another important issue that needs to be considered. Call admission control is a
mechanism for ensuring that voice flows do not exceed the maximum provisioned bandwidth allocated for
voice conversations. After doing the calculations to provision the network with the required bandwidth to
support voice, data, and possibly video applications, it is important to ensure that voice does not
oversubscribe the portion of the bandwidth allocated to it. While most QoS mechanisms are used to protect
voice from data, call admission control is used to protect voice from voice. This is illustrated in the
following figure, which shows an environment where the network has been provisioned to support two
concurrent voice calls. If a third voice call is allowed to proceed, the quality of all three calls is degraded.
Call admission control should be external to the network.

Figure 31 - Call admission control

Interleaving mechanisms: FRF.12 or MLPPP / LFI


For low-speed WAN connections (in practice, those with a clocking speed of 1 Mbps or below), it is
necessary to provide a mechanism for Link Fragmentation and Interleaving (LFI). A data frame can be sent
to the physical wire only at the serialization rate of the interface. This serialization rate is the size of the
frame divided by the clocking speed of the interface. For example, a 1500-byte frame takes 214 ms to
serialize on a 56-kbps circuit. If a delay-sensitive voice packet is behind a large data packet in the egress
interface queue, the end-to-end delay budget of 150-200 ms could be exceeded. In addition, even relatively
small frames can adversely affect overall voice quality by simply increasing the jitter to a value greater
than the size of the adaptive jitter buffer at the receiver.

LFI tools are used to fragment large data frames into regularly sized pieces and to interleave voice frames
into the flow so that the end-to-end delay can be predicted accurately. This places bounds on jitter by
preventing voice traffic from being delayed behind large data frames, as illustrated in the following figure.
The two techniques used for this are FRF.12 for Frame Relay and Multilink Point-to-Point Protocol
(MLPPP) for point-to-point serial links.

Figure 32 LFI to reduce frame delay and jitter
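
As a rough sketch of the MLPPP LFI approach (the interface numbers are placeholders and the exact fragment-delay command syntax varies between IOS releases):

interface Multilink1
 ip address 192.168.10.1 255.255.255.252
 ppp multilink
 ! target a 10-ms maximum blocking delay per fragment
 ppp multilink fragment-delay 10
 ! allow small voice packets to be interleaved between data fragments
 ppp multilink interleave
!
interface Serial0/0
 encapsulation ppp
 ppp multilink
 multilink-group 1

A priority queuing mechanism (LLQ or RTP priority) would still be applied to the Multilink interface so that the interleaved voice packets are serviced first.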

A 10-ms blocking delay is the recommended target to use for setting the fragmentation size. To calculate the
recommended fragment size, multiply the target 10 ms of delay by the provisioned line clocking speed (in kbps)
and divide by 8 bits per byte, as follows:

Fragment_Size = (Max_Allowed_Jitter * Link_Speed_in_kbps) / 8

For example:

Fragment_Size = (10 ms * 56) / 8 = 70 bytes

The following table shows the recommended fragment size for various link speeds.

Table 8 Recommended fragment size

Link Speed (kbps)   Recommended fragment size (bytes)
56                  70
64                  80
128                 160
256                 320
512                 640
768                 960

Obviously, the fragmentation size should be set larger than the largest VoIP packet in order to ensure that
no VoIP packets get fragmented.

When using FRF.12 as an LFI mechanism on a Frame Relay access link, traffic shaping (either FRTS or
dTS) becomes mandatory. Enabling FRF.12 has an impact on the FRTS/dTS shaping parameters,
since it adds 4 bytes of overhead to each fragment (2 bytes of FRF.12 overhead and 2 bytes of Cisco
encapsulation overhead). The FRTS implementation takes this additional overhead into account (but
still not the FCS and flag overhead), whereas the dTS implementation does not take the additional FRF.12 /
Cisco encapsulation overhead into account. This is because FRF.12 runs in distributed mode on the VIP (dFRF.12).
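
A minimal FRF.12 sketch for a 256-kbps Frame Relay PVC might look as follows (the DLCI, CIR, Bc and class name are illustrative and would be adapted during staging):

interface Serial0/0
 encapsulation frame-relay
 ! FRTS is mandatory when FRF.12 fragmentation is enabled
 frame-relay traffic-shaping
!
interface Serial0/0.1 point-to-point
 frame-relay interface-dlci 100
  class VOIP-256K
!
map-class frame-relay VOIP-256K
 frame-relay cir 256000
 frame-relay bc 2560
 ! 10 ms x 256 kbps / 8 = 320-byte fragments (see Table 8)
 frame-relay fragment 320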

Delay Model
The delay model for an IP packet consists of the sum of the individual delays of the nodes and links that are
part of the end-to-end connection. The main factors that determine the overall end-to-end delay are
typically:
• Serialisation delay of narrow-band links
• Propagation delays of long distance connections
• Queuing delay in case of congestion situations

All times have to be described statistically, and must be seen as average in a certain time period.

Table 9 The components of the end-to-end delay model

Decision Delay (TDecision): The time required in a node to decide which interface a packet should go out of. There can be a
dependence on node utilisation, but in general on the high-end platforms TDecision < 1 ms.

Queuing Delay (TQueuing): Queuing delay has variable dependencies: queue length, queuing mechanism, line utilisation,
platform and CPU utilisation. During times of non-congestion there is no queuing delay; once congestion occurs, the extra CPU
cycles required to manage the scheduling have a small impact on the delay variation in the network.

Serialisation Delay (TSerialisation): The time necessary to put a packet of a certain size on a line of a certain speed (please see
Table 7).

Transmit Buffer Delay (TTransmit): On the egress interface a single buffer exists which additionally influences the transmit
delay. This buffer is used to control the various queuing mechanisms (CBWFQ/MDRR) in front of the transmit queue, by using a
threshold. The length of this queue can be configured. A suitable set-up has to be decided upon to minimise delay and maximise
efficiency.

Propagation Delay (TPropagation): Describes the speed of light in a fibre, which is about 6 ms per 1000 km (2/3 c0).

Node Delay (TNode): The node delay summarises all node-dependent delays per node:
TNode = TDecision + TQueuing + TTransmit

Link Delay (TLink): The link delay summarises all link-dependent delays per link:
TLink = TSerialisation + TPropagation

Core Delay (TCore): The core delay summarises all core-dependent delays, i.e. all node and link delays inside the core. This
includes PE routers, P routers and the links in between. Summarising node and link delay for the core simplifies the delay model:
TCore = Σ(core) TNode + Σ(core) TLink

Access Delay (TAccess): The access delay summarises all access-dependent delays, i.e. all node and link delays in the access
network. This includes CE routers, PE routers and the links in between. Summarising node and link delay for the access network
simplifies the delay model:
TAccess(x) = Σ(access x) TNode + Σ(access x) TLink

End-to-End Delay (TEnd-to-End): The end-to-end delay is defined by the following formula:
TEnd-to-End = TAccess(local) + TCore + TAccess(remote)

Figure 33 Overview of end-to-end delay segments.

[Figure content: across the CE - PE - P - P - PE - CE path, TDecision, TQueuing and TTransmit apply per node and TSerialisation and TPropagation per link; these node and link delays are grouped into the TAccess (local), TCore and TAccess (remote) segments.]

QoS in an MPLS network


MPLS is a technology allowing multi-service networking in an IP environment. In MPLS, QoS
information is carried in the EXP bits of the MPLS header of frame-based MPLS packets. The MPLS EXP
field is only three bits long, while the DSCP field is six. Therefore not all the information is copied
directly from the IP DSCP field into the MPLS EXP field. Only the class selector (the three most significant
bits) is copied into the MPLS EXP bits by default, as demonstrated in the following figure.

Figure 34 DSCP to EXP mapping

[Figure content: the DSCP codepoint field occupies the first six bits of the ToS byte in the IP L3 header (the remaining two bits are unused), with the class selector in the three most significant bits; by default this class-selector codepoint is copied from the DSCP into the MPLS EXP bits (bits 20-22) of the MPLS header.]

Figure 35 demonstrates the DSCP/EXP location; the MPLS header is pre-pended to the front of the IP packet. It is
also feasible that multiple labels are added to the front of the IP packet instead of the one demonstrated in
the drawing (e.g. MPLS/VPN label, TE label, FRR label). In such a case, the QoS features in MPLS core
devices shall only look at the EXP bits of the top-most label, as the DSCP and “inner” labels in the label
stack may carry customer-defined classes of service.

Figure 35 DSCP / MPLS Headers

[Figure content: in the IPv4 domain the packet carries DSCP bits "abcd…"; in the MPLS domain a label is pre-pended and the top three DSCP bits "ab…" are carried in the EXP field of the label.]
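
Where the default class-selector copy described above is not sufficient, the EXP value can be set explicitly with MQC on the PE at label imposition. A minimal sketch, with illustrative class and interface names:

class-map match-all VOICE
 match ip dscp 46
!
policy-map PE-IMPOSITION
 class VOICE
  ! set the EXP bits of the imposed label(s) to 5
  set mpls experimental 5
!
interface GigabitEthernet3/0
 service-policy input PE-IMPOSITION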

DiffServ Aware TE

ST QoS design – An Overview


The following table and figure give an overview of the various QoS mechanisms that are used in the ST
MPLS network.

The various QoS mechanisms and their detailed configuration will be discussed in detail in the subsequent
sections. Detailed configuration templates will be derived during staging procedures. It should be
understood that the IP addresses, DLCI numbers, VPI / VCI numbers, ACL numbers, etc, have been taken
for the sake of examples and should be adapted to the specific requirements of the ST network.

It is Cisco’s experience that a Quality of Service design and deployment is never a straightforward process:
after an initial deployment, a performance assessment phase and subsequent tuning of the QoS
deployment is a necessity. Therefore, we strongly recommend a tuning phase while beta customers are
connected.

Table 10 CoS Mechanisms Overview

Marketing Class              Standard            Business         Streaming        Voice           Routing      Management
                             Best Effort data    Business data    Multimedia       VoIP            updates      (e.g. SNMP)
                             (e.g. http)         (e.g. SNA)       (e.g. Video)

QoS Mechanism
PHB                          BE                  AF11             AF31             EF              CS5          CS6
DSCP                         0                   10               26               46              48           48
EXP                          0                   1                3                5               6            6
Max. % of link BW            25%                 25%              25%              25%             -            -
Queue Length                 long                medium           short            very short      medium       medium
Classification   CE          any non-classified  ACL 100          ACL 101          ACL 102         -            ACL 103
                             packet
                 PE          DSCP                DSCP             DSCP             DSCP            -            ACL 103
                 P           EXP                 EXP              EXP              EXP             -            -
Marking          CE          MQCLI               MQCLI            MQCLI            MQCLI           -            LPR
                 PE          -                   -                -                -               -            LPR
                 P           -                   -                -                -               -            -
Policing         CE          -                   MQCLI            MQCLI            MQCLI           -            MQCLI
                 P/PE        -                   -                -                -               -            -
Class Queuing    Access      class-default       business         streaming        voice (LLQ)     mgmt         mgmt
                 Core        class-default       business         streaming        voice (LLQ)     business     business
Congestion       CE, PE      DSCP WRED           DSCP WRED        DSCP WRED        Tail drop       DSCP WRED    DSCP WRED
Avoidance        P           EXP WRED            EXP WRED         EXP WRED         Tail drop       EXP WRED     EXP WRED
                                                                                   (minTH=maxTH)

The drawing below displays an overview of QoS mechanisms used in the ST network. The following
chapters will detail the QoS design on a hop-by-hop basis, following the packet from source (left CE) to its
destination (right CE).

Figure 36 QoS mechanisms overview
[Figure: on the managed CEs: classification (ACL), marking (CAR, DSCP), policing (MQCLI), DSCP-based LLQ queuing, WRED and DSCP-based congestion management; on the 10k / 7206VXR PEs: DSCP and QoS-group classification, automatic DSCP-to-EXP marking, DSCP-based queuing, LLQ and WRED; on the GSR P routers: EXP classification, EXP-based MDRR queuing and EXP-based congestion management (to-fabric and to-interface); shown along the LAN - CE - PE - P - PE - CE - LAN path.]

CE-to-PE QoS mechanisms (applied on the CE) – PPP or HDLC

Classification
On the CE, packets will be classified with extended access lists (ACLs). These ACLs can match packets on
IP S/D address, protocol type, and UDP/TCP port numbers.

The ACLs for Business (100), Streaming (101) and Voice (102) traffic should be agreed with the customer. Any non-classified traffic will go into the Standard traffic class, which is implemented as class-default in the MQC definition.

The following is an example ACL for Voice traffic:

!
! Voice bearer (RTP)
!
access-list 102 permit udp any any range 16384 32767
!
! Voice signalling - MGCP
!
access-list 102 permit udp any any eq 2427
access-list 102 permit tcp any any eq 2428
!
! H.323 call signalling (H.225)
!
access-list 102 permit tcp any any eq 1720
!
! H.323 voice control traffic
!
access-list 102 permit tcp any any range 11000 11999
!

The ACL for Management (103) traffic should match SNMP, TFTP, Telnet and any other required traffic to and from the network management systems' IP address range.

!
access-list 103 permit tcp any any eq bgp
access-list 103 permit udp any any eq rip
access-list 103 permit tcp any <NOC_lan> eq telnet
access-list 103 permit udp any <NOC_lan> eq snmp
access-list 103 permit udp any <NOC_lan> eq tftp
!

Voice signalling traffic will need to be classified and marked appropriately. Depending on the customer
VoIP implementation, the different possibilities are:
• RTCP: odd RTP port numbers
• H.323 / H.245 standard connect: TCP 11xxx
• H.323 / H.245 fast connect: TCP 1720
• H.323 / H.225 RAS: TCP 1719
• Skinny control traffic: TCP 2000-2002
• ICCP: TCP 8001-8002
• MGCP: UDP 2427, TCP 2428

Depending on the actual signalling method used (packet sizes), the speed of the access links and the number of concurrent voice call set-ups that need to be supported, two possible design options can be taken with regard to the queuing method used.
• Queue the voice signalling packets in the same PQ as the actual voice bearer packets. This will result in a simpler design but could delay the transmission of some of the voice bearer packets (depending on the voice signalling packet size, access link speed and number of concurrent voice call set-ups). This could then have an impact on the voice delay / jitter.
• Queue the voice signalling packets in another normal class queue. This should ideally be a separate
class queue from the ones that are used for regular data traffic to ensure delivery of the voice signalling
packets. This will result in a more complicated design where bandwidth needs to be allocated for the
voice signalling class. Also, voice signalling packets might be delayed through the network resulting in
a delay in the voice call set-up process. The advantage is that the actual voice quality will not be
impacted as no voice signalling packets will travel in the PQ.

Testing has indicated that, without cRTP (Compressed Real Time Protocol) enabled, the effect of mapping VoIP signalling packets together with the VoIP bearer packets in the same priority queue is negligible. The signalling packets have little effect on latency, nor do they cause any drops, due to the default burst size of 200 ms that has been built into the priority queue. Therefore, the design recommendation is to match the VoIP signalling packets with ACL 102 and queue them together with the VoIP bearer packets in the priority queue.

It should however be understood that VoIP signalling implementations differ and that some might have a
negative effect on the performance of the priority queue. In that event, the VoIP signalling traffic needs to
be mapped in another class queue (Business, for example).

The classified traffic will subsequently be mapped into the respective classes using the MQCLI. The Standard traffic will not match any of the classes and will be mapped into the default class (class-default). A maximum of 64 classes can be defined on a single router.

!
class-map match-all business
match access-group 100
class-map match-all streaming
match access-group 101
class-map match-all voice
match access-group 102
class-map match-any management
match access-group 103
!

Marking
After classification, packets need to be marked with their appropriate IP precedence or DSCP value. The following is the required configuration for Class Based Marking on the CE router.

Marking of the Business, Streaming and Voice classes is actually configured through the MQCLI police command, because these classes need to be policed to the SLA limits.

The Standard traffic class is not policed, hence we can mark all of its traffic with the MQCLI "set ip dscp" command.

!
policy-map customer_profile
class business
police 128000 8000 16000 conform-action set-dscp-transmit 10 exceed-action drop
class streaming
police 64000 2000 2000 conform-action set-dscp-transmit 26 exceed-action drop
class voice
police 64000 2000 2000 conform-action set-dscp-transmit 46 exceed-action drop
class management
police 24000 8000 16000 conform-action transmit exceed-action drop
class class-default
set ip dscp 0
!

The following is the required configuration for LPR marking of the locally generated management traffic.
As discussed before, ACL 103 matches all management traffic.

!
ip local policy route-map management
!
route-map management permit 10
match ip address 103
! here we simulate the set ip dscp 48 command
set ip precedence 6
set ip tos 0
!

In/Out-Contract Traffic Profile


The in/out-contract design is less restrictive than simple policing of class bandwidth to the SLA limit, because it allows the customer to exceed the subscribed class-BW thresholds when other classes on the CE-PE link are underutilised. This is because, in the CBWFQ queuing strategy, the bandwidth of underutilised traffic classes can be consumed by other classes, proportionally to their respective configured class bandwidths.

However, the design option described in this chapter has not been recommended to ST, because it
involves fairly complex implementation, provisioning and monitoring. It introduces complexity not
only on access layer, but QoS implementation in the core has to support it as well.

Instead of policing in each of the traffic classes, it is possible to introduce a mechanism of in/out contract marking for the Business and Streaming traffic classes. The main reasons for introducing such a mechanism are twofold:
• In an MPLS / VPN environment, well-behaving customer sites should not be penalised by ill-behaving customer sites. A well-behaving customer site is a site which sends traffic into the network below the Ingress Committed Rate (ICR), on a per traffic class basis. An ill-behaving site sends traffic into the network above the ICR for a particular traffic class. The problem is that, if a
well behaving site and an ill behaving site both send traffic to a third site, congestion might occur on
the egress PE to that site. If there is no way of differentiating between the “well behaving” traffic and
“ill behaving” traffic, traffic from the well behaving site might be dropped instead of traffic from the ill
behaving site. The introduction of an in / out contract traffic marking mechanism at the ingress CE will
prevent this.
• The introduction of in / out contract traffic profiles will facilitate the capacity planning of the backbone
network which is shared among the different MPLS / VPN customers. Indeed, the shared backbone
network needs to be engineered and capacity planned only for the in-contract part of the customer
traffic. When, in a second phase, QoS mechanisms are deployed in the core backbone network due to
possible backbone congestion, it will be possible to differentiate the out-contract traffic from the in-
contract traffic and as a result, discard the out-contract traffic earlier.

The following would be the required configuration for Police marking of the Business, Streaming and
Voice traffic classes in ST network:
• The in-contract Business traffic is marked as AF11 (DSCP 10). The out-contract Business traffic is
marked as AF21 (DSCP 18).
• The in-contract Streaming traffic is marked as AF31 (DSCP 26). The out-contract Streaming traffic is
marked as AF41 (DSCP 34).
• The Voice traffic is marked as EF (DSCP 46). The notion of out-contract traffic does not apply to jitter-
sensitive Voice class (WRED is not applicable in LLQ).

!
policy-map customer_profile
class business
police 128000 8000 16000 conform-action set-dscp-transmit 10 exceed-action set-dscp-transmit 18
class streaming
police 64000 2000 2000 conform-action set-dscp-transmit 26 exceed-action set-dscp-transmit 34
class voice
police 64000 2000 2000 conform-action set-dscp-transmit 46 exceed-action drop
!

The following figure depicts the in/out-contract marking in the Business and Streaming traffic classes. As previously described, any packets beyond the subscribed bandwidth of the Business class would be re-coloured and subject to a more aggressive WRED dropping profile.

Figure 37 In/Out-contract Marking and Policing (example for Business class)
[Figure: MQCLI marking colours traffic below the SLA limit as in-contract (DSCP 10) and re-colours traffic above the SLA limit as out-contract (DSCP 18); WRED congestion management then drops out-contract traffic before any in-contract packet.]

The following picture shows another marking/policing alternative with two SLA limits:
• if the traffic rate exceeds the SLA Limit, traffic is re-coloured as out-contract and sent to the wire;
• if the traffic rate then exceeds the Drop Limit, packets are unconditionally dropped.

This design variation can be implemented through two cascaded CAR statements. The first CAR statement will mark the in-contract traffic below the first rate threshold. The second CAR statement will mark the out-contract traffic between the first and second rate thresholds and will also drop the traffic above the second rate threshold.

The following is the required configuration for CAR policing (dropping) of Business and Streaming traffic
classes above a second rate threshold. In this particular example, the Business in-contract traffic is limited
to 128 Kbps, and the Business out-contract traffic is limited to 256 Kbps. The Streaming in-contract traffic
is limited to 64 Kbps, and the Streaming out-contract traffic is limited to 96 Kbps.
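Note that these aggregate ceilings follow from the cascaded structure: traffic exceeding the first rate-limit statement "continues" to the second one, so the total forwarded rate is bounded by the sum of the two configured rates. A minimal Python sketch of this arithmetic, using the example values of this section:

def cascaded_car_ceiling(first_rate_kbps: int, second_rate_kbps: int) -> int:
    # With two cascaded CAR statements (first: mark in-contract, exceed-action
    # continue; second: mark out-contract, exceed-action drop), the total
    # forwarded rate is bounded by the sum of the two configured rates.
    return first_rate_kbps + second_rate_kbps

assert cascaded_car_ceiling(128, 128) == 256   # Business: 128 kbps in + 128 kbps out
assert cascaded_car_ceiling(64, 32) == 96      # Streaming: 64 kbps in + 32 kbps out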

This can also be implemented using a two-rate policer as described in
http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122newft/122t/122t4/ft2rtplc.htm, but
this method is currently not recommended due to the relative immaturity of the 12.2T IOS release.

Please note that Voice traffic is still policed above the first (SLA-limit) threshold.

!
interface Serial0/1
bandwidth 512
rate-limit output access-group 100 128000 8000 16000 conform-action set-dscp-transmit 10 exceed-action continue
rate-limit output access-group 100 128000 8000 16000 conform-action set-dscp-transmit 18 exceed-action drop
rate-limit output access-group 101 64000 2000 2000 conform-action set-dscp-transmit 26 exceed-action continue
rate-limit output access-group 101 32000 2000 2000 conform-action set-dscp-transmit 34 exceed-action drop
rate-limit output access-group 102 64000 2000 2000 conform-action set-dscp-transmit 46 exceed-action drop
encapsulation ppp
clockrate 512000
!

Figure 38 CAR based In/Out-contract Marking and Policing
[Figure: CAR marking colours traffic below the SLA Limit as in-contract (DSCP 10), re-colours traffic between the SLA Limit and the Drop Limit as out-contract (DSCP 18), and drops traffic above the Drop Limit; WRED congestion management then drops out-contract traffic before any in-contract packet.]

Policing
Policing in the Voice traffic class is configured to provide rudimentary call admission control by limiting voice traffic levels into the core network. The policing is carried out by the exceed-action option at the end of the police command. Any traffic beyond the bandwidth of the expected number of voice calls will not be forwarded. If a customer attempts to exceed this limit, all the calls flowing through that specific CE-PE connection could suffer quality degradation. However, this effect is much better than a single customer affecting all the other customers in the ST network sharing a specific backbone link.

The Business and Streaming traffic classes will also be policed to subscribed SLA limits using MQCLI
police commands. A few important points surrounding the policing implementation should be understood.
• Policing propagates bursts to a certain extent. It does not shape the traffic flow and as such does not
cause any packet delay.
• Police bandwidths need to be configured in 8 Kbps multiples. This needs to be reflected in the ST
service offerings.
• Compared to CAR, police bandwidths include some layer-2 overhead (please see the Class Queuing
chapter for details).

The police configuration requires the setting of the <normal-burst> NB and <excess-burst> EB parameters. These parameters are used in the policer's token bucket algorithm.

For TCP oriented classes such as Business class, the recommended settings for rate limit normal and
excess burst are:

NB = max(8000, {RTT x Committed Rate in Bytes})


EB = 2 x NB
where RTT is ~ 0.05s

The calculation result is rounded to the nearest 1000-byte boundary. The following table identifies the recommended NB and EB values as a function of the access link speed.
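The following Python sketch (illustrative only) reproduces the recommendation above; the values in Table 11 below correspond to RTT = 0.05 s with the committed rate taken as the access link bandwidth:

def police_bursts(rate_kbps: float, rtt_s: float = 0.05):
    # NB = max(8000, RTT * committed rate in bytes per second), EB = 2 * NB
    rate_bytes_per_s = rate_kbps * 1000 / 8
    nb = max(8000, int(rtt_s * rate_bytes_per_s))
    return nb, 2 * nb

assert police_bursts(512) == (8000, 16000)      # matches Table 11
assert police_bursts(2048) == (12800, 25600)
assert police_bursts(34368) == (214800, 429600)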

Table 11 NB and EB settings

Link BW [kbps]    NB [byte]    EB [byte]
64                8000         16000
128               8000         16000
256               8000         16000
512               8000         16000
1024              8000         16000
2048              12800        25600
34368             214800       429600
100000            625000       1250000
155520            972000       1944000

The recommended settings for the rate limit normal and excess burst for VoIP oriented classes such as the Voice class are:

NB = 2000
EB = NB (CBR-like policer to avoid jitter)

The following is a policing configuration example on a 512 kbps link. Please note that the configured police limits shall match the definition of class bandwidths in each of the traffic classes.

!
!
policy-map customer_profile
class business
police 128000 8000 16000 conform-action set-dscp-transmit 10 exceed-action drop
class streaming
police 64000 2000 2000 conform-action set-dscp-transmit 26 exceed-action drop
class voice
police 64000 2000 2000 conform-action set-dscp-transmit 46 exceed-action drop
class management
police 24000 8000 16000 conform-action transmit exceed-action drop
class class-default
! Standard class is not policed
set ip dscp 0
!
interface Serial0/1
description CE-PE link
bandwidth 512
encapsulation ppp
max-reserved-bandwidth 95
service-policy output customer_profile
clockrate 512000
!

Class Queuing
Queuing within the classes is implemented through Low Latency Queuing (LLQ). LLQ is in fact the combination of Class Based Weighted Fair Queuing (CBWFQ) and Priority Queuing (PQ). The PQ is used for delay sensitive traffic such as VoIP. LLQ is configured through the MQCLI.

Different traffic classes (a maximum of 64 traffic classes can be defined on a single router) can be combined in a service policy, which in effect forms a traffic profile. Each of the classes in the service policy will be assigned a minimum bandwidth according to the service contract that has been agreed with the customer. The smallest minimum bandwidth that can be configured is 8 Kbps [6]. Under congestion, each of the traffic classes will have its minimum bandwidth available:
classes will have this minimum bandwidth available:
• If one class is congested (and so experiences delay), the congestion is isolated from other classes,
which still have a guaranteed minimum share of the link bandwidth.
• If one class is under-utilised, other classes can use the available bandwidth [7]. All flows and classes get a
proportionate share of the spare bandwidth. The proportion is dictated by the configured bandwidth for
classes where the higher the allocated bandwidth, the higher the proportion allocated. For flow-based
weighted fair queuing, configurable in the default-queue, the proportion of available bandwidth is
allocated based on the precedence of the packets where the packets with the highest precedence values
get the highest proportion of bandwidth.

This enables worst-case bounds on delay and jitter to be designed independently between the classes whilst
preventing any single class from being starved by over utilisation on other classes. Also, other parameters
like congestion avoidance and control parameters can be configured on a per-class basis. This will be
discussed further on.

The sum of the minimum bandwidths reserved for the customer traffic classes needs to be lower than the
total link bandwidth. Some bandwidth needs to be reserved for management traffic and routing traffic.
Since ST will offer a managed service, it needs to keep control over the CEs, even under congestion
circumstances. Also the routing traffic (BGP or RIP in this case) needs to have some minimum bandwidth available (8 Kbps or 1%, whichever is larger).

It should also be understood that the minimum bandwidths configured through MQCLI include the following layer 2 overhead, in contrast with CAR, which only accounts for pure layer 3 IP bandwidth. Overhead added by the hardware (CRC, flags) is not included in the MQCLI bandwidths [8].
• The 8 bytes of SNAP/LLC overhead and 4 bytes of the 8-byte AAL5 trailer for ATM interfaces (the
remaining 4 bytes of the AAL5 trailer CRC are not taken into account). AAL5 padding is equally not
taken into account. The ATM cell overhead (5 bytes per cell payload of 48 bytes) is not taken into
account.
• The 4-byte Frame Relay overhead for Cisco Frame Relay encapsulation (additional overhead due to
possible FRF.12 headers is not taken into account). CRC and flags overhead is not taken into account.
• The 2 bytes of PPP encapsulation overhead.

Also, all reports will indicate the configured rates, i.e. including the L2 overhead. It is worth considering whether ST should include the L2 overhead in traffic contracts with customers. This would ensure consistency between the contracted bandwidths and the performance reports.

After defining the service policy in a policy-map, it needs to be applied on an interface (service-policy).

[6] On 10k series the granularity is 1/255th of the link bandwidth.
[7] Except on 10k and 12000 series, where LLQ is policed to the configured class bandwidth.
[8] Except on 10k series, where MQCLI on ATM interfaces includes all layer-2 overhead.
By default, on the non-distributed router platforms (non VIP based), the sum of the minimum bandwidths needs to be lower than 75% of the configured access bandwidth. Since the actual required sum of minimum bandwidths will probably be larger, this default parameter setting can be changed (max-reserved-bandwidth) up to 100%. However, it is also a very good design practice not to push the design boundaries to the edge without allowing for any margin of error or unexpected traffic patterns. Therefore, it is still recommended to keep the sum of all minimum bandwidths below 100%. Keeping the sum of all minimum bandwidths around 95% will allow for unaccounted traffic such as layer 2 overhead, layer 2 keepalives, LMI (in the case of Frame Relay), etc.

The following is the sample configuration for LLQ class queuing. Class bandwidths can be configured in
[kbps] or [%] of (max-res-bw – voice-bw).

On 10000 series routers, the cumulative bandwidth applied to traffic classes must not exceed 99% of the link bandwidth. The bandwidth is configurable in steps of 1/255 of the link (or PVC) bandwidth. This rule must be respected when configuring the class bandwidths on the CE router.

!
policy-map customer_profile
class business
bandwidth percent 30
class streaming
bandwidth percent 20
class voice
priority 64
class management
bandwidth percent 5
class class-default
bandwidth percent 45
!
interface Serial0/1
bandwidth 512
encapsulation ppp
max-reserved-bandwidth 95
service-policy output customer_profile
clockrate 512000
!

In the configuration template above, the Voice traffic class has been allocated 64 kbps of link capacity. The "priority" command guarantees bandwidth to the priority class and restrains the flow of packets from the priority class: when the link is not congested, the priority class traffic is allowed to exceed its allocated bandwidth. When the device is congested, the priority class traffic above the allocated bandwidth is discarded (but we will police it to the contractual Voice class bandwidth).

The Business, Streaming, Management and Standard classes will share the remaining max-reserved-bandwidth as configured. For example, the Streaming traffic class will receive a minimum bandwidth of ((512 * 95%) - 64) * 20% = 84 kbps in congestion periods.
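The per-class arithmetic above can be summarised with a short Python sketch (illustrative only, under the assumptions of this example: the percentage applies to the reservable bandwidth remaining after the priority class allocation):

def min_class_bw_kbps(link_kbps, max_res_pct, priority_kbps, class_pct):
    # Minimum guaranteed bandwidth of a non-priority class configured with
    # 'bandwidth percent' under LLQ.
    reservable = link_kbps * max_res_pct / 100.0
    return (reservable - priority_kbps) * class_pct / 100.0

print(round(min_class_bw_kbps(512, 95, 64, 20)))   # Streaming: ~84 kbps
print(round(min_class_bw_kbps(512, 95, 64, 30)))   # Business:  ~127 kbps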

Congestion avoidance
Congestion avoidance techniques monitor network traffic loads in an effort to anticipate and avoid
congestion at common network bottlenecks. Congestion avoidance is achieved through packet dropping.
Among the more commonly used congestion avoidance mechanisms is Random Early Detection (RED),
which is optimum for high-speed transit networks. Cisco IOS QoS includes an implementation of RED

that, when configured, controls when the router drops packets. If there is no Weighted Random Early
Detection (WRED) configured, the router uses the cruder default packet drop mechanism called tail drop.

WRED combines the capabilities of the RED algorithm with the IP Precedence feature. Within the section on WRED, the following related features are discussed:
• Tail Drop. Tail drop is the default congestion avoidance behaviour when WRED is not configured.
Tail drop treats all traffic equally and does not differentiate between classes of service within the same
queue. Queues fill during periods of congestion. When the output queue is full and tail drop is in effect,
packets are dropped until the congestion is eliminated and the queue is no longer full.
• Weighted Random Early Detection. WRED avoids the global synchronisation problems that occur when tail drop is used as the congestion avoidance mechanism on the router. Global synchronisation occurs as waves of congestion crest only to be followed by troughs during which the transmission link is not fully utilised. Global synchronisation of TCP hosts, for example, can occur because packets are dropped all at once. Global synchronisation manifests when multiple TCP hosts reduce their transmission rates in response to packet dropping, then increase their transmission rates once again when the congestion is reduced.

About Random Early Detection


The RED mechanism was proposed by Sally Floyd and Van Jacobson in the early 1990s to address
network congestion in a responsive rather than reactive manner. Underlying the RED mechanism is the
premise that most traffic runs on data transport implementations that are sensitive to loss and will
temporarily slow down when some of their traffic is dropped. TCP, which responds appropriately—even
robustly—to traffic drop by slowing down its traffic transmission, effectively allows the traffic-drop
behavior of RED to work as a congestion-avoidance signalling mechanism.

TCP constitutes the most heavily used network transport. Given the ubiquitous presence of TCP, RED
offers a widespread, effective congestion-avoidance mechanism. The minimum threshold value should be
set high enough to maximise the link utilisation. If the minimum threshold is too low, packets may be
dropped unnecessarily, and the transmission link will not be fully used.

The difference between the maximum threshold and the minimum threshold should be large enough to
avoid global synchronisation of TCP hosts (global synchronisation of TCP hosts can occur as multiple TCP
hosts reduce their transmission rates). If the difference between the maximum and minimum thresholds is
too small, many packets may be dropped at once, resulting in global synchronisation.

Random drops occur once the average queue length exceeds the minimum threshold. When the average queue length equals the maximum threshold, the drop probability equals the maximum drop probability value. When the average queue length is greater than the maximum threshold, all packets are dropped.

Weighted random early detection


WRED makes early detection of congestion possible and provides for multiple classes of traffic. It also
protects against global synchronisation. For these reasons, WRED is useful on any output interface where
congestion is expected to occur.

However, WRED is usually used in the core routers of a network, rather than at the edge of the network.
Edge routers assign IP precedence to packets as they enter the network. WRED uses this precedence to
determine how to treat different types of traffic.

WRED provides separate thresholds and weights for different IP precedence values, providing the ability to deliver different qualities of service in terms of packet dropping for different traffic types. Standard traffic may be dropped more frequently than premium traffic during periods of congestion.
DiffServ compliant WRED
DiffServ Compliant WRED extends WRED to support Differentiated Services (DiffServ) and Assured
Forwarding (AF) Per Hop Behavior (PHB). This feature enables customers to implement AF PHB by
coloring packets according to differentiated services code point (DSCP) values and then assigning
preferential drop probabilities to those packets.

The dscp-based argument enables WRED to use the DSCP value of a packet when it calculates the drop
probability for the packet. The prec-based argument enables WRED to use the IP Precedence value of a
packet when it calculates the drop probability for the packet. After enabling WRED to use the DSCP value,
you can then use the new random-detect dscp command to change the minimum and maximum packet
thresholds for that DSCP value.

MPLS compliant WRED


The MPLS Compliant WRED feature enables WRED to use the MPLS EXP value when it calculates the drop probability for a packet. The MPLS EXP value consists of the 3 experimental bits in the label header.

MPLS-based WRED is automatically enabled if the transmitted packet has an MPLS header, and it uses the same threshold values as the precedence-based configuration.

WRED operation
WRED is a congestion avoidance and control mechanism whereby packets will be randomly dropped when
the average class queue depth reaches a certain minimum threshold (min-threshold). As congestion
increases, packets will be randomly dropped (and with a rising drop probability) until a second threshold
(max-threshold) where packets will be dropped with a drop probability equal to the mark-probability-
denominator. Above max-threshold, packets are tail-dropped.

The following picture depicts the WRED algorithm.

Figure 39 WRED Algorithm
[Figure: drop probability as a function of the average class queue length; it is zero up to minTH, rises between minTH and maxTH, and reaches its maximum at maxTH.]

WRED will selectively instruct TCP stacks to back-off by dropping packets. Obviously, WRED has no
influence on UDP based applications (besides the fact that their packets will be dropped equally).

The average queue depth is calculated using the following formula:

new_average = old_average * (1 - 2^-e) + current_queue_depth * 2^-e

The “e” is the “exponential weighting constant”. The larger this constant, the slower the WRED algorithm
will react. The smaller this constant, the faster the WRED algorithm will react. The exponential weighting
constant can be set on a per-class basis. The min-threshold, max-threshold and mark probability
denominator can be set on a per precedence or per DSCP basis.

The mark probability denominator should always be set to 1 (100 % drop probability at max-threshold).
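A minimal Python sketch of the behaviour described above (not the IOS implementation): the average queue depth is an exponentially weighted moving average, and the drop probability rises from zero at min-threshold to 1/mpd at max-threshold, with tail drop above max-threshold.

def wred_average(old_avg: float, current_depth: int, e: int) -> float:
    # new_average = old_average * (1 - 2**-e) + current_queue_depth * 2**-e
    w = 2.0 ** -e
    return old_avg * (1.0 - w) + current_depth * w

def wred_drop_probability(avg: float, min_th: int, max_th: int, mpd: int = 1) -> float:
    if avg < min_th:
        return 0.0                 # no random drops below min-threshold
    if avg >= max_th:
        return 1.0                 # tail drop above max-threshold
    # linear ramp from 0 at min-threshold to 1/mpd at max-threshold
    return (avg - min_th) / (max_th - min_th) / mpd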

WRED design objective in ST


WRED will be applied on the Business, Streaming and Standard traffic classes.

In order to reduce the packet delay and jitter in the Streaming class, smaller min-threshold and max-
threshold values will be used compared to the Business and Standard classes.

In order to reduce the packet loss in the Business class, larger min-threshold and max-threshold values will
be used compared to the Streaming class.

Again, it should be stressed that tuning QoS parameters is never a straightforward process and the results depend on a large number of factors, including the offered traffic load and profile, the ratio of load to available capacity, the behaviour of end-system TCP stacks in the event of packet drops, etc. Therefore, it is strongly recommended to test these settings in a testbed environment using expected customer traffic profiles and to tune them, if required. In addition, after an initial production beta deployment, a performance assessment phase and subsequent tuning of the QoS deployment is a necessity.

Minimum and Maximum Thresholds


Each queue has a required length to serve its purpose of attempting to maintain specific maximum delay
values. Depending on the service that will be using a specific queue, one may want to increase or decrease
the time that packets are allowed in a queue before WRED starts dropping.

Different queue lengths have been selected for each of the defined classes. Each class serves data with distinct delay, jitter and packet loss sensitivities, which dictate how long a queue may grow before packets are dropped.

The Business traffic class will be servicing mostly TCP data that is somewhat sensitive to delay but more so to packet loss, hence the medium sized queue. The Streaming traffic class will be serving data such as streaming video based on UDP that is sensitive to delay but less so to packet loss; a short queue allows us to estimate the end-to-end delay. The Standard traffic class serves best effort data without a specific maximum end-to-end delay or packet loss requirement. A long queue that starts dropping earlier than other queues, but at a lower ratio because of a shallower RED curve, is therefore ideal.

The values used below are estimate values and must be adjusted once ST has a better understanding
of their traffic patterns and quality of service.

The minimum and maximum WRED threshold values are calculated on the basis of the allocated class bandwidth and not on the link bandwidth. This will yield the most realistic results. The following generic formula is used to derive WRED thresholds based on the maximum allowed delay:

maxTH [pkt] = delay [s] * classBW [byte/s] / MTU [byte] = delay [s] * B [pkt/s]

The minimum and maximum queue thresholds for each of the service classes will be calculated as follows:

Business Class – Medium Queue – Max per-hop delay 100ms:


Min-threshold = 0.03 x B
Max-threshold = 0.1 x B
With B representing the class bandwidth in MTU-sized packets per second. For the ST MPLS network an MTU size of 1500 bytes is assumed. On the core trunks the management traffic will be carried in the Business class. For obvious reasons we have to protect the management traffic from the customers' traffic flows with a less aggressive packet drop policy. The following are the min and max thresholds for management traffic (DSCP 48) within the Business traffic class:
Min-threshold = 0.1 x B
Max-threshold = 0.2 x B

Streaming Class – Short Queue – Max per-hop delay 50ms:


Min-threshold = 0.015 x B
Max-threshold = 0.05 x B
With B representing the bandwidth in MTU sized packets per second. For ST’s MPLS network a MTU size
of 1500 bytes is assumed.

Standard Class – Long Queue – Max per-hop delay 150ms:


Min-threshold = 0.045 x B
Max-threshold = 0.15 x B
With B representing the bandwidth in MTU sized packets per second. For ST MPLS network a MTU size
of 1500 bytes is assumed.

For Voice traffic it is necessary to implement tail drop to minimise and predict delay/jitter under congestion conditions. Therefore, no WRED will be used for the Voice traffic class (except on the GSR). WRED will also not be applied to the Management class.

The WRED min-threshold and max-threshold settings (calculated on the basis of the class bandwidth) are detailed in the following tables. They represent the values to be used across all platforms except for the GSR ENG-2 line cards; these will be presented in the GSR QoS design chapter later on.

If ST wishes to offer a class bandwidth which is not included in the following tables, the min/max thresholds can be calculated as per the formulas above.
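The threshold values in Tables 12 to 14 can be reproduced from the generic formula above. The Python sketch below is illustrative only; the rounding up and the floor of 3/9 packets for small class bandwidths are assumptions inferred from the tables rather than stated rules:

import math

MTU_BYTES = 1500
DELAY_FACTORS = {                 # (min, max) delay factors in seconds, from the text above
    "business":  (0.03, 0.1),     # medium queue, max per-hop delay 100 ms
    "streaming": (0.015, 0.05),   # short queue,  max per-hop delay 50 ms
    "standard":  (0.045, 0.15),   # long queue,   max per-hop delay 150 ms
}

def wred_thresholds(class_bw_kbps: float, traffic_class: str):
    b = math.ceil(class_bw_kbps * 1000 / 8 / MTU_BYTES)   # class BW in MTU-sized packets/s
    f_min, f_max = DELAY_FACTORS[traffic_class]
    min_th = max(3, math.ceil(f_min * b))                  # assumed floor of 3 packets
    max_th = max(9, math.ceil(f_max * b))                  # assumed floor of 9 packets
    return min_th, max_th

assert wred_thresholds(2048, "business") == (6, 18)        # spot checks against Table 12
assert wred_thresholds(46500, "business") == (117, 388)
assert wred_thresholds(25000, "streaming") == (32, 105)    # Table 13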

Table 12 WRED Settings for Business Class.

Each row lists: link speed [kbps]; B, minTH and maxTH calculated on the full link bandwidth; then, for class bandwidths of 10%, 20%, 25% and 30% of the link speed: class BW [kbps], minTH and maxTH.

64 6 3 9

128 11 3 9

256 22 3 9

512 43 3 9

1024 86 3 9

2048 171 6 18

10000 834 26 84 1000 3 9 2000 6 17 2500 7 21 3000 8 26

34684 2891 87 290 3468 9 29 6937 18 58 8671 22 73 10405 27 87

100000 8334 251 834 10000 26 84 20000 51 167 25000 63 209 30000 76 251

155000 12917 388 1292 15500 39 130 31000 78 259 38750 97 323 46500 117 388

622000 51834 1556 5184 62200 156 519 124400 312 1037 155500 389 1296 186600 467 1556

2400000 200000 6000 20000 240000 600 2000 480000 1200 4000 600000 1500 5000 720000 1800 6000

For values smaller than E1, on a class percentage basis, the calculated value would be less than 3 for the min-threshold and 9 for the max-threshold. Any smaller value would defeat the objectives of WRED, since the router would not allow for much burst and would react too aggressively in dropping packets.

These values are therefore not considered in the calculations.

Table 13 WRED Settings for Streaming Class.

Each row lists: link speed [kbps]; B, minTH and maxTH calculated on the full link bandwidth; then, for class bandwidths of 10%, 20%, 25% and 30% of the link speed: class BW [kbps], minTH and maxTH.

64 6 3 9

128 11 3 9

256 22 3 9

512 43 3 9

1024 86 3 9

2048 171 3 9

10000 834 13 42 1000 3 9 2000 3 9 2500 4 11 3000 4 13

34684 2891 44 145 3468 5 15 6937 9 29 8671 11 37 10405 14 44

100000 8334 126 417 10000 13 42 20000 26 84 25000 32 105 30000 38 126

155000 12917 194 646 15500 20 65 31000 39 130 38750 49 162 46500 59 194

622000 51834 778 2592 62200 78 260 124400 156 519 155500 195 648 186600 234 778

2400000 200000 3000 10000 240000 300 1000 480000 600 2000 600000 750 2500 720000 900 3000

Table 14 WRED Settings for Standard Class.

Each row lists: link speed [kbps]; B, minTH and maxTH calculated on the full link bandwidth; then, for class bandwidths of 10%, 20%, 25% and 30% of the link speed: class BW [kbps], minTH and maxTH.

64 6 3 9

128 11 3 9

256 22 3 9

512 43 3 9

1024 86 4 13

2048 171 8 26

10000 834 38 126 1000 4 13 2000 8 26 2500 10 32 3000 12 38

34684 2891 131 434 3468 14 44 6937 27 87 8671 33 109 10405 40 131

100000 8334 376 1251 10000 38 126 20000 76 251 25000 94 313 30000 113 376

155000 12917 582 1938 15500 59 194 31000 117 388 38750 146 485 46500 175 582

622000 51834 2333 7776 62200 234 778 124400 467 1556 155500 584 1944 186600 700 2333

2400000 200000 9000 30000 240000 900 3000 480000 1800 6000 600000 2250 7500 720000 2700 9000

Drop Probability
The drop probability at max-threshold for all classes will initially be configured with mark-probability-denominator = 1. This means that when the average queue length reaches the max-threshold, all packets will be dropped until the average goes below the max-threshold.
The formula for the drop probability at max-threshold is:

drop probability at max-threshold = 1 / mpd

This means that when setting the mpd to 2, for instance, the formula above gives 1/2: at the max-threshold only half, or 50%, of the packets are dropped. It also means that the rate at which packets are dropped as the average queue length increases is lower than if the mpd were set to 1, since an mpd of 1 means that 1/1, or 100%, of packets are dropped at the max-threshold.

Why is it important to set mpd to 1 rather than to another value? The answer is predictability. When
calculating the other values for WRED, we know that any packet after Max-threshold is tail dropped.
Therefore, by setting the mpd to 1, we ensure a more realistic drop ratio throughout the WRED curve. If
the value was set to 2 for instance, WRED would only drop a number of packets so to reach a 50% drop
ratio by the time the average queue depth reaches the Max-threshold and then, all of a sudden, one packet
takes it over the Max-threshold and the packet drops go from 50% to 100%.

Exponential Weighting Constant
WRED calculates an exponentially weighted average queue size, rather than using the current queue size, when deciding the packet drop probability. The current average queue length depends on the previous average and on the queue's current actual size. By using an average queue size, RED achieves its goal of not reacting to momentary burstiness in the network and reacting only to persistent congestion.

With high values of exponential-weighting-constant, the average queue size closely tracks the old average
queue size and more freely accommodates changes in the current queue size, resulting in the ability for
RED to accommodate temporary bursts in traffic, smoothing out the peaks and troughs in the current queue
size. RED is slow to start dropping packets, but it can continue dropping packets for a time after the actual
queue size falls below the minimum threshold.

If exponential-weighting-constant is too high, RED does not react to congestion, as the current queue size
becomes insignificant in calculating the average queue size. Packets are transmitted or dropped as if RED
were not in effect.

With low values of exponential-weighting-constant, the average queue size closely tracks the current queue
size, which enables the average queue size to move rapidly with the changing traffic levels. This means the
RED process responds quickly to long queues. When the queue falls below the minimum threshold, the
process stops dropping packets.

If the exponential-weighting-constant is too low, RED overreacts to temporary traffic bursts and drops traffic unnecessarily. The formula for calculating the exponential-weighting-constant (ewc) is as follows:

ewc = 10/B if Line Rate (core) / Committed Rate (edge) <= 34 Mbps

ewc = 1/B if Line Rate (core) / Committed Rate (edge) > 34 Mbps

, where B is the rate in 1500-byte packets per second (i.e. CEILING(Rate [kbps] * 1000 / 8 / 1500)).

The configured exponential-weighting-constant (x) is applied to the router configuration as a negative power of 2. The relation between ewc and the configured value is:

ewc = 2^-x, which can be rewritten as:

1/ewc = 2^x, and the final formula for the configured ewc is:

x = ln(1/ewc) / ln(2)

x = ln(B/10) / ln(2) if Line Rate (core) / Committed Rate (edge) <= 34 Mbps

x = ln(B) / ln(2) if Line Rate (core) / Committed Rate (edge) > 34 Mbps
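As a short Python illustration of the conversion above (a sketch, not a provisioning tool; where the result differs from Table 15 below, the table values should be taken as the reference):

import math

def packets_per_second(rate_kbps: float, mtu_bytes: int = 1500) -> int:
    # B: the rate expressed in 1500-byte (MTU-sized) packets per second
    return math.ceil(rate_kbps * 1000 / 8 / mtu_bytes)

def configured_ewc(one_over_ewc: float) -> int:
    # The router takes the constant as a negative power of two (ewc = 2**-x),
    # so the configured value is x = ln(1/ewc) / ln(2), rounded to an integer.
    return round(math.log(one_over_ewc) / math.log(2))

# Example: B = 2891 packets/s for a 34,684 kbps rate; with ewc = 10/B,
# 1/ewc = 289.1 and x = round(log2(289.1)) = 8, matching Table 15.
b = packets_per_second(34684)
print(b, configured_ewc(b / 10))    # 2891 8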

Note:
The exponential-weighting-constant parameter is calculated based on the Class Bandwidth value and NOT
on the link rate. For the GSR12000, however, since it is not possible to configure per class, the
exponential-weighting-constant is calculated based on the link rate.

The ewc for Standard class (class-default) shall be based on link rate.

If the Class Bandwidth Allocation is configured as a percentage value in MQC, this should be converted to
a value in Kbps for calculating ewc.

The following table computes the exponential-weighting-constant as a function of the link speed (GSR) or class speed (10xxx or smaller).

Table 15 WRED - exponential weighting constant

Each row lists, for the full link and then for class bandwidths of 10%, 20%, 25% and 30% of the link speed: the rate [kbps], B, the value taken for 1/ewc (B or B/10), and the configured constant x.

32 3 3 3 3.2 1 1 3 6.4 1 1 3 8 1 1 3 9.6 1 1 3

64 6 6 3 6.4 1 1 3 12.8 2 2 3 16 2 2 3 19.2 2 2 3

128 11 11 3 12.8 2 2 3 25.6 3 3 3 32 3 3 3 38.4 4 4 3

256 22 22 4 25.6 3 3 3 51.2 5 5 3 64 6 6 3 76.8 7 7 3

512 43 43 5 51.2 5 5 3 102.4 9 9 3 128 11 11 3 153.6 13 13 4

1024 86 86 6 102.4 9 9 3 204.8 18 18 4 256 22 22 4 307.2 26 26 5

2048 171 171 7 204.8 18 18 4 409.6 35 35 5 512 43 43 5 614.4 52 52 6

10000 834 834 10 1000 84 84 6 2000 167 167 7 2500 209 209 8 3000 250 250 8

34684 2891 289.1 8 3468.4 290 290 8 6936.8 579 579 9 8671 723 723 9 10405.2 868 868 10

100000 8334 833.4 10 10000 834 834 10 20000 1667 1667 11 25000 2084 2084 11 30000 2500 2500 11

155000 12917 1291.7 10 15500 1292 1292 10 31000 2584 2584 11 38750 3230 323 8 46500 3875 387.5 9

622000 51834 5183.4 12 62200 5184 518.4 9 124400 10367 1036.7 10 155500 12959 1295.9 10 186600 15550 1555 11

2400000 200000 20000 14 240000 20000 2000 11 480000 40000 4000 12 600000 50000 5000 12 720000 60000 6000 13

The following is the required WRED configuration template on CE-PE link.


!
policy-map customer_profile
class voice
!
class streaming
random-detect dscp-based
random-detect exponential-weighting-constant <x>
random-detect dscp 26 <minTH> <maxTH> 1
class business
random-detect dscp-based
random-detect exponential-weighting-constant <x>
random-detect dscp 10 <minTH> <maxTH> 1
random-detect dscp 48 <minTH> <maxTH> 1
class management
!
class class-default
random-detect dscp-based
random-detect exponential-weighting-constant <x>
random-detect dscp 0 <minTH> <maxTH> 1
!

SAA-to-PE QoS mechanisms (applied on the SAA)


The SAA routers will be installed in ST PoPs as CE devices with a special purpose: gathering inter-PoP QoS statistics. For this, the SAA router will generate probes and measure QoS attributes such as one-way delay or jitter towards any other ST PoP, for each of the traffic classes.

The SAA routers and links between the SAA and PE will be provisioned by the VPNSC (as this is also
accomplished for VPN CE routers). This means that VPNSC will also control the configuration of probes
that will simulate the customers’ traffic flows across the MPLS network.

Ideally the SAA would generate the probes marked with a DSCP field that reflects the ST CoS design, so that we could reuse the CE configuration templates. Due to a limitation in the current VPNSC version, the SAA can only set the IP precedence bits on the traffic probes it generates. This requires additional colouring with an appropriate TOS value (using the LPR feature), so that the resulting DSCP field (precedence and TOS bits) complies with the ST CoS design.
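For reference, the precedence and TOS values used in the route-map below combine into the intended DSCP values as follows (a simple Python illustration of the bit arithmetic, not router code):

def dscp_from_precedence_and_tos(precedence: int, tos: int) -> int:
    # 'set ip precedence' writes the top 3 bits of the ToS byte and
    # 'set ip tos' writes the next 4 bits; DSCP is the top 6 bits of that byte.
    tos_byte = (precedence << 5) | (tos << 1)
    return tos_byte >> 2

assert dscp_from_precedence_and_tos(1, 4) == 10    # Business   -> AF11
assert dscp_from_precedence_and_tos(3, 4) == 26    # Streaming  -> AF31
assert dscp_from_precedence_and_tos(5, 12) == 46   # Voice      -> EF
assert dscp_from_precedence_and_tos(6, 0) == 48    # Management -> DSCP 48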

DSCP-based classification of SAA probes is not possible, as locally sourced packets are not CEF switched.
We will use access lists to classify the SAA probes into appropriate traffic class.

There is no need for policing or WRED of SAA probes, as the amount of SAA traffic is under ST's control.

Exact class bandwidth requirement will depend on type, number and frequency of SAA probes, and will be
determined during and after staging.

The following QoS configuration template will be applied on SAA routers.

hostname xxxSAA1
!
class-map match-any business
match access-group 150
class-map match-any streaming
match access-group 152
class-map match-all voice
match access-group 154
class-map match-any management
match access-group 155
!
policy-map SAA_profile
class business
bandwidth percent 35
class streaming
bandwidth percent 20
class voice
priority 128
class management
bandwidth percent 5
class class-default
bandwidth percent 40
!
interface Serial<x>
description E1 link towards PE
bandwidth 2000
encapsulation ppp
max-reserved-bandwidth 95
service-policy output SAA_profile
clockrate 2000000
!
! Marking of locally originated SAA probes
!
ip local policy route-map Mark_SAA_probes
!
! Classify the SAA probes based on IP precedence
!
access-list 150 permit ip any any precedence 1 ! Business
access-list 152 permit ip any any precedence 3 ! Streaming
access-list 154 permit ip any any precedence 5 ! Voice
access-list 155 permit ip any any precedence 6 ! Management
access-list 156 permit ip any any precedence 0 ! Standard
!
route-map Mark_SAA_probes permit 10
match ip address 150 152
set ip tos 4
!
route-map Mark_SAA_probes permit 20
match ip address 154
set ip tos 12
!
route-map Mark_SAA_probes permit 30
match ip address 155 156
set ip tos 0
!

CE-to-PE QoS mechanisms (applied on the PE) – PPP or HDLC


The QoS mechanisms used on the PE (10k and 7206VXR platforms) are basically a subset of the
mechanisms used on the non-distributed CE platforms. The configuration on the PEs is almost identical to
the one on the CE. There are some differences and these will be highlighted.

Classification
The traffic can be classified on PE routers by matching the DSCP values, because all traffic has already
been properly marked on the CEs when entering the network.

Traffic classification on CE-PE connection is required only for packets received from unmanaged CEs and
Internet connections as explained below.

Marking
No customer traffic packet marking would be performed on the PE, since all packets have already been
marked appropriately on the ingress CEs.

The management traffic generated locally on the PE will be marked through Local Policy Routing (LPR).
The configuration template is the same as on the CE router.

Policing
Traffic has already been policed on the CE router, so there is no need to police traffic coming from managed CE routers on the PE.

Unmanaged CEs and Unmanaged Internet CPEs


Unmanaged CE means that ST does not have control over the CE router at the customer's premises, i.e. the customer is managing the CE device.

Service without QoS


The decision is that, by default, no QoS will be implemented and offered to customers with unmanaged CEs. In other words, traffic received from an unmanaged CE router will be treated as best effort within the ST MPLS network and as such assigned to the Standard traffic class. This is also true for customers who do not subscribe to ST QoS services (even if the CE is managed by ST).

The following configuration template will classify and mark the traffic from unmanaged CE routers [9].
!
policy-map unmanaged_CE
class class-default
set ip dscp 0
!
interface Serial 2/0/1:0
bandwidth <bw>
description Link to unmanaged CE
service-policy input unmanaged_CE
!

The second example shows how the police command can be used to limit the bandwidth on high-speed circuits to the subscribed subrate (in kbps).

!
policy-map limit_customer_512k
class class-default
police 512000 12800 25600 conform-action set-dscp-transmit 0 exceed-action drop
!
interface Serial 2/0/1:0
bandwidth 2000
description Link to unmanaged CE with subrate of 512kb
service-policy input limit_customer_512k
!

Customer configures QoS on the CE router


ST can in theory communicate a proper CE router QoS configuration to the customer (via e-mail or phone support), but based on our experience this is in most cases an extremely painful procedure for the service provider. QoS configuration, monitoring and troubleshooting is an extremely complex task and may result in service disruption if non-skilled customers adjust the QoS parameters on customer-managed CE routers. It is then not trivial to prove to such a customer that the ST core network was operating normally when the customer experienced a service outage due to QoS misconfiguration!

[9] Also, traffic received from upstream transit providers, peering partners and Internet customers must be marked with DSCP 0, to prevent "precedence-spoofing" attacks.
The following configuration template shows how to enforce the policing of traffic classes for unmanaged CE routers on the PE. The policy-map would have to be replicated and tuned for each customer.

On the CE side, the QoS configuration template of a managed CE can be reused for unmanaged CE routers.

!
! Customer has already classified and marked the IP packets on unmanaged CE
! The classification class-map is the same as with managed CE routers (the
! same config for all CEs)
!
class-map match-any voice
match ip dscp 46
class-map match-any management
match ip dscp 48
match access-group 103
class-map match-any business
match ip dscp 10
class-map match-all streaming
match ip dscp 26
!
! ST must police the traffic classes according to SLA
! of that customer – this is customer-specific configuration and can result
! in a very long router configuration file.
!
policy-map CUSTx_police
class business
police <bps> <normal_burst> <ext_burst> conform-action transmit exceed-action drop
class streaming
police <bps> <normal_burst> <ext_burst> conform-action transmit exceed-action drop
class voice
police <bps> <normal_burst> <ext_burst> conform-action transmit exceed-action drop
class management
police <bps> <normal_burst> <ext_burst> conform-action transmit exceed-action drop
class class-default
set ip dscp 0
!
interface Serial 2/0/1:0
bandwidth <bandwidth>
description Link to unmanaged CE of customer X
service-policy input CUSTx_police

SAA Routers
Traffic received from SAA router will be handled in the same way as packets received from managed CEs.
This implies that the set_qos_group service policy shall be configured on SAA links in the same way
as already explained for managed CE connections.

PE-to-P QoS mechanisms (applied on the PE)

Classification
The customer traffic received from the CE and SAA routers has been marked with the DSCP. On the MPLS uplinks the DSCP value will be automatically mapped into the MPLS EXP bits as shown in Figure 34.

The following configuration example depicts the EXP-based classification on PE-P uplinks. MPLS frames need to be classified in order to perform queuing and apply the proper WRED drop policy.

!
class-map match-any business_management
match mpls experimental 1 6
class-map match-any streaming
match mpls experimental 3
class-map match-any voice
match mpls experimental 5
!

Marking
IP packets will be encapsulated in MPLS frames when leaving the PE router. The DSCP code point value (i.e. the precedence bits) will be automatically mapped into the EXP bits of the MPLS label. No further configuration is needed.

Class queuing
The parameter setting for the reservable interface bandwidth has been changed from 75% (default) to 97%
on 7206VXR PE routers This provides enough space for unaccounted traffic such as layer 2 overhead,
layer 2 keepalives, LMI (in the case of Frame Relay), etc..
In the following configuration templates, all MQCLI class bandwidth calculations are based on this value.

On 10000 series routers, the cumulative bandwidth applied on traffic classes must not exceed the 99% of
link bandwidth. The bandwidth is configurable in steps of 1/255 of link (or PVC) bandwidth.
Furthermore it is important to notice, that on 10000 series POS interfaces the calculation of the minimum
class bandwidth is based on the avilable information bandwidth.
For example, the basic rate of STM-1 POS interfaces is 155.520 Mbps. The avilable information
bandwidth is 149.760 Mbps (155.520Mbps – Sonet Overhead).
The avilable information bandwidth can be dispalyed with the following commands:

10K-PE#sh hardware pxf cpu queue pos 1/0/0

VCCI 2:
Class ID Length/Max Res Dequeues Drops
~ 0 class-default 291 0/1024 3 295173 0
...

10K-PE#sh hardware pxf cpu queue 291


ID (queue/packet-queue) : 291/291
...
Bandwidth Index : 73 (149760 kbps)
...
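As a hedged illustration of the two 10000-series rules mentioned above (1/255 bandwidth granularity and the 99% cumulative limit), applied to the STM-1 available information rate shown in the output:

def c10k_bandwidth_limits(available_kbps: float):
    # Class bandwidth is allocated in steps of 1/255 of the link (or PVC)
    # bandwidth, and the cumulative class bandwidth must stay below 99%.
    step_kbps = available_kbps / 255
    max_cumulative_kbps = available_kbps * 0.99
    return step_kbps, max_cumulative_kbps

step, ceiling = c10k_bandwidth_limits(149760)   # STM-1 POS: 149,760 kbps available
print(round(step), round(ceiling))              # ~587 kbps per step, ~148262 kbps total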

As already mentioned above, on the 10000 series the LLQ is policed to the configured class bandwidth. This is done by default when configuring the 'priority <bw value>' command. Nevertheless, a warning message will be displayed after entering this command:

10K-PE(config-cmap)#policy-map PE_P_155M
10K-PE(config-pmap)# class voice
10K-PE(config-pmap-c)# priority 37587
% This command is an unreleased and unsupported feature

For a period of time, the command will still work as it did in the past (even though the warning is displayed), but it will disappear in future releases. Therefore we recommend using the police command within the high priority class on 10000 series routers (as shown below).

The following is an example configuration for the class queuing on PE-to-P trunks. The same queuing
template must be applied on primary and backup uplinks.

!
policy-map PE_P_155M
class voice
priority
police 36312000 conform-action transmit exceed-action drop violate-action drop
class business_management
bandwidth 36317
class streaming
bandwidth 36317
class class-default
bandwidth 36317
!

Congestion avoidance
WRED is used for graded packet dropping in each traffic class. The DSCP-based WRED is currently
supported on MPLS uplinks.

The following configuration template will be used for congestion management on PE-to-P links. WRED
thresholds and ewc are derived in the same way as for the CE-to-PE links.

!
policy-map PE-P
class qos_group_business_management
random-detect dscp-based
random-detect exponential-weighting-constant 9
random-detect dscp 10 117 388 1
random-detect dscp 48 388 775 1
class qos_group_streaming
random-detect dscp-based
random-detect exponential-weighting-constant 8
random-detect dscp 26 49 162 1
class class-default
random-detect dscp-based
random-detect exponential-weighting-constant 11
random-detect dscp 0 146 485 1
!

PE-P, P-P and P-PE QoS mechanisms (applied on the P)

On new Engine 3 GSR linecards the QoS implementation is slightly different from the currently used
linecards. MQCLI will be introduced on GSR. This is described in a separate chapter (see ).

Class Queuing (MDRR)
MDRR is architecturally different from LLQ: bandwidth is not reserved per class; rather, weights or "timeslots" are allocated to each class. With MDRR we have the ability to manipulate queue weights to define the quantum, or time spent servicing a queue. Also, like the PQ in LLQ, MDRR has a low latency queue typically used for servicing real-time traffic such as voice. The low latency queue will be set to "alternate priority".

The GSR also differs architecturally from the other platforms in that it maintains two instances of queuing
with MDRR during the flow of a packet from the input interface to the output interface. The first instance
is called “to fabric” and the second instance is called “from fabric”.

“To-fabric” or RX-COS MDRR


The "to-fabric" MDRR is applied exactly as the name implies: to packets exiting a line card towards the switching fabric. The consideration to take here is that, unlike with "from fabric" queuing, one does not know the destination port line speed but still needs to take all possibilities into account. Consider the following: packets come in from a high speed STM-16 port and are destined to exit through a lower speed STM-1 card. Clearly, this can cause congestion. The ability to push the congestion management back to before the packets hit the switching fabric is clearly beneficial. Therefore, when creating a traffic management policy, or "cos-group" as it is known in MDRR, one must first create one for each available interface type in the chassis. In ST, the "to-fabric" and "from-fabric" cos-groups are the same because we want the same behaviour at both queuing instances.
The application method is as follows:


• For packets destined to a slot with an STM-1 line card, an STM-1 cos-group will be applied, regardless
of what the source line card is.
• For packets destined to a slot with an STM-16 line card, an STM-1 cos-group is applied if the source line card is STM-1, and an STM-16 cos-group if the source is an STM-16.

In doing so, one applies the cos-group depending on what the destination slot is, therefore avoiding
“congestion” on the switching fabric.

“From-fabric” or TX-COS MDRR


The "from-fabric" MDRR is a lot simpler in terms of configuration. The queuing occurs at egress, towards the TX queue. At this stage, one knows the exit slot and interface speed. The cos-group is simply applied to the actual interface, just like a service policy is applied to an interface on a 7xxx platform.

MDRR queuing operation


Each DRR queue can be given a relative weight, with one of the queues in the group defined as a low latency queue. This is done via the queue command under the cos-queue-group:
queue <0-6> <1-2048>
queue low-latency [alternate-priority | strict-priority] <1-2048>

The weights give a relative bandwidth for each queue when the interface is congested. The DRR algorithm de-queues data from each queue in turn if there is data in the queue to be sent. So if all the regular DRR queues have data in them, they will be serviced as follows:
0-1-2-3-4-5-6-0-1-2-3-4-5-6...

On each pass through the cycle, a queue gets to de-queue a quantum Q of bytes that is proportional to the configured queue weight W. The de-queue quantum Qn for queue n is:
Qn = MTU + (Wn - 1) * 512

A weight of 1 is equivalent to giving the queue a quantum of one interface MTU. For each increment above
1, the quantum of the queue increases by 512 bytes. For example, if the MTU of a particular interface is
4470 bytes and the weight of a queue is configured to be 3, then each time through the rotation
4470 + (3-1)*512 = 5494 bytes are allowed to be de-queued. Suppose, for example, that two normal DRR
queues are used, with Queue0 configured with a weight of 1 and Queue1 with a weight of 9. If both queues
were congested, each time through the rotation Queue0 would be allowed to send 4470 bytes and Queue1
would be allowed to send 4470 + (9-1)*512 = 8566 bytes. This gives traffic going through Queue0
approximately 1/3 of the bandwidth and traffic going through Queue1 about 2/3.
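
As a quick cross-check of the arithmetic above, the following short Python sketch (purely illustrative; the MTU and queue weights are simply the values from the worked example) computes the per-cycle quantum for each queue and the resulting bandwidth shares.

# Illustrative calculation of MDRR de-queue quanta and bandwidth shares.
# The MTU and the Queue0/Queue1 weights are taken from the example above.

MTU = 4470  # bytes, POS link MTU used in the example

def quantum(weight, mtu=MTU):
    # Per-cycle de-queue quantum in bytes: MTU + (W - 1) * 512
    return mtu + (weight - 1) * 512

weights = {"Queue0": 1, "Queue1": 9}
quanta = {name: quantum(w) for name, w in weights.items()}
total = sum(quanta.values())

for name, q in quanta.items():
    print(f"{name}: quantum = {q} bytes, share = {q / total:.1%}")
# Queue0: quantum = 4470 bytes, share = 34.3% (about 1/3)
# Queue1: quantum = 8566 bytes, share = 65.7% (about 2/3)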

The low-latency queue can be used to give more priority to certain traffic. The low-latency queue can be
given one of two different priorities within the group: strict priority or alternate priority. In strict priority,
the queue is serviced whenever it is non-empty; in alternate priority, servicing alternates between the low-
latency queue and each of the other queues in turn.

To minimize jitter in the Voice class of the ST network, the LLQ will be configured in strict priority
mode.

The following table gives example MDRR weights that can be used on the ST network as the initial
queuing and class capacity definition. The weights have been calculated following the quantum formula
above.

The MTU on POS links is 4470 bytes.

Table 16 MDRR weights

Service Class    % of link BW   Queue         STM-1                       STM-16
                                              Class BW [Mbps]   Weight    Class BW [Mbps]   Weight
Voice            20             low latency   31                10        480               10
Business, Mgmt   30             2             48                13        720               13
Streaming        25             1             38                10        600               10
Standard         25             0             38                10        600               10
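
As a sanity check of these weights, the sketch below (illustrative only; the weights and the 4470-byte MTU are taken from Table 16, and the 30/25/25 split is the intended share of the bandwidth left to the three regular DRR queues) shows that the configured weights approximate the intended per-class ratios.

# Check that the Table 16 weights approximate the intended split among the
# regular (non low-latency) DRR queues.

MTU = 4470  # bytes

def quantum(weight):
    return MTU + (weight - 1) * 512

# Queue weights from Table 16 (identical for STM-1 and STM-16)
weights = {"Business/Mgmt (queue 2)": 13,
           "Streaming (queue 1)": 10,
           "Standard (queue 0)": 10}

# Intended split of the non-Voice bandwidth: 30 : 25 : 25 (see Table 16)
intended = {"Business/Mgmt (queue 2)": 30 / 80,
            "Streaming (queue 1)": 25 / 80,
            "Standard (queue 0)": 25 / 80}

quanta = {cls: quantum(w) for cls, w in weights.items()}
total = sum(quanta.values())

for cls, q in quanta.items():
    print(f"{cls}: weight share {q / total:.1%} vs intended {intended[cls]:.1%}")
# Weight shares come out at roughly 36.9% / 31.6% / 31.6%,
# close to the intended 37.5% / 31.25% / 31.25%.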

MDRR configuration guide for ST


From-fabric (TX COS)
Each interface has eight COS queues, which can be configured independently. The MDRR implementation
offers flexible mapping between IP precedence values and the eight possible queues. MDRR allows a
maximum of eight queues, so each IP precedence value can be given its own queue. The number of queues
used and the precedence values mapped to those queues are user-configurable, and one or more precedence
values can be mapped into a single queue.
The ST network will use four queues:
• Low-Latency Queue. The low-latency queue will be a strict priority queue; this queue will carry VoIP
traffic, and packets marked with MPLS EXP 5 will be forwarded to it. 25% of the available
physical bandwidth will be available for voice traffic.

• Queue 2 will be used for Business and Management traffic classes. Packets marked MPLS EXP 1 and
6 will be forwarded to this queue. 25% of the available physical bandwidth will be available for
Business and management traffic.
• Queue 1 will be the Streaming data queue, for delay-sensitive traffic with variable packet sizes. Packets
marked with MPLS EXP 3 will be forwarded to this queue. 25% of available physical bandwidth will
be available for streaming traffic.
• Queue 0 will be for default-classified traffic – i.e. Standard traffic class. MPLS EXP 0 will be
forwarded to this queue. 25% of the available physical bandwidth will be available for best-effort
traffic.

The following commands are an example configuration for the ST network. The same MDRR TX-COS
configuration could be applied to STM-1 and STM-16 links, but the WRED parameters will be different, so
one cos-queue-group is required per link capacity. However, the same cos-queue-group can be applied on
both the RX and TX side; this reduces the size of the router configuration file.

The precedence-based configuration acts on EXP bits in the case of MPLS packets.

!
cos-queue-group STM<1,16> ! Duplicated for each rate, same for RX and TX side
prec 0 queue 0 ! Map the packet with PREC/EXP=0 into queue 0
prec 1 queue 2
prec 2 queue 2
prec 3 queue 1
prec 4 queue 1
prec 5 queue low-latency
prec 6 queue 2
prec 7 queue 2
queue 0 10
queue 1 10
queue 2 13
queue low-latency strict-priority 10
!
interface pos 3/1
description This is STM-1 backbone link
tx-cos STM1

To-fabric or RX COS
In addition to the transmit COS, a receive COS will also be configured. The queues will be identical to the
interface transmit queues, but instead of being applied directly to the line interface they are built as a table
and applied from the receive buffer to the backbone fabric buffers.

With the cards supplied for ST, MDRR is supported in hardware; each line card has eight COS queues per
destination interface. With 16 destination slots and 16 interfaces per slot, the maximum number of COS
queues is 16 x 16 x 8 = 2048. All the interfaces on a destination slot have the same COS parameters.

In the example below, the slot-table-cos command defines the COS policy for destination line cards
2, 3 and 5, 6 based on the STM-1 and STM-16 cos-queue-groups. The rx-cos-slot command applies a
slot-table-cos configuration to a particular source slot (line card). As previously mentioned, the cos-
groups will be applied as follows:
• For packets destined to a slot with an STM-1 line card, an STM-1 cos-group will be applied, regardless
of what the source line card is.
• For packets destined to a slot with an STM-16 line card, an STM-1 cos-group is applied if the source
line card is STM-1, and an STM-16 cos-group if the source is an STM-16.

!
rx-cos-slot 2 STM1-TO-FABRIC ! We have STM-1 interfaces in this slot
rx-cos-slot 3 STM1-TO-FABRIC ! We have STM-1 interfaces in this slot
rx-cos-slot 5 STM16-TO-FABRIC ! We have STM-16 interfaces in this slot
rx-cos-slot 6 STM16-TO-FABRIC ! We have STM-16 interfaces in this slot
!
slot-table-cos STM1-TO-FABRIC
destination-slot all STM1
!
slot-table-cos STM16-TO-FABRIC
destination-slot 2 STM1
destination-slot 3 STM1
destination-slot 5 STM16
destination-slot 6 STM16
!

Congestion management
WRED parameters on GSR routers will follow the guidelines already explained for CE and PE routers. The
GSR-specific configuration is depicted in this chapter.

Exponential weighting constant


On GSR the ewc cannot be configured on a per-class basis. For this reason, the link bandwidth will be used
to calculate the ewc. According to Table 15 the ewc for STM-1 links will be 10, and for STM-16 links the
ewc will have the value of 14.

Policing of Voice class with WRED


Because the provisioning rule for ST allows a maximum of 20% Voice traffic on a link, congestion in the
Voice class is highly unlikely and would occur, if at all, only in extreme cases such as multiple flows from
STM-16 links converging on a single STM-1. Nonetheless, this remote possibility should be guarded against.

On the GSR with MDRR, a tail-drop limit cannot be configured directly on the LLQ. To achieve equivalent
behaviour, a WRED setting will be applied on the LLQ. The MIN/MAX-threshold settings will be calculated
based on a maximum delay of 3 ms and an average packet size of 64 bytes. The idea is to allow only a small
burst: the MIN-threshold will therefore be quite small and the MAX-threshold will be set equal to the
MIN-threshold.

Random-detect-label 5 will be used to apply WRED on the Voice traffic (valid for ENG-2 line cards as
well).

Max-threshold (Voice) ~ 0.003 x B   [256 for STM-1, 2048 for STM-16]

Min-threshold = Max-threshold

where B is the bandwidth expressed in MTU-sized packets; for Voice an MTU of 64 bytes is assumed.
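
As a rough sanity check, the sketch below sizes the threshold as 3 ms worth of 64-byte packets at the Voice class bandwidth from Table 16 (this reading of the formula is an assumption, not a statement of the exact GSR algorithm); for STM-1 it lands close to the 180/181 values used in the configuration template below.

# Rough sizing of the Voice WRED threshold: 3 ms of 64-byte packets at the
# Voice class bandwidth (31 Mbps on STM-1 according to Table 16).

DELAY_S = 0.003   # 3 ms maximum queuing delay allowed for Voice
PKT_BYTES = 64    # assumed Voice packet size

def voice_threshold(class_bw_mbps):
    bytes_in_window = class_bw_mbps * 1e6 / 8 * DELAY_S
    return round(bytes_in_window / PKT_BYTES)

print(voice_threshold(31))   # ~182 packets, in line with the 180/181
                             # thresholds in the STM-1 template below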

WRED on Engine-2 Linecards
The following three tables give the WRED Min and Max settings for the ENG-2 line cards on the GSR
platforms. Slightly different values have been allocated because of an architectural constraint: the
difference between the minTH and maxTH values must be a power of 2.

The basis for the calculation is still the same, with the same base values being used to calculate the minTH
and the initial maxTH. Once the two threshold values have been worked out, the difference between the two
is derived (Delta1). If this difference is not a power of 2, a new value (Delta2) is assigned and added to the
original minTH to derive a new, valid maxTH. When the line card assigns Delta2, the power of 2 closest to
the original difference (Delta1) is used. The following example demonstrates this:

Min Threshold = 39
Max Threshold = 130
Difference (Delta1) = 91
Assigned Difference (Delta2) is either 64 or 128
New Assigned Difference = 64 (closer to 91 than 128)
New Max Threshold = 103
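
The rule can be expressed as a small routine (a sketch of the adjustment as described above, not of the actual line card implementation):

# ENG-2 WRED threshold adjustment: the (maxTH - minTH) difference must be a
# power of 2, so the power of 2 closest to the original difference is used.

def adjust_eng2_thresholds(min_th, max_th):
    delta1 = max_th - min_th
    lower = 1 << (delta1.bit_length() - 1)   # largest power of 2 <= Delta1
    upper = lower * 2                        # smallest power of 2 > Delta1
    delta2 = lower if (delta1 - lower) <= (upper - delta1) else upper
    return min_th, min_th + delta2

print(adjust_eng2_thresholds(39, 130))   # -> (39, 103): Delta1 = 91 becomes 64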

Table 17 WRED Settings for Business Class (ENG-2 GSR)

Table 18 WRED Settings for Streaming Class (ENG-2 GSR)

Table 19 WRED Settings for Standard Class (ENG-2 GSR)

WRED Configuration
This is an example configuration template for WRED on STM-1 GSR links. In the case of ENG-2 line cards
the thresholds need to be adjusted as described above.

Please note that "precedence x random-detect-label y" statements apply to IP packets with precedence x
and also to MPLS frames with EXP bits set to x; "y" here refers to the index of the WRED profile.

!
cos-queue-group STM1 ! Duplicated for each STM rate with
precedence 0 random-detect-label 0 ! appropriate WRED thresholds and EWC
precedence 1 random-detect-label 1
precedence 2 random-detect-label 0
precedence 3 random-detect-label 3
precedence 4 random-detect-label 0
precedence 5 random-detect-label 5
precedence 6 random-detect-label 6
precedence 7 random-detect-label 6
random-detect-label 0 146 485 1 ! Standard
random-detect-label 1 117 388 1 ! Business
random-detect-label 3 49 162 1 ! Streaming
random-detect-label 5 180 181 1 ! Voice (3ms tail-drop of 64-byte packets)
random-detect-label 6 388 775 1 ! Routing & Management
exponential-weighting-constant 10 ! 10 is default
!

PE to CE QoS mechanisms (applied on the PE)

Classification
The traffic will be classified by matching DSCP values, for scheduling onto the PE-CE connection.
Management traffic is carried in a dedicated Management class on PE-CE links. Classification of locally
sourced traffic with LPR has already been demonstrated.

The following configuration template will classify the traffic for queuing and congestion management on
PE-CE link (outbound direction).
!
class-map match-any business
match ip dscp 10
class-map match-any streaming
match ip dscp 26
class-map match-any voice
match ip dscp 46
class-map match-any management
match ip dscp 48
!

Class queuing
The following is a sample configuration for class queuing on PE-to-CE links. Please note that the class
bandwidths shall match those configured on the CE side.
!
policy-map PE-CE
class business
bandwidth percent 35
class streaming
bandwidth percent 20
class voice
priority 64
class management
bandwidth percent 5
class class-default
bandwidth percent 40
!
interface Serial 2/0/1:1.1
description PE-CE access layer link
bandwidth 512
encapsulation ppp
max-reserved-bandwidth 95
service-policy output PE-CE
!

Congestion avoidance
WRED on the PE-CE link shall be configured with the same parameters as on the CE router. Below is a
sample configuration template.
!
policy-map PE-CE
class voice
!
class streaming
random-detect dscp-based
random-detect exponential-weighting-constant <x>
random-detect dscp 26 <minTH> <maxTH> 1
class business
random-detect dscp-based
random-detect exponential-weighting-constant <x>
random-detect dscp 10 <minTH> <maxTH> 1
class management
!
class class-default
random-detect dscp-based
random-detect exponential-weighting-constant <x>
random-detect dscp 0 <minTH> <maxTH> 1
!

QoS mechanisms on ATM PVCs (applied on the CE and PE)


ATM PVCs can be used between the CE and PE. This section describes the design modifications required
on the ATM CEs and PEs. It is assumed that both the CE and PE use an ATM port adapter that supports
IP to ATM CoS, which essentially means that LLQ can be applied on a per-ATM-VC basis. This section
only highlights the differences in comparison with the previous configurations. Please refer to the relevant
sections on CE and PE QoS configuration for a complete overview.

It is important to understand that ATM introduces a significant amount of overhead, which is not always
accounted for in the QoS configuration bandwidths. This overhead needs to be taken into account when
performing capacity planning and when provisioning the network. The following is a short summary of the
ATM overhead that can be incurred.

• SDH STM-1 used on the PE has 90 bytes of overhead in each 2430-byte OC-3c frame. This means
3.70 % is not available for higher-level protocols. This overhead is not taken into account in the LLQ
bandwidths.
• PDH E3 used on the CE uses G.804 / G.832 framing to map ATM cells into the payload of an E3
circuit. Each frame is 537 bytes which includes 7 bytes of overhead. This 1.30% overhead is not
available for higher level protocols. This overhead is not taken into account in the LLQ bandwidths.
• ATM cells are 53 bytes long containing a 5 byte header and a 48 byte payload. This adds 9.43%
overhead that is not available for higher level protocols. This overhead is not taken into account in the
LLQ bandwidths.
• The AAL5 protocol overhead consists of a trailer at the end of the AAL5 PDU. This trailer occupies
the last 8 bytes of the last cell of the PDU. In addition AAL5 specifies that a cell may only belong to a
single PDU so any payload available in the ATM cell after the AAL5 trailer has been added as the last
8 bytes of the cell, cannot be used. With an IP MTU of 576 bytes this adds a 6.41% overhead. This
overhead is only partly taken into account in the LLQ bandwidths (the AAL5 trailer, but without the 4
bytes of the CRC and without any padding).
• RFC 1483 LLC encapsulation requires LLC, OUI and Ethertype headers to precede the IP datagram.
This overhead amounts to 8 bytes per datagram. With an IP MTU of 576 bytes this adds a 1.37%
overhead. This overhead is taken into account in the LLQ bandwidths.
• Operation, Administration and Maintenance (OAM) cell overhead. This overhead is not taken into
account in the LLQ bandwidths.

Assuming an IP MTU of 576 bytes (the Internet inter-network default), each layer contributes the
following percentage of overhead to the transmission of an IP datagram.

Table 20 ATM Overhead

WAN Link     Protocol Layer   % Overhead
ATM STM-1    SDH STM-1        3.70
             ATM              9.43
             AAL5             6.41
             LLC / SNAP       1.37
             Total            20.91
ATM E3       PDH E3           1.30
             ATM              9.43
             AAL5             6.41
             LLC / SNAP       1.37
             Total            18.51
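
The percentages in Table 20 can be reproduced with the following sketch (the layer sizes are the ones quoted in the bullets above; summing the per-layer percentages, as the table does, is an approximation rather than an exact multiplicative model):

# Reproduce the per-layer ATM overhead figures for a 576-byte IP datagram.
import math

IP_MTU = 576                      # bytes, Internet inter-network default

llc_snap = 8                      # RFC 1483 LLC/SNAP header
aal5_trailer = 8                  # AAL5 trailer
pdu = IP_MTU + llc_snap + aal5_trailer
cells = math.ceil(pdu / 48)       # AAL5 PDU is padded to a whole number of cells
aal5_padding = cells * 48 - pdu

print(f"LLC / SNAP overhead : {llc_snap / (IP_MTU + llc_snap):.2%}")                # ~1.37%
print(f"AAL5 overhead       : {(aal5_trailer + aal5_padding) / (cells * 48):.2%}")  # ~6.41%
print(f"ATM cell header     : {5 / 53:.2%}")                                        # ~9.43%
print(f"SDH STM-1 framing   : {90 / 2430:.2%}")                                     # ~3.70%
print(f"PDH E3 framing      : {7 / 537:.2%}")                                       # ~1.30%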

The following table summarises which overhead is or is not included in the MQCLI LLQ bandwidth
statements.

Table 21 LLQ bandwidths and ATM

Overhead                                                     Length     Included in MQCLI
RFC 1483 LLC / SNAP header                                   8 bytes    Yes
AAL5 trailer                                                 8 bytes    Partially (the 4-byte CRC field is not included)
AAL5 padding to make the last cell an even multiple of 48B   Variable   No
ATM cell header                                              5 bytes    No

Due to the significant ATM overhead that is not accounted for in the MQCLI bandwidths, it is
recommended to allocate not more than 80 % (a conservative figure) of the total available ATM bandwidth
to LLQ traffic classes. The ATM PVC bandwidth for a VBR-nrt ATM CoS is defined as the Sustained Cell
Rate (SCR). In other words, not more than 80 % of a particular PVC SCR should be allocated in service
policies attached to that PVC.

On the PE and CE routers (except on the 75xx), the amount of bandwidth that can be allocated to interfaces
in service policies can be controlled through the max-reserved-bandwidth interface command. The default
is 75 %.

The following is the required configuration for applying the service policy to an ATM PVC. ATM traffic
shaping needs to be configured on the ATM PVC. ATM traffic shaping is a mechanism that alters the
traffic characteristics of a stream of cells on a connection to achieve better network efficiency by ensuring
conformance at a policed remote ATM switch interface. Traffic shaping must maintain cell sequence
integrity on a connection.

!
interface ATM5/1
no ip address
max-reserved-bandwidth 80
!
interface ATM5/1.50 point-to-point
ip address n.n.n.n n.n.n.n
pvc 50/105
vbr-nrt <PCR> <SCR> <MBS>
service-policy output customer_profile
!

High Availability

This chapter would discuss the high availability component as it relates to the proposed architecture.
Depending on the size of the content, this chapter and the next may be combined.

Security

This chapter would discuss the security component as it relates to the proposed architecture. Depending
on the size of the content, this chapter and the next may be combined. Some very general topics are
presented here as a sample.

Password Management
Passwords and similar secrets (such as SNMP community strings) are the primary defence against
unauthorized access to your router. The best way to handle most passwords is to maintain them on a
TACACS+ or RADIUS authentication server. However, almost every router will still have a locally
configured password for privileged access, and may also have other password information in its
configuration file.

The enable secret command is used to set the password that grants privileged administrative access
to the IOS system. An enable secret password should always be set. You should use enable secret,
not the older enable password, because the latter uses a weak encryption algorithm.

If no enable secret is set, and a password is configured for the console TTY line, the console password may
be used to get privileged access, even from a remote VTY session. This is almost certainly not what you
want, and is another reason to be certain to configure an enable secret.

The service password-encryption command directs the IOS software to encrypt the passwords,
CHAP secrets, and similar data that are saved in its configuration file. This is useful for preventing casual
observers from reading passwords, for example, when they happen to look at the screen over an
administrator's shoulder.

However, the algorithm used by service password-encryption is a simple Vigenere cipher; any
competent amateur cryptographer could easily reverse it in at most a few hours. The algorithm was not
designed to protect configuration files against serious analysis by even slightly sophisticated attackers, and
should not be used for this purpose. Any Cisco configuration file that contains encrypted passwords should
be treated with the same care used for a clear text list of those same passwords.

This weak encryption warning does not apply to passwords set with the enable secret command, but
it does apply to passwords set with enable password.

The enable secret command uses MD5 for password hashing. The algorithm has had considerable
public review, and is not reversible as far as anybody at Cisco knows. It is, however, subject to dictionary
attacks (a "dictionary attack" is having a computer try every word in a dictionary or other list of candidate
passwords). It's therefore wise to keep your configuration file out of the hands of untrusted people,
especially if you're not sure your passwords are well chosen.

Console Ports
It is important to remember that the console port of an IOS device has special privileges. In particular, if a
BREAK signal is sent to the console port during the first few seconds after a reboot, the password recovery
procedure can easily be used to take control of the system. This means that attackers who can interrupt
power or induce a system crash, and who have access to the console port via a hardwired terminal, a
modem, a terminal server, or some other network device, can take control of the system, even if they do not
have physical access to it or the ability to log in to it normally.

It follows that any modem or network device that gives access to the Cisco console port must itself be
secured to a standard comparable to the security used for privileged access to the router. At a bare
minimum, any console modem should be of a type that can require the dialup user to supply a password for
access, and the modem password should be carefully managed.

Controlling TTY’s
Local asynchronous terminals are less common than they once were, but they still exist in some
installations. Unless the terminals are physically secured, and usually even if they are, the router should be
configured to require users on local asynchronous terminals to log in before using the system. Most TTY
ports in modern routers are either connected to external modems, or are implemented by integrated
modems; securing these ports is obviously even more important than securing local terminal ports.

By default, a remote user can establish a connection to a TTY line over the network; this is known as
"reverse Telnet," and allows the remote user to interact with the terminal or modem connected to the TTY
line. It is possible to apply password protection for such connections. Often, it is desirable to allow users to
make connections to modem lines, so that they can make outgoing calls. However, this feature may allow a
remote user to connect to a local asynchronous terminal port, or even to a dial-in modem port, and simulate
the router's login prompt to steal passwords, or to do other things that may trick local users or interfere with
their work.

To disable this reverse Telnet feature, apply the configuration command transport input none to
any asynchronous or modem line that should not be receiving connections from network users. If at all
possible, do not use the same modems for both dial-in and dial-out, and do not allow reverse Telnet
connections to the lines you use for dial-in.

Controlling VTYs and Ensuring VTY Availability


Any VTY should be configured to accept connections only with the protocols actually needed. This is done
with the transport input command. For example, a VTY that was expected to receive only Telnet
sessions would be configured with transport input telnet, while a VTY permitting both Telnet
and SSH sessions would have transport input telnet ssh. If your software supports an
encrypted access protocol such as SSH, it may be wise to enable only that protocol, and to disable clear
text Telnet. It's also usually a good idea to use the ip access-class command to restrict the IP
addresses from which the VTY will accept connections.

A Cisco IOS device has a limited number of VTY lines (usually five). No additional remote interactive
connections can be established if all of the VTY’s are in use. This creates the opportunity for a denial-of-
service attack; if an attacker can open remote sessions to all the VTY’s on the system, the legitimate
administrator may not be able to log in. The attacker does not have to log in to do this; the sessions can
simply be left at the login prompt.

One way of reducing this exposure is to configure a more restrictive ip access-class command on
the last VTY in the system than on the other VTY’s. The last VTY (usually VTY 4) might be restricted to
accept connections only from a single, specific administrative workstation, whereas the other VTY’s might
accept connections from any address in a corporate network.

Another useful tactic is to configure VTY timeouts using the exec-timeout command. This prevents an
idle session from consuming a VTY indefinitely. Although its effectiveness against deliberate attacks is
relatively limited, it also provides some protection against sessions accidentally left idle. Similarly,
enabling TCP keepalives on incoming connections (with service tcp-keepalives-in) can help to
guard against both malicious attacks and "orphaned" sessions caused by remote system crashes.

Disabling all non-IP-based remote access protocols, and using IPSec encryption for all remote interactive
connections to the router can provide complete VTY protection. IPSec is an extra-cost option, and its
configuration is beyond the scope of this document.

Logging
Cisco routers can record information about a variety of events, many of which have security significance.
Logs can be invaluable in characterizing and responding to security incidents. The main types of logging
used by Cisco routers are:
• AAA logging, which collects information about user dial-in connections, logins, logouts, HTTP
accesses, privilege level changes, commands executed, and similar events. AAA log entries are sent to
authentication servers using the TACACS or RADIUS protocols, and are recorded locally by those
servers, typically in disk files. If you are using a TACACS or RADIUS server, you may wish to enable
AAA logging of various sorts; this is done using AAA configuration commands such as aaa
accounting.
• SNMP trap logging, which sends notifications of significant changes in system status to SNMP
management stations.
• System logging, which records a large variety of events, depending on the system configuration.
System logging events may be reported to a variety of destinations, including the following:
o System console port (logging console).
o Servers using the syslog protocol (logging <ip-address>, logging trap).
o Sessions on VTY’s and TTY’s (logging monitor, terminal monitor).
o Local buffer in router RAM (logging buffered).

Console logging shall be disabled during debugging of various router protocols, to prevent a router "freeze".

From a security point of view, the most important events usually recorded by system logging are interface
status changes, changes to the system configuration, access list matches, and events detected by the
optional firewall and intrusion detection features.

Each system-logging event is tagged with an urgency level. The levels range from debugging information
(at the lowest urgency) to major system emergencies. Each logging destination may be configured with a
threshold urgency, and will receive logging events only at or above that threshold.

Saving logging information


By default, system-logging information is sent only to the asynchronous console port. Since many console
ports are unmonitored, or are connected to terminals without historical memory and with relatively small
displays, this information may not be available when it is needed, especially when a problem is being
debugged over the network.

Almost every router should save system logging information to a local RAM buffer. The logging buffer is
of a fixed size, and retains only the newest information. The contents of the buffer are lost whenever the
router is reloaded. Even so, even a moderately sized logging buffer is often of great value. On low-end
routers, a reasonable buffer size might be 16384 or 32768 bytes; on high-end routers with lots of memory
(and many logged events), even 262144 bytes might be appropriate. You can use the show memory
command to make sure that your router has enough free memory to support a logging buffer. Create the
buffer using the logging buffered <buffer-size> configuration command.

Larger installations will have syslog servers. You can send logging information to a server with logging
<server-ip-address>, and you can control the urgency threshold for logging to the server with
logging trap <urgency>. Even if you have a syslog server, you should still enable local logging.

If your router has a real-time clock or is running NTP, you will probably want to time-stamp log entries
using service timestamps log|debug datetime msecs.

Recording Access List Violations


If you use access lists to filter traffic, you may want to log packets that violate your filtering criteria. Older
Cisco IOS software versions support logging using the log keyword, which causes logging of the IP
addresses and port numbers associated with packets matching an access list entry. Newer versions provide
the log-input keyword, which adds information about the interface from which the packet was
received, and the MAC address of the host that sent it.

It is not usually a good idea to configure logging for access list entries that will match very large numbers
of packets. Doing so will cause log files to grow excessively large, and may cut into system performance.
However, access list log messages are rate-limited, so the impact is not catastrophic.

Access list logging can also be used to characterize traffic associated with network attacks, by logging the
suspect traffic.

Anti-spoofing
Many network attacks rely on an attacker falsifying, or spoofing the source addresses of IP datagrams.
Some attacks rely on spoofing to work at all, and other attacks are much harder to trace if the attacker can
use somebody else’s address. Therefore, it is valuable for network administrators to prevent spoofing
wherever feasible.

Anti-spoofing should be done at every point in the network where it is practical, but is usually both easiest
and most effective at the borders between large address blocks, or between domains of network
administration. It is usually impractical to do anti-spoofing on every router in a network, because of the
difficulty of determining which source addresses may legitimately appear on any given interface.

For an Internet service provider, effective anti-spoofing, together with other effective security measures,
can cause expensive, annoying problem subscribers to take their business to other providers. ISPs should
be especially careful to apply anti-spoofing controls at dialup pools and other end-user connection points
(see also RFC 2267).

Administrators of firewalls or perimeter routers sometimes install anti-spoofing measures to prevent hosts
on the Internet from assuming the addresses of internal hosts, but do not take steps to prevent internal hosts
from assuming the addresses of hosts on the Internet. It's a far better idea to try to prevent spoofing in both
directions. There are at least three good reasons for doing anti-spoofing in both directions at an
organizational firewall:
• Internal users will be less tempted to try launching network attacks and less likely to succeed if they do
try.
• Wrongly configured internal hosts will be less likely to cause trouble for remote sites.
• Outside crackers often break into networks as launching pads for further attacks. These crackers may
be less interested in a network with outgoing spoofing protection.

Anti-spoofing with packet filters


Unfortunately, it is not practical to give a simple list of commands that will provide appropriate spoofing
protection; access list configuration depends too much on the individual network. However, the basic goal
is simple: to discard packets that arrive on interfaces that are not viable paths from the supposed source
addresses of those packets. For example, on a two-interface router connecting a corporate network to the
Internet, any datagram that arrives on the Internet interface, but whose source address field claims that it
came from a machine on the corporate network, should be discarded.

Similarly, any datagram arriving on the interface connected to the corporate network, but whose source
address field claims that it came from a machine outside the corporate network, should be discarded. If
CPU resources allow it, anti-spoofing should be applied on any interface where it is feasible to determine
what traffic may legitimately arrive.

ISPs carrying transit traffic have limited opportunities to configure anti-spoofing access lists, but can
usually at least filter outside traffic that claims to originate within the ISP's own address space.

In general, anti-spoofing filters must be built with input access lists; that is, packets must be filtered at the
interfaces through which they arrive at the router, not at the interfaces through which they leave the router.
This is configured with the ip access-group <list> in command in interface configuration mode. It is
possible to do anti-spoofing using output access lists in some two-port configurations, but input lists are
usually easier to understand even in those cases. Furthermore, an input list protects the router itself from
spoofing attacks, whereas an output list protects only devices behind the router.

Please note that anti-spoofing filters can increase operational/management complexity. Some large
VPNs may change or update their address allocation on a daily or weekly basis, which means that ST
operations will have to maintain and update the anti-spoofing filters accordingly. The fact that IP packets
from a given VPN cannot escape into any other VPN largely eliminates the need for anti-spoofing filters
within VPNs: a misbehaving customer can only attack its own sites. An MPLS/VPN customer cannot affect
any other MPLS/VPN customer, nor the ST backbone routers.

Inbound anti-spoofing filters are implemented on IPv4 (Internet) connections:
• IPv4 CPE-PE interfaces on the PE routers.
• Peering interfaces on iGWs.
• Virtual-template and other dialup interfaces.
• Access interfaces on IPv4 CE routers.

Access list 101 consists of the following major sections:


• Block packets with invalid or prohibited source IP addresses from being sent towards or across the ST
backbone.
• Improve protection of P and RR routers by only allowing PING and TRACEROUTE traffic to reach the
IP address block 213.81.248.0/20 (i.e. the address block reserved for backbone links).
• Allow any other packet that is not destined for the ST backbone (i.e. transit traffic).

Controlling Directed Broadcasts


IP directed broadcasts are used in the extremely common and popular “smurf” denial-of-service attack, and
can also be used in related attacks.

An IP directed broadcast is a datagram which is sent to the broadcast address of a subnet to which the
sending machine is not directly attached. The directed broadcast is routed through the network as a unicast
packet until it arrives at the target subnet, where it is converted into a link-layer broadcast. Because of the
nature of the IP addressing architecture, only the last router in the chain, the one that is connected directly
to the target subnet, can conclusively identify a directed broadcast. Directed broadcasts are occasionally
used for legitimate purposes, but such use is not common outside the financial services industry.

In a smurf attack, the attacker sends ICMP echo requests from a falsified source address to a directed
broadcast address, causing all the hosts on the target subnet to send replies to the falsified source. By
sending a continuous stream of such requests, the attacker can create a much larger stream of replies, which
can completely inundate the host whose address is being falsified.

If a Cisco interface is configured with the no ip directed-broadcast command, directed
broadcasts that would otherwise be expanded into link-layer broadcasts at that interface are dropped
instead. The command no ip directed-broadcast must be configured on every interface of every
router that might be connected to a target subnet; it is not sufficient to configure only firewall routers. The
no ip directed-broadcast command is the default in Cisco IOS software version 12.0 and later.
In earlier versions, the command should be applied to every LAN interface that is not known to forward
legitimate directed broadcasts.

IP Source Routing
The IP protocol supports source routing options that allow the sender of an IP datagram to control the route
that datagram will take toward its ultimate destination, and generally the route that any reply will take.
These options are rarely used for legitimate purposes in real networks. Some older IP implementations do
not process source-routed packets properly, and it may be possible to crash machines running these
implementations by sending them datagrams with source routing options.

A Cisco router with no ip source-route set will never forward an IP packet that carries a source
routing option. You should use this command unless you know that your network needs source routing.

It is strongly recommended to disable the IP source routing option in ST MPLS network.

ICMP Redirects
An ICMP redirect message instructs an end node to use a specific router as its path to a particular
destination. In a properly functioning IP network, a router will send redirects only to hosts on its own local
subnets, no end node will ever send a redirect, and no redirect will ever traverse more than one
network hop. However, an attacker may violate these rules; some attacks are based on this. It is a good idea
to filter out incoming ICMP redirects at the input interfaces of any router that lies at a border between
administrative domains, and it is not unreasonable for any access list that is applied on the input side of a
Cisco router interface to filter out all ICMP redirects. This will cause no operational impact in a correctly
configured network.

Note that this filtering prevents only redirect attacks launched by remote attackers. It's still possible for
attackers to cause significant trouble using redirects if their host is directly connected to the same segment
as a host that's under attack.

CDP
Cisco Discovery Protocol (CDP) is used for some network management functions, but is dangerous in that
it allows any system on a directly connected segment to learn that the router is a Cisco device, and to
determine the model number and the Cisco IOS software version being run. This information may in turn
be used to design attacks against the router. CDP information is accessible only to directly connected
systems. The CDP protocol may be disabled with the global configuration command no cdp run.
CDP may be disabled on a particular interface with no cdp enable.

NTP
The Network Time Protocol (NTP) is a protocol used to time-synchronize network devices. NTP runs over
UDP and is documented in RFC 1305. An NTP stratum 1 server should get its time from an authoritative
time source, such as a GPS system or an atomic clock attached to a timeserver. NTP then distributes this
time across the network. NTP is a very sophisticated and efficient protocol, which only needs one packet
per minute to synchronize two machines to within a millisecond of one another.

NTP uses the concept of a "stratum" to describe how many NTP "hops" away a machine is from an
authoritative time source. A "stratum 1" time source has a reference clock such as a GPS or atomic clock
directly attached, a "stratum 2" time source receives its time from a "stratum 1" time source, and so on.
This “hop” count isn’t related to the IP hops between two NTP time sources. A device running NTP
automatically chooses the lowest-stratum timeserver as its time source. It only talks to, and listens to,
servers for which it has a configuration entry.

To avoid synchronization problems, NTP has two methods of determining the validity of a time source.
NTP will never synchronize to a device that is not itself synchronized. It will also not synchronize to a
source whose time is significantly different from that of all the other time sources.

The NTP configuration is usually static. Every device has a list of IP addresses with which it will exchange
NTP messages. These communication agreements are called associations. On LAN segments NTP can use
IP broadcast messages as well.

With Cisco two mechanisms are available to secure the communication: an access list-based restriction
scheme and an encrypted authentication mechanism. A limitation of Cisco’s implementation is that it
doesn’t support stratum 1 service, which means a reference clock such as a GPS or atomic clock cannot be
connected directly to the Cisco box.

NTP is a very valuable tool for reporting and troubleshooting, because cause and effect of problems can be
clearly correlated. Care must be taken as to where the time information comes from, especially if additional
time sources from the Internet are used as a reference. Confusing the time system can render system log
files completely useless.

The Network Time Protocol (NTP) will be used to synchronize router clocks. NTP authentication will be
used to secure NTP associations. The Loopback0 address is used to form NTP associations.

Network Management

Depending on the size of the content, there may be a separate LLD on Network Management. In that case,
put a reference to that document here. Otherwise, discuss all aspects of Network Management in detail in
this chapter.

Appendix I

Appendix II
Corporate Headquarters European Headquarters Americas Headquarters Asia Pacific Headquarters
Cisco Systems, Inc. Cisco Systems Europe Cisco Systems, Inc. Cisco Systems Australia, Pty., Ltd
170 West Tasman Drive 11 Rue Camille Desmoulins 170 West Tasman Drive Level 9, 80 Pacific Highway
San Jose, CA 95134-1706 92782 Issy-Les-Moulineaux San Jose, CA 95134-1706 P.O. Box 469
USA Cedex 9 USA North Sydney
www.cisco.com France www.cisco.com NSW 2060 Australia
Tel: 408 526-4000 www-europe.cisco.com Tel: 408 526-7660 www.cisco.com
800 553-NETS (6387) Tel: 33 1 58 04 60 00 Fax: 408 527-0883 Tel: +61 2 8448 7100
Fax: 408 526-4100 Fax: 33 1 58 04 61 00 Fax: +61 2 9957 4350

Cisco Systems has more than 200 offices in the following countries and regions. Addresses, phone numbers, and fax numbers are listed on the
Cisco Web site at www.cisco.com/go/offices.

Argentina • Australia • Austria • Belgium • Brazil • Bulgaria • Canada • Chile • China • Colombia • Costa Rica • Croatia • Czech Republic Denmark • Dubai, UAE
Finland • France • Germany • Greece • Hong Kong SAR • Hungary • India • Indonesia • Ireland • Israel • Italy • Japan • Korea • Luxembourg • Malaysia • Mexico
The Netherlands • New Zealand • Norway • Peru • Philippines • Poland • Portugal • Puerto Rico • Romania • Russia • Saudi Arabia • Singapore • Slovakia • Slovenia
South Africa • Spain • Sweden • Switzerland • Taiwan • Thailand • Turkey • Ukraine • United Kingdom • United States • Venezuela • Vietnam • Zimbabwe
