
Unit 8

ERX Policy Management


Overview

References
RFC 2474 Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers
RFC 2475 An Architecture for Differentiated Services
RFC 2597 Assured Forwarding PHB Group
RFC 2598 An Expedited Forwarding PHB
www.stardust.com
www.qosforum.com

Policy Management Rev 3.2 Page 8- 1


The Need for QoS

[Figure: traditional data, voice, and video traffic converging on the Internet]

• Applications in use on the Internet are changing
- Voice
- Video
- Traditional Data
• Service providers need new products
- Need to be able to sell services
- Better than best effort
- Better profits

The characteristics of Internet traffic are evolving once again. While the expansion of data
traffic continues unabated, a new challenge has been added: convergence. Traditional
voice carriers are rushing to move their voice traffic onto the Internet, while multimedia
content providers add real-time information like voice and video into the IP arena.
Increased bandwidth for LANs and new broadband access services such as xDSL and
cable networking are delivering the capacity needed to support new applications.
Improved optical technologies like DWDM are providing the added capacity needed in the
core of the Internet. However, increases in network bandwidth alone do not deliver the
guaranteed quality of service and high reliability demanded by critical applications.
The emergence of time-sensitive applications like Voice over IP and Video over IP has
introduced a requirement to provide service better than the “Best Effort” delivery
traditionally provided by IP.
A secondary driver behind Quality of Service is a business need for service providers to
differentiate themselves from their competitors. If all any service provider offers is “Best
Effort” service, why should a potential customer choose one over another?
Quality of Service (QoS) refers to the ability of network equipment to differentiate data
streams and apply various queuing policies to the data, offering customers differentiated
services for time-sensitive packets. Quality of Service is in essence a bandwidth
management function, ensuring that enough resources are allocated to support time-
sensitive applications and to shield them from the bursty nature of typical data traffic.
QoS Service Level Agreements between the customer and the Service Provider impose
monetary penalties if end-to-end performance metrics are not met, while giving the
Service Provider a commodity to sell at premium prices.



End-End QoS

[Figure: customer traffic crossing ISP-A, ISP-B, ISP-C, and ISP-D, with QoS Service Level Agreements at each peering point]

In order for true Quality of Service to be realized, the policies must be enforced end to
end throughout the Internet. An ISP cannot simply sell a service level agreement (SLA)
defining better than best effort delivery for a customer's traffic. Additional SLAs with peer
ISPs will have to be negotiated to ensure that the traffic receives the same level of service
as it traverses the Internet. Customer, service provider, and carrier business models are
all factors; end-to-end service cannot be reasonably provided until all providers have
QoS Service Level Agreements (SLAs) at each peering point.
Not only must we ensure that enforceable agreements are negotiated between a number of
core ISPs; it is equally important that client applications have the ability to ensure QoS
forwarding on their local network segments.
Good QoS design moves the majority of control functions as close to the applications
as possible. Ideally, core routers will have a limited set of packet forwarding rules
optimized to enable wire-speed throughput.



Differentiated Services
Network Components

[Figure: a DiffServ Domain with a Policy Manager, an Edge Device facing the Customer Network, and Core Routers (BA Classifiers applying Per Hop Behaviors) toward a Peer DiffServ Domain]

Unisphere is supporting the DiffServ QoS movement in the networking industry. DiffServ
uses the Type of Service (ToS) field in the IP header to mark each packet for QoS
treatment. In this way different traffic flows can be aggregated into a smaller number of
QoS class flows, with each class receiving pre-defined treatment.
This alleviates the need for the end-to-end per-session state monitoring required by RSVP,
while allowing RSVP reservation criteria to be mapped to flow aggregates. Each router in
the domain needs to be configured to understand the various class flow policies and to
support the queuing strategies required to implement those policies.
There are three key components in the DiffServ architecture, the Edge Device, Behavior
Aggregates, and the Policy. The QoS development work for the ERX is focused primarily
on implementing Edge Router functionality.
The edge device may also be responsible for implementing the Per Hop Behavior
associated with Behavior Aggregates. Per Hop Behaviors are essentially queuing policies
that define how much of the available bandwidth out of a router is allocated to each of the
various Behavior Aggregates.



Edge Device

[Figure: an Edge Device examining IP header fields (ToS, Protocol, Source/Destination Ports, Source/Destination Addresses)]

Multifield Classification
DSCP Marking
Flow Policing
Traffic Shaping

The Edge Device is the first place in an administrative domain where DiffServ rules are put
into effect. The edge device can be the originating workstation of an IP datagram, or many
hops into a packet's path. The border router connecting separate DiffServ Domains will
always be an Edge Device.
The Edge Device is responsible for performing Multi-Field (MF) Classification of incoming
packets. MF classification can include the IP Source/Destination address, the TCP/UDP
Source and Destination Ports, and the Protocol field in the datagram header. In a later
release of our software the classification will be extended to include the IP Type of Service
field, or DS Field. The traffic is compared to the listed attributes in a classifier access
control list and, if there is a match, the Edge Device takes the appropriate action.
One of the actions that may be required of an Edge Device is to mark the packets in a
classified flow with an appropriate Differentiated Services Code Point (DSCP) in the
datagram header.
Marking is used to assign the traffic flow to a Behavior Aggregate (BA). The BA defines
the treatment of the packet by routers within the local DS Domain. A BA defines a queuing
strategy for all traffic sharing common characteristics.
Once the MF Classifier has identified and marked the packet flows for the appropriate
service type, the Flow Policing function takes over. Since the point is to sell differentiated
service levels, the policing function ensures that customers get what they pay for and pay
for what they get.
Finally, an Edge Device may be required to perform Traffic Shaping to ensure that the rate
of traffic leaving the DiffServ Domain conforms to Traffic Conditioning Agreements
negotiated with peer DiffServ Domains. In this way we can avoid having the peer discard
traffic that is out of conformance with our SLA.



BA Classifier
Per Hop Behavior

[Figure: core routers classifying on the DS Field of the IP datagram header]

BA Classifiers are the core routers in a DiffServ Domain. The classification function is
limited to examining the DS Field (the old ToS byte) in the IP datagram header and then
applying the appropriate Per Hop Behavior. Per Hop Behaviors are the queuing strategies
used to implement the QoS service level agreements. Core routers need to forward traffic
and should not be burdened with the same complicated classification/policing functions
that are implemented in Edge Devices.
The DiffServ Architecture defines some standard classes of service, or Per Hop Behaviors,
for use in DiffServ implementations. Differentiated Services uses the eight-bit Type of
Service field in the IP datagram header to mark DiffServ service criteria. RFC 2474
defines a six-bit DS Code Point utilizing the space formerly used for ToS.
Expedited Forwarding (EF) - the DS Code Point defined in RFC 2598 for EF is 101110.
- Minimizes delay and jitter
- Provides the highest level of QoS
- Out-of-profile traffic is dropped
Assured Forwarding (AF), described in RFC 2597, provides four classes of IP forwarding
service for “Better than Best Effort” routing.
- Four classes of service with three drop precedences per class
- Out-of-profile traffic may be demoted in class or drop precedence.

                   Class 1   Class 2   Class 3   Class 4
Low Drop Prec      001010    010010    011010    100010
Medium Drop Prec   001100    010100    011100    100100
High Drop Prec     001110    010110    011110    100110
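The AF codepoints in the table above follow a fixed bit layout: the class number occupies the top three bits of the six-bit DSCP and the drop precedence the next two, with the last bit zero. A minimal sketch of that layout (the function name is illustrative, not part of any ERX command set):

```python
# AF DSCP layout per RFC 2597: class (3 bits) | drop precedence (2 bits) | 0
def af_dscp(af_class, drop_precedence):
    """Return the 6-bit DSCP for an AF class (1-4) and drop precedence (1-3)."""
    return (af_class << 3) | (drop_precedence << 1)

EF_DSCP = 0b101110  # Expedited Forwarding codepoint from RFC 2598 (decimal 46)

# Reproduce a few entries from the table above
assert af_dscp(1, 1) == 0b001010  # Class 1, low drop precedence
assert af_dscp(2, 2) == 0b010100  # Class 2, medium drop precedence
assert af_dscp(4, 3) == 0b100110  # Class 4, high drop precedence
```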



DiffServ Policy

[Figure: a Policy Manager (UMC) communicating with ERX routers over COPS]

Policy
Policy Decision Point (PDP)
Policy Enforcement Point (PEP)

Policy -- The DiffServ Policy is the overall set of rules that define DiffServ operation within
the domain. The overall policy framework consists of two elements, the Policy Decision
Point (PDP) and the Policy Enforcement Point (PEP). The Policy Decision Point, here
shown as a Unisphere Management Center acting as Policy Manager, is responsible for
storing and disseminating the DiffServ policy. It may be co-located with the PEP or in a
remote server, and may provide only static policy definitions or may respond dynamically to
event-driven policy requests from the PEP.
The PEP classifies, polices, and marks datagrams to enforce a domain's DiffServ Policy.
Metering is also a requirement of a DiffServ Policy. In order to ensure that Service Level
Agreements are fulfilled, traffic aggregates are metered in the PEP and PDP to assure
policy compliance.
The connection shown here between the UMC and the ERXs is a TCP communication
channel running a signaling protocol called Common Open Policy Service (COPS).
Originally developed as part of the IntServ framework, COPS can be used to communicate
policy updates from the PDP to the PEP.



Policies in the ERX

• Policy Manager

- QoS Classification and Marking


- Policy Routing
- Packet Filtering
- Rate Limiting
- Committed Access Rate (CAR)

In release 1-3 of ERX software, QoS functions are implemented through a software
process called Policy Manager. Policy Manager is the CLI tool that uses the
following functions to build a robust set of QoS Service Level Agreements.
• QoS Classification and Marking - a new Classifier FPGA is responsible for
examining incoming datagrams and classifying traffic flows.
• Policy Routing - policy routing allows the ERX to classify a packet on ingress to the
box and make a forwarding decision based on QoS classification without the need to
perform the normal routing table processing. This provides superior performance for
real-time applications.
• Packet Filtering - straightforward: drop packets based on classification.
• Rate Limiting - a rate limit is set for an interface, and out-of-profile packets are dropped.
• Committed Access Rate - a method of color coding traffic on ingress to the box based on
bandwidth profiles. This is a two-rate, three-color scheme where traffic within the committed
profile is marked green, traffic within a burst window is marked yellow, and traffic exceeding
the profile is marked red. When congestion is encountered on egress from the box, the
colors are used to define drop order.



Policy Manager

[Figure: the ERX database holding Rate Limit Profiles (Tiered 12MB, Hardlimit 9MB, Hardlimit 3MB), Classifier Control Lists (ACompanyUDP, ACompanyIGMP, QCorpICMP), and Policy Lists (RouteforACompany, RouteforQCorp, SecurityFilter), whose rules combine classifications with policy action commands such as next-interface, next-hop, filter, and set-tos-byte]

Policy Manager provides the CLI interface to build databases which can be drawn from
to implement a Policy. Each database contains individually specified traffic definitions.
These databases are global in scope. When building a Policy the user specifies input
from one or more of these databases. The policy is then applied to an interface. By
combining these inputs into policies, a wide variety of services can be deployed.
• Rate Limit Profiles - the tool for assigning bandwidth limit policy. A committed
rate with burst size and a peak rate with burst size can be configured, as well as actions to
take when traffic is in or out of conformance with the specified rates. Flexibility in rate limit
profile configuration allows a variety of services to be developed.
• Classifier Access Control Lists (CLACLs) - CLACLs are used to classify traffic into
flows with common characteristics. A CLACL specifies a range of values in the IP
datagram header. The fields that can be examined include IP Source/Destination
Address, TCP/UDP Source/Destination Port, and the Protocol field.
• Policy Lists - a Policy is a set of actions to be performed on classified traffic flows. Policy
actions can include packet filtering, policy routing, bandwidth limiting, traffic classification,
and packet marking.



Rate Limiting

[Figure: the ERX database as above, with the Rate Limit Profiles (Tiered 12MB, Hardlimit 9MB, Hardlimit 3MB) highlighted]

Rate Limiting enforces data rates below the physical line rate of a port for either an IP
interface or a classified packet flow. Rate limiting is implemented by configuring a Rate
Limit Profile that specifies bandwidth attributes and actions.
Rate limit profiles can be configured to provide a variety of services including tiered
bandwidth service where traffic conforming to configured bandwidth levels is treated
differently than traffic that exceeds the configured values.
Rate limit profiles can also be configured to simply set a bandwidth limit where traffic
exceeding the configured limit is discarded.



Rate Profile - Hard Limit

[Figure: 12 Mb and 20 Mb limits applied to T3 interfaces; data which exceeds the configured limit is clipped]

• Clipping function which limits bandwidth to a configured value as opposed to the physical bandwidth of the link

Using a rate limit profile, a service can be configured that allows an ISP to set a hard
rate limit on an interface. Different maximum rates can be set for traffic flowing in each
direction (ingress and egress), with traffic exceeding the maximum rate being discarded.
This service is essentially a clipping function, limiting the bandwidth a customer can send
or receive to configured values rather than to physical interface rates.
In a hard limit profile only a committed rate is configured; any data over the committed
rate is dropped. The committed burst must be at least as large as the maximum
anticipated packet size to avoid bursty performance. The recommended burst size (in
bytes) is 10% of the committed rate.

erx3(config)#rate-limit-profile HardLimit
erx3(config-rate-limit-profile)#committed-rate 12000000
erx3(config-rate-limit-profile)#committed-burst 150000
erx3(config-rate-limit-profile)#committed-action transmit
erx3(config-rate-limit-profile)#conformed-action drop
erx3(config-rate-limit-profile)#exceeded-action drop



Rate Limit Profile - Committed Access Rate (CAR)

[Figure: a T3 interface with a 12 Mb committed rate and a 15 Mb peak rate; data within the committed access rate is given priority]

Using Rate Limit Profiles it is possible to build a committed access rate service that
emulates Frame Relay CIR behavior. By assigning both committed and peak rates we can
prioritize traffic based on bandwidth utilization.

erx3(config)#rate-limit-profile CAR
erx3(config-rate-limit-profile)#committed-rate 12000000
erx3(config-rate-limit-profile)#committed-burst 150000
erx3(config-rate-limit-profile)#peak-rate 15000000
erx3(config-rate-limit-profile)#peak-burst 94000
erx3(config-rate-limit-profile)#committed-action mark 40
erx3(config-rate-limit-profile)#conformed-action mark 48
erx3(config-rate-limit-profile)#mask-val 255
erx3(config-rate-limit-profile)#exceeded-action drop

With this configuration the two rate three color marker will define the difference in packet
handling internally to the ERX while the TOS field marking will identify the relative priority
of packets to upstream routing elements.



Rate Limit Profiles

[Figure: two token buckets, one for the committed burst and one for the peak burst; traffic below the committed rate conforms, traffic between the committed and peak rates falls in the exceeded range, measured over each sample interval]

Rate Limit Profiles provide a flexible tool for limiting bandwidth usage on interfaces.
Two different rates, committed and peak, can be configured, each in bits per second.
Known as a Two Rate Three Color marking mechanism, token buckets control how much
traffic will be accepted at each of the configured rates.
When we configure a Rate Limit Profile we can configure a Committed Rate, in bits per
second, and optionally its associated Committed Burst, configured in bytes. A Peak Rate
and its associated Peak Burst are also configurable.
When configuring Rate Limit Profiles, each of these conditions can be used to specify
DiffServ marking rules, traffic priority levels, or a policing policy where traffic is dropped
when out of profile.
The token buckets provide flexibility in dealing with the bursty nature of data traffic.
Through these buckets we can define how long a traffic flow can burst over the committed
rate and still be considered in conformance with the traffic specification.
At the beginning of each sample period, the two buckets are filled with tokens based on
the configured burst sizes. Traffic is metered to measure its volume. When traffic is
received at a volume below the peak rate, one token is removed from each bucket for
every byte of data processed. As long as there are still tokens in the committed burst
bucket, the traffic is treated as conforming to the traffic specification.
When the committed burst token bucket is empty but tokens remain in the peak burst
bucket, traffic is marked as not conforming to the traffic specification and forwarded
with a lower priority classification.
When the peak burst token bucket is empty, a third priority classification can be applied.
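The bucket accounting described above can be sketched as follows. This is a simplified per-sample-interval model, not the ERX implementation; the class name and the per-packet granularity are illustrative:

```python
class TwoRateThreeColorMeter:
    """Sketch of the two-bucket metering described above (simplified)."""

    def __init__(self, committed_burst_bytes, peak_burst_bytes):
        self.cbs = committed_burst_bytes
        self.pbs = peak_burst_bytes
        self.refill()

    def refill(self):
        # At the start of each sample interval both buckets are refilled
        # with tokens based on the configured burst sizes.
        self.committed_tokens = self.cbs
        self.peak_tokens = self.pbs

    def color(self, nbytes):
        # One token is consumed per byte. Committed tokens left -> green;
        # only peak tokens left -> yellow; both buckets empty -> red.
        if self.committed_tokens >= nbytes:
            self.committed_tokens -= nbytes
            self.peak_tokens -= nbytes
            return "green"
        if self.peak_tokens >= nbytes:
            self.peak_tokens -= nbytes
            return "yellow"
        return "red"
```

For example, with a 100-byte committed burst and a 200-byte peak burst, a run of 60-byte packets in one interval goes green, yellow, yellow, red as the buckets drain.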



Congestion Management

[Figure: a queue with a yellow drop threshold, a red drop threshold, and a queue limit]

Configuration:   Color:    Data Rate:
Committed        Green     Below Committed Burst
Conformed        Yellow    Below Peak Burst
Exceeded         Red       Above Peak Burst
When a rate limit profile is configured, a side effect is the internal tagging of packets with
a drop preference. The color-coded tag is added automatically when the committed and
peak burst values for an interface's Rate Limit Profile are exceeded. This drop preference
is used when there is contention for outbound queuing resources and packets must be
dropped.
The queuing system uses drop eligibility to select packets for dropping when there is
congestion on an egress interface. This method is called dynamic color-based threshold
dropping. Each packet classified by a Rate Limit Profile has a two-bit tag associated with it
internally in the ERX. The two-bit code assigns a color to the packet: Red, Yellow, or
Green. Each packet queue in the system has two color-based thresholds as well as a
queue limit. Red packets are dropped when congestion causes the queue to fill past
the red threshold. Yellow packets are dropped when the yellow threshold is reached.
Green packets are never dropped until the queue limit is reached.
Remember that this internal tagging is automatic and does not necessarily reflect the
operation of the Policy on an interface.
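The threshold logic above can be sketched as a simple predicate. The function and parameter names are invented for illustration; the ERX applies this per queue in hardware:

```python
def should_drop(color, queue_depth, red_threshold, yellow_threshold, queue_limit):
    """Dynamic color-based threshold dropping, as described above.

    Red packets drop first (lowest threshold), yellow next, and green
    only when the queue is completely full.
    """
    if queue_depth >= queue_limit:
        return True  # even green packets are dropped at the queue limit
    if color == "red":
        return queue_depth >= red_threshold
    if color == "yellow":
        return queue_depth >= yellow_threshold
    return False     # green packets survive until the queue limit

# With a red threshold of 40, yellow of 70, and queue limit of 100:
assert should_drop("red", 50, 40, 70, 100) is True
assert should_drop("yellow", 50, 40, 70, 100) is False
assert should_drop("green", 99, 40, 70, 100) is False
```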



How Can I Tell It's Configured?

ERX3#show rate-limit-profile

Rate-Limit-Profile: training
Reference count: 0
Committed rate: 10000000
Committed burst: 44000
Peak rate: 12000000
Peak burst: 22000
Mask: 255
Committed rate action: mark(40)
Conformed rate action: mark(48)
Exceeded rate action: drop

Rate limit profiles can be configured with a variety of combinations of the above
parameters. In this example both committed and peak parameters are configured. This
profile has the following effects:
• All traffic below a data rate of 10 Mbps is internally tagged green, marked in
the DiffServ code point for Assured Forwarding Class 1 with a low drop
precedence (40 = 00101000), and forwarded.
• Up to 1000 44-byte packets (44000 bytes) received in a burst above 10 Mbps but
below 12 Mbps will also be tagged green and marked with the Assured
Forwarding Class 1 code point.
• Up to 500 44-byte packets (22000 bytes) received at the peak data rate of 12 Mbps
will be marked with the Assured Forwarding Class 1 code point but with a different
drop precedence (48 = 00110000), and tagged internally with a yellow drop
precedence.
• Any data received in excess of 12 Mbps will be dropped.
The configuration is flexible in that not all the parameters are required in all Rate Limit
Profiles. This allows a variety of Service Level Agreements to be deployed.
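The mark values 40 and 48 can be checked by decoding the ToS byte: the six-bit DSCP occupies the top bits, with the AF class above the drop precedence. The helper name here is illustrative:

```python
def decode_tos(tos_byte):
    """Split a ToS byte into (dscp, af_class, drop_precedence).

    The 6-bit DSCP sits in the top bits of the byte; within an AF DSCP,
    the class is the top 3 bits and the drop precedence the next 2.
    """
    dscp = tos_byte >> 2
    af_class = dscp >> 3
    drop_precedence = (dscp >> 1) & 0b11
    return dscp, af_class, drop_precedence

assert decode_tos(40) == (10, 1, 1)  # AF11: Class 1, low drop precedence
assert decode_tos(48) == (12, 1, 2)  # AF12: Class 1, medium drop precedence
```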



Classifier Access Control Lists

[Figure: the ERX database as above, with the Classifier Control Lists (ACompanyUDP, ACompanyIGMP, QCorpICMP) highlighted]

Classifier Access Control Lists (CLACLs) are an extension of ACLs for QoS flow
classification. Implemented like an access list, they can be configured to classify traffic
based on five IP datagram header fields: IP Source Address, IP Destination Address,
TCP/UDP Source Port, TCP/UDP Destination Port, and the Protocol field. In a later
release the list will be extended to include the Type of Service field.
The Classifier FPGA is responsible for evaluating data streams on IP interfaces for QoS
classification. Using the FPGA puts the classification in hardware, so that it does not
impede wire-speed forwarding, while maintaining the flexibility of software rule
configuration through the use of CLACLs.
The classifier FPGA supports the notion of rules and rule sets. A rule is a match clause
based on fields of an IP header. A two-field rule may match on a range of IP SAs and a
range of IP DAs. For example, a two-field rule can specify a match if the packet SA is in
subnet 192.168/16 AND the packet DA is in subnet 10/24. A five-field rule specifies
ranges for the typical five fields of the IP header: the IP protocol, IP SA and DA, and
TCP/UDP source and destination ports.
Rules are combined into rule sets. The ERX ingress processor supports rule sets of up to
32 five-field rules, up to 64 two-field rules, or up to 124 one-field rules. The FPGA
supports up to 8000 rule sets per line card. Rule sets are assigned one per IP interface.
A single rule set can be shared across many interfaces, but each interface can have just
one rule set applied.
The total number of rules per line card with statistics enabled is:
- Old line cards: 32k rules
- ASIC line cards: 64k rules
Without statistics enabled the totals are:
- Old line cards: 8k interfaces x 32 rules, or 2k interfaces x 124 rules
- ASIC line cards: 16k interfaces x 32 rules, or 4k interfaces x 124 rules



Classifier List configuration

erx1(config)#classifier-list training ?
<1 - 255> The protocol matched by this classifier list
icmp Configure a classifier list specific to the ICMP protocol
igmp Configure a classifier list specific to the IGMP protocol
ip Configure a classifier list specific to the IP protocol
not Match packets with protocols not equal to specified protocol
tcp Configure a classifier list specific to the TCP protocol
udp Configure a classifier list specific to the UDP protocol
erx1(config)#classifier-list training ip 10.1.0.0 0.0.255.255 not 10.1.0.0 0.0.255.255

There are significant differences between the configuration of CLACLs and ACLs.

1. Classifier lists are given a name rather than a number.
2. No action is specified in the CLACL. With a standard ACL every entry on the list is
assigned either a permit or deny action. For CLACLs the action is defined in a policy-list,
not within the classifier-list itself.
3. As shown above, the CLACL provides a number of fields in the datagram that can be
examined, whereas the ACL basically allows us to list IP Source and Destination addresses.
Here we can set a value to match in the protocol field, specify the ICMP or IGMP protocols,
build an IP address list, or build lists specific to the TCP or UDP transport protocols.
4. Another tool added to provide versatility is the not entry, which allows flexibility in
identifying ranges of values. In the example here the classifier-list is used to identify traffic
flows that originate in the 10.1.0.0 range of Source Addresses but are not destined for
Destination Addresses within the same range.



How Can I See The Configuration?

erx1#show classifier-list <CLACL Name>

Classifier Control List Table


---------- ------- ---- -----
Classifier Control List training
Reference count: 0

Classifier-List training.1
Protocol: ip
Not Protocol: false
Source IP Address: 10.1.0.0
Source IP Mask: 0.0.255.255
Not Source Ip Address: false
Destination IP Address: 10.1.0.0
Destination IP Mask: 0.0.255.255
Not Destination Ip Address: true

The show classifier-list command will display the Classifier Control List table with details of
the flow classification rules. Entering show classifier-list alone will display the rules for all
classifier lists configured. A name qualifier can be entered to display only a specific
classifier list.



Policy Lists

[Figure: the ERX database as above, with the Policy Lists (RouteforACompany, RouteforQCorp, SecurityFilter) highlighted; each list holds rules that combine classifications with policy action commands such as next-interface, next-hop, filter, and set-tos-byte]

Policy Lists are the central tool for implementing QoS in the ERX. Policy Lists can be
created with up to 32 rules. Once built, a Policy List defines a rule set that can be
applied to IP interfaces. Each rule in a Policy List includes a policy command and,
optionally, a classifier control list.



Policy Commands

Policy List CompanyA

Rule 1  Action=Rate-Limit-Profile Hard20MB  Prec=20
Rule 2  Action=Filter          Clacl=CompanyA   Prec=10
Rule 3  Action=Next-hop        Clacl=VoIP       Prec=50
Rule 4  Action=Next-hop        Clacl=ISPA       Prec=70
Rule 5  Action=Mark            Clacl=VideoConf  Prec=30
Rule 6  Action=Next-interface  Clacl=VideoConf  Prec=40

[Referenced Rate Limit Profiles: Hard20MB; Classifier Access Control Lists: VoIP, ISPA, VideoConf, CompanyA]

The policy commands define the actions that should be taken when incoming traffic
matches the criteria identified in the rule set. The policy commands that can be used are:
• Next-interface - used to implement policy routing, this command defines an
egress interface on the ERX. The system examines incoming traffic and classifies
it into packet flows that are sent to the configured destination interface. The
packet flows can be interface based, i.e. all packets on an IP interface, or a subset
of traffic on an interface determined by a classifier access control list. The system
does not perform a route table lookup on these packet flows; it simply forwards them
to the specified interface.
• Next-hop - used to define the next hop IP address for the Policy List. This action
forwards packets to the defined next hop regardless of the next hop shown in
the routing table for a packet's IP Destination Address.
• Filter - used to drop packets. A classifier-group specified with the rule will
determine which packets will be discarded. If no classifier-group is specified then
all packets from an interface associated with the policy list will be discarded.
• Rate-limit-profile - specifies a rate-limit-profile to be applied with this policy list.

Policy lists are applied to interfaces in either the inbound or outbound direction.



Building Services with Policy Lists

[Figure: the ERX connecting Company A (192.168.0.0 through 192.168.255.255, T3 on interface 3/0) to ISP A (T3, next hop 10.0.0.1), a Video Conference Provider (atm interface 2/0.1), and a VoIP Gateway (200.172.15.3)]

Policy List CompanyA
Rule 1  Action=Rate-Limit-Profile Hard20MB  Prec=20
Rule 2  Action=Filter          Clacl=CompanyA   Prec=10
Rule 3  Action=Next-hop        Clacl=VoIP       Prec=50
Rule 4  Action=Next-hop        Clacl=ISPA       Prec=70
Rule 5  Action=Mark            Clacl=VideoConf  Prec=30
Rule 6  Action=Next-interface  Clacl=VideoConf  Prec=40

The point of all of these QoS tools is to allow the service provider to sell value-added services to their
customers. In the next few pages we cover a sample scenario where Company A has requested service.
In this example, Company A has a T3 access line to the service provider's ERX, but their current traffic
levels won't fill a 45 Mb pipe, so they don't want to pay for a full T3. Most of the traffic the company sends
will be forwarded to the Internet via ISP A and requires only Best Effort delivery. There is a monthly video
conference that requires special treatment, and an experiment in the use of Voice over IP to reduce
telephony costs. Finally, there are some subnets within Company A that they would like to keep isolated
through filtering.
We will apply a policy on Company A's IP interface with the rules needed to implement the requested
services.
First we will build the Classifier Access Control Lists, then the Rate-Limit-Profile; then we will reference
these tools in a Policy List, and finally we will apply the Policy-List to the IP interface for Company A.



CLACLs and Rate Limit Profile
ERX(config)#classifier-list VoIP tcp 192.168.0.0 0.0.255.255 any range 8000 9000
ERX(config)#
ERX(config)#classifier-list VideoConf ip 192.168.0.0 0.0.255.255 176.16.1.0 0.0.0.255
ERX(config)#
ERX(config)#classifier-list ISPA ip 192.168.0.0 0.0.255.255 any
ERX(config)#
ERX(config)#classifier-list CompanyA ip 192.168.17.0 0.0.0.255 any
ERX(config)#classifier-list CompanyA ip 192.168.43.0 0.0.0.255 any
ERX(config)#
ERX(config)#rate-limit-profile Hard20Mb
ERX(config-rate-limit-profile)#committed-rate 20000000
ERX(config-rate-limit-profile)#committed-burst 250000
ERX(config-rate-limit-profile)#exit
ERX(config)#

Four different access lists will be required,


1. The first one will be used to identify the traffic destined for the voice over IP gateway.
The CLACL will check for an IP source address in Company As’ range 192.168.0.0 -
192.168.255.255 and a TCP destination port in the range of 8000-9000. This is an just an
example, real world VoIP applications may use a different port range.
The second CLACL is used to identify traffic destined for the Video Conference service,
packets with an IP source address in Company As’ range with a destination address of
176.16.1.0 through 172.16.1.255 will match this classification.
Company A two subnets that will not be permitted to send IP traffic through the ERX,
192.168.17.0/24 and 192.168.43.0/24. We will build CLACL to identify traffic from those
subnets so that the policy can filter that traffic.
Then we build a less specific classifier to identify Internet traffic from Company A. Traffic
from Company A to any IP destination address will match this list.
The Rate Limit Profile needed to implement the service for Company A is fairly generic,
providing a hard bandwidth limit of 20 megabits per second. The rate-limit-profile we
build here can be reused in policies for other customers. Notice that we specify a
committed rate and a committed burst size. This ensures that we don't have abrupt
committed-rate cutoffs in the middle of a packet. The committed burst must be set to at
least the maximum anticipated packet size; the recommendation for optimal performance
is to configure a burst size of 10% of the committed rate. Remember that the burst is in
bytes while the rate is in bits per second. In this example we have a committed rate of
20 Mbps; 10% of that gives 2 megabits, and dividing by 8 bits per byte gives a
committed burst value of 250,000 bytes.
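The burst-size arithmetic above can be checked in a few lines of plain Python (illustrating the calculation only):

```python
# Committed rate is configured in bits per second, burst in bytes.
committed_rate_bps = 20_000_000            # 20 Mbps committed rate
burst_bits = committed_rate_bps // 10      # recommended burst: 10% of the rate
committed_burst_bytes = burst_bits // 8    # convert bits to bytes for the CLI

print(committed_burst_bytes)  # 250000, the value used in the rate-limit-profile
```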



Policy-list for Company A

ERX(config)#policy-list CompanyA
ERX(config-policy)#filter classifier-group CompanyA precedence 10
ERX(config-policy)#rate-limit-profile Hard20Mb precedence 20
ERX(config-policy)#next-interface atm 2/0.1 classifier-group VideoConf prec 40
ERX(config-policy)#mark 40 mask 255 classifier-group VideoConf prec 30
ERX(config-policy)#next-hop 10.0.0.1 classifier-group ISPA
ERX(config-policy)#next-hop 200.172.15.3 classifier-group VoIP prec 50
ERX(config-policy)#exit
ERX(config)#
ERX(config)#interface serial 3/0
ERX(config-if)#ip policy input CompanyA statistics enabled

Once the Classifier Access Control Lists and the Rate Limit Profiles are built, they can be
referenced in a Policy List. In this example the policy list is created with the name
CompanyA; then six rules are included in the policy list that define the overall service
provided to Company A.
Now we have a policy-list in the ERX database, but it doesn't take effect until we apply it
to Company A's incoming T3 interface in slot 3, port 0.
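The precedence values on the rules above control the order in which matching rules are applied, lowest first. The sketch below is an illustration only, not how the ERX actually evaluates policies; the "*" entry stands for the rate-limit rule, which has no classifier group and so applies to all traffic, and the default precedence of 100 for the ISPA rule (entered without one) is an assumption:

```python
# Hypothetical model of the six CompanyA policy rules: (precedence, group, action).
rules = [
    (10,  "CompanyA",  "filter"),
    (20,  "*",         "rate-limit Hard20Mb"),
    (30,  "VideoConf", "mark 40"),
    (40,  "VideoConf", "next-interface atm 2/0.1"),
    (50,  "VoIP",      "next-hop 200.172.15.3"),
    (100, "ISPA",      "next-hop 10.0.0.1"),  # assumed default precedence
]

def actions_for(matched_groups):
    """List the actions whose classifier matched, lowest precedence first."""
    return [action for prec, group, action in sorted(rules)
            if group == "*" or group in matched_groups]

# A packet whose source/destination match the VideoConf and ISPA classifiers:
print(actions_for({"VideoConf", "ISPA"}))
```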



Dynamic Policy
[Diagram: subscribers diane@isp1.com (via DSL modem and DSLAM) and tim@isp1.com (MAC=A, via ATM switch) connect over ATM and T3/E3 links to the ERX; the UMC/SSC responds to a service request by pushing a policy-list to the ERX.]

Dynamic QoS is the ability to assign IP Quality of Service handling specifications to users
automatically when a service is requested. In this example a BRAS client like
diane@isp1.com establishes her PPPoE session as normal, but after the authentication
process is complete she is connected to a service portal, like the Service Selection Center
element of the Unisphere Management Center. Through the SSC she selects the QoS
level and services she wants for this session, and the UMC dynamically assigns the policy-
list for the requested service to the IP interface supporting diane@isp1.com.
The assignment of a policy-list can be transmitted from the server to the ERX in a variety
of ways. A service portal can telnet into the ERX and transmit CLI commands to implement
the policy, the server can implement the policy through a series of SNMP set
commands to the ERX, or the ERX can maintain a connection to the policy server using
the Common Open Policy Services (COPS) protocol.



Configuring Service Selection Center Client
(SSCC)

ERX2(config)#sscc enable
ERX2(config)#
ERX2(config)#sscc primary address 10.1.1.1 port 3310
ERX2(config)#sscc secondary address 10.1.2.1 port 3310
ERX2(config)#sscc tertiary address 10.1.3.1 port 3310
ERX2(config)#
ERX2(config)#sscc retryTimer 180

Configuring the ERX to connect to a remote server requires enabling the SSC client
function and specifying the IP addresses and destination ports that the primary, secondary,
and tertiary SSC servers listen on. Optionally, you can specify a retry timer value (the
default is 90 seconds), which sets how long the SSC client software waits for a response
from the primary server before switching to the secondary server, and so on.
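The failover order described above can be sketched as follows. This is a hypothetical illustration of the behaviour, not ERX internals; `select_server` and the `responds` callback are our own names:

```python
# Servers in priority order, as configured: primary, secondary, tertiary.
servers = [("10.1.1.1", 3310), ("10.1.2.1", 3310), ("10.1.3.1", 3310)]

def select_server(servers, responds):
    """Return the first server (in priority order) that responds, else None.

    The responds() callback stands in for "answered within the retry timer".
    """
    for host, port in servers:
        if responds(host, port):
            return (host, port)
    return None

# If the primary is down but the secondary answers, we fail over to it:
print(select_server(servers, lambda host, port: host != "10.1.1.1"))
```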



How Can You Tell If It's Working?

• Policy Troubleshooting Commands


- show classifier-list <name>
- show rate-limit-profile <name>
- show policy-list <name>

• Viewing Policy Assignment


- show ip interface <interface>

• Viewing SSC configuration


- show sscc info <brief>
- show cops info

erx1#show rate-limit-profile

Rate Limit Profile Table
---- ----- ------- -----
Rate-Limit-Profile: training
Reference count: 0
Committed rate: 20000000
Committed burst: 5000
Peak rate: 0
Peak burst: 0
Mask: 255
Committed rate action: mark(40)
Conformed rate action: transmit
Exceeded rate action: transmit

erx1#show classifier-list

Classifier Control List Table
---------- ------- ---- -----
Classifier Control List training
Reference count: 0
Classifier-List test.1
Protocol: IP
Not Protocol: false
Source IP Address: 10.3.0.0
Source IP Mask: 0.0.255.255
Not Source Ip Address: false
Destination IP Address: 10.3.0.0
Destination IP Mask: 0.0.255.255
Not Destination Ip Address: true

erx1#show policy-list

Policy Table
------ -----
Policy CompanyA
Administrative state: enable
Operational status: enabled
Error Value: 0
Reference count: 1

Referenced by interface(s):
serial4/0:1/1 Input policy, Statistics disabled

