
Course 34350: Broadband Networks. Mandatory Lab Report.

Modeling MPLS Networks

Martin Nord, José Soler (student no. 020725, PhD)


Acronyms
ATM: Asynchronous Transfer Mode
FEC: Forward Equivalency Class
FR: Frame Relay
HTTP: Hyper Text Transfer Protocol
IF: Interface
IP: Internet Protocol
LER: Label Edge Router
LSP: Label Switched Path
LSR: Label Switching Router
MPLS: Multi Protocol Label Switching
TAG: Tell-And-Go
QoS: Quality of Service

Introduction
We have defined our own model in order to become familiar with all the steps involved in defining an MPLS network.

SMART goals
The purpose is to demonstrate different means of utilising MPLS in an IP-centric network. We focus on TE (load balancing) and backup recovery in case of node failure. The results are observed by measuring delay and throughput. Hence we have defined SMART goals: Specific, Measurable, Acceptable, Realisable and Thorough.

About the model


Project details
We have created a project called MaJo_TE1 for this lab. All the model files used to build the project are found in the folders jsopc//In&Out/BBNmandatoty/files/ and \\comserv\mn\public\BBN. The project is modelled in OPNET 8.1. The model has several scenarios, each representing a step forward in our work, as presented in the Scenarios section of this report.


Topology

Figure 1. Network topology

The topology is based on a 4-router core network in which node 0 and node 1 are the ingress and egress nodes. There are two disjoint routes between these edge routers. The routers are interconnected by OC3 links. Attached to the ingress router is the node labelled Madrid; attached to the egress router are four nodes: Barna, Mallorca, Denia and server. From now on, these nodes are referred to by the names indicated in Figure 1. These nodes are connected to the edge routers by DS3 links.

Configuration
Madrid, Barna, Mallorca and Denia are defined as workstations, while server is an HTTP server. The IP addresses for these nodes and the edge routers are configured manually. The two other IP routers are configured automatically by OPNET. The interfaces of the edge routers are then configured manually for the attached nodes. The interface configuration of node 1 is shown in Table 1 as an example. More details can be found in the project files.
Table 1. Interface configuration of node 1.


Several scenarios
The scenarios created and the aim of each are summarised below. More details are found in the section Scenarios.
1 No_TE_init: Defines the basic network, allows us to get used to OPNET, and is the starting point of our work.
2 No_TE_congestion: Modifies the traffic to demonstrate that using traditional routing protocols may lead to congestion, since all traffic follows the shortest path.
3 TE_initial: Upgrades the core routers to MPLS, configures MPLS and FECs to balance the load unidirectionally.
4 TE_initial2: Bidirectional LSPs are used.
5 TE_initial3_nodefailure: A node in the LSP fails; no protection path is defined, and we see the effect on the traffic of the FEC that uses the LSP.
6 PROTECT_1: A backup LSP path is defined but not used.
7 PROTECT_2: Failure conditions are created and the backup path is used to protect against node failure.


MPLS primer
MPLS represents the convergence of connection-oriented forwarding techniques and the Internet routing protocols. The precedents of MPLS (Ipsilon's IP Switching, IBM's ARIS, Cisco's early TAG Switching, and Toshiba's Cell Switch Router architectures) leveraged the high-performance cell-switching capabilities of ATM switch hardware and melded them together into a network using existing IP routing protocols. As standardisation progressed, packet-based MPLS also emerged to simplify the mechanisms of packet processing within core routers, replacing full or partial header classification and longest-prefix-match lookups with simple indexed label lookups.

MPLS offers one powerful tool unavailable to solutions based on conventional IP routers: the capability to forward packets over arbitrary non-shortest paths and to emulate high-speed tunnels between non-label-switched domains. Such traffic-engineering capabilities enable service providers to optimise the distribution of QoS-sensitive and Best Effort traffic around their network. Additionally, MPLS can support metering, policing, marking, queuing and scheduling behaviours ranging from the fine granularity of IntServ to the aggregated granularity of DiffServ, and offer them simultaneously on a single network.

Simply replacing connectionless shortest-path forwarding with label-switched shortest-path forwarding is not a major win. However, an LSP need not follow the shortest path between two edge LSRs. Although conventional IP routing protocols typically do not generate non-shortest-path routes, external routing algorithms can be used to determine new routes for LSPs that result in a more optimal distribution of load around a network. This feature is a major advantage of MPLS over IntServ or DiffServ alone.

Encoding Labels on Specific Links
MPLS forwarding is defined for a range of link-layer technologies, some of which are inherently label switching (ATM, FR) and others not (packet over SONET/SDH (POS) and Ethernet). Although switching logically occurs on the label in the top stack entry, ATM and FR switch their native data units (cells and frames, respectively) based on a link-layer copy of the top stack entry. For packet-based layers, the MPLS frame is simply placed within the link's native frame format. The stacking scheme allows LSPs to be tunnelled through other LSPs. The action of putting a packet onto an LSP constitutes a push of an MPLS label stack entry. The action of reaching the end of an LSP results in the top stack entry being removed (popped).

Figure 2. MPLS encoding for PPP over SONET/SDH and ATM links.
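To make the label stack operations above concrete, here is a minimal Python sketch of push, swap and pop on a label stack. The class and field names (LabelEntry, MplsPacket, label, exp, ttl) are our own illustrative choices and do not correspond to OPNET model attributes.

    # Minimal sketch of MPLS label-stack handling (illustrative names only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LabelEntry:
        label: int      # 20-bit label value
        exp: int = 0    # 3 Experimental bits (per-hop behaviour hints)
        ttl: int = 64   # time to live

    @dataclass
    class MplsPacket:
        payload: bytes
        stack: List[LabelEntry] = field(default_factory=list)

        def push(self, entry: LabelEntry) -> None:
            # Ingress to an LSP (or an LSP tunnel): add a new top stack entry.
            self.stack.append(entry)

        def swap(self, new_label: int) -> None:
            # Core LSR forwarding: replace the top label with the outgoing label.
            self.stack[-1].label = new_label
            self.stack[-1].ttl -= 1

        def pop(self) -> LabelEntry:
            # Egress of an LSP: remove the top entry; forwarding then continues
            # on the next label, or on the IP header if the stack is empty.
            return self.stack.pop()

    # An ingress LER pushes a label, core LSRs swap it, the egress LER pops it.
    pkt = MplsPacket(payload=b"ip datagram")
    pkt.push(LabelEntry(label=17))
    pkt.swap(42)
    assert pkt.pop().label == 42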


Driving Per-hop Behaviour
By definition, a particular LSP is associated with a particular FEC. For Best Effort services, the FEC is derived solely from topological considerations. However, three different solutions exist when additional edge-to-edge QoS requirements are taken into account:
- Each distinct queuing and scheduling behaviour may be encoded as a new FEC (LSP), ignoring the Experimental field.
- The Experimental field encodes up to eight queuing and scheduling behaviours for the same FEC (LSP).
- The Experimental field encodes up to eight queuing and scheduling behaviours independent of the FEC (LSP).
The Label field can provide the context from which per-hop queuing and scheduling parameters are determined. However, per-hop behaviour is then intimately associated with a specific LSP, because the entire Label field is also being used to determine a packet's next hop (its path context). Distinct service classes require distinct LSPs if the Experimental field is not being used. Likewise, distinct drop precedence levels require distinct LSPs.

Figure 3: The label alone can provide per-hop behaviour context.

If the Experimental bits are used to provide additional classification context, up to eight additional permutations of service class and drop precedence are possible. These permutations may be determined within the context of a particular LSP or completely independent of the LSP. These approaches can significantly reduce the number of Label field values required to encode multiple service classes and drop precedence levels across an MPLS network.

Figure 4: Label and Experimental bits together provide per-hop behaviour.
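As a rough illustration of the two approaches in Figures 3 and 4, the sketch below contrasts a lookup keyed on the Label field alone with one where the three Experimental bits additionally select the queuing and drop-precedence treatment. All table contents, interface names and class names (EF, AF, BE) are invented for the example.

    # Illustrative per-hop behaviour lookup (invented table contents).

    # Figure 3 style: the label alone provides both path and per-hop context,
    # so distinct service classes need distinct labels (LSPs).
    label_table = {
        17: {"next_hop": "if0", "out_label": 42, "queue": "EF"},
        18: {"next_hop": "if0", "out_label": 43, "queue": "BE"},
    }

    # Figure 4 style: the label selects the path; the Experimental bits select
    # one of up to eight queuing / drop-precedence treatments on that path.
    exp_to_phb = {
        0: ("BE", "low"),
        1: ("AF", "low"),
        2: ("AF", "high"),
        5: ("EF", "low"),
    }

    def forward(label: int, exp: int):
        entry = label_table[label]
        queue, drop_prec = exp_to_phb.get(exp, ("BE", "low"))
        return entry["next_hop"], entry["out_label"], queue, drop_prec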


Label-Switched Path Merging and QoS
One way of reducing the amount of label space consumed within an MPLS network involves the use of LSP merging. It involves having two or more incoming labels map to a single downstream label at a core LSR. In essence, traffic belonging to the same FEC but originating from different ingress LSRs is merged onto a single LSP at some point in the middle of the network. From the merging point onward, a single LSP replaces two or more LSPs that would otherwise have independently converged on the same egress LSR for the same FEC. Naturally, this technique reduces label consumption on all links downstream of the merge point.

Edge behaviours
At the edge of an MPLS network sits the label edge router (LER). A LER terminates and/or originates LSPs and performs both label-based forwarding and conventional IP routing functions. On ingress to an MPLS domain, a LER accepts unlabelled packets and creates an initial MPLS frame by pushing one or more MPLS label entries. On egress, the LER terminates an LSP by popping the top MPLS stack entry and forwarding the remaining packet based on rules indicated by the popped label. Figure 5 shows a LER labelling an IP packet for transmission out of an MPLS interface. Conventional IP packet processing determines the FEC and, hence, the contents of a new packet's initial MPLS label stack and its outbound queuing and scheduling service. Once labelled, packets are transmitted into the core along the chosen LSP.

Figure 5 : Simplified Ingress Label Edge Router.
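A condensed sketch of the ingress LER behaviour of Figure 5 is given below. The FEC names, address prefix and label values are hypothetical; a real LER builds its tables from the configured FECs and LSPs.

    # Simplified ingress LER (hypothetical FEC names, prefixes and labels).
    from typing import Optional

    FEC_TO_LSP = {
        "fec_http":  {"out_if": "if_core", "label": 100},   # traffic-type FEC
        "fec_denia": {"out_if": "if_core", "label": 101},   # destination FEC
    }

    def classify(dst_addr: str, dst_port: int) -> Optional[str]:
        # Use as much IP/transport header information as needed to pick a FEC.
        if dst_port == 80:
            return "fec_http"
        if dst_addr.startswith("10.0.3."):    # hypothetical prefix for Denia
            return "fec_denia"
        return None                           # no FEC: conventional IP forwarding

    def ingress(dst_addr: str, dst_port: int, payload: bytes):
        fec = classify(dst_addr, dst_port)
        if fec is None:
            return ("ip_forward", payload)
        lsp = FEC_TO_LSP[fec]
        # Push the initial label stack entry and transmit onto the chosen LSP.
        return ("mpls_forward", lsp["out_if"], lsp["label"], payload)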

Hybrid LSRs may originate or terminate some LSPs while acting as a transit point for other LSPs, acting as an edge for some traffic and as core for other traffic. An LSR may even do both simultaneously when it supports the tunnelling of one LSP within another. At the ingress to such a tunnel, the LSR pushes a new label stack entry based on the ingress packet's existing top label. At the egress from the LSP tunnel, the top-level label is popped, and the LSR then switches the remaining MPLS frame based on the new top label.


LERs are also responsible for traffic conditioning, which covers both classifying packets onto particular LSPs and rate shaping (policing) the traffic going onto particular LSPs to maintain overall service goals. A LER classifies incoming IP packets, using as much header information as necessary to map packets to the correct LSP and to correctly set the Experimental bits if appropriate.

Traffic Engineering
Topology-driven MPLS may be combined with the per-hop behaviour described earlier to create a network capable of supporting specific edge-to-edge service levels. However, it does not assist the network operator in balancing the load around the nodes and links making up the network's core. Load balancing requires explicitly defining the cross-core route each LSP takes, in order to optimise the average and peak traffic loads on the various paths that may exist between any two LERs. Such explicit management of LSPs is often referred to as traffic engineering.

Scenarios
1 Pure IP
Intro
In the scenario No_TE_init we have defined background traffic between Madrid and three destinations: Barna, Mallorca and Denia. The dynamics and amplitudes of the traffic allow easy identification of the three flows. The start and end times for the flows to Barna, Mallorca and Denia are (10, 50), (10, 52) and (5, 54), respectively. The throughputs of the flows are indicated in Figure 6. We have also explicitly modelled traffic, namely heavy HTTP browsing as defined in OPNET's traffic profiles. The purpose of this is to allow measurements of packet delay and processing delays, and furthermore, in subsequent scenarios, to have the possibility of using the traffic type as a FEC mapping criterion.

Results
Using traditional routing protocols, all the traffic follows the shortest path. This is illustrated in Figure 6, where the link from node 0 to node 2 carries all the traffic; no traffic passes through node 6. We observe the traffic dynamics of the three flows. The total traffic of 43000 packets/second that passes through node 2 is less than its IP datagram forwarding rate of 50000 packets/second. For a one-way distance of around 500 km, the round-trip propagation delay is approximately 5 ms. The measured HTTP response times are around 20 ms. The HTTP server task processing time is around 6 ms, while the IP processing delay in the routers is approximately 0.2 ms. Delay contributions from e.g. SDH/SONET constitute the remaining delay of around 8 ms. We will see increased contributions from the IP processing delay when congestion arises in the next scenario.
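The delay budget above can be checked with simple arithmetic, assuming a propagation speed of roughly 2e8 m/s in fibre and around five IP processing passes end to end; both figures are our assumptions, not values taken from the model.

    # Rough check of the quoted delay budget (assumed propagation speed and hop count).
    distance_m = 500e3              # one-way distance, ~500 km
    v = 2e8                         # assumed propagation speed in fibre, m/s
    rtt_prop = 2 * distance_m / v   # 0.005 s = 5 ms round-trip propagation delay

    server_task   = 6e-3            # HTTP server task processing time (measured)
    ip_processing = 0.2e-3          # IP processing delay per router pass (measured)
    other         = 8e-3            # residual (SDH/SONET framing etc.)

    http_response = rtt_prop + server_task + 5 * ip_processing + other
    print(round(rtt_prop * 1e3, 1), "ms RTT,", round(http_response * 1e3, 1), "ms response")
    # -> 5.0 ms RTT, 20.0 ms response, consistent with the measured ~20 ms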


Figure 6. Left: Throughputs at different links in the network. Right: Delays at different nodes.

Congestion

Intro
In order to demonstrate the non-optimum performance of such a pure IP routing model, we increase the load that is to be processed by node 2. To achieve this, in the scenario No_TE_congestion, we create two additional nodes using node 2 as their only intermediate router, as illustrated in Figure 7. The background traffic source is Logroño and the destination is Zaragoza. The throughput in the (16, 45) period is 180000 packets/second. Clearly, the total traffic to be processed by the IP router exceeds its IP datagram-forwarding rate. The node will be congested, and we aim to study this through the HTTP traffic delay, which depends on the IP processing delay in the router.
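A quick check, using the figures quoted above, confirms that the offered load during the overlapping period exceeds node 2's forwarding capacity; this is only a back-of-the-envelope sketch, not an OPNET statistic.

    # Offered load vs. forwarding capacity at node 2 (figures from the text).
    madrid_flows = 43000    # packets/s, background flows from the first scenario
    logrono_flow = 180000   # packets/s, Logroño -> Zaragoza in the (16, 45) period
    capacity     = 50000    # packets/s, IP datagram-forwarding rate of node 2

    offered = madrid_flows + logrono_flow          # 223000 packets/s
    print(offered, "packets/s offered >", capacity, "->", offered > capacity)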

Figure 7. New network topology with two added nodes.


Results
The throughput of the flows through the congested node is significantly reduced, because the total packet-forwarding rate of the router is 50000 packets/second. However, this limitation only takes full effect after 35 minutes: there is a period between 18 and 35 minutes during which the outgoing throughput from node 2 is around 77000 packets/second. After 35 minutes the outgoing throughput stabilises at 50000 packets/second. This may be a result of the modelling of the background traffic, since in reality (or for explicit traffic) the IP router could not exceed its maximum packet-forwarding rate.

Figure 8. Throughput

The HTTP response time is significantly increased while the node is congested. The increase in HTTP page response time of ~14 ms can be directly attributed to the increase in IP processing delay of ~7 ms, because this delay is incurred twice, once for the request and once for the response.

Figure 9. Delay



Load-balancing

Scenario (TE_initial2)

Intro
The former case is a suitable example for demonstrating the TE capabilities of MPLS. The goal of the load balancing is to reduce the load on the congested node by making some traffic follow a non-shortest path through the network. In order to achieve this, we define a unidirectional LSP that goes from node 0 to node 1 via node 6, and another one in the reverse direction. They are named balancing path and reverse balancing path.

Figure 10. Network topology with LSPs (red and blue) specified.

To specify the traffic that should use these LSPs, we define two FECs: a destination-based FEC (bound_Denia) and a traffic-type-based FEC (HTTP traffic). A combination of the two could also have been defined. The FEC details are illustrated in Figure 11.

Figure 11. FEC details of all HTTP traffic and traffic going to Denia.

The FECs must be assigned to the LSPs at the LER. This is done by mapping FECs to LSPs as indicated in Figure 12. The LER is responsible for inspecting the incoming traffic, finding the FEC, pushing an MPLS label onto the datagram and outputting the MPLS packet on the correct interface, as indicated in the LSP table.



Figure 12. One of the two FEC mappings used.
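In shorthand, the FEC definitions (Figure 11) and the FEC-to-LSP mapping (Figure 12) amount to the following; the dictionary keys and field names below are our own notation, not OPNET attribute names.

    # Shorthand for the FECs and the FEC-to-LSP mapping of this scenario
    # (our own notation; the real configuration lives in the OPNET dialogs).
    from typing import Optional

    fecs = {
        "bound_Denia": {"match": {"destination": "Denia"}},          # destination-based
        "HTTP":        {"match": {"protocol": "TCP", "port": 80}},   # traffic-type-based
    }

    fec_to_lsp = {
        "bound_Denia": "balancing path",   # node 0 -> node 6 -> node 1
        "HTTP":        "balancing path",   # reverse traffic uses "reverse balancing path"
    }

    def map_to_lsp(fec_name: str) -> Optional[str]:
        # At the LER (node 0): traffic matching a FEC is labelled and sent on its
        # LSP; all other traffic follows the ordinary shortest path via node 2.
        return fec_to_lsp.get(fec_name)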

Results
We see from Figure 13 that the outgoing traffic from Madrid is effectively split when it arrives at the LER, node 0, according to the FEC definitions and FEC mapping described above. Hence, traffic to Denia and HTTP traffic are independent of the load on node 2. The HTTP traffic can only be observed by zooming in on the graph, due to its low throughput.

Figure 13. Left: Load balancing based on FECs from node 0. Right: Zoom-in on low throughput HTTP traffic in the (node 0 - node 6) link.

In this scenario there was no backup LSP; when a node or link in the LSP fails, the traffic of the corresponding FECs is routed according to ordinary routing protocols, and the MPLS header is not pushed onto the IP datagram. To illustrate this, Figure 14 shows the throughput from node 0 to node 6 and from node 0 to node 2 when node 6 fails after ~12 minutes and recovers after 25 minutes. This is simulated in the scenario TE_initial3_nodefailure. The throughput is shown in the reverse direction in order to be able to see the HTTP traffic only. Again, the background traffic is not affected by the LSR failure; we consider this to be a result of the background traffic modelling in OPNET.
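Our reading of this behaviour, sketched with invented names: when the mapped LSP is unavailable and no backup is defined, the ingress LER simply does not push a label, and the datagram is forwarded by the ordinary routing protocol.

    # Ingress forwarding without a backup LSP (illustrative sketch, invented names).
    def forward_without_protection(fec, lsp_for_fec, lsp_state):
        lsp = lsp_for_fec.get(fec)                   # e.g. "balancing path"
        if lsp is not None and lsp_state.get(lsp) == "up":
            return ("push_label_and_send", lsp)      # normal MPLS forwarding
        # LSP down (node 6 failed) and no backup defined: no MPLS header is
        # pushed; the datagram follows the route computed by OSPF.
        return ("ip_forward_shortest_path", None)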



Figure 14. When an LSR fails, traditional routing protocols govern the explicit traffic. Modelled background traffic is not affected by the LSR failure.

4 Protection

Figure 15. The network topology with a backup LSP.

Intro
In the scenario PROTECT_2 we introduce a backup path for the LSP through a new core IP router, node 4. In order to do this we define two unidirectional LSPs (backup balancing path and reverse backup balancing path) with the same source and destination as the previous ones. The node failure is modelled in the same way as in the previous scenario. Since only explicitly modelled traffic is affected by the LSR failure, we only consider the HTTP FEC when simulating the backup path. The backup path configuration is illustrated in Figure 16.



Figure 16. Configuration of the backup path for the HTTP FEC.
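The protection behaviour can be summarised by extending the earlier sketch with a backup entry per FEC (again with invented names): while the primary LSP is down, the HTTP FEC is mapped onto the backup LSP via node 4, and it returns to the primary LSP on recovery.

    # Backup-LSP failover for the HTTP FEC (illustrative sketch, invented names).
    lsps = {
        "balancing path":        {"route": ["node 0", "node 6", "node 1"], "state": "up"},
        "backup balancing path": {"route": ["node 0", "node 4", "node 1"], "state": "up"},
    }
    primary_for_fec = {"HTTP": "balancing path"}
    backup_for_fec  = {"HTTP": "backup balancing path"}

    def select_lsp(fec: str) -> str:
        # Prefer the primary LSP; fall back to the backup while the primary is down.
        primary = primary_for_fec[fec]
        if lsps[primary]["state"] == "up":
            return primary
        return backup_for_fec[fec]

    # Node 6 fails: the primary LSP goes down and HTTP traffic is rerouted.
    lsps["balancing path"]["state"] = "down"
    assert select_lsp("HTTP") == "backup balancing path"

    # Node 6 recovers: traffic returns to the original LSP.
    lsps["balancing path"]["state"] = "up"
    assert select_lsp("HTTP") == "balancing path"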

Results
The effect of the node failure can be observed in Figure 17. The traffic on the primary LSP is zero during the failure and is rerouted to the backup path. The rerouting of HTTP traffic is fast; the reduction in throughput at any time is insignificant. When the LSR in the primary LSP recovers, traffic is rerouted to the original LSP.

Figure 17. The HTTP traffic is rerouted to the backup LSP during failure of node 6.

Discussion
Using background traffic is a trade-off between time-efficient and accurate modelling. On the one hand, it is a highly practical and time-efficient way of loading the network and allows us to demonstrate many aspects of IP routing and MPLS. On the other hand, it does not accurately reflect the IP router's dynamic behaviour when it comes to the maximum packet-forwarding rate, nor events like node failure in an LSP. Explicit traffic is required for measuring e.g. delay and packet loss, and in a more accurate model only this type of traffic should be used. However, this could lead to excessive simulation times, considering that our goals are to demonstrate the main OPNET features and TE in the context of MPLS.



Conclusion
We have acquired basic knowledge of the OPNET simulation tool. A basic IP network has been configured, and inadequacies of the traditional IP routing protocols (here: OSPF) have been demonstrated. MPLS has been successfully implemented to achieve TE; both load balancing and protection have been demonstrated. Further applications of MPLS, such as VPNs, could have been demonstrated using this model and the applied technique. However, we prefer to focus on the main project (GMPLS modelling) in the remaining time frame of the course. Hence, the SMART goals defined in the Introduction section have, in our humble opinion, been fulfilled.

References
[1] MPLS: Technology and Applications. Davie & Rekhter. Morgan Kaufmann Publishers. ISBN 1558606564.
[2] QoS in IP Networks: Foundations for a Multi-Service Internet. Grenville Armitage. Macmillan Technical Publishing. ISBN 1578701899.
[3] Internet Performance Survival Guide: QoS Strategies for Multiservice Networks. Geoff Huston. ISBN 0471378089.
[4] Modelling MPLS Networks. Mandatory Lab Description. Henrik Christiansen.
[5] MPLS Model Description. OPNET documentation.
[6] Representing Network Traffic. OPNET documentation.
[7] Standard Network Applications. OPNET documentation.
[8] Configuring Applications and Profiles. OPNET documentation.
[9] Simulation Methodology for Deployment of MPLS. OPNET documentation.
[10] Simulation-based Analysis of MPLS Traffic Engineering. OPNET documentation.
[11] www.playboy.com (HTTP traffic modelled as heavy web browsing).

