
Multiple Routing Configurations for Fast IP Network Recovery
Amund Kvalbein, Member, IEEE, Audun Fosselie Hansen, Member, IEEE, Tarik Čičić, Member, IEEE, Stein Gjessing, Member, IEEE, Olav Lysne, Member, IEEE

Abstract— As the Internet takes an increasingly central role in our communications infrastructure, the slow convergence of routing protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP networks, we present a new recovery scheme called Multiple Routing Configurations (MRC). Our proposed scheme guarantees recovery in all single failure scenarios, using a single mechanism to handle both link and node failures, and without knowing the root cause of the failure. MRC is strictly connectionless, and assumes only destination based hop-by-hop forwarding. MRC is based on keeping additional routing information in the routers, and allows packet forwarding to continue on an alternative output link immediately after the detection of a failure. It can be implemented with only minor changes to existing solutions. In this paper we present MRC, and analyze its performance with respect to scalability, backup path lengths, and load distribution after a failure. We also show how an estimate of the traffic demands in the network can be used to improve the distribution of the recovered traffic, and thus reduce the chances of congestion when MRC is used.

I. INTRODUCTION

In recent years the Internet has been transformed from a special purpose network to a ubiquitous platform for a wide range of everyday communication services. The demands on Internet reliability and availability have increased accordingly. A disruption of a link in central parts of a network has the potential to affect hundreds of thousands of phone conversations or TCP connections, with obvious adverse effects.

The ability to recover from failures has always been a central design goal in the Internet [1]. IP networks are intrinsically robust, since IGP routing protocols like OSPF are designed to update the forwarding information based on the changed topology after a failure. This re-convergence assumes full distribution of the new link state to all routers in the network domain. When the new state information is distributed, each router individually calculates new valid routing tables. This network-wide IP re-convergence is a time consuming process, and a link or node failure is typically followed by a period of routing instability. During this period, packets may be dropped due to invalid routes. This phenomenon has been studied in both the IGP [2] and the BGP context [3], and it has an adverse effect on real-time applications [4]. Events leading to a re-convergence have been shown to occur frequently [5].

Much effort has been devoted to optimizing the different steps of the convergence of IP routing, i.e., detection, dissemination of information, and shortest path calculation, but the convergence time is still too large for applications with real-time demands [6].
Manuscript received December 21, 2006; revised July 21, 2007. All authors are with Simula Research Laboratory, Oslo, Norway.

A key problem is that since most network failures are short lived [7], too rapid triggering of the re-convergence process can cause route flapping and increased network instability [2]. The IGP convergence process is slow because it is reactive and global: it reacts to a failure after it has happened, and it involves all the routers in the domain.

In this paper we present a new scheme for handling link and node failures in IP networks. Multiple Routing Configurations (MRC) is a proactive and local protection mechanism that allows recovery in the range of milliseconds. MRC allows packet forwarding to continue over pre-configured alternative next-hops immediately after the detection of the failure. Using MRC as a first line of defense against network failures, the normal IP convergence process can be put on hold. This process is then initiated only as a consequence of non-transient failures. Since no global re-routing is performed, fast failure detection mechanisms like fast hellos or hardware alerts can be used to trigger MRC without compromising network stability [8]. MRC guarantees recovery from any single link or node failure, which constitutes a large majority of the failures experienced in a network [7]. MRC makes no assumptions with respect to the root cause of failure, e.g., whether the packet forwarding is disrupted due to a failed link or a failed router.

The main idea of MRC is to use the network graph and the associated link weights to produce a small set of backup network configurations. The link weights in these backup configurations are manipulated so that for each link and node failure, and regardless of whether it is a link or node failure, the node that detects the failure can safely forward the incoming packets towards the destination on an alternate link. MRC assumes that the network uses shortest path routing and destination based hop-by-hop forwarding.

The shifting of traffic to links bypassing the failure can lead to congestion and packet loss in parts of the network [9]. This limits the time that the proactive recovery scheme can be used to forward traffic before the global routing protocol is informed about the failure, and hence reduces the chance that a transient failure can be handled without a full global routing re-convergence. Ideally, a proactive recovery scheme should not only guarantee connectivity after a failure, but also do so in a manner that does not cause an unacceptable load distribution. This requirement has been noted as being one of the principal challenges for precalculated IP recovery schemes [10]. With MRC, the link weights are set individually in each backup configuration. This gives great flexibility with respect to how

the recovered traffic is routed. The backup configuration used after a failure is selected based on the failure instance, and thus we can choose link weights in the backup configurations that are well suited for only a subset of failure instances.

The rest of this paper is organized as follows. In Sec. II we describe the basic concepts and functionality of MRC. We then define MRC formally and present an algorithm used to create the needed backup configurations in Sec. III. In Sec. IV, we explain how the generated configurations can be used to forward the traffic safely to its destination in case of a failure. We present performance evaluations of the proposed method in Sec. V. In Sec. VI, we discuss how we can improve the recovery traffic distribution if we have an estimate of the demands in the network. In Sec. VII, we discuss related work, and finally we conclude in Sec. VIII.

II. MRC OVERVIEW

MRC is based on building a small set of backup routing configurations, which are used to route recovered traffic on alternate paths after a failure. The backup configurations differ from the normal routing configuration in that link weights are set so as to avoid routing traffic in certain parts of the network. We observe that if all links attached to a node are given sufficiently high link weights, traffic will never be routed through that node. The failure of that node will then only affect traffic that is sourced at or destined for the node itself. Similarly, to exclude a link (or a group of links) from taking part in the routing, we give it infinite weight. The link can then fail without any consequences for the traffic.

Our MRC approach is threefold. First, we create a set of backup configurations, so that every network component is excluded from packet forwarding in one configuration. Second, for each configuration, a standard routing algorithm like OSPF is used to calculate configuration specific shortest paths and create forwarding tables in each router, based on the configurations. The use of a standard routing algorithm guarantees loop-free forwarding within one configuration. Finally, we design a forwarding process that takes advantage of the backup configurations to provide fast recovery from a component failure.

In our approach, we construct the backup configurations so that for all links and nodes in the network, there is a configuration where that link or node is not used to forward traffic. Thus, for any single link or node failure, there will exist a configuration that will route the traffic to its destination on a path that avoids the failed element. Also, the backup configurations must be constructed so that all nodes are reachable in all configurations, i.e., there is a valid path with a finite cost between each node pair. Shared Risk Groups can also be protected, by regarding such a group as a single component that must be avoided in a particular configuration. In Sec. III, we formally describe MRC and how to generate configurations that protect every link and node in a network.

Using a standard shortest path calculation, each router creates a set of configuration-specific forwarding tables. For simplicity, we say that a packet is forwarded according to a configuration, meaning that it is forwarded using the forwarding table calculated based on that configuration. In this paper we talk about building a separate forwarding table for each configuration, but we believe that more efficient solutions can be found in a practical implementation.

When a router detects that a neighbor can no longer be reached through one of its interfaces, it does not immediately inform the rest of the network about the connectivity failure. Instead, packets that would normally be forwarded over the failed interface are marked as belonging to a backup configuration, and forwarded on an alternative interface towards its destination. The packets must be marked with a configuration identifier, so the routers along the path know which configuration to use. Packet marking is most easily done by using specific values in the DSCP field in the IP header. If this is not possible, other packet marking strategies like IPv6 extension headers or using a private address space and tunneling (as proposed in [11]) could be used. The selection of the correct backup configuration, and thus also the backup next-hop, is detailed in Sec. IV.

It is important to stress that MRC does not affect the failure-free original routing, i.e., when there is no failure, all packets are forwarded according to the original configuration, where all link weights are normal. Upon detection of a failure, only traffic reaching the failure will switch configuration. All other traffic is forwarded according to the original configuration as normal.

If a failure lasts for more than a specified time interval, a normal re-convergence will be triggered. MRC does not interfere with this convergence process, or make it longer than normal. However, MRC gives continuous packet forwarding during the convergence, and hence makes it easier to use mechanisms that prevent micro-loops during convergence, at the cost of longer convergence times [12]. If a failure is deemed permanent, new configurations must be generated based on the altered topology.
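To make the overview concrete, the following sketch shows one way the configuration concept could be realized: each backup configuration is a reweighted copy of the topology, and a standard shortest-path computation yields one forwarding table per configuration. This is our own illustration, not the paper's implementation; the function names, the use of the networkx library, and the assumption that the graph is a directed graph containing both directions of every link are ours.

    # Illustration only (not the paper's implementation): backup
    # configurations as reweighted copies of the topology.
    import networkx as nx

    INF = float("inf")

    def make_backup_config(g0, isolated_nodes, isolated_links, w_r):
        """Copy the normal configuration and raise weights so the given
        nodes and links carry no transit traffic (Sec. II idea)."""
        gi = g0.copy()
        for u in isolated_nodes:
            for _, v in list(gi.out_edges(u)):
                gi[u][v]["weight"] = w_r   # restricted: only to/from u
                gi[v][u]["weight"] = w_r   # same weight both directions
        for u, v in isolated_links:
            gi[u][v]["weight"] = INF       # isolated: carries no traffic
            gi[v][u]["weight"] = INF
        return gi

    def forwarding_table(gi, dst):
        """Configuration-specific next hops towards dst, as an ordinary
        shortest-path (OSPF-like) calculation would produce them."""
        table = {}
        for src in gi.nodes:
            if src != dst:
                path = nx.shortest_path(gi, src, dst, weight="weight")
                table[src] = path[1]
        return table

A router would hold one such table per configuration and, in the failure-free case, use only the table computed from the normal configuration.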

III. GENERATING BACKUP CONFIGURATIONS

In this section, we will first detail the requirements that must be put on the backup configurations used in MRC. Then, we propose an algorithm that can be used to automatically create such configurations. The algorithm will typically be run once at the initial start-up of the network, and each time a node or link is permanently added or removed. We use the notation shown in Tab. I.

TABLE I
NOTATION

  G = (N, A)   Graph comprising nodes N and directed links (arcs) A
  Ci           The graph with link weights as in configuration i
  Si           The set of isolated nodes in configuration Ci
  Bi           The backbone in configuration Ci
  A(u)         The set of links from node u
  (u, v)       The directed link from node u to node v
  pi(u, v)     A given shortest path between nodes u and v in Ci
  N(p)         The nodes on path p
  A(p)         The links on path p
  wi(u, v)     The weight of link (u, v) in configuration Ci
  wi(p)        The total weight of the links in path p in configuration Ci
  wr           The weight of a restricted link
  n            The number of configurations to generate (algorithm input)

A. Configurations Structure

MRC configurations are defined by the network topology, which is the same in all configurations, and the associated link weights, which differ among configurations. We formally represent the network topology as a graph G = (N, A), with a set of nodes N and a set of unidirectional links (arcs) A.¹ In order to guarantee single-fault tolerance, the topology graph G must be bi-connected. A configuration is defined by this topology graph and the associated link weight function:

Definition. A configuration Ci is an ordered pair (G, wi) of the graph G and a function wi : A → {1, ..., wmax, wr, ∞} that assigns an integer weight wi(a) to each link a ∈ A.

¹We interchangeably use the notations a or (u, v) to denote a link.

We distinguish between the normal configuration C0 and the backup configurations Ci, i > 0. In the normal configuration C0, all links have "normal" weights w0(a) ∈ {1, ..., wmax}. We assume that C0 is given with finite integer weights; MRC is agnostic to the setting of these weights. In the backup configurations, selected links and nodes must not carry any transit traffic; transit traffic is constrained to the configuration backbone. Still, traffic must be able to depart from and reach all operative nodes. These traffic regulations are imposed by assigning high weights to some links in the backup configurations:

Definition. A link a ∈ A is isolated in Ci if wi(a) = ∞.

Definition. A link a ∈ A is restricted in Ci if wi(a) = wr.

Isolated links do not carry any traffic. Restricted links are used to isolate nodes from traffic forwarding. The restricted link weight wr must be set to a sufficiently high, finite value to achieve that. Nodes are isolated by assigning at least the restricted link weight to all their attached links. For a node to be reachable, we cannot isolate all links attached to the node in the same configuration:

Definition. A node u ∈ N is isolated in Ci if

  ∀(u, v) ∈ A, wi(u, v) ≥ wr ∧ ∃(u, v) ∈ A, wi(u, v) = wr    (1)

The set of isolated nodes in Ci is denoted Si, and the set of normal (non-isolated) nodes is S̄i = N \ Si. With MRC, restricted and isolated links are always attached to isolated nodes, as given by the following rules. For all links (u, v) ∈ A:

  wi(u, v) = wr ⇒ (u ∈ Si ∧ v ∈ S̄i) ∨ (v ∈ Si ∧ u ∈ S̄i)    (2)
  wi(u, v) = ∞ ⇒ u ∈ Si ∨ v ∈ Si    (3)

This means that a restricted link always connects an isolated node to a non-isolated node. An isolated link either connects an isolated node to a non-isolated node, or it connects two isolated nodes. Since all backup configurations must adhere to (2) and (3), a link is always isolated in the same configuration as at least one of its attached nodes. These two rules are required by the MRC forwarding process described in Sec. IV, in order to give correct forwarding without knowing the root cause of failure. When we talk of a backup configuration, we refer to a configuration that adheres to (2) and (3).

The purpose of the restricted links is to isolate a node from routing in a specific backup configuration Ci, such as node 5 to the left in Fig. 1. Only traffic to and from the isolated node will use these restricted links. More than one node may be isolated in a configuration; in many topologies, more than a single node can be isolated simultaneously. In the example to the right in Fig. 1, three nodes and three links are isolated.

Fig. 1. Left: node 5 is isolated (shaded) by setting a high weight on all its attached links (dashed). Right: a configuration where nodes 1, 4 and 5, and the links 1-2, 3-5 and 4-5, are isolated (dotted).

Restricted and isolated links are always given the same weight in both directions. However, MRC treats links as unidirectional, and makes no assumptions with respect to symmetric link weights for the links that are not restricted or isolated. Hence, MRC can co-exist with traffic engineering schemes that rely on asymmetric link weights for load balancing purposes.

MRC guarantees single-fault tolerance by isolating each link and node in exactly one backup configuration. In each configuration, all node pairs must be connected by a finite cost path that does not pass through an isolated node or an isolated link. A configuration that satisfies this requirement is called valid:

Definition. A configuration Ci is valid if and only if

  ∀u, v ∈ N : N(pi(u, v)) \ (S̄i ∪ {u, v}) = ∅ ∧ wi(pi(u, v)) < ∞    (4)

We observe that all backup configurations retain a characteristic internal structure, in that all isolated nodes are directly connected to a core of nodes connected by links with normal weights:

Definition. A configuration backbone Bi = (S̄i, Ai), Ai ⊆ A, consists of all non-isolated nodes in Ci and all links that are neither isolated nor restricted:

  a ∈ Ai ⇔ wi(a) ≤ wmax    (5)

A backbone is connected if all nodes in S̄i are connected by paths containing links with normal weights only:

Definition. A backbone Bi is connected if and only if

  ∀u, v ∈ Bi : a ∈ A(pi(u, v)) ⇒ wi(a) ≤ wmax    (6)

An important invariant in our algorithm for creating backup configurations is that the backbone remains connected.
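As an illustration of how rules (1)-(6) fit together, the following sketch (ours, not part of the paper; all helper names are invented) checks the structural properties of a single backup configuration. By Lemma 3.1 below, checking that the backbone is connected is what actually needs to be verified in practice.

    # Sketch (ours) of the structural rules of Sec. III-A. `arcs` is the
    # set A of directed links, `w` maps each arc to its weight, and w_r,
    # w_max follow the text; INF marks isolation.
    INF = float("inf")

    def isolated_nodes(nodes, arcs, w, w_r):
        """Definition (1): every attached link weighs >= w_r, and at
        least one attached link weighs exactly w_r."""
        S = set()
        for u in nodes:
            out = [w[a] for a in arcs if a[0] == u]
            if all(c >= w_r for c in out) and any(c == w_r for c in out):
                S.add(u)
        return S

    def adheres_to_rules(arcs, w, S, w_r):
        """Rules (2) and (3)."""
        for u, v in arcs:
            if w[(u, v)] == w_r and not ((u in S) ^ (v in S)):
                return False   # restricted: isolated <-> non-isolated
            if w[(u, v)] == INF and u not in S and v not in S:
                return False   # isolated: touches an isolated node
        return True

    def backbone_is_connected(nodes, arcs, w, S, w_max):
        """Definitions (5)-(6): keep non-isolated nodes and normal-weight
        links, then check connectivity by simple graph search."""
        core = set(nodes) - S
        adj = {u: [] for u in core}
        for u, v in arcs:
            if u in core and v in core and w[(u, v)] <= w_max:
                adj[u].append(v)
        start = next(iter(core))          # assumes a non-empty backbone
        seen, stack = {start}, [start]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen == core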

Given the definitions above, we can show that a backup configuration with a connected backbone is equivalent to a valid backup configuration:

Lemma 3.1: A backup configuration Ci is valid if and only if it contains a connected backbone.

Proof: We first show that a connected backbone implies that Ci is valid. Assume u ∈ Si ∧ v ∈ Si. From the definition of an isolated node and (2), ∃u′, v′ ∈ S̄i : wi(u, u′) = wr ∧ wi(v, v′) = wr. From (6), a ∈ A(pi(u′, v′)) ⇒ wi(a) ≤ wmax. Thus,

  wi(pi(u, v)) ≤ 2wr + wi(pi(u′, v′)) < ∞    (7)
  N(pi(u, v)) \ (S̄i ∪ {u, v}) = ∅    (8)

and (4) follows. A subset of the above is sufficient to show the same if only one, or none, of u, v is in Si. For the converse implication, assume u, v ∈ S̄i and node x ∈ N(pi(u, v)). From (4), x ∈ S̄i and wi(pi(u, v)) < ∞. From the definition of an isolated node and (2), all links attached to an isolated node have weight at least wr, and since by (2) restricted links are always connected to at least one isolated node, such links can not be part of A(pi(u, v)). Hence all links in A(pi(u, v)) must have normal weights.

A restricted link weight wr that is sufficiently high to exclude an isolated node from transit traffic can be determined from the number of links in the network and the maximal normal link weight:

Proposition 3.2: Let x be a node isolated in the valid backup configuration Ci. Then, the restricted link weight value

  wr = |A| · wmax    (9)

is sufficiently high to exclude x from any shortest path in Ci which does not start or end in x.

Proof: Since all links attached to the isolated node x have a weight of at least wr, the weight of a path through x will be at least 2·wr = 2·|A|·wmax. Any shortest path between other nodes in Ci will be entirely contained in Bi, except possibly the first or the last hop, and the total weight of the sub-path that is within Bi will be at most |Ai| · wmax. Since |Ai| < 2|A|, such a path is always cheaper than a detour through x, and no shortest path will include x as transit.

To guarantee recovery after any component failure, every node and every link must be isolated in one backup configuration. Let C = {C1, ..., Cn} be a set of backup configurations. We say that:

Definition. A set C of backup configurations is complete if

  ∀a ∈ A, ∃Ci ∈ C : wi(a) = ∞ ∧ ∀u ∈ N, ∃Ci ∈ C : u ∈ Si    (10)

A complete set of valid backup configurations for a given topology can be constructed in different ways. In the next subsection we present an efficient algorithm for this purpose.

B. Algorithm

The number and internal structure of backup configurations in a complete set for a given topology may vary depending on the construction model. If more configurations are created, fewer links and nodes need to be isolated per configuration, giving a richer (more connected) backbone in each configuration. On the other hand, if fewer configurations are constructed, the state requirement for the backup routing information storage is reduced. However, calculating the minimum number of configurations for a given topology graph is computationally demanding. One solution would be to find all valid configurations for the input consisting of the topology graph G and its associated normal link weights w0, and then find the complete set of configurations with lowest cardinality. Finding this set would involve solving the Set Cover problem, which is known to be NP-complete [13].

Instead we present a heuristic algorithm that attempts to make all nodes and links in an arbitrary bi-connected topology isolated, and hence requires a bi-connected graph as input. Topologies where the failure of a single node disconnects the network can be processed by simply ignoring such nodes, which are then left unprotected. Our algorithm takes as input the directed graph G and the number n of backup configurations that is intended created. The algorithm is agnostic to the original link weights w0, and assigns new link weights only to restricted and isolated links in the backup configurations. If the algorithm terminates successfully, its output is a complete set of valid backup configurations. For a sufficiently high n, the algorithm will always terminate successfully, as will be further discussed in Sec. III-B. The algorithm can be implemented either in a network management system, or in the routers. As long as all routers have the same view of the network topology, they will compute the same set of backup configurations.

1) Description: Algorithm 1 loops through all nodes in the topology, and tries to isolate them one at a time. A link is isolated in the same iteration as one of its attached nodes. The algorithm terminates when either all nodes and links in the network are isolated in exactly one configuration, or a node that cannot be isolated is encountered. We now specify the algorithm in detail, using the notation shown in Tab. I.

a) Main loop: Initially, n backup configurations are created as copies of the normal configuration. A queue of nodes (Qn) and a queue of links (Qa) are initiated. The node queue contains all nodes in an arbitrary sequence; the link queue is initially empty. Method first returns the first item in the queue, removing it from the queue.

When a node u is attempted isolated in a backup configuration Ci, it is first tested that doing so will not disconnect Bi according to definition (6). The connected method at line 13 decides this by testing that each of u's neighbors can reach each other without passing through u. If the connectivity test is positive, function isolate is called, which attempts to find a valid assignment of isolated and restricted links for node u as detailed below. If successful, isolate returns the modified configuration and the changes are committed (line 16); otherwise no changes are made in Ci. If u was successfully isolated, we move on to the next node. If not, we keep trying to isolate u in every configuration, until all n configurations are tried (line 20). If u could not be isolated in any configuration, a complete set of valid configurations with cardinality n could not be built using our algorithm, and the algorithm terminates with an unsuccessful result (line 22).

Algorithm 1: Creating backup configurations.

   1  for i ∈ {1 ... n} do
   2      Ci ← (G, w0)
   3      Si ← ∅
   4      Bi ← Ci
   5  end
   6  Qn ← N
   7  Qa ← ∅
   8  i ← 1
   9  while Qn ≠ ∅ do
  10      u ← first(Qn)
  11      j ← i
  12      repeat
  13          if connected(Bi \ ({u}, A(u))) then
  14              Ctmp ← isolate(Ci, u)
  15              if Ctmp ≠ null then
  16                  Ci ← Ctmp
  17                  Si ← Si ∪ {u}
  18                  Bi ← Bi \ ({u}, A(u))
  19          i ← (i mod n) + 1
  20      until u ∈ Si or i = j
  21      if u ∉ Si then
  22          Give up and abort
  23  end

b) Isolating links: Along with u, as many as possible of its attached links are isolated. The algorithm runs through the links A(u) attached to u (lines 2-3 in function isolate). It can be shown that it is an invariant in our algorithm that, in line 1, all links in Qa are attached to node u.

Function isolate(Ci, u)

   1  Qa ← Qa + {(u, v) ∈ A(u)}
   2  while Qa ≠ ∅ ∧ first(Qa) ∈ A(u) do
   3      (u, v) ← first(Qa)
   4      if ∃j : v ∈ Sj then
   5          if wj(u, v) = wr then
   6              if ∃(u, x) ∈ A(u) \ (u, v) : wi(u, x) ≠ ∞ then
   7                  wi(u, v) ← wi(v, u) ← ∞
   8              else
   9                  return null
  10          else if wj(u, v) = ∞ and i ≠ j then
  11              wi(u, v) ← wi(v, u) ← wr
  12      else
  13          if ∃(u, x) ∈ A(u) \ (u, v) : wi(u, x) ≠ ∞ then
  14              wi(u, v) ← wi(v, u) ← ∞
  15          else
  16              wi(u, v) ← wi(v, u) ← wr
  17              Qn ← v + (Qn \ v)
  18              Qa ← (v, u) + Qa
  19  end
  20  return Ci

The node v in the other end of the link may or may not be isolated in some configuration already (line 4). If it is, we must decide whether the link should be isolated along with u (line 7), or whether it is already isolated in the configuration where v is isolated (line 11). A link must always be isolated in the same configuration as one of its end nodes. Hence, if the link was not isolated in the same configuration as v, it must be isolated along with node u. Before we can isolate the link along with u, we must test (line 6) that u will still have an attached non-isolated link; if this is not the case, u can not be isolated in the present configuration (line 9).

In the case that the neighbor node v was not isolated in any configuration (line 12), we isolate the link along with u if there exists another link not isolated with u (line 14). If the link can not be isolated together with node u, we leave it for node v to isolate it later. To make sure that this link can be isolated along with v, we must process v next (line 17, selected at line 10 in Alg. 1), and link (v, u) must be the first among the links originating from node v to be processed (line 18, selected at line 2).

2) Output: We show that successful execution of Algorithm 1 results in a complete set of valid backup configurations.

Proposition 3.3: If Alg. 1 terminates successfully, the produced backup configurations adhere to (2) and (3).

Proof: Links are only given weights wr or ∞ in the process of isolating one of their attached nodes, in accordance to the definition of isolated nodes. When isolating a node, it is checked that the node has at least one neighbor not isolated in Ci (line 14 in Function isolate). If this is not possible, the node is not isolated (isolate, line 9). Hence it is impossible to isolate two neighbor nodes without also isolating their connecting link. For restricted links, (2) requires that only one of the attached nodes is isolated. This invariant is maintained in line 7 in function isolate, by demanding that if a node attached to a restricted link is attempted isolated, the link must be isolated along with it, and (2) follows. A link is always isolated in the same configuration as one of its attached nodes, and (3) follows.

Proposition 3.4: If Alg. 1 terminates successfully, the backup configuration set C = {C1, C2, ..., Cn} is complete, and all configurations Ci ∈ C are valid.

Proof: Initially, all links in all configurations have original link weights. Each time a new node and its connected links are isolated in a configuration Ci, we verify that the backbone in that configuration remains connected. From Lemma 3.1, the altered configuration remains valid. The algorithm runs through all nodes; if one node cannot be isolated, the algorithm aborts (line 22 in Alg. 1). Hence, if it does terminate with success, all nodes and links are isolated in one configuration, and thus the configuration set is complete.
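For readers who prefer running code, the following is a compact Python rendition of the main loop of Alg. 1. It is our own simplification: the connected() test of line 13 and the per-link bookkeeping of function isolate are abstracted behind two callbacks, whose names are invented here.

    # Compact sketch (ours) of the main loop of Alg. 1.
    def create_backup_configurations(nodes, n, connected_without, isolate):
        S = [set() for _ in range(n)]   # isolated node sets S_1 ... S_n
        i = 0
        for u in nodes:                 # node queue Qn, arbitrary order
            j = i
            while True:
                # line 13: isolating u must not disconnect backbone B_i
                if connected_without(i, u) and isolate(i, u):
                    S[i].add(u)         # changes committed (line 16)
                    i = (i + 1) % n
                    break
                i = (i + 1) % n         # try the next configuration
                if i == j:              # all n configurations tried
                    raise RuntimeError(f"cannot isolate node {u!r}; "
                                       f"give up and abort (line 22)")
        return S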

3) Termination: The algorithm runs through all nodes, trying to make them isolated in one of the backup configurations, and will always terminate, with or without success. If a node cannot be isolated in any of the configurations, the algorithm terminates without success. However, the algorithm is designed so that any bi-connected topology will result in a successful termination, if the number of configurations allowed is sufficiently high.

Proposition 3.5: Given a bi-connected graph G = (N, A), there will exist n ≤ |N|, so that Alg. 1 will terminate successfully.

Proof: Assume n = |N|. The algorithm will create |N| backup configurations, isolating one node in each backup configuration. In bi-connected topologies this can always be done. Along with each node, all attached links except one can be isolated. This holds also for the last node processed, since its last link will always lead to a node that is already isolated in another configuration. Since all links and nodes can be isolated, the algorithm will terminate successfully.

Ring topologies represent the worst-case input for our algorithm, since all |N| nodes have two links each and would have to be isolated in different configurations in order to close the loop, as described in the proof of Prop. 3.5. In bi-connected networks with higher connectivity it is often possible to reuse the configurations, and terminate the algorithm with a lower n. In Sec. V we analyze the number of backup configurations created by Alg. 1 for different input network topologies.

4) Complexity: The complexity of the proposed algorithm is determined by the loops and by the complexity of the connected method. This method performs a procedure similar to determining whether a node is an articulation point in a graph, bound to worst case O(|N| + |A|). Additionally, for each node, we run through all adjacent links, whose number has an upper bound in the maximum node degree ∆. In the worst case, we must run through all n configurations to find a configuration where a node can be isolated. The worst case running time for the complete algorithm is then bound by O(n∆|N||A|). The running time on a standard desktop computer varies from sub-second for small topologies to a few seconds for topologies with several hundred nodes.

IV. LOCAL FORWARDING PROCESS

Given a sufficiently high n, the algorithm presented in Sec. III will create a complete set of valid backup configurations. Based on these, a standard shortest path algorithm is used in each configuration to calculate configuration specific forwarding tables. In this section, we describe how these forwarding tables are used to avoid a failed component.

When a packet reaches a point of failure, the node adjacent to the failure, called the detecting node, is responsible for finding a backup configuration where the failed component is isolated, and for forwarding the packet in that configuration. The detecting node marks the packet as belonging to the selected backup configuration, and forwards the packet. From the packet marking, all transit routers identify the packet with the selected backup configuration, and forward it to the egress node avoiding the failed component.

Consider a situation where a packet arrives at node u, and cannot be forwarded to its normal next-hop v because of a component failure. The detecting node must find the correct backup configuration without knowing the root cause of failure, i.e., whether the next-hop node v or the link (u, v) has failed, since this information is generally unavailable. Let C(u) denote the backup configuration where node u is isolated, i.e., C(u) = Ci ⇔ u ∈ Si. Similarly, let C(u, v) denote the backup configuration where the link (u, v) is isolated, i.e., C(u, v) = Ci ⇔ wi(u, v) = ∞. Assuming that node d is the egress (or the destination) in the local network domain, we can distinguish between two cases. If v ≠ d, forwarding can be done in configuration C(v), where neither node v nor link (u, v) is used. In the other case, v = d, the challenge is to provide recovery for the failure of link (u, v) when node v is operative, also in the case that node d indeed has failed. Our strategy is to forward the packet using a path to v that does not contain (u, v). Assume that there is only a single component failure in the network.

The steps that are taken in the forwarding process by the detecting node u are summarized in Fig. 2. Packets that have changed configuration before (their configuration ID is different than the one used in C0), and still meet a failed component on their forwarding path, must be dropped (step 1); this way packet loops are avoided. Otherwise, node u selects configuration C(v), and looks up the next hop towards the destination in this configuration (step 2). If the failed link is returned from this lookup (step 3), the next hop is instead looked up in configuration C(u), and the packet is forwarded in C(u) (step 4); otherwise it is forwarded in C(v).

Fig. 2. Packet forwarding state diagram.
Proposition 4.1: If v ≠ d, node u selects configuration Ci so that v ∉ N(pi(u, d)).

Proof: Node u selects C(v) in step 2. Node v is isolated in C(v), and will not be in the shortest path pi(u, d), according to Proposition 3.2.

Proposition 4.2: If v = d, node u selects configuration Ci so that (u, v) ∉ A(pi(u, v)).

Proof: Assume that v is the egress node for destination d, i.e., v = d. Remember that according to (3), a link is always isolated in the same configuration as at least one of its attached nodes, i.e., C(u, v) = C(u) ∨ C(u, v) = C(v). We distinguish between three possible cases, illustrated in Fig. 3.

Fig. 3. When there is an error in the last hop u → v, a packet must be forwarded in the configuration where the connecting link is isolated. The figure shows isolated nodes (shaded), restricted links (dashed), and isolated links (dotted): a) C(u) = C(v); b) C(u, v) = C(v); c) C(u, v) = C(u).

If C(u) = Ci and C(v) = Ci, as in Fig. 3a, then C(u, v) = Ci as well, according to the definition of an isolated node and (2). Forwarding step 2 will select C(v) = Ci, and A(pi(u, v)) does not contain (u, v). If C(u) = Ci, C(v) = Cj, i ≠ j, and C(u, v) = C(v), as in Fig. 3b, forwarding step 2 will select C(v) = Cj, and A(pj(u, v)) does not contain (u, v); the forwarding will be done in C(v). Finally, in case (c), C(u) = Ci, C(v) = Cj, i ≠ j, and C(u, v) = C(u), as in Fig. 3c. Link (u, v) is not isolated in Cj, and will be returned as the next hop when node u looks up the next hop in C(v). Step 3 will detect this, and step 4 will select C(u) = Ci, where A(pi(u, v)) does not contain (u, v); the forwarding will be done in C(u).
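The decision process of Fig. 2 and the propositions above can be condensed into a few lines. The sketch below is ours, not the paper's; the data structure names (conf_of_node for C(·), next_hop for the per-configuration tables, pkt.conf for the packet marking) are invented for the example.

    # Sketch (ours) of the detecting node's decision process in Fig. 2.
    def forward_on_failure(pkt, u, v, d, conf_of_node, next_hop):
        if pkt.conf != 0:              # step 1: already switched
            return None                # configuration -- drop the packet
        c = conf_of_node[v]            # step 2: look up in C(v)
        nh = next_hop[c][(u, d)]
        if nh == v:                    # step 3: failed link returned,
            c = conf_of_node[u]        # i.e. the case of Fig. 3c
            nh = next_hop[c][(u, d)]   # step 4: look up in C(u)
        pkt.conf = c                   # mark packet, forward to nh
        return nh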

A. Implementation issues

The forwarding process can be implemented in the routing equipment as presented above, requiring the detecting node u to know the backup configuration C(v) for each of its neighbors. Node u would then perform at most two additional next-hop look-ups in the case of a failure. However, all nodes in the network have full knowledge of the structure of all backup configurations. Hence, node u can determine in advance the correct backup configuration to use if the normal next hop for a destination d has failed. This way the forwarding decision at the point of failure can be simplified, at the cost of storing the identifier of the correct backup configuration to use for each destination and each failing neighbor.

For the routers to make a correct forwarding decision, each packet must carry information about which configuration it belongs to. This information can be either explicit or implicit. An explicit approach could be to use a distinct value in the DSCP field of the IP header to identify the configuration. As we will see shortly, a very limited number of backup configurations are needed to guarantee recovery from all single link or node failures, and hence the number of needed values would be small.

A more implicit approach would be to assign a distinct local IP address space for each backup configuration. Each node in the IGP cloud would get a separate address in each configuration. The detecting node could then encapsulate recovered packets and tunnel them shortest path in the selected backup configuration to the egress node. The packets would then be decapsulated at the egress, and forwarded from there as normal towards the final destination. The drawback with this method is the additional processing and bandwidth resource usage associated with tunneling. Since routing in a backup configuration is restricted, the tunneling approach may also give less optimal backup paths.

Recent IETF standardization work on Multi Topology routing mechanisms [14], [15] provides a useful framework for MRC implementation. These IETF drafts specify how routers can exchange information about the link weights used in several logical topologies, and build topology specific forwarding tables. Use of these drafts for providing proactive recovery is sketched in [16].
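The precomputation described above, moving the two-lookup decision off the fast path, could look as follows. This is our own sketch under the same invented names as before; the paper does not prescribe a concrete data structure.

    # Sketch (ours): since every node knows all backup configurations,
    # the correct configuration for each (failed neighbor, destination)
    # pair can be precomputed at node u ahead of any failure.
    def precompute_backup_configs(u, neighbors_of, dests,
                                  conf_of_node, next_hop):
        choice = {}
        for v in neighbors_of[u]:
            for d in dests:
                c = conf_of_node[v]           # normally recover in C(v)
                if next_hop[c][(u, d)] == v:  # last-hop case (Fig. 3c):
                    c = conf_of_node[u]       # fall back to C(u)
                choice[(v, d)] = c
        return choice

On failure of the next hop v, the detecting node then reads choice[(v, d)] and marks the packet accordingly, with a single table lookup.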
V. PERFORMANCE EVALUATION

MRC requires the routers to store additional routing configurations. The amount of state required in the routers is related to the number of such backup configurations. Since routing in a backup configuration is restricted, MRC will potentially give backup paths that are longer than the optimal paths. Longer backup paths will affect the total network load and also the end-to-end delay. Full, global IGP re-convergence determines shortest paths in the network without the failed component. We use its performance as a reference point, and evaluate how closely MRC can approach it. It must be noted that MRC yields the shown performance immediately after a failure, while IP re-convergence can take seconds to complete.

A. Evaluation setup

We have implemented the algorithm described in Sec. III-B, and created configurations for a wide range of bi-connected synthetic and real topologies². The synthetic topologies are obtained from the BRITE topology generation tool [17], using the Waxman [18] and the Generalized Linear Preference (GLP) [19] models. The number of nodes is varied between 16 and 512, to demonstrate the scalability. To explore the effect of network density, the average node degree is 4 or 6 for Waxman topologies, and 3.6 for GLP topologies. For all synthetic topologies, the links are given unit weight. The real topologies are taken from the Rocketfuel topology database [20].

²Our simulation software is available at http://simula.no/research/networks/software/

For the Rocketfuel topologies, we also measure the number of configurations needed if we exclude the nodes that can be covered by Loop-Free Alternates (LFA) [21]. LFA is a cheaper fast reroute technique that exploits the fact that, for many destinations, there exists an alternate next-hop that will not lead to a forwarding loop. If such alternate paths exist for all traffic that is routed through a node, we can rely on LFA instead of protecting the node using MRC.

For our load evaluations, we have performed simulations of the European COST239 network [22], shown in Fig. 4, which is designed to connect major cities across Europe. All links in the network have equal capacity. The COST239 network is selected for this evaluation because of its resilient network topology; each node has a node degree of at least four, providing the necessary maneuvering space. By using this network, we avoid a situation where there exists only one possible backup path to a node; the differences with respect to link loads between different recovery strategies will only be visible when there exists more than one possible backup path.

Fig. 4. The COST239 network (Copenhagen, London, Amsterdam, Berlin, Brussels, Luxembourg, Prague, Zurich, Paris, Milan, Vienna).

For the load evaluations, we use a gravity-style traffic matrix where the traffic between two destinations is based on the population of the countries they represent [22]. The traffic matrix has been scaled so that the load on the most utilized link in the network is about 2/3 of the capacity. We use shortest path routing, with equal splitting of traffic if there exist several equal cost paths towards a destination.

To achieve a good load distribution and minimize the chances of congestion in the failure-free case, we adopt the link weight optimization heuristic introduced in [23]. The authors define a piecewise linear cost function Φ that is dependent on the load l(a) on each of the links a in the network. Φ is convex, and resembles an exponentially growing function. They then introduce a local search heuristic that tries to minimize the value of Φ by randomly perturbing the link weights. This local search heuristic has been shown to give performance that is close to the optimal solution that can be achieved by a connection oriented technology like MPLS.
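For reference, the sketch below restates the per-link cost used by the heuristic of [23] (due to Fortz and Thorup): a convex, piecewise linear penalty on link utilization, whose marginal cost grows steeply as a link approaches and exceeds full load. The breakpoints and slopes are those given in [23]; the code itself is our own illustration.

    # Sketch (ours) of the piecewise linear link cost from [23].
    BREAKPOINTS = [1/3, 2/3, 9/10, 1.0, 11/10, float("inf")]
    SLOPES = [1, 3, 10, 70, 500, 5000]   # marginal cost per segment

    def phi_link(load, capacity):
        cost, prev = 0.0, 0.0
        for brk, slope in zip(BREAKPOINTS, SLOPES):
            seg_end = min(load, brk * capacity)
            if seg_end > prev:
                cost += slope * (seg_end - prev)
                prev = seg_end
            if load <= brk * capacity:
                break
        return cost

    def phi_network(loads, capacities):
        # The objective Phi sums the link costs over all links a.
        return sum(phi_link(loads[a], capacities[a]) for a in loads)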
For each topology, we measure the minimum number of backup configurations needed by our algorithm to isolate every node and link in the network. Recall from Sec. III-B that our algorithm for creating backup configurations only takes the network topology as input, and is not influenced by the link weights. Hence, the number of configurations needed is valid irrespective of the link weight settings used. For a selected class of topologies, we also evaluate how the backup path lengths depend on the number of backup configurations: based on the created configurations, we measure the backup path lengths (hop count) achieved by our scheme after a node failure. Finally, we test the effect our scheme has on the load distribution after a failure.

B. Number of Backup Configurations

Figure 5 shows the minimum number of backup configurations that Alg. 1 could produce in a wide range of synthetic topologies. Each bar in the figure represents 100 different topologies, given by the type of generation model used, the links-to-node ratio, and the number of nodes in the topology. As an example, the bar name wax-2-16 denotes that the Waxman model is used, with a links-to-node ratio of 2, and with 16 nodes.

Fig. 5. The number of backup configurations required for a wide range of BRITE generated topologies.

The results show that the number of backup configurations needed is usually modest; 3 or 4 is typically enough to isolate every element in a topology. No topology required more than six configurations. In other words, Alg. 1 performs very well even in large topologies. The algorithm fails only if it meets a node that, if isolated, disconnects the backbone in each of the n backup configurations. The algorithm often goes through all network nodes without meeting this situation even if n is low, and is more successful in topologies with a higher average node degree.

Tab. II shows the minimum number of configurations Alg. 1 could produce for selected real world topologies of varying size. For the Sprint US network, we show results for both the POP-level and the router-level topologies. The table also shows how many nodes are covered by LFAs, and the number of configurations needed when MRC is used in combination with LFAs. Since some nodes and links are completely covered by LFAs, MRC needs to isolate fewer components, and hence the number of configurations decreases for some topologies. We see that for the COST239 network, all nodes except one are covered by LFAs. However, we still need two backup configurations to cover this single node, because isolating all the attached links in a single configuration would leave the node unreachable.

TABLE II
NUMBER OF BACKUP CONFIGURATIONS FOR SELECTED REAL WORLD NETWORKS

  Network          Nodes  Links  Confs  LFA   Confs w/ LFA
  Sprint US (POP)     32     64      4   17              4
  Sprint US (R)      284   1882      5  186              5
  Geant               19     30      5   10              4
  COST239             11     26      3   10              2
  German Telecom      10     17      3   10              -
  DFN                 13     37      2   13              -

This modest number of backup configurations shows that our method is implementable, without requiring a prohibitively high amount of state information. The running time of our algorithm is also modest: about 5 seconds for the router level Sprint US network. In Sec. III-B we stated that the problem of finding a minimal complete set of valid configurations can be transformed to the Set Covering problem. It has long been known that heuristic algorithms can efficiently approximate an optimal solution to this problem [24], which makes the good performance of Alg. 1 less surprising.

C. Backup Path Lengths

Fig. 6 shows the path length distribution of the recovery paths after a node failure. The numbers are based on 100 different synthetic Waxman topologies with 32 nodes and 64 links. All the topologies have unit weight links, in order to focus more on the topological characteristics than on a specific link weight configuration. For each path with at least two hops, we let every intermediate node fail, and measure the resulting recovery path lengths using global IGP rerouting, using local rerouting based on the full topology except the failed component ("Optimal local"), and using MRC with 5 backup configurations. For reference, we also show the path length distribution in the failure-free case ("IGP normal"). We see that MRC gives backup path lengths close to those achieved after a full IGP re-convergence. This means that the affected traffic will not suffer from unacceptably long backup paths in the period when it is forwarded according to an MRC backup configuration. Results for link failures show the same tendency, and are not presented.

Fig. 6. Backup path lengths in the case of a node failure (32 nodes, 64 links; percentage of paths vs. path length in number of hops, for MRC 5, Optimal local, IGP reroute and IGP normal).

Algorithm 1 yields richer backup configurations as their number increases, and thus the backup path lengths depend on the number of backup configurations. In Fig. 7 we have plotted the average backup path lengths for the 75 of the 100 input topologies that could be covered using 3 backup configurations. The figure shows that the average recovery path length decreases as the number of backup configurations increases.

Fig. 7. Average backup path lengths in the case of a node failure, as a function of the number of backup configurations.
D. Load on Individual Links

In order to evaluate the routing performance while MRC is used to recover traffic, we measure the throughput on each unidirectional link for every possible link failure, and then find the maximum link utilization over all failures for each link. Five backup configurations were used. Fig. 8 shows the load on all unidirectional links in the failure-free case, after IGP re-convergence, and when MRC is used to recover traffic; for the two latter, each individual link's worst case failure scenario is shown. The links are indexed from the least loaded to the most loaded in the failure-free case.

Fig. 8. Load on all unidirectional links in the failure free case, after IGP reroute, and with MRC, for each link's worst case failure scenario (COST239; relative load vs. link ID).

In our experience, the effect a failure has on the load distribution when MRC is used is highly variable. We see that for some link failures, the load added on a link can be significant, and might lead to congestion. MRC gives a somewhat higher maximum link utilization in this network: the maximum link load after the worst case link failure is 118% with MRC, compared to 103% after a full IGP re-convergence. The results indicate that the restricted routing in the backup topologies results in a worst case load distribution that is comparable to what is achieved after a complete IGP rerouting process. Still, this shifting of traffic from the original path to a backup path changes the load distribution in the network, and can in some cases lead to congestion and packet loss. In the next section, we discuss a method for improving the post failure load balancing with MRC.

VI. RECOVERY LOAD DISTRIBUTION

MRC recovery is local, and the recovered traffic is routed in a backup configuration from the point of failure to the egress node. As we saw in Fig. 8, this shifting of traffic from the normal path to a recovery path changes the load distribution in the network, and can in some cases lead to congestion. In this section, we describe an approach for minimizing the impact of the MRC recovery process on the post failure load distribution. If MRC is used for fast recovery, the load distribution in the network during the failure depends on three factors:

(a) The link weight assignment used in the normal configuration C0.
(b) The structure of the backup configurations, i.e., which links and nodes are isolated in each Ci ∈ {C1, ..., Cn}.
(c) The link weight assignments used in the backbones B1, ..., Bn of the backup configurations.

The link weights in the normal configuration (a) are important, since MRC uses the backup configurations only for the traffic affected by the failure, and all non-affected traffic is distributed according to them. The backup configuration structure (b) dictates which links can be used in the recovery paths for each failure. The link weight assignments in the backup configurations (c) determine which among the available backup paths are actually used.

Network operators often plan and configure their network based on an estimate of the traffic demands from each ingress node to each egress node. The knowledge of such a demand matrix D provides the opportunity to construct the backup configurations in a way that gives better load balancing and avoids congestion after a failure. We propose a procedure to do this, by constructing a complete set of valid configurations in three phases. First, the link weights in the normal configuration are optimized for the given demand matrix D, while only taking the failure free situation into account. Second, we take advantage of the load distribution in the failure free case to construct the MRC backup configurations in an intelligent manner. Finally, we optimize the link weights in the backbones of the backup configurations, to get a good load distribution after any link failure. Since the backup configurations are only used to route recovered traffic, we can optimize the structure and link weights used in these configurations for the different failure scenarios without sacrificing performance in the failure free case.

The improvement of this proposal, compared to the MRC backup configuration generation described in Sec. III, is the congestion avoidance during MRC recovery. The functionality with respect to single link or node failure recovery guarantees is unchanged. Since the optimization procedure described in this section uses random search and relies on an estimate of the traffic matrix, it can only be implemented by a central network management unit. The link weight optimization is a computing intensive process, and should be conducted as a background refinement process.

A. Algorithm

We first optimize the link weights in the normal configuration C0, so that the cost of routing the demands D is minimized. For this purpose we use the link weight optimization heuristic and the cost function Φ introduced in [23]. Using the optimized link weights in C0, we calculate the load l(a) on each unidirectional link in the network in the failure-free case. In Alg. 3, this information is used to construct the backup configurations C1, ..., Cn.

The intuition behind our algorithm is that we want the amount of traffic that is potentially recovered in each backup configuration to be approximately equal. We want to avoid that the failure of heavily loaded links results in large amounts of traffic being recovered in backup configurations with a sparse backbone. Instead, this traffic should be routed in a rich (well connected) backbone, where we have a better chance of distributing it over less loaded links by setting appropriate link weights.
To implement this load-aware algorithm, we calculate the potential of each node in the network, and the potential of each backup configuration:

Definition. The potential γ(u) of a node u is the sum of the load on all its incoming and outgoing links:

  γ(u) = Σ_{v∈N} ( l(u, v) + l(v, u) )    (11)

Definition. The potential γi of a backup configuration Ci is the sum of the potential of all nodes that are isolated in Ci:

  γi = Σ_{u∈Si} γ(u)    (12)

Our modified backup configuration construction method is defined in Alg. 3. As in Alg. 1, the input to our algorithm for generating backup configurations is the normal configuration C0, and the number n of backup configurations we want to create. We construct the backup configurations so that the potential γi of each backup configuration is approximately equal:

  γi ≈ γj, ∀i, j ∈ {1, ..., n}    (13)

We start our configuration generation algorithm by ordering all nodes with respect to their potential, and assigning each node to a tentative backup configuration CT(u) (line 6 in Alg. 3), so that (13) holds. The nodes with the smallest potential are assigned to C1, those with somewhat higher potential to C2, and so on, with the nodes with the highest potential in Cn. We then go through all nodes in the network, and attempt to isolate each node u in its tentative backup configuration (line 10). For some nodes, this might not be possible without breaking the definition of a connected backbone given in (6). Such a node is then attempted isolated in backup configuration Ci+1, Ci+2 and so on (line 20), until all backup configurations are tried. If a node can not be isolated in any of the backup configurations, we give up and abort. Note that when a node can not be isolated in the backup configuration it was assigned to, this will disturb the desired property of equalizing γi among the backup configurations. However, in our experience this typically only happens for a very limited number of nodes, and the consequences are not severe.

The outcome of this algorithm is dependent on the network topology and the traffic demand matrix D. If the load is close to equally distributed on the links before a failure, we end up with approximately the same number of nodes isolated in each backup configuration. If the traffic distribution is more skewed, the algorithm typically ends up isolating many nodes with a small potential in C1, while only very few nodes, with a high potential, are isolated in backup configuration Cn. This is in accordance with the goal of having a rich backbone in which to reroute traffic after the failure of heavily loaded links.
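Before the full pseudocode of Alg. 3, the sketch below illustrates the quantities (11)-(13). The paper does not spell out the assign() procedure, so the tentative assignment shown here, walking the nodes in ascending order of potential and letting each configuration collect roughly 1/n of the total potential, is one plausible realization of ours.

    # Sketch (ours) of node potentials and the tentative assignment.
    def potential(u, load):
        # (11): load on all incoming and outgoing links of u
        return sum(l for (x, y), l in load.items() if u in (x, y))

    def tentative_assignment(nodes, load, n):
        ranked = sorted(nodes, key=lambda u: potential(u, load))
        total = sum(potential(u, load) for u in nodes)
        C_T, i, acc = {}, 1, 0.0
        for u in ranked:               # smallest potentials land in C_1
            C_T[u] = i
            acc += potential(u, load)
            if acc >= total * i / n and i < n:
                i += 1                 # following nodes go to C_{i+1}
        return C_T                     # gamma_i approximately equal (13)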

Algorithm 3: Load-aware backup configurations.

   1  for i ∈ {1 ... n} do
   2      Ci ← (G, w0)
   3      Si ← ∅
   4  end
   5  Qn ← N
   6  assign CT(Qn, γ, ascending)
   7  Qa ← ∅
   8  while Qn ≠ ∅ do
   9      u ← first(Qn)
  10      i ← CT(u)
  11      j ← i
  12      repeat
  13          if connected(Bi \ ({u}, A(u))) then
  14              Ctmp ← isolate(Ci, u)
  15              if Ctmp ≠ null then
  16                  Ci ← Ctmp
  17                  Si ← Si ∪ {u}
  18                  Bi ← Bi \ ({u}, A(u))
  19          else
  20              i ← (i mod n) + 1
  21      until u ∈ Si or i = j
  22      if u ∉ Si then
  23          Give up and abort
  24  end

B. Weight Optimization

When the backup configurations C1, ..., Cn are created, our third phase consists of optimizing the link weights used in these configurations, so that the recovered traffic is distributed over less utilized links. Our goal is to find a weight function wi for each configuration, so that the cost Φ of routing the demands through the network is as small as possible after any link failure. We restrict ourselves to link failures in the optimization phase, in order to limit the calculation complexity. We use a weight search heuristic similar to the one used in the failure free case, and we again adopt the cost function Φ introduced in [23]. However, evaluating Φ for a given weight setting is a complex task, because it involves recalculating the load on all links in the network. In order to get a method that scales to large networks, we identify a limited set of critical link failures, and then optimize the link weights taking only these critical link failures into account.

Let Φa denote the cost of routing the demands through the network when link a has failed. We define the critical link set LC as the k links that give the highest value of Φa upon failure, i.e., LC is the set of links with cardinality k so that ∀a ∈ LC, b ∉ LC : Φa ≥ Φb. The setting of k can be used to balance the result precision with the computation complexity.

With MRC, there is a dependency between a particular link failure and the backup configurations that are used to recover the traffic. For the failure of a particular link (u, v), the distribution of the recovered traffic is determined only by the link weights in the backup configurations where nodes u and v are isolated. We take advantage of this to reduce the number of cost evaluations performed in our local search heuristic. For each backup configuration Ci, we define Li ⊆ LC as the set of critical links whose failure results in recovered traffic being routed according to Ci:

Definition. The set of critical links Li of a configuration Ci is

  Li = {a ∈ LC | a ∉ Bi}    (14)
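The bookkeeping around LC and Li is straightforward to express in code. The sketch below is ours; phi_after_failure stands for an (assumed) evaluation of Φa, which in practice is the expensive step.

    # Sketch (ours) of the critical-link sets of Sec. VI-B.
    def critical_link_set(arcs, phi_after_failure, k):
        """L_C: the k links with the highest post-failure cost Phi_a."""
        ranked = sorted(arcs, key=phi_after_failure, reverse=True)
        return set(ranked[:k])

    def critical_links_of_config(L_C, backbone_arcs):
        """(14): failures recovered in C_i are those not in its backbone."""
        return {a for a in L_C if a not in backbone_arcs}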
In order to get a method that scales to large networks, we identify a limited set of critical link failures, and optimize the link weights taking only these failures into account. We define the critical link set LC as the k links that give the highest value of Φa upon failure, i.e., LC is the set of links with cardinality k so that

    ∀a ∈ LC, b ∉ LC : Φa ≥ Φb    (13)

The setting of k can be used to balance the result precision with the computation complexity. With MRC, there is a dependency between a particular link failure and the backup configurations that are used to recover the traffic. For the failure of a particular link (u, v), the distribution of the recovered traffic is determined only by the link weights in the backup configurations where nodes u and v are isolated; for all other link failures, this cost will be the same. We take advantage of this to further reduce the number of cost evaluations performed in our local search heuristic. For each backup configuration Ci, we define Li ⊆ LC as the set of critical links whose failure results in recovered traffic being routed according to Ci:

Definition. The set of critical links Li of a configuration Ci is

    Li = {a ∈ LC | a ∉ Bi}    (14)

Each time we change a link weight wi(a) in backup configuration i, we only need to evaluate the resulting cost Φa after the failure of each link a in Li. Given the set of critical link failures for each backup configuration, we run the local search heuristic with the objective of minimizing the sum Σ_{a∈LC} Φa. Note that the initial calculation of LC is performed after we have optimized w0, but before we have optimized w1, . . . , wn. To capture possible changes caused by the altered link weights in the backup configurations, the set of critical link failures is periodically updated during the link weight search. The local search heuristic terminates and returns the weight setting that gives the lowest value of the objective function after a given number of iterations.
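The following short sketch illustrates (13) and (14) and the reduced objective used by the local search. Here, cost_after_failure(a) is an assumed helper that returns Φa, i.e., the cost of routing the demands when link a has failed and its traffic is recovered in the backup configurations.

# Sketch of the critical link set (13), the per-configuration sets (14), and
# the reduced objective of the local search. All names are illustrative.

def critical_link_set(links, cost_after_failure, k=20):
    """L_C: the k links whose failure gives the highest cost Phi_a (13)."""
    return set(sorted(links, key=cost_after_failure, reverse=True)[:k])

def critical_links_of(lc, backbone_links):
    """L_i = {a in L_C | a not in B_i}: critical links recovered in C_i (14)."""
    return {a for a in lc if a not in backbone_links}

def objective(lc, cost_after_failure):
    """The local search minimizes the sum of Phi_a over the critical links."""
    return sum(cost_after_failure(a) for a in lc)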

C. Evaluation

To evaluate our load-aware construction algorithm, we compute the worst case load on each link after a link failure, and compare it to the results achieved by the original algorithm. We reuse the evaluation framework from Sec. V, and set the critical link set size k to 20. In the top panel of Fig. 9, we show the worst case link loads for the load-aware MRC ("Optimized MRC") and after a full IGP re-convergence on the new topology. The links are sorted by their load in the failure-free case, and the top panel of Fig. 9 is hence directly comparable to Fig. 8. We see how the optimized MRC often manages to route traffic over less utilized links after the failure of a heavily loaded link.

The effect of the proposed recovery load balancing is highlighted in the bottom panel of Fig. 9, where the optimized and the standard MRC are directly compared. Here, the links are sorted by their load after a worst case failure using standard MRC. The worst case load peaks for the optimized MRC are somewhat reduced compared to the standard MRC. The maximum link load after the worst case link failure has been reduced from 118% to 91% of the link capacity, which is better than what is achieved after a full IGP re-convergence. This is possible since the re-converged network will choose the shortest available paths, while the optimized MRC in this case manages to route the recovered traffic over less utilized links.

[Fig. 9. Load on all unidirectional links in the COST239 network after the worst case link failure, plotted as relative load versus link ID. a) Optimized MRC vs complete IGP rerouting (curves: Failure free, IGP reroute, MRC). b) Standard vs optimized MRC (curves: Standard, Optimized).]

Note that the optimizations described here will only have an effect if the network topology allows more than one possible backup path after a failure. We have also run our optimizations on less connected networks than COST239, without achieving any significant improvements over the method described in Sec. III. For small networks like COST239, our link weight optimization is performed in seconds or minutes. For larger networks the optimizations can take several hours, and should be conducted as a background refinement process.

VII. RELATED WORK

Much work has lately been done to improve robustness against component failures in IP networks [10]. In the discussion below, we focus on the most important contributions aimed at restoring connectivity without a global re-convergence, and mainly on solutions for connectionless destination-based IP routing. A related survey can be found in [27].

Narvaez et al. [28] propose a method relying on multi-hop repair paths. They propose to do a local re-convergence upon detection of a failure, i.e., to notify and send updates only to the nodes necessary to avoid loops. We call these approaches local rerouting. They are designed only for link failures, and since they are not strictly pre-configured, they can not recover traffic in the same short time-scale as a strictly pre-configured scheme.

Nelakuditi et al. [8] propose using interface specific forwarding to provide loop-free backup next hops to recover from link failures. Their approach is called failure insensitive routing (FIR). The idea behind FIR is to let a router infer link failures based on the interface packets are coming from. When a link fails, the attached nodes locally reroute packets to the affected destinations, while all other nodes forward packets according to their pre-computed interface specific forwarding tables without being explicitly aware of the failure. In another paper, they have also proposed a similar method, named Failure Inferencing based Fast Rerouting (FIFR), for handling node failures [29]. This method will also cover link failures, and therefore avoids the problem of the root cause of failure. FIFR guarantees one-fault-tolerance in any bi-connected network; it is connectionless and pre-configured, and it does not affect the original failure-free routing. However, their method will not guarantee recovery for the last hop. A similar approach, also considering dynamic traffic engineering, is presented in [31].
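To illustrate the forwarding model FIR relies on, the following sketch shows a lookup keyed on the pair (incoming interface, destination) rather than on the destination alone. The table contents here are hypothetical; the precomputation that lets FIR infer failures from the incoming interface is described in [8] and is not reproduced here.

# Minimal sketch of interface-specific forwarding: the next hop may depend on
# which interface a packet arrived on. A packet arriving on an interface it
# would never use under failure-free routing implies that some link has failed,
# and the precomputed entry deflects it accordingly. Entries are hypothetical.

fib = {
    # (incoming_interface, destination) -> next_hop
    ("if0", "D"): "B",
    ("if1", "D"): "C",  # arrival on if1 implies a failure; deflect via C
}

def forward(incoming_interface, destination):
    return fib[(incoming_interface, destination)]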
Network layer recovery in the timescale of milliseconds has traditionally only been available for networks using MPLS with its fast reroute extensions [26]. IETF has recently drafted a framework called IP fast reroute [30], where they point at Loop-Free Alternates (LFAs) [21] as a technique to partly solve IP fast reroute. From a node detecting a failure, a next hop is defined as an LFA if this next hop will not loop the packets back to the detecting node or to the failure. Since LFAs do not provide full coverage, IETF is also drafting a tunneling approach based on so called "Not-via" addresses to guarantee recovery from all single link and node failures [11]. Not-via is the connectionless version of MPLS fast reroute [26], where packets are detoured around the failure to the next-next hop. To protect against the failure of a component P, a special Not-via address is created for this component at each of P's neighbors. Forwarding tables are then calculated for these addresses without using the protected component. This way, all nodes get a path to each of P's neighbors, without passing through ("Not-via") P. The Not-via approach is similar to MRC in that loop-free backup next-hops are found by doing shortest path calculations on a subset of the network. It is connectionless and strictly pre-configured, and it covers both link and node failures using the same mechanism, and hence it operates independent of the root cause of failure. However, the tunneling approach may give less optimal backup paths, and less flexibility with regards to post failure load balancing.
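The sketch below illustrates both ideas: the loop-free condition an LFA must satisfy, and the Not-via principle of computing routes towards each neighbor of the protected component on a topology from which that component is removed. The dist table and the use of networkx are assumptions made for this illustration.

# Sketch of the two IETF mechanisms discussed above. 'dist' is an assumed
# all-pairs shortest-path distance table; networkx is used only to illustrate
# the shortest-path computation on the pruned topology.
import networkx as nx

def is_lfa(dist, s, n, d):
    """Neighbor n of s is a loop-free alternate towards destination d if
    forwarding via n cannot loop back through s [21]."""
    return dist[n][d] < dist[n][s] + dist[s][d]

def notvia_paths(g, p):
    """Paths to each neighbor of protected component p, computed 'not via' p."""
    h = g.copy()
    h.remove_node(p)  # forwarding towards the Not-via addresses must avoid p
    return {x: nx.shortest_path(h, target=x, weight="weight")
            for x in g.neighbors(p)}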

TABLE III
CONCEPTUAL COMPARISON OF DIFFERENT APPROACHES FOR FAST IP RECOVERY

Scheme                    Guaranteed in bi-connected   Node faults   Link faults   Preconfigured   Connectionless   Failure agnostic   Last hop
MRC                       yes                          yes           yes           yes             yes              yes                yes
Not-via tunneling [11]    yes                          yes           yes           yes             yes              yes                yes
Local rerouting [28]      no                           no            yes           no              yes              N/A                N/A
FIR [8]                   yes                          no            yes           yes             yes              N/A                N/A
FIFR [29]                 yes                          yes           yes           yes             yes              yes                no
LFA [21]                  no                           yes           yes           yes             yes              yes                yes
MPLS FRR [26]             yes                          yes           yes           yes             no               no                 N/A
Rerouting (OSPF)          yes                          yes           yes           no              yes              yes                yes

Tab. III summarizes important features of the different approaches. We indicate whether each mechanism guarantees one-fault tolerance in an arbitrary bi-connected network, whether it covers node faults and link faults, whether it is strictly pre-configured and connectionless, whether it operates independent of the root cause of failure (failure agnostic), and whether it solves the "last hop problem". Many of the approaches listed provide elegant and efficient solutions to fast network recovery; however, MRC and Not-via tunneling seem to be the only two covering all evaluated requirements. We argue that MRC offers this functionality with a simpler and more intuitive approach, and that it leaves more room for optimization with respect to load balancing.

Our main inspiration for using multiple routing functions to achieve failure recovery has been a layer-based approach used to obtain deadlock-free and fault-tolerant routing in irregular cluster networks based on a routing strategy called Up*/Down* [32]. General packet networks are not hampered by the deadlock considerations necessary in interconnection networks, and hence we generalized the concept in a technology independent manner and named it Resilient Routing Layers [33], [34]. In the graph-theoretical context, RRL is based on calculating spanning sub-topologies of the network, called layers. Each layer contains all nodes but only a subset of the links in the network. In this paper we refine these ideas and adapt them to an IP setting.

None of the proactive recovery mechanisms discussed above take any measures towards a good load distribution in the network in the period when traffic is routed on the recovery paths. Existing work on load distribution in connectionless IGP networks has either focused on the failure free case [23], [35], [36], or on finding link weights that work well both in the normal case and when the routing protocol has converged after a single link failure [37], [38], [39].

VIII. CONCLUSION

We have presented Multiple Routing Configurations as an approach to achieve fast recovery in IP networks. MRC is based on providing the routers with additional routing configurations, allowing them to forward packets along routes that avoid a failed component. MRC guarantees recovery from any single node or link failure in an arbitrary bi-connected network. By calculating backup configurations in advance, and operating based on locally available information only, MRC can act promptly after failure discovery. MRC operates without knowing the root cause of failure, i.e., whether the forwarding disruption is caused by a node or a link failure. This is achieved by using careful link weight assignment according to the rules we have described. The link weight assignment rules also provide a basis for the specification of a forwarding procedure that successfully solves the last hop problem.
The performance of the algorithm and the forwarding mechanism has been evaluated using simulations. We have shown that MRC scales well: 3 or 4 backup configurations is typically enough to isolate all links and nodes in our test topologies. MRC backup path lengths are comparable to the optimal backup path lengths; MRC backup paths are typically zero to two hops longer. MRC thus achieves fast recovery with a very limited performance penalty.

We have evaluated the effect MRC has on the load distribution in the network while traffic is routed in the backup configurations, and we have proposed a method that minimizes the risk of congestion after a link failure if we have an estimate of the demand matrix. In the COST239 network, this approach gave a maximum link load after the worst case link failure that was even lower than after a full IGP re-convergence on the altered topology.

REFERENCES

[1] D. Clark, "The design philosophy of the DARPA internet protocols," ACM SIGCOMM Computer Communications Review, vol. 18, no. 4, pp. 106-114, Aug. 1988.
[2] A. Basu and J. Riecke, "Stability issues in OSPF routing," in Proceedings of SIGCOMM, San Diego, California, USA, Aug. 2001, pp. 225-236.
[3] C. Labovitz, A. Ahuja, A. Bose, and F. Jahanian, "Delayed Internet routing convergence," IEEE/ACM Transactions on Networking, vol. 9, no. 3, pp. 293-306, June 2001.
[4] C. Boutremans, G. Iannaccone, and C. Diot, "Impact of link failures on VoIP performance," in Proceedings of International Workshop on Network and Operating System Support for Digital Audio and Video, 2002, pp. 63-71.
[5] D. Watson, F. Jahanian, and C. Labovitz, "Experiences with monitoring OSPF on a regional service provider network," in ICDCS '03: Proceedings of the 23rd International Conference on Distributed Computing Systems. Washington, DC, USA: IEEE Computer Society, 2003, pp. 204-213.
[6] P. Francois, C. Filsfils, J. Evans, and O. Bonaventure, "Achieving sub-second IGP convergence in large IP networks," ACM SIGCOMM Computer Communication Review, vol. 35, no. 3, pp. 35-44, July 2005.
[7] A. Markopoulou, G. Iannaccone, S. Bhattacharyya, C.-N. Chuah, and C. Diot, "Characterization of failures in an IP backbone network," in Proceedings INFOCOM, Mar. 2004.
[8] S. Nelakuditi, S. Lee, Y. Yu, Z.-L. Zhang, and C.-N. Chuah, "Fast local rerouting for handling transient link failures," IEEE/ACM Transactions on Networking, vol. 15, no. 2, pp. 359-372, Apr. 2007.
[9] S. Iyer, S. Bhattacharyya, N. Taft, and C. Diot, "An approach to alleviate link overload as observed on an IP backbone," in Proceedings INFOCOM, Mar. 2003, pp. 406-416.
[10] S. Rai, B. Mukherjee, and O. Deshpande, "IP resilience within an autonomous system: Current approaches, challenges, and future directions," IEEE Communications Magazine, vol. 43, no. 10, pp. 142-149, Oct. 2005.
[11] S. Bryant, M. Shand, and S. Previdi, "IP fast reroute using not-via addresses," IETF Internet Draft (work in progress), draft-ietf-rtgwg-ipfrr-notvia-addresses-01, 2007.
[12] P. Francois, M. Shand, and O. Bonaventure, "Disruption free topology reconfiguration in OSPF networks," in Proceedings INFOCOM, Anchorage, AK, USA, May 2007.

[13] M. Menth and R. Martin, "Network resilience through multi-topology routing," in Proceedings of the 5th International Workshop on Design of Reliable Communication Networks (DRCN), Oct. 2005, pp. 346-353.
[14] P. Psenak, S. Mirtorabi, A. Roy, L. Nguen, and P. Pillay-Esnault, "MT-OSPF: Multi topology (MT) routing in OSPF," IETF Internet Draft (work in progress), draft-ietf-ospf-mt-07.txt, 2007.
[15] T. Przygienda, N. Shen, and N. Sheth, "M-ISIS: Multi topology (MT) routing in IS-IS," IETF Internet Draft (work in progress), draft-ietf-isis-wg-multi-topology-11.txt, 2005.
[16] M. O'Mahony, "Results from the COST 239 project. Ultra-high capacity optical transmission networks," in Proceedings of the 22nd European Conference on Optical Communication (ECOC'96), 1996, pp. 11-14.
[17] A. Medina, A. Lakhina, I. Matta, and J. Byers, "BRITE: An approach to universal topology generation," in Proceedings of IEEE MASCOTS, Aug. 2001.
[18] B. Waxman, "Routing of multipoint connections," IEEE Journal on Selected Areas in Communications, vol. 6, no. 9, pp. 1617-1622, Dec. 1988.
[19] T. Bu and D. Towsley, "On distinguishing between internet power law topology generators," in Proceedings INFOCOM, June 2002, pp. 638-647.
[20] "Rocketfuel topology mapping," WWW, http://www.cs.washington.edu.
[21] A. Atlas and A. Zinin, "Basic specification for IP fast-reroute: Loop-free alternates," IETF Internet Draft (work in progress), draft-ietf-rtgwg-ipfrr-spec-base-06, Mar. 2007.
[22] M. Garey and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., 1979.
[23] B. Fortz and M. Thorup, "Internet traffic engineering by optimizing OSPF weights," in Proceedings INFOCOM, 2000, pp. 519-528.
[24] D. Johnson, "Approximation algorithms for combinatorial problems," in Proc. of the 5th annual ACM symp. on Theory of computing, 1973, pp. 38-49.
[25] A. Kvalbein, T. Čičić, and S. Gjessing, "Post-failure routing performance with multiple routing configurations," in Proceedings INFOCOM, May 2007.
[26] P. Pan, G. Swallow, and A. Atlas, "Fast reroute extensions to RSVP-TE for LSP tunnels," RFC 4090, May 2005.
[27] A. Raja and O. Ibe, "A survey of IP and multiprotocol label switching fast reroute schemes," Computer Networks, vol. 51, no. 8, pp. 1882-1907, June 2007.
[28] P. Narvaez, K.-Y. Siu, and H.-Y. Tzeng, "Local restoration algorithms for link-state routing protocols," in Proceedings of IEEE International Conference on Computer Communications and Networks, 1999.
[29] Z. Zhong, S. Nelakuditi, Y. Yu, S. Lee, J. Wang, and C.-N. Chuah, "Failure inferencing based fast rerouting for handling transient link and node failures," in Proceedings of IEEE Global Internet, Mar. 2005.
[30] M. Shand and S. Bryant, "IP fast reroute framework," IETF Internet Draft (work in progress), draft-ietf-rtgwg-ipfrr-framework-07, June 2007.
[31] R. Rabbat and K.-Y. Siu, "Restoration methods for traffic engineered networks for loop-free routing guarantees," in Proceedings of ICC, Helsinki, Finland, June 2001.
[32] I. Theiss and O. Lysne, "FRoots, a fault tolerant and topology agnostic routing technique," IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 10, pp. 1136-1150, Oct. 2006.
[33] A. F. Hansen, A. Kvalbein, T. Čičić, S. Gjessing, and O. Lysne, "Resilient routing layers for recovery in packet networks," in Proceedings of International Conference on Dependable Systems and Networks (DSN), June 2005.
[34] A. F. Hansen, A. Kvalbein, T. Čičić, and S. Gjessing, "Fast recovery from link failures using resilient routing layers," in Proceedings 10th IEEE Symposium on Computers and Communications (ISCC), June 2005.
[35] Y. Wang and Z. Wang, "Internet traffic engineering without full mesh overlaying," in Proceedings of IEEE INFOCOM, June 2001, pp. 565-571.
[36] A. Sridharan, R. Guerin, and C. Diot, "Achieving near-optimal traffic engineering solutions for current OSPF/IS-IS networks," IEEE/ACM Transactions on Networking, vol. 13, no. 2, pp. 234-247, Apr. 2005.
[37] A. Sridharan and R. Guerin, "Making IGP routing robust to link failures," in Proceedings of Networking, Waterloo, Canada, 2005.
[38] B. Fortz and M. Thorup, "Robust optimization of OSPF/IS-IS weights," in INOC, Oct. 2003, pp. 225-230.
[39] A. Nucci, B. Schroeder, S. Bhattacharyya, N. Taft, and C. Diot, "IGP link weight assignment for transient link failures," in 18th International Teletraffic Congress, Berlin, Germany, 2003.
Amund Kvalbein is a Post Doc at Simula Research Laboratory, Oslo, Norway. He received his M.Sc. degree from the University of Oslo in 2003, and his Ph.D. degree from the same university in 2007. His thesis deals with fast reroute in IP networks. He is currently leading a project on network resilience, focusing on problems like effective routing, scalability and robustness under unexpected operational environments. Kvalbein is a member of the IEEE.

Audun Fosselie Hansen received his M.Sc. degree from the University of Oslo in 2001. He is currently working towards his Ph.D. degree at Simula Research Laboratory, Oslo, Norway. The main focus of his Ph.D. thesis is fast recovery mechanisms. He is also holding a part time position at Telenor R&I, and he has participated in several European projects. His main research interests are resilience in fixed and wireless IP networks, routing in heterogeneous wireless networks and media streaming to heterogeneous networks and mobile terminals. Hansen is a member of the IEEE.

Tarik Čičić received the M.Sc. degree and the Ph.D. degree from the University of Oslo in 1997 and 2002, respectively. He has been a postdoctoral fellow at Simula Research Laboratory, Norway. Today he is the CTO of Media Network Services, an adjunct researcher at Simula Research Laboratory, and an associate professor at the Department of Informatics, University of Oslo. His research interests include network architectures, network layer protocols, communication protocols modeling, network measurements, and network resilience. Dr. Čičić has been a member of the IEEE since 2000 and ACM since 2004.

Stein Gjessing received his Dr. Philos. degree in 1985, and is currently a professor of computer science in the Department of Informatics, University of Oslo. Gjessing's original work was in the field of object oriented concurrent programming. He has worked with computer interconnects (Scalable Coherent Interface, IEEE Std. 1596) and LAN/MANs (Resilient Packet Ring, IEEE Std. 802.17). Gjessing's main research interests are currently within network resilience, including sensor networks, Internet-like networks and optical networks.

Olav Lysne is Research Director at Simula Research Laboratory, and professor in computer science at Simula Research Laboratory and the University of Oslo. He received the Masters degree in 1988 and the Dr. Scient. degree in 1992, both at the University of Oslo. The early research contributions of Lysne were in the field of algebraic specification and term rewriting. However, his most important scientific contributions are in interconnection networks, in particular issues related to fault tolerance and Quality of Service. In this area he has participated in numerous program committees, and he served as general chair of ICPP 2005. Lysne is a member of the IEEE.
