
Efficient IP routing table lookup scheme

R.C. Chang and B.-H. Lim

Abstract: One of the pertinent issues in IP router design is the IP routing table lookup. With high-speed multi-gigabit links required in the Internet, the lookup has become a great bottleneck. The authors propose a lookup scheme that can efficiently handle IP route lookup, insertion and deletion inside the routing table. This method is less complex in comparison to other schemes. By using a careful memory management design, each of the IP routes is stored only once instead of over the range that is used conventionally. Therefore, the required memory is reduced. In addition, a novel skip function is introduced to further decrease the memory size. The proposed scheme, which furnishes approximately 75 × 10^6 lookups/s, needs only a small memory size of 0.59 Mbyte. This scheme can be implemented in a pipeline hardware design and thus achieves one route lookup for every memory access. This lookup scheme can also be easily scaled to IPv6 in the future.

1 Introduction

As electronic applications such as e-commerce, e-banking, etc. become widely used on the Internet, a high-speed and high-bandwidth network will be in great demand. The network speed has already been upgraded from megabits to gigabits per second, and it will be upgraded to terabits per second in the near future. Owing to the rapid changes in the network, all of the components that construct the network must also be upgraded [1-3]. Routers and physical link wires are the two most important components. The big breakthrough in fibre optics has guaranteed the network link speed. Nevertheless, fast routers that support multi-gigabit rates are still not widely available. Hence, designing a multi-gigabit router is a very important subject. When designing a router, three critical issues must be addressed: packet lookup, switching and output scheduling. Some good solutions for switching, such as banyan switches or fast buses, have been developed and used for ATM switching [4]. Besides that, one can use full-scale fair queueing [5] in the output scheduling for tight delay bounds, or deficit round robin for a cheaper approximation and easier implementation [6]. However, the lookup is still a bottleneck inside a router. The main subject of this paper is to deal with this remaining issue, a fast routing lookup.

IP-based networks are the most popular networks in the world. Since the advent of CIDR (classless interdomain routing) [7], an IPv4 (Internet protocol version 4) [8] route or address can be split into network and host identifiers at any point. The address is then written as (route prefix/prefix length), where the prefix length ranges between 0 and 32 bits. When receiving an IP packet, an IP routing lookup is performed in the routing table to determine where the IP packet is to be forwarded. The lookup result is the next hop on the path towards the destination. Since the routing lookup may match two entries in the routing table at the
© IEE, 2002. IEE Proceedings online no. 20020034. DOI: 10.1049/ip-com:20020034. Paper first received 10th May and in revised form 17th September 2001. The authors are with the Department of Electrical Engineering, National Chung-Hsing University, Taichung, Taiwan, Republic of China. IEE Proc. Commun., Vol. 149, No. 2, April 2002.

same time, it must forward the packet using the entry with the longest match on the prefix. For instance, a routing table may have the entries (140.120/16), (140.120.90/24) and (140.120.90.80/28). The IP address 140.120.32.5 has its longest prefix match in the first entry, while the address 140.120.90.80 has its longest prefix match in the last entry. In the Internet, it is a well known fact that the routing tables are not stable [9], but rather are exposed to ongoing fluctuations called 'route flapping' [10]. A large number of routes might be inserted or deleted over short periods, although such fluctuations are alleviated by protocol improvements [11]. Therefore, both insertion and deletion of IP routes are essential mechanisms implemented in a router. An efficient router algorithm for insertion and deletion that has no speed degradation and less complexity will be a greatly desired feature in the future.

For the past few years, several fast routing lookup mechanisms have been proposed [12-27], including both software and hardware solutions. For example, by using a trie-like data structure, Degermark et al. proposed a software-based small forwarding table with a memory of 150-160 kbyte [15]. A main concern in their work was to ensure that the size of the trie fit in the on-chip cache memory of a processor. Consequently, this structure will not be useful for IPv6 [28] with longer addresses. Efficient algorithms for prefix insertion and deletion were not presented in their work. Tzeng and Przygienda proposed a software-based lookup scheme, i.e. a multiresolution trie that can achieve 2 × 10^6 lookups/s [16]. Using structures with different widths, such a trie is able to cover different parts of the address. However, when new prefixes are inserted or deleted, extra memory is required to support finding specific CIDR prefixes within the trie. Waldvogel et al. proposed a scheme that stores the entries in hash tables [17].
This scheme is based on a binary search of possible prefix lengths and takes a worst case of log2 W hashes, where W is the address length. For 128-bit IPv6 addresses, this scheme may require as many as seven hash-table lookups, each of which might require several memory accesses. An improvement on the above binary search scheme was presented by Lampson et al. [18]. With additional precomputation, it is possible for the scheme to perform prefix matching by doing a binary search within a sorted array. However, it is again nontrivial for the scheme to perform insertion or deletion of the prefixes with such a sorted array. Gupta et al. presented a hardware-based routing lookup scheme using a huge DRAM [19]. A memory size of 33 Mbyte or 9 Mbyte is required in this scheme, with two or three memory accesses, respectively, in the worst case. Although routing table update mechanisms are supported in this scheme, extra memory or memory accesses are needed to perform such actions. Furthermore, it is not apparent how to guarantee the memory scalability with growing address length. Huang and Zhao proposed a novel IP routing lookup scheme and its hardware architecture [20]. According to this scheme, an IPv4 address is split into two parts, a segment (16 bits) and an offset (k bits, 0 < k < 16). The lookup scheme introduces a small lookup table of about 450-470 kbyte using a k-bit offset, instead of the conventional 16-bit offset, and a technique called a compression bit map. Besides, Liu and Lea proposed a similar lookup scheme that reduces the memory size to 300-320 kbyte [21]. A method, which determines the segment length according to the prefix length distribution, is introduced for the memory reduction. However, both routing tables need to be rebuilt periodically, and neither scheme will scale to IPv6 with such a compacted table.

In this paper, we propose a faster and less complex IP routing lookup scheme, which can deal with the insertion and deletion of IP routes efficiently. Furthermore, the insertion or deletion can have the same high speed as the lookup. An efficient memory management method is used to decrease the memory usage in our scheme. We also introduce a novel skip function that can further reduce the required memory size. Thus, the proposed scheme uses only a small memory size of 0.59 Mbyte. If a hardware pipeline is utilised in our scheme, a speed of 75 × 10^6 lookups/s can be achieved.
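The binary search over prefix lengths mentioned above can be sketched in a few lines. The following Python sketch is ours, not from [17] or from this paper: plain dicts stand in for the hash tables, and a marker with a precomputed best match is planted at every shorter stored length, which preserves correctness at some extra memory cost compared with placing markers only on binary-search paths.

```python
def _best_match(prefixes, bits, length):
    # Longest stored prefix that is a prefix of the `length`-bit string `bits`
    best = None
    for (pb, pl), nh in prefixes.items():
        if pl <= length and (bits >> (length - pl)) == pb:
            if best is None or pl > best[0]:
                best = (pl, nh)
    return None if best is None else best[1]

def build(prefixes):
    # prefixes: {(value, length): next_hop}, value holding the top `length` bits
    lengths = sorted({pl for _, pl in prefixes})
    tables = {L: {} for L in lengths}
    for (pb, pl), _ in prefixes.items():
        for L in lengths:
            if L > pl:
                break
            key = pb >> (pl - L)              # truncate the prefix to L bits
            tables[L][key] = _best_match(prefixes, key, L)
    return tables, lengths

def lookup(addr, tables, lengths, width=32):
    # Binary search on the sorted prefix lengths: at most log2(W) probes
    best, lo, hi = None, 0, len(lengths) - 1
    MISS = object()
    while lo <= hi:
        mid = (lo + hi) // 2
        L = lengths[mid]
        hit = tables[L].get(addr >> (width - L), MISS)
        if hit is MISS:
            hi = mid - 1      # no entry or marker: only shorter prefixes can match
        else:
            if hit is not None:
                best = hit    # remember the precomputed best match so far
            lo = mid + 1      # an entry or marker: a longer match may exist
    return best
```

With the three-entry routing table of the example above, looking up 140.120.32.5 returns the port stored for (140.120/16) and looking up 140.120.90.80 returns the port for (140.120.90.80/28).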
2 Prefix trie

A trie is a general-purpose data structure for storing strings and is organised as a tree. Given a set of prefixes, we shall use a prefix trie structure to represent these prefixes. This enables efficient search (or lookup), insertion and deletion of any prefix in an existing set. A prefix trie arranges its nodes according to the rule that the left and right descendants of a node are identified as zero and one, respectively. A prefix, then, is stored as a path from the root node to a leaf node or an internal node. Fig. 1 shows an example of a prefix trie that represents a set of prefixes: (1000/1), (0100/2), (1010/3), (0110/3) and (1110/4).

Fig. 1 Example of the prefix trie

A path from the root node to any double-circle node indicates a prefix stored in the trie. In Fig. 1, the prefix (1010/3) corresponds to the path starting at the root and ending in a node reached by a right-left-right turn going down from the root node. After constructing the prefix trie, one can easily perform prefix lookup, insertion or deletion. Furthermore, one can also efficiently find the longest-prefix match in the trie. For example, a prefix (1011/4) will match both the stored prefixes (1000/1) and (1010/3) while walking down the trie from the root node. The prefix (1010/3) has three bits identical to the example, while the prefix (1000/1) has only one bit identical to the example. Thus, the longest prefix match of (1011/4) among the stored prefixes is (1010/3). Instead of using one bit in each trie node, we can use variable bits to construct the trie. This is one of the main ideas of our proposed scheme and it will be further explained in the next Section.

3 Proposed scheme

In the proposed scheme, we first partition the 32-bit IPv4 address into a set of variable integers Θ = {θ1, θ2, ..., θn}, where θ1, θ2, ..., θn are positive integers and n is the number of partitions (or levels). For example, if the IPv4 address is partitioned into eight equal parts, then we have n = 8 and θ1 = θ2 = ... = θ8 = 4. These partitioned prefixes, each of which has variable length θi (where i = 1, 2, ..., n), are then used to construct the prefix trie. In accordance with CIDR, each of the incoming prefixes may have an effective length smaller than or equal to 32. After partitioning, the prefix has a last partitioned integer, say φ, smaller than or equal to one of the integers in Θ; more precisely, φ ≤ θi for some i = 1, 2, ..., n. Next, all these partitioned prefixes are considered as the nodes of a prefix trie. Viewed as a whole, the prefix trie is expanded according to the predefined partitions Θ beginning from the root node. If an incoming prefix has a last partitioned integer φ smaller than or equal to θ1, i.e. φ ≤ θ1, it will simply be stored inside the root node. If the prefix has φ ≤ θ2, it will first walk through the root node, and then be stored as a leaf node in the second level. Similarly, each of the prefixes will walk through the previous nodes and finally be stored in the level that meets its last partitioned integer φ. This means that each prefix will have a leaf node located in a particular level. Thus, the whole prefix trie is constructed.

Instead of the conventional one-bit node, we use variable-bit nodes in this scheme. Therefore, the underlying structure of the scheme is an array contained in each node. The array, which has 2^θi entries, stores all of the information carried by a partitioned prefix to perform lookup, insertion or deletion. All of these arrays form the block of memory that is required in each level.
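The partitioning step can be illustrated with a short sketch for the example partition Θ = {4,4,4,4,4,4,4,4}. This is our own Python model, not the authors' Verilog; the function name and return shape are assumptions.

```python
THETA = [4, 4, 4, 4, 4, 4, 4, 4]   # the example partition, n = 8

def partition(prefix_bits, prefix_len, theta=THETA, width=32):
    """Split a (value, length) CIDR prefix into per-level chunks.

    Returns a list of (chunk_value, effective_bits) pairs; the effective
    length of the last chunk is the integer called phi in the text."""
    # left-align the prefix in `width` bits so chunks come off the top
    v = prefix_bits << (width - prefix_len)
    chunks, consumed = [], 0
    for t in theta:
        if consumed >= prefix_len:
            break
        eff = min(t, prefix_len - consumed)           # phi for the last chunk
        chunk = (v >> (width - consumed - t)) & ((1 << t) - 1)
        chunks.append((chunk, eff))
        consumed += t
    return chunks
```

For a 7-bit prefix such as 0011000, this yields the chunks 0011 (4 effective bits) for level L1 and 0000 (3 effective bits) for level L2, matching the walkthrough in Section 3.1.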
Although more space is required for the variable-bit scheme, the major advantage is that fewer memory accesses are required in the lookup, insertion or deletion. A detailed description of the array will be given later. To understand this scheme better, we describe here the way the trie is actually constructed. The incoming prefixes are first partitioned, and the partitioned prefixes are sent to different levels for performing lookup, insertion or deletion. Here we denote the partitioned prefix as PRji, where j = 1, 2, ... indexes the incoming IP prefixes and i = 1, 2, ..., n indexes the partitions, and we denote the level as Li. As shown in Fig. 2, each PR is sent to a particular level in a particular clock cycle. For instance, PR11 will be sent to L1 in the first clock cycle; PR12 and PR21 will be sent to L2 and L1, respectively, in the second clock cycle. Similarly, PR13, PR22 and PR31 will be

sent to L3, L2 and L1, respectively, in the third clock cycle. Consequently, each PR is sent to a different level in a particular clock cycle to perform insertion, deletion or lookup. In brief, the ith partitioned prefix PRji of an incoming prefix is sent to level Li in the ith clock cycle after the prefix arrives. These actions come to an end when we meet the last partitioned prefix, which carries the last partitioned integer φ.

Fig. 2 Block diagram of the proposed scheme

In each level, each new incoming PR is assigned a new array, which was previously known as a leaf node. However, if a pointer from the previous level already exists, the incoming PR is directed to an existing array in this level. The array, which has 2^θi entries, keeps all of the information carried by the PR. Each of the entries contains a skip value, an indicator, a pointer and a set of output-port values. The pointer is used to indicate a particular array at the next level. The output-port values are the next-hop values that a variable-length IP prefix is destined for. They are independently inserted into the array according to the length l of the PR (where 1 ≤ l ≤ θi). The advantage of such an array is that it can be used to look up, insert or delete a specific CIDR prefix easily. This is an important feature that should be supported by the lookup scheme. After prefix matching, only one of the output-port values or a pointer will be sent out. The indicator is then used to distinguish between an output-port value and a pointer. The skip value denotes whether the lookup operation can skip the next level or not. The proposed skip function makes the prefix matching process much easier and reduces the memory size as well. This will be explained further in Section 3.2.

3.1 Examples

Here we give some examples of prefix insertion to clearly explain the prefix trie construction in the scheme. As a simple example, we use the previously mentioned prefix partition Θ = {4,4,4,4,4,4,4,4} with n = 8. Hence, there are 2^4 = 16 entries in each array. An incoming prefix R that has an output-port value y is denoted as R(y), where the prefix R is written as a (route prefix/prefix length) pair, and y belongs to the set of output-port values {α, β, γ, δ, ...}. For simplicity, only the first 12 bits of the prefix are written out in binary, while the effective prefix length is written in decimal. For example, R(α) = (101010001111/6) represents a prefix R whose prefix length equals six (only the first six bits are effective) and which is destined for the output port α. As shown in Fig. 3, three prefixes are to be inserted: R1(β) = (000000000000/2), R2(γ) = (001100000000/7) and R3(δ) = (111100001111/12).

At the beginning, the first prefix R1(β) is partitioned into several PRs. For simplicity, only PR11, PR12 and PR13 are considered. As the effective length of R1 is two, only the first partitioned prefix PR11 = 0000 needs to be inserted. Thus, only the first level L1 is needed to handle PR11. Furthermore, because only the first two bits of PR11 are effective, i.e. the length l of the partitioned prefix is equal to two, four entries are needed in the new array to insert the output-port β. These four entries are those beginning with 00, i.e. 0000, 0001, 0010 and 0011. The output-port β is written at the second column from the right of the array because l = 2 for PR11. In the second example, two levels are required to handle R2(γ) because both PR21 = 0011 and PR22 = 0000 are effective. Hence, PR21 will first get a pointer 'A' at the entry 0011 in L1, and then PR22 will be handled in L2 according to pointer 'A'. Now, two entries, which begin

Fig. 3 Examples of the proposed scheme

with 000, are needed to insert the output-port γ. The output-port γ is written at the third column from the right of the array because l = 3 for PR22. If there are other prefixes having the same first four bits as R2, they will use the same pointer A in L1 and the same array in L2.
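The entry-filling rule of these examples, where a chunk with l effective bits out of θ occupies the 2^(θ-l) entries sharing those bits in column l, can be sketched as follows. This is an illustrative Python model of one level's node array, not the hardware structure; the names are ours.

```python
theta = 4
array = [dict() for _ in range(1 << theta)]   # 16 entries, one column per length l

def insert_chunk(array, chunk, eff, theta, value):
    # Write `value` (an output port or a pointer) into every entry whose top
    # `eff` bits equal those of `chunk`, in the column indexed by l = eff.
    base = (chunk >> (theta - eff)) << (theta - eff)   # zero the don't-care bits
    for i in range(1 << (theta - eff)):
        array[base + i][eff] = value

def lookup_chunk(array, chunk):
    # Longest match inside the node: the highest occupied column wins.
    cols = array[chunk]
    return cols[max(cols)] if cols else None

insert_chunk(array, 0b0000, 2, theta, 'beta')   # R1's chunk: entries 0000-0011
insert_chunk(array, 0b0011, 4, theta, 'A')      # R2's pointer at entry 0011 only
```

Entry 0001 then yields β, while entry 0011 yields the pointer A, since the longer (l = 4) column takes precedence over the shorter one.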

3.2 Novel skip function

The third example explains the novel skip function in the proposed scheme. The effective length of the prefix R3(δ) is 12, and we have PR31 = 1111, PR32 = 0000 and PR33 = 1111. Thus, three levels are needed to handle it. However, there are continuous zeros in R3(δ). Therefore, we can assign a one-entry array instead of the full-entry array for the partitioned prefix PR32 in L2. That makes the lookup skip through L2 and thus not only reduces the size of the memory, but also makes the prefix lookup more efficient. This skip function will perform well in future IPv6 lookup, because there will be many continuous zeros in an IPv6 address. Hence, the prefix R3(δ) will first have a pointer B and a skip value of 1 in L1, then a level-independent pointer A in L2, and finally have its output-port δ inserted in the particular entry in L3. Most importantly, the skip value is marked as 1 to indicate that the skip function will be performed in L2. Additionally, all of the blank spaces in the entries are filled with zeros when the memory is initialised at the beginning of the actions.

3.3 Lookup operation

The lookup operation is very simple in the proposed scheme. After an incoming prefix is partitioned into several partitioned prefixes, those partitioned prefixes are sent to different levels in sequence. A signal indicates whether the operation to be performed on the partitioned prefix is a lookup, insert or delete. Inside each level, the incoming partitioned prefix is used as an index to the entries in a particular array. If the last partitioned prefix has not been received in the current level, a pointer towards the next level will be given to the partitioned prefix. In the next level, the pointer from the previous level is then used to designate a particular array and continue the lookup operation. If one of the levels receives the last partitioned prefix during the lookup operation, only the output-port value will be acquired and sent out, after the lookup is performed in accordance with the length of the partitioned prefix (l). The best matching result is always available for a prefix, as the entries have already been stored according to the prefix length. After the last lookup is performed, the output-port value will pass through all of the later levels without further lookup operations. In this scheme, the lookup operation is similar to that used for Patricia tries [29]. However, unlike Patricia tries, backtracking on the trie is avoided. Hence, a significant speedup is achieved.

3.4 Insert and delete operation

The insertion and deletion of an IP prefix are nearly the same as the lookup operation described above. The only difference is that a signal is assigned to designate insert or delete instead of lookup. As in the lookup operation, a pointer to the next level will be obtained if the current level has not yet received the last partitioned prefix. Otherwise, the pointer from the previous level, the partitioned prefix and its length will be used to indicate a certain place in a particular array, and thus insertion or deletion will be performed. If the insert signal is set, the output-port value of the prefix will be written into the entry, but if the delete signal is set, the output-port value in the entry will be replaced by null. If all the entries are nulls, the associated array will be released for further usage. By having an operation similar to the lookup, the insertion and deletion can be performed as fast as the lookup speed. Therefore, insertion and deletion are less complex and more efficient in comparison with other schemes.

3.5 Memory reduction

With careful observation, we found that the memory usage of the output-port columns in an array could be further reduced. Consider that we have an array with 2^θi = 2^4 = 16 entries and three partitioned prefixes PR11 = 0000, PR21 = 0000 and PR31 = 0000 with output-port values α, β and γ and effective lengths one, two and three, respectively. As shown in Fig. 4a, because only one bit of PR11 is effective, all of the eight entries beginning with zero are written with the output-port value α. Similarly, the four entries beginning with 00 will be written with β because the number of effective bits in PR21 is two. Both of the entries beginning with 000 will have the output-port value γ for PR31. This is the conventional way to store all of the output-port values. However, it wastes a lot of memory space and can be improved. When we use the method shown in Fig. 4b, only one space is used to store each of the output-port values and it is shared among those entries. Therefore, the size of the array is reduced. As shown in this example, the array with θi = 4 needs only 2^4 + 2^3 + 2^2 + 2^1 = 30 spaces instead of the previous 64 spaces. Thus, 53% of the array size is saved. We will have much more reduction in the array size when θi is larger. In summary, at least 25% of the whole memory size is saved even when we use the smallest θi = 2.

Fig. 4 Memory reduction example: a conventional array (64 spaces); b reduced array (30 spaces)

4 Software simulation and performance analysis

To analyse the performance of the routing table constructed using the proposed lookup scheme, a prototype that supports all of the features described above, including prefix lookup, insertion and deletion, was implemented in the Verilog hardware description language. The performance was evaluated using the well known Internet core routing table with about 40000 addresses, available at the website of the Internet Performance Measurement and Analysis (IPMA) Project [30]. First of all, the required memory size for various prefix partitions and for the different methods of memory reduction and skip function was investigated. Taking into account that an odd-number partition is not as efficient as an even-number partition when implementing hardware, we decided to use even numbers in the prefix partition. Besides, a strategy whereby the next number should always be smaller than or equal to the previous number in the partition is used to prevent rapid growth of the memory size. The software simulation results are given in Table 1. The memory size is calculated from the data generated by the Verilog simulation program. As we can see from the table, the memory size decreases when the first partitioned integer becomes larger. However, when the first partitioned integer is equal to 16, the memory size begins to grow. Thus, the partition {14,4,4,4,4,2} has the smallest memory size of all the partitions. Besides, by using the memory reduction method in the scheme, the memory size is reduced to almost half of that without the reduction method. The memory size is further reduced using the novel skip function. The benefit of the skip function will be more significant in an IPv6 implementation, because there will be more continuous zeros in an IPv6 address. Hence, with the methods of memory reduction and skip function, the partition {14,4,4,4,4,2}, which has the smallest memory size of 0.59 Mbyte, will be used in our proposed scheme.
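The memory-reduction arithmetic of Section 3.5 that feeds into these per-partition sizes can be checked directly. The small sketch below is ours, not part of the authors' Verilog simulation: it compares the conventional θ·2^θ output-port spaces per node against the shared layout of 2^θ + 2^(θ-1) + ... + 2^1 spaces.

```python
def spaces(theta):
    """Output-port spaces per node: (conventional, reduced) for a theta-bit chunk."""
    conventional = theta * 2 ** theta                  # 2**theta entries x theta columns
    reduced = sum(2 ** l for l in range(1, theta + 1)) # one shared space per group
    return conventional, reduced

print(spaces(4))  # (64, 30): the 53% saving quoted in Section 3.5
print(spaces(2))  # (8, 6): the minimum 25% saving for the smallest theta = 2
```

This reproduces both figures quoted in the text: 30 against 64 spaces for θi = 4, and a floor of 25% savings for θi = 2.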

Table 1: Required memory sizes for different partitions and methods

                        Memory size (Mbyte)
                        With memory reduction        Without memory reduction
Partition               With skip   Without skip     With skip   Without skip
16,4,4,4,2,2            1.01        1.01             2.95        2.96
14,4,4,4,4,2            0.59        0.61             1.26        1.29
12,4,4,4,4,4            0.77        0.78             1.37        1.40
12,4,4,4,2,2,2,2        0.78        0.78             1.37        1.39
10,4,4,4,4,2,2,2        0.94        0.96             1.63        1.64
8,6,4,4,4,2,2,2         2.13        2.14             4.81        4.85
6,6,6,4,4,2,2,2         3.74        3.75             8.56        8.59

Also, we compare our scheme with two other lookup schemes: the DIR-21-3-8 proposed by Gupta et al. [19] and the multiresolution trie proposed by Tzeng and Przygienda [16]. Table 2 illustrates the comparison results. In a hardware pipeline configuration, a fixed prefix partition of {14,4,4,4,4,2} is able to achieve one route lookup in every memory access. After the simulations are done with the real backbone IPv4 prefixes of different lengths, the maximum data rate is determined. Thus, we observed consistently about 75 × 10^6 lookups/s in our proposed scheme, as shown in Table 2. This is much greater than the lookup speed of the multiresolution trie scheme with 2 × 10^6 lookups/s and the DIR-21-3-8 scheme with 20 × 10^6 lookups/s. In the DIR-21-3-8 scheme, the required memory is 9 Mbyte. Because of such a large memory, it is not easy to put the routing tables into a faster SRAM. The routing table constructed by the multiresolution trie is small, close to 1 Mbyte. However, that scheme is a software-based approach and it is hard to predict the memory size if it is implemented in hardware. Simulation results show that only 0.59 Mbyte is needed for the proposed routing lookup scheme, much smaller than both of the previously mentioned schemes. It is very easy to perform insertion and deletion in our scheme, whereas both the DIR-21-3-8 and the multiresolution trie have some insertion and deletion shortcomings. Our proposed scheme needs only one memory access for prefix lookup, insertion or deletion. This is better than both the DIR-21-3-8 scheme, with 1-3 memory accesses, and the multiresolution trie scheme, with 11 memory accesses.

Table 2: Performance comparisons of different lookup schemes

                            Proposed scheme       Multiresolution trie   DIR-21-3-8
Speed                       75 × 10^6 lookups/s   2 × 10^6 lookups/s     20 × 10^6 lookups/s
Memory size (40000 entries) 0.59 Mbyte            1 Mbyte                9 Mbyte
SRAM                        yes                   yes                    no
Insert/Delete               good                  medium                 poor
Memory access               1                     11                     1-3
Implementation              hardware pipeline     software-based         hardware pipeline

5 Conclusions

In this paper, an efficient IP routing table lookup scheme was presented. The scheme, which can perform efficient lookup, insertion and deletion of IP prefixes, is less complex in comparison to other schemes. By introducing memory reduction and the novel skip function, we have successfully reduced the required memory size to about 0.59 Mbyte. This lookup scheme, which was implemented in the Verilog hardware description language, can achieve one route lookup in every memory access when it is implemented in a pipeline fashion. The Verilog simulation results show that we are able to obtain approximately 75 × 10^6 lookups/s with real Internet-backbone IPv4 addresses. In addition, our scheme can also be easily scaled from IPv4 to IPv6. Thus, we believe that our proposed scheme is indeed feasible for high-speed router design.

6 Acknowledgments

This work was supported by the National Science Council of Taiwan, ROC, under grant NSC90-2213-E-005-013 and by the Meng Yao Chip Center.
7 References
1 PARTRIDGE, C., CARVEY, P.P., BURGESS, E., CASTINEYRA, I., CLARKE, T., GRAHAM, L., HATHAWAY, M., HERMAN, P., KING, A., KOHALMI, S., MA, T., McCALLEN, J., MENDEZ, T., MILLIKEN, W.C., PETTYJOHN, R., ROKOSZ, J., SEEGER, J., SOLLINS, M., STORCH, P., TOBER, B., and TROXEL, G.D.: A 50-Gb/s IP router, IEEE/ACM Trans. Netw., 1998, 6, (3), pp. 237-248

8 BAKER, F. (Ed.): Requirements for IP version 4 routers. RFC 1812, 1995
9 LABOVITZ, C., MALAN, G.R., and JAHANIAN, F.: Internet routing instability. Proceedings of ACM SIGCOMM Conference, Cannes, France, 1997, pp. 115-126
10 LABOVITZ, C.: Routing analysis, Internet performance measurement and analysis (IPMA) project, [Online], available at http://www.merit.edu/ipma/analysis/routing.html, 1997
11 VILLAMIZAR, C., CHANDRA, R., and GOVINDAN, R.: BGP route flap damping. Internet Engineering Task Force, 1997
12 TZENG, H.-Y.: Longest prefix search using compressed trees. Proceedings of IEEE Global Communication Conference, Sydney, Australia, 1998
13 ZUKOWSKI, C.A., and PEI, T.: Putting routing tables into silicon, IEEE Netw., 1992, pp. 42-50
14 McAULEY, A., and FRANCIS, P.: Fast routing table lookup using CAMs. Proceedings of IEEE INFOCOM Conference, 1993, Vol. 3, pp. 1382-1391
15 DEGERMARK, M., BRODNIK, A., CARLSSON, S., and PINK, S.: Small forwarding tables for fast routing lookups. Proceedings of ACM SIGCOMM Conference, Cannes, France, 1997, pp. 3-14
16 TZENG, H.-Y., and PRZYGIENDA, T.: On fast address-lookup algorithms, IEEE J. Sel. Areas Commun., 1999, 17, (6), pp. 1067-1082
17 WALDVOGEL, M., VARGHESE, G., TURNER, J., and PLATTNER, B.: Scalable high speed IP routing lookups. Proceedings of ACM SIGCOMM, Cannes, France, 1997, pp. 25-36
18 LAMPSON, B., SRINIVASAN, V., and VARGHESE, G.: IP lookups using multiway and multicolumn search. Proceedings of IEEE INFOCOM, San Francisco, CA, USA, 1998, pp. 1248-1256
19 GUPTA, P., LIN, S., and McKEOWN, N.: Routing lookups in hardware at memory access speeds. Proceedings of IEEE INFOCOM, San Francisco, CA, USA, 1998, pp. 1240-1247

20 HUANG, N.-F., and ZHAO, S.-M.: A novel IP-routing lookup scheme and hardware architecture for multigigabit switching routers, IEEE J. Sel. Areas Commun., 1999, 17, (6), pp. 1093-1104
21 LIU, Y.-C., and LEA, C.-T.: Fast IP table lookup and memory reduction. Proceedings of IEEE Workshop on High Performance Switching and Routing, 2001, pp. 228-232
22 SRINIVASAN, V., and VARGHESE, G.: Fast address lookups using controlled prefix expansion. Proceedings of ACM Sigmetrics Conference, Madison, WI, USA, 1998, pp. 1-11
23 NILSSON, S., and KARLSSON, G.: IP-address lookup using LC-tries, IEEE J. Sel. Areas Commun., 1999, 17, (6), pp. 1083-1092
24 DOERINGER, W., KARJOTH, G., and NASSEHI, M.: Routing on longest-matching prefixes, IEEE/ACM Trans. Netw., 1996, 4, pp. 86-97
25 XU, K., WU, J.-P., WU, J., and CHEN, X.-H.: The analysis and design of fast route lookup algorithms for high performance router. Proceedings of IEEE ATM (ICATM 2001) and High Speed Intelligent Internet Symposium, 2001, pp. 320-325
26 PAK, W., and BAHK, S.: Flexible and fast IP lookup algorithm. Proceedings of IEEE International Conference on Communications, 2001, Vol. 7, pp. 2053-2057
27 ZITTERBART, M.: HeaRT: high performance routing table lookup, Philos. Trans. R. Soc., 2000, 358, (1773), pp. 2217-2231
28 HINDEN, R., and DEERING, S.: IP version 6 addressing architecture. RFC 1884, 1995
29 MORRISON, D.R.: PATRICIA - practical algorithm to retrieve information coded in alphanumeric, J. ACM, 1968, 15, pp. 514-534
30 Michigan University and Merit Network, Internet performance measurement and analysis (IPMA) project, [Online], available at http://nic.merit.edu/ipma/
