
A Model for Interconnection of IP Networks

by Pedro Ferreira
(pferreira@cmu.edu)
Engineering and Public Policy Department
Carnegie Mellon University

Main Advisor: Prof. Marvin Sirbu


Other Advisors: Prof. Francisco Veloso, Prof. Scott Matthews

Word Count 1 (main text): 5268


Word Count 2 (full paper): 10545

Monday, January 06, 2003



Abstract

Internet Service Providers (ISPs) must interconnect in order to realize the network externality
benefits of being able to reach all users. Market power determines whether and how much one
provider pays another for interconnection. Recently, anti-trust authorities have expressed concern
about the potential abuse of market power with respect to network interconnection in the scope of
the Internet. This paper develops models for understanding when, where, and through what
mechanisms ISPs will choose to interconnect. By solving these models with the appropriate
inputs, one can derive the optimal interconnection strategies for ISPs and understand how the
topology of the Internet might evolve.

Analyses of the various sources of costs for interconnection are provided as parameters for these
models. Between January 1999 and July 2002, bandwidth prices have decreased at an average
monthly rate of about 5%. Transit prices have decreased by 50% from May 1999 to October
2001. Because these prices move in the same direction, it is difficult to determine whether
providers will come to rely more or less on peering agreements. Greater reliance on peering
would decrease the potential for the emergence of dominant providers.

Keywords: Internet, interconnection, price of bandwidth, cost modeling, econometrics

3/5/2003 2/40

1 Introduction

Technology to connect computers that are geographically remote started being developed in the
late 1960s at the University of California, Los Angeles, under the auspices of the Department of
Defense’s Advanced Research Projects Agency (ARPA), giving rise to the ARPANET [1]. Until
the early 1990s there was a unique nation-wide backbone, the NSFNET, used strictly to support
research, education and governmental activities. However, networking technology was quickly
disseminated and a number of for-profit organizations, such as WorldCom and Sprint, started
running their own networks for commercial purposes [2]. In 1995, the NSFNET reverted back to
a research project, leaving the management of the Internet backbone in commercial hands1.
Besides these large
corporations that run the core of the Internet, termed Internet Backbone Providers (IBPs), we also
find in today’s network a large number of Internet Service Providers (ISPs) that operate the edges
of the Internet and provide access to end-users [3].

The diversity of network operators, however, raises a very interesting problem. Suppose that you
are a customer of one of these networks and you want to exchange information with a friend who
is a customer of another network. A solution is for the two networks to interconnect at some point
where information can flow both ways. In this case, your provider will be carrying the traffic you
want to send to your friend up to the interconnection point, and your friend’s provider will route it
beyond that point up to your friend’s premises. Note that this situation is overly simplified. You
and your friend may run big corporations, each with its own internal networks, which want to
interconnect not only between themselves but also with a large number of other institutions. That
is, interconnection might be established among several parties and between networks of networks
[4].

This is what the Internet is all about. The Internet is a set of networks, called Autonomous
Systems [5], which interconnect, allowing IP traffic to flow among them. Interconnection points,
sometimes called Network Access Points (NAPs), developed as the number of Internet users
increased. In 1995, the NSF funded four NAPs in New Jersey, Washington DC, Chicago and San
Francisco [6]. Increasingly, private companies have been establishing more of these

1
For more information on the early history and development of the Internet refer to “A Brief History of the
Internet” by Leiner, Cerf, Clark, Kahn, Kleinrock, Lynch, Postel, Roberts and Wolff, August 2000,
available at http://www.isoc.org/internet/history/brief.shtml and to “Economic FAQs about the Internet” by
MacKie-Mason and Varian in “Internet Economics”, McKnight and Bailey (Eds.), MIT Press, 1995


“meet points” to exchange traffic directly, avoiding the potential congestion that may slow down
the public NAPs2.

Interconnection raises a set of challenging issues in the scope of engineering policy. What
technology is commercially available for carriers to interconnect? What sort of interconnection
arrangements make most sense, both technically and economically? What prices, if any, should
providers pay to each other to interconnect? Can dominant carriers emerge in this market? If so, is
a dominant carrier able to extract monopoly rents? Can smaller carriers route around the largest
providers? Can a dominant carrier deny interconnection to smaller providers on the grounds of
unequal exchange of value? Should then carriers be obliged to interconnect by law? If so, for the
provision of what sort of services would they have to interconnect? Or are the positive network
externalities from interconnection always enough to encourage it3?

This paper is divided into 5 parts. In order to understand the current regulatory framework that
defines the duties of providers in terms of interconnection, Section 2 analyzes the extent to which
the Telecommunications Act of 1996 applies to IP networks. Section 3 presents several
architectures for interconnection, as a way to characterize the technology currently available. To
assess the economic feasibility of these architectures, Section 4 provides an analysis of the
various sources of costs for interconnection. Section 5 specifies a full competitive model for
interconnection among ISPs that, once provided with the appropriate inputs, would allow us to
determine the optimal interconnection strategies for ISPs and to understand whether smaller
carriers could route around a dominant IBP, in which case the existence of the latter might not be
so problematic. Finally, Section 6 conveys the most important conclusions.

2
For more information on the evolution of the Internet in the US during the last decade see “The Internet
Coming of Age”, by the Committee on the Internet in the evolving information infrastructure, National
Academy of Sciences, 2001. For an analysis of the growth of the Internet internationally see “Internet
traffic exchange and competition in the backbone” by Paltridge, OECD Workshop on Internet Traffic
Exchange, 7-8 June, 2001
3
Note that interconnection between two networks entails positive network externalities for both networks,
unless the incremental costs of interconnection exceed the benefits that the networks can appropriate.
Therefore, there is usually an incentive for networks to interconnect. However, the distribution of those
benefits between the two networks is a delicate and complex issue that often hampers the establishment of
the interconnection agreement


2 Regulatory Framework

This paper discusses interconnection among IP networks. An IP network is a network that routes
traffic using TCP/IP4 [7] and in which each interface of every machine connected to the network
is identified by an IP address. These networks are packet-switched. Messages are broken into
packets that are routed according to the information in their headers. Along the network, routers
maintain forwarding tables that store information about where packets should be forwarded [8].
Several mechanisms are used to disseminate the topology of the network across routers, as well as
to communicate link failures and routing policy changes [9].

The Telecommunications Act of 1996 [10] defines (§3) information service as a service that
“changes the form or content of the messages but does not affect the management, control or
operation of a telecommunications system”. The transport of IP traffic is classified as an
information service by the Federal Communications Commission (FCC). Telecommunications is
defined in the Act as “the transmission among points specified by the user of information of the
user’s choosing without change in the form or content of the information as sent and received”. A
Telecommunication service pertains to the offering of telecommunications for a fee directly to the
public.

In addition, the Telecommunications Act of 1996 defines a set of obligations that apply only to
telecommunications service providers, particularly to Incumbent Local Exchange Carriers
(ILECs), such as the duty to allow Competitive Local Exchange Carriers (CLEC) to “physically
co-locate at the ILECs’ premises to obtain access to Unbundled Network Elements”, and
therefore interconnect, “with at least the same quality, rate, terms and conditions as the access
provided to subsidiaries and affiliates”5. However, as long as IP transport is classified as an
information service, none of these obligations apply to networks carrying IP traffic. They do
however apply in the case of Asynchronous Transfer Mode (ATM) networks, which are
essentially another type of packet-switched network. Another example of inconsistency in the
current regulatory framework is the fact that telephony over IP is not regulated at all whereas

4
TCP/IP stands for the Transmission Control Protocol / Internet Protocol, which is the end-to-end protocol
used in today’s network to route messages among machines
5
See § 251(b) and § 251(c) of the Act


telephony as traditionally provided by ILECs and CLECs is regulated as common carriage by the
FCC.

In other words, the public policy goals that motivated the enactment of the Telecommunications
Act of 1996, among them the desire to promote open competition6, have not been applied to the
case of IP networks yet. Today, ISPs and IBPs conduct their business in this incoherent regulatory
framework, to the advantage of some and at the expense of others. Often, larger IBPs agree to
interconnect with smaller providers for free only if the latter meet a number of requirements. The
only legal instruments smaller providers can use to dispute the reasonableness of these
requirements are anti-trust laws and tort law [11].

Concerns regarding issues of dominance in the backbone market were the central topic in the
WorldCom-Sprint merger. In June 2000, the Department of Justice (DOJ) filed a suit to enjoin
WorldCom from acquiring Sprint, arguing that the proposed merger “would substantially lessen
competition in violation of Section 7 of the Clayton Act” [12]. The DOJ argued that the merger
would lead to higher prices for consumers, lower Quality of Service (QoS) and less innovation,
thus violating anti-trust laws.

At that time, the provider Level 3 argued before the FCC [13] that a high concentration of
customers on a small number of providers’ networks, without open, non-discriminatory
interconnection to those networks, would place small providers and new entrants at a clear
competitive disadvantage. The merger between WorldCom and Sprint could create a
concentration in the market likely unhealthy for the industry. The DOJ complaint argued that, in
June of 2000, the HHI for the largest 15 backbones, in terms of traffic, was about 1850 (see
footnote 7) and that the merger would raise this figure to approximately 3000 (see footnotes 8 and 9).

The harmful effects of such a concentration could be further exacerbated in the absence of an
open interconnection framework for IP networks. Such a framework would encompass

6
See, for example, the speech of W. Kennard, Chairman of the FCC, February 8th, 1999, by occasion of the
third anniversary of the Telecommunications Act of 1996
7
By December 2001, the Herfindahl-Hirschman Index (HHI), in terms of IP traffic, for the 20 largest IBPs
in the US market had declined to about 900, based on the data from Roberts [17], indicating increased IBP
competition
8
See appendix A of the DOJ complaint against the WorldCom-Sprint merger ([12])
9
Recall that following the 1982 Federal merger guidelines [18], a market with an HHI below 1000 is
considered not concentrated. If the HHI is between 1000 and 1800, the market is considered moderately
concentrated and mergers that increase the index by 100 points or more are challenged by the government


measurable and published peering criteria by all providers. Smaller providers, such as Level 3 and
Genuity, made public their interconnection requirements as a way to pressure larger IBPs to
publish theirs [16, 17]. This would let smaller carriers know exactly what is required to obtain
settlement-free interconnection agreements from those IBPs and consequently would reduce the
IBPs’ potential abuse of market power.

In September 2000, the FCC issued a report on this matter [16], which concluded that, in the
absence of a dominant backbone, market forces are enough to encourage interconnection. Still,
the report notes that if a dominant backbone provider should emerge, regulation may be
necessary to prevent monopoly rents10. In December 2000, the FCC issued two additional reports
on interconnection. The first made the case for settlement-free agreements as the default regime
for interconnection at central offices in the context of traditional telephony [19]. The second
suggested that carriers should split equally the incremental costs of interconnection [20].
However, for the last two years, the FCC has been silent with respect to interconnection issues.
Thus, a goal of this paper is to contribute to the discussion of interconnection in the context of IP
networks, a topic insufficiently addressed given the increased importance of the Internet in
today’s societies [21].

Therefore, the most interesting research questions with respect to interconnection include: 1)
under what conditions can a dominant backbone provider emerge?; 2) can smaller carriers route
around such a provider, interconnecting directly and thus avoiding the transport services of a
dominant carrier that could abuse its market power and charge monopoly rents? These questions
are intertwined. If smaller providers can route around a dominant provider when it charges high
fees, then its market share will decrease and it will likely lose its dominant position. But if
backbone transport fees are low, smaller providers may find it beneficial to use the services of a
large provider, and the market share of the latter will increase. To control transport fees, it may be
sufficient that the market for transport is “contestable” via peering [22]11.

10
As it has been in other networked industries such as telephony
11
For an empirical analysis of the structure of the Internet refer to “Backbone topology, access, and the
commercial Internet, 1997-2000”, by O’Kelly and Grubesic, Environmental and Planning B: Planning and
Design 2002, Vol. 29, pp. 533-552. This study argues that many providers have moved away from fully
connected mesh networks to sparser topological configurations, suggesting an increasing reliance on
peering between providers


3 Interconnection Practices

This section describes three practices used to interconnect IP networks: direct peering, peering at
a NAP and transit. The illustrations provided focus on the case of interconnection between an ISP
and an IBP, but the analysis applies to all interconnection situations in the Internet in general12.

3.1 Direct Peering

Direct peering is usually a bilateral business and technical arrangement, where two providers
agree to accept traffic from one another, from one another’s customers, and from the customers
of their customers, as illustrated in Figure 1. Usually, peering does not include the obligation to
carry traffic to third parties [23].

Historically, direct peering has often been offered on a Bill-and-Keep (B&K) basis13. However,
an element of barter arises when the two networks do not perceive a roughly equal exchange of
value. For this reason, many of the largest IBPs impose minimum peering requirements that
smaller providers willing to peer with them must meet. These requirements usually include a
minimum number of locations for peering and a minimum bandwidth for the peering connections.

[Figure 1: the ISP’s network, serving the ISP’s customers, and the IBP’s network, serving the
IBP’s customers, connected by a link between two edge routers over which an EBGP session runs.]

Figure 1 - Illustration of a peering relationship between an ISP and an IBP.

In IP networks, inter-carrier routing is accomplished by the Border Gateway Protocol (BGP) [24].
Two edge routers speaking BGP establish an External BGP (EBGP) session over TCP/IP and

12
For a detailed analysis of interconnection architectures and settlements refer to “ISP survival guide” by
Huston, 1998, Wiley and Sons
13
Under a Bill-and-Keep agreement, both operators keep their own revenues and no interconnect payments
are made. For this reason, these agreements are also called Sender-Keeps-All

3/5/2003 8/40
A Model for Interconnection of IP Networks

send to each other, according to their routing policies, information about how to reach
destinations in the Internet [25, 26]14. In this example, an EBGP session runs over the link between
the ISP and the IBP. This link is also used to exchange actual traffic.

Link ownership may take one of several forms: the ISP and the IBP may split the costs of the
link, either of them can own the link completely or, alternatively, they can lease the link from a
link provider in the industry15. In the case of interconnection with a large IBP, the cost of the
link will most likely be borne by the ISP. ISPs of similar size usually split the cost of the link
equally.

IP traffic from the ISP’s customers is filtered at the ISP’s premises and only the traffic that is
destined to customers of the IBP passes to the link between them. Once that traffic is within the
IBP’s network, it is the IBP’s responsibility to deliver it to its final destination. Traffic in the
opposite direction flows likewise. Direct peering is an attractive solution for interconnection
because it facilitates filtering traffic, since traffic from different ISPs flows in separate links, thus
arriving at an IBP on separate interfaces [27].

3.2 Peering at Network Access Points (NAPs)

NAPs are locations where several providers meet to exchange traffic. Figure 2 illustrates one
such point. ISPs and IBPs deploy routing equipment at a point where they establish a combination
of peering connections, usually on a B&K basis. In this case, each provider is responsible for the
link that runs between its premises and the meet point.

14
BGP also allows for disseminating routing information within the same network. Routers within a
network establish Internal BGP (IBGP) sessions with the edge routers in that network to learn the routing
information acquired from the other networks
15
In this case, they can split the payment to the link provider or either one of them can pay it entirely


[Figure 2: the ISP and three IBPs (IBP1, IBP2, IBP3) each run a link from their own premises or
infrastructure to a meet point, where their routers exchange traffic over a mesh of EBGP sessions.]

Figure 2 - Illustration of peering between an ISP and several IBPs at a Network Access Point (NAP).

The meet point is usually owned and run by a separate entity that leases space to the providers
willing to co-locate there. NAPs can be either centralized or distributed. Centralized NAPs,
usually in a single building, have the disadvantage of becoming a single point of failure in the
network. To avoid this, Internet Exchanges (IXs) have been increasingly deploying distributed
NAPs (cf. London IX, appendix).

NAPs are a convenient solution for peering because many providers come to the same physical
place and with short links within the meet point they can reach several other providers.
Additionally, an ISP can run a fat link from its premises to the meet point that takes all the traffic
to be delivered to peers at this point, which is usually cheaper than deploying one link per peering
connection16. The biggest problem of a NAP occurs when the meet point uses a bus or ring
topology because that shared medium can become congested. Congestion of the original NAPs
funded by the NSF gave rise to a number of new NAPs known as Metropolitan Area Exchanges.
Additionally, there is indication that congestion has decreased since NAPs started deploying
ATM switches [16].

16
For a more in-depth discussion of the advantages of peering at a NAP, see “A business case for peering”,
by Norton, white paper, Equinix, available from www.equinix.com


3.3 Transit

In the previous examples, IBPs would only accept traffic from the ISP that is destined to their
customers. However, an IBP can act as a transit provider, in which case it also accepts traffic to
other destinations in the Internet. In this case, we say that the ISP and the IBP agree on a transit
relationship. A transit agreement is usually a bilateral business and technical arrangement, where
one provider, the transit provider, agrees to carry traffic to third parties on behalf of another
provider. In most cases, the transit provider carries traffic to and from its customers, and to and
from every destination in the Internet, as Figure 3 shows [23].

The major difference between a peering agreement and a transit agreement is that in the latter
case the provider requesting the connection is seen as a customer of the provider offering the
service. In Figure 3, the ISP is a customer of IBP1. Therefore, the ISP will send traffic destined to
IBP2 on the link to IBP1, and IBP2 will send traffic destined to the ISP on its peering link to IBP1.
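The distinction between peering and transit can be summarized by which routes each party advertises over the EBGP session. The sketch below is a simplified illustration (real BGP policy is far richer), and the route sets are hypothetical:

```python
def advertised_routes(relationship, own_routes, customer_routes, full_table):
    """Routes a provider announces to a neighbor, by relationship type.

    A peer learns only the provider's own routes and its customers' routes;
    a transit customer learns (essentially) the full Internet routing table.
    """
    if relationship == "peering":
        return own_routes | customer_routes
    elif relationship == "transit":
        return full_table
    raise ValueError("unknown relationship")

# Hypothetical prefixes
ibp1_own = {"10.1.0.0/16"}
ibp1_customers = {"10.2.0.0/16", "10.3.0.0/16"}
internet = ibp1_own | ibp1_customers | {"10.9.0.0/16"}  # plus third parties

# As a peer, IBP1 announces only itself and its customers...
print(advertised_routes("peering", ibp1_own, ibp1_customers, internet))
# ...but as a transit provider, it announces routes to everything
print(advertised_routes("transit", ibp1_own, ibp1_customers, internet))
```

This is why a peering connection only offloads traffic bound for the peer and its customers, while a transit connection gives the customer reachability to the whole Internet.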

[Figure 3: the ISP buys transit from IBP1; an EBGP session runs over the link between the ISP’s
and IBP1’s edge routers. IBP1 peers with IBP2 and IBP3 and connects to the rest of the Internet
cloud, so traffic between the ISP’s customers and any destination transits IBP1’s network.]

Figure 3 – Illustration of a transit relationship between an ISP and an IBP.

In this case, in the EBGP session between the ISP and IBP1, IBP1 advertises routes to every
destination in the Internet17. Ownership of the link between the ISP and IBP1 may exhibit the
same variety as noted for the case of direct peering.

17
Except for routes to customers of the ISP, since traffic between its customers is routed within its own
network


4 Modeling Interconnection Costs

This section presents models for the costs of routing equipment, the price of bandwidth and the
price of transit. These models will later allow for pricing the interconnection strategies described
in the previous section.

4.1 Routing Equipment Costs

Figure 4 illustrates the block diagram of a Cisco Internet router. The chassis serves as the
motherboard for the Power Supply Units (PSUs), the switch fabric, the slots for the processor and
the line cards, and the slots for the memory cards. The Gigabit Route Processor (GRP) is the
engine that provides the routing intelligence for the router18. The switch fabric is responsible for
synchronizing all the switching functions. The line cards connect the router to other devices.

[Figure 4: block diagram of a Cisco Gigabit Switch Router (GSR) chassis, holding the Power
Supply Units (PSUs) and backup power, the switching fabric (switch fabric cards, SFC, and clock
scheduler cards, CSC), redundant GRPs, memory for packets and for tables, and the line cards (LC).]

Figure 4 - Illustration of a router based on Cisco 12000 Series.

We use pricing information for Cisco 12000 Series routers to model routing costs, since these are
routers typically used for backbone interconnection19. Table 1 indicates the types of routers in this
Series and the SONET [28] line cards available20.

18
The GRP determines the network topology and calculates the best path across the network. The switch
fabric, according to the information provided by the GRP, synchronizes all switching functions in the
router. For more information on the functioning of routing equipment refer to Cisco’s guide, available at
www.cisco.com
19
According to the block diagram of a Point of Presence obtained from UUNET


Router    Cost of motherboard    Line cards supported
Number    (KUS$, 2001) (CMi)     (LCi)
12008     65                     7
12012     75                     11
12016     95                     15

Line Card    Technology    Number of      Speed          Price
Number       and Type      Ports (Ni)     (Mbps) (Sj)    (KUS$, 2001) (Pj)
1            DS-3          6              45             35
2            DS-3          12             45             65
3            OC3           4              155            33
4            OC3           8              155            71
5            OC3           16             155            141
6            OC12          1              622            25
7            OC48c         1              2480           65
8            OC48c         4              2480           280
9            OC192c        1              9920           225

Table 1 – Cisco 12000 Series routers and line cards available.
(data source: Cisco at Pittsburgh, PA)

Let B=[b1…bn] represent the bandwidths of the n links connected at a city and let RC represent the
routing costs at that city. RC is defined as the minimum cost of ports and motherboards that
allows all of the links to be terminated at the ISP’s premises in that city. Thus, we can write

$$RC(B) \;=\; m \cdot \min_{Y=\{y_1,\dots,y_3\},\; X=\{x_1,\dots,x_9\}} \Big( \sum_{i=1}^{3} y_i\, CM_i \;+\; \sum_{j=1}^{9} x_j\, P_j \Big) \quad \text{s.t.}$$

$$1)\quad \sum_{i=1}^{3} LC_i\, y_i \;\ge\; \sum_{j=1}^{9} x_j, \qquad y_i \in \{0,1,2,\dots\}$$

$$2)\quad \sum_{i\,:\,S_i = S_j} x_i\, N_i \;\ge\; TB(B,\, S_{j-1},\, S_j), \qquad \forall\, j = 1,\dots,5, \qquad x_i \in \{0,1,2,\dots\}$$

where Y and X represent, respectively, the number of motherboards and the number of line cards
needed. CMi, Pj, LCi, Ni and Sj are defined in Table 1. TB(B,lb,ub) is a function that returns the
number of links in B with bandwidth between lb and ub21. m is a multiplicative factor used to
obtain the yearly price of a router22. The first constraint states that there must be enough
motherboard slots to accommodate all the line cards. The second constraint states that, for each of
the five distinct line speeds in Table 1, there must be enough ports of speed Sj to terminate all the
links with bandwidth between Sj-1 and Sj (let S0=0).
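Because the numbers of line cards and motherboards are small integers, RC(B) can be computed by exhaustive search. The sketch below is a minimal solver in Python under simplifying assumptions: it hard-codes the Table 1 prices, sets m = 1 (i.e., it returns the equipment cost before annualization), and bounds the search directly rather than calling an integer-programming library.

```python
from itertools import product
from math import ceil

# Table 1 data: motherboards as (slots LCi, cost CMi in KUS$);
# line cards grouped by port speed as (ports Ni, price Pj in KUS$)
MOTHERBOARDS = [(7, 65), (11, 75), (15, 95)]
CARDS = {45: [(6, 35), (12, 65)],
         155: [(4, 33), (8, 71), (16, 141)],
         622: [(1, 25)],
         2480: [(1, 65), (4, 280)],
         9920: [(1, 225)]}
SPEEDS = sorted(CARDS)

def tb(B, lb, ub):
    """TB(B, lb, ub): number of links in B with lb < bandwidth <= ub."""
    return sum(1 for b in B if lb < b <= ub)

def tier_options(need, cards):
    """All (cost, n_cards) ways to terminate `need` links with one tier's cards."""
    if need == 0:
        return [(0, 0)]
    ranges = [range(ceil(need / ports) + 1) for ports, _ in cards]
    opts = []
    for counts in product(*ranges):
        if sum(c * ports for c, (ports, _) in zip(counts, cards)) >= need:
            opts.append((sum(c * price for c, (_, price) in zip(counts, cards)),
                         sum(counts)))
    return opts

def routing_cost(B, m=1.0):
    """Minimum equipment cost (KUS$) to terminate the links in B, times m."""
    needs = [tb(B, lb, ub) for lb, ub in zip([0] + SPEEDS, SPEEDS)]
    best = float("inf")
    for combo in product(*[tier_options(n, CARDS[s])
                           for n, s in zip(needs, SPEEDS)]):
        card_cost = sum(c for c, _ in combo)
        slots = sum(n for _, n in combo)
        # cheapest motherboard mix offering at least `slots` line-card slots
        mb_best = float("inf")
        for ys in product(range(ceil(slots / 7) + 1), repeat=3):
            if sum(y * lc for y, (lc, _) in zip(ys, MOTHERBOARDS)) >= slots:
                mb_best = min(mb_best,
                              sum(y * cm for y, (_, cm) in zip(ys, MOTHERBOARDS)))
        best = min(best, card_cost + mb_best)
    return m * best

# Hypothetical city with four links: 45, 100, 155 and 622 Mbps
print(routing_cost([45, 100, 155, 622]))   # → 158.0 (one 12008 at 65 plus cards at 35 + 33 + 25)
```

The brute-force search is adequate here because a city terminates at most a few dozen links; for larger instances one would hand the same formulation to an integer-programming solver.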

20
Table 10 (c.f. appendix) shows a typical configuration for a Cisco 12008 router with a line card for an
OC-48c and the prices associated with the various components
21
That is, TB(B,lb,ub)=Σi δ(lb<bi≤ub), where δ(e)=1 if e is true and 0 otherwise
22
Refer to the appendix for information on how m is computed


4.2 Price of Bandwidth

We have used data from two sources to describe the price of bandwidth. The first is Band-X,
which provides a neutral and independent arena to trade bandwidth. From Band-X’s website,
during the year 2001, we obtained information on 381 network offers for 1-year long contracts
for point-to-point clear-channel capacity [29]. Table 2 lists the variables used in this analysis.
The order number of the contract was used as a proxy for time23. The origin and termination
points of the link were used to compute the distance between the cities served. The total charge is
the reservation price for the auction. See the appendix for more information on these data.
Variable             Name Used    Description and units
ORDER NUMBER         ORDER        The order number of the contract as it entered Band-X’s database
ORIGIN POINT         ORIGIN       The city and country where the bandwidth link originates
TERMINATION POINT    TERM         The city and country where the bandwidth link terminates
NETWORK SPEED        BAND         The capacity of the bandwidth link in Mbps
LEAD TIME            LTIME        The time needed to set up the link, in number of working days
NET AVAILABILITY     NETAV        The minimum percentage of time that the link will be up
TOTAL CHARGE         PRICE        The reservation price for the auction at Band-X, in 2001 USD
CONTRACT TYPE        CTYPE        Boolean that indicates if a Service Level Agreement (SLA) is used
SAME COUNTRY*        SAMEC        Boolean that indicates if the endpoints of the link are in the same country
MILEAGE**            MILE         Surface distance between the endpoints of the link, in miles

Table 2 – Definition of the variables used in studying Band-X data.
(Note: * - variable defined a-posteriori; ** - variable obtained from www.wcrl.ars.usda.gov)

The second source of data is Telegeography24. From their website, we downloaded 205 price
quotes for 1-year long contracts for leased lines between NYC and London, UK25, between
January 1999 and June 2002. Table 3 lists the variables used in this case: the actual price paid,
the bandwidth provided and time.

Variable         Name Used    Description and units
TOTAL CHARGE     PRICE        The price to pay for the bandwidth link, in 2001 USD
NETWORK SPEED    BAND         The capacity of the bandwidth link in Mbps
DATE OF QUOTE    TIME         The month of the price quote, in months elapsed from January 1999

Table 3 – Definition of the variables used in the Telegeography case.

23
We do not know the exact relationship between this number and time. However, from inspecting daily
Band-X’s website, we suspect that it is, at least, an ordering over the contracts at Band-X
24
Telegeography is an independent subsidiary of Band-X for statistics and analysis. Their website is
available at www.telegeography.com
25
The data from Telegeography was obtained using a free trial demo version of their database on
bandwidth prices, which reports monthly price quotes for T-1, E-1, DS-3 and STM-1 lines between those
two cities


Figure 5 shows the prices for leased lines of 45 Mbps and 155 Mbps and the best exponential fits.
This figure suggests that a function of the form f(·)e^(γt) might well describe these data, where t is
time, γ is a negative constant and f is a function of the remaining variables that affect the price.
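Fits of the form f·e^(γt), like those reported in Figure 5, can be obtained by ordinary least squares on the logarithm of the price, since log(f·e^(γt)) = log f + γt is linear in t. A minimal sketch, using synthetic noise-free data generated from the 45 Mbps fit rather than the actual quotes:

```python
from math import log, exp

def fit_exponential(ts, prices):
    """Fit price = A * exp(g * t) by simple linear regression on log(price)."""
    ys = [log(p) for p in prices]
    n = len(ts)
    t_bar = sum(ts) / n
    y_bar = sum(ys) / n
    g = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
         / sum((t - t_bar) ** 2 for t in ts))
    A = exp(y_bar - g * t_bar)
    return A, g

# Synthetic monthly data built from the 45 Mbps fit in Figure 5
ts = list(range(0, 42))
prices = [1393 * exp(-0.0844 * t) for t in ts]
A, g = fit_exponential(ts, prices)
print(round(A, 1), round(g, 4))   # recovers 1393.0 and -0.0844
```

With real, noisy quotes the same regression yields the R² values quoted in the figure rather than an exact recovery.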

[Figure 5: two scatter plots of price (thousand USD) against month, for 45 Mbps and 155 Mbps
links between NYC and London (1-year long contracts). The best exponential fit for the 45 Mbps
links is y = 1393e^(-0.0844x), with R² = 0.8691; for the 155 Mbps links, y = 3711.8e^(-0.1006x),
with R² = 0.7804.]

Figure 5 - Price for leased lines (45 Mbps and 155 Mbps) between NYC and London, between January 1999 and
June 2002, for 1-year long contracts, and best exponential fits.

Applying this functional form to the reservation prices at Band-X, and assuming different power
laws for the bandwidth and the length of the link, we can write

$$PRICE = a_0 \cdot BAND^{b_0} \cdot MILE^{b_1} \cdot e^{c_0 \cdot ORDER} + \varepsilon$$

In this model, a0 embodies the effect of other variables that might explain variance in the price of
bandwidth. That is,
$$a_0 = LTIME^{d_0} \cdot NETAV^{d_1} \cdot e^{d_2 \cdot CTYPE} \cdot e^{d_3 \cdot SAMEC}$$

We have estimated a linear version of this model, obtained by applying logarithmic
transformations to the expressions above. Table 4 shows the results obtained. All variables in the
model are significant, at least at the 5% level, except the lead time, which does not seem to
explain any variance in the price.
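The logarithmic transformation turns the multiplicative model into a form estimable by ordinary least squares: log PRICE = b0·log BAND + b1·log MILE + c0·ORDER + d0·log LTIME + d1·log NETAV + d2·CTYPE + d3·SAMEC (plus noise). A quick numerical check of that algebra, with arbitrary illustrative coefficient values rather than the estimates:

```python
from math import log, exp

def price_multiplicative(band, mile, order, ltime, netav, ctype, samec, coef):
    """PRICE = a0 * BAND^b0 * MILE^b1 * e^(c0*ORDER), with a0 expanded."""
    b0, b1, c0, d0, d1, d2, d3 = coef
    a0 = ltime ** d0 * netav ** d1 * exp(d2 * ctype) * exp(d3 * samec)
    return a0 * band ** b0 * mile ** b1 * exp(c0 * order)

def log_price_linear(band, mile, order, ltime, netav, ctype, samec, coef):
    """The same model after taking logs: linear in the transformed variables."""
    b0, b1, c0, d0, d1, d2, d3 = coef
    return (b0 * log(band) + b1 * log(mile) + c0 * order
            + d0 * log(ltime) + d1 * log(netav) + d2 * ctype + d3 * samec)

# Arbitrary coefficients and one data point (not the estimated values)
coef = (0.4, 0.35, -0.0002, -0.01, 1.8, -0.3, -0.4)
args = (45, 3000, 100, 30, 99.9, 1, 0)
assert abs(log(price_multiplicative(*args, coef))
           - log_price_linear(*args, coef)) < 1e-12
print("log-linear form matches the multiplicative model")
```

This equivalence is what lets the coefficients in Table 4 be read directly as elasticities (for the logged regressors) and semi-elasticities (for ORDER and the Booleans).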


Observations    381        R-squared             0.501
Parameters      7          Adjusted R-squared    0.493
F-stat          63.600

Variable       Coefficient    T-stat      P-value
Log(BAND)      0.4261         19.0340     0.0000
Log(MILE)      0.3774         6.9260      0.0000
ORDER          -0.0002        -2.0410     0.0419
Log(LTIME)     -0.0113        -0.1350     0.8924
Log(NETAV)     1.8445         18.3720     0.0000
CTYPE          -0.2934        -3.1150     0.0020
SAMEC          -0.3904        -2.1120     0.0353

Table 4 – Results from the econometric regression on data from Band-X.

Several conclusions follow from this analysis. First, the proposed functional form seems
appropriate. Second, the bandwidth and the mileage exhibit significant economies of scale26.
Third, the coefficient on ORDER is negative, which confirms the decrease in prices over time.
Fourth, links with higher “network availability” are usually more expensive. Fifth, contracts using
Band-X’s SLAs27 [30] are usually less expensive, which can be attributed to the fact that using a
contract template provided by Band-X facilitates transactions. Finally, links across borders are
usually more expensive, which can be associated with the fact that such links have to conform to
regulation in more than one country, for example on rights-of-way issues.

We ran a similar study on the data from Telegeography. Table 5 shows the results obtained. Again, this functional form seems to describe the price of bandwidth appropriately. The exponent on the bandwidth implies significant economies of scale. For example, the price per Mbps decreases 44% from a DS-3 line to an OC-3 line. The exponent on time suggests an average monthly decline rate in the price of bandwidth of about 5%28.

26 For example, the reservation price per Mbps decreases 51% from a 45 Mbps line to a 155 Mbps line
27 Service Level Agreements (SLAs) are standardized contracts that allow providers to flexibly specify the Quality of Service (QoS) delivered. For more information on SLAs see “A network architecture based on market principles” by Fankhauser (available at http://www.tik.ee.ethz.ch/~gfa/)
28 For a more detailed analysis of these results and their implications for the industry see Ferreira, P., Mindel, J. and McKnight, L., “Why bandwidth trading markets have not matured yet?”, forthcoming 2003 in the International Journal of Technology, Management and Policy, Issue 2
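These scale-economy figures follow directly from the fitted exponents: under P ∝ BAND^b, the price per Mbps scales as BAND^(b−1). A quick check (line capacities in Mbps; the helper name is ours):

```python
# Under P ∝ BAND^b, price per Mbps scales as BAND^(b-1).
def per_mbps_drop(b, b_low=45.0, b_high=155.0):
    """Fractional decrease in price per Mbps moving from a DS-3 (45 Mbps)
    line to an OC-3/STM-1 (155 Mbps) line, given power-law exponent b."""
    return 1.0 - (b_high / b_low) ** (b - 1.0)

print(round(per_mbps_drop(0.4261), 2))  # Band-X exponent: ~0.51 (51%)
print(round(per_mbps_drop(0.5269), 2))  # Telegeography exponent: ~0.44 (44%)
```

This reproduces the 51% figure in footnote 26 and the 44% figure quoted in the text.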


Observations: 205    R-squared: 0.889
Parameters: 3        Adjusted R-squared: 0.888
                     F-stat: 808.520

Variable      Coefficient   T-stat     P-value
Constant      11.6557
Log(BAND)     0.5269        32.7760    0.0000
TIME          -0.0725       -25.5690   0.0000

Table 5 – Results from the econometric regression on data from Telegeography.

Combining both models, we conclude that the yearly price of a leased line for a 1-year long contract can be approximately given by29

Price = 5216.56 · Bandwidth^{0.5269} · Mileage^{0.3774} · e^{−0.0725·Time}

This function is plotted in Figure 6 for June 200230. A problem in this analysis is that, for the Band-X case, we can control for mileage but we do not have the actual prices paid, only the reservation prices for the auctions. In contrast, from Telegeography’s database we have the actual prices but we cannot control for mileage. Combining the two models introduces bias into our estimates, but it is the best we can do, since we are interested in capturing the effect of mileage on actual prices, not on reservation prices.
[Figure 6 is a surface plot of predicted price (2001 USD) against bandwidth (Mbps) and mileage (miles). Two points are highlighted: an OC-3 line between NYC and London at $82,186 and a DS-3 line between Dallas and Mexico City at $26,243.]
Figure 6 - Predicted price for bandwidth (plot for June 2002).

29 This approximation is obtained by taking the exponent on mileage from the Band-X case and keeping the exponents on bandwidth and time from the Telegeography case. The multiplicative factor is computed by dividing the exponential of the constant term in the Telegeography model by the distance between NYC and London raised to the exponent on mileage (0.38)
30 Two examples are identified in this figure, one for an OC-3 line between NYC and London and another one for a DS-3 line between Dallas and Mexico City. The predicted price per Mbps-month for these lines is $44 and $49, respectively.
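As a worked check of this formula, the two examples highlighted in Figure 6 can be recomputed. This is only a sketch: the great-circle distances and the use of T = 42 for June 2002 are our assumptions, so with these inputs the per-Mbps-month figures come out in the low-to-mid $40s, in the neighborhood of the $44 and $49 quoted in footnote 30:

```python
from math import exp

def leased_line_price(band_mbps, mileage, t_months):
    """Yearly price (2001 USD) of a 1-year leased-line contract,
    per the combined Band-X/Telegeography model."""
    return 5216.56 * band_mbps**0.5269 * mileage**0.3774 * exp(-0.0725 * t_months)

T_JUNE_2002 = 42  # months since January 1999 (assumed time origin)

# OC-3 NYC-London; ~3,460 statute miles great-circle is our assumption
p1 = leased_line_price(155, 3460, T_JUNE_2002) / (155 * 12)
# DS-3 Dallas-Mexico City; ~940 miles is our assumption
p2 = leased_line_price(45, 940, T_JUNE_2002) / (45 * 12)
print(round(p1), round(p2))  # USD per Mbps-month
```

The exact values depend on the route mileage used, which the paper does not report; the point of the exercise is that the fitted surface reproduces the order of magnitude of the quoted examples.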


4.3 Price of Transit

A transit agreement is usually priced based solely on the amount of traffic delivered. Data on the
price of transit for May 1999 and October 2001 is reported in [31]31. The monthly price per Mbps
for a transit service can be approximated by a logarithmic function of the bandwidth provided
(Figure 7). From May 1999 to October 2001, the price of transit decreased by about 50%.

[Figure 7 plots the monthly price per Mbps of a transit service (2001 USD) against bandwidth (Mbps, log scale), with logarithmic fits: May 1999: y = −64.858·ln(x) + 729.12, R² = 0.9617; October 2001: y = −28.89·ln(x) + 350.29, R² = 0.8841.]

Figure 7 - Price of transit service and logarithmic fit for May 1999 and October 2001 (adapted from Linton (2001)).

From the data for October 2001, and assuming an annual discount rate of 5%, we estimate that the price of transit for a 1-year long contract is approximately given by:

Price = 8.86 · (−28.89·ln(Bandwidth) + 350.29) · Bandwidth

where Bandwidth represents the amount of traffic. Note that the effects of scale economies are not as significant as for the case of bandwidth. The price/Mbps decreases only 16% from a DS-3 line to an OC-3 line.

31 The data for May 1999 are very similar to those previously published in “Internet Service Providers and Peering”, working paper, Equinix, available from www.equinix.com


5 Modeling Interconnection Decisions

This section specifies a model that allows ISPs to decide with whom to interconnect, how and at
what cities. We will consider n ISPs serving m cities32. End-users at these cities generate traffic
that must be delivered throughout the network. A city where no traffic is generated will
potentially be used as a NAP.

A network is modeled as a graph (V,E), as Figure 8 illustrates, where V is the set of vertices and E is the set of edges. Vertex ip represents ISP i at city p. Not all ISPs serve all cities; therefore, ||V|| ≤ nm33. Edges represent connections between vertices. Let E_i be the set of edges of the form ipiq for p≠q; E_i represents the set of links internal to ISP i's network. We have ||E_i|| ≤ m(m−1), since not every ISP deploys a full mesh network; normally, ||E_i|| ≤ 3.5m [32]. Additionally, no ISP connects to itself at the same city and not every ISP interconnects with every other ISP everywhere. Thus, ||E|| ≤ n²m² − nm.

Traffic flows on the edges of this graph. There are n²m² different types of traffic in this network. Traffic of type i'p'j'q' is traffic that originated at city p', is picked up by ISP i', and is destined to an end-user of ISP j' at city q'. Let T represent the traffic flows in the network: t_{ipjq,c} is the amount of traffic of type c flowing on a link from ISP i at city p to ISP j at city q, defined for all ip≠jq. Additionally, let D denote the traffic input for the network: d_{ipjq} represents the traffic generated by end-users of ISP i at city p destined to end-users of ISP j at city q, defined for all ipjq34.

Interconnection agreements between ISPs are attributes of the edges in this graph. Let A be the set of attributes. a_{ip,jq}, for i≠j, indicates the type of interconnection agreement between ISP i at city p and ISP j at city q. It can assume the following values: −2, if ISP i and ISP j do not interconnect; −1, if ISP i buys transit from ISP j; +1, if ISP i sells transit to ISP j; +2, if ISP i and ISP j peer. For network consistency, we must have a_{ip,jq} = a_{jq,ip} when |a_{ip,jq}| = 2 and a_{ip,jq} = −a_{jq,ip} otherwise. Thus, A has n(n−1)m²/2 degrees of freedom, each of which can assume one of four possible values. Hence, #(A) = 4^{n(n−1)m²/2} = 2^{n(n−1)m²}.

32 In this section, we will use the term ISP to refer to a generic provider. Additionally, we will use i, j, i' and j' as indexes for ISPs, thus varying from 1 to n, and p, q, p' and q' as indexes for cities, thus varying from 1 to m
33 In this section, we will use ||S|| to denote the number of elements in set S and #(S) to denote the cardinality of space S, that is, the number of possible configurations of an element of that space
34 Note that T is an nm(nm−1) by n²m² matrix and D is a vector of size n²m²
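A minimal sketch of this agreement structure and its consistency condition follows; the dictionary representation and the ISP/city labels are ours, not part of the model's notation:

```python
# Agreement codes: -2 no interconnection, -1 "i buys transit from j",
# +1 "i sells transit to j", +2 peering. Vertices are (ISP, city) pairs.
def consistent(a):
    """Consistency: a[u][v] == a[v][u] when |a| = 2 (peering / none),
    and a[u][v] == -a[v][u] when |a| = 1 (one side's purchase of transit
    is the other side's sale)."""
    for u, nbrs in a.items():
        for v, x in nbrs.items():
            y = a.get(v, {}).get(u)
            if y is None:
                return False          # the reverse attribute must be defined
            if abs(x) == 2 and x != y:
                return False
            if abs(x) == 1 and x != -y:
                return False
    return True

# ISP 1 buys transit from ISP 2 at city p; the two ISPs peer at city q.
a = {
    ("i1", "p"): {("i2", "p"): -1},
    ("i2", "p"): {("i1", "p"): +1},
    ("i1", "q"): {("i2", "q"): +2},
    ("i2", "q"): {("i1", "q"): +2},
}
print(consistent(a))  # True
```

Only the n(n−1)m²/2 unordered pairs are free choices; the reverse direction of each edge is pinned down by the consistency rule, which is exactly what the check above enforces.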

[Figure 8 depicts two ISPs, i and j, each present at cities p and q: edges between the ISPs are labeled with interconnection agreements a_{ip,jq} ∈ {−2, −1, +1, +2}, internal network edges carry the flows t_{ipjq,c} and t_{jqip,c}, and d_{iqj'q'} marks the traffic input at a vertex.]
Figure 8 – Illustration of the graph used to model a network.

Let Π_i(T,A) represent the profit that ISP i enjoys given the flows of traffic T and the interconnection agreements A. Π_i(T,A) = RT_i(T,A) − C_i(T,A), where C_i(T,A) and RT_i(T,A) are, respectively, the cost of infrastructure and the net revenue from selling and buying transit for ISP i. Additionally, let T*(A,D) represent the flows of traffic that the ISPs choose given agreements A and traffic input D.

We are interested in understanding under what conditions A is an equilibrium. For that, we resort to the concept of Nash equilibria [33]. A is an equilibrium if there is no pair of ISPs, say i and j, that would prefer agreements B to agreements A, where B is an arrangement of agreements identical to A except at ipjq, for some p and q35. Assuming side payments, ISP i and ISP j would prefer B to A when

Π_i(T*(B,D), B) + Π_j(T*(B,D), B) ≥ Π_i(T*(A,D), A) + Π_j(T*(A,D), A)

35 In other words, A is an equilibrium iff no pair of ISPs would like to change their interconnection agreement at some pair of cities, given the interconnection agreements of the other ISPs in the network


that is, they would agree to change the interconnection agreement at ipjq iff their combined profit once they consummate this change is higher than their combined current profit, with profits computed at the flows of traffic that the ISPs choose. Finding an equilibrium of agreements is an NP-complete problem because the cardinality of A grows exponentially with n and m. If n and m are small, we can exhaustively search this space for an equilibrium; otherwise, a heuristic should be used. In the appendix, we show some preliminary results that can be used to inform such a heuristic.
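The pairwise-stability test can be sketched as follows. Here `profit` stands in for Π evaluated at the optimal flows T*(·,D), which in the full model requires solving each ISP's flow problem; the single-link setting and the payoff numbers are a hypothetical toy example:

```python
def is_equilibrium(agreements, alternatives, profit):
    """agreements: the current arrangement A (here, one link's type).
    alternatives: yields (i, j, B) single-link deviations of A.
    profit: (isp, arrangement) -> profit at the optimal flows.
    A is stable iff no pair (i, j) raises its combined profit through any
    single-link deviation B (side payments let the pair share the gain)."""
    return not any(
        profit(i, b) + profit(j, b) > profit(i, agreements) + profit(j, agreements)
        for i, j, b in alternatives(agreements))

# Toy example: a single link between ISP 1 and ISP 2; hypothetical profits
# per agreement type (-2 none, -1/+1 transit, +2 peering).
profits = {-2: (10, 10), -1: (8, 15), +1: (15, 8), +2: (14, 14)}

def alternatives(a):
    return ((1, 2, b) for b in profits if b != a)

def profit(isp, a):
    return profits[a][isp - 1]

print(is_equilibrium(+2, alternatives, profit))  # peering maximizes joint profit
print(is_equilibrium(-2, alternatives, profit))  # no interconnection is unstable
```

With side payments the stability test reduces to comparing combined profits, which is why peering (joint profit 28) is the only stable agreement in this toy example.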

T*(A,D) indicates the flows of traffic that the ISPs decide upon given the interconnection agreements A and traffic input D. Note that for a particular A and D each ISP maximizes its own profit. Let TI_i represent all the traffic in ISP i's network and TO_i all the traffic ISP i sends to other ISPs. That is,

TI_i = { t_{ipiq, i'p'j'q'} : ∀ p≠q, ∀ i'p'j'q' }    TO_i = { t_{ipjq, i'p'j'q'} : ∀ j≠i, ∀ i'p'j'q' }

Let TO_{−i} denote TO_j for all j≠i. In addition, let T_i = TI_i ∪ TO_i. Then, ISP i solves the following multi-commodity flow problem [34]:

Max_{T_i} Π_i(T_i, TO_{−i}, A)   subject to

1) ∀p, ∀c = ipj'q':   Σ_{jq≠ip} ( t_{ipjq,c} − t_{jqip,c} ) = d_c

2) ∀p, ∀c = i'p'j'q' with i'p' ≠ ip:   Σ_{jq≠ip} ( t_{ipjq,c} − t_{jqip,c} ) = 0

3) ∀p, ∀jq ≠ ip, ∀c:   t_{ipjq,c} ≥ 0

4) ∀p, ∀jq: jq ≠ ip ∧ a_{ip,jq} > 0, ∀c = i'p'j'q' with j' ∉ TR(j,A):   t_{ipjq,c} = 0

where TR(j,A) represents the “transit reachability set” for ISP j given the interconnection agreements in A. An ISP j' belongs to TR(j,A) iff it buys transit from j, or if it buys transit from some other ISP that buys transit from j, and so on. Mathematically36,

36 In this formulation, TR_h(j,A) includes the ISPs that ISP j reaches through a series of transit agreements after h hops. If ISP j' belongs to TR_h(j,A) then it is a “tier-h” ISP relative to ISP j. For more information on the hierarchical structure of the Internet see Leida, B. and McKnight, L. (1997), “Internet Telephony: Costs, Pricing, and Policy”, proceedings of the Telecommunications Policy Research Conference, 1997, Alexandria, VA, September


j' ∈ TR(j,A) iff ∃h : j' ∈ TR_h(j,A)

with TR_0(j,A) = {j} and TR_h(j,A) = { i' ∉ TR_{h−1}(j,A) : ∃p,q ∃j' ∈ TR_{h−1}(j,A) such that a_{i'p,j'q} = −1 }, for h ≥ 1

The first two constraints are conservation-of-flow constraints. They state that all traffic received by the ISP at some city must be delivered, for every type of traffic. The third constraint indicates that traffic flows must be nonnegative.

The fourth constraint reflects the limitations in terms of which types of traffic can flow on a
particular edge. When ISP i peers with ISP j, ISP i can only send traffic on that link destined to
either ISP j or some customer of ISP j or some customer of a customer of ISP j. A similar
constraint applies when ISP i sells transit to ISP j. The fourth constraint restricts the traffic
flowing on such edges to traffic destined to ISP j’∈ TR(j,A).
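The recursion defining TR(j,A) is an iterated reachability computation, and it can be sketched as a breadth-first traversal over transit-purchase edges. City indices are elided in this sketch, and the example hierarchy is hypothetical:

```python
def transit_reachability(j, transit_edges):
    """TR(j, A): ISPs that reach ISP j through a chain of transit purchases.
    transit_edges is a set of (buyer, seller) pairs, i.e. all (i', j') with
    a_{i'p, j'q} = -1 at some pair of cities (cities elided in this sketch).
    TR_0 = {j}; each round adds the ISPs buying transit from the frontier."""
    tr, frontier = {j}, {j}
    while frontier:
        frontier = {buyer for (buyer, seller) in transit_edges
                    if seller in frontier and buyer not in tr}
        tr |= frontier
    return tr

# Hypothetical hierarchy: ISP 2 buys from 1, 3 buys from 2, 4 buys from 3,
# 5 buys from 4. (buyer, seller) pairs:
edges = {(2, 1), (3, 2), (4, 3), (5, 4)}
print(sorted(transit_reachability(1, edges)))  # [1, 2, 3, 4, 5]
print(sorted(transit_reachability(3, edges)))  # [3, 4, 5]
```

This is exactly the set an ISP may legitimately send traffic toward on a peering or transit-sale link under the fourth constraint.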

Let Ti*(TO-i,A,D) denote the solution of the multi-commodity flow problem for ISP i. Define
G(T)=(g1(T),…,gn(T)), where gi(T)=Ti*(TO-i,A,D). Then, T* is an equilibrium of flows in the
network iff G(T*)=T*. One can employ a fixed point iterative algorithm to search for T*37.

It remains to define C_i(T,A) and RT_i(T,A). For that, let F_{ipjq} denote the maximum between the amount of traffic that ISP i at city p sends to ISP j at city q and the amount of traffic in the reverse direction. F_{ipjq} is given by

F_{ipjq} = max{ Σ_{c=i'p'j'q'} t_{jqip,c} ,  Σ_{c=i'p'j'q'} t_{ipjq,c} }

C_i(T,A) is given by BWC_i(T,A) + REC_i(T), where BWC_i and REC_i represent, respectively, the Bandwidth Costs and the Routing Equipment Costs for ISP i. BWC_i is given by

BWC_i(T,A) = Σ_{p,q: q≠p} CB(F_{ipiq}, L(p,q)) + Σ_{p,j,q: j≠i} CB(F_{ipjq}, L(p,q)) · ( δ(a_{ip,jq}=2)/2 + δ(a_{ip,jq}=−1) )

where L(p,q) is the distance between city p and city q and CB(B,M) is the function defined in section 4.2. This formulation assumes that an ISP buying transit pays for the entire cost of the link

37 See, for example, the fixed point iterative algorithms that derive from Banach’s fixed point theorem, in Introductory Functional Analysis with Applications by Kreyszig, 1989, Wiley & Sons


to the transit provider and that the cost of a peering link is equally split by the ISPs that use it to peer. REC_i is computed using the function RC(B) defined in section 4.1. Assuming that each link terminates in a separate port at either end, we can write

REC_i(T) = Σ_{p,j,q: ip≠jq} RC( {F_{ipjq}} )

The net revenues from buying and selling transit are given by

RT_i(T,A) = Σ_{p,j,q: j≠i} a_{ip,jq} · δ(|a_{ip,jq}|=1) · CT(F_{ipjq})

where CT(B) is the function defined in section 4.3.
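Putting the cost pieces together, the per-link terms can be sketched as follows. CB and CT here reuse the section-4 fits as stand-ins, evaluated at an assumed time index of 42 months; the helper names are ours:

```python
from math import exp, log

def CB(band, mileage):
    """Yearly leased-line cost (2001 USD), evaluated at TIME = 42 (assumed)."""
    return 5216.56 * band**0.5269 * mileage**0.3774 * exp(-0.0725 * 42)

def CT(band):
    """Yearly transit cost (2001 USD) for `band` Mbps of traffic."""
    return 8.86 * (-28.89 * log(band) + 350.29) * band

def link_cost(flow, mileage, agreement):
    """Bandwidth cost borne by ISP i on one interconnection link: the full
    link when buying transit (a = -1), half when peering (a = +2), zero
    otherwise (the counterparty pays, or there is no link)."""
    if agreement == -1:
        return CB(flow, mileage)
    if agreement == +2:
        return CB(flow, mileage) / 2
    return 0.0

def transit_settlement(flow, agreement):
    """Net transit term of RT_i: +CT(F) when selling (a = +1),
    -CT(F) when buying (a = -1), zero for peering or no link."""
    return agreement * CT(flow) if abs(agreement) == 1 else 0.0

# A 100 Mbps peering link over 500 miles costs each peer half the line:
print(round(link_cost(100, 500, +2)))
```

Summing `link_cost` over an ISP's edges and `transit_settlement` over its transit links gives the BWC_i and RT_i terms of the profit function above (routing equipment costs, REC_i, would be added analogously from RC).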


6 Conclusions

This paper makes the case that interconnection of IP networks is a central issue for the development of the Internet and an important issue in the scope of engineering policy. The obligations for interconnection in the Telecommunications Act of 1996 do not apply to IP carriers as long as IP transport is classified as an information service by the FCC. This raises concerns regarding the potential emergence of a dominant backbone provider, as illustrated by the WorldCom-Sprint merger case. Such a provider could extract monopoly rents in the form of transit fees. It is therefore crucial to understand whether smaller providers are able to route around such a dominant carrier by directly interconnecting, which would render IP transport markets contestable via peering.

This paper presents an analysis of various sources of interconnection costs, namely routing equipment, bandwidth and transit. We conclude that bandwidth prices decreased by about 5% per month, on average, between January 1999 and July 2002. Bandwidth exhibits significant economies of scale; for example, the price per Mbps decreases 44% from a DS-3 line to an OC-3 line. Transit prices decreased by about 50% between May 1999 and October 2001. The movement of these prices in the same direction complicates answering our question of whether providers will rely more or less on peering.

This paper specifies a full competitive model for interconnection among ISPs that would allow them to decide with which providers to interconnect, how and where. In this model, each pair of ISPs deploys interconnection agreements and each ISP chooses traffic flows in a way that maximizes its profit. We define equilibrium conditions for this market. By solving this model for the appropriate inputs, we could, in principle, observe the optimal interconnection strategies for smaller carriers and determine the extent to which direct peering is an alternative to transit. If it is not, and a dominant backbone is able to charge monopoly rents, then regulation might be needed, as it was in the case of telephony, to prevent the abuse of market power. This can be accomplished with an open, non-discriminatory interconnection framework for IP networks that includes measurable and published peering criteria.


Acknowledgements

I would like to acknowledge the precious help and guidance of Prof. Marvin Sirbu throughout this qualifier process, as well as the very enlightening discussions we had about the issues raised in this paper. I would also like to thank Prof. Granger Morgan and Prof. Francisco Veloso for providing feedback on earlier versions of this paper. All remaining errors are my own. Finally, I gratefully acknowledge the generous financial support of the Engineering and Public Policy Department at Carnegie Mellon University, the Calouste Gulbenkian Foundation and the Center for Innovation, Technology and Policy Research at Instituto Superior Tecnico.


References

[1] Roberts, L. and Merrill, T. (1967), "Toward a Cooperative Network of Time-Shared Computers", Fall AFIPS Conference, October

[2] Abbate, J. (2000), Inventing the Internet, MIT Press

[3] Boardwatch (2001), "Directory of Internet Backbone Providers"

[4] McGarty, T. (2002), "Peering, Transit, Interconnection: Internet Access in Central Europe", ITC Working Paper, January

[5] Hawkinson, J. and Bates, T. (1996), "Guidelines for creation, selection, and registration of an Autonomous System (AS)", RFC 1930

[6] Dodd, A. (2001), The Essential Guide to Telecommunications, Prentice Hall

[7] Postel, J. (1981), "Transmission Control Protocol", RFC 793

[8] Davie, B. and Peterson, L. (1999), Computer Networks: A Systems Approach, Morgan Kaufmann Publishers

[9] Bertsekas, D. and Gallager, R. (1991), Data Networks, Prentice Hall

[10] Telecommunications Act of 1996, available at http://www.fcc.gov/telecom.html

[11] McGarty, T. (1996), "Competition in the Local Exchange Market: an Economic and Antitrust Perspective", MIT Internet Telephony Consortium, September

[12] Department of Justice (2000), United States v. WorldCom, Inc., and Sprint Corporation, Complaint filed in the United States District Court for the District of Columbia on June 26 (Case IV/M.11741 - MCI WorldCom/Sprint), available at http://www.usdoj.gov/atr/cases/f5000/5051.pdf

[13] Morley, A. (2000), Proceeding of the Forum MCI WorldCom - Sprint Merger, Before the Federal Communications Commission, FCC pp. 56-60, available at http://www.fcc.gov/realaudio/tr040500.txt

[14] Level 3 (2001), "Level 3 Policy for Settlement Free Interconnection", available at http://www.level3.com/1511.html

[15] Genuity (2002), "Interconnection Guidelines for Genuity", available at http://www.genuity.com/infrastructure/interconnection.htm

[16] Kende, M. (2000), "The Digital Handshake: Connecting Internet Backbones", working paper 32, Office of Plans and Policy, Federal Communications Commission, September

[17] Roberts, L. (2002), "Internet Growth Trends", IEEE Computer - Internet Watch, January

[18] Department of Justice and the Federal Trade Commission (1997), Horizontal Merger Guidelines ¶ 1.51, available at http://www.usdoj.gov/atr/public/guidelines/horiz_book/hmg1.html

[19] DeGraba, P. (2000), "Bill and Keep at the Central Office As the Efficient Interconnection Regime", working paper 33, Office of Plans and Policy, Federal Communications Commission, December

[20] Atkinson, J. and Barnekov, C. (2000), "A Competitively Neutral Approach to Network Interconnection", working paper 34, Office of Plans and Policy, Federal Communications Commission, December

[21] Jorgenson, D. (2001), "Information Technology and the U.S. Economy", American Economic Review, Vol. 91, No. 1, March, pp. 1-32

[22] Baumol, W., Panzar, J. and Willig, R. (1988), Contestable Markets and the Theory of Industry Structure, International Thomson Publishing

[23] Marcus, J. (2001), "Global Traffic Exchange among Internet Service Providers (ISPs)", OECD - Internet Traffic Exchange, Berlin, June 7

[24] Rekhter, Y. and Li, T. (1994), "A Border Gateway Protocol 4 (BGP-4)", RFC 1654

[25] Stewart, J. (1998), BGP4: Inter-Domain Routing in the Internet, Addison-Wesley Publishers Company

[26] Halabi, S. and McPherson, D. (2000), Internet Routing Architectures, Cisco Press

[27] Milgrom, P., Mitchell, B. and Srinagesh, P. (1999), "Competitive Effects of Internet Peering Policies", Telecommunications Policy Research Conference, Arlington, Virginia, September 25-27

[28] Ramaswami, R. and Sivarajan, K. (2001), Optical Networks: A Practical Perspective, Morgan Kaufmann Publishers

[29] Mindel, J. and Sirbu, M. (2001), "Taxonomy of Bandwidth Trading", ITC meeting, Massachusetts Institute of Technology, Cambridge, January

[30] Lehr, W. and McKnight, L. (2000), "A Broadband Access Market Framework: Towards Consumer Service Level Agreements", Telecommunications Policy Research Conference, Alexandria, VA, September

[31] Linton, J. (2001), "New Directions in Peering for Tier-2 and Content Providers", NANOG24 Meeting, October

[32] Chuang, J. and Sirbu, M. (1998), "Pricing Multicast Communications: A Cost-Based Approach", Internet Society INET'98 Conference, Geneva, Switzerland, July 21-24

[33] Mas-Colell, A., Whinston, M. and Green, J. (1995), Microeconomic Theory, p. 246, Oxford University Press

[34] Ahuja, R., Magnanti, T. and Orlin, J. (1993), Network Flows: Theory, Algorithms, and Applications, Prentice Hall


Appendixes

Topology of the London Internet Exchange

The London Internet Exchange (LINX) is a totally neutral, not-for-profit partnership between Internet Service Providers. Established in 1994, it was initially run on a voluntary basis by the founder members. Incorporated in 1995, it became a company limited by guarantee. All members, regardless of operational size, have an equal share of the company and equal standing in discussion and debate. Decisions are made by group consensus. The LINX network infrastructure operates on a dual switch vendor architecture and consists of a number of high performance layer two switches. The switches are placed in LINX sites at managed facilities in London. A private dark fiber ring running Gigabit Ethernet connects the sites together. Figure 9 illustrates the topology of LINX.

Figure 9 - Topology of the London Internet Exchange (source: www.linx.net).


Data collected from Band-X

Band-X, based in London, UK, offers a trading floor that provides a neutral and independent arena for buyers and sellers to trade clear channel capacity, dark fiber, wavelengths and ducts. Buyers and sellers of capacity can anonymously post bids and offers at Band-X's website, www.band-x.com. A posting defines a reservation price, which is the price at which the posting party commits to do business. If the reservation price is not met when the auction closes, this party is not obliged to deal with any of the respondents. Once an offer is open, members with trading status can anonymously enter bids online. The bidding period is defined by the party setting up the bid, usually a couple of weeks. When the auction finishes, Band-X introduces the winning party, if there is one, and the posting party. If they agree on a transaction, the posting party pays Band-X a commission of 2.5% of the first $200,000 and 1% thereafter. If the parties do not agree on a deal, no commission is due to Band-X.

From Band-X's website, during the year of 2001, we were able to obtain information on 381 network offers for 1-year long contracts for point-to-point clear channel capacity. For that, we wrote a script that runs behind the web browser, automatically visits these pages at Band-X's website and downloads them to the local computer. Another application was devised to parse the downloaded web pages and to load all the available information into a database, later used for the empirical analysis.

The figures below illustrate a data point collected for an STM-1 link between Oslo, Norway and Milan, Italy. This link will be ready 40 working days after the transaction has been agreed upon, and will be available at least 99.98% of the time over a one year period. There is an installation charge of 7000 GBP and a monthly usage fee of 55404 GBP. No SLA will be used to lease this line.


Re: 11379 - Oslo, Norway to Milan, Italy

Type of infrastructure: Clear Channel
Contract type: Lease
Network type: Point to Point
Originates: Oslo, Norway
Terminates: Milan, Italy
Local loop: NOT Included
Network speed: 155 Mbps (STM-1)
Lead Time: 40 working days
Network availability: 99.9800%
Contract length: 1 year
Network pricing (155 Mbps):
  Installation charge: 7000 GBP
  Capital payment: 12 monthly rental payments of 55404 GBP
  Total charge: 671848 GBP
Uses standard contract: Other

Figure 10 – Illustration of a data point collected from Band-X.

Table 6 provides descriptive statistics for the variables collected38. Most of the link offers are for low capacity links, in the range between T-1 (1.5 Mbps) and OC-3 (155 Mbps), and for short links, typically less than a thousand miles. This bias is due to the fact that most of the countries trading bandwidth at Band-X, besides the U.S., are European (namely the U.K.), possibly because of their proximity to Band-X's base. The lead time for the circuits traded at Band-X is about 40 working days and their network availability is around 99.8%. Only 28.3% of the offers in this database use Band-X's SLAs, and only 20.2% of them are within the same country, which shows the international character of bandwidth trading.

Variable   Mean          Std. Dev.     Minimum      Maximum
PRICE      402279.95     728206.045    9928         7752226.96
BAND       118.293963    157.751517    1.5          622
MILE       1685.19633    1999.18246    69.0467667   7856.92538
ORDER      1667.98425    835.717092    1            2965
LTIME      36.4855643    15.1013393    0            90
NETAV      99.8096325    0.345917348   97.5         99.99
CTYPE      0.283464567   0.451272548   0            1
SAMEC      0.202099738   0.402094255   0            1

Table 6 - Descriptive statistics for the variables used in the Band-X case.

38 The variable SAMEC was introduced a posteriori. The variable MILE was computed by taking the surface distance between the origin and the termination points for the link using REF


Table 7 provides the correlation matrix for these variables. Correlations are not significant, except between the contract type and mileage (0.37) and between the contract type and the order number (0.37). This suggests that, as time went by and providers became more acquainted with using Band-X's website to trade bandwidth, they started trading longer links and using Band-X's SLAs more frequently.

        PRICE     BAND      MILE      ORDER     LTIME     NETAV     CTYPE     SAMEC
PRICE   1         0.22421   0.18141   -0.07913  -0.04167  -0.06329  0.02962   -0.04615
BAND    0.22421   1         -0.04507  0.22935   -0.05356  0.12851   0.02461   0.11294
MILE    0.18141   -0.04507  1         0.19054   0.20057   -0.18778  0.37536   -0.20911
ORDER   -0.07913  0.22935   0.19054   1         -0.00637  0.07835   0.37253   0.28969
LTIME   -0.04167  -0.05356  0.20057   -0.00637  1         -0.07058  0.20372   -0.20343
NETAV   -0.06329  0.12851   -0.18778  0.07835   -0.07058  1         -0.00506  0.08454
CTYPE   0.02962   0.02461   0.37536   0.37253   0.20372   -0.00506  1         0.07503
SAMEC   -0.04615  0.11294   -0.20911  0.28969   -0.20343  0.08454   0.07503   1

Table 7 - Correlation matrix for the variables used in the Band-X case.

Data collected from Telegeography

Telegeography is an independent subsidiary of Band-X for statistics and analysis. From their website, at www.telegeography.com, we downloaded 205 price quotes for 1-year long contracts for leased lines between NYC and London, UK. The data from Telegeography were obtained using a free trial demo version of their database on bandwidth prices, which reports monthly price quotes for T-1, E-1, DS-3 and STM-1 lines. Figure 11 is a snapshot from Telegeography's website that shows a few price quotes for January 1999.


Cities: London-New York. Bandwidth: T-1, E-1, DS-3, STM-1/OC-3. Contract Type: Lease. Date Range: Jan 1999 - Jan 2002. 255 Records Found.

City One   City Two   Posted Date   Contract Type   Type   Mbps   Install Price   Monthly Price per Mbps
London     New York   Jan 1999      Lease           E-1    2      -               4,000
London     New York   Jan 1999      Lease           DS-3   45     1,874           1,780
London     New York   Feb 1999      Lease           E-1    2      -               3,675
London     New York   Feb 1999      Lease           E-1    2      3,780           5,000

(lease prices in USD)
Figure 11 – Snapshot of Telegeography’s website.

Table 8 provides descriptive statistics for the variables collected. In this case, on the whole, there is no bias towards low capacity links. However, a closer inspection of the data shows that providers have only recently started using higher capacity links, such as STM-1.

Variable   Mean       Std. Dev.   Minimum   Maximum
PRICE      192536     293804      7404      1560000
BAND       80.42439   72.17354    2         155
TIME       27.01463   12.02386    1         42

Table 8 - Descriptive statistics for the variables used in the Telegeography case.

This fact is confirmed in Table 9, which provides the correlation matrix for these variables. The correlation factor of 0.34 between TIME and BAND suggests that lower capacity links were used closer to 1999 and that larger capacity links were gradually adopted through June 2002. Note the strong negative correlation factor between PRICE and TIME, which is indicative of a significant decline in the price of bandwidth over time.

PRICE BAND TIME


PRICE 1 0.31929 -0.46754
BAND 0.31929 1 0.34339
TIME -0.46754 0.34339 1

Table 9 – Correlation matrix for the variables used in the Telegeography case.


More information on the architecture and prices of routing equipment

Figure 4 illustrated the block diagram of an Internet router from Cisco. The switch fabric, usually a crossbar, includes two types of cards: the Clock and Scheduler Cards (CSCs) and the Switch Fabric Cards (SFCs). The SFCs handle requests from the line cards and issue grants to access the fabric. They receive scheduling information and the clocking reference from the CSC cards to synchronize all the switching functions. There must be at least one CSC card per switch fabric.

Each line card is mounted on the appropriate slot (SLC in the figure) and runs a Cisco IP Services Engine (ISE) to manage the input and output ports. The type of ISE used determines the memory cards used, which usually range from 128 MB to 512 MB.

The GRP determines the network topology and calculates the best path across the network. The GRP includes an internal CPU, Flash memory for the Internetworking Operating System (IOS), PCMCIA Type II slots for IOS extensions, an Ethernet card for network management access, and modem ports. There must be at least one GRP per router, which is mounted in the appropriate slot for the processor (SGRP in the figure). For backup purposes, there is usually another slot that supports either a line card or a second GRP (SGRPLC in the figure).

Table 10 shows a typical configuration for a Cisco 12008 router, which is a router normally used
for backbone interconnection purposes. This router has one line card for a single OC-48c port.
The list price for this router is $126.5K, in 2001 USD. Note that the port for the OC-48c link is
about 50% of the cost of the router. The Chassis with the GRP processors and the CSC and SFC
controllers amounts to another 25% of the costs.

Description                                                   Product Designation   List Price (K US$, 2001)
Cisco 12008, 40 Gbps; 1 GRP, 1 CSC, 3 SFCs, 1 DC GSR 8/40 30.0
256 MB GRP and LC program/route memory (2 x 128 MB) MEM-GRP/LC-256 7.0
Cisco 12008 redundant DC supplies (2 DC supplies) PWR-GSR8-DC/2 1.5
IP IOS software SFGSR-11.2.9 8.5
Cisco 12008 scheduler/fabric/alarm GSR8-CSC/ALRM= 7.5
1-port OC-48c/STM16 SONET/SDH (1310nm; SC connector) OC48E/POS-SR-SC 65.0
256 MB GRP and LC program/route memory (2 x 128 MB) MEM-GRP/LC-256 7.0
Total 126.5
Table 10 – Typical configuration for a Cisco 12000 Series router and the costs of its equipment.


Preliminary results for a single ISP

This section presents preliminary results obtained for an ISP serving 8 cities on the East Coast
(Boston, New York City, Washington, Miami, Atlanta, Chicago, Raleigh and Charlotte) with
local market shares between 5% and 50%. In total, the ISP delivers about 530 Mbps to the
Internet39. Moreover, this ISP is assumed to be a buyer but not a seller of transit service. The
backbone market is defined by the 10 largest IBPs in the US, whose market shares amount to
about 84% of the market. The HHI computed for these 10 IBPs is 871. At the end of this
appendix, we provide information on the location of the POPs for these IBPs. The results
presented in this section show the best interconnection strategy for this ISP assuming that all the
IBPs would agree to interconnect without any kind of restrictions.
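As a quick check, the reported HHI can be reproduced from the market shares of the ten IBPs listed at the end of this appendix; it is the sum of the squared percentage shares. A minimal sketch:

```python
# Market shares (in %) of the ten largest IBPs, as listed at the end
# of this appendix.
shares = [17.0, 13.0, 11.5, 10.0, 7.0, 6.0, 5.5, 5.0, 4.5, 4.5]

# The Herfindahl-Hirschman Index is the sum of squared percentage shares.
hhi = sum(s ** 2 for s in shares)
print(hhi)  # 871.0, matching the value reported above
```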

Figure 12 illustrates the fact that a peering connection is only worth deploying if it carries a certain
amount of traffic. In this figure, the interconnection cost to deliver all traffic in transit is $615K
(2001 USD). The cost curve shown is the total cost of delivering some percentage of the traffic
over one peering connection and the remaining traffic over the transit connection. As traffic starts
being shifted from the transit connection to the peering connection, total interconnection costs
increase. This is mainly because new routing equipment has to be installed to split the traffic
between the two connections and, for small amounts of traffic in peering, the savings in transit
costs are not enough to cover the additional routing costs.

However, total interconnection costs start decreasing as more traffic is diverted from the transit
connection to the peering connection. At some point, it pays off to deploy the first peering
connection. For the ISP considered in this example, that happens when about 8% of the traffic is
delivered in peering. This analysis shows that peering connections with little traffic are not worth
establishing. Clearly, the threshold beyond which it is economical to deploy a peering connection
differs across ISPs.
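The break-even logic behind this threshold can be sketched as follows. The $615K transit-only cost comes from the text; the fixed cost of the peering port and the peering usage rate are hypothetical placeholders, chosen only to illustrate the shape of the trade-off, not the paper's data:

```python
# Break-even sketch for the first peering connection (cf. Figure 12).
TRANSIT_ONLY = 615.0   # thousand 2001 USD: all traffic on transit (from text)
FIXED_PEERING = 40.0   # thousand USD: routing gear + link for the port (assumed)
PEERING_RATE = 115.0   # thousand USD to carry 100% of traffic in peering (assumed)

def total_cost(f):
    """Total interconnection cost when a fraction f of traffic is peered."""
    if f == 0:
        return TRANSIT_ONLY                     # no peering port deployed
    transit = TRANSIT_ONLY * (1 - f)            # transit billed on remaining traffic
    peering = FIXED_PEERING + PEERING_RATE * f  # fixed port cost plus usage
    return transit + peering

# Smallest traffic fraction at which peering beats transit-only delivery.
breakeven = next(f / 100 for f in range(1, 101)
                 if total_cost(f / 100) <= TRANSIT_ONLY + 1e-9)
print(f"{breakeven:.0%}")  # 8% with these illustrative numbers
```

With these placeholder numbers the break-even lands at 8%, matching the figure; any fixed-plus-variable peering cost structure produces the same qualitative threshold.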

39 The amount of traffic that the ISP dumps into the network from a particular city was computed by
multiplying the ISP's market share in that city by the total amount of traffic generated at that city. The
traffic generated at a particular city is computed by multiplying the percentage of telephone lines in that
city (obtained from the Hatfield Model v5.0a) by the total amount of traffic in US backbones in 2001
(about 70 petabytes, according to “Fiber optic network and capacity utilization”, Telecommunications
Industry Association, white paper, September 2002).
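The footnote's traffic estimate amounts to a simple product. As an illustration (only the formula and the 70-petabyte total come from the footnote; the example shares are made up):

```python
# Sketch of footnote 39's per-city traffic estimate.
TOTAL_US_TRAFFIC_PB = 70.0  # US backbone traffic in 2001, in petabytes (from text)

def city_traffic_pb(isp_market_share, pct_phone_lines_in_city):
    """Traffic (in PB) the ISP injects into the network from one city."""
    return isp_market_share * pct_phone_lines_in_city * TOTAL_US_TRAFFIC_PB

# e.g. a 20% local market share in a city holding 3% of US telephone lines
print(city_traffic_pb(0.20, 0.03))  # ≈ 0.42 PB
```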


[Figure: “Cost Curve for Total Interconnection Costs with 1 Peering Connection”. The curve plots
total interconnection cost (600–640 thousand 2001 USD) against the percentage of traffic in the
peering connection (0%–10%); an annotation marks that the first peering connection is only worth
deploying at 8% of the traffic.]

Figure 12 – Illustration of a lower-bound threshold for the bandwidth of a peering connection.

Figure 13 shows cost curves for transit, bandwidth for direct peering connections and routing as a
function of the number of peering connections established. On the horizontal axis, IBPs are
ordered by market share, since the previous figure showed that the most attractive peering
connections are those that carry the most traffic. Note that more peering connections entail
more traffic delivered in peering and less traffic in transit, which is why transit costs
decrease with the number of peering connections deployed. This decline is less pronounced for
the higher-order peering connections, since these carry less traffic than the first ones. Routing
costs increase linearly with the number of peering connections because we have assumed that
each additional peering connection requires its own port. The accumulated bandwidth costs for
the peering connections increase with the number of peering connections established, but with
appreciable economies of scale.


[Figure: “Cost Curves for Transit, Bandwidth for Direct Peering and Routing”. The chart plots
transit, routing, direct peering and total costs (0–700 thousand 2001 USD) against the number of
peering connections (0–10); annotations mark the minimum of the total cost curve at 5 peering
connections and the resulting 23% savings.]

Figure 13 – Cost curves for successive direct peering connections.

This figure shows that the best interconnection strategy for the ISP considered in this example is
to establish 5 direct peering connections at each city served, with the 5 largest IBPs in the
industry, and deliver the remaining traffic in transit. This solution would save about 23% in
interconnection costs relative to relying on a single transit connection to deliver all the traffic.
The analysis also shows that establishing too many direct peering connections does not pay off:
at some point, the ISP is better off bundling all the remaining traffic into a transit connection
rather than deploying additional peering connections. Again, the optimal number of peering
connections to deploy differs across ISPs.
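The optimization behind Figure 13 can be sketched as a simple search over the number of peering connections. The cost arrays below are hypothetical placeholders shaped like the curves in the figure (transit falling steeply for the first links, peering bandwidth growing with economies of scale, one router port per link); they are not the paper's data:

```python
# Choose the number of direct peering connections (largest IBPs first)
# that minimizes total interconnection cost.

def best_strategy(transit_costs, peering_bw_costs, routing_cost_per_port):
    """Return (n, cost) for the cost-minimizing number of peering links.

    transit_costs[n]    -- transit bill when n peering links are deployed
    peering_bw_costs[n] -- accumulated bandwidth cost of n peering links
    """
    totals = [t + p + routing_cost_per_port * n
              for n, (t, p) in enumerate(zip(transit_costs, peering_bw_costs))]
    n_best = min(range(len(totals)), key=totals.__getitem__)
    return n_best, totals[n_best]

# Illustrative inputs in thousand USD, indexed by number of peering links.
transit = [615, 455, 365, 305, 265, 232, 224, 218, 214, 211, 209]
peering = [0, 55, 95, 125, 150, 170, 198, 228, 260, 293, 327]
n, cost = best_strategy(transit, peering, routing_cost_per_port=10)
print(n, cost)  # 5 452 -- a minimum at five peering links, as in the figure
```

The qualitative result is the same as in the text: total cost falls while the transit savings from each additional link exceed its bandwidth and routing cost, then rises once the remaining traffic is too thin to justify another port.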


Location of the Points of Presence for the IBPs considered

MCI WorldCom (market share 17.0%): Atlanta, Austin, Boston, Chicago, Cleveland, Dallas,
Denver, Houston, Kansas City, Las Vegas, Los Angeles, Miami, New York City, Philadelphia,
Phoenix, Portland, Richmond, Sacramento, Salt Lake City, San Diego, San Francisco, San Jose,
Seattle, St. Louis, Tampa, Washington

AT&T (market share 13.0%): Atlanta, Boston, Chicago, Dallas, Detroit, Houston, Kansas City,
Los Angeles, Miami, Minneapolis, New York City, Omaha, Orlando, Philadelphia, Phoenix,
Raleigh, Sacramento, Salt Lake City, San Diego, San Francisco, San Jose, Seattle, St. Louis,
Washington, Wayne

Cable & Wireless (market share 11.5%): Atlanta, Boston, Chicago, Cleveland, Dallas, Denver,
Detroit, Houston, Kansas City, Las Vegas, Miami, Minneapolis, New York City, Philadelphia,
Raleigh, Salt Lake City, San Diego, San Francisco, San Jose, Seattle, St. Louis, Washington

Sprint (market share 10.0%): Anaheim, Atlanta, Cheyenne, Chicago, Dallas, Kansas City,
New Jersey, New York City, Orlando, Relay, Roachdale, San Jose, Seattle, Stockton, Washington

Level 3 (market share 7.0%): Atlanta, Austin, Boston, Chicago, Cincinnati, Cleveland, Dallas,
Denver, Detroit, Houston, Indianapolis, Kansas City, Las Vegas, Los Angeles, Miami,
New York City, Philadelphia, Phoenix, Portland, Raleigh, Richmond, Sacramento, Salt Lake City,
San Diego, San Francisco, San Jose, Seattle, St. Louis, Tampa, Washington

(Note: information about the location of the Points of Presence for these IBPs was obtained from
their websites and from www.boardwatch.com. Distances between cities, used to compute the
length of peering links, were obtained from www.wcrl.ars.usda.gov. A peering link from a city to
an IBP is deployed to that IBP's POP closest to that city.)


(continued from previous page)

Qwest (market share 6.0%): Atlanta, Boise, Boston, Charlotte, Chicago, Cleveland, Dallas,
Denver, Detroit, El Paso, Houston, Indianapolis, Kansas City, Las Vegas, Los Angeles, Memphis,
Miami, Nashville, New York City, Philadelphia, Phoenix, Salt Lake City, San Antonio, San Diego,
San Francisco, San Jose, Seattle, Tallahassee, Tulsa, Washington

Genuity (market share 5.5%): Albuquerque, Atlanta, Boston, Charlotte, Chicago, Cleveland,
Dallas, Denver, Detroit, El Paso, Houston, Indianapolis, Kansas City, Los Angeles, Miami,
Nashville, New York City, Oklahoma City, Philadelphia, Phoenix, Salt Lake City, San Diego,
San Francisco, San Jose, Seattle, Tallahassee

Williams (market share 5.0%): Atlanta, Boston, Charlotte, Chicago, Cincinnati, Cleveland,
Dallas, Denver, Detroit, Indianapolis, Jacksonville, Kansas City, Los Angeles, Miami,
Minneapolis, New Jersey, New York City, Philadelphia, Phoenix, Pittsburgh, Portland, San Diego,
San Francisco, San Jose, Seattle, St. Louis, Tampa, Tulsa, Washington

Verio (market share 4.5%): Atlanta, Boston, Chicago, Dallas, Denver, Detroit, Kansas City,
Los Angeles, Miami, New Jersey, New York City, Philadelphia, Phoenix, Pittsburgh, Portland,
San Diego, San Francisco, San Jose, Seattle, St. Louis, Washington

Teleglobe (market share 4.5%): Atlanta, Boston, Charlotte, Chicago, Dallas, Denver,
Los Angeles, Miami, New Jersey, New York City, Portland, San Diego, San Francisco, Seattle,
Washington

