Neighborhood Filtering Strategies
for Overlay Construction in P2P-TV Systems:
Design and Experimental Comparison
Stefano Traverso, Member, IEEE, Luca Abeni, Member, IEEE, Robert Birke, Member, IEEE,
Csaba Kiraly, Member, IEEE, Emilio Leonardi, Senior Member, IEEE,
Renato Lo Cigno, Senior Member, IEEE, Marco Mellia, Senior Member, IEEE

Abstract—The goal of Peer-to-Peer live-streaming (P2P-TV) systems is to disseminate real-time video content using Peer-to-Peer technology. Their performance is driven by the overlay topology, i.e., the virtual topology that peers use to exchange video chunks. Several proposals have been made in the past to optimize it, yet few experimental studies have corroborated those results. The aim of this paper is to provide a comprehensive experimental comparison, based on PeerStreamer, that benchmarks different strategies for the construction and maintenance of the overlay topology in P2P-TV systems. We present purely experimental results in which fully-distributed strategies are evaluated both in controlled experiments and on the Internet, using thousands of peers.
Results confirm that the topological properties of the overlay have a deep impact on both user quality of experience and network load. Strategies based solely on random peer selection are greatly outperformed by smart, yet simple and actually implementable strategies. The best-performing strategy we devise delivers almost all chunks to all peers with a playout delay as low as 6 s, even when the system load approaches 1 and in almost adversarial network scenarios. PeerStreamer is released as Open Source to make the results reproducible and to enable further research by the community.

I. INTRODUCTION
Peer-to-Peer live streaming (P2P-TV) applications have emerged as a valid alternative for offering cheap live video streaming over the Internet. Real-time TV programs are delivered to millions of users thanks to the high scalability and low cost of the P2P paradigm. Yet, exploiting the resources available at peers is a critical aspect, and it calls for a very careful design and engineering of the application.
Similarly to file-sharing P2P systems, in P2P-TV systems the video content is sliced into pieces called chunks, which are distributed over an overlay topology (typically a generic mesh). Contrary to file-sharing P2P systems, however, chunks are generated in real time, sequentially and (in general) periodically. They must also be received by the peers within a deadline to enable real-time playback.
Stefano Traverso, Emilio Leonardi and Marco Mellia are with the Department of Electronics and Telecommunications (DET), Politecnico di Torino, Italy; e-mail: {lastname}@tlc.polito.it
Luca Abeni and Renato Lo Cigno are with the Department of Information Engineering and Computer Science (DISI), University of Trento, Italy; e-mail: {luca.abeni,renato.locigno}@unitn.it
Csaba Kiraly is with Fondazione Bruno Kessler (FBK), Trento, Italy; e-mail: csaba.kiraly@fbk.eu. Csaba Kiraly was with DISI, University of Trento at the time of this work.
Robert Birke is with IBM Research Zurich Lab., Switzerland; e-mail: bir@zurich.ibm.com

The timely delivery of chunks is thus the key aspect of P2P-TV systems, and it makes the design of P2P-TV systems and of P2P file-sharing applications deeply different.
When building a mesh-based P2P-TV system, two key components drive performance: i) the algorithms adopted to build and maintain the overlay topology [1], [2], [3], [4], [5], [6], and ii) the algorithms employed to trade chunks [7], [8], [9]. We explicitly focus on the first problem, decoupling its performance from the choice of the chunk trading algorithm.
Most previous works have a mainly theoretical flavor: performance analysis is carried out in rather idealized scenarios, by means of simulations or analytical models [3], [4], [5], [6]. Few works reach an actual implementation and present real experiments, and even those are usually limited to a few tens of peers [10], [11]. See also Sect. VIII for a more complete presentation of the related work.
The goal of this paper is to fill this gap: using the PeerStreamer P2P-TV application, we present a comprehensive and purely experimental benchmarking of different strategies for the construction and maintenance of the overlay topology in P2P-TV systems. We thoroughly evaluate different policies, assessing the impact of signaling, measurements, implementation issues, etc. Both synthetic scenarios and PlanetLab experiments are considered. The former allow us to assess the properties of the different algorithms in increasingly adversarial scenarios; since we have full control of all network parameters, these experiments form a scientific and reproducible benchmarking set. The latter let us further check the validity of the resulting conclusions on the Internet, where unpredictable conditions must be faced.
The system design we present is fully distributed. No centralized tracker or ‘oracle’ is assumed, and peers collaborate to find the best configuration to use. Topology construction algorithms are based on selection and replacement criteria according to which each peer chooses the peers it would like to download chunks from. This turns out to be very simple and easy to implement. A blacklist-like hysteresis prevents peers from selecting neighbours previously discarded due to poor performance. Overall, we explore 12 different combinations of criteria (24 with blacklisting), based on metrics such as Round Trip Time (RTT), upload capacity, number of received chunks, etc. Their performance is thoroughly benchmarked by running fairly large scale experiments (thousands of peers) under different system and network conditions.
or from a webcam). The main loop (at the center of Fig. CHUNKIZER Content Playout or generation 11111 00000 00000 11111 00000 11111 00000 11111 00000 11111 FFMPEG or equiv. the arrival of a chunk or signaling message from the messaging layer.g.. the guidelines and results presented in this paper may be useful for researchers.) to prune those combinations that perform poorly. [9]. As such. Offers are sent to neighbors in round-robin. their actual implementation and experimental validation constitute a major step toward the engineering of large scale P2P-TV live streaming systems. PeerStreamer Architecture PeerStreamer is based on chunk diffusion. so that several different codecs (e. Finally. delivery delay.. The overlay management. The monitoring and measures module extracts network information by running passive and/or active measurements [15]. an Open Source P2P-TV system that stems from the developments and research of the NAPA-WINE project [14]. A. missing frame headers. implement a de-chunkiser. H. e. For chunk scheduling. Finally. We adopt Hose Rate Control (HRC) proposed in [17] to automatically adapt the number of offers to peer upload capacity and system demand.g. . II. the presence of NAT.. PeerStreamer peer architecture. and we extend the experiments using PlanetLab. theora.. NAT TRAVERSAL. It is responsible for the correct timing and execution of both semi-periodic tasks. measured by the Structural Similarity Index (SSIM) [12]. The chunkiser is implemented using the ffmpeg libraries2 . Fig. we highlight that the software used in this paper is released as Open Source and includes all the components necessary to build a fully functional P2P-TV system including video transcoding at the source and play-out at clients.g.. The number of offers per second a peer sends plays a key role in performance. This has been proven optimal for streaming systems with 1 Available at http://www. Measurements are performed considering first traditional Quality of Service (QoS) metrics (e. 1 describes the logic and modular organization of PeerStreamer. thus allowing experimental comparison of different choices. The messaging module is a network abstraction layer that masks the application from all details of the networking environment. While the intuition and ideas of our algorithms are simple and well understood in the community. 2 http://www. e. we consider the Quality of Experience (QoE). Receiving peers. frame loss. a set of C libraries implementing blocks that enable building P2P-TV applications with almost arbitrary characteristics. whose overall architecture and vision are described in [15]. The chunking strategy used in PeerStreamer is chosen to avoid mingling its effects with the topology-related ones: one-frame is encapsulated into one-chunk to avoid that a missing chunk would impair several frames due to. etc. etc. It offers a connection-oriented service on top of UDP. They contain the buffermap of the most recent chunks the sender possesses at that time. It implements a chunkiser to process the media stream (e. centralized and distributed scheduling associated to specific peer choices in [7]. After receiving an Offer. a peer selects one chunk based on a “latest useful” policy. which reads from the local chunk buffer and pushes the chunks in the correct sequence to the play-out system.. Figure 1. 1) implements the global application logic. we introduce some theoretical considerations to support algorithm design choices. 
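The “latest useful” selection applied to an incoming Offer can be illustrated with a minimal sketch. The function name and data layout below are assumptions made only for illustration (chunk IDs are taken to grow with generation time); this is not the actual PeerStreamer code.

```python
def latest_useful(offered_ids, local_buffer, playout_deadline_id):
    """Pick the newest offered chunk that is still missing locally and not yet
    past its playout deadline; return None if the Offer contains nothing useful."""
    candidates = [cid for cid in offered_ids
                  if cid not in local_buffer and cid >= playout_deadline_id]
    return max(candidates) if candidates else None

# Example: chunks 118 and 120 are useful, the newest one (120) is Selected.
print(latest_useful({115, 118, 120}, {115, 116, 117, 119}, playout_deadline_id=110))
```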
In this version we include an additional scenario devised to bring the system to its limits. but it must not be too large to cause the accumulation of chunks to be transmitted adding queuing delay prior to chunk transmissions.g. A preliminary version of this paper was presented in [13].. P EER S TREAMER D ESCRIPTION Empowering this work is PeerStreamer1 .org FFMPEG or equiv. it should be large enough to fully exploit the peer upload capacity. sending new offers. a live stream coming from a DVB-T card. thus avoiding multiple transmissions of the same chunk to the same peer. It simply injects copies (5 in our experiments) of the newly generated chunk into the overlay. designers and developers interested in P2P-TV application. Each peer offers a selection of the chunks it possesses to some peers in its neighborhood. the focus of this paper.g.) are supported. DE−CHUNKIZER CHUNK BUFFER Overlay Management Chunk Trading Protocol MAIN LOOP SCHEDULING AND OFFERING CHUNKS NEIGHBORHOOD MANAGER PEER SAMPLER MONITORING AND MEASURES MESSAGING.g. The receiving peer acknowledge the chunks it is interested in. Afterwards. TCP. is detailed in Sect. Simpler trading schemes are less performing and can hide the impact of the overlay on the overall system performance. and asynchronous activities.peerstreamer. instead.ffmpeg. Our tests show that even simple improvements to a random-based policy for the overlay construction may lead to significant QoE enhancement. MPEG. to include all elements in the P2P-TV video distribution chain.. e. while in the following we sketch the high level organization of the other application components.. UDP. Intuitively.2 under different system and network conditions. with a lightweight retransmission mechanism that allows the recovery of lost packets with low retransmission delay. The PeerStreamer architecture is completed by the “messaging” and “monitoring and measures” modules. II-B. The source is a standard peer. PeerStreamer leverages GRAPES [16]. The negotiation and chunk transmission phase is based on signaling exchanges with “Offer” and “Select” messages.264.g. middle-boxes and other communication details. e.org . but it does not participate in the Offer/Select protocol. and sends back a Select message. The results presented have been collected in experiments that amount to more than 1500 hours of tests.

5 HRC adapts the peer offer rate to peer upload capacity. Metrics Driving the Neighborhood Selection At every update. RTT). Every peer p manages a local blacklist of peers in which it can put peers that were perceived as very poorly performing inneighbors. NI (p) is the result of two separate filtering functions: one that selects the peers to drop. The neighborhood manager instead realizes the task of filtering the most appropriate peers for interaction. For example. Similarly. Filtering is based on appropriate metrics and measures. rules and peer sampling.3 In this paper we rely on the measurements of i) end-to-end path delay between peers (e. For these filtering functions we consider both simple network attributes such as peer upload bandwidth.: [ TS = NI (p) × {p} (1) p∈S where S is the set of all the peers in the swarm and the symbol × denotes the Cartesian product operator3 . The peer sampler has the goal of providing p with a stochastically good sample of all the peers in S and their properties: PeerStreamer implements a variation of Newscast [18] for this function. so that churn impairment is limited. The latter two values result in a good compromise between adaptiveness and overhead.3. but the limit O is never reached. . Add Filters We consider the following four criteria to add new inneighbors: RND: Neighbors are chosen uniformly at random: ∀q. mq = 1. etc. the topology management is split into two separate functions. Some metrics are static peer metrics: once estimated. Other metrics instead are path attributes between two peers and must be measured and can only be used as a-posteriori indicators of the quality of the considered inneighbor as perceived by p. Considering a metric. Distinguishing between NI (p) and NO (p) guarantees a larger flexibility in topology management than algorithms imposing the reciprocity between peers. every Tup seconds each peer p independently updates NI (p) by dropping part of the old inneighbors while adding fresh in-neighbors. Thus each peer p actively selects in-neighbors to possibly download chunks when building the set NI (p). i. In particular. and some application layer metrics. we assign a selection probability wq to every candidate q as mq wq = P (3) s∈NS (p) ms where mq is the metric of q and NS is either NI for drop. NO (p) collects all peers that can receive chunks from p (p out-neighbors). or the set of candidate in-neighbors for add. mq = Cq . NI (p) is the set of peers that Offer p new chunks. as it can always be in NO (p). Two parameters are associated to this scheme: the update period Tup and the fraction Fup of peers in NI (p) that is replaced at every update. The overlay topology is represented by a directed graph in which the peer at the edge head receives chunks from the peer at the edge tail. The goal is to let the dynamic filtering functions of peers q ∈ {S \ p} select NO (p) in such a way that the swarm performance is maximized. they do not contribute to O construction of the swarm topology. ii) packet loss rate. Tup = 10 s and Fup = 0.e. and iii) transmission rate of a peer. the in-neighbor update rate can be defined as Fup NI (2) Rup = Tup If not otherwise stated NI = 30. 3 Notice that since N (p) are built passively. There is no limitation to NO (p)4 .g. Each peer p handles thus an “in-neighborhood” NI (p) and an “outneighborhood” NO (p). network fluctuations. 1. 
Overlay Management The approach for building the overlay topology in PeerStreamer is fully distributed: each peer builds its own neighborhood following only local measures. peers with higher upload capacity should have larger number of outneighbors than peers with little or no upload capacity [4]. which is the one sending Offers. maintaining the topology dynamic and variable. The update of neighborhoods is periodic. while p Offers its chunks to peers in NO (p). The size NO (p) of NO (p) is instead a consequence of the filtering functions of the peers that select p as in-neighbor. BW: Neighbors are weighted according to their upload bandwidth Cq : ∀q. and the swarm can adapt to evolving networking conditions. The add operation guarantees NI (p) has size NI (if at least NI peers are known). N EIGHBORHOOD AND T OPOLOGY C ONSTRUCTION In PeerStreamer every peer p selects other peers as inneighbors and establishes a management connection with them. It can thus be seen as an indirect measure of its available upload bandwidth. Referring again to Fig. Overall. NI (p) collects all peers that can send chunks to p (p in-neighbors). and another one selecting in-neighbors to add. Alternatively.. randomness. path RTT or path packet loss rate. p passively accepts contacts from other peers that will form the set NO (p) of out-neighbors. Blacklist is used by peers just to avoid to select underperforming parents. and it is the main focus of this paper. III. The overlay topology TS is then obtained as union of all the edges connecting peers in NI (p) to p. Peers in the blacklist cannot be selected for inclusion in NI (p). A. Thus. such as the peer offer rate5 or number of received chunks from an in-neighbor. B. 4 In the actual implementation N (p) is limited to 200 peers. Their choice is robust: a sensitivity analysis is presented in Sect VI-C. Both add and drop filtering functions are probabilistic to avoid deadlocks and guarantee a sufficient degree of randomness. a peer is not prevented from receiving the stream. even if blacklisted by some other peers. The size NI of NI (p) is equal for every peer p: its goal is to guarantee that p has enough in-neighbors to sustain the stream download with high probability in face of churn. B. Blacklisted peers are cleared after the expiration of a time-out (set to 50 s in the experiments). they can be broadcasted with gossiping messages and are known a-priori.
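For reference, the topology construction and neighborhood update rules introduced above, labelled (1)–(3) in the text, can be restated compactly as:

```latex
T_S \;=\; \bigcup_{p \in S} \bigl( N_I(p) \times \{p\} \bigr) \qquad (1)
\qquad
R_{up} \;=\; \frac{F_{up}\, N_I}{T_{up}} \qquad (2)
\qquad
w_q \;=\; \frac{m_q}{\sum_{s \in N_S(p)} m_s} \qquad (3)
```

With the default settings N_I = 30, T_up = 10 s and F_up = 0.3, Eq. (2) gives an update rate R_up = 0.9 in-neighbors replaced per second.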

Such graphs have high resilience (whenever n is larger than few units). Neighbor selection policies should favour “local” edges (i. We tested also other metrics and combinations. At the same time the massive presence of local edges potentially permits to reduce the negative effects of large latencies. mq = 1. Policies that select/drop neighbors according to the peer-topeer distances (i. mq = 1/RXCq (p). graphs in which edges are established only between nodes which are close with respect to some parameter) exhibit poor diameter/conductance properties (the graph diameter. It is known to produce an overlay topology TS with good global properties. RT Tq (p) = 1 s6 . as defined in (1).e. indeed. since peers upload bandwidth may be highly heterogeneous and edge latencies (i. Rq are advertized by peers7 . V reports results for different resulting combinations. 6 RT T (p) are locally cached at p so that they may be available a priori. for example. good diameter/conductance properties. and high conductance. On the one hand.e. scales as the square root of number of nodes in bi-dimensional geographical graphs).. Combining add and drop criteria we define 12 different overlay construction and maintenance filters. Observe that to guarantee a significant amount of chords our RTT based selection/dropping filter are probabilistic. q Active measurements could also be used to quickly estimate the RTT. As it will be evident in the following. that the graph theoretical concept of node resilience can be almost immediately re-interpreted as resilience to peer churning. the topology may become disconnected (conductance equal to zero) even if the average connectivity is high. while conductance/diameter properties of the graph have been recently shown to be tightly related with the minimum time needed to spread information over a graph with homogeneous edge capacities [19]. logarithmic diameter..4 RTT: Neighbors are weighted according to the inverse of the RTT between p and q: ∀q. edges between peers that are geographically and/or network-wise close) reducing edge latencies (with the positive side effect of localizing the traffic). if RT Tq (p) is still unknown. indeed geographical graphs (i. D.. measured over the last 300 packets received. Some considerations based on elementary notions on random graphs will help to better understand the effect of different neighbors’ selection policies. and for these reasons it is adopted by several P2P systems [20]. the resulting overlay topology TS .. as it is a policy based on pure random sampling of the swarm. Furthermore. C. mq = 1/RT Tq (p). to achieve high and reliable performance it is important to properly balance the previous ingredients. i. Blacklisting Policies Finally a peer in NI (p) is locally blacklisted if any one of the following criterion is met: CMR: the ratio of corrupted/late chunks among the last 100 chunks received by p from q exceeds a threshold of 5%. A Few Theoretical Considerations Depending on the neighbor selection strategy adopted by peers. RTT: Neighbors are dropped with a probability directly proportional to the RTT between p and q: ∀q. Of course the application can profit from neighborhoods localization since message latencies/chunk transfer times are potentially reduced. RXC: Neighbors are dropped with a probability proportional to the inverse of the rate at which it transferred chunks to p: ∀q.e. On the other hand. However.e. RTTs) may play an important role on performance. 
The danger of an excessive localization of neighborhoods can be easily explained from a graph theoretical perspective. The adoption of policies that select peers taking into account direct or indirect measurements of the peers’ upload bandwidth (such as BW or OFF) make the average out-degree propor- . but privilege strategies that lead to high connectivity degrees for high bandwidth peers to exploit their upload resources at the best. mq = RT Tq (p). PLOSS: the packet loss rate from q to p exceed a threshold of 3%. The simplest possible selection strategy is RND-RND. especially when |S| grows large.. non local edges)..e. In the following. The presence of a significant fraction of random chords guarantees the resulting overlay TS to exhibit “small word” properties. Sect.. just structural graph properties of TS may not be sufficient to represent well how good the topology is. when nodes are non homogeneously distributed over the domain. Indeed the overlay topology resulting by the adoption of RND-RND can be modeled as an n-regular (directed) random graph instance with n = NI . RTT: RT Tq (p) is greater than 1 s. may exhibit fairly different macroscopic properties. RNDRND is used as a baseline benchmark. localization must be pushed only up to a given point. short diameter and large conductance. this metric assigns a quality index related to the in-neighbor ability to successfully transfer chunks to p. e. BWRTT for add BW and drop RTT. mq = Rq . Blacklisting can be superposed (or not) to all of them. Furthermore they should not build topologies with a constant degree of connectivity. a too strict localization of neighborhoods can jeopardize the global properties of TS . with detriment of the application performance as already observed in [5]. guaranteeing the presence of a non-marginal fraction of chords (i. [21]. we name them stating the “ADD”-“DROP” policies. The limit of RND-RND consists in the fact that it completely ignores peer attributes (such as location and upload bandwidth). it would be highly desirable to obtain an overlay topology TS with very good structural properties. E.g. and its impact will be studied selectively. whose results are less interesting and not reported here. Thus. aiming at obtaining a topology TS that blends good global (topological) properties. 7 Note that the number of offers is roughly proportional to the available upload bandwidth of the peer [17]. with a bias toward the exploitation of local properties like vicinity and bandwidth availability. RTTs) better localize the neighborhoods. Observe. RXCq (p) are evaluated on a window of 3 s. Drop Filters We evaluate three criteria to select neighbors to be dropped: RND: Neighbors are dropped randomly: ∀q.e. OFF: Neighbors are weighted according to the rate they send offer messages Rq : ∀q. [22]. such as high node resilience.
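A compact sketch of how the probabilistic add/drop filters and the blacklisting criteria described above could be combined is given below. The metric names follow the text, but the data structures, field names and helper functions are assumptions for illustration only, not the PeerStreamer implementation.

```python
import random

# Add filters: higher weight = more likely to be selected as in-neighbor.
ADD_METRICS = {
    "RND": lambda q: 1.0,                        # uniform random
    "BW":  lambda q: q["upload_capacity"],       # advertised C_q
    "OFF": lambda q: q["offer_rate"],            # advertised offer rate R_q
    "RTT": lambda q: 1.0 / q.get("rtt", 1.0),    # 1 s assumed if RTT still unknown
}

# Drop filters: higher weight = more likely to be dropped.
DROP_METRICS = {
    "RND": lambda q: 1.0,
    "RTT": lambda q: q["rtt"],
    "RXC": lambda q: 1.0 / max(q["rx_chunks_3s"], 1),  # inverse chunk-reception rate
}

def is_blacklisted(q):
    """CMR > 5%, packet loss > 3% or RTT > 1 s, as in the blacklisting policies."""
    return (q["corrupted_late_ratio"] > 0.05 or
            q["packet_loss"] > 0.03 or
            q["rtt"] > 1.0)

def weighted_pick(pool, metric, k):
    """Draw k distinct peers with probability proportional to metric (Eq. (3))."""
    pool, chosen = list(pool), []
    for _ in range(min(k, len(pool))):
        weights = [metric(q) for q in pool]
        r, acc = random.uniform(0, sum(weights)), 0.0
        for q, w in zip(pool, weights):
            acc += w
            if acc >= r:
                chosen.append(q)
                pool.remove(q)
                break
    return chosen

def update_in_neighborhood(in_neighbors, known_peers, add="RTT", drop="RXC",
                           n_i=30, f_up=0.3):
    """One periodic update: drop a fraction F_up of the in-neighbors according to
    the drop filter, then refill up to N_I with non-blacklisted candidates."""
    dropped = weighted_pick(in_neighbors, DROP_METRICS[drop], int(f_up * n_i))
    kept = [q for q in in_neighbors if q not in dropped]
    candidates = [q for q in known_peers
                  if q not in kept and not is_blacklisted(q)]
    added = weighted_pick(candidates, ADD_METRICS[add], n_i - len(kept))
    return kept + added
```

With add="RTT" and drop="RXC" this corresponds to the RTT-RXC policy evaluated later; keeping the selection probabilistic preserves a fraction of random “chord” edges in the resulting overlay.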

The nominal video rate of the encoder rs is 1. with a time resolution equal to 25 frame/s. This scenario is useful to understand the fundamental behavior of different neighborhood filtering strategies. the play-out deadline is only 6 s. but fully controlled network to avoid fluctuations and randomness due to external impairments. 8 OFF can be however affected by other system parameters such as N and I by the system load conditions Class 1 2 3 4 Upload Bandwidth 5 Mb/s ± 10% 1. The nominal sequence length corresponds to 200s. Each PC runs 5 independent instances of PeerStreamer simultaneously. Subnet Number of PCs 1 43 2 63 3 60 Table III C HARACTERISTICS OF PEER CLASSES . Again we ensure that a significant amount of random edges are in TS by adopting probabilistic filtering to guarantee good global properties. i.e. we attempt to correlate neighbor filtering decisions to an estimate of the real neighbors’ “quality”. T EST.e. variable due to the encoded video characteristics.e. as they increase average weight of edges. a swarm of 1020 peers is built in every experiment. is geographically homogeneous: the distribution of the peers of different Cp classes is the same in any area. but the long-haul connections between . assumes that bandwidth rich peers (Class 1) are all concentrated in a single subnet. that also in this case the outcome of the strategy is an increased topology conductance.95. This situation is particularly challenging for a topology management system that tries to localize traffic to reduce the network footprint of the application. This is due to a global policy whose goal is building an n-regular directed graph. G Bias hereafter. as the actual or a-posteriori weight of edges is proportional to the amount of chunks correctly transferred along it. Each peer implements a chunk buffer of 150 chunks. G Homo hereafter. and they rely on different kinds of access technologies.32 Mbit/s is the average upload bandwidth of peers. Each peer selects an average delay E[RT T ] in the tuples detailed in Table II. We selected the H. thus. IV.. This corresponds to a system load ρ = 0. A. Configurations in Tables II and III have been designed to resemble a world-wide geographic scenario. the standard Linux Traffic Controller tool.6 Mb/s ± 10% 0. E[RT T ] ∗ 1. exploiting the feature of a simple leaky bucket (with a memory of 10 Mbytes) to limit the application data rate to a given desired value. together with the netem option to enforce delay and packet dropping probability when needed.B. with 204 PCs divided in four different subnets. We observe that these policies directly influence the conductance of the resulting TS . Having more edges insisting on high bandwidth links and less on low bandwidth ones increases the average weight.b. The third scenario. Peer upload capacities Cp are shown in Table III. The upload bandwidth is limited by the application itself. however. The source node generates a new chunk regularly every ˙ i.b}.64 Mb/s ± 10% 0. The well known Pink of the Aerosmith video sequence has been used as benchmark. so that the number of edges in the topology is roughly constant. A hierarchical type-B frames prediction scheme has been used. The source peer runs at an independent server (not belonging to any of the subnets). 4 38 Table II RTT S IN ms BETWEEN SUBNETS OF PEERS . Network Scenarios The generic setup described above is used as a base for three different scenarios to evaluate significant situations. 
with the goal of preserving the best in-neighbors (peers that have shown to be helpful in retrieving chunks in the recent past) while discarding bad in-neighbors (peers that were scarcely helpful). It injects in the swarm 5 copies of each newly generated chunk. The second scenario. is again geographically homogeneous. The test-bed is built in laboratories available at Politecnico di Torino. Table I shows the number of PCs in each subnet. B and b.264/AVC codec to encode the video sequence. The chunk size is instead highly 40ms. are: IDR. All figures in this paper report average results considering 10 different runs of 15 minutes each. in order of importance.2 Mbit/s if not otherwise specified. i.BED C ONFIGURATION We need to benchmark the different algorithms in a known and reproducible scenario. It can be argued. but rather they are instrumental to create benchmarking scenarios with different properties. After the initial 12 min of experiment. ADSL or FTTH interfaces. To achieve this goal. P. we run experiments in a possibly complex. The sequence is looped for a total stream duration of about 20 min.. to predict the effect on TS of the adoption of RXC dropping filters. each packet will be delayed according to a uniform distribution [E[RT T ]∗ 0. The GOP structure is IDR×8 {P. if not otherwise stated. Given the oneframe⇔one-chunk mapping. Then. instead. this corresponds to a buffer of 6 s. and 25 frame/s of the video.2 Mb/s. that provide different up-link capacity. where peers are distributed over continents (clusters). every new frame. ± 10% Percentage of Peers 10 % 35 % 35 % 20 % Those configurations are not meant to be representative of any actual case. In this case. obtaining 4 different kinds of frames that.. corresponding to roughly 6 Mbit/s. It is harder. 1 2 3 4 1 20 ± 10% 80 ± 10% 120 ± 10% 160 ± 10% 2 80 ± 10% 20 ± 10% 170 ± 10% 240 ± 10% 3 120 ± 10% 140 ± 10% 20 ± 10% 200 ± 10% 4 160 ± 10% 240 ± 10% 200 ± 10% 20 ± 10% tional to the peer upload bandwidth8 . each peer starts saving on local disk a 3 min long video that we use to compute QoE metrics. The first scenario. so that there is the same distribution of bandwidth everywhere.5 Table I N UMBER OF PC S PER SUBNET. We used tc. G Lossy hereafter.05].9 – defined as ρ = rs /E[Cp ] where E[Cp ] = 1.
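As an illustration of the kind of netem configuration the test-bed relies on, the sketch below builds a tc command imposing an average one-way delay with ±5% jitter (approximating the uniform [0.95, 1.05]·E[RTT] range) and an optional packet loss probability. The interface name is hypothetical, and a real deployment would additionally need per-destination tc filters to apply different delays towards different subnets; the upload-rate limit is instead enforced by the application-level leaky bucket.

```python
import subprocess

def netem_cmd(iface, mean_delay_ms, jitter_frac=0.05, loss_pct=0.0):
    """Build a tc/netem command imposing delay (with jitter) and packet loss."""
    cmd = ["tc", "qdisc", "add", "dev", iface, "root", "netem",
           "delay", f"{mean_delay_ms}ms", f"{jitter_frac * mean_delay_ms:.0f}ms"]
    if loss_pct > 0:
        cmd += ["loss", f"{loss_pct}%"]
    return cmd

# Example: an 80 ms path with 1% loss (cf. Table II and the G_Lossy scenario).
print(" ".join(netem_cmd("eth0", 80, loss_pct=1)))
# subprocess.run(netem_cmd("eth0", 80, loss_pct=1), check=True)  # requires root
```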

This case. but both the average and the percentage of peers with bad quality (Floss (p) > 0. BW-RXC biases too much the choice toward high bandwidth peers.03. as 9 The distribution of N (p) inside classes is binomial as expected from O theory. It would be desirable to have an out-degree of a peer proportional to its up-link bandwidth. Measures of the bandwidth utilization.05. so that the out-degree distribution is a good indicator of over subscriptions. but at the expense of the percentage of peers with bad quality (Floss (p) > 0. To better grasp these effects. Performance Indices To assess the QoE for each peer p. it will be off for an average time equal to 30 sec before re-joining the swarm (with a different ID. i. As peers that receive less than. respectively. which become congested and are not able to sustain the demand. even if adverse. percentage of peers whose Floss (p) > 0. To keep the number of peers constant.. represents an interesting challenge for those strategies that seek the tradeoff between location and bandwidth awareness. However the behavior of BW-RXC. not shown for lack of space. G Adver hereafter. a well-known method for measuring the similarity between two images in the multimedia field [12]. 2 shows the average frame loss probability experienced by different policies. Under BW-RXC. we report both average frame loss. B. churning of peers is modeled: a fraction Pno−ch of peers never leaves the system. let bq (p) the number of bits peer p received from peer q. which is too aggressive in strictly selecting the closest in-neighbors. Fig.03) are worse. G Homo Scenario We start considering the case in which the distribution of Cp is geographically homogeneous. This situation is useful to understand if black-listing can really help in building better topologies. . instead.01. so we apply a smoothing window of length 30 in plotting. or if its use should be limited to isolate misbehaving and malicious nodes. while the four subnets experience the same packet loss configuration on long-haul links described above. combines together G Bias and G Lossy in such a way that large bandwidth peers are amassed in the subnet 1. Sssim (p). As network cost ζ we consider the average of the distance traveled by information units.03). This is strictly valid only if all peers receive the same amount of data. while the intra-subnet links and the links between 1–2 and 3–4 are lossless. percentage of peers whose Floss (p) > 0. high bandwidth peers tends to be oversubscribed while medium and low bandwidth peers may be underutilized. say. In the following. indicates that using a single metric for selecting the neighborhood can be dangerous. while RTT-RTT improves the number of peers with Floss (p) > 0. and it is useful to detect possible conflicts with the black-listing functionality. this further refinement of the metric is not necessary. and the SSIM (Structural Similarity Index).5%. 4 reports the smoothed9 histogram of the out-degree NO (p). show that all bandwidth aware policies select an out-degree that completely saturates the available bandwidth.01 (center). C ONTROLLED E NVIRONMENT E XPERIMENTS A. for which Floss tops at 2. Floss (p). V. 95% of the stream would have unacceptable quality and hence leave the system.03 (right). This is roughly achieved by adopting BWRND policy. the degradation of the received video quality becomes typically noticeable for values of Floss (p) higher than 1%. The fourth and final scenario.e.. i. Finally. RND-RND is the reference. The left-hand plot in Fig. 
Floss = Ep [Floss (p)]. the subnets 1–3.01. Indeed. as a new peer). As a result.01 and Floss (p) > 0.2 Drop RND Drop RXC Drop RTT 1 0 Add RND Add BW Add OFF Add RTT 10 Drop RND Drop RXC Drop RTT 8 6 4 2 0 Add RND Add BW Add OFF Add RTT Peers over 3% Losses % Lost Chunks % 3 Peers over 1% Losses % 6 3 2 Drop RND Drop RXC Drop RTT 1 0 Add RND Add BW Add OFF Add RTT Figure 2. For instance BW-RTT improves the average loss rate and the percentage of peers with Floss (p) > 0. Performance however should also take into account the cost for the network to support the application. i. the use of policies sensitive to peer bandwidth (BW and OFF for adding and RXC for dropping) appear to be more effective in reducing losses. the degree distribution depends too much on Cp . This distribution results in a large noisiness of the plot. with the exception of RTT-RTT. while Pch = 1 − Pno−ch churning peers have a permanence time uniformly distributed between 4 and 12 min. 1–4. Frame loss for different strategies in G Homo scenario: Floss (average) (left). if the streaming is working correctly. In general. we consider the frame loss probability. 2–3. Formally. and we immediately observe that the other algorithms modify the loss distribution. basically showing the average NO in each class. once a churning peer has left the system.e. Policies sensitive to RTT perform well in the considered scenario. while loss probability of a few percent (3-4%) significantly impair the QoE. respectively. otherwise the average network cost should be weighted by the amount of data received by each peer. Given the highly structured organization of the video streams. NO (p) of peers belonging to different classes is significantly different as long as bandwidth aware policies are adopted. out-degrees are instead independent for RND-RND as expected. the peer p network cost ζ(p) is computed as P q RT Tq (p)bq (p) P ζ(p) = (4) q bq (p) while the average network cost is ζ = Ep [ζ(p)].e.. 2–4 are subject to packet loss with probability p = 0. while center and right-hand plots report the percentages of peers that experienced Floss (p) > 0. they can have a different impact on different percentiles. and the percentage of peers that suffer Floss (p) larger than 1% and 3%.
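The network cost of Eq. (4) is a bit-weighted average of the RTT towards the in-neighbors a peer actually downloaded from; a minimal sketch of its computation is shown below (the data layout is illustrative).

```python
def network_cost(received_bits, rtt_ms):
    """Eq. (4): received_bits[q] and rtt_ms[q] are the bits received from and the
    RTT towards in-neighbor q; returns zeta(p) in milliseconds."""
    total_bits = sum(received_bits.values())
    if total_bits == 0:
        return 0.0
    return sum(rtt_ms[q] * b for q, b in received_bits.items()) / total_bits

# zeta is the average of zeta(p) over all peers, e.g.:
per_peer = {
    "p1": network_cost({"q1": 8e6, "q2": 2e6}, {"q1": 20, "q2": 120}),  # 40 ms
    "p2": network_cost({"q3": 5e6}, {"q3": 80}),                        # 80 ms
}
zeta = sum(per_peer.values()) / len(per_peer)
print(f"average network cost = {zeta:.1f} ms")
```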

CDF 1 0. In general. and ii) BW-RXC scheme biases the choice towards large bandwidth peers.7 Figure 3. However. Topology representations and N0 PDFs for RND-RND. so having small neighborhood is desirable.3 0.6 0.30 BW . However. The value of NI is related with the signaling overhead which increases with NI .RXC BW . ii) RTT aware policies reduce the network cost without endangering significantly the video quality if applied to add peers.74 RND . POLICY IN G Homo SCENARIO . IV details the actual graph characteristics). B. G Homo scenario.RND 29052 29. IV. Out-degree distribution of peers. To complement previous information Fig. for several policies when considering G Homo scenario.2 0. G Homo scenario. G Homo with Smaller NI observed in [5].RXC 29550 29. G Homo Table IV T OPOLOGY STATISTICS VS .RND 120 Out Degree 100 80 Class 4 60 Class 3 Class 2 Class 1 40 20 0 0 200 400 600 800 1000 Peer Figure 4.RXC RND . .RND RND . II.e.RTT RTT . 3 and Tab. i. since it requires to measure simple and straightforward metrics. Interestingly.97 0.RXC 29550 29.70 neighbors within the same area. scenario. of Chords RND . Color as well has been made lighter for larger values of NO . are represented by dots whose size is proportional to their out-going degree. coupled with the PDF of NO . This graphical representation easily shows that: i) schemes not implementing any location-aware policy show a larger fraction of chords. policies that force a too strict traffic localization induce performance degradations due to poor topological properties of the swarm. iii) the preference toward high bandwidth peers/nearby peers must be tempered to achieve good performance. grouped in clusters as described in Tab. Edges E[NO ] Frac.RXC 29058 29.RND 0 50 100 150 200 Network Cost [ms] Figure 5. possibly overloading them.93 0.4 0. after having tested a peer as a parent. I and Tab. The policy RTT-RXC improves quality and reduces the network cost at the same time. this policy is also easy to be implemented. This is a more critical situation where choosing the good inneighbors is more important.RTT RND . bandwidth aware add schemes offer better QoE performance. RTT aware policies significantly reduce this index thanks to their ability to select in- We consider the same network scenario but we set NI = 20. Peers. edges connecting peers belonging to different clusters (Tab. This is easy to measure a posteriori. The observations above are confirmed by Fig. offering the best trade-off in this scenario.9 0. a too small NI would impair the availability of chunks.1 0 RTT ..RND RND . at the cost of more cumbersome available capacity estimation.RXC RND .46 0. we can say that: i) bandwidth aware policies improve the application performance. RND-RXC. however. 5 reports the Cumulative Distribution Function (CDF) of network cost ζ(p).5 0. 140 BW .RXC BW . CDF of the distance traveled by information units. the RXC drop filter measures the ability of the parent peer to offer useful chunks as seen by the child peer. when used to drop peers. RTT-RXC and BW-RXC policies. As expected.97 0. The former reports a graphical representation of the overlay topology.7 0. Remark A – As a first consideration.8 0. RTT poses significant bias impairing QoE.73 RTT .

G Bias Scenario Maintaining unchanged the Cp distribution.11 0.Drop RND Drop RXC Drop RTT 10 Add RND Add BW Add OFF Add RTT Drop RND Drop RXC Drop RTT 8 Peers over 3% Losses % 8 7 6 5 4 3 2 1 0 Peers over 1% Losses % Lost Chunks % 8 6 4 2 0 Add RND Add BW Add OFF Add RTT 10 Drop RND Drop RXC Drop RTT 8 6 4 2 0 Add RND Add BW Add OFF Add RTT Figure 6.RND w/o BL RND . as shown in Fig. Results for network cost are similar to those in Fig. aggressively dropping far away. 1 0. Fig.3 0. but perform poorly if the number of peers in the neighborhood is small: all peers suffer 8% of frame loss. The reason is that the out degree of Class 1 peers under RND-RND is often not enough to fully exploit their bandwidth.07 0.RXC RND . G Lossy Scenario We consider another scenario in which large bandwidth peers are uniformly distributed over the four subnets. which are widely employed by the community [20]. RND .bad + far 0.8 0. C.RTT 0.02 0.36 0.68 0.15 0.RND w BL RTT . in this latter case.1 0 RTT .8 0. perform better than RND-RND.34 0. the policy that combines bandwidth and RTT awarenesses (RTT-RXC) improves both performance and network costs. RTTRXC emerges again as the best performing policy and with .2 0 0 0.23 0.RND w BL BW . as long as RTT awareness is tempered with some light bandwidth awareness. percentage of peers whose Floss (p) > 0.local 0.08 0.4 0. and this is the reason why at first sight some policies seem to perform better with a smaller NI ). any policies based on drop RTT perform poorly.bad 0.1 Chunk Loss Probability 0. Strategies RTT-RXC. helping in exploiting the peer’s bandwidth.RXC RND .03 (right).6 0.27 0. wisely selecting high-capacity in-neighbors is vital. the performance degrades unacceptably.RTT RND . in principle.RXC w BL 1 . but high capacity. Indeed as side effect of the localization we can potentially have a “riches with riches”.2 0.5 0.09 4 . Indeed.01 (center). the only policy that can also reduce the network cost is RTT-RXC.good 0. Also RTTRND and RTT-RTT.32 0.RTT RTT .RTT RTT . which are bandwidth unaware.7 0.28 0.6 BW . Bandwidth aware strategies.28 0. 7 reports the CDF of Floss (p) for the strategies performing better in the G Homo scenario. Remark C – This result essentially proves that also in G Bias scenario it is possible to partially localize the traffic without endangering the video quality perceived by the user. instead. 6 (the y-scales in Figs.23 0.9 0.RND w/o BL BW .4 0. however.12 0. the RTT driven policies perform much better if the RTT is used to add peers rather than to drop peers.RND BW . This scenario.RXC w/o BL RTT . G Bias scenario.22 0. “poors with poors” clusterization effect that may endanger the video quality perceived by peers in geographical regions other than 1. Remark B – Random selection policies. Fig 9 plots the CDF of frame losses (top) and the CDF of chunks delivery delays (bottom) for the selected policies. practically making it impossible to decode the video.04 0. as testified by the excellent performance of add BW policies.RTT RND . Blacklisting improves the performance of every policy. The performance of RND-RND significantly degrades in this case. we localize all high bandwidth peers in geographical area 1.13 2 .12 CDF CDF 0. as in RTT-RXC. Frame loss for different strategies in G Homo scenario with NI = 20: Floss (average) (left). In this case if RTT is the only metric used as in RTT-RTT. 5 and are not reported for the sake of brevity. successfully adapt NO (p) to Cp maintaining high performance. 
percentage of peers whose Floss (p) > 0.24 0.RND RND . CDF of the frame loss probability for four different strategies. Results are plotted in Fig.12 0.RTT BW . since RTT-aware selection policies reduce the latency between an offer and the actual chunk transmission that follows it. Interestingly. RND-RXC and BWRND perform similarly.14 0.RXC RND . are robust. Similarly. plus the benchmark 1 0.13 0. and peers in area 1 are in practice the only one receiving a good service.35 0.RXC RTT .RND 0 50 100 Network Cost [ms] 150 200 Figure 8.05 RND-RND. 8 that reports the CDF of ζ(p).24 0. D. As already seen with NI = 30.70 3 . but packet losses are present in some long haul connections. 2 and 6 are different for readability reasons.14 Figure 7. constitutes a challenge for the policies that try to localize traffic. CDF of distance traveled by information units. in-neighbors penalizes peers which are located in areas where little high capacity peers can be found. In general.RND BW . Table V AVERAGE FRACTIONS OF INCOMING TRAFFIC FOR C LUSTER 2.06 0. G Bias scenario.

4 0 0.RND w BL RND . 7 and 9 (top plot). CDF of chunk loss probability for six different strategies with and without adopting blacklist mechanism in G Adver scenario.4 Mb/s. Remark E – Blacklisting confirms to be in general a useful help in the filtering process.RND w/o BL 0. This instead does not hold completely for RTT-RXC. since the system is facing a very challenging scenario while working with a load of 0. However.RXC w BL RTT .324 Mb/s.RXC w/o BL BW . VI. Values above 0. and are not considered. confirming to be the best choice.8 1 Figure 10.08 0.RXC w/o BL BW .2 0.RND w BL RND .4 0. also in this case blacklisting improves the performance.04 0.14 0. 11 also shows the benefits of the blacklist mechanism for every scheme. letting peers drop lossy connections. and induce to think that the tradeoff between location and bandwidth awareness is enough to provide acceptable performance in this challenging scenario. Remark D – Blacklisting can play a significant role to avoid selecting lossy paths.RND w BL BW . especially of those filtering strategies which do not exploit location-awareness.6 RTT . BW-RND and RTT-RXC with and without blacklisting. Benefits of the blacklisting mechanism are confirmed by Table V that reports the normalized volume of incoming traffic for peers in cluster 2 from peers in all clusters. but the degradation is highly influenced by the topology. Fig. 10 with Figs. Keeping in mind that in G Lossy scenario peers belonging to cluster 2 experience lossy paths from/towards peers in cluster 3 and 4 (as explained in Sec.02 0.2 0 0 1 0. RTT-RXC policies still shows the lowest chunk loss probability.2 0 RTT .12 0. As seen in Sec. exploiting the blacklist mechanism every peer should identify and abandon poorly performing peers. .8 0. We consider G Lossy and G Adver scenarios. Indeed. Video performance versus load We now summarize the results by aessessing the actual average QoE by reporting Sssim for different policies and different system loads. SSIM measures the distortion of the received image compared with the original source (before encoding and chunkization). G Adver Scenario Fig.5 1 1. V-D.9.4 0 0. BW-RND and RTT-RXC have emerged as the most promising criteria (RND-RND being the baseline benchmark).8 CDF CDF CDF 9 2 Figure 9. biasing the neighborhood toward good performing in-neighbors. Negative values correspond to negative images. This is an excellent result.RND w/o BL RND . Notice how RTT-RXC scheme outperforms RND-RND and BW-RND for every value of rs .RXC w BL RTT . it is easy to see that volumes of incoming traffic from cluster 3 and 4 are nicely reduced thanks to blacklisting mechanism. blacklisting practically all peers are able to receive all chunks. RND-RND. when the system load is small ρ << 1.6 Chunk Loss Probability 0. 11 shows the average Sssim considering RND-RND.RND w BL BW .RND w/o BL 0. different policies behave differently: Sssim rapidly drops due to missing chunks which impair the quality of the received video.RXC w BL RTT . The EVQ (Encoded Video Quality) curve in the plot is the reference value for the encoding rate and it obviously increases steadily as rs increases. Comparing Fig.1 1 0. the average Sssim increases for increasing rs thanks to the higher quality of the encoded video. 10 plots the CDF of frame losses for different policies in G Adver scenario when the system load ρ = 0. This effect reinforces policies that naturally bias the selection of neighbor peers employing peer quality.RND w/o BL 0.RND w/o BL RND . 
It is a highly non linear metric between −1 and 1. E. it is immediately evident how combining together G Bias and G Lossy configurations has a negative impact on performance. and let rs range from 0. Fig. choosing in-neighbors based on their location represents a sufficient criterion to avoid lossy links.2 RTT .RND w/o BL RND . even in this difficult context.RXC w/o BL BW . IV).6 Mb/s to 1. 0.RND w BL RND .985 indicates excellent quality.8 0. RTT-RXC with blacklisting is shown to guarantee excellent performance to all peers even in this almost adversarial scenario. such as RND-RND and BW-RND. and then averaging among all of them. For all policies. given the peculiar configuration of G Lossy.RND w BL BW .4 0. especially when no location-based filtering strategy is adopted. so that blacklisting does not bring any further benefit. CDF of chunk loss probability (top) and CDF of chunk delivery delays (bottom) for six different strategies with and without adopting blacklist mechanism in G Lossy scenario. Recall that the mean peer capacity E[Cp ] = 1.06 0.6 0. Policies which showed best performance in G Lossy where considered.6 0.1 Chunk Loss Probability 0. V IDEO P ERFORMANCE E VALUATION A. Indeed. As ρ approaches 1.5 Chunk Delivery Delay [s] 0 0.9. SSIM has been computed considering the video between min 12 and 13 (60x25 frames) received by 200 peers (50 for each class).
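A sketch of how the per-peer S_ssim could be obtained from the received and the reference video segments is given below. The paper does not state which SSIM implementation was used, so the scikit-image call and the assumption that frames are already decoded into numpy arrays are illustrative choices, not the authors' tool chain.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def average_ssim(reference_frames, received_frames):
    """Mean SSIM over aligned frame pairs (frames as uint8 HxWx3 numpy arrays)."""
    scores = [ssim(ref, rec, channel_axis=-1, data_range=255)
              for ref, rec in zip(reference_frames, received_frames)]
    return float(np.mean(scores))

# S_ssim(p): average over the 60 s segment (60 x 25 frames) saved by peer p;
# the reported S_ssim is then the mean of S_ssim(p) over the 200 sampled peers.
```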

.1 0.. G Lossy scenario.799 0.95 0. Sssim (average) index varying the video rate.3 Figure 12.g. In particular.9 EVQ RTT .4 Loss Rate SSIM 0.0 Mb/s.91 0.e.991 1. join the system abruptly all together after 7 min of experiment generating a flash crowd event.75. especially for the case in which no location-aware policy is employed (see RND-RND when NI = 80).RND RTT . In particular.95 Figure 11.RXC w/o BL BW .3 0. Sensitivity to NI and Update Rate with Churning and Flash Crowd Consider a G Homo scenario with Pchurn fraction of peers that join and leave the swarm. e.0 Mb/s and a fraction of churning peers Pchurn = 0.RND w BL 1040 0.92 0.1 0 400 420 440 460 480 Time [s] 500 520 Figure 14. on the other hand setting too large values of NI leads to a larger overhead and.e. but in this extreme scenario this does not hold for strategies that minimize propagation delays. BW .8 to 0. i. 13 reports Sssim (computed and averaged over peers that never leave the system) varying the size of the incoming neighborhood NI . RND-RND.7 0.979 0. NI = 30 RND-RXC. The other half of the peers.RXC w BL RTT . is not enough to guarantee the peers to find a good and stable set of in-neighbors from which to download chunks. The experiment starts with only half of the peers.6 612 0. G Homo SCENARIO .9 0.RND w/o BL 0.4 SSIM SSIM 10 0. even for ρ << 1: i) RTT-RXC shows definitely the best performance.2 0.7 204 0.e. NI = 5.988 2080 0.5 0.94 0.9.5.RND w BL BW . Increasing N has a negligible impact on performance. B. Fig. Average Sssim vs NI with Pchurn = 0.5.RND w/o BL RND .RND w BL RND .0 Table VI AVERAGE Sssim VERSUS N .3 0. i. N RND .50 RTT-RXC RND-RND 10 20 30 40 50 In-Degree 60 70 80 Figure 13.RND w/o BL 0. rs = 1. We investigate what are the best trade-off values for the size of the incoming neighborhood NI and its update frequency Rup . Fig. observe that both schemes show good performance for whatever value of NI ∈ [25.8 1836 0. both with and without blacklisting functionality. G Adver scenario.7 0. to a worse percieved QoE.8 0. RTT-RXC.3.6 0. i. a too small incoming neighborhood. The simple bandwidth-aware scheme.9 1 rs [Mbps] 1428 0.981 0. RND-RXC and RTT-RXC schemes in .99 0. as defined in (2).RXC EVQ RTT .8 0.783 0.2 M B / S . 14 reports the chunk loss rate (computed and averaged over peers that never leave the system) for different policies and different sizes of the incoming neighborhood NI . BW-RND and RTT-RXC in the G Adver scenario. 12 shows the average Sssim considering RND-RND.0 Mb/s.858 0. Average loss rate vs time before. system load ρ = 0.1 rs [Mbps] 1.98 0.RXC w BL RTT .2 1.988 0. NI = 30 RTT-RXC.2 Pchurn = 0. For this case we consider RNDRND and RTT-RXC schemes in G Homo scenario with rs = 1. We go further in our investigation and consider a flash crowd scenario derived from G Homo.RND w/o BL RND . C. BW-RND. belonging to subnets 3 and 4. First. 45]. G Homo scenario with 1000 peers and rs = 1.812 0.829 0. Remark F – RTT-RXC with blacklisting guarantees optimal QoE for ρ < 1 whereas RND-RND policies are not able to guarantee good QoE for ρ > 0.e. Indeed.8 0. RND-RND and BW-RND. The video is encoded at rs = 1. a remarkable gain. As expected. i.9 1 0. we set Fup = 0. we only report in Table VI the average Sssim for the RND-RND and RTT-RXC schemes. Tup = 10 and change NI ∈ [5. thus. Sssim (average) index varying the video rate.85 ρ=0.6 0. the average Sssim improves from 0.5 1. Fig. Similarly. 80].9 ρ=1.e.. we consider RND-RND. NI = 30 BW-RXC.799 0. 
ii) enabling blacklisting induces a large gain for not location-aware policies. we study how the system scales when increasing the swarm size N from 200 to 2000 peers. 0.RXC w/o BL BW .2 Mb/s.984 1 1. Due to the lack of space.1 1. during and after a flash crowd event.99.RND w BL RND ..93 0. Schemes RND-RND and RTT-RXC in G Homo scenario with 1000 peers and rs = 1. all peers in subnets 1 and 2. Scaling with swarm size Considering again G Homo scenario. i.. always ensures better performance than RND-RND.97 0.9 1 1.96 0. RTTRXC. the adversarial scenario makes it more challenging to achieve a good QoE. NI = 5 0. especially when the smart RTT-RXC policy is adopted. NI = 30 RTT-RXC.

75 0.0 Mb/s for PlanetLab experiments.8 1 Rank 40 0.88 0.RND 0 0.RND 0. 100] s accordingly. The best trade-off value is Rup = 1 peer/s. Fig. too large values. as peers do not have enough time to collect significant measurements about the in-neighbor “quality” (amount of received chunks). in heavy churning conditions. VII.9 0.5.25 Pchurn=0.3 rs [Mbps] Figure 18. i.86 0.RND RND .85 0.RXC BW . e. excluding those that had .82 0. 1 0.5 1 2 4 0. too low Rup induces peers to react slowly to sudden changes brought by churning peers. The evolution during time of the average outgoing neighborhood size setting different Rup values.8 1 0. 16 reports the evolution over time of the average size of the outgoing neighborhood NO of class 1 peers. second.05 1. Indeed.RND RND . and thus leading to bad system performance. We selected all active PlanetLab nodes.8 0.0 Mb/s.0 0.8 Mb/s and rs = 1.9 0. Fig. but it allows quantifying the reactivity of the topology to such abrupt changes.50 Pchurn=0. impair the performance as well.RXC BW .8 0.7 Upload Bandwidth 2. Scheme RTT-RXC in G Homo scenario with rs = 1.15 1. Also in this case Rup = 1 peer/s setup represents a good trade off. SSIM G Homo scenario with rs = 1. Sssim (average) index when varying video rate for PlanetLab experiments.98 0.RXC BW .94 0..8 1 0.3 and change Tup ∈ [2.e. Remark G – Fast topology updates allow the overlay topology i) to react quickly to changes in the network scenario. such as for Pchurn >= 0.g.8 0 100 200 300 400 500 600 700 800 900 Time [s] SSIM Out-Neighborood size SSIM Rup 100 Percentage of Peers 50 % 50 % Pchurn=0.2 rs = 1. i.125 0.2 rs = 0.e. We set NI = 30.0 Pchurn=0.95 Figure 15.2 peer/s 80 60 RTT ..95 Class 1 2 SSIM 0.RXC RTT . For this case we adopted scheme RTT-RXC and rs = 0.64 Mb/s ± 10% 0. and force all high-bandwidth peers to experience an abrupt up-link bandwidth reduction from 5 Mbit/s to 0.84 0.4 0..64 Mbit/s (on average) at time 480 s.11 Table VII C HARACTERISTICS OF PEER CLASSES IN THE P LANET L AB EXPERIMENT.1 1. to distinguish “good” from “bad” in-neighbors. We consider the RTT-RXC scheme.9 0. when NI = 30.9 RTT . and ii) to prune quickly peers which left the system. We now consider the G Homo scenario again. smaller values of Rup slow down the system reactivity.85 Rup=2 peer/s Rup=1 peer/s Rup=0. e. The plot shows that the system is very robust to different Rup values. On the other hand. Fup = 0. in all cases.9 0. impairing performance.RXC RTT .. and thus find it difficult RTT .92 0.00 Mb/s ± 10% 0. Sssim (p) for rs = 0.85 0. Only under stressed scenarios.RXC RTT . However.g.2 1. P LANET L AB E XPERIMENTS We now present similar experiments on PlanetLab.95 20 0 0.RND RND . Observe that given a reasonable size of the in-neighbors set.96 0.8 ρ=0.6 0.25 0. all considered schemes nicely react to the sudden ingress of 500 peers. Scheme RTT-RXC in G Homo scenario with 1000 peers and rs = 0. Two observations hold: first. the swarm takes only 30s to get back to a stable regime.75 0.5 peer/s Rup=0.9 0. 0 0.25 1.85 Figure 16. Rup = 2 peer/s.8 0.8 Mb/s. This scenario is rather artificial. 15 reports the Sssim (computed and averaged over peers that never leave the system) varying the rate of update Rup .RND 0.0 Mb/s. 0. the system takes a longer time to stabilize. Average Sssim vs Rup for different Pchurn . Tup = 10 s. Rup becomes critical: too high Rup does not let the swarm achieve a stable state.8 Mb/s. too fast updates introduce instability in the overlay construction process.4 0.6 Rank Figure 17. Instead when NI = 5. 
driving peers to never achieve a stable incoming neighborhood.95 1 1. Different values of inneighborhood update rate Rup are considered.

0 Mbit/s (bottom). we obtained a set of 449 nodes scattered worldwide. peers were interconnected through Pastry overlay topology which implements –as many others P2P substrates like Chord [30] or CAN– some location awareness based on number of IP hops. Average upload capacity results to have an upper bound of 1.8 Mbit/s (top) and rs = 1. For this reason. but the actual value largely depends on the status of PlanetLab nodes and their network connection.8 Mbit/s (top plot). Each curve represents the average of 10 different runs. Based on Gnutella overlay. Observe that when 0. Sssim (p) has been sorted in decreasing values to ease visualization. DONet (or Coolstreaming) [2] is a successful P2P-TV system implementation. PPStream [25]. Many new features were introduced in [31] to improve the streaming service and. then. PlanetLab experiments. but its datadriven overlay topology does not exploit any information from underlying network levels. Observe. to the best of our knowledge. Once more. in particular. i. however. confirming again to be the best choice to adopt. Increasing system load. This design employs a scheduling policy based on chunk rarity and available bandwidth of peers. Observe that this only guarantees an upper bound to the actual available peer upload bandwidth which may be smaller due to competing experiments running on the same PlanetLab node or due to other bottlenecks on the access links of the node.0 Mbit/s (bottom plot). and restricted the scenario to two classes as shown in Table VII: half of the peers have 2 Mbit/s at their uplink. 18 that shows the average Sssim as a function of video-rate rs . Sssim (p). authors proposed a new neighbor re-selection heuristic based only on peers up-link bandwidth. In [32]. Thus. SopCast [26] were proposed in recent years. R ELATED W ORK Many popular commercial applications such as PPLive [23]. that there is always a certain fraction of nodes that cannot receive the video due to congestion at local resources. but also in this case. and it is the only policy that can combine good performance with low network costs. delivering the video in good quality to more peers than the previous two policies.0 Mbit/s). When no location awareness is enabled. As a result. Fig. VIII.e. for the same four policies (RND-RND.e. consistently outperforming other policies for each value of rs . Observe that when the amount of system resources is large enough with respect to the video-rate.2 0 0 50 100 150 200 250 RTT [ms] 300 350 400 Figure 19.4 RTT-RND RTT-RXC BW-RXC RND-RND 0. Although RTT-RXC is only second best in terms of network cost the difference is marginal. 17 are confirmed in Fig. making any statement about their overlay topology design strategies impossible. Again RTT-RXC proves to be the most reliable choice. different schemes for topology management perform rather similarly. A pure bandwidth based policy (BW-RXC) provides good performance. when rs = 0. 17 reports each peer’s individual SSIM performance. rs = 1. RTT aware policies significantly reduce ζ(p) thanks to their ability to select closer in-neighbors. Results depicted in Fig. An early solution called GnuStream was presented in [28].75 (rs = 1. the best quality (the highest portion of peers receiving good quality) is achieved combining bandwidth-awareness with proximitybased schemes (RTT-RXC). this last policy achieves the goal of localizing traffic without impairing performance. 
VIII. RELATED WORK

Many popular commercial applications such as PPLive [23], UUSee [24], PPStream [25] and SopCast [26] were proposed in recent years, but almost no information about their internal implementation is available, making any statement about their overlay topology design strategies impossible. Only a few recent measurement studies suggest that simple random based policies are adopted by SopCast [20], and that some slight locality-awareness is implemented in PPLive [27].

Focusing on the available literature on purely mesh-based P2P-TV systems, many solutions can be found. However, to the best of our knowledge, none of them provides general and detailed guidelines for the overlay topology design process. An early solution called GnuStream was presented in [28]. Based on the Gnutella overlay, GnuStream implemented a load distribution mechanism where peers were expected to contribute to chunk dissemination in a way proportional to their current capabilities. A more refined solution called PROMISE was introduced in [29]. This design employs a scheduling policy based on chunk rarity and available bandwidth of peers; peers were interconnected through the Pastry overlay topology, which implements (as many other P2P substrates, like Chord [30] or CAN) some location awareness based on the number of IP hops, and the authors proposed an improved seeder choice based on network tomography techniques. DONet (or Coolstreaming) [2] is a successful P2P-TV system implementation, but its data-driven overlay topology does not exploit any information from underlying network levels. Many new features and some fundamental improvements were introduced in [31] to improve the streaming service; in particular, the authors proposed a new neighbor re-selection heuristic based only on peers' up-link bandwidth. In [32], the authors showed the design aspects of their application called AnySee. Even if partially based on multicast, this hybrid mesh-based system relies on an overlay topology that aims at matching the underlying physical network while pruning slow logical connections.

In [33] the authors presented a study about some key design issues related to mesh-based P2P-TV systems. They focused on understanding the real limitations of this kind of applications and presented a system based on a directed and randomly generated overlay, in which the degree of peers is made proportional to their available bandwidth. In [10], the authors experimentally compare unstructured systems with multiple-tree based ones, showing that the former perform better in highly dynamic scenarios as well as in scenarios with bandwidth limitations. This strengthens our choice of exploring topology management policies for mesh-based streaming systems.

Turning our attention to more theoretical studies about overlay topology formation, in [3] the problem of building an efficient overlay topology, taking into account both latency and bandwidth, has been formulated as an optimization problem. Similar in spirit, in [4] a distributed and adaptive algorithm for the optimization of the overlay topology in heterogeneous environments has been proposed. In [34] a theoretical investigation on optimal topologies is formulated, considering latency and peer bandwidth heterogeneity, and scaling laws are thus discussed. The authors of [35] propose a mechanism to build a tree structure on which information is pushed, but network latencies are still ignored. [36] shows with simple stochastic models that overlay topologies with small-world properties are particularly suitable for chunk distribution in P2P-TV systems. These works show that good topological properties are guaranteed by location awareness schemes; however, in highly idealized scenarios, the interactions between the overlay topology structure and the chunk distribution process are ignored. For this reason, we propose in this paper an overlay topology design strategy that, based on simple RTT measurements to add peers, coupled with an estimation of the quality of the peer-to-peer relation to drop them, aims at creating an overlay with good properties and low chunk delivery delays.

IX. CONCLUSIONS

P2P-TV systems are extremely complex, and remain extremely difficult to study; in unstructured systems the impact of different construction and maintenance strategies for the overlay topology is of the utmost importance, and the assessment of their performance through experiments has been rarely undertaken. The work presented in this paper partially fills this gap. PeerStreamer was developed within the framework of the NAPA-WINE project; its modularity and flexibility allowed the selection of arbitrary strategies for the construction of the overlay topology. In a fully controlled networking environment, and then over the Internet, we have run a large campaign of experiments measuring the impact of different filtering functions applied to the management of peer neighborhoods. Results show that proper management, taking into account latency and peer heterogeneity, leads to a win-win situation where the performance of the application is improved while the network usage is reduced compared to a classical benchmark with random peer selection.

REFERENCES

[1] V. Pai, K. Kumar, K. Tamilmani, V. Sambamurthy, and A. E. Mohr, "Chainsaw: Eliminating trees from overlay multicast," in IPTPS, Ithaca, NY, US, February 2005.
[2] X. Zhang, J. Liu, B. Li, and T.-S. P. Yum, "Coolstreaming/DONet: A data-driven overlay network for peer-to-peer live media streaming," in IEEE INFOCOM, Miami, FL, US, March 2005.
[3] D. Ren, Y.-T. H. Li, and S.-H. G. Chan, "On Reducing Mesh Delay for Peer-to-Peer Live Streaming," in IEEE INFOCOM, Phoenix, AZ, US, April 2008.
[4] R. Lobb, A. Couto da Silva, E. Leonardi, M. Mellia, and M. Meo, "Adaptive Overlay Topology for Mesh-Based P2P-TV Systems," in ACM NOSSDAV, Williamsburg, VA, US, June 2009.
[5] A. Couto da Silva, E. Leonardi, M. Mellia, and M. Meo, "Chunk Distribution in Mesh-Based Large Scale P2P Streaming Systems: a Fluid Approach," IEEE Trans. on Parallel and Distributed Systems, vol. 22, no. 3, pp. 451–463, March 2011.
[6] X. Jin and Y.-K. Kwok, "Network aware P2P multimedia streaming: Capacity or locality?" in IEEE P2P, Kyoto, JP, August 2011.
[7] Y. Liu, "On the minimum delay peer-to-peer video streaming: how realtime can it be?" in ACM Multimedia, Augsburg, DE, September 2007.
[8] T. Bonald, L. Massoulié, F. Mathieu, D. Perino, and A. Twigg, "Epidemic live streaming: optimal performance trade-offs," in ACM SIGMETRICS, Annapolis, MD, US, June 2008.
[9] L. Abeni, C. Kiraly, and R. Lo Cigno, "On the Optimal Scheduling of Streaming Applications in Unstructured Meshes," in IFIP Networking, Aachen, DE, May 2009.
[10] J. Seibert, D. Zage, S. Fahmy, and C. Nita-Rotaru, "Experimental comparison of peer-to-peer streaming overlays: An application perspective," in IEEE P2P, Aachen, DE, September 2008.
[11] F. Chierichetti, S. Lattanzi, and A. Panconesi, "Rumour spreading and graph conductance," in ACM-SIAM SODA, Austin, TX, US, 2010.
[12] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.
[13] S. Traverso et al., "A delay-based aggregate rate control for p2p streaming systems," Computer Communications, vol. 35, pp. 2237–2244, 2012. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0140366412002332
[14] "NAPA-WINE." [Online]. Available: http://www.napa-wine.eu/
[15] R. Birke, E. Leonardi, M. Mellia, A. Bakay, T. Szemethy, C. Kiraly, R. Lo Cigno, F. Mathieu, L. Muscariello, S. Niccolini, J. Seedorf, and G. Tropea, "Architecture of a Network-Aware P2P-TV Application: the NAPA-WINE Approach," IEEE Comm. Magazine, vol. 49, no. 6, June 2011.
[16] L. Abeni, C. Kiraly, A. Russo, M. Biazzini, and R. Lo Cigno, "Design and implementation of a generic library for P2P streaming," in Workshop on Advanced Video Streaming Techniques for Peer-to-Peer Networks and Social Networking, Florence, IT, October 2010.
[17] R. Fortuna, E. Leonardi, M. Mellia, M. Meo, and S. Traverso, "QoE in Pull Based P2P-TV Systems: Overlay Topology Design Tradeoffs," in IEEE P2P, Delft, NL, August 2010.
[18] N. Tölgyesi and M. Jelasity, "Adaptive peer sampling with newscast," in Euro-Par, Delft, NL, August 2009.
[19] F. Picconi and L. Massoulié, "Is there a future for mesh-based live video streaming?" in IEEE P2P, Aachen, DE, September 2008.
[20] I. Bermudez et al., "Passive characterization of SopCast usage in residential ISPs," in IEEE P2P, Kyoto, JP, August 2011.
[21] R. Birke et al., "Experimental comparison of neighborhood filtering strategies in unstructured p2p-tv systems," in IEEE P2P, Delft, NL, August 2010. [Online]. Available: http://www.telematica.polito.it/traverso/papers/p2p10.pdf
[22] F. Lehrieder, S. Oechsner, T. Hossfeld, Z. Despotovic, W. Kellerer, and M. Michel, "Can p2p-users benefit from locality-awareness?" in IEEE P2P, Delft, NL, August 2010.
[23] "PPLive." [Online]. Available: http://www.pplive.com
[24] "UUSee." [Online]. Available: http://www.uusee.com
[25] "PPStream." [Online]. Available: http://www.ppstream.com
[26] "SOPCast." [Online]. Available: http://www.sopcast.com
[27] L. Vu, I. Gupta, J. Liang, and K. Nahrstedt, "Understanding overlay characteristics of a large-scale peer-to-peer IPTV system," ACM Trans. Multimedia Comput. Commun. Appl., vol. 6, no. 4, pp. 31:1–31:24, Nov. 2010. [Online]. Available: http://doi.acm.org/10.1145/1865106.1865115

[28] X. Jiang, Y. Dong, D. Xu, and B. Bhargava, "GnuStream: a P2P media streaming system prototype," in IEEE ICME, 2003.
[29] M. Hefeeda, A. Habib, B. Botev, D. Xu, and B. Bhargava, "PROMISE: peer-to-peer media streaming using CollectCast," in ACM Multimedia, 2003.
[30] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan, "Chord: A scalable peer-to-peer lookup service for internet applications," SIGCOMM Comput. Commun. Rev., vol. 31, no. 4, pp. 149–160, August 2001.
[31] B. Li, S. Xie, Y. Qu, G. Y. Keung, C. Lin, J. Liu, and X. Zhang, "Inside the new coolstreaming: Principles, measurements and performance implications," in IEEE INFOCOM, Phoenix, AZ, US, April 2008.
[32] X. Liao, H. Jin, Y. Liu, L. M. Ni, and D. Deng, "AnySee: Peer-to-peer live streaming," in IEEE INFOCOM, Barcelona, ES, April 2006.
[33] N. Magharei and R. Rejaie, "PRIME: Peer-to-peer receiver-driven mesh-based streaming," in IEEE INFOCOM, Anchorage, AK, US, May 2007.
[34] T. Small, B. Liang, and B. Li, "Scaling laws and tradeoffs in Peer-to-Peer live multimedia streaming," in ACM Multimedia, Santa Barbara, CA, US, October 2006.
[35] T. Locher, R. Meier, S. Schmid, and R. Wattenhofer, "Push-to-Pull Peer-to-Peer Live Streaming," in Distributed Computing, A. Pelc, Ed., ser. Lecture Notes in Computer Science, vol. 4731. Springer Berlin / Heidelberg, 2007, pp. 388–402.
[36] J. Chakareski, "Topology construction and resource allocation in P2P live streaming," in Intelligent Multimedia Communication: Techniques and Applications, ser. Studies in Computational Intelligence, vol. 280. Springer Berlin / Heidelberg, 2010, pp. 217–251.

Stefano Traverso received his Ph.D. degree in Networking Engineering at Politecnico di Torino, Italy. He is currently a Post-doc Fellow of the TNG group of Politecnico di Torino. During his Ph.D. and Post-doc he has been visiting the Telefonica I+D research center in Barcelona, Spain, and NEC Laboratories in Heidelberg, Germany. His research interests include P2P networks, streaming systems, and content delivery networks.

Luca Abeni is assistant professor at DISI, University of Trento, Italy. He graduated in computer engineering from the University of Pisa in 1998, and received a Ph.D. degree at "Scuola Superiore S. Anna", Pisa, in 2002. From 2003 to 2006, Luca worked in Broadsat S.R.L., developing IPTV applications and audio/video streaming solutions over wired and satellite (DVB-MPE) networks. His main research interests are real-time operating systems, scheduling algorithms, quality of service, multimedia applications, and audio/video streaming.

Robert Birke received his Ph.D. degree from the Politecnico di Torino in 2009, with the telecommunications group, under the supervision of professor Fabio Neri. Currently he works as a postdoc in the system fabrics group at IBM Research Zurich Lab., Switzerland. His research interests include high performance computing, datacenter networks with a special focus on performance, and virtualization.

Csaba Kiraly graduated in Informatics and Telecommunications from the Budapest University of Technology and Economics in 2001, and he received his Ph.D. degree from the same university. He was research associate at DISI, University of Trento, at the time of this work, and is now with Fondazione Bruno Kessler (FBK), Trento, Italy. His main research interests are in design and performance evaluation of overlay networks, P2P networks and networked systems in general, with particular focus on P2P streaming and privacy enhancing technologies. He is co-author of more than 30 scientific papers, all of them in the area of telecommunication networks. Csaba Kiraly is member of the IEEE and ACM.

Emilio Leonardi (SM'09) received a Dr. Ing. degree in Electronics Engineering in 1991 and a Ph.D. in Telecommunications Engineering in 1995, both from Politecnico di Torino. He is an Associate Professor at the Dipartimento di Elettronica of Politecnico di Torino. He was with the Computer Science Department, University of California, Los Angeles, as a Visiting Scholar. He has co-authored over 180 papers published in international journals and presented in leading international conferences. His research interests are in the field of performance evaluation of computer and communication networks, queueing theory and packet switching.

Renato Lo Cigno (SM'11) is Associate Professor at the Department of Information Engineering and Computer Science (DISI) of the University of Trento, Italy, where he leads a research group in computer and communication networks. He received a degree in Electronic Engineering with a specialization in Telecommunications from Politecnico di Torino in 1988, the same institution where he worked until 2002. His current research interests are in performance evaluation of wired and wireless networks, Quality of Service management, modeling and simulation techniques, congestion control, and P2P networks. He has been Area Editor for Computer Networks, and is regularly involved in the organizing and technical committees of leading IEEE, ACM and IFIP conferences, such as Infocom, Globecom, ICC, MSWiM, WONS and P2P. He is senior member of IEEE and ACM.

Marco Mellia (SM'08) is with the Dipartimento di Elettronica e Telecomunicazioni at Politecnico di Torino as an Assistant Professor. In 1997, he was a Researcher supported by CSELT, and he has been with the Computer Science Department of Carnegie Mellon University. In 2002 he visited the Sprint Advanced Technology Laboratories, California, and in 2011 and 2012 he visited Narus Inc., California, as a Visiting Scholar for a total of 14 months. He participated in the program committees of several conferences including ACM SIGCOMM, ACM CoNEXT, IEEE Infocom, IEEE Globecom and IEEE ICC, and he is Area Editor of ACM CCR. He is currently the coordinator of the mPlane Integrated Project that focuses on building an Intelligent Measurement Plane for Future Network and Application Management. His research interests are in the design and investigation of energy efficient networks and in traffic monitoring and analysis.