[Figure 5. Statistics derived from a one-hour trace collected by the MAC/IP locator. Panels: (a) latency (ms) vs. sequence number; (b) and (c) packing ratio vs. timeline (sec); CDFs of packing interval (sec) and query latency (sec).]

To respond to a current event-reduction request, [PACK moves down one level to l − 1 if] δl exceeds a positive threshold (0.1), or moves up one level to l + 1 if δl exceeds a negative threshold (−0.1). Otherwise, PACK uses the previous level. [...] chose Pastry [10] as the overlay routing protocol, but [...] with the median interval approximately 11 seconds. The [...] from the time the AP query is resolved and the timestamp in the original syslog event. Although we set the connection timeout to 30 seconds for each query, the longest delay to return a query was 84 seconds; some APs were under heavy load and slow to return results even after the connection was established.
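The level-selection rule described above lends itself to a direct sketch. The following is a hypothetical illustration of that rule, not the paper's implementation; the function and variable names are invented, and the clamping of the level to [0, max_level] is an added assumption:

```python
# A minimal sketch (not the authors' code) of PACK's level adaptation:
# on each event-reduction request, move down one filtering level when the
# indicator delta_l exceeds a positive threshold (0.1), move up one level
# when delta_l falls below a negative threshold (-0.1), and otherwise keep
# the previous level.
POS_THRESHOLD = 0.1
NEG_THRESHOLD = -0.1

def next_level(level: int, delta_l: float, max_level: int) -> int:
    """Choose the filtering level for the next event-reduction request."""
    if delta_l > POS_THRESHOLD and level > 0:
        return level - 1  # delta_l above +0.1: drop to level l - 1
    if delta_l < NEG_THRESHOLD and level < max_level:
        return level + 1  # delta_l below -0.1: rise to level l + 1
    return level          # otherwise reuse the previous level
```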
service: given a wireless IP address, the service can identify the AP where the device is currently associated. This enables us to deploy location-based applications, often without modifying legacy software. For instance, we modified an open-source Web proxy so it can push location-oriented content to any requesting Web browser on wireless devices, based on the IP address in the HTTP header. Currently we insert information about the building as a text bar on top of the client-requested page.

To provide this kind of service, a locator subscribes to the syslog source and monitors all devices' association with the network. The association message contains the device's MAC address and associated AP name, but does not always include the IP address of that device. In such cases, the locator queries the AP for the IP addresses of its associated clients using an HTTP-based interface (SNMP is another choice, but appears to be slower). The query takes from hundreds of milliseconds to dozens of seconds, depending on the AP's current load and configuration. We also do not permit more than one query to the same AP within 30 seconds, so that our queries do not impose too much overhead on normal traffic. As a result, we frequently find that the locator falls behind the syslog event stream, given the large wireless population we have.

We focus our discussion on the subscription made by the locator to the syslog source, where the events tend to overflow the receiver's queue RB. The locator uses a PACK policy consisting of six filters (we omit their description due to space limitations). We collected the PACK trace for an hour-long run, and Figure 5 shows some basic statistics.

The upper-left plot presents the distribution of the filtering levels triggered by the PACK service. All filtering levels were triggered, varying from 31 times to 61 [...]

6 Related work

Traditional congestion- and flow-control protocols concern both unicast and multicast. They are typically transparent to applications and provide semantics such as reliable in-order data transport. When computational and network resources are limited, these protocols have to either regulate the sender's rate or disconnect the slow receivers [5, 9]. The usual alternative, UDP/IP, makes no guarantees about delivery or ordering, and forces applications to tolerate any and all loss, end to end. Our goal, on the other hand, is to trade reliability for quicker data delivery and service continuity for loss-tolerant applications. Our PACK service applies to data streams with a particular structure. This loss of generality, however, enables PACK to enforce receiver-specified policies. The PACK protocol does not prevent or bound the amount of congestion, which also depends on cross traffic. But with an appropriate customized policy, a receiver is able to get critical data or summary information during congestion. For many applications this outcome is better than a strictly reliable service (TCP) or a random-loss service (UDP).

Performing application-specific computation, including filtering, inside networks is not a new idea. In particular, it is possible to implement our PACK service using a general Active Network (AN) framework [12]. We, however, chose an overlay network that is easier to deploy and has explicit support for multicast, mobility, and data reduction. Bhattacharjee and others propose to manage congestion by dropping data units based on source-attached policies [1]. Receiver-driven layered multicast (RLM) [6] actively detects network congestion and finds the best multicast group (layer) for the multimedia application to join. Pasquale et al. put sink-supplied filters as close to the audio/video source as possible to save network bandwidth [8]. Our work, however, aims at broader categories of applications and has to support sink-customized policies, since the source typically cannot predict how the sinks want to manipulate the sensor data. PACK policies thus need to be more expressive than the filtering operations on multimedia streams.

Researchers in the database community provide a query-oriented view on continuous stream processing. One focus is to formally define a SQL-like stream-manipulation language, which has the potential to replace PACK's current "ad hoc" XML-based interface. In particular, the Aurora system reduces load by dynamically injecting data-drop operators into a query network [11]. Choosing where to put the dropper and how much to drop is based on the "QoS graph" specified by applications. Aurora assumes complete knowledge of the query network and uses a pre-generated table of drop locations as the search space. The QoS function provides quantitative feedback when dropping data, while PACK allows explicit summarization of dropped events.

7 Conclusion and Future Work

We present a novel approach to solving the queue-overflow problem using application-specified policies, taking advantage of the observation that many context-aware pervasive-computing applications are loss-tolerant in nature. Our PACK service enforces data-reduction policies throughout the data path when queues overflow, whether caused by network congestion, slow receivers, or temporary disconnection. Our sink-based approach and the expressiveness of PACK policies pose a scalability limitation, but as a trade-off provide more fine-grained results tailored to applications' needs. Our experimental results show that PACK policies also reduce average delivery latency for fast data streams, and our application study shows that the PACK service works well in practice.

We plan to evaluate the PACK service in larger-scale settings, in particular to investigate how policy-driven data reduction affects fairness across multiple subscriptions with intersecting dissemination paths. PACK currently assumes static policies, and it is another research challenge to allow and enforce dynamic updates of policy throughout the data path while preserving application semantics. It would also be interesting to see whether PACK can dynamically (re)configure the appropriate filters to minimize a programmer's effort to specify filters in exact sequence. Finally, we plan to extend this policy-driven data-reduction approach to an infrastructure-free environment, such as an ad hoc network.

References

[1] S. Bhattacharjee, K. L. Calvert, and E. W. Zegura. An architecture for active networking. In Proceedings of High Performance Networking (HPN'97), April 1997.

[2] A. Carzaniga, D. S. Rosenblum, and A. L. Wolf. Achieving scalability and expressiveness in an Internet-scale event notification service. In Proceedings of the 19th Annual Symposium on Principles of Distributed Computing, pages 219–227, Portland, OR, 2000.

[3] M. Castro, M. Jones, A.-M. Kermarrec, A. Rowstron, M. Theimer, H. Wang, and A. Wolman. An evaluation of scalable application-level multicast built using peer-to-peer overlays. In Proceedings of the 22nd Annual Joint Conference of the IEEE Computer and Communications Societies, pages 1510–1520, San Francisco, CA, 2003.

[4] D. D. Clark and D. L. Tennenhouse. Architectural considerations for a new generation of protocols. In Proceedings of the 1990 ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 200–208, Philadelphia, PA, 1990.

[5] V. Jacobson. Congestion avoidance and control. In Proceedings of the Symposium on Communications Architectures and Protocols, pages 314–329, Stanford, CA, 1988.

[6] S. McCanne, V. Jacobson, and M. Vetterli. Receiver-driven layered multicast. In Proceedings of the 1996 ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 117–130, Palo Alto, CA, 1996.

[7] J. F. McCarthy and E. S. Meidel. ActiveMap: A visualization tool for location awareness to support informal interactions. In Proceedings of the First International Symposium on Handheld and Ubiquitous Computing, pages 158–170, Karlsruhe, Germany, September 1999. Springer-Verlag.

[8] J. C. Pasquale, G. C. Polyzos, E. W. Anderson, and V. P. Kompella. Filter propagation in dissemination trees: Trading off bandwidth and processing in continuous media networks. In Proceedings of the 4th International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV 93), pages 259–268, 1993.

[9] S. Pingali, D. Towsley, and J. F. Kurose. A comparison of sender-initiated and receiver-initiated reliable multicast protocols. In Proceedings of the 1994 ACM Conference on Measurement and Modeling of Computer Systems, pages 221–230, Nashville, TN, 1994.

[10] A. Rowstron and P. Druschel. Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems. In Proceedings of the 2001 International Middleware Conference, pages 329–350, Heidelberg, Germany, November 2001.

[11] N. Tatbul, U. Çetintemel, S. B. Zdonik, M. Cherniack, and M. Stonebraker. Load shedding in a data stream manager. In Proceedings of the 29th International Conference on Very Large Data Bases, pages 309–320, Berlin, Germany, September 2003.

[12] D. L. Tennenhouse, J. M. Smith, W. D. Sincoskie, D. J. Wetherall, and G. J. Minden. A survey of active network research. IEEE Communications, 35(1):80–86, January 1997.