5, MAY 2010

The key feature of many emerging pervasive computing applications is to proactively provide services to mobile individuals. One major challenge in providing proactive services lies in continuously monitoring users' contexts based on numerous sensors in their PAN/BAN environments. Context monitoring in such environments imposes heavy workloads on mobile devices and sensor nodes with limited computing and battery power. We present SeeMon, a scalable and energy-efficient context monitoring framework for sensor-rich, resource-limited mobile environments. Running on a personal mobile device, SeeMon effectively performs context monitoring involving numerous sensors and applications. On top of SeeMon, multiple applications on the mobile device can proactively understand users' contexts and react appropriately. This paper proposes a novel context monitoring approach that provides efficient processing and sensor control mechanisms. We implement and test a prototype system on two mobile devices: a UMPC and a wearable device with a diverse set of sensors. Example applications are also developed based on the implemented system. Experimental results show that SeeMon achieves a high level of scalability and energy efficiency.

INDEX TERMS Context monitoring, shared and incremental processing, sensor control, energy efficiency, personal computing, portable devices, ubiquitous computing, wireless sensor network, pervasive computing.

ABSTRACT Rapid growth of the demand for computational power by scientific, business, and web applications has led to the creation of large-scale data centers consuming enormous amounts of electrical power. We propose an energy-efficient resource management system for virtualized Cloud data centers that reduces operational costs and provides the required Quality of Service (QoS). Energy savings are achieved by continuous consolidation of VMs according to the current utilization of resources, the virtual network topologies established between VMs, and the thermal state of computing nodes. We present first results of a simulation-driven evaluation of heuristics for dynamic reallocation of VMs using live migration according to current requirements for CPU performance. The results show that the proposed technique brings substantial energy savings while ensuring reliable QoS. This justifies further investigation and development of the proposed resource management system.
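The consolidation heuristics in the abstract weigh resource utilization, network topology, and thermal state; as a minimal sketch of just the CPU-utilization dimension, a first-fit-decreasing bin-packing pass places VMs on as few hosts as possible so idle hosts can sleep (all names and capacities here are hypothetical, not from the paper):

```python
def consolidate(vms, host_capacity):
    """First-fit-decreasing placement of VM CPU demands onto the
    fewest hosts; hosts left unused can be switched to sleep mode."""
    hosts = []          # remaining capacity per active host
    placement = {}
    for name, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:          # fits on an already-active host
                hosts[i] -= demand
                placement[name] = i
                break
        else:                           # open a new host
            hosts.append(host_capacity - demand)
            placement[name] = len(hosts) - 1
    return placement, len(hosts)

vms = {"vm1": 0.6, "vm2": 0.5, "vm3": 0.3, "vm4": 0.2, "vm5": 0.2}
placement, active = consolidate(vms, host_capacity=1.0)
print(active, "hosts active;", placement)
```

With 1.8 units of total demand, two unit-capacity hosts suffice here; a real controller would also respect QoS, migration cost, and thermal limits.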


ABSTRACT Accurate and timely detection of infectious disease outbreaks provides valuable information which can enable public health officials to respond to major public health threats in a timely fashion. However, disease outbreaks are often not directly observable. For surveillance systems used to detect outbreaks, noise caused by routine behavioral patterns and by special events can further complicate the detection task. Most existing detection methods combine a time series filtering procedure with a statistical surveillance method. The performance of this "two-step" detection method is hampered by the unrealistic assumption that the training data are outbreak-free. Moreover, existing approaches are sensitive to extreme values, which are common in real-world data sets. We consider the problem of identifying outbreak patterns in a syndrome count time series using Markov switching models. The disease outbreak states are modeled as hidden state variables which control the observed time series. A jump component is introduced to absorb sporadic extreme values that may otherwise weaken the ability to detect slow-moving disease outbreaks. Our approach outperforms several state-of-the-art detection methods in terms of detection sensitivity using both simulated and real-world data. INDEX TERMS Markov switching models, syndromic surveillance, Gibbs sampling, outbreak detection.
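As a toy illustration of the modeling idea (not the paper's Gibbs-sampled estimator; every parameter below is hypothetical), one can simulate a count series driven by a hidden baseline/outbreak state plus a sporadic jump component that stands in for extreme values:

```python
import random

random.seed(7)

# Hypothetical transition probabilities and rates, not from the paper.
P = {0: {0: 0.97, 1: 0.03},   # baseline day -> baseline/outbreak
     1: {0: 0.10, 1: 0.90}}   # outbreaks tend to persist
BASE_MEAN, OUTBREAK_BOOST = 20.0, 15.0
JUMP_PROB, JUMP_SIZE = 0.02, 60      # sporadic extreme values

def simulate(n_days):
    """Generate (hidden_state, observed_count) pairs for n_days."""
    state, series = 0, []
    for _ in range(n_days):
        state = 0 if random.random() < P[state][0] else 1
        mean = BASE_MEAN + (OUTBREAK_BOOST if state else 0.0)
        count = max(0, round(random.gauss(mean, 4.0)))
        if random.random() < JUMP_PROB:   # jump component absorbs spikes
            count += JUMP_SIZE
        series.append((state, count))
    return series

series = simulate(365)
print(sum(s for s, _ in series), "outbreak days out of", len(series))
```

The inference task the paper tackles is the reverse direction: recovering the hidden state sequence from the counts while the jump component soaks up the spikes.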

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 1, FEBRUARY 2010. RENDERED PATH: RANGE-FREE LOCALIZATION IN ANISOTROPIC SENSOR NETWORKS WITH HOLES ABSTRACT Sensor positioning is a crucial part of many location-dependent applications that utilize wireless sensor networks (WSNs). Current localization approaches can be divided into two groups: range-based and range-free. Due to the high costs and critical assumptions, the range-based schemes are often impractical for WSNs. The existing range-free schemes, on the other hand, suffer from poor accuracy and low scalability. Without the help of a large number of uniformly deployed seed nodes, those schemes fail in anisotropic WSNs with possible holes. To address this issue, we propose the Rendered Path (REP) protocol. To the best of our knowledge, REP is the only range-free protocol for locating sensors with a constant number of seeds in anisotropic sensor networks. INDEX TERMS Distributed algorithms, distributed computing, multisensor systems, position measurement.

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 7, JULY 2010. SECURE DATA COLLECTION IN WIRELESS SENSOR NETWORKS USING RANDOMIZED DISPERSIVE ROUTES ABSTRACT Compromised node and denial of service are two key attacks in wireless sensor networks (WSNs). In this paper, we study data delivery mechanisms that can with high probability circumvent black holes formed by these attacks. We argue that classic multipath routing approaches are vulnerable to such attacks, mainly due to their deterministic nature. So once the adversary acquires the routing algorithm, it can compute the same routes known to the source, making all information sent over these routes vulnerable to its attacks. In this paper, we develop mechanisms that generate randomized multipath routes. Under our designs, the routes taken by the "shares" of different packets change over time. So even if the routing algorithm becomes known to the adversary, the adversary still cannot pinpoint the routes traversed by each packet; hence, the shares are quite capable of circumventing black holes. Besides randomness, the generated routes are also highly dispersive and energy efficient. We analytically investigate the security and energy performance of the proposed schemes. We also formulate an optimization problem to minimize the end-to-end energy consumption under given security constraints. Extensive simulations are conducted to verify the validity of our mechanisms. INDEX TERMS Randomized multipath routing, secure data delivery, wireless sensor network.
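The "shares" in the abstract can be pictured with a simple (n, n) XOR secret-splitting sketch: every share must be captured to recover the packet, so an adversary sitting on only some of the randomized routes learns nothing. This is an assumption-laden illustration, not the paper's actual share-generation or routing mechanism:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(packet: bytes, n: int):
    """Split packet into n random XOR shares; all n are required for
    reconstruction (an (n, n) scheme)."""
    shares = [os.urandom(len(packet)) for _ in range(n - 1)]
    last = packet
    for s in shares:
        last = xor_bytes(last, s)   # last = packet XOR all random shares
    shares.append(last)
    return shares

def reassemble(shares) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

# each share would then be forwarded along its own randomized route
shares = make_shares(b"sensor reading 42", 4)
assert reassemble(shares) == b"sensor reading 42"
```

Threshold schemes such as Shamir's (where only a subset of shares is needed) trade this all-or-nothing property for robustness to lost shares.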

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 2, APRIL 2010. TCAM RAZOR: A SYSTEMATIC APPROACH TOWARDS MINIMIZING PACKET CLASSIFIERS IN TCAMS ABSTRACT Packet classification is the core mechanism that enables many networking services on the Internet, such as firewall packet filtering and traffic accounting. Using ternary content addressable memories (TCAMs) to perform high-speed packet classification has become the de facto standard in industry. TCAMs classify packets in constant time by comparing a packet with all classification rules of ternary encoding in parallel. Despite their high speed, TCAMs suffer from the well-known range expansion problem. As packet classification rules usually have fields specified as ranges, converting such rules to TCAM-compatible rules may result in an explosive increase in the number of rules. This is not a problem if TCAMs have large capacities. Unfortunately, TCAMs have very limited capacity, and the number of rules in packet classifiers has been increasing rapidly with the growing number of services deployed on the Internet. Even worse, more rules mean more power consumption and more heat generation for TCAMs. In this paper, we consider the following problem: given a packet classifier, how can we generate another semantically equivalent packet classifier that requires the least number of TCAM entries? We propose a systematic approach, TCAM Razor, that is effective, efficient, and practical. In terms of effectiveness, TCAM Razor achieves a total compression ratio of 29.0%, which is significantly better than the previously published best result of 54%. In terms of efficiency, our TCAM Razor prototype runs in seconds, even for large packet classifiers. Finally, in terms of practicality, unlike many previous range encoding schemes, our TCAM Razor approach can be easily deployed as it does not require any modification to existing packet classification systems. INDEX TERMS Algorithm, design, packet classification, router, ternary content addressable memory (TCAM) optimization.
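The range expansion problem mentioned above can be made concrete: a single range rule on a w-bit field may expand into many ternary prefixes. A minimal greedy expansion (the standard prefix-expansion technique, not TCAM Razor itself) looks like:

```python
def range_to_prefixes(lo, hi, width):
    """Greedily expand the integer range [lo, hi] on a `width`-bit
    field into ternary prefixes ('*' = don't-care), as stored in a TCAM."""
    prefixes = []
    while lo <= hi:
        # grow the largest aligned power-of-two block starting at lo
        k = 0
        while k < width:
            step = 1 << (k + 1)
            if lo % step != 0 or lo + step - 1 > hi:
                break
            k += 1
        if k == width:
            prefixes.append('*' * width)
        else:
            prefixes.append(format(lo >> k, '0%db' % (width - k)) + '*' * k)
        lo += 1 << k
    return prefixes

# a 4-bit field constrained to [1, 14] already costs 6 TCAM entries
print(range_to_prefixes(1, 14, 4))
# -> ['0001', '001*', '01**', '10**', '110*', '1110']
```

In general a range on a w-bit field can expand to up to 2w - 2 prefixes, which is why minimizing the equivalent classifier matters.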

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 7, JULY 2010. TDMA SCHEDULING WITH OPTIMIZED ENERGY EFFICIENCY AND MINIMUM DELAY IN CLUSTERED WIRELESS SENSOR NETWORKS ABSTRACT In this paper, we propose a solution to the scheduling problem in clustered wireless sensor networks (WSNs). The objective is to provide network-wide optimized time division multiple access (TDMA) schedules that can achieve high power efficiency, zero conflict, and reduced end-to-end delay, while simultaneously satisfying a specified reliability objective. To achieve this objective, we first build a nonlinear cross-layer optimization model involving the network, medium access control (MAC), and physical layers, which aims at reducing the overall energy consumption. We solve this problem by transforming the model into two simpler subproblems. Based on the network-wide flow distribution calculated from the optimization model and the transmission power on every link, we then propose an algorithm for deriving the TDMA schedules, utilizing the slot reuse concept to achieve minimum TDMA frame length. Numerical results reveal that our proposed solution reduces the energy consumption and delay significantly.
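The slot-reuse idea can be sketched as greedy conflict-graph coloring: links that conflict (e.g., share a node or interfere) get distinct slots, while non-conflicting links share a slot, keeping the frame short. This is a simplified stand-in for the paper's schedule-derivation algorithm, over a hypothetical conflict graph:

```python
def tdma_schedule(links, conflicts):
    """Assign each link the smallest slot not used by any of its
    already-scheduled conflicting links (greedy graph coloring)."""
    slot_of = {}
    for link in links:
        taken = {slot_of[o] for o in conflicts.get(link, ()) if o in slot_of}
        slot = 0
        while slot in taken:
            slot += 1
        slot_of[link] = slot
    return slot_of

links = ["AB", "BC", "CD", "DE"]   # a 4-link chain; adjacent links conflict
conflicts = {"AB": ["BC"], "BC": ["AB", "CD"], "CD": ["BC", "DE"], "DE": ["CD"]}
schedule = tdma_schedule(links, conflicts)
frame_length = max(schedule.values()) + 1
print(schedule, "frame length:", frame_length)
```

Here AB/CD reuse slot 0 and BC/DE reuse slot 1, so the chain needs a frame of only 2 slots instead of 4.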

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 2, APRIL 2010, PP. 406-419. THE OPTIMALITY OF TWO PRICES: MAXIMIZING REVENUE IN A STOCHASTIC COMMUNICATION SYSTEM ABSTRACT This paper considers the problem of pricing and transmission scheduling for an Access Point (AP) in a wireless network, where the AP provides service to a set of mobile users. The goal of the AP is to maximize its own time-average profit. We first obtain the optimum time-average profit of the AP and prove the "Optimality of Two Prices" theorem. We then develop an online scheme that jointly solves the pricing and transmission scheduling problem in a dynamic environment. The scheme uses an admission price and a business decision as tools to regulate the incoming traffic and to maximize revenue. We show the scheme can achieve any average profit that is arbitrarily close to the optimum, with a tradeoff in average delay. This holds for general Markovian dynamics for channel and user state variation, and does not require a priori knowledge of the Markov model. The model and methodology developed in this paper are general and apply to other stochastic settings where a single party tries to maximize its time-average profit. INDEX TERMS Wireless Mesh Network, Pricing, Queueing, Dynamic Control, Lyapunov analysis, Optimization.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 22, NO. 8, AUGUST 2010. ADAPTIVE JOIN OPERATORS FOR RESULT RATE OPTIMIZATION ON STREAMING INPUTS ABSTRACT Adaptive join algorithms have recently attracted a lot of attention in emerging applications where data are provided by autonomous data sources through heterogeneous network environments. Their main advantage over traditional join techniques is that they can start producing join results as soon as the first input tuples are available, thus improving pipelining by smoothing join result production and by masking source or network delays. In this paper, we first propose Double Index NEsted-loops Reactive join (DINER), a new adaptive two-way join algorithm for result rate maximization. DINER combines two key elements: an intuitive flushing policy that aims to increase the productivity of in-memory tuples in producing results during the online phase of the join, and a novel reentrant join technique that allows the algorithm to rapidly switch between processing in-memory and disk-resident tuples, thus better exploiting temporary delays when new data are not available. We then extend the applicability of the proposed technique to a more challenging setup: handling more than two inputs. Multiple Index NEsted-loop Reactive join (MINER) is a multiway join operator that inherits its principles from DINER. Our experiments using real and synthetic data sets demonstrate that DINER outperforms previous adaptive join algorithms in producing result tuples at a significantly higher rate, while making better use of the available memory. Our experiments also show that, in the presence of multiple inputs, MINER manages to produce a high percentage of early results, outperforming existing techniques for adaptive multiway join. INDEX TERMS Query processing, streams, join, DINER, MINER.
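The early-result behavior DINER builds on can be illustrated with a plain in-memory symmetric hash join, which emits a result as soon as matching tuples from both inputs have arrived; DINER's flushing policy and reentrant disk phase are deliberately not modeled in this sketch:

```python
from collections import defaultdict

def symmetric_hash_join(stream):
    """stream yields ('L'|'R', key, payload) in arrival order; emit a
    join result the moment both sides of a key have been seen."""
    tables = {'L': defaultdict(list), 'R': defaultdict(list)}
    for side, key, payload in stream:
        tables[side][key].append(payload)          # remember this tuple
        other = 'R' if side == 'L' else 'L'
        for match in tables[other].get(key, []):   # probe the other side
            yield (key, payload, match) if side == 'L' else (key, match, payload)

stream = [('L', 1, 'a'), ('R', 1, 'x'), ('R', 2, 'y'), ('L', 2, 'b')]
early = list(symmetric_hash_join(stream))
print(early)   # results appear without waiting for either input to finish
```

The adaptive-join literature starts from exactly this operator and adds policies for what to spill to disk when memory fills and when to revisit spilled tuples.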

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 6, JUNE 2010, p. 765. ACHIEVABLE CAPACITY IN HYBRID DS-CDMA/OFDM SPECTRUM-SHARING ABSTRACT In this paper, we consider DS-CDMA/OFDM spectrum sharing systems and obtain the achievable capacity of the secondary service under different subchannel selection policies in the fading environment. Subchannel selection policies are divided into two categories: uniform subchannel selection and nonuniform subchannel selection. Uniform subchannel selection is preferred for cases where a priori knowledge of subchannel state information is not available at the secondary transmitter. For cases with available a priori knowledge of subchannel state information, we study various nonuniform subchannel selection policies. In each case, we obtain the optimum secondary service power allocation and the corresponding maximum achievable capacity. Numerical results show that the optimal subchannel selection is based on the minimum value of the subchannel gain between the secondary transmitter and the primary receiver. Then we present results on the scaling law of opportunistic spectrum sharing in DS-CDMA/OFDM systems with multiple users. INDEX TERMS Dynamic spectrum access networks, opportunistic spectrum access, spectrum sharing, DS-CDMA networks, OFDM, interference threshold.

IEEE/ACM TRANSACTIONS ON NETWORKING. CONDITIONAL SHORTEST PATH ROUTING IN DELAY TOLERANT NETWORKS ABSTRACT Delay tolerant networks are characterized by the sporadic connectivity between their nodes and therefore the lack of stable end-to-end paths from source to destination. Since the future node connections are mostly unknown in these networks, opportunistic forwarding is used to deliver messages. However, making effective forwarding decisions using only the network characteristics (i.e., average intermeeting time between nodes) extracted from contact history is a challenging problem. Based on the observations about human mobility traces and the findings of previous work, we introduce a new metric called conditional intermeeting time, which computes the average intermeeting time between two nodes relative to a meeting with a third node using only the local knowledge of the past contacts. We then look at the effects of the proposed metric on shortest path based routing designed for delay tolerant networks. We propose the Conditional Shortest Path Routing (CSPR) protocol, which routes messages over conditional shortest paths in which the cost of links between nodes is defined by conditional intermeeting times rather than the conventional intermeeting times. Through trace-driven simulations, we demonstrate that CSPR achieves a higher delivery rate and lower end-to-end delay compared to shortest path based routing protocols that use the conventional intermeeting time as the link metric.
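Once links are weighted by intermeeting times (conditional or conventional), route computation is ordinary shortest-path search. A minimal Dijkstra sketch over a hypothetical contact graph, where weights stand in for average intermeeting times in hours:

```python
import heapq

def shortest_path(weights, src, dst):
    """Dijkstra over an undirected weighted graph; `weights` maps
    (u, v) -> link cost, e.g., an intermeeting time estimate."""
    graph = {}
    for (u, v), w in weights.items():
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst               # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

weights = {("a", "b"): 5.0, ("b", "c"): 2.0, ("a", "c"): 10.0}
path, cost = shortest_path(weights, "a", "c")
print(path, cost)   # relaying via b beats the direct contact
```

CSPR's contribution is in the weights, conditioning each link's intermeeting time on the previous hop, not in the search itself.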

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 2, APRIL 2010. CONSTRAINED RELAY NODE PLACEMENT IN WIRELESS SENSOR NETWORKS: FORMULATION AND APPROXIMATIONS ABSTRACT One approach to prolong the lifetime of a wireless sensor network (WSN) is to deploy some relay nodes to communicate with the sensor nodes, other relay nodes, and the base stations. The relay node placement problem for wireless sensor networks is concerned with placing a minimum number of relay nodes into a wireless sensor network to meet certain connectivity or survivability requirements. Previous studies have concentrated on the unconstrained version of the problem in the sense that relay nodes can be placed anywhere. In practice, there may be some physical constraints on the placement of relay nodes. To address this issue, we study constrained versions of the relay node placement problem, where relay nodes can only be placed at a set of candidate locations. In the connected relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with a base station through a bidirectional path. In the survivable relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with two base stations (or the only base station in case there is only one base station) through two node-disjoint bidirectional paths. For each of the two problems, we discuss its computational complexity and present a framework of polynomial time approximation algorithms with small approximation ratios. Extensive numerical results show that our approximation algorithms can produce solutions very close to optimal solutions. INDEX TERMS Approximation algorithms, connectivity and survivability, relay node placement, wireless sensor networks (WSNs).

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 2, APRIL 2010. DEMAND-AWARE CONTENT DISTRIBUTION ON THE INTERNET ABSTRACT The rapid growth of media content distribution on the Internet in the past few years has brought with it commensurate increases in the costs of distributing that content. Can the content distributor defray these costs through a more innovative approach to distribution? In this paper, we study the relative performance of peer-to-peer and centralized client-server schemes, as well as a hybrid of the two, both from the point of view of consumers as well as the content distributor. A key element of our approach is to explicitly model the temporal evolution of demand. In particular, we employ a word-of-mouth demand evolution model due to Bass [2] to represent the evolution of interest in a piece of content. Using this approach, we evaluate the benefits of a hybrid system that combines peer-to-peer and a centralized client-server approach against each method acting alone. Our analysis is carried out in an order scaling depending on the total potential mass of customers in the market. Our insights are obtained in a fluid model and supported by stochastic simulations. We show how awareness of demand can be used to attain a given average delay target with the lowest possible utilization of the central server by using the hybrid scheme. We also show how such awareness can be used to take provisioning decisions. INDEX TERMS Bass diffusion, content distribution, delay guarantees, peer-to-peer (P2P).
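The Bass demand model referenced above can be simulated directly. A discrete-Euler sketch with hypothetical innovation/imitation coefficients shows the demand wave a distributor would provision against, following dN/dt = (p + q N/m)(m - N):

```python
def bass_adopters(p, q, m, steps, dt=1.0):
    """Cumulative adopters N(t) under the Bass word-of-mouth model:
    p = innovation coefficient, q = imitation coefficient,
    m = total potential market. Simple forward-Euler integration."""
    n, out = 0.0, []
    for _ in range(steps):
        n += dt * (p + q * n / m) * (m - n)
        out.append(n)
    return out

# hypothetical coefficients; classic Bass fits are roughly this order
adopters = bass_adopters(p=0.03, q=0.38, m=1.0, steps=52)
peak_week = max(range(1, 52), key=lambda t: adopters[t] - adopters[t - 1])
print("demand peaks around week", peak_week + 1)
```

The S-shaped curve (slow start, word-of-mouth surge, saturation) is exactly why a hybrid scheme helps: the central server covers the early trickle, while peers absorb the surge.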

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 22, NO. 5, MAY 2010, p. 651. FALSE NEGATIVE PROBLEM OF COUNTING BLOOM FILTER ABSTRACT The Bloom filter is an effective, space-efficient data structure for concisely representing a data set and supporting approximate membership queries. Traditionally, researchers often believe that it is possible for a Bloom filter to return a false positive, but that it will never return a false negative under well-behaved operations. In this work, we observe that a Bloom filter does return false negatives in many scenarios. By investigating the mainstream variants, we show that the undetectable incorrect deletion of false positive items and the detectable incorrect deletion of multiaddress items are two general causes of false negatives in a Bloom filter. We then measure the potential and exposed false negatives theoretically and practically. Inspired by the fact that the potential false negatives are usually not fully exposed, we propose a novel Bloom filter scheme, which increases the ratio of bits set to a value larger than one without decreasing the ratio of bits set to zero. Mathematical analysis and comprehensive experiments show that this design can reduce the number of exposed false negatives as well as decrease the likelihood of false positives. To the best of our knowledge, this is the first work dealing with both the false positive and false negative problems of the Bloom filter systematically when supporting standard usages of item insertion, query, and deletion operations. INDEX TERMS Bloom filter, false negative, multichoice counting Bloom filter.
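The first failure mode described above is easy to reproduce: deleting an item that the filter only appears to contain (a false positive) decrements counters that genuine items depend on. A small counting Bloom filter sketch, with hypothetical sizing chosen so collisions are common:

```python
import hashlib

class CountingBloom:
    """Counting Bloom filter with m counters and k hash functions."""
    def __init__(self, m, k):
        self.counts = [0] * m
        self.m, self.k = m, k

    def _positions(self, item):
        return [int(hashlib.sha256(("%d:%s" % (i, item)).encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.counts[p] += 1

    def delete(self, item):
        for p in self._positions(item):
            self.counts[p] -= 1

    def query(self, item):
        return all(self.counts[p] > 0 for p in self._positions(item))

items = ["key%d" % i for i in range(10)]
bf = CountingBloom(m=32, k=2)
for it in items:
    bf.add(it)

# hunt for a false positive that shares a counter of value 1 with a real item
fp = next(p for p in ("probe%d" % j for j in range(10000))
          if bf.query(p) and min(bf.counts[q] for q in bf._positions(p)) == 1)
bf.delete(fp)   # a seemingly well-behaved deletion of a "member"
victims = [it for it in items if not bf.query(it)]
print("deleting false positive", fp, "silently erased", victims)
```

The deletion looks legitimate to the filter, yet it zeroes a counter some inserted key relies on, so that key now queries as absent: an undetectable false negative.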

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 5, MAY 2010. FAULT-TOLERANT RELAY NODE PLACEMENT IN HETEROGENEOUS WIRELESS SENSOR NETWORKS ABSTRACT Existing work on placing additional relay nodes in wireless sensor networks to improve network connectivity typically assumes homogeneous wireless sensor nodes with an identical transmission radius. In contrast, this paper addresses the problem of deploying relay nodes to provide fault tolerance with higher network connectivity in heterogeneous wireless sensor networks, where sensor nodes possess different transmission radii. Depending on the level of desired fault tolerance, such problems can be categorized as: 1) full fault-tolerant relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k >= 1) vertex-disjoint paths between every pair of sensor and/or relay nodes, and 2) partial fault-tolerant relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k >= 1) vertex-disjoint paths only between every pair of sensor nodes. Due to the different transmission radii of sensor nodes, these problems are further complicated by the existence of two different kinds of communication paths in heterogeneous wireless sensor networks, namely, two-way paths, along which wireless communications exist in both directions, and one-way paths, along which wireless communications exist in only one direction. Assuming that sensor nodes have different transmission radii while relay nodes use the same transmission radius, this paper comprehensively analyzes the range of problems introduced by the different levels of fault tolerance (full or partial) coupled with the different types of path (one-way or two-way). Since each of these problems is NP-hard, we develop O(sk^2)-approximation algorithms for both one-way and two-way partial fault-tolerant relay node placement, as well as O(sk^3)-approximation algorithms for both one-way and two-way full fault-tolerant relay node placement (s is the best performance ratio of existing approximation algorithms for finding a minimum k-vertex connected spanning graph). To facilitate applications in higher dimensions, we also extend these algorithms and derive their performance ratios in d-dimensional heterogeneous wireless sensor networks (d >= 3). Finally, heuristic implementations of these algorithms are evaluated via QualNet simulations. INDEX TERMS Heterogeneous wireless sensor networks, relay node placement, approximation algorithms.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING. K-ANONYMITY IN THE PRESENCE OF EXTERNAL DATABASES ABSTRACT The concept of k-anonymity has received considerable attention due to the need of several organizations to release microdata without revealing the identity of individuals. Although all previous k-anonymity techniques assume the existence of a public database (PD) that can be used to breach privacy, existing generalization algorithms create anonymous tables using only the microdata table (MT) to be published, independently of the external knowledge available. This omission leads to high information loss. Motivated by this observation, we first introduce the concept of k-join-anonymity (KJA), which permits more effective generalization to reduce the information loss. Briefly, KJA anonymizes a superset of MT, which includes selected records from PD. We propose two methodologies for adapting k-anonymity algorithms to their KJA counterparts. The first generalizes the combination of MT and PD, under the constraint that each group should contain at least one tuple of MT (otherwise, the group is useless and discarded). The second anonymizes MT, and then refines the resulting groups using PD. Finally, we evaluate the effectiveness of our contributions with an extensive experimental evaluation using real and synthetic datasets. INDEX TERMS Privacy, k-anonymity.
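To ground the k-anonymity requirement itself: every combination of quasi-identifier values in the published table must occur in at least k rows. A minimal checker over a toy generalized microdata table (all values hypothetical):

```python
from collections import Counter

def is_k_anonymous(rows, qi, k):
    """True if every quasi-identifier combination (column indices in
    `qi`) appears in at least k rows of the table."""
    groups = Counter(tuple(row[i] for i in qi) for row in rows)
    return all(count >= k for count in groups.values())

# toy generalized microdata: (age range, ZIP prefix, sensitive attribute)
mt = [("2*", "147**", "flu"),
      ("2*", "147**", "cold"),
      ("3*", "148**", "flu"),
      ("3*", "148**", "asthma")]
print(is_k_anonymous(mt, qi=(0, 1), k=2))   # True: each QI group has 2 rows
print(is_k_anonymous(mt, qi=(0, 1), k=3))   # False: groups are too small
```

The paper's KJA generalization targets this same predicate but over a superset of MT that includes public-database records, which lets groups form with less aggressive generalization.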

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 22, NO. 1, JANUARY 2010, p. 59. LIGHT: A QUERY-EFFICIENT YET LOW-MAINTENANCE INDEXING SCHEME OVER DHTS ABSTRACT DHT is a widely used building block for scalable P2P systems. However, as the uniform hashing employed in DHTs destroys data locality, it is not a trivial task to support complex queries (e.g., range queries and k-nearest-neighbor queries) in DHT-based P2P systems. In order to support efficient processing of such complex queries, a popular solution is to build indexes on top of the DHT. Unfortunately, existing over-DHT indexing schemes suffer from either query inefficiency or high maintenance cost. In this paper, we propose LIGhtweight Hash Tree (LIGHT), a query-efficient yet low-maintenance indexing scheme. LIGHT employs a novel naming mechanism and a tree summarization strategy for graceful distribution of its index structure. We show through analysis that it can support various complex queries with near-optimal performance. Extensive experimental results also demonstrate that, compared with state-of-the-art over-DHT indexing schemes, LIGHT saves 50-75 percent of index maintenance cost and substantially improves query performance in terms of both response time and bandwidth consumption. In addition, LIGHT is designed over generic DHTs and hence can be easily implemented and deployed in any DHT-based P2P system. INDEX TERMS Distributed hash tables, complex queries, indexing.

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 9, SEPTEMBER 2010. MAXIMIZING THE LIFETIME OF WIRELESS SENSOR NETWORKS WITH MOBILE SINK IN DELAY-TOLERANT APPLICATIONS ABSTRACT This paper proposes a framework to maximize the lifetime of wireless sensor networks (WSNs) by using a mobile sink when the underlying applications tolerate delayed information delivery to the sink. Within a prescribed delay tolerance level, each node does not need to send the data immediately as it becomes available. Instead, the node can store the data temporarily and transmit it when the mobile sink is at the most favorable location for achieving the longest WSN lifetime. To find the best solution within the proposed framework, we formulate optimization problems that maximize the lifetime of the WSN subject to the delay bound constraints, node energy constraints, and flow conservation constraints. We conduct extensive computational experiments on the optimization problems and find that the lifetime can be increased significantly as compared to not only the stationary sink model but also more traditional mobile sink models. We also show that the delay tolerance level does not affect the maximum lifetime of the WSN. INDEX TERMS Wireless sensor network, delay-tolerant applications, mobile sink, lifetime maximization, linear programming.

However. . subject to a constraint on the expected end-to-end packet-delivery delay. Sleep -wake scheduling is an effective mechanism to prolong the lifetime of these energy-constrained wireless sensor networks. most of the energy is consumed when the radios are on. Our numerical results indicate that the proposed solution can outperform prior heuristic solutions in the literature. where each node opportunistically forwards a packet to the first neighboring node that wakes up among multiple candidate nodes. we first study how to optimize the anycast forwarding schemes for minimizing the expected packet-delivery delays from the sensor nodes to the sink.g.. In such systems. especially under practical scenarios where there are obstructions. e. sleep-wake scheduling could result in substantial delays because a transmitting node needs to wait for its next -hop relay node to wake up. Based on this result. we then provide a solution to the joint control problem of how to optimally control the system parameters of the sleep-wake scheduling protocol and the anycast packet-forwarding protocol to maximize the network lifetime. An interesting line of work attempts to reduce these delays by developing ¿anycast¿-based packet forwarding schemes. In this paper.NETWORKING. IEEE/ACM TRANSACTIONS ON ISSUE DATE: APRIL 2010 MINIMIZING DELAY AND MAXIMIZING LIFETIME FOR WIRELESS SENSOR NETWORKS WITH ANYCAST ABSTRACT In this paper. waiting for a packet to arrive. we are interested in minimizing the delay and maximizing the lifetime of event-driven wireless sensor networks for which events occur infrequently. a lake or a mountain. in the coverage area of the wireless sensor network.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, JANUARY 2010. OPPORTUNITIES AND CHALLENGES IN OFDMA-BASED CELLULAR RELAY NETWORKS: A RADIO RESOURCE MANAGEMENT PERSPECTIVE ABSTRACT The opportunities and flexibility in relay networks and orthogonal frequency-division multiple access (OFDMA) make the combination a suitable candidate network and air-interface technology for providing reliable and ubiquitous high-data-rate coverage in next-generation cellular networks. The next-generation networks are required to meet the expectations of all wireless users, irrespective of their locations. High data rate connectivity with mobility and reliability, among other features, are examples of these expectations. Advanced and intelligent radio resource management (RRM) schemes are known to be crucial towards harnessing these opportunities in future OFDMA-based relay-enhanced cellular networks. Employment of conventional RRM schemes in such networks will be highly inefficient, if not infeasible. However, it is not very clear how to address the new RRM challenges (such as enabling distributed algorithms, intra/inter-cell routing, load balancing, fairness, and feedback overhead) in such complex environments comprising a plethora of relay stations of different functionalities and characteristics. This paper reviews some of the prominent challenges involved in migrating from the conventional cellular architecture to the relay-based type and discusses how intelligent RRM schemes can exploit the opportunities in relay-enhanced OFDMA-based cellular networks. We identify the role of multi-antenna systems and explore the current approaches in the literature to extend conventional schedulers to next-generation relay networks. Fairness is a critical performance aspect that has to be taken into account in the design of prospective RRM schemes. The paper also highlights the fairness aspect in such networks in the light of the recent literature, provides some example fairness metrics, and compares the performances of some representative algorithms. INDEX TERMS RRM, scheduling, OFDMA, relaying, cellular, routing, fairness, throughput.
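One widely used fairness metric of the kind such surveys discuss is Jain's index, which maps a throughput allocation across n users onto [1/n, 1]; whether this particular metric is among the paper's examples is an assumption here:

```python
def jains_index(throughputs):
    """Jain's fairness index J = (sum x)^2 / (n * sum x^2).
    J = 1 means a perfectly equal allocation; J = 1/n means one
    user receives everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs))

print(jains_index([5, 5, 5, 5]))    # 1.0  -> perfectly fair
print(jains_index([10, 0, 0, 0]))   # 0.25 -> maximally unfair for n = 4
```

Schedulers for relay-enhanced cells are often compared by plotting such an index against aggregate throughput, making the fairness/efficiency trade-off explicit.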

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 22, NO. 3, MARCH 2010. PAM: AN EFFICIENT AND PRIVACY-AWARE MONITORING FRAMEWORK FOR CONTINUOUSLY MOVING OBJECTS ABSTRACT Efficiency and privacy are two fundamental issues in moving object monitoring. This paper proposes a privacy-aware monitoring (PAM) framework that addresses both issues. The framework distinguishes itself from the existing work by being the first to holistically address the issues of location updating in terms of monitoring accuracy, efficiency, and privacy, particularly when and how mobile clients should send location updates to the server. Based on the notions of safe region and most probable result, PAM performs location updates only when they would likely alter the query results. Furthermore, by designing various client update strategies, the framework is flexible and able to optimize accuracy, privacy, or efficiency. We develop efficient query evaluation/reevaluation and safe region computation algorithms in the framework. The experimental results show that PAM substantially outperforms traditional schemes in terms of monitoring accuracy, CPU cost, and scalability while achieving close-to-optimal communication cost. INDEX TERMS Spatial databases, location-dependent and sensitive, mobile applications.
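The safe-region idea can be sketched client-side: suppress location updates while the object stays inside a region within which the query result provably cannot change. The circular region and coordinates below are hypothetical; PAM computes query-specific safe regions on the server:

```python
import math

def needs_update(pos, safe_center, safe_radius):
    """Client-side test: report a location only after leaving the
    safe region (a circle in this sketch), since the monitored
    query's result cannot change while the client stays inside."""
    dx, dy = pos[0] - safe_center[0], pos[1] - safe_center[1]
    return math.hypot(dx, dy) > safe_radius

trajectory = [(0.0, 0.0), (1.0, 0.5), (2.5, 0.2), (4.1, 0.0)]
updates = [p for p in trajectory if needs_update(p, (0.0, 0.0), 3.0)]
print(updates)   # only the final fix escapes the radius-3 safe region
```

Three of the four position fixes are never transmitted, which is the source of both the communication savings and the privacy benefit (the server never learns the suppressed fixes).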

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 7, NO. 1, JANUARY-MARCH 2010
LAYERED APPROACH USING CONDITIONAL RANDOM FIELDS FOR INTRUSION DETECTION
ABSTRACT Intrusion detection faces a number of challenges; an intrusion detection system must reliably detect malicious activities in a network and must perform efficiently to cope with the large amount of network traffic. In this paper, we address these two issues of Accuracy and Efficiency using Conditional Random Fields and Layered Approach. We demonstrate that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach. Experimental results on the benchmark KDD '99 intrusion data set show that our proposed system based on Layered Conditional Random Fields outperforms other well-known methods such as the decision trees and the naive Bayes. The improvement in attack detection accuracy is very high, particularly for the U2R attacks (34.8 percent improvement) and the R2L attacks (34.5 percent improvement). Statistical tests also demonstrate higher confidence in detection accuracy for our method. Finally, we show that our system is robust and able to handle noisy data without compromising performance.
INDEX TERMS Intrusion detection, Layered Approach, Conditional Random Fields, network security, decision trees, naive Bayes.

MOBILE COMPUTING, IEEE TRANSACTIONS ON, ISSUE DATE: AUG. 2010, VOLUME: 9, ISSUE: 8
OPTIMAL JAMMING ATTACK STRATEGIES AND NETWORK DEFENSE POLICIES IN WIRELESS SENSOR NETWORKS
ABSTRACT We consider a scenario where a sophisticated jammer jams an area in which a single-channel random-access-based wireless sensor network operates. The jammer controls the probability of jamming and the transmission range in order to cause maximal damage to the network in terms of corrupted communication links. The jammer action ceases when it is detected by the network (namely, by a monitoring node), and a notification message is transferred out of the jammed region. The jammer is detected by employing an optimal detection test based on the percentage of incurred collisions. On the other hand, the network defends itself by computing the channel access probability to minimize the jamming detection plus notification time. The necessary knowledge of the jammer in order to optimize its benefit consists of knowledge about the network channel access probability and the number of neighbors of the monitor node. Accordingly, the network needs to know the jamming probability of the jammer. We study the idealized case of perfect knowledge by both the jammer and the network about the strategy of each other, and the case where the jammer and the network lack this knowledge. The latter is captured by formulating and solving optimization problems where the attacker and the network respond optimally to the worst-case or the average-case strategies of the other party. We also take into account potential energy constraints of the jammer and the network. We extend the problem to the case of multiple observers and adaptable jamming transmission range and propose a meaningful heuristic algorithm for an efficient jamming strategy. Our results provide valuable insights about the structure of the jamming problem and associated defense mechanisms and demonstrate the impact of knowledge, as well as adoption of sophisticated strategies, on achieving desirable performance.
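The collision-based detection idea can be illustrated with a simple threshold rule on the monitored collision fraction. The baseline rate and margin below are arbitrary assumptions for illustration; the paper derives the optimal detection test and threshold analytically:

```python
def jamming_detected(collisions, slots, baseline_rate, margin=0.1):
    """Flag jamming when the observed collision fraction over a
    monitoring window exceeds the expected jam-free collision rate
    by more than `margin` (a simplified stand-in for the optimal
    percentage-of-collisions test described in the abstract)."""
    observed = collisions / slots
    return observed > baseline_rate + margin

# Under normal random access only a few slots collide; a jammer
# corrupting transmissions drives the collision fraction far higher.
assert jamming_detected(40, 100, baseline_rate=0.05) is True
assert jamming_detected(6, 100, baseline_rate=0.05) is False
```

A real monitor would also pick the window length to trade detection delay against false alarms, which is part of what the paper optimizes.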

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 21, NO. 3, MARCH 2010
PRIVACY-CONSCIOUS LOCATION-BASED QUERIES IN MOBILE ENVIRONMENTS
ABSTRACT In location-based services, users with location-aware mobile devices are able to make queries about their surroundings anywhere and at any time. While this ubiquitous computing paradigm brings great convenience for information access, it also raises concerns over potential intrusion into user location privacy. To protect location privacy, one typical approach is to cloak user locations into spatial regions based on user-specified privacy requirements, and to transform location-based queries into region-based queries. In this paper, we identify and address three new issues concerning this location cloaking approach. First, we study the representation of cloaking regions and show that a circular region generally leads to a small result size for region-based queries. Second, we develop a mobility-aware location cloaking technique to resist trace analysis attacks. Two cloaking algorithms, namely MaxAccu_Cloak and MinComm_Cloak, are designed based on different performance objectives. Finally, we develop an efficient polynomial algorithm for evaluating circular-region-based kNN queries. Two query processing modes, namely bulk and progressive, are presented to return query results either all at once or in an incremental manner. Experimental results show that our proposed mobility-aware cloaking algorithms significantly improve the quality of location cloaking in terms of an entropy measure without compromising much on query latency or communication cost. Moreover, the progressive query processing mode achieves a shorter response time than the bulk mode by parallelizing the query evaluation and result transmission.
INDEX TERMS Location-based services, location privacy, mobile computing, query processing.

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 21, NO. 5, MAY 2010
QUALITY OF TRILATERATION: CONFIDENCE-BASED ITERATIVE LOCALIZATION
ABSTRACT The proliferation of wireless and mobile devices has fostered the demand for context-aware applications, in which location is one of the most significant contexts. Multilateration, as a basic building block of localization, however, has not yet overcome the challenges of 1) poor ranging measurements; 2) dynamic and noisy environments; and 3) fluctuations in wireless communications. Hence, multilateration-based approaches often suffer from poor accuracy and can hardly be employed in practical applications. In this study, we propose Quality of Trilateration (QoT), which quantifies the geometric relationship of objects and ranging noises. Based on QoT, we design a confidence-based iterative localization scheme, in which nodes dynamically select trilaterations with the highest quality for location computation. To validate this design, a prototype network based on wireless sensor motes is deployed; the results show that QoT well represents trilateration accuracy, and the proposed scheme significantly improves localization accuracy.
INDEX TERMS Localization, trilateration, noisy range measurements, wireless ad hoc and sensor networks.
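Plain trilateration, the building block that QoT scores, can be sketched by linearizing the three circle equations into a 2x2 linear system. QoT itself would additionally rank candidate anchor triples by geometry and ranging noise before trusting a fix, which this sketch omits:

```python
def trilaterate(anchors, dists):
    """Estimate (x, y) from three anchor positions and measured
    ranges. Subtracting circle 1 from circles 2 and 3 cancels the
    quadratic terms, leaving a linear system A p = b solved by
    Cramer's rule (standard trilateration, not QoT itself)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero when anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Exact ranges from (3, 4) recover the point.
x, y = trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5])
assert abs(x - 3) < 1e-9 and abs(y - 4) < 1e-9
```

Near-collinear anchors make `det` tiny and the fix unstable, which is exactly the kind of poor geometry a quality metric like QoT is meant to penalize.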

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 9, NO. 6, JUNE 2010
RANDOM ACCESS TRANSPORT CAPACITY
ABSTRACT We develop a new metric for quantifying end-to-end throughput in multihop wireless networks, which we term random access transport capacity, since the interference model presumes uncoordinated transmissions. The metric quantifies the average maximum rate of successful end-to-end transmissions, multiplied by the communication distance, and normalized by the network area. We show that a simple upper bound on this quantity is computable in closed form in terms of key network parameters when the number of retransmissions is not restricted and the hops are assumed to be equally spaced on a line between the source and destination. We also derive the optimum number of hops and optimal per-hop success probability, and show that our result follows the well-known square-root scaling law while providing exact expressions for the preconstants, which contain most of the design-relevant network parameters. Numerical results demonstrate that the upper bound is accurate for the purpose of determining the optimal hop count and success (or outage) probability.
INDEX TERMS Transmission capacity, transport capacity, stochastic geometry, ad hoc networks, network information theory.
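The definition of the metric (successful end-to-end rate, weighted by distance, per unit area) can be illustrated as a bare product in bit-meters per second per unit area. The parameter names are ours, and the paper's closed-form upper bound additionally involves path-loss, hop-count, and retransmission terms that this sketch deliberately leaves out:

```python
def transport_capacity(flow_density, success_prob, rate_bps, distance_m):
    """Illustrative bit-meter metric per the abstract's definition:
    (density of source-destination flows per unit area)
    x (probability an end-to-end transmission succeeds)
    x (data rate) x (source-destination distance)."""
    return flow_density * success_prob * rate_bps * distance_m

# Moving the same traffic twice as far doubles the transported
# bit-meters, all else equal.
near = transport_capacity(0.01, 0.9, 1.0e6, 100.0)
far = transport_capacity(0.01, 0.9, 1.0e6, 200.0)
assert abs(far - 2 * near) < 1e-6
```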

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 5, MAY 2010
RATE-DISTORTION OPTIMIZED BITSTREAM EXTRACTOR FOR MOTION SCALABILITY IN WAVELET-BASED SCALABLE VIDEO CODING
ABSTRACT Motion scalability is designed to improve the coding efficiency of a scalable video coding framework, especially in the medium-to-low range of decoding bit rates and spatial resolutions. In order to fully benefit from motion scalability, a rate-distortion optimized bitstream extractor, which determines the optimal motion quality layer for any specific decoding scenario, is required. In this paper, the determination process first starts off with a brute-force searching algorithm. Although it guarantees optimal performance within the search domain, it suffers from high computational complexity. Two properties, i.e., the monotonically nondecreasing property and the unimodal property, are then derived to accurately describe the rate-distortion behavior of motion scalability. Based on these two properties, modified searching algorithms are proposed that reduce the complexity (up to five times faster) and achieve global optimality, even for decoding scenarios outside the search domain.
INDEX TERMS Bitstream extractor, motion scalability, rate-distortion optimization, scalable video coding.
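How a unimodal cost profile cuts down a brute-force layer search can be illustrated with integer ternary search over motion quality layers. This shows the principle only; it is not the paper's actual extraction algorithm, and the quadratic cost function is a stand-in:

```python
def best_layer(cost, lo, hi):
    """Return the integer layer in [lo, hi] minimizing a strictly
    unimodal cost function, probing O(log(hi - lo)) layers instead
    of all of them (ternary search over integers)."""
    while hi - lo > 2:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third
        if cost(m1) < cost(m2):
            hi = m2 - 1   # minimum cannot lie at or beyond m2
        else:
            lo = m1 + 1   # minimum cannot lie at or before m1
    return min(range(lo, hi + 1), key=cost)

# Stand-in unimodal RD cost with its minimum at layer 7.
assert best_layer(lambda k: (k - 7) ** 2, 0, 31) == 7
```

With 32 candidate layers the loop evaluates the cost only a handful of times, which mirrors the complexity reduction the unimodal property enables.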

NETWORKING, IEEE/ACM TRANSACTIONS ON, ISSUE DATE: JUNE 2010
S4: SMALL STATE AND SMALL STRETCH COMPACT ROUTING PROTOCOL FOR LARGE STATIC WIRELESS NETWORKS
ABSTRACT Routing protocols for large wireless networks must address the challenges of reliable packet delivery at increasingly large scales and with highly limited resources. Attempts to reduce routing state can result in undesirable worst-case routing performance, as measured by stretch, which is the ratio of the hop count of the selected path to that of the optimal path. We present a new routing protocol, Small State and Small Stretch (S4), which jointly minimizes the state and stretch. S4 uses a combination of beacon distance-vector-based global routing state and scoped distance-vector-based local routing state to achieve a worst-case stretch of 3 using O(sqrt(N)) routing state per node in an N-node network. Its average routing stretch is close to 1. S4 further incorporates local failure recovery to achieve resilience to dynamic topology changes. We use multiple simulation environments to assess performance claims at scale and use experiments in a 42-node wireless sensor network testbed to evaluate performance under realistic RF and failure dynamics. The results show that S4 achieves scalability, efficiency, and resilience in a wide range of scenarios.

DEPENDABLE AND SECURE COMPUTING, IEEE TRANSACTIONS ON, ISSUE DATE: JAN.-MARCH 2010, VOLUME: 7, ISSUE: 1
SIGFREE: A SIGNATURE-FREE BUFFER OVERFLOW ATTACK BLOCKER
ABSTRACT We propose SigFree, an online signature-free out-of-the-box application-layer method for blocking code-injection buffer overflow attack messages targeting various Internet services such as Web service. Motivated by the observation that buffer overflow attacks typically contain executables whereas legitimate client requests never contain executables in most Internet services, SigFree blocks attacks by detecting the presence of code. Unlike the previous code detection algorithms, SigFree uses a new data-flow analysis technique called code abstraction that is generic, fast, and hard for exploit code to evade. SigFree is signature free; thus it can block new and unknown buffer overflow attacks. SigFree is also immunized from most attack-side code obfuscation methods. Since SigFree is a transparent deployment to the servers being protected, it is good for economical Internet-wide deployment with very low deployment and maintenance cost. We implemented and tested SigFree; our experimental study shows that the dependency-degree-based SigFree could block all types of code-injection attack packets (above 750) tested in our experiments with very few false positives. Moreover, SigFree causes very small extra latency to normal client requests when some requests contain exploit code.

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 2, FEBRUARY 2010
UNEQUAL POWER ALLOCATION FOR JPEG TRANSMISSION OVER MIMO SYSTEMS
ABSTRACT With the introduction of multiple transmit and receive antennas in next-generation wireless systems, real-time image and video communication are expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that explore advantages of multiple antenna systems matched with source statistics are expected to be developed. Based on this idea, we present an unequal power allocation scheme for transmission of JPEG compressed images over multiple-input multiple-output systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period. Results show that our unequal power allocation scheme provides significant image quality improvement as compared to different equal power allocation schemes, with the peak signal-to-noise-ratio gain as high as 14 dB at low signal-to-noise ratios.
INDEX TERMS Distortion model, JPEG, joint source-channel coding, multiple-input multiple-output systems, unequal error protection, unequal power allocation.

IEEE TRANSACTIONS ON COMPUTERS
UPPER BOUNDS FOR DYNAMIC MEMORY ALLOCATION
ABSTRACT In this paper, we study the upper bounds of memory storage for two different allocators. In the first case, we consider a general allocator that can allocate memory blocks anywhere in the available heap space. In the second case, a more economical allocator constrained by the address-ordered first-fit allocation policy is considered. We derive the upper bound of memory usage for all allocators and present a systematic approach to search for allocation/deallocation patterns that might lead to the largest fragmentation. These results are beneficial in embedded systems, where memory usage must be reduced and predictable because of the lack of a swapping facility. They are also useful in other types of computing systems.
INDEX TERMS Dynamic memory allocation, memory storage, storage allocation/deallocation policies, first-fit allocator, garbage collection.
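A toy address-ordered first-fit allocator makes it easy to replay allocation/deallocation patterns and watch fragmentation defeat a request even when enough total free space remains. This sketch is ours and far simpler than the allocators the paper analyzes:

```python
class FirstFitAllocator:
    """Minimal address-ordered first-fit allocator over a heap of
    `size` cells (no coalescing metadata needed: free space is just
    whatever lies between allocated blocks)."""

    def __init__(self, size):
        self.size = size
        self.blocks = {}  # start address -> block length

    def alloc(self, n):
        """Place n cells in the lowest-addressed hole that fits."""
        addr = 0
        for start in sorted(self.blocks):
            if start - addr >= n:
                break                       # hole before `start` fits
            addr = start + self.blocks[start]
        if addr + n > self.size:
            return None                     # no hole large enough
        self.blocks[addr] = n
        return addr

    def free(self, addr):
        del self.blocks[addr]

h = FirstFitAllocator(10)
a = h.alloc(4)
b = h.alloc(4)
h.free(a)                   # leaves a 4-cell hole at address 0
assert h.alloc(3) == 0      # first fit reuses the lowest hole
assert h.alloc(4) is None   # 3 cells remain free, but fragmented
```

The final request fails despite 3 free cells existing, because they are split into holes of 1 and 2 cells: exactly the worst-case fragmentation behavior whose upper bounds the paper derives.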

IEEE TRANSACTIONS ON MOBILE COMPUTING, JULY 2010 (VOL. 9, NO. 7)
VEBEK: VIRTUAL ENERGY-BASED ENCRYPTION AND KEYING FOR WIRELESS SENSOR NETWORKS
ABSTRACT Designing cost-efficient, secure network protocols for Wireless Sensor Networks (WSNs) is a challenging problem because sensors are resource-limited wireless devices. Since the communication cost is the most dominant factor in a sensor's energy consumption, we introduce an energy-efficient Virtual Energy-Based Encryption and Keying (VEBEK) scheme for WSNs that significantly reduces the number of transmissions needed for rekeying to avoid stale keys. In addition to the goal of saving energy, minimal transmission is imperative for some military applications of WSNs where an adversary could be monitoring the wireless spectrum. VEBEK is a secure communication framework where sensed data is encoded using a scheme based on a permutation code generated via the RC4 encryption mechanism. The key to the RC4 encryption mechanism dynamically changes as a function of the residual virtual energy of the sensor. Thus, a one-time dynamic key is employed for one packet only, and different keys are used for the successive packets of the stream. The intermediate nodes along the path to the sink are able to verify the authenticity and integrity of the incoming packets using a predicted value of the key generated by the sender's virtual energy, thus requiring no specific rekeying messages. VEBEK is able to efficiently detect and filter false data injected into the network by malicious outsiders. The VEBEK framework consists of two operational modes (VEBEK-I and VEBEK-II), each of which is optimal for different scenarios. In VEBEK-I, each node monitors its one-hop neighbors, whereas in VEBEK-II, each node statistically monitors downstream nodes. We have evaluated VEBEK's feasibility and performance analytically and through simulations. Our results show that VEBEK, without incurring transmission overhead (increasing packet size or sending control messages for rekeying), is able to eliminate malicious data from the network in an energy-efficient manner. We also show that our framework performs better than other comparable schemes in the literature, with an overall 60-100 percent improvement in energy savings, without the assumption of a reliable medium access control layer.
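The per-packet keying idea can be sketched with a textbook RC4 implementation keyed by the sender's residual virtual energy. RC4 below is the standard algorithm; the key encoding and function names are our assumptions for illustration, not VEBEK's exact packet format or permutation coding:

```python
def rc4(key, data):
    """Textbook RC4: key-scheduling algorithm, then XOR the data
    with the generated keystream. XOR makes it its own inverse."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def packet_key(virtual_energy, node_id):
    """One-time key as a function of the sender's residual virtual
    energy (the byte encoding here is our illustrative choice)."""
    return f"{node_id}:{virtual_energy:.3f}".encode()

# Virtual energy drops after each transmission, so successive packets
# use different keys; a forwarder tracking the sender's virtual energy
# can regenerate the same key and verify the packet without rekeying
# messages.
k1 = packet_key(100.0, node_id=7)
k2 = packet_key(98.5, node_id=7)
assert k1 != k2
msg = b"temp=21.5"
assert rc4(k1, rc4(k1, msg)) == msg
```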

KNOWLEDGE AND DATA ENGINEERING, IEEE TRANSACTIONS ON, ISSUE DATE: MARCH 2010, VOLUME: 22, ISSUE: 3
VIDE: A VISION-BASED APPROACH FOR DEEP WEB DATA EXTRACTION
ABSTRACT Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are enwrapped in dynamically generated Web pages (they will be called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they are Web-page-programming-language-dependent. As the popular two-dimensional media, the contents on Web pages are always displayed regularly for users to browse. This motivates us to seek a different way for deep Web data extraction to overcome the limitations of previous works by utilizing some interesting common visual features on the deep Web pages. In this paper, a novel vision-based approach that is Web-page-programming-language-independent is proposed. This approach primarily utilizes the visual features on the deep Web pages to implement deep Web data extraction, including data record extraction and data item extraction. We also propose a new evaluation measure, revision, to capture the amount of human effort needed to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based approach is highly effective for deep Web data extraction.

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 21, NO. 2, FEBRUARY 2010
A DISTRIBUTED PROTOCOL TO SERVE DYNAMIC GROUPS FOR PEER-TO-PEER STREAMING
ABSTRACT Peer-to-peer (P2P) streaming has been widely deployed over the Internet. A streaming system usually has multiple channels, and peers may form multiple groups for content distribution. In this paper, we propose a distributed overlay framework (called SMesh) for dynamic groups where users may frequently hop from one group to another while the total pool of users remains stable. SMesh first builds a relatively stable mesh consisting of all hosts for control messaging. The mesh supports dynamic host joining and leaving, and guides the construction of delivery trees. Using the Delaunay Triangulation (DT) protocol as an example, we show how to construct an efficient mesh with low maintenance cost. We further study various tree construction mechanisms based on the mesh, including embedded, bypass, and intermediate trees. Through simulations on Internet-like topologies, we show that SMesh achieves low delay and low link stress.
INDEX TERMS Peer-to-peer streaming, dynamic group, Delaunay triangulation.

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 9, NO. 2, FEBRUARY 2010
ADAPTIVE MULTI-NODE INCREMENTAL RELAYING FOR HYBRID-ARQ IN AF RELAY NETWORKS
ABSTRACT This paper proposes an adaptive multi-node incremental relaying technique in cooperative communications with amplify-and-forward (AF) relays. In order to reduce the excessive burden of MRC with all diversity paths at the destination node, the destination node decides whether it combines signals over only an initial subset of the time slots/frames or over all of the time slots, whose total equals the number of relay nodes. Our analytical and simulation results show that the proposed adaptive multi-node incremental relaying outperforms the conventional MRC in terms of outage probability in AF-based cooperative communications, since the proposed scheme effectively reduces the spectral efficiency loss. Our asymptotic analysis also shows that the proposed adaptive multi-node incremental relaying achieves full diversity order.
INDEX TERMS Relay communications, amplify-and-forward, hybrid-ARQ.

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 14, NO. 1, FEBRUARY 2010
ANALYSIS OF COMPUTATIONAL TIME OF SIMPLE ESTIMATION OF DISTRIBUTION ALGORITHMS
ABSTRACT Estimation of distribution algorithms (EDAs) are widely used in stochastic optimization, and impressive experimental results have been reported in the literature. However, little work has been done on analyzing the computation time of EDAs in relation to the problem size. It is still unclear how well EDAs (with a finite population size larger than two) will scale up when the dimension of the optimization problem (problem size) goes up. This paper studies the computational time complexity of a simple EDA, i.e., the univariate marginal distribution algorithm (UMDA), in order to gain more insight into EDAs' complexity. First, we discuss how to measure the computational time complexity of EDAs; a classification of problem hardness based on our discussions is then given. Second, in order to address the key issue of what problem characteristics make a problem hard for UMDA, we propose a novel approach to analyzing the computational time complexity of UMDA using discrete dynamic systems and Chernoff bounds. Following this approach, we are able to derive a number of results on the first hitting time of UMDA on a well-known unimodal pseudo-boolean function, i.e., the LeadingOnes problem, and on another problem derived from LeadingOnes, named BVLeadingOnes. Although both problems are unimodal, our analysis shows that LeadingOnes is easy for the UMDA, while BVLeadingOnes is hard for the UMDA. Third, we prove a theorem related to problem hardness and the probability conditions of EDAs. Finally, in order to gain more insight into EDAs' complexity, we discuss in depth the idea of "margins" (or relaxation); we prove theoretically that the UMDA with margins can solve the BVLeadingOnes problem efficiently.
INDEX TERMS Computational time complexity, estimation of distribution algorithms, first hitting time, heuristic optimization, univariate marginal distribution algorithms.
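A minimal UMDA on LeadingOnes, including the "margins" clamp discussed in the paper, can be sketched as follows. The population sizes, selection size, and the margin value 1/n are our choices for illustration, not the paper's parameter settings:

```python
import random

def leading_ones(x):
    """Fitness: length of the prefix of 1-bits."""
    n = 0
    for bit in x:
        if bit == 0:
            break
        n += 1
    return n

def umda(n=20, pop=100, sel=50, gens=200, margin=True, seed=1):
    """Univariate marginal distribution algorithm: each generation,
    sample `pop` bit strings from the per-bit marginals, keep the
    best `sel`, and re-estimate the marginals from them. With
    margin=True the marginals are clamped to [1/n, 1 - 1/n], the
    'margins' (relaxation) idea, so no bit is ever fixed forever."""
    rng = random.Random(seed)
    p = [0.5] * n
    best = 0
    for _ in range(gens):
        sample = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                  for _ in range(pop)]
        sample.sort(key=leading_ones, reverse=True)
        best = max(best, leading_ones(sample[0]))
        elite = sample[:sel]
        p = [sum(x[i] for x in elite) / sel for i in range(n)]
        if margin:
            p = [min(1 - 1 / n, max(1 / n, q)) for q in p]
        if best == n:
            break
    return best

# LeadingOnes is easy for UMDA: the marginals fix the prefix bits
# one after another, and the run typically reaches the optimum n.
result = umda()
assert result >= 10
```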

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 21, NO. 2, FEBRUARY 2010
COOPERATIVE CACHING IN WIRELESS P2P NETWORKS: DESIGN, IMPLEMENTATION, AND EVALUATION
Jing Zhao, Student Member, IEEE, Ping Zhang, Guohong Cao, Senior Member, IEEE, and Chita R. Das, Fellow, IEEE
ABSTRACT Some recent studies have shown that cooperative cache can improve the system performance in wireless P2P networks such as ad hoc networks and mesh networks. However, all these studies are at a very high level, leaving many design and implementation issues unanswered. In this paper, we present our design and implementation of cooperative cache in wireless P2P networks and propose solutions to find the best place to cache the data. We propose a novel asymmetric cooperative cache approach, where the data requests are transmitted to the cache layer on every node, but the data replies are only transmitted to the cache layer at the intermediate nodes that need to cache the data. This solution not only reduces the overhead of copying data between the user space and the kernel space, it also allows data pipelines to reduce the end-to-end delay. We also study the effects of different MAC layers, such as 802.11-based ad hoc networks and multi-interface multichannel-based mesh networks, on the performance of cooperative cache. Our results show that the asymmetric approach outperforms the symmetric approach in traditional 802.11-based ad hoc networks by removing most of the processing overhead. In mesh networks, the asymmetric approach can significantly reduce the data access delay compared to the symmetric approach due to data pipelines.
INDEX TERMS Wireless networks, cooperative cache, P2P networks.

IEEE/ACM TRANSACTIONS ON NETWORKING
ENGINEERING WIRELESS MESH NETWORKS: JOINT SCHEDULING, ROUTING, POWER CONTROL, AND RATE ADAPTATION
ABSTRACT We present a number of significant engineering insights on what makes a good configuration for medium- to large-size wireless mesh networks (WMNs) when the objective function is to maximize the minimum throughput among all flows. For this, we first develop efficient and exact computational tools using column generation with greedy pricing that allow us to compute exact solutions for networks significantly larger than what has been possible so far. We also develop very fast approximations that compute nearly optimal solutions for even larger cases. Finally, we adapt our tools to the case of proportional fairness and show that the engineering insights are very similar.
INDEX TERMS Column generation, power control, rate adaptation, routing, scheduling, wireless mesh networks (WMNs).

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 21, NO. 1, JANUARY 2010
IRM: INTEGRATED FILE REPLICATION AND CONSISTENCY MAINTENANCE IN P2P SYSTEMS
ABSTRACT In peer-to-peer file sharing systems, file replication and consistency maintenance are widely used techniques for high system performance. Despite significant interdependencies between them, these two issues are typically addressed separately. Most file replication methods rigidly specify replica nodes, leading to low replica utilization, unnecessary replicas, and hence extra consistency maintenance overhead. Most consistency maintenance methods propagate update messages based on message spreading or a structure without considering file replication dynamism, leading to inefficient file update and hence a high possibility of outdated file response. This paper presents an Integrated file Replication and consistency Maintenance mechanism (IRM) that integrates the two techniques in a systematic and harmonized manner. It achieves high efficiency in file replication and consistency maintenance at a significantly low cost. Instead of passively accepting replicas and updates, each node determines file replication and update polling by dynamically adapting to time-varying file query and update rates, which avoids unnecessary file replications and updates. Simulation results demonstrate the effectiveness of IRM in comparison with other approaches: it dramatically reduces overhead and yields significant improvements on the efficiency of both file replication and consistency maintenance approaches.
INDEX TERMS File replication, consistency maintenance, distributed hash table, peer-to-peer.

IEEE/ACM TRANSACTIONS ON NETWORKING
LABEL-BASED DV-HOP LOCALIZATION AGAINST WORMHOLE ATTACKS IN WIRELESS SENSOR NETWORKS
ABSTRACT Node localization becomes an important issue in the wireless sensor network given its broad applications in environment monitoring, emergency rescue, battlefield surveillance, etc. Basically, the DV-Hop localization mechanism can work well with the assistance of beacon nodes that have the capability of self-positioning. However, if the network is invaded by a wormhole attack, the attacker can tunnel the packets via the wormhole link to cause severe impacts on the DV-Hop localization process. The distance-vector propagation phase during the DV-Hop localization even aggravates the positioning result, compared to the localization schemes without wormhole attacks. In this paper, we analyze the impacts of the wormhole attack on the DV-Hop localization scheme. Based on the basic DV-Hop localization process, we propose a label-based secure localization scheme to defend against the wormhole attack. Simulation results demonstrate that our proposed secure localization scheme is capable of detecting the wormhole attack and resisting its adverse impacts with a high probability.
Keywords: DV-Hop localization, wormhole attack, wireless sensor networks.
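The basic (attack-free) DV-Hop pipeline that the scheme builds on runs in three steps: flood per-beacon hop counts, convert hops to distance via an average hop length, then multilaterate. In this sketch node IDs double as their true coordinates purely for convenience; the solver and variable names are ours:

```python
import math
from collections import deque

def hop_counts(adj, src):
    """BFS hop distance from a beacon to every reachable node."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def dv_hop(adj, beacons, node):
    """DV-Hop estimate of `node`'s position from three beacons:
    hops-to-metres conversion via the average hop length between
    beacons, then linearized trilateration."""
    hops = {b: hop_counts(adj, b) for b in beacons}
    pairs = [(a, b) for a in beacons for b in beacons if a != b]
    hop_len = (sum(math.dist(a, b) for a, b in pairs) /
               sum(hops[a][b] for a, b in pairs))
    d = [hop_len * hops[b][node] for b in beacons]
    (x1, y1), (x2, y2), (x3, y3) = beacons
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d[0]**2 - d[1]**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d[0]**2 - d[2]**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# 5x5 unit-grid deployment, radio range 1; node IDs are coordinates.
nodes = [(x, y) for x in range(5) for y in range(5)]
adj = {n: [m for m in nodes if math.dist(n, m) == 1] for n in nodes}
est = dv_hop(adj, [(0, 0), (4, 0), (0, 4)], (2, 2))
```

A wormhole tunnelling packets between distant regions would shrink the hop counts in step 1, corrupting every distance estimate downstream, which is the vulnerability the label-based scheme targets.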

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 8, NO. 3, MARCH 2009
A FLEXIBLE PRIVACY-ENHANCED LOCATION-BASED SERVICES SYSTEM FRAMEWORK AND PRACTICE
ABSTRACT Location-based services (LBSs) are becoming increasingly important to the success and attractiveness of next-generation wireless systems. However, a natural tension arises between the need for user privacy and the flexible use of location information. In this paper, we present a framework to support privacy-enhanced LBSs. We classify the services according to several basic criteria, and we propose a hierarchical key distribution method to support these services. The main idea behind the system is to hierarchically encrypt location information under different keys and distribute the appropriate keys only to group members with the necessary permission. Four methods are proposed to deliver hierarchical location information while maintaining privacy. Hierarchical location information coding offers flexible location information access, which enables a rich set of LBSs. We propose a key tree-rebalancing algorithm to maintain the rekeying performance of the group key management. Furthermore, we present a practical LBS system implementation. Our load tests show such a system is highly practical, with good efficiency and scalability.
INDEX TERMS Location-based services, location privacy, hierarchical key distribution, social networks.
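Hierarchical encryption of location information can be sketched with an HMAC key chain: a member holding a node's key can derive every key beneath it, but not the keys of siblings or ancestors. The chaining construction, labels, and root key below are our illustration, not the paper's exact key distribution scheme:

```python
import hmac
import hashlib

def child_key(parent_key, label):
    """Derive a child key in the location hierarchy from its
    parent's key via HMAC-SHA256 (one-way: child reveals nothing
    about the parent)."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

def key_for(root_key, path):
    """Key for a hierarchy path such as ('campus', 'building7'),
    obtained by walking the chain from the root."""
    k = root_key
    for label in path:
        k = child_key(k, label)
    return k

root = b"server-root-key"  # held by the location server only
building_key = key_for(root, ("campus", "building7"))

# Holding building7's key lets a member derive any room key below it,
# so one distributed key unlocks a whole subtree of granularities...
room_key = child_key(building_key, "room42")
assert room_key == key_for(root, ("campus", "building7", "room42"))

# ...while sibling buildings remain unreadable without their own keys.
assert building_key != key_for(root, ("campus", "building8"))
```

Encrypting coarse location under high-level keys and fine location under leaf keys then gives each group member exactly the granularity its permission allows.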

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 8, NO. 2, FEBRUARY 2009
CONTENTION-AWARE PERFORMANCE ANALYSIS OF MOBILITY-ASSISTED ROUTING
ABSTRACT A large body of work has theoretically analyzed the performance of mobility-assisted routing schemes for intermittently connected mobile networks. However, the vast majority of these prior studies have ignored wireless contention. Recent papers have shown through simulations that ignoring contention leads to inaccurate and misleading results, even for sparse networks. In this paper, we analyze the performance of routing schemes under contention. First, we introduce a mathematical framework to model contention; this framework can be used to analyze any routing scheme with any mobility and channel model. Then, we use this framework to compute the expected delays for different representative mobility-assisted routing schemes under random direction, random waypoint, and community-based mobility models. Finally, we use these delay expressions to optimize the design of routing schemes, while demonstrating that designing and optimizing routing schemes using analytical expressions that ignore contention can lead to suboptimal or even erroneous behavior.
INDEX TERMS Delay-tolerant networks, mobility-assisted routing, wireless contention, performance analysis.

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 17, NO. 1, FEBRUARY 2009
NODE ISOLATION MODEL AND AGE-BASED NEIGHBOR SELECTION IN UNSTRUCTURED P2P NETWORKS
ABSTRACT Previous analytical studies of unstructured P2P resilience have assumed exponential user lifetimes and only considered age-independent neighbor replacement. In this paper, we overcome these limitations by introducing a general node-isolation model for heavy-tailed user lifetimes and arbitrary neighbor-selection algorithms. Using this model, we analyze two age-biased neighbor-selection strategies and show that they significantly improve the residual lifetimes of chosen users, which dramatically reduces the probability of user isolation and graph partitioning compared with uniform selection of neighbors. In fact, the second strategy, based on random walks on age-proportional graphs, demonstrates that, for lifetimes with infinite variance, the system monotonically increases its resilience as its age and size grow. Specifically, we show that the probability of isolation converges to zero as these two metrics tend to infinity. We finish the paper with simulations in finite-size graphs that demonstrate the effect of this result in practice.
INDEX TERMS Age-based selection, heavy-tailed lifetimes, node isolation, peer-to-peer networks, user churn.

A NOVEL APPROACH FOR COMPUTATION-EFFICIENT REKEYING FOR MULTICAST KEY DISTRIBUTION (Computer Science and Network Security, Vol. 9, No. 3, March 2009)
An important problem for secure group communication is key distribution. Most of the centralized group key management schemes incur a high rekeying cost. Here we introduce a novel approach for computation-efficient rekeying for multicast key distribution. This approach reduces the rekeying cost by employing a hybrid group key management scheme (involving both centralized and contributory key management schemes). The group controller uses MDS codes, a class of error control codes, to distribute the multicast key dynamically. To avoid frequent rekeying as and when a user leaves, a novel approach is introduced in which clients recompute the new group key with minimal computation. This approach ensures forward secrecy as well as backward secrecy, and significantly reduces the rekeying cost and communication cost. The scheme is well suited to wireless applications where portable devices require low computation.
Index Terms: Erasure decoding, key distribution, MDS codes, multicast.

COLLUSIVE PIRACY PREVENTION IN P2P CONTENT DELIVERY NETWORKS (IEEE TRANSACTIONS ON COMPUTERS, VOL. 58, NO. 7, JULY 2009)
Collusive piracy is the main source of intellectual property violations within the boundary of a P2P network. Paid clients (colluders) may illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the use of open P2P networks for commercial content delivery. We propose a proactive content poisoning scheme to stop colluders and pirates from alleged copyright infringements in P2P file sharing. The basic idea is to detect pirates in a timely manner with identity-based signatures and time-stamped tokens. The scheme stops collusive piracy without hurting legitimate P2P clients by targeting poisoning exclusively on detected violators. We developed a new peer authorization protocol (PAP) to distinguish pirates from legitimate clients. Detected pirates receive poisoned chunks in their repeated attempts. Pirates are thus severely penalized, with no chance to download successfully in tolerable time. Based on simulation results, we find a 99.9 percent prevention rate in Gnutella, KaZaA, and Freenet. We achieved an 85.98 percent prevention rate on eMule, eDonkey, Morpheus, etc. The scheme is shown to be less effective in protecting some poison-resilient networks like BitTorrent and Azureus. Our work opens up the low-cost P2P technology for copyrighted content delivery. The advantage lies mainly in minimum delivery cost, higher content availability, and copyright compliance in exploring P2P network resources.
Index Terms: Peer-to-peer networks, content poisoning, copyright protection, network security.

OPPORTUNISTIC SCHEDULING WITH RELIABILITY GUARANTEES IN COGNITIVE RADIO NETWORKS (2009)
We develop opportunistic scheduling policies for cognitive radio networks that maximize the throughput utility of the secondary (unlicensed) users subject to maximum collision constraints with the primary (licensed) users. We consider a cognitive network with static primary users and potentially mobile secondary users. We use the technique of Lyapunov optimization to design an online flow control, scheduling, and resource allocation algorithm that meets the desired objectives and provides explicit performance guarantees.
Index Terms: Cognitive radio, resource allocation, Lyapunov optimization, queuing analysis, scheduling.

GENERALIZED SEQUENCE-BASED AND REVERSE SEQUENCE-BASED MODELS FOR BROADCASTING HOT VIDEOS (IEEE TRANSACTIONS ON MULTIMEDIA, 2009)
Partitioning a video data stream into multiple segments and launching each segment through an individual channel simultaneously and periodically has been well recognized as an efficient approach for broadcasting popular videos. Some recent studies, including the skyscraper broadcasting (SkB), client-centric approach (CCA), greedy disk-conserving broadcasting (GDB), and reverse fast broadcasting (RFB) schemes, have been reported. To study the client segment downloading process, this paper first introduces an applicable sequence-based broadcasting model that can be used to minimize the required buffer size. By extending RFB, this paper further proposes a reverse sequence-based broadcasting model, which can generally improve existing schemes such as SkB, CCA, GDB, and FB in terms of the relaxed client buffer size, proved to be smaller than that of the conventional sequence model by 25% to 50%. To gain a deeper understanding of the proposed reverse model, the upper bound of the client buffer requirement is obtained through a comprehensive analysis. Based on the design premises, a reverse sequence-based broadcasting scheme is developed that achieves smaller delay than CCA and GDB.
Index Terms: Hot-video broadcasting, video-on-demand (VOD), cable TV, buffers.

CHARMY: A FRAMEWORK FOR DESIGNING AND VERIFYING ARCHITECTURAL SPECIFICATIONS, by PATRIZIO PELLICCIONE, PAOLA INVERARDI, AND HENRY MUCCINI (IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 35, NO. 3, MAY/JUNE 2009)
Introduced in the early stages of software development, the CHARMY framework assists the software architect in making and evaluating architectural choices. Rarely can the software architecture of a system be established once and forever. Most likely, poorly defined and understood architectural constraints and requirements force the software architect to accept ambiguities and move forward to the construction of a suboptimal software architecture. CHARMY aims to provide an easy and practical tool for supporting the iterative modeling and evaluation of software architectures. From a UML-based architectural design, an executable prototype is automatically created. CHARMY simulation and model checking features help in understanding the functioning of the system and discovering potential inconsistencies of the design. When a satisfactory and stable software architecture is reached, Java code conforming to structural software architecture constraints is automatically generated through suitable transformations. The overall approach is tool supported.
Index Terms: Software architectures, model checking.

COMPUTATION-EFFICIENT MULTICAST KEY DISTRIBUTION (IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 19, NO. 5, MAY 2008)
Efficient key distribution is an important problem for secure group communications. The communication and storage complexity of the multicast key distribution problem has been studied extensively. In this paper, we propose a new multicast key distribution scheme whose computation complexity is significantly reduced. Instead of using conventional encryption algorithms, the scheme employs MDS codes, a class of error control codes, to distribute the multicast key dynamically. This scheme drastically reduces the computation load of each group member compared to existing schemes employing traditional encryption algorithms. Easily combined with any key-tree-based schemes, this scheme provides much lower computation complexity while maintaining low and balanced communication complexity and storage complexity for secure dynamic multicast key distribution. Such a scheme is desirable for many wireless applications where portable devices or sensors need to reduce their computation as much as possible due to battery power limitations.
Index Terms: Key distribution, multicast, MDS codes, erasure decoding, computation complexity.
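The abstract does not spell out the paper's exact MDS construction. One standard way to realize key recovery by erasure decoding is Reed-Solomon-style polynomial evaluation over a prime field (the same mechanism as Shamir secret sharing); the sketch below illustrates that idea under this assumption, with all parameters (field size, n, k, the key value) purely illustrative.

```python
# Sketch: the group key is embedded as the constant term of a degree-(k-1)
# polynomial over GF(P); the n evaluations form a Reed-Solomon codeword,
# so any k shares suffice to recover the key (the defining MDS property).
P = 2**61 - 1  # illustrative prime field size

def make_shares(key, n, k, coeffs):
    # coeffs: the k-1 random field elements chosen by the group controller
    assert len(coeffs) == k - 1
    poly = [key] + list(coeffs)
    def eval_poly(x):
        acc = 0
        for c in reversed(poly):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, eval_poly(i)) for i in range(1, n + 1)]

def recover_key(shares):
    # Erasure decoding via Lagrange interpolation at x = 0
    key = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        key = (key + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return key

shares = make_shares(123456789, n=6, k=3, coeffs=[42, 7])
assert recover_key(shares[:3]) == 123456789  # any k = 3 shares suffice
```

Because any k-subset decodes the same key while fewer than k shares reveal nothing about it, rekeying can replace the polynomial rather than re-encrypting the key for each member individually.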

AN EFFICIENT CLUSTERING SCHEME TO EXPLOIT HIERARCHICAL DATA IN NETWORK TRAFFIC ANALYSIS (IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 20, NO. 6, JUNE 2008)
There is significant interest in the data mining and network management communities in improving existing techniques for clustering multivariate network traffic flow records, so that underlying traffic patterns can be quickly inferred. In this paper, we investigate the use of clustering techniques to identify interesting traffic patterns from network traffic data in an efficient manner. We develop a framework to deal with mixed-type attributes, including numerical, categorical, and hierarchical attributes, for a one-pass hierarchical clustering algorithm. We demonstrate the improved accuracy and efficiency of our approach in comparison to previous work on clustering network traffic.
Index Terms: Traffic analysis, network monitoring, network management, clustering, hierarchical clustering, classification and association rules.

A SURVEY OF LEARNING-BASED TECHNIQUES OF EMAIL SPAM FILTERING (2008)
Email spam is one of the major problems of today's Internet, bringing financial damage to companies and annoying individual users. Among the approaches developed to stop spam, filtering is an important and popular one. In this paper we give an overview of the state of the art of machine learning applications for spam filtering, and of the ways of evaluating and comparing different filtering methods. We also provide a brief description of other branches of anti-spam protection and discuss the use of various approaches in commercial and noncommercial anti-spam software solutions.

BOTMINER: CLUSTERING ANALYSIS OF NETWORK TRAFFIC FOR PROTOCOL- AND STRUCTURE-INDEPENDENT BOTNET DETECTION
Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most of the current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence the botnet signatures, and C&C server names/addresses). We start from the definition and essential properties of botnets. We define a botnet as a coordinated group of malware instances that are controlled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers/peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross-cluster correlation to identify the hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets (IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm), and has a very low false positive rate.
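The cross-cluster correlation step lends itself to a compact illustration. The sketch below uses hypothetical cluster assignments (the real system derives them by clustering communication traffic and malicious-activity traffic); hosts that appear together in a cluster of each kind are flagged as bots.

```python
# Minimal sketch of cross-cluster correlation. The cluster assignments are
# hypothetical inputs; BotMiner's actual clustering of communication and
# malicious traffic is not reproduced here.
comm_clusters = {"c1": {"hostA", "hostB", "hostC"}, "c2": {"hostD"}}
activity_clusters = {"a1": {"hostA", "hostB"}, "a2": {"hostE"}}

def correlate(comm_clusters, activity_clusters, min_group=2):
    bots = set()
    for c_members in comm_clusters.values():
        for a_members in activity_clusters.values():
            overlap = c_members & a_members
            # hosts sharing BOTH similar communication patterns and
            # similar malicious activity patterns are flagged as bots
            if len(overlap) >= min_group:
                bots |= overlap
    return bots

assert correlate(comm_clusters, activity_clusters) == {"hostA", "hostB"}
```

Requiring a minimum overlap (`min_group`) captures the "coordinated group" part of the botnet definition: a single host that is merely noisy does not trigger detection.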

DUAL-LINK FAILURE RESILIENCY THROUGH BACKUP LINK MUTUAL EXCLUSION (IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 16, NO. 1, FEBRUARY 2008)
Networks employ link protection to achieve fast recovery from link failures. While the first link failure can be protected using link protection, there are several alternatives for protecting against the second failure. This paper formally classifies the approaches to dual-link failure resiliency. One of the strategies to recover from dual-link failures is to employ link protection for the two failed links independently, which requires that two links may not use each other in their backup paths if they may fail simultaneously. Such a requirement is referred to as the backup link mutual exclusion (BLME) constraint, and the problem of identifying a backup path for every link that satisfies this requirement is referred to as the BLME problem. This paper develops the necessary theory to establish sufficient conditions for the existence of a solution to the BLME problem. Solution methodologies for the BLME problem are developed using two approaches: 1) formulating the backup path selection as an integer linear program; and 2) developing a polynomial-time heuristic based on minimum-cost path routing. The ILP formulation and heuristic are applied to six networks and their performance is compared with approaches that assume precise knowledge of dual-link failure. It is observed that a solution exists for all six networks considered. The heuristic approach is shown to obtain feasible solutions that are resilient to most dual-link failures, although the backup path lengths may be significantly higher than optimal. In addition, the paper illustrates the significance of knowledge of failure location by showing that a network with higher connectivity may require less capacity than one with lower connectivity to recover from dual-link failures.
Index Terms: Backup link mutual exclusion, dual-link failures, link protection, optical networks.
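The BLME constraint itself is easy to state as a check on a candidate backup-path assignment. The sketch below uses an illustrative topology and, for simplicity, assumes any pair of links may fail simultaneously; the paper's ILP and heuristic search for assignments that pass exactly this kind of check.

```python
# Sketch: verify the backup link mutual exclusion (BLME) constraint.
# Topology and backup paths are illustrative, and every link pair is
# assumed to be a potential simultaneous-failure pair.
backup_path = {
    ("a", "b"): [("a", "c"), ("c", "b")],
    ("a", "c"): [("a", "d"), ("d", "c")],
    ("c", "b"): [("c", "d"), ("d", "b")],
}

def norm(link):
    # undirected links: (u, v) and (v, u) are the same edge
    return tuple(sorted(link))

def satisfies_blme(backup_path):
    # Two links that may fail simultaneously must not appear in each
    # other's backup paths.
    for l1, p1 in backup_path.items():
        for l2, p2 in backup_path.items():
            if norm(l1) != norm(l2):
                if norm(l2) in {norm(e) for e in p1} and \
                   norm(l1) in {norm(e) for e in p2}:
                    return False
    return True

assert satisfies_blme(backup_path)

# A violating assignment: (a,b) and (a,c) back each other up.
bad = dict(backup_path)
bad[("a", "c")] = [("a", "b"), ("b", "c")]
assert not satisfies_blme(bad)
```

In the violating case, a simultaneous failure of (a,b) and (a,c) would leave both links without a usable backup path, which is precisely what the constraint rules out.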

ADAPTIVE NEURAL NETWORK TRACKING CONTROL OF MIMO NONLINEAR SYSTEMS WITH UNKNOWN DEAD ZONES AND CONTROL DIRECTIONS (IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 20, NO. 3, MARCH 2009)
In this paper, adaptive neural network (NN) tracking control is investigated for a class of uncertain multiple-input-multiple-output (MIMO) nonlinear systems in triangular control structure with unknown nonsymmetric dead zones and control directions. It is shown that, by introducing a characteristic function, the dead-zone output can be represented as a simple linear system with a static time-varying gain and bounded disturbance. The design is based on the principle of sliding mode control and the use of Nussbaum-type functions to solve the problem of completely unknown control directions. By utilizing an integral-type Lyapunov function and introducing an adaptive compensation term for the upper bound of the optimal approximation error and the dead-zone disturbance, the closed-loop control system is proved to be semiglobally uniformly ultimately bounded, with tracking errors converging to zero under the condition that the slopes of the unknown dead zones are equal. Simulation results demonstrate the effectiveness of the approach.
Index Terms: Adaptive control, dead zone, neural network (NN) control, Nussbaum function, sliding mode control.

A FRAMEWORK FOR THE CAPACITY EVALUATION OF MULTIHOP WIRELESS NETWORKS (2009)
The specific challenges of multihop wireless networks have led to a strong research effort on efficient protocol design, where the offered capacity is a key objective. More specifically, the routing strategy largely impacts the network capacity, i.e., the throughput offered to each flow. In this work, we propose a complete framework to compute the upper and lower bounds of the network capacity according to a physical topology and a given routing protocol. The radio resource sharing principles of CSMA-CA are modeled as a set of linear constraints under two models of fairness. The first assumes that nodes have fair access to the channel, while the second assumes fairness on the radio links. We then develop a pessimistic and an optimistic scenario for radio resource sharing, yielding a lower bound and an upper bound on the network capacity for each fairness case. Our approach is independent of the network topology and the routing protocols, and therefore provides a relevant framework for their comparison. We apply our models to a comparative analysis of the well-known flat routing protocol OLSR against two main self-organized structure approaches, VSR and localized CDS.
Index Terms: Network capacity, multihop wireless networks, linear programming, upper and lower bounds.

CONTINUOUS FLOW WIRELESS DATA BROADCASTING FOR HIGH-SPEED ENVIRONMENTS (IEEE TRANSACTIONS ON BROADCASTING, VOL. 55, NO. 2, JUNE 2009)
With the increasing popularity of wireless networks and mobile computing, data broadcasting has emerged as an efficient way of delivering data to mobile clients having a high degree of commonality in their demand patterns. This paper proposes an adaptive wireless push system that operates efficiently in environments characterized by high broadcasting speeds and a priori unknown client demands for data items. The proposed system adapts to the demand pattern of the client population in order to reflect the overall popularity of each data item. We propose a method for feedback collection by the server so that the client population can enjoy a performance increase in proportion to the broadcasting speed used by the server. Simulation results are presented which reveal satisfactory performance in environments with a priori unknown client demands and under various high broadcasting speeds.
Index Terms: Adaptive systems, data broadcasting, high-speed, learning automata.

DYNAMIC AND AUTO-RESPONSIVE SOLUTION FOR DISTRIBUTED DENIAL-OF-SERVICE ATTACKS DETECTION IN ISP NETWORK (2009)
Denial of service (DoS) attacks, and more particularly the distributed ones (DDoS), are one of the latest threats and pose a grave danger to users, organizations, and infrastructures of the Internet. Several schemes have been proposed on how to detect some of these attacks, but they suffer from a range of problems, some of them being impractical and others not being effective against these attacks. This paper reports the design principles and evaluation results of our proposed framework that autonomously detects and accurately characterizes a wide range of flooding DDoS attacks in an ISP network. For this, a newly designed flow-volume based approach (FVBA) is used to construct a profile of the traffic normally seen in the network, and to identify anomalies whenever traffic goes out of profile. The six-sigma method is used to accurately identify threshold values for malicious flow characterization. Attacks are detected by constant monitoring of the propagation of abrupt traffic changes inside the ISP network. Consideration of tolerance factors makes the proposed detection system scalable to the network conditions and attack loads in real time. FVBA has been extensively evaluated in a controlled test-bed environment. For validation, a publicly available benchmark dataset is used. Detection thresholds and efficiency are justified using receiver operating characteristic (ROC) curves. The results show that our proposed system gives a drastic improvement in terms of detection and false alarm rate.
Index Terms: Distributed Denial of Service Attacks, ISP Network, KDD 99, False Positives, False Negatives, Network Security.
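The six-sigma thresholding idea admits a very small sketch: estimate mean and standard deviation of a traffic metric from the normal profile, and flag flows exceeding mean plus six standard deviations. The traffic values and the single-metric setup below are illustrative; FVBA's actual flow-volume features are not reproduced here.

```python
# Sketch of six-sigma thresholding on one illustrative traffic metric.
import statistics

def six_sigma_threshold(normal_profile):
    mu = statistics.mean(normal_profile)
    sigma = statistics.stdev(normal_profile)
    return mu + 6 * sigma  # flows above this are flagged as malicious

# Hypothetical per-interval volumes observed under normal load
normal = [100, 104, 98, 101, 99, 103, 97, 102]
threshold = six_sigma_threshold(normal)

def is_attack(observed_volume, threshold):
    return observed_volume > threshold

assert is_attack(500, threshold)       # an abrupt surge goes out of profile
assert not is_attack(101, threshold)   # normal fluctuation stays in profile
```

The width of the band (six standard deviations) is what keeps the false-positive rate low under benign variation; the paper's tolerance factors adjust this band to current load.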

EFFICIENT MULTI-PARTY DIGITAL SIGNATURE USING ADAPTIVE SECRET SHARING FOR LOW-POWER DEVICES IN WIRELESS NETWORKS (IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 8, NO. 2, FEBRUARY 2009)
In this paper, we propose an efficient multi-party signature scheme for wireless networks in which a given number of signees can jointly sign a document, and the signature can be verified by any entity who possesses the certified group public key. Our scheme is based on an efficient threshold key generation scheme which is able to defend against both static and adaptive adversaries. Specifically, our key generation method employs the bit commitment technique to achieve efficiency in key generation and share refreshing; our share refreshing method provides proactive protection to the long-lasting secret and allows a new signee to join a signing group. We demonstrate that previously known approaches are not efficient in wireless networks, and that the proposed multi-party signature scheme is flexible and efficient, and achieves strong security for low-power devices in wireless networks.
Index Terms: Multi-party signature, distributed key generation, cryptosystems, elliptic curve.

GUARANTEED DELIVERY FOR GEOGRAPHICAL ANYCASTING IN WIRELESS MULTI-SINK SENSOR AND SENSOR-ACTOR NETWORKS
In the anycasting problem, a sensor wants to report event information to one of several sinks or actors. We describe the first localized anycasting algorithms that guarantee delivery for connected multi-sink sensor-actor networks. Let S(x) be the closest actor/sink to sensor x, and |xS(x)| the distance between them. In the greedy phase, a node s forwards the packet to its neighbor v that minimizes the ratio of the cost cost(|sv|) of sending the packet to v (here we specifically apply hop-count and power-consumption metrics) over the reduction in distance (|sS(s)| − |vS(v)|) to the closest actor/sink. A variant is to forward to the first neighbor on the shortest weighted path toward v. If no neighbor reduces that distance, recovery mode is invoked. It is done by face traversal toward the nearest connected actor/sink, where edges are replaced by paths optimizing the given cost. A hop-count based algorithm and two variants of localized power-aware anycasting algorithms are described. We prove the guaranteed delivery property analytically and experimentally.
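The greedy forwarding rule above can be sketched directly: among the neighbors that make progress toward the nearest sink, pick the one minimizing cost over progress. Coordinates, sink positions, and the unit hop-count cost below are illustrative; the power-aware variants would substitute a distance-dependent cost function.

```python
# Sketch of the greedy anycast forwarding rule with a hop-count cost.
import math

sinks = [(10.0, 0.0), (0.0, 10.0)]  # illustrative actor/sink positions

def dist_to_nearest_sink(p):
    return min(math.dist(p, s) for s in sinks)  # |pS(p)|

def next_hop(s, neighbors, cost=lambda s, v: 1.0):
    # cost defaults to 1 per hop; a power metric would use |sv|-dependent cost
    best, best_ratio = None, math.inf
    ds = dist_to_nearest_sink(s)
    for v in neighbors:
        progress = ds - dist_to_nearest_sink(v)  # |sS(s)| - |vS(v)|
        if progress > 0:  # only neighbors that reduce the anycast distance
            ratio = cost(s, v) / progress
            if ratio < best_ratio:
                best, best_ratio = v, ratio
    return best  # None -> no progress possible: invoke recovery (face traversal)

hop = next_hop((0.0, 0.0), [(1.0, 0.5), (2.0, 0.0), (-1.0, -1.0)])
assert hop == (2.0, 0.0)  # the neighbor with the best cost/progress ratio
```

Returning `None` is the trigger for the recovery phase; guaranteeing delivery then relies on face traversal over a planar subgraph, which the sketch does not attempt to reproduce.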

HIERARCHICAL BAYESIAN SPARSE IMAGE RECONSTRUCTION WITH APPLICATION TO MRFM (IEEE TRANSACTIONS ON IMAGE PROCESSING)
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications, as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
Index Terms: Deconvolution, MRFM imaging, sparse representation, Bayesian inference, MCMC methods.
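The sparsity-inducing prior (a weighted mixture of a mass at zero and a positive exponential distribution) is simple enough to sketch as a sampler. The mixture weight and exponential rate below are illustrative fixed values; in the paper these hyperparameters are tuned automatically by marginalization within the hierarchical model.

```python
# Sketch of sampling from the sparse prior: each pixel is zero with
# probability w, otherwise drawn from a positive exponential distribution.
# w and the rate a are illustrative, not the paper's tuned hyperparameters.
import random

def sample_sparse_image(n_pixels, w=0.9, a=1.0, seed=0):
    rng = random.Random(seed)
    image = []
    for _ in range(n_pixels):
        if rng.random() < w:
            image.append(0.0)                 # mass at zero -> sparsity
        else:
            image.append(rng.expovariate(a))  # positive amplitude
    return image

img = sample_sparse_image(1000)
sparsity = sum(1 for x in img if x == 0.0) / len(img)
# the empirical zero fraction concentrates around w, and all pixels are >= 0
```

A Gibbs sampler for the full posterior alternates draws of the image, noise variance, and hyperparameters; this sketch only shows why the prior enforces the sparsity and positivity properties the abstract mentions.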


Study of Rough Set and Clustering Algorithm in Network Security Management
Getting a better grasp of computer network security is of great significance for protecting the normal operation of network systems. Based on rough set (RS) theory, a clustering model, security feature reduction, and a clustering algorithm are presented, which provide a basis for network security strategies. Using the reduction methods, a simplified network security assessment data set is established. The extraction of decision-making rules is proposed and verified. From the results, it is concluded that the method is in line with the actual situation of decision-making rules. Further research will mine and process the dynamic risks and management of network security.
Keywords: RS, clustering algorithm, network security, K-W method


Cooperative transmission is an emerging communication technique that takes advantage of the broadcast nature of wireless channels. However, due to low spectral efficiency and the requirement of orthogonal channels, its potential for use in future wireless networks is limited. In this paper, by making use of multi-user detection (MUD) and network coding, cooperative transmission protocols with high spectral efficiency, diversity order, and coding gain are developed. Compared with the traditional cooperative transmission protocols with single-user detection, in which the diversity gain is only for one source user, the proposed MUD cooperative transmission protocols have the merit that one user's link can also benefit the other users.

In addition, using MUD at the relay provides an environment in which network coding can be employed. The coding gain and high diversity order can be obtained by fully utilizing the link between the relay and the destination. From the analysis and simulation results, it is seen that the proposed protocols achieve higher diversity gain, better asymptotic efficiency, and lower bit error rate, compared to traditional MUD schemes and to existing cooperative transmission protocols. The simulation results also show that the performance of the proposed scheme is near optimal, as the performance gap is 0.12 dB for an average bit error rate (BER) of 10^-6 and 1.04 dB for an average BER of 10^-3, compared to two performance upper bounds.

Index Terms: Detection, coding, communication networks, and cooperative systems.


Joint power-subcarrier-time resource allocation is imperative for wireless mesh networks due to the necessity of packet scheduling for quality-of-service (QoS) provisioning, multi-channel communications, and opportunistic power allocation. In this work, we propose an efficient intra-cluster packet-level resource allocation approach. Our approach takes power allocation, subcarrier allocation, packet scheduling, and QoS support into account. The proposed approach combines the merits of a Karush-Kuhn-Tucker (KKT)-driven approach and a genetic algorithm (GA)-based approach. It is shown to achieve a desired balance between time complexity and system performance. Bounds for the throughputs obtained by real-time and non-real-time traffic are also derived analytically.
Index Terms: Genetic algorithm (GA), Karush-Kuhn-Tucker (KKT), quality-of-service (QoS) provisioning, resource allocation, wireless mesh network (WMN).

MULTI-SERVICE LOAD SHARING FOR RESOURCE MANAGEMENT IN THE CELLULAR/WLAN INTEGRATED NETWORK (IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 8, NO. 2, FEBRUARY 2009)
With the interworking between a cellular network and wireless local area networks (WLANs), an essential aspect of resource management is taking advantage of the overlay network structure to efficiently share the multi-service traffic load between the interworked systems. In this study, we propose a new load sharing scheme for voice and elastic data services in a cellular/WLAN integrated network. Admission control and dynamic vertical handoff are applied to pool the free bandwidths of the two systems to effectively serve elastic data traffic and improve the multiplexing gain. Taking into account the load conditions and traffic characteristics, an accurate analytical model is developed to determine an appropriate size threshold so that data calls are properly distributed to the integrated cell and WLAN. To further combat the cell bandwidth limitation, data calls in the cell are served under an efficient service discipline, referred to as shortest remaining processing time (SRPT) [1]. The SRPT can well exploit the heavy-tailedness of data call size to improve the resource utilization. It is observed from extensive simulation and numerical analysis that the new scheme significantly improves the overall system performance.
Index Terms: Cellular/WLAN interworking, load sharing, admission control, vertical handoff, resource management, quality of service.
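The SRPT discipline mentioned above is a classic scheduling rule: always serve the job with the smallest remaining size. A minimal sketch follows; the call names and byte counts are illustrative, and preemption and in-service size updates are omitted for brevity.

```python
# Sketch of an SRPT queue for data calls: the call with the smallest
# remaining size is always served first (fields are illustrative).
import heapq

class SRPTQueue:
    def __init__(self):
        self._heap = []

    def admit(self, call_id, remaining_bytes):
        heapq.heappush(self._heap, (remaining_bytes, call_id))

    def next_to_serve(self):
        # Smallest remaining processing time first. With heavy-tailed call
        # sizes, this keeps many short calls from waiting behind one very
        # large transfer, improving utilization.
        return heapq.heappop(self._heap)[1] if self._heap else None

q = SRPTQueue()
q.admit("bulk-download", 10_000_000)
q.admit("web-page", 50_000)
q.admit("email", 200_000)
assert q.next_to_serve() == "web-page"  # shortest call is served first
```

Heavy-tailed size distributions are exactly the regime where SRPT pays off: the few very large calls contribute most of the load, and deferring them barely hurts their relative delay while greatly helping the many short calls.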

SOBIE: A NOVEL SUPER-NODE P2P OVERLAY BASED ON INFORMATION EXCHANGE
In order to guarantee both efficiency and robustness in a peer-to-peer (P2P) network, this paper designs a novel super-node overlay based on information exchange, called SOBIE. Differing from current structured, unstructured, and meshed or tree-like P2P overlays, SOBIE is a whole new structure intended to improve the efficiency of searching in the P2P network. The main contributions are 1) to select the super-nodes by considering the aggregation of not only the delay and distance, but especially also the information exchange frequency, exchange time, and query similarity; and 2) to set a score mechanism to identify and prevent free-riders. Meanwhile, SOBIE also guarantees the matching between the physical network and the logical network, and has a small-world characteristic to improve efficiency. A large number of experimental results show the advantages of SOBIE, including high efficiency and robustness, measured by factors such as the query success rate, the average query hops, the total number of query messages, the coverage rate, and system connectivity.
Index Terms: P2P overlay, super node, information exchange, topology matching, free-riding.

OPTIMAL BACKPRESSURE ROUTING FOR WIRELESS NETWORKS WITH MULTI-RECEIVER DIVERSITY (AD HOC NETWORKS (ELSEVIER), VOL. 7, NO. 5, PP. 862-881, JULY 2009)
We consider the problem of optimal scheduling and routing in an ad-hoc wireless network with multiple traffic streams and time-varying channel reliability. Each packet transmission can be overheard by a subset of receiver nodes, with a transmission success probability that may vary from receiver to receiver and may also vary with time. We develop a simple backpressure routing algorithm that maximizes network throughput and expends an average power that can be pushed arbitrarily close to the minimum average power required for network stability, with a corresponding tradeoff in network delay. When channels are orthogonal, the algorithm can be implemented in a distributed manner using only local link error probability information, and supports a "blind transmission" mode (where error probabilities are not required) in special cases when the power metric is neglected and when there is only a single destination for all traffic streams. For networks with general inter-channel interference, we present a distributed algorithm with constant-factor optimality guarantees.
Index Terms: Broadcast advantage, dynamic control, distributed algorithms, mobility, queueing analysis, scheduling.
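The core backpressure decision can be sketched in a few lines: at each node, weight every (neighbor, commodity) pair by the queue differential scaled by the link success probability, and serve the maximizer. The topology, backlogs, and probabilities below are illustrative, and the power-control and multi-receiver aspects of the paper's algorithm are not modeled.

```python
# Sketch of one backpressure scheduling decision (all numbers illustrative).
queues = {            # queues[node][commodity] = backlog in packets
    "n1": {"c1": 8, "c2": 3},
    "n2": {"c1": 2, "c2": 1},
    "n3": {"c1": 7, "c2": 0},
}
success_prob = {("n1", "n2"): 0.9, ("n1", "n3"): 0.6}  # link reliabilities

def backpressure_choice(node, neighbors):
    best, best_w = None, 0.0
    for v in neighbors:
        for commodity, backlog in queues[node].items():
            diff = backlog - queues[v][commodity]  # queue differential
            # expected backlog reduction = positive differential times
            # the probability the transmission succeeds on this link
            w = max(diff, 0) * success_prob[(node, v)]
            if w > best_w:
                best, best_w = (v, commodity), w
    return best  # None -> no positive pressure: stay idle

assert backpressure_choice("n1", ["n2", "n3"]) == ("n2", "c1")
```

Serving the largest weighted differential is what drains packets toward less-congested regions and, by Lyapunov drift arguments, yields the throughput-optimality the abstract claims.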

RANDOMCAST: AN ENERGY-EFFICIENT COMMUNICATION SCHEME FOR MOBILE AD HOC NETWORKS (IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 8, 2009)
In mobile ad hoc networks (MANETs), every node overhears every data transmission occurring in its vicinity and thus consumes energy unnecessarily. In the IEEE 802.11 Power Saving Mechanism (PSM), a packet must be advertised before it is actually transmitted. When a node receives an advertised packet that is not destined to itself, it switches to a low-power sleep state during the data transmission period, and thus avoids overhearing and conserves energy. However, since some MANET routing protocols such as Dynamic Source Routing (DSR) collect route information via overhearing, they would suffer if used in combination with 802.11 PSM. Allowing no overhearing may critically deteriorate the performance of the underlying routing protocol, while unconditional overhearing may offset the advantage of using PSM. This paper proposes a new communication mechanism, called RandomCast, via which a sender can specify the desired level of overhearing, making a prudent balance between energy and routing performance. In addition, it reduces redundant rebroadcasts for a broadcast packet, and thus saves more energy. Extensive simulation using ns-2 shows that RandomCast is highly energy-efficient compared to conventional 802.11 as well as 802.11 PSM-based schemes, in terms of total energy consumption, network lifetime, and energy balance.
Index Terms: Energy balance, energy efficiency, energy goodput, mobile ad hoc networks, network lifetime, overhearing, power saving mechanism.

ADAPTIVE FUZZY FILTERING FOR ARTIFACT REDUCTION IN COMPRESSED IMAGES AND VIDEOS (IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 18, NO. 6, JUNE 2009)
A fuzzy filter adaptive to both the sample's activity and the relative position between samples is proposed to reduce the artifacts in compressed multidimensional signals. For JPEG images, the fuzzy spatial filter is based on the directional characteristics of ringing artifacts along the strong edges. For compressed video sequences, the motion-compensated spatiotemporal filter (MCSTF) is applied to intraframe and interframe pixels to deal with both spatial and temporal artifacts. A new metric which considers the tracking characteristic of human eyes is proposed to evaluate the flickering artifacts. Simulations on compressed images and videos show improvement in artifact reduction of the proposed adaptive fuzzy filter over other conventional spatial or temporal filtering approaches.
Index Terms: Artifact reduction, fuzzy filter, flickering metric, motion-compensated spatio-temporal filter.

International Journal of Computer Science and Network Security, VOL.9 No.4, April 2009. A NEW RELIABLE BROADCASTING IN MOBILE AD HOC NETWORKS A New Reliable Broadcasting Algorithm for mobile ad hoc networks will guarantee to deliver the messages from different sources to all the nodes of the network. The nodes are mobile and can move from one place to another. The algorithm will calculate the relative position of the nodes with respect to the broadcasting source node. The nodes that are farthest from the source node will rebroadcast; this will minimize the number of rebroadcasts made by the intermediate nodes and will reduce the delay latency. The solution does not require the nodes to know the network size, its diameter, or the number of nodes in the network. The only information a node has is its identity (IP Address) and its position. The proposed algorithm will adapt itself dynamically to the number of concurrent broadcasts and will give the least finish time for any particular broadcast. It will be contention free, energy efficient, and collision free. On average, only a subset of nodes transmits, and they transmit only once to achieve reliable broadcasting. Key words: Broadcasting Algorithm, Collision, Delay latency, IP Address, Mobile Ad Hoc Networks.
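The farthest-node rebroadcast rule can be sketched as follows. This is a minimal geometric sketch under assumed names, not the paper's full algorithm (which also handles mobility, contention, and concurrent broadcasts).

```python
import math

def rebroadcast_set(source, nodes, radio_range):
    # Among the nodes that received the broadcast (i.e., within radio
    # range of the source), only those farthest from the source
    # rebroadcast, minimizing rebroadcasts by intermediate nodes.
    receivers = [n for n in nodes if math.dist(source, n) <= radio_range]
    if not receivers:
        return []
    far = max(math.dist(source, n) for n in receivers)
    return [n for n in receivers if math.dist(source, n) == far]

nodes = [(1, 0), (3, 0), (5, 0), (9, 0)]
print(rebroadcast_set((0, 0), nodes, radio_range=5.0))  # -> [(5, 0)]
```

Each node can evaluate this rule locally from its own position and the source's, which is why the scheme needs no knowledge of the network size or diameter.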

IEEE COMPUTER ARCHITECTURE LETTERS, VOL. 8, NO. 1, JANUARY-JUNE 2009. AN XML-BASED ADL FRAMEWORK FOR AUTOMATIC GENERATION OF MULTITHREADED COMPUTER ARCHITECTURE SIMULATORS Computer architecture simulation has always played a pivotal role in continuous innovation of computers. However, constructing or modifying a high quality simulator is time consuming and error prone. Thus, Architecture Description Languages (ADLs) are often used to provide an abstraction layer for describing the computer architecture and automatically generating corresponding simulators. Along the line of such research, we present a novel XML-based ADL, its compiler, and a generation methodology to automatically generate multithreaded simulators for computer architecture. We utilize the industry-standard extensible markup language XML to describe the functionality and architecture of a modeled processor. Our ADL framework allows users to easily and quickly modify the structure, register set, and execution of a modeled processor. To prove its validity, we have generated several multithreaded simulators with different configurations based on the MIPS five-stage processor and successfully tested them with two programs.

CLONE DETECTION AND REMOVAL FOR ERLANG/OTP WITHIN A REFACTORING ENVIRONMENT A well-known bad code smell in refactoring and software maintenance is duplicated code, or code clones. A code clone is a code fragment that is identical or similar to another. Unjustified code clones increase code size, make maintenance and comprehension more difficult, and also indicate design problems such as lack of encapsulation or abstraction. This paper proposes a token and AST based hybrid approach to automatically detecting code clones in Erlang/OTP programs, underlying a collection of refactorings to support user-controlled automatic clone removal, and examines their application in substantial case studies. Both the clone detector and the refactorings are integrated within Wrangler, the refactoring tool developed at Kent for Erlang/OTP. Keywords: Erlang, refactoring, duplicated code, program analysis, program transformation, Wrangler.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 21, NO. 1, JANUARY 2009. A RELATION-BASED PAGE RANK ALGORITHM FOR SEMANTIC WEB SEARCH ENGINES With the tremendous growth of information available to end users through the Web, search engines come to play an ever more critical role. Nevertheless, because of their general-purpose approach, it is not uncommon that obtained result sets contain a burden of useless pages. The next-generation Web architecture, represented by the Semantic Web, provides a layered architecture that may allow this limitation to be overcome. Several search engines have been proposed which increase information retrieval accuracy by exploiting a key content of Semantic Web resources, that is, relations. However, in order to rank results, most of the existing solutions need to work on the whole annotated knowledge base. In this paper, we propose a relation-based page rank algorithm to be used in conjunction with Semantic Web search engines that simply relies on information that can be extracted from user queries and on annotated resources. Relevance is measured as the probability that a retrieved resource actually contains those relations whose existence was assumed by the user at the time of query definition. Index Terms: Semantic Web, search process, query formulation, knowledge retrieval.
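A crude stand-in for the relation-based relevance score is sketched below: score a resource by the fraction of query relations its annotations contain. This is an assumed simplification for illustration only; the paper defines a probabilistic ranking, not this ratio.

```python
def relevance(query_relations, annotated_relations):
    # Fraction of the relations assumed by the user at query-definition
    # time that the resource's annotations actually contain. Note this
    # needs only the query and the resource's own annotations, not the
    # whole annotated knowledge base.
    if not query_relations:
        return 0.0
    hits = sum(1 for r in query_relations if r in annotated_relations)
    return hits / len(query_relations)

print(relevance(["locatedIn", "worksFor"], {"locatedIn", "bornIn"}))  # -> 0.5
```

The relation names above (`locatedIn`, `worksFor`, `bornIn`) are hypothetical examples, not from the paper.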

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 19, NO. 5, MAY 2008. EFFICIENT AND SECURE CONTENT PROCESSING AND DISTRIBUTION BY COOPERATIVE INTERMEDIARIES Content services such as content filtering and transcoding adapt contents to meet system requirements, display capacities, or user preferences. Data security in such a framework is an important problem and crucial for many Web applications. In this paper, we propose an approach that addresses data integrity and confidentiality in content adaptation and caching by intermediaries. Our approach permits multiple intermediaries to simultaneously perform content services on different portions of the data. Our protocol supports decentralized proxy and key management and flexible delegation of services. Our experimental results show that our approach is efficient and minimizes the amount of data transmitted across the network. Index Terms: Data sharing, distributed systems, integrity, security.

HIGH MULTIPLICITY SCHEDULING OF FILE TRANSFERS WITH DIVISIBLE SIZES ON MULTIPLE CLASSES OF PATHS, DEC 2008 Distributed applications and services requiring the transfer of large amounts of data have been developed and deployed worldwide, making necessary the development of file transfer scheduling techniques which optimize the usage of network resources. The best effort model of the Internet cannot provide these applications with the much needed quality of service guarantees. In this paper we consider the high multiplicity scheduling of file transfers over multiple classes of paths, with the objective of minimizing the makespan, when the files have divisible sizes. We also consider another objective, that of maximizing the total profit, in the context of some special types of mutual exclusion constraints (tree and clique constraint graphs). Index Terms: file transfer scheduling, makespan minimization, divisible sizes, high multiplicity, mutual exclusion, tree, clique, greedy, dynamic programming, binary search.

ENHANCED COMMUNAL GLOBAL, LOCAL MEMORY MANAGEMENT FOR EFFECTIVE PERFORMANCE OF CLUSTER COMPUTING Memory management becomes a prerequisite when handling applications that require immense volumes of data in Cluster Computing, for example when executing data pertaining to satellite images for remote sensing or defense purposes, or scientific or engineering applications. Here, even if the other factors perform to the maximum possible levels, if memory management is not properly handled the performance will degrade proportionally. Hence it is critical to have a fine memory management technique deployed to handle the stated scenarios, specifically when the cost of data access from other clusters is higher and is proportionate to the amount of data. To overcome the stated problem we have extended our previous work with a new technique that manages the data in Global Memory and Local Memory and enhances the performance of communicating across clusters for data access. The issue of Global Memory and Local Memory Management is solved with the approach discussed in this paper. Experimental results show performance improvement to considerable levels with the implementation of the concept. Keywords: High Performance Cluster Computing, Global Memory Management, Local Memory Management, Job Scheduling.

PEERTALK: A PEER-TO-PEER MULTI-PARTY VOICE-OVER-IP SYSTEM Multi-party voice-over-IP (MVoIP) services allow a group of people to freely communicate with each other via the Internet; they have many important applications such as on-line gaming and teleconferencing. In this paper, we present a peer-to-peer MVoIP system called peerTalk. Compared to traditional approaches such as server-based mixing, peerTalk achieves better scalability and failure resilience by dynamically distributing stream processing workload among different peers. Particularly, peerTalk decouples the MVoIP service delivery into two phases: the mixing phase and the distribution phase. The decoupled model allows us to explore the asymmetric property of MVoIP services (e.g., distinct speaking/listening activities, unequal inbound/outbound bandwidths) so that the system can better adapt to distinct stream mixing and distribution requirements. To overcome arbitrary peer departures/failures, peerTalk provides light-weight backup schemes to achieve fast failure recovery. We have implemented a prototype of the peerTalk system and evaluated its performance using both a large-scale simulation testbed and a real Internet environment. Our initial implementation demonstrates the feasibility of our approach and shows promising results: peerTalk can outperform existing approaches such as P2P overlay multicast and coupled distributed processing for providing MVoIP services. Index Terms: Peer-to-Peer Streaming, Voice-Over-IP, Service Overlay Network, Quality-of-Service, Adaptive System, Failure Resilience.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 20, NO. 6, JUNE 2008. PROBABILISTIC GROUP NEAREST NEIGHBOR QUERIES IN UNCERTAIN DATABASES The importance of query processing over uncertain data has recently arisen due to its wide usage in many real-world applications. In the context of uncertain databases, previous works have studied many query types such as nearest neighbor query, range query, top-k query, skyline query, and similarity join. In this paper, we focus on another important query, namely, the probabilistic group nearest neighbor (PGNN) query, in the uncertain database, which also has many applications. Specifically, given a set, Q, of query points, a PGNN query retrieves data objects that minimize the aggregate distance (e.g., sum, min, and max) to query set Q. Due to the inherent uncertainty of data objects, previous techniques to answer the group nearest neighbor (GNN) query cannot be directly applied to our PGNN problem. Motivated by this, we propose effective pruning methods, namely, spatial pruning and probabilistic pruning, to reduce the PGNN search space, which can be seamlessly integrated into our PGNN query procedure. Extensive experiments have demonstrated the efficiency and effectiveness of our proposed approach, in terms of the wall clock time and the speed-up ratio against linear scan. Index Terms: Probabilistic group nearest neighbor queries, uncertain database.
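The aggregate-distance notion the query minimizes can be sketched for certain (non-uncertain) points. The `sum`/`min`/`max` aggregates and the linear-scan baseline come from the abstract; the function names, and the omission of uncertainty and pruning, are simplifications for illustration.

```python
import math

def aggregate_distance(obj, query_points, agg="sum"):
    # Aggregate distance from one data object to the whole query set Q,
    # for the three aggregates named in the abstract.
    dists = [math.dist(obj, q) for q in query_points]
    return {"sum": sum, "min": min, "max": max}[agg](dists)

def group_nearest_neighbor(objects, Q, agg="sum"):
    # Certain-data GNN by linear scan; the paper's PGNN additionally
    # handles object uncertainty via spatial and probabilistic pruning.
    return min(objects, key=lambda o: aggregate_distance(o, Q, agg))

Q = [(1, 0), (2, 0)]
print(group_nearest_neighbor([(0, 0), (10, 0)], Q))  # -> (0, 0)
```

The linear scan here is exactly the baseline the paper's pruning methods are measured against.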

PACKET CACHES ON ROUTERS: THE IMPLICATIONS OF UNIVERSAL REDUNDANT TRAFFIC ELIMINATION Many past systems have explored how to eliminate redundant transfers from network links and improve network efficiency. Several of these systems operate at the application layer, while the more recent systems operate on individual packets. A common aspect of these systems is that they apply to localized settings, e.g., at stub network access links. In this paper, we explore the benefits of deploying packet-level redundant content elimination as a universal primitive on all Internet routers. Such a universal deployment would immediately reduce link loads everywhere. However, we argue that far more significant network-wide benefits can be derived by redesigning network routing protocols to leverage the universal deployment. We develop "redundancy-aware" intra- and inter-domain routing algorithms and show that they enable better traffic engineering, reduce link usage costs, and enhance ISPs' responsiveness to traffic variations. In particular, employing redundancy elimination approaches across redundancy-aware routes can lower intra- and inter-domain link loads by 10-50%. We also address key challenges that may hinder implementation of redundancy elimination on fast routers. Our current software router implementation can run at OC48 speeds. Categories and Subject Descriptors: C.2.2 [Computer-Communication Networks]: Routing Protocols. General Terms: Algorithms, Design, Measurement. Keywords: Traffic Redundancy, Routing, Traffic Engineering.
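A toy version of the packet-cache primitive is sketched below: a router pair shares a cache keyed by payload fingerprint, and repeated content is replaced by a short reference. This is an assumed simplification; real redundancy-elimination systems fingerprint sub-packet regions (e.g., with Rabin fingerprints) rather than whole payloads.

```python
import hashlib

def deduplicate(packets, cache):
    # Upstream side of a cooperating router pair: if a payload's
    # fingerprint is already in the shared cache, send a short reference
    # instead of the bytes; otherwise cache it and send the data.
    encoded = []
    for payload in packets:
        fp = hashlib.sha1(payload).hexdigest()[:8]
        if fp in cache:
            encoded.append(("ref", fp))
        else:
            cache[fp] = payload
            encoded.append(("data", payload))
    return encoded

cache = {}
out = deduplicate([b"abc", b"abc", b"xyz"], cache)
print([kind for kind, _ in out])  # -> ['data', 'ref', 'data']
```

Redundancy-aware routing then tries to steer flows so that repeated content traverses the same caches, amplifying the savings network-wide.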

LOCATION-BASED SPATIAL QUERIES WITH DATA SHARING IN WIRELESS BROADCAST ENVIRONMENTS Location-based spatial queries (LBSQs) refer to spatial queries whose answers rely on the location of the inquirer. Efficient processing of LBSQs is of critical importance with the ever-increasing deployment and use of mobile technologies. We show that LBSQs have certain unique characteristics that traditional spatial query processing in centralized databases does not address. For example, a significant challenge is presented by wireless broadcasting environments, which often exhibit high-latency database access. In this paper, we present a novel query processing technique that, while maintaining high scalability and accuracy, manages to reduce the latency considerably in answering location-based spatial queries. Our approach is based on peer-to-peer sharing, which enables us to process queries without delay at a mobile host by using query results cached in its neighboring mobile peers. We illustrate the appeal of our technique through extensive simulation results.

INCREASING PACKET DELIVERY IN AD HOC ON-DEMAND DISTANCE VECTOR (AODV) ROUTING PROTOCOL Broadcasting in the route discovery and the route maintenance of the Ad hoc On-demand Distance Vector (AODV) routing protocol provokes a high number of unsuccessful packet deliveries from the source nodes to the destination nodes. Studies have been undertaken to optimize the rebroadcast, focused on the route discovery of AODV. In this study, the lifetime ratio (LR) of the active route for the intermediate node is introduced to reduce the number of unsuccessful packet deliveries. OMNET++ is used to simulate the performance of the protocol. The performance metrics are measured by varying the number of nodes and the speeds. Simulation results show the improvement of packet delivery in the proposed routing protocol compared to standard AODV.

PROTECTION OF DATABASE SECURITY VIA COLLABORATIVE INFERENCE DETECTION Malicious users can exploit the correlation among data to infer sensitive information from a series of seemingly innocuous data accesses. Thus, we develop an inference violation detection system to protect sensitive data content. Based on data dependency, database schema, and semantic knowledge, we constructed a semantic inference model (SIM) that represents the possible inference channels from any attribute to the pre-assigned sensitive attributes. The SIM is then instantiated to a semantic inference graph (SIG) for query-time inference violation detection. For a single-user case, when a user poses a query, the detection system will examine his/her past query log and calculate the probability of inferring sensitive information. The query request will be denied if the inference probability exceeds the pre-specified threshold. For multi-user cases, the users may share their query answers to increase the inference probability. Therefore, we develop a model to evaluate collaborative inference based on the query sequences of collaborators and their task-sensitive collaboration levels. Experimental studies reveal that information authoritativeness and communication fidelity are two key factors that affect the level of achievable collaboration. An example is given to illustrate the use of the proposed technique to prevent multiple collaborative users from deriving sensitive information via inference.
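The query-time threshold check can be sketched as below. The chain-product rule (independent dependency links) and the threshold value are assumed simplifications; the paper evaluates channels over a semantic inference graph, not this toy model.

```python
def channel_probability(link_probs):
    # Probability of inferring the sensitive attribute along one
    # inference channel, assuming independent dependency links
    # (a simplification of the paper's semantic inference graph).
    p = 1.0
    for lp in link_probs:
        p *= lp
    return p

def admit_query(channels, threshold=0.5):
    # Deny the request if any channel opened by the user's query history
    # pushes the inference probability over the pre-specified threshold.
    prob = max((channel_probability(c) for c in channels), default=0.0)
    return prob <= threshold

print(admit_query([[0.9, 0.8]]))  # 0.72 > 0.5 -> False (deny)
```

In the multi-user case, the channels would be computed over the union of the collaborators' query answers, which is why sharing answers raises the inference probability.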

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 19, NO. 2, FEBRUARY 2008. THE SERVER REASSIGNMENT PROBLEM FOR LOAD BALANCING IN STRUCTURED P2P SYSTEMS Application-layer peer-to-peer (P2P) networks are considered to be the most important development for next-generation Internet infrastructure. For these systems to be effective, load balancing among the peers is critical. Most structured P2P systems rely on ID-space partitioning schemes to solve the load imbalance problem and have been known to result in an imbalance factor of Θ(log N) in the zone sizes. This paper makes two contributions. First, we propose addressing the virtual-server-based load balancing problem systematically using an optimization-based approach and derive an effective algorithm to rearrange loads among the peers. We demonstrate the superior performance of our proposal in general and its advantages over previous strategies in particular. We also explore other important issues vital to the performance in the virtual server framework, such as the effect of the number of directories employed in the system and the performance ramification of user registration strategies. Second, and perhaps more significantly, we systematically characterize the effect of heterogeneity on load balancing algorithm performance and the conditions in which heterogeneity may be easy or hard to deal with, based on an extensive study of a wide spectrum of load and capacity scenarios. Index Terms: Distributed hash table, structured peer-to-peer system, load balance, local search, generalized assignment.

A TREE-BASED PEER-TO-PEER NETWORK WITH QUALITY GUARANTEES Peer-to-peer (P2P) networks often demand scalability, low communication latency among nodes, and low systemwide overhead. For scalability, a node maintains partial states of a P2P network and connects to a few nodes. For fast communication, a P2P network intends to reduce the communication latency between any two nodes as much as possible. With regard to a low systemwide overhead, a P2P network minimizes its traffic in maintaining its performance efficiency and functional correctness. In this paper, we present a novel tree-based P2P network with low communication delay and low systemwide overhead. The merits of our tree-based network include 1) a tree-shaped P2P network, which guarantees that the degree of a node is constant in probability, regardless of the system size (the network diameter in our tree-based network increases logarithmically with an increase in the system size; in particular, given a physical network with a power-law latency expansion property, we show that the diameter of our tree network is constant), and 2) provable performance guarantees. We evaluate our proposal by a rigorous performance analysis, and we validate this by extensive simulations. Index Terms: Peer-to-peer systems, performance analysis, tree-based networks, multicast.


Establishing an appropriate semantic overlay on peer -to-peer (P2P) networks to obtain both semantic ability and scalability is a challenge. Current DHT-based P2P networks are limited in their ability to support a semantic search. This paper proposes the Distributed Suffix Tree (DST) overlay as the intermediate layer between the DHT overlay and the semantic overlay to support the search of a keyword sequence. Its time cost is sublinear with the length of the keyword sequence. Analysis and experiments show that the DST-based search is fast, load-balanced, and useful in realizing an accurate content search on P2P networks. Index Terms DHT, knowledge grid, peer-to-peer, semantic overlay, suffix tree, load balance.


An efficient algorithm is presented for the computation of grayscale morphological operations with arbitrary 2-D flat structuring elements (S.E.). The required computing time is independent of the image content and of the number of gray levels used. It always outperforms the only existing comparable method, which was proposed in the work by Van Droogenbroeck and Talbot, by a factor between 3.5 and 35.1, depending on the image type and shape of S.E. So far, filtering using multiple S.E.s is always done by performing the operator for each size and shape of the S.E. separately. With our method, filtering with multiple S.E.s can be performed by a single operator for a slightly reduced computational cost per size or shape, which makes this method more suitable for use in granulometries, dilation-erosion scale spaces, and template matching using the hit-or-miss transform. The discussion focuses on erosions and dilations, from which other transformations can be derived. Index Terms Dilation, dilation-erosion scale spaces, erosion, fast algorithm, hit-or-miss transform, mathematical morphology, multiscale analysis.
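For reference, the operator the paper accelerates can be stated directly. The brute-force definition below is only an illustration of what erosion with an arbitrary flat S.E. computes; the paper's contribution is an algorithm that computes the same result much faster and independently of image content.

```python
def erode(img, se):
    # Brute-force grayscale erosion with a flat structuring element given
    # as (dy, dx) offsets: each output pixel is the minimum of the input
    # over the S.E. placed at that pixel (offsets falling outside the
    # image are ignored in this sketch).
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx] for dy, dx in se
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = min(vals)  # dilation is the dual: use max
    return out

img = [[5, 5, 5],
       [5, 1, 5],
       [5, 5, 5]]
se = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # 4-connected cross
print(erode(img, se)[0][1])  # min over {5, 5, 5, 1} -> 1
```

This naive version costs O(|S.E.|) per pixel per structuring element, which is exactly the per-size, per-shape cost the paper's single-operator method reduces.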


This paper presents an H-infinity filtering approach to optimize a fuzzy control model used to determine behavior consistent (BC) information-based control strategies to improve the performance of congested dynamic traffic networks. By adjusting the associated membership function parameters to better respond to nonlinearities and modeling errors, the approach is able to enhance the computational performance of the fuzzy control model. Computational efficiency is an important aspect in this problem context, because the information strategies are required in subreal time to be real-time deployable. Experiments are performed to evaluate the effectiveness of the approach. The results indicate that the optimized fuzzy control model contributes in determining the BC information-based control strategies in significantly less computational time than when the default controller is used. Hence, the proposed H-infinity approach contributes to the development of an efficient and robust information-based control approach. Index Terms Fuzzy control, H-infinity filter, information based control.

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 16, NO. 2, APRIL 2008. RATE ALLOCATION AND NETWORK LIFETIME PROBLEMS FOR WIRELESS SENSOR NETWORKS An important performance consideration for wireless sensor networks is the amount of information collected by all the nodes in the network over the course of network lifetime. Since the objective of maximizing the sum of rates of all the nodes in the network can lead to a severe bias in rate allocation among the nodes, we advocate the use of lexicographical max-min (LMM) rate allocation. To calculate the LMM rate allocation vector, we develop a polynomial-time algorithm by exploiting the parametric analysis (PA) technique from linear programming (LP), which we call serial LP with Parametric Analysis (SLP-PA). We show that the SLP-PA can also be employed to address the LMM node lifetime problem much more efficiently than a state-of-the-art algorithm proposed in the literature. More important, we show that there exists an elegant duality relationship between the LMM rate allocation problem and the LMM node lifetime problem. Therefore, it is sufficient to solve only one of the two problems, and important insights can be obtained by inferring duality results for the other problem. Index Terms: Energy constraint, flow routing, lexicographic max-min, linear programming, network capacity, node lifetime, parametric analysis, rate allocation, sensor networks, theory.
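The intuition behind lexicographic max-min allocation can be shown with the classic single-resource water-filling procedure below. This is a toy analogue under a single shared capacity; the paper solves the general multi-hop case with serial LPs and parametric analysis, not this loop.

```python
def max_min_fair(demands, capacity):
    # Water-filling max-min allocation on one shared resource: repeatedly
    # give every still-unsatisfied node an equal share; nodes demanding
    # less than the share are capped at their demand and removed, and the
    # leftover capacity is re-divided among the rest.
    alloc = {i: 0.0 for i in demands}
    active = set(demands)
    while active:
        share = capacity / len(active)
        satisfied = {i for i in active if demands[i] <= share}
        if not satisfied:
            for i in active:
                alloc[i] = share
            break
        for i in satisfied:
            alloc[i] = demands[i]
            capacity -= demands[i]
        active -= satisfied
    return alloc

print(max_min_fair({"a": 1, "b": 4, "c": 10}, 9))  # -> {'a': 1, 'b': 4, 'c': 4.0}
```

Note how the result avoids the bias of sum-rate maximization: no node's rate can be raised without lowering the rate of a node that is already no better off.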

TOWARDS MULTIMODAL INTERFACES FOR INTRUSION DETECTION Network intrusion detection has generally been dealt with using sophisticated software and statistical analysis tools. However, occasionally network intrusion detection must be performed manually by administrators, either by detecting the intruders in real time or by revising network logs, making this a tedious and time-consuming labor. To support this, intrusion detection analysis has been carried out using visual, auditory, or tactile sensory information in computer interfaces. However, little is known about how to best integrate the sensory channels for analyzing intrusion detection. We propose a multimodal human-computer interface to analyze malicious attacks during forensic examination of network logs. We describe a sonification prototype which generates different sounds according to a number of well-known network attacks.

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 12, NO. 3, MARCH 2003. STRUCTURE AND TEXTURE FILLING-IN OF MISSING IMAGE BLOCKS IN WIRELESS TRANSMISSION AND COMPRESSION APPLICATIONS An approach for filling-in blocks of missing data in wireless image transmission is presented in this paper. When compression algorithms such as JPEG are used as part of the wireless transmission process, images are first tiled into blocks of 8x8 pixels. When such images are transmitted over fading channels, the effects of noise can destroy entire blocks of the image. Instead of using common retransmission query protocols, we aim to reconstruct the lost data using correlation between the lost block and its neighbors. If the lost block contained structure, it is reconstructed using an image inpainting algorithm, while texture synthesis is used for the textured blocks. The switch between the two schemes is done in a fully automatic fashion based on the surrounding available blocks. The performance of this method is tested for various images and combinations of lost blocks. The viability of this method for image compression, in association with lossy JPEG, is also discussed. Index Terms: Compression, filling-in, inpainting, interpolation, JPEG, restoration, texture synthesis, wireless transmission.

In this paper. even in the presence of ―greedy‖ TCP flows. we explore the opposite approach. The key mechanisms unique to TCP-LP congestion control are the use of oneway packet delays for early congestion indications and a TCP-transparent congestion avoidance policy. Keywords TCP-LP. we develop TCP Low Priority (TCP-LP). (3) substantial amounts of excess bandwidth are available to the low-priority class.TCP-LP: LOW-PRIORITY SERVICE VIA ENDPOINT CONGESTION CONTROL Service prioritization among different traffic classes is an important goal for the Internet. The results of our simulation and Internet experiments show that that: (1) TCP-LP is largely non-intrusive to TCP traffic. multiple TCP-LP flows share excess bandwidth fairly. service prioritization. and devise a new distributed algorithm to realize a low-priority service (as compared to the existing best effort) from the network endpoints. and attempt to develop mechanisms that provide ―better-than-best-effort‖ service. a distributed algorithm whose goal is to utilize only the excess network bandwidth as compared to the ―fair share‖ of bandwidth as targeted by TCP. (4) the response times of web connections in the best-effort class decrease by up to 90% when long-lived bulk data transfers use TCP-LP rather than TCP. TCP-transparency. To this end. Conventional approaches to solving this problem consider the existing best-effort class as the low-priority class. moreover. (5) despite their low-priority nature. . (2) both single and aggregate TCPLP flows are able to successfully utilize excess network bandwidth. available bandwidth. TCP. TCP-LP flows are able to utilize significant amounts of available bandwidth in a wide-area network environment.

NETWORK BORDER PATROL: PREVENTING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE INTERNET The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion avoidance mechanism called Network Border Patrol (NBP). NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network. Moreover, NBP is complemented with the proposed enhanced core-stateless fair queueing (ECSFQ) mechanism, which provides fair bandwidth allocations to competing flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing complexity toward the edges of the network whenever possible. Simulation results show that NBP effectively eliminates congestion collapse and that, when combined with ECSFQ, approximately max-min fair bandwidth allocations can be achieved for competing flows. Keywords: Internet, congestion control, congestion collapse, max-min fairness, end-to-end argument, border control, core-stateless mechanisms.

SECURE PASSWORD-BASED PROTOCOL FOR DOWNLOADING A PRIVATE KEY We present protocols that allow a user Alice, knowing only her name and password, and not carrying a smart card, to "log in to the network" from a "generic" workstation, i.e., one that has all the necessary software installed but none of the configuration information usually assumed to be known a priori in a security scheme, such as Alice's public and private keys, her certificate, and the public keys of one or more CAs. By "logging in", we mean the workstation retrieves this information on behalf of the user. This would be straightforward if Alice had a cryptographically strong password. We concentrate on the initial retrieval of Alice's private key from some server Bob on the network. We discuss various protocols for doing this that avoid off-line password guessing attacks by someone eavesdropping or impersonating Alice or Bob. We propose protocols that are secure even if Alice's password is guessable. We discuss auditable vs. unauditable on-line attacks, and present protocols that allow Bob to be stateless, avoid denial-of-service attacks, allow for salt, and are minimal in computation and number of messages.

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 16, NO. 1, FEBRUARY 2008. PROBABILISTIC PACKET MARKING FOR LARGE-SCALE IP TRACEBACK This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages. Index Terms: Associative addresses, checksum cords, distributed denial of service (DDoS), IP, probabilistic packet marking, traceback.
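The dual role of a checksum cord, grouping the fragments of one router message and verifying their integrity on reassembly, can be sketched as below. The fragment layout, cord width, and function names are illustrative assumptions, not the paper's exact encoding (which must also fit marks into the IP header).

```python
import zlib

def fragment(router_msg, num_frags=4):
    # Every fragment of one router message carries the same short
    # checksum "cord", which lets the victim both group fragments of the
    # same message (associative address) and verify integrity.
    cord = zlib.crc32(router_msg) & 0xFFFF
    step = (len(router_msg) + num_frags - 1) // num_frags
    return [(cord, i, router_msg[i * step:(i + 1) * step])
            for i in range(num_frags)]

def relink(fragments, cord):
    # Collect the fragments sharing this cord, reassemble in index
    # order, and accept only if the checksum verifies.
    parts = sorted((i, d) for c, i, d in fragments if c == cord)
    msg = b"".join(d for _, d in parts)
    return msg if (zlib.crc32(msg) & 0xFFFF) == cord else None

msg = b"router-id:198.51.100.7"
print(relink(fragment(msg), zlib.crc32(msg) & 0xFFFF) == msg)  # -> True
```

An attacker who wants a forged fragment accepted must produce data that collides with a legitimate cord, which becomes hard as cords are spread over a large spectrum.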

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 20, NO. 6, JUNE 2008. A SIGNATURE-BASED INDEXING METHOD FOR EFFICIENT CONTENT-BASED RETRIEVAL OF RELATIVE TEMPORAL PATTERNS A number of algorithms have been proposed for the discovery of temporal patterns. However, since the number of generated patterns can be large, selecting which patterns to analyze can be nontrivial. There is thus a need for algorithms and tools that can assist in the selection of discovered patterns so that subsequent analysis can be performed in an efficient and, ideally, interactive manner. In this paper, we propose a signature-based indexing method to optimize the storage and retrieval of a large collection of relative temporal patterns. Index Terms: Content-based data mining queries, organizing temporal patterns, signature-based indexing methods.
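The basic signature-file idea behind such an index can be sketched with superimposed coding: each stored pattern gets a bitmask, and a query signature must be covered by a stored signature for the pattern to possibly match. The hash, signature width, and item strings below are illustrative assumptions, not the paper's concrete design.

```python
import zlib

def signature(pattern_items, bits=16):
    # Superimposed coding: OR together one hash bit per item of the
    # pattern. Coverage tests on these signatures can yield false
    # positives (hash collisions) but never false negatives, so they
    # safely prune before an exact comparison.
    sig = 0
    for item in pattern_items:
        sig |= 1 << (zlib.crc32(item.encode()) % bits)
    return sig

def may_match(stored_sig, query_sig):
    # The stored pattern can contain the query only if every query bit
    # is set in the stored signature.
    return stored_sig & query_sig == query_sig

stored = signature(["A before B", "B overlaps C"])
print(may_match(stored, signature(["A before B"])))  # -> True
```

A content-based query then scans (or indexes) only the compact signatures and verifies the surviving candidates against the full patterns.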

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 34, NO. 2, MARCH/APRIL 2008. USING THE CONCEPTUAL COHESION OF CLASSES FOR FAULT PREDICTION IN OBJECT-ORIENTED SYSTEMS High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using structural information from the source code, such as attribute references in methods, to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics. Index Terms: Software cohesion, fault prediction, fault proneness, program comprehension, Latent Semantic Indexing, information retrieval, textual coherence.
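The flavor of a conceptual cohesion measure can be sketched as the average pairwise textual similarity between the comment/identifier "documents" of a class's methods. This is an assumed simplification: the paper projects the documents with Latent Semantic Indexing before comparing them, whereas raw term vectors are used here for brevity.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def conceptual_cohesion(method_docs):
    # Average pairwise similarity over all method pairs of the class:
    # methods that "talk about" the same concepts score high.
    vecs = [Counter(doc.lower().split()) for doc in method_docs]
    pairs = [(i, j) for i in range(len(vecs))
             for j in range(i + 1, len(vecs))]
    if not pairs:
        return 1.0
    return sum(cosine(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)

print(conceptual_cohesion(["draw", "draw"]))            # -> 1.0
print(conceptual_cohesion(["parse xml", "send packet"]))  # -> 0.0
```

Because this signal comes from comments and identifiers rather than attribute references, it is orthogonal to structural cohesion metrics, which is why combining the two improves fault prediction in the case study.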

TCP STARTUP PERFORMANCE IN LARGE BANDWIDTH DELAY NETWORKS
Next generation networks with large bandwidth and long delay pose a major challenge to TCP performance, especially during the startup period. In this paper we evaluate the performance of TCP Reno/Newreno, Vegas, and Hoe's modification in large bandwidth delay networks. We propose a modified Slow-start mechanism, called Adaptive Start (Astart), to improve the startup performance in such networks. Astart adaptively and repeatedly resets the Slow-start Threshold (ssthresh) based on an eligible sending rate estimation mechanism proposed in TCP Westwood. By adapting to network conditions during the startup phase, a sender is able to grow the congestion window (cwnd) fast without incurring risk of buffer overflow and multiple losses. The method avoids both under-utilization due to premature Slow-start termination and multiple losses due to initially setting ssthresh too high or increasing cwnd too fast. Simulation experiments show that Astart can significantly improve the link utilization under various bandwidths, buffer sizes, and round-trip propagation times. Experiments also show that Astart achieves good fairness and friendliness toward TCP NewReno. Lab measurements using a FreeBSD Astart implementation are also reported in this paper, providing further evidence of the gains achievable via Astart.
Keywords congestion control, slow-start, rate estimation, large bandwidth delay networks
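A deliberately simplified sketch of the Astart idea: each round trip, ssthresh is reset from a Westwood-style eligible rate estimate, and the window grows exponentially while below it, linearly above it. The growth rules, units, and parameter values below are illustrative assumptions, not the FreeBSD implementation:

```python
def astart_window_growth(cwnd, ssthresh, ere, rtt, mss):
    """One RTT of Astart-style startup (simplified sketch).
    ere: eligible rate estimate in bytes/sec, as in TCP Westwood;
    cwnd and ssthresh are in segments."""
    # repeatedly reset ssthresh from the current rate estimate
    ssthresh = max(ssthresh, int(ere * rtt / mss))
    if cwnd < ssthresh:
        cwnd *= 2          # slow-start style exponential growth
    else:
        cwnd += 1          # congestion avoidance: linear growth
    return cwnd, ssthresh

# a 10 Mb/s (1.25 MB/s) path with 200 ms RTT: the default ssthresh of 32
# segments would stall startup far below the bandwidth-delay product
cwnd, ssthresh = 2, 32
for _ in range(8):
    cwnd, ssthresh = astart_window_growth(cwnd, ssthresh, ere=1.25e6, rtt=0.2, mss=1500)
print(cwnd, ssthresh)  # 257 166
```

Without the rate-based reset, growth would turn linear at cwnd = 32; with it, the window reaches the estimated bandwidth-delay product (about 166 segments) exponentially.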

DISTRIBUTED DATA MINING IN CREDIT CARD FRAUD DETECTION
Credit card transactions continue to grow in number, taking an ever-larger share of the US payment system and leading to a higher rate of stolen account numbers and subsequent losses by banks. Improved fraud detection thus has become essential to maintain the viability of the US payment system. Banks have used early fraud warning systems for some years. Large-scale data-mining techniques can improve on the state of the art in commercial practice. Developing scalable techniques to analyze massive amounts of transaction data and efficiently compute fraud detectors in a timely manner is an important problem, especially for e-commerce. Besides scalability and efficiency, the fraud-detection task exhibits technical problems that include skewed distributions of training data and nonuniform cost per error, both of which have not been widely studied in the knowledge-discovery and data-mining community. In this article, we survey and evaluate a number of techniques that address these three main issues concurrently. Our proposed methods of combining multiple learned fraud detectors under a "cost model" are general and demonstrably useful; our empirical results demonstrate that we can significantly reduce loss due to fraud through distributed data mining of fraud models.
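The "cost model" combination can be illustrated with a toy sketch: fuse the scores of several base detectors and raise an alarm only when the expected fraud loss prevented exceeds the fixed overhead of challenging a transaction. All scores, weights, and dollar amounts below are invented for illustration:

```python
def cost_sensitive_alarm(scores, weights, amount, overhead):
    """Combine base detector scores into a fraud probability and alarm
    only when the expected saving exceeds the fixed cost of
    investigating the transaction (nonuniform cost per error)."""
    p_fraud = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    expected_saving = p_fraud * amount
    return expected_saving > overhead

# three base detectors vote on a $900 transaction; a challenge costs $75
print(cost_sensitive_alarm([0.9, 0.7, 0.8], [1, 1, 2], amount=900, overhead=75))

# the same scores on a $40 transaction would not justify the overhead
print(cost_sensitive_alarm([0.9, 0.7, 0.8], [1, 1, 2], amount=40, overhead=75))
```

Thresholding on expected cost rather than on probability alone is what lets the combined detector ignore likely-fraudulent but tiny transactions.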

A SOFTWARE DEFECT REPORT AND TRACKING SYSTEM IN AN INTRANET
This paper describes a case study where SofTrack, a Software Defect Report and Tracking System, was implemented using internet technology in a geographically distributed organization. Four medium to large size information systems with different levels of maturity are being analyzed within the scope of this project. They belong to the Portuguese Navy's Information Systems Infrastructure and were developed using typical legacy systems technology: COBOL with embedded SQL for queries in a Relational Database environment. This pilot project of Empirical Software Engineering has allowed the development of techniques to help software managers to better understand, control, and ultimately improve the software process. Among them are the introduction of automatic system documentation, module complexity assessment, and effort estimation for maintenance activities in the organization.

PREDICTIVE JOB SCHEDULING IN A CONNECTION LIMITED SYSTEM USING PARALLEL GENETIC ALGORITHM
Job scheduling is the key feature of any computing environment, and the efficiency of computing depends largely on the scheduling technique used. Intelligence is the key factor which is lacking in the job scheduling techniques of today. The existing algorithms are non-predictive and employ greedy algorithms or variants of them. The efficiency of the job scheduling process would increase if previous experience and genetic algorithms were used. Genetic algorithms are powerful search techniques based on the mechanisms of natural selection and natural genetics. In this paper, we propose a model of the scheduling algorithm where the scheduler can learn from previous experiences, so that effective job scheduling is achieved as time progresses. Multiple jobs are handled by the scheduler, and the resources the jobs need are in remote locations. Here we assume that the resources a job needs are in one location and not split over nodes, and that each node that has a resource runs a fixed number of jobs.
Keywords: Job scheduling, parallel genetic algorithm, remote resource
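A minimal genetic-algorithm scheduler in the spirit described above (chromosome = a job-to-node assignment, fitness = makespan of the busiest node) might look like the following. The population size, operators, and job durations are illustrative assumptions, and the sketch runs serially rather than in parallel:

```python
import random

def makespan(assign, durations, n_nodes):
    """Completion time of the busiest node for a job->node assignment."""
    load = [0.0] * n_nodes
    for job, node in enumerate(assign):
        load[node] += durations[job]
    return max(load)

def ga_schedule(durations, n_nodes, pop=30, gens=60, seed=1):
    """Tiny GA: tournament selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    n = len(durations)
    popn = [[rng.randrange(n_nodes) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.sample(popn, 2)              # tournament of two
            p = min(a, b, key=lambda s: makespan(s, durations, n_nodes))
            q = rng.choice(popn)
            cut = rng.randrange(1, n)
            child = p[:cut] + q[cut:]               # one-point crossover
            if rng.random() < 0.2:                  # point mutation
                child[rng.randrange(n)] = rng.randrange(n_nodes)
            nxt.append(child)
        popn = nxt
    return min(popn, key=lambda s: makespan(s, durations, n_nodes))

durations = [4, 7, 2, 9, 3, 5, 6, 1]
best = ga_schedule(durations, n_nodes=3)
print(makespan(best, durations, 3))
```

A predictive scheduler as described in the abstract would additionally seed the initial population from assignments that worked well on earlier, similar job mixes.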

AN AGENT BASED INTRUSION DETECTION, RESPONSE AND BLOCKING USING SIGNATURE METHOD IN ACTIVE NETWORKS
As attackers use automated methods to inflict widespread damage on vulnerable systems connected to the network, it has become painfully clear that traditional manual methods of protection do not suffice. This paper discusses an intrusion prevention approach: intrusion detection and response based on active networks, which helps to provide rapid response to vulnerability advisories, together with an intrusion detection and intrusion blocker that can provide interim protection against a limited and changing set of high-likelihood or high-priority threats. It is expected that this mechanism would be easily and adaptively configured and deployed to keep pace with the ever-evolving threats on the network. Active networks are an exciting development in networking services in which the infrastructure provides customizable network services to packets; the custom network services can be deployed by the user inside the packets themselves. In this paper we propose the use of agent based intrusion detection and response, with a digital signature used to provide security. Agents are integrated with the collaborative IDS in order to provide them with a wider array of information to use in their response activities.
Keywords: intrusion detection, response, blocking, digital signature, agents.

A NEAR-OPTIMAL MULTICAST SCHEME FOR MOBILE AD HOC NETWORKS USING A HYBRID GENETIC ALGORITHM
Multicast routing is an effective way to communicate among multiple hosts in a network. It outperforms the basic broadcast strategy by sharing resources along general links, while sending information to a set of predefined multiple destinations concurrently. However, it is vulnerable to component failure in ad hoc networks due to the lack of redundancy, multiple paths, and multicast tree structure. Limited links, path constraints, and mobility of network hosts make multicast routing protocol design particularly challenging in wireless ad hoc networks. Tree graph optimization problems (GOP) are usually difficult and time-consuming NP-hard or NP-complete problems. Genetic algorithms (GA) have been proven to be an efficient technique for solving GOP, in which well-designed chromosomes and appropriate operators are key factors that determine the performance of the GAs. Encoding trees is a critical scheme in GAs for solving these problems because each code should represent a tree. The Prufer number is the most representative method of vertex encoding: it is a string of n-2 integers and can be transformed to an n-node tree. However, a genetic algorithm based on Prufer encoding (GAP) does not preserve locality: changing one element of the vector causes a dramatic change in the corresponding tree topology. In this paper, we propose a novel GA based on sequence and topology encoding (GAST) for multicast routing in wireless ad hoc networks, which generalizes the GOP of tree-based multicast protocols together with three associated operators. It yields an efficient method for the reconstruction of multicast tree topology, and the experimental results demonstrate the effectiveness of GAST compared to the GAP technique.
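The Prufer encoding mentioned above is easy to demonstrate: a sequence of n-2 node labels decodes to a unique n-node labeled tree. The standard decoding, which repeatedly attaches the smallest remaining leaf to the next sequence element, can be sketched as:

```python
def prufer_to_tree(prufer):
    """Decode a Prufer sequence of length n-2 (labels 1..n) into the
    n-1 edges of the unique labeled tree it represents."""
    n = len(prufer) + 2
    degree = [1] * (n + 1)              # index 0 unused; leaves have degree 1
    for v in prufer:
        degree[v] += 1
    edges = []
    for v in prufer:
        # attach the smallest current leaf to v
        leaf = next(u for u in range(1, n + 1) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    last = [u for u in range(1, n + 1) if degree[u] == 1]
    edges.append((last[0], last[1]))    # join the two remaining leaves
    return edges

# node 4 appears three times, so it has degree 4 in the decoded tree
print(prufer_to_tree([4, 4, 4, 5]))
# [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6)]
```

The poor locality the abstract criticizes is visible here: editing a single sequence element changes which leaves get consumed at every subsequent step, so the decoded topology can shift substantially.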

A NOVEL SECURE COMMUNICATION PROTOCOL FOR AD HOC NETWORKS [SCP]
An ad hoc network is a self-organized entity with a number of mobile nodes and without any centralized access point; there is also a topology control problem, which leads to high power consumption and no security while routing the packets between mobile hosts. Authentication is one of the important security requirements of a communication network. The common authentication schemes are not applicable in ad hoc networks. In this paper, we propose a secure communication protocol for communication between two nodes in ad hoc networks, which describes authentication and confidentiality when packets are distributed between hosts within a cluster and between clusters. This is achieved by using clustering techniques. We present a novel secure communication framework for ad hoc networks (SCP), which will be secure, reliable, transparent, and scalable, and will have less overhead. The cluster head nodes (CHs) perform the major operations to achieve our SCP framework with the help of the Kerberos authentication application and a symmetric key cryptography technique. These cluster head nodes execute administrative functions and hold the network key used for certification.
Keywords Security, Clustering, Authentication, Confidentiality.

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 5, NO. 1, JANUARY-MARCH 2008
CONTROLLING IP SPOOFING THROUGH INTERDOMAIN PACKET FILTERS
The Distributed Denial-of-Service (DDoS) attack is a serious threat to the legitimate use of the Internet. Prevention mechanisms are thwarted by the ability of attackers to forge or spoof the source addresses in IP packets. By employing IP spoofing, attackers can evade detection and put a substantial burden on the destination network for policing attack packets. In this paper, we propose an interdomain packet filter (IDPF) architecture that can mitigate the level of IP spoofing on the Internet. A key feature of our scheme is that it does not require global routing information. IDPFs are constructed from the information implicit in Border Gateway Protocol (BGP) route updates and are deployed in network border routers. We establish the conditions under which the IDPF framework correctly works in that it does not discard packets with valid source addresses. Based on extensive simulation studies, we show that, even with partial deployment on the Internet, IDPFs can proactively limit the spoofing capability of attackers. In addition, they can help localize the origin of an attack packet to a small number of candidate networks.
Index Terms IP spoofing, DDoS, BGP, network-level security and protection, routing protocols
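A toy sketch of the filtering idea: from the AS paths a border router hears from each neighbor, record which neighbors could feasibly deliver packets claiming a given source AS, and drop everything else. The real IDPF construction rests on BGP best-path and export-policy semantics that this dictionary-based toy glosses over, and the topology is invented:

```python
def build_idpf(route_advertisements):
    """For each source AS, record the set of neighbors from which packets
    claiming that source may legitimately arrive: exactly the neighbors
    that advertised a route whose AS path contains the source AS.
    route_advertisements: {neighbor_as: [as_path, ...]}"""
    feasible = {}
    for neighbor, paths in route_advertisements.items():
        for path in paths:
            for src in path:
                feasible.setdefault(src, set()).add(neighbor)
    return feasible

def accept(feasible, src_as, arrived_from):
    """IDPF check: drop the packet if the claimed source AS could not
    have reached us through this neighbor."""
    return arrived_from in feasible.get(src_as, set())

# toy topology: our AS hears routes from neighbors 2 and 3
ads = {2: [[2, 5], [2, 6, 7]], 3: [[3, 7]]}
f = build_idpf(ads)
print(accept(f, src_as=5, arrived_from=2))  # True: AS 5 is reachable via 2
print(accept(f, src_as=5, arrived_from=3))  # False: spoofed-looking packet
```

Note the filter needs only locally heard updates, matching the abstract's claim that no global routing information is required.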

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, VOL. 38, NO. 3, MAY 2008
A NEW MODEL FOR SECURE DISSEMINATION OF XML CONTENT
The paper proposes an approach to content dissemination that exploits the structural properties of an Extensible Markup Language (XML) document object model in order to provide an efficient dissemination while assuring content integrity and confidentiality. Our approach is based on the notion of encrypted postorder numbers, which support the integrity and confidentiality requirements of XML content as well as facilitate efficient identification, extraction, and distribution of selected content portions. Using this notion, we develop a structure-based routing scheme that prevents information leaks in XML data dissemination and assures that content is delivered to users according to the access control policies, that is, policies specifying which users can receive which portions of the contents. Our framework facilitates dissemination of contents with varying degrees of confidentiality and integrity requirements in a mix of trusted and untrusted networks, which is prevalent in current settings across enterprise networks and the Web. Also, our approach does not require the routers to be aware of any security policy, in the sense that the routers do not need to implement any policy related to access control. The publish-subscribe model restricts the consumer and document source information to the routers with which they register, thereby enhancing scalability. Our proposed dissemination approach further enhances such structure-based, policy-based routing by combining it with multicast in order to achieve high efficiency in terms of bandwidth usage and speed of data delivery. Our dissemination approach thus represents an efficient and secure mechanism for use in applications such as publish-subscribe systems for XML documents.
Index Terms Encryption, Extensible Markup Language (XML), publish-subscribe, postorder traversal, preorder traversal, structure-based routing, security, trees
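The postorder-number machinery can be sketched directly: with postorder numbers and subtree sizes, "is node y in the subtree rooted at x" becomes a constant-time interval test, which is what makes structure-based identification of content portions cheap. The paper additionally encrypts these numbers; this sketch leaves them in the clear, and the example document is invented:

```python
def postorder_numbers(tree, root):
    """Assign postorder numbers and subtree sizes. The descendants of a
    node x are exactly the nodes whose number lies in the interval
    (number[x] - size[x], number[x]]."""
    numbers, sizes = {}, {}
    counter = [0]
    def visit(node):
        size = 1
        for child in tree.get(node, []):
            size += visit(child)
        counter[0] += 1
        numbers[node] = counter[0]
        sizes[node] = size
        return size
    visit(root)
    return numbers, sizes

def in_subtree(numbers, sizes, anc, node):
    """Constant-time ancestor test via the postorder interval."""
    return numbers[anc] - sizes[anc] < numbers[node] <= numbers[anc]

# a toy XML document tree (element -> children)
tree = {"order": ["customer", "items"], "items": ["item1", "item2"]}
nums, sizes = postorder_numbers(tree, "order")
print(in_subtree(nums, sizes, "items", "item2"))     # True
print(in_subtree(nums, sizes, "customer", "item1"))  # False
```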

INFRASTRUCTURE OF UNIFIED NETWORK MANAGEMENT SYSTEM DRIVEN BY WEB TECHNOLOGY
As distributed network management systems play an increasingly important role in telecommunication networks, a flexible, efficient, and low-cost unified NMS infrastructure is a key issue in top-level system design. We propose a new infrastructure for a Web-driven, distributed unified network management system. The key technologies in practical application are investigated in detail. The practical application has been implemented using Java and tested on a unified telecommunication network management system. The experiments and application have demonstrated that the infrastructure is feasible and scalable for operation in current and future telecommunication network management.

A/I NET: A NETWORK THAT INTEGRATES ATM AND IP
Future networks need both connectionless and connection-oriented services. IP and ATM are major examples of the two types. Connectionless IP is more efficient for browsing, e-mail, and other non-real-time services, but for services demanding quality and real-time delivery, connection-oriented ATM is a much better candidate. Therefore, it is unlikely that one can replace the other. Given the popularity of the Internet and the established status of ATM as the broadband transport standard, the challenge we face lies in finding an efficient way to integrate the two. This article describes a research project reflecting this trend. The project aims at efficient integration of the two to eliminate the deficiencies of a standalone ATM or IP network.

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 16, NO. 1, FEBRUARY 2008
PERFORMANCE OF A SPECULATIVE TRANSMISSION SCHEME FOR SCHEDULING-LATENCY REDUCTION
Low latency is a critical requirement in some switching applications, specifically in parallel computer interconnection networks. The minimum latency in switches with centralized scheduling comprises two components, namely, the control-path latency and the data-path latency, which in a practical high-capacity, distributed switch implementation can be far greater than the cell duration. We introduce a speculative transmission scheme to significantly reduce the average control-path latency by allowing cells, under certain conditions, to proceed without waiting for a grant. It operates in conjunction with any centralized matching algorithm to achieve a high maximum utilization and incorporates a reliable delivery mechanism to deal with failed speculations. An analytical model is presented to investigate the efficiency of the speculative transmission scheme employed in a non-blocking input-queued crossbar switch with receivers per output. Using this model, performance measures such as the mean delay and the rate of successful speculative transmissions are derived. The results demonstrate that the control-path latency can be almost entirely eliminated for loads up to 50%. Our simulations confirm the analytical results.
Index Terms Arbiters, electrooptic switches, modeling, packet switching, scheduling.


IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 1, NO. 2, JUNE 2006
REDUCING DELAY AND ENHANCING DOS RESISTANCE IN MULTICAST AUTHENTICATION THROUGH MULTIGRADE SECURITY
Many techniques for multicast authentication employ the principle of delayed key disclosure. These methods introduce delay in authentication, employ receiver-side buffers, and are susceptible to denial-of-service (DoS) attacks. Delayed key disclosure schemes have a binary concept of authentication and do not incorporate any notion of partial trust. This paper introduces staggered timed efficient stream loss-tolerant authentication (TESLA), a method for achieving multigrade authentication in multicast scenarios that reduces the delay needed to filter forged multicast packets and, consequently, mitigates the effects of DoS attacks. Staggered TESLA involves modifications to the popular multicast authentication scheme, TESLA, by incorporating the notion of multilevel trust through the use of multiple, staggered authentication keys in creating message authentication codes (MACs) for a multicast packet. We provide guidelines for determining the appropriate buffer size, and show that the use of multiple MACs and, hence, multiple grades of authentication, allows the receiver to flush forged packets quicker than in conventional TESLA. As a result, staggered TESLA provides an advantage against DoS attacks compared to conventional TESLA. We then examine two new strategies for reducing the time needed for complete authentication. In the first strategy, the multicast source uses assurance of the trustworthiness of entities in a neighborhood of the source, in conjunction with the multigrade authentication provided by staggered TESLA. The second strategy achieves reduced delay by introducing additional key distributors in the network.

Index Terms Denial-of-service (DoS) attacks, forge-capable area, message authentication code (MAC), multigrade source authentication, queueing theory, timed efficient stream loss-tolerant authentication (TESLA), trust.
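The multi-MAC mechanism can be sketched directly with HMACs: one MAC per trust grade, verified incrementally as keys become available, so a forgery is flushed at the first verifiable grade instead of waiting for full authentication. TESLA's key chains and delayed disclosure schedule are omitted here, and the keys below are placeholders, not a real key derivation:

```python
import hmac, hashlib

def staggered_macs(message, keys):
    """Append one MAC per trust grade, each under a different
    (staggered) authentication key."""
    return [hmac.new(k, message, hashlib.sha256).digest() for k in keys]

def partial_verify(message, macs, disclosed_keys):
    """Verify as many grades as we currently hold keys for; return the
    number of grades that passed. A forged packet fails at the first
    disclosed grade and can be flushed from the buffer early."""
    grade = 0
    for k, mac in zip(disclosed_keys, macs):
        expected = hmac.new(k, message, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, mac):
            return grade
        grade += 1
    return grade

keys = [b"k1", b"k2", b"k3"]          # in TESLA these come from a key chain
msg = b"multicast payload"
macs = staggered_macs(msg, keys)
print(partial_verify(msg, macs, keys[:2]))        # 2: two grades verified so far
print(partial_verify(b"forged", macs, keys[:2]))  # 0: rejected at the first grade
```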


In this paper, we propose a new mechanism to select the cells and the wireless technologies for layer-encoded video multicasting in heterogeneous wireless networks. Different from previous mechanisms, each mobile host in our mechanism can select a different cell with a different wireless technology to subscribe to each layer of a video stream, and each cell can deliver only a subset of the layers of the video stream to reduce the bandwidth consumption. We formulate the Cell and Technology Selection Problem (CTSP) to multicast each layer of a video stream as an optimization problem. We use Integer Linear Programming to model the problem and show that the problem is NP-hard. To solve the problem, we propose a distributed algorithm based on Lagrangean relaxation and a protocol based on the proposed algorithm. Our mechanism requires no change to the current video multicasting mechanisms and the current wireless network infrastructures. Our algorithm is adaptive not only to the change of the subscribers at each layer, but also to the change of the location of each mobile host. Index Terms Multicast, layer-encoded video, heterogeneous wireless networks

On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In this paper, we propose proactively disseminating the broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility. Index Terms Mobile ad hoc networks, On-demand routing protocols, Mobility, Distributed cache updating
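A minimal sketch of the cache-table idea: index cached routes so that every route traversing a reported broken link can be found and purged at once. The paper's cache table also records which neighbors learned each route, so that the notification can be propagated in a distributed manner; this sketch only hints at that via the returned stale set, and all names are illustrative:

```python
class CacheTable:
    """Per-node route cache organized so that all routes using a given
    link can be located and removed when that link is reported broken."""

    def __init__(self):
        self.routes = set()

    def add_route(self, route):
        self.routes.add(tuple(route))

    @staticmethod
    def _links(route):
        """The set of directed links a route traverses."""
        return set(zip(route, route[1:]))

    def on_link_failure(self, link):
        """Remove every cached route that traverses the broken link and
        return them, so the caller can notify nodes that cached them."""
        stale = {r for r in self.routes if link in self._links(r)}
        self.routes -= stale
        return stale

cache = CacheTable()
cache.add_route(["A", "B", "C", "D"])
cache.add_route(["A", "E", "D"])
stale = cache.on_link_failure(("B", "C"))
print(sorted(cache.routes))  # [('A', 'E', 'D')]
```

Because invalidation is driven by the actual failure event rather than a predicted timeout, the cache needs none of the ad hoc lifetime parameters the abstract criticizes.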

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 6, NO. 5, MAY 2007
AN ACKNOWLEDGMENT-BASED APPROACH FOR THE DETECTION OF ROUTING MISBEHAVIOR IN MANETS
We study routing misbehavior in MANETs (Mobile Ad Hoc Networks) in this paper. In general, routing protocols for MANETs are designed based on the assumption that all participating nodes are fully cooperative. However, due to the open structure and scarcely available battery-based energy, node misbehaviors may exist. One such routing misbehavior is that some selfish nodes will participate in the route discovery and maintenance processes but refuse to forward data packets. In this paper, we propose the 2ACK scheme that serves as an add-on technique for routing schemes to detect routing misbehavior and to mitigate their adverse effect. The main idea of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the routing path. In order to reduce additional routing overhead, only a fraction of the received data packets are acknowledged in the 2ACK scheme. Analytical and simulation results are presented to evaluate the performance of the proposed scheme.
Index Terms Mobile Ad Hoc Networks (MANETs), routing misbehavior, node misbehavior, network security, Dynamic Source Routing (DSR).
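The partial-acknowledgment idea can be sketched as follows: only a fraction of the forwarded packets is flagged to require a two-hop acknowledgment, and a high ratio of missing 2ACKs marks the next-hop link as misbehaving. The acknowledgment fraction, threshold, and the deliberately trivial channel model below are invented for illustration:

```python
import random

def run_2ack(misbehaving, p_ack=0.3, n_packets=200, threshold=0.4, seed=7):
    """Sender-side view of the 2ACK scheme (sketch): a fraction p_ack of
    forwarded packets is 2ACK-flagged; if the ratio of missing two-hop
    acknowledgments exceeds the threshold, the link is declared
    misbehaving. A well-behaved second hop always returns its 2ACK here;
    real channels would also lose some acknowledgments."""
    rng = random.Random(seed)
    required, received = 0, 0
    for _ in range(n_packets):
        if rng.random() < p_ack:        # this packet requires a 2ACK
            required += 1
            if not misbehaving:         # cooperative node returns the 2ACK
                received += 1
    miss_ratio = (required - received) / required if required else 0.0
    return miss_ratio > threshold

print(run_2ack(misbehaving=False))  # False: link considered well-behaved
print(run_2ack(misbehaving=True))   # True: link flagged as misbehaving
```

Acknowledging only a fraction of packets is what keeps the added routing overhead low, at the cost of needing more packets before a verdict is statistically sound.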

A SELF-REPAIRING TREE TOPOLOGY ENABLING CONTENT-BASED ROUTING IN MOBILE AD HOC NETWORKS
Content-based routing (CBR) provides a powerful and flexible foundation for distributed applications. Its communication model, based on implicit addressing, fosters decoupling among the communicating components, therefore meeting the needs of many dynamic scenarios, including mobile ad hoc networks (MANETs). Unfortunately, the characteristics of the CBR model are only rarely met by available systems, which typically assume that application-level routers are organized in a tree-shaped network with a fixed topology. In this paper we present COMAN, a protocol to organize the nodes of a MANET in a tree-shaped network able to i) self-repair to tolerate the frequent topological reconfigurations typical of MANETs and ii) achieve this goal through repair strategies that minimize the changes that may impact the CBR layer exploiting the tree. COMAN is implemented and publicly available. Here we report on its performance in simulated scenarios as well as in real-world experiments. The results confirm that its characteristics enable reliable and efficient CBR on MANETs.
Index Terms Content-based routing, publish-subscribe, query-advertise, mobile ad hoc networks.

IMAGE TRANSFORMATION USING GRID
The objective of this paper is to design and implement an algorithm to transform an available 2D image into a 3D image. Images captured by devices such as digital cameras are generally two dimensional in nature. But to analyze images in engineering applications, the same two-dimensional image, if transformed into a three-dimensional image without appreciable data loss, will be very useful and effective for analysis. Conventionally there are some software packages available for converting a 2D image to a 3D image. In the Java graphics API (JDK 1.5) there is a package for 2D-to-3D conversion with defined methods for generating algorithms. To illustrate the phenomenon, we have developed software which transforms two-dimensional objects into three-dimensional objects. But when such an algorithm was designed and developed, it was found that it consumed much time for execution. Since the available algorithms are time consuming, a new approach is used here to enhance efficiency: running such an algorithm over a grid. Grid computing has been employed as a platform since it is a rapidly developing field. The grid computing phenomenon can be defined as "a paradigm/infrastructure enabling the sharing, selection, and aggregation of geographically distributed resources". Grid computing will be very useful in projects where complexity and time factors are essential. It can also be used effectively to improve overall system efficiency.
Keywords: Grid service, Heterogeneity, Load distribution

HYBRID INTRUSION DETECTION WITH WEIGHTED SIGNATURE GENERATION OVER ANOMALOUS INTERNET EPISODES
This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of the low false-positive rate of a signature-based intrusion detection system (IDS) and the ability of an anomaly detection system (ADS) to detect novel unknown attacks. By mining anomalous traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of the signature-based SNORT or Bro systems. A weighted signature generation scheme is developed to integrate the ADS with SNORT by extracting signatures from the anomalies detected. HIDS extracts signatures from the output of the ADS and adds them into the SNORT signature database for fast and accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed with 10 days of the Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set, our experimental results show a 60 percent detection rate for the HIDS, compared with 30 percent and 22 percent when using the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3 percent false alarms. The signatures generated by the ADS upgrade the SNORT performance by 33 percent. The HIDS approach proves the vitality of detecting intrusions and anomalies simultaneously, by automated data mining and signature generation over Internet connection episodes.
Index Terms Network security, intrusion detection systems, anomaly detection, signature generation, Internet episodes, traffic data mining, false alarms.
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 18, NO. 6, JUNE 2007
PFUSION: A P2P ARCHITECTURE FOR INTERNET-SCALE CONTENT-BASED SEARCH AND RETRIEVAL

The emerging Peer-to-Peer (P2P) model has become a very powerful and attractive paradigm for developing Internet-scale systems for sharing resources, including files and documents. The distributed nature of these systems, where nodes are typically located across different networks and domains, inherently hinders the efficient retrieval of information. In this paper, we consider the effects of topologically aware overlay construction techniques on efficient P2P keyword search algorithms. We present the Peer Fusion (pFusion) architecture that aims to efficiently integrate heterogeneous information that is geographically scattered on peers of different networks. Our approach builds on work in unstructured P2P systems and uses only local knowledge. Our empirical results, using the pFusion middleware architecture and data sets from Akamai's Internet mapping infrastructure (AKAMAI), the Active Measurement Project (NLANR), and the Text REtrieval Conference (TREC), show that the architecture we propose is both efficient and practical.
Index Terms Information retrieval, peer-to-peer, overlay construction algorithms.

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 4, NO. 1, JANUARY-MARCH 2007
AN ADAPTIVE PROGRAMMING MODEL FOR FAULT-TOLERANT DISTRIBUTED COMPUTING
The capability of dynamically adapting to distinct runtime conditions is an important issue when designing distributed systems where negotiated quality of service (QoS) cannot always be delivered between processes. Providing fault tolerance for such dynamic environments is a challenging task. Considering such a context, this paper proposes an adaptive programming model for fault-tolerant distributed computing, which provides upper-layer applications with process state information according to the current system synchrony (or QoS). The underlying system model is hybrid, composed by a synchronous part (where there are time bounds on processing speed and message delay) and an asynchronous part (where there is no time bound). However, such a composition can vary over time, and, in particular, the system may become totally asynchronous (e.g., when the underlying system QoS degrades) or totally synchronous. Moreover, processes are not required to share the same view of the system synchrony at a given time. This paper also presents an implementation of the model that relies on a negotiated quality of service (QoS) for communication channels. To illustrate what can be done in this programming model and how to use it, the consensus problem is taken as a benchmark problem.
Index Terms Adaptability, asynchronous/synchronous distributed system, consensus, distributed computing model, fault tolerance, quality of service.

IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 1, NO. 2, JUNE 2006

REDUCING DELAY AND ENHANCING DOS RESISTANCE IN MULTICAST AUTHENTICATION THROUGH MULTIGRADE SECURITY

Many techniques for multicast authentication employ the principle of delayed key disclosure. These methods introduce delay in authentication, employ receiver-side buffers, and are susceptible to denial-of-service (DoS) attacks. Delayed key disclosure schemes have a binary concept of authentication and do not incorporate any notion of partial trust. This paper introduces staggered timed efficient stream loss-tolerant authentication (staggered TESLA), a method for achieving multigrade authentication in multicast scenarios that reduces the delay needed to filter forged multicast packets and, consequently, mitigates the effects of DoS attacks. Staggered TESLA involves modifications to the popular multicast authentication scheme, TESLA, by incorporating the notion of multilevel trust through the use of multiple, staggered authentication keys in creating message authentication codes (MACs) for a multicast packet. We provide guidelines for determining the appropriate buffer size, and show that the use of multiple MACs and, hence, multiple grades of authentication, allows the receiver to flush forged packets quicker than in conventional TESLA. As a result, staggered TESLA provides an advantage against DoS attacks compared to conventional TESLA. We then examine two new strategies for reducing the time needed for complete authentication. In the first strategy, the multicast source uses assurance of the trustworthiness of entities in a neighborhood of the source, in conjunction with the multigrade authentication provided by staggered TESLA, thereby shrinking the forge-capable area. The second strategy achieves reduced delay by introducing additional key distributors in the network.

Index Terms: Denial-of-service (DoS) attacks, message authentication code (MAC), multigrade source authentication, queueing theory, timed efficient stream loss-tolerant authentication (TESLA), trust.
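The staggered-MAC idea above can be sketched in a few lines: each packet carries several MACs keyed with keys from different (staggered) positions in a one-way key chain, so a receiver can establish partial trust as each key is disclosed and flush a forged packet at the first grade that fails. This is only an illustrative sketch, not the paper's full protocol; the interval arithmetic and parameter names (`interval`, `d`, `grades`) are simplifying assumptions.

```python
import hashlib
import hmac

def key_chain(seed: bytes, length: int) -> list:
    """Derive a one-way key chain: K[i] = H(K[i+1]); keys are disclosed in index order."""
    keys = [b""] * length
    keys[-1] = hashlib.sha256(seed).digest()
    for i in range(length - 2, -1, -1):
        keys[i] = hashlib.sha256(keys[i + 1]).digest()
    return keys

def staggered_macs(packet: bytes, keys: list, interval: int, d: int, grades: int) -> list:
    """Attach one MAC per trust grade, each keyed with a key scheduled for
    disclosure at a different (staggered) future interval."""
    return [hmac.new(keys[interval + d + g], packet, hashlib.sha256).digest()
            for g in range(grades)]

def partial_verify(packet: bytes, macs: list, disclosed: dict) -> int:
    """Count how many grades of authentication the disclosed keys support."""
    verified = 0
    for g, mac in enumerate(macs):
        key = disclosed.get(g)
        if key is None:
            break
        if hmac.compare_digest(hmac.new(key, packet, hashlib.sha256).digest(), mac):
            verified += 1
        else:
            break  # a failed grade marks the packet as forged; flush it early
    return verified
```

A forged packet fails at grade 0 as soon as the first staggered key is disclosed, which is exactly the early-flush advantage the abstract describes.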

BENEFIT-BASED DATA CACHING IN AD HOC NETWORKS

Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the access latency and bandwidth usage. However, designing efficient distributed caching algorithms is non-trivial when network nodes have limited memory. In this article, we consider the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. The above optimization problem is known to be NP-hard. Defining benefit as the reduction in total access cost, we present a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least one-fourth (one-half for uniform-size data items) of the optimal benefit. The approximation algorithm is amenable to localized distributed implementation, which is shown via simulations to perform close to the approximation algorithm. Our distributed algorithm naturally extends to networks with mobile nodes. We simulate our distributed algorithm using a network simulator (ns2) and demonstrate that it significantly outperforms another existing caching technique (by Yin and Cao [30]) in all important performance metrics. The performance differential is particularly large in more challenging scenarios, such as higher access frequency and smaller memory.
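A greedy placement in the spirit of the benefit definition above can be sketched as follows: repeatedly cache the (node, item) pair with the largest reduction in total access cost until every cache is full. This is an illustrative sketch under simplifying assumptions (a single server at node 0, uniform item sizes, uniform cache capacity), not the paper's algorithm with its proven approximation bound.

```python
def greedy_cache_placement(dist, freq, capacity):
    """Greedy benefit-based cache placement (illustrative sketch).

    dist[u][v] : hop distance between nodes u and v
    freq[u][i] : access frequency of node u for data item i
    capacity   : cache slots per node (uniform, for simplicity)

    Each item is initially served only by its server, assumed to be node 0.
    Benefit of caching item i at node u = reduction in total access cost.
    """
    n, m = len(freq), len(freq[0])
    # nearest[v][i] = current cost for node v to reach the closest copy of item i
    nearest = [[dist[v][0] for i in range(m)] for v in range(n)]
    used = [0] * n
    placed = set()
    while True:
        best, best_gain = None, 0
        for u in range(n):
            if used[u] >= capacity:
                continue
            for i in range(m):
                if (u, i) in placed:
                    continue
                # benefit: summed cost reduction over all nodes, weighted by frequency
                gain = sum(max(0, nearest[v][i] - dist[v][u]) * freq[v][i]
                           for v in range(n))
                if gain > best_gain:
                    best, best_gain = (u, i), gain
        if best is None:
            break  # no placement with positive benefit remains
        u, i = best
        placed.add((u, i))
        used[u] += 1
        for v in range(n):
            nearest[v][i] = min(nearest[v][i], dist[v][u])
    return placed
```

On a 3-node line where node 2 accesses item 0 heavily, the sketch caches the item at the far, frequent reader first, which matches the intuition behind the benefit metric.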

SCALABLE MULTICASTING IN MOBILE AD HOC NETWORKS

Many potential applications of Mobile Ad hoc Networks (MANETs) involve group communications among the nodes. Multicasting is a useful operation that facilitates group communications. Efficient and scalable multicast routing in MANETs is a difficult issue. In addition to the conventional multicast routing algorithms, recent protocols have adopted the following new approaches: overlays, backbone-based, and stateless. In this paper, we study these approaches from the protocol state management point of view and compare their scalability behaviors. To enhance performance and enable scalability, we have proposed a framework for hierarchical multicasting in MANET environments. Two classes of hierarchical multicasting approaches, termed as domain-based and overlay-based, are proposed. We have considered a variety of approaches that are suitable for different mobility patterns and multicast group sizes. Results obtained through simulations demonstrate enhanced performance and scalability of the proposed techniques.

Index Terms: Mobile ad hoc networks, hierarchical multicasting, stateless multicasting, overlay multicasting, domain-based multicasting, scalability.

BUILDING INTELLIGENT SHOPPING ASSISTANTS USING INDIVIDUAL CONSUMER MODELS

This paper describes an Intelligent Shopping Assistant designed for a shopping cart mounted tablet PC that enables individual interactions with customers. We use machine learning algorithms to predict a shopping list for the customer's current trip and present this list on the device. As they navigate through the store, personalized promotions are presented using consumer models derived from loyalty card data for each individual. In order for shopping assistant devices to be effective, we believe that they have to be powered by algorithms that are tuned for individual customers and can make accurate predictions about an individual's actions. We formally frame the shopping list prediction as a classification problem, describe the algorithms and methodology behind our system, and show that shopping list prediction can be done with high levels of accuracy, precision, and recall. Beyond the prediction of shopping lists we briefly introduce other aspects of the shopping assistant project, such as the use of consumer models to select appropriate promotional tactics, and the development of promotion planning simulation tools to enable retailers to plan personalized promotions delivered through such a shopping assistant.

Categories and Subject Descriptors: H.2.8 [Database Management]: Database Applications (Data Mining)
General Terms: Algorithms, Economics, Experimentation
Keywords: Retail applications, machine learning, classification.
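Framing shopping-list prediction as per-product binary classification can be illustrated with a deliberately naive baseline: predict a product for the next trip if it appeared in at least half of the customer's past trips, then score the prediction with the precision and recall metrics the abstract mentions. The paper trains real classifiers; this frequency rule is only a sketch of the problem framing.

```python
def predict_list(history, threshold=0.5):
    """Naive per-customer baseline: predict a product for the next trip if it
    appeared in at least `threshold` of the customer's past trips."""
    counts = {}
    for trip in history:
        for product in trip:
            counts[product] = counts.get(product, 0) + 1
    n = len(history)
    return {p for p, c in counts.items() if c / n >= threshold}

def precision_recall(predicted, actual):
    """Standard evaluation metrics for shopping-list prediction quality."""
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall
```

Any learned classifier (per product, per customer) slots into the same interface: features from past trips in, a predicted product set out, scored against the basket actually purchased.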

APPLICATION OF BPCS STEGANOGRAPHY TO WAVELET COMPRESSED VIDEO

This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. In wavelet-based video compression methods such as the 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure, and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance.
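The bit-plane mechanics underlying the embedding can be shown in miniature: write secret bits into one bit plane of quantized coefficients and read them back from the same plane. Real BPCS first segments planes into blocks and embeds only in "complex" (noise-like) blocks; that complexity test is omitted here, so this is only a sketch of the bit-plane substitution step.

```python
def embed_bitplane(coeffs, bits, plane=0):
    """Embed a bit string into a given bit plane of quantized coefficients.
    (BPCS proper would first select only complex blocks; omitted here.)"""
    out = list(coeffs)
    mask = 1 << plane
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~mask) | (int(b) << plane)
    return out

def extract_bitplane(coeffs, length, plane=0):
    """Recover the embedded bit string from the same bit plane."""
    return "".join(str((c >> plane) & 1) for c in coeffs[:length])
```

Embedding in plane `p` perturbs each coefficient by at most 2**p, which is why low planes of already-quantized wavelet coefficients can hide data with little visible distortion.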

ODAM: AN OPTIMIZED DISTRIBUTED ASSOCIATION RULE MINING ALGORITHM

Association rule mining is an active and significant area of data mining research. However, most ARM algorithms [1]-[9] focus on a sequential or centralized environment where no external communication is required. Modern organizations are geographically distributed. Typically, each site locally stores its ever-increasing amount of day-to-day data. Using centralized data mining to discover useful patterns in such organizations' data isn't always feasible because merging data sets from different sites into a centralized site incurs huge network communication costs. Data from these organizations are not only distributed over various locations but also vertically fragmented, making it difficult if not impossible to combine them in a central location. Distributed data mining has thus emerged as an active subarea of data mining research. Distributed ARM (DARM) algorithms aim to generate rules from different data sets spread over various geographical sites; hence, they require external communications throughout the entire process. DARM algorithms must reduce communication costs so that generating global association rules costs less than combining the participating sites' data sets into a centralized site. Unfortunately, most DARM algorithms don't have an efficient message optimization technique, so they exchange numerous messages during the mining process. We have developed a distributed algorithm, called Optimized Distributed Association Mining (ODAM), for geographically distributed data sets that reduces communication costs. In contrast to previous ARM algorithms, ODAM generates support counts of candidate itemsets quicker than other DARM algorithms and reduces the size of average transactions, data sets, and message exchanges.
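The core exchange in distributed support counting can be sketched as: each site counts candidate itemsets locally, then ships one compact (itemset, count) message for global aggregation, rather than exchanging raw transactions. This is a generic DARM sketch, not ODAM's specific message-optimization scheme.

```python
from itertools import combinations

def local_support(transactions, k):
    """Count the support of all candidate k-itemsets at one site."""
    counts = {}
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            counts[itemset] = counts.get(itemset, 0) + 1
    return counts

def merge_supports(site_counts, min_support):
    """Merge per-site counts into global counts; each site contributes one
    message of (itemset, count) pairs instead of shipping its transactions."""
    total = {}
    for counts in site_counts:
        for itemset, c in counts.items():
            total[itemset] = total.get(itemset, 0) + c
    return {i: c for i, c in total.items() if c >= min_support}
```

Because only counts cross the network, the communication cost scales with the number of candidate itemsets instead of the number of transactions, which is the point the abstract makes about reducing message exchanges.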

INCREMENTAL SERVICE DEPLOYMENT USING THE HOP-BY-HOP MULTICAST ROUTING PROTOCOL

IP multicast is facing a slow take-off although it has been a hotly debated topic for more than a decade. Many reasons are responsible for this status. Hence, the Internet is likely to be organized with both unicast and multicast enabled networks. Thus, it is of utmost importance to design protocols that allow the progressive deployment of the multicast service by supporting unicast clouds. This paper presents HBH (Hop-By-Hop multicast routing protocol). HBH adopts the source-specific channel abstraction to simplify address allocation and implements data distribution using recursive unicast trees, which allow the transparent support of unicast-only routers. An important original feature of HBH is its tree construction algorithm that takes into account the unicast routing asymmetries. Since most multicast routing protocols rely on the unicast infrastructure, the unicast asymmetries impact the structure of the multicast trees. We show through simulation that HBH outperforms other multicast routing protocols in terms of the delay experienced by the receivers and the bandwidth consumption of the multicast trees. Additionally, we show that HBH can be incrementally deployed and that with a small fraction of HBH-enabled routers in the network HBH outperforms application-layer multicast.

Index Terms: Multicast, routing, service deployment.

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 12, NO. 1, FEBRUARY 2004

A DISTRIBUTED DATABASE ARCHITECTURE FOR GLOBAL ROAMING IN NEXT-GENERATION MOBILE NETWORKS

The next-generation mobile network will support terminal mobility, personal mobility, and service provider portability, making global roaming seamless. A location-independent personal telecommunication number (PTN) scheme is conducive to implementing such a global mobile system. However, the nongeographic PTNs coupled with the anticipated large number of mobile users in future mobile networks may introduce very large centralized databases. This necessitates research into the design and performance of high-throughput database technologies used in mobile systems to ensure that future systems will be able to carry efficiently the anticipated loads. This paper proposes a scalable, robust, efficient location database architecture based on the location-independent PTNs. The proposed multitree database architecture consists of a number of database subsystems, each of which is a three-level tree structure and is connected to the others only through its root. By exploiting the localized nature of calling and mobility patterns, the proposed architecture effectively reduces the database loads as well as the signaling traffic incurred by the location registration and call delivery procedures. In addition, two memory-resident database indices, memory-resident direct file and T-tree, are proposed for the location databases to further improve their throughput. Analysis model and numerical results are presented to evaluate the efficiency of the proposed database architecture. Results have revealed that the proposed database architecture for location management can effectively support the anticipated high user density in the future mobile networks.

Index Terms: Database architecture, location management, location tracking, mobile networks.

IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 4, NO. 2, MARCH/APRIL 2005

A LOCATION-BASED ROUTING METHOD FOR MOBILE AD HOC NETWORKS

Using location information to help routing is often proposed as a means to achieve scalability in large mobile ad hoc networks. However, location-based routing is difficult when there are holes in the network topology and nodes are mobile or frequently disconnected to save battery. Terminode routing, presented here, addresses these issues. It uses a combination of location-based routing (Terminode Remote Routing, TRR), used when the destination is far, and link state routing (Terminode Local Routing, TLR), used when the destination is close. TRR uses anchored paths, a list of geographic points (not nodes) used as loose source routing information. Anchored paths are discovered and managed by sources, using one of two low overhead protocols: Friend Assisted Path Discovery and Geographical Map-based Path Discovery. Our simulation results show that terminode routing performs well in networks of various sizes. In smaller networks, the performance is comparable to MANET routing protocols. In larger networks that are not uniformly populated with nodes, terminode routing outperforms existing location-based or MANET routing protocols.

Index Terms: Ad hoc networks, location-based routing, scalable routing, mobility model, restricted random waypoint, robustness to location inaccuracy.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 27, NO. 3, MARCH 2005

FACE RECOGNITION USING LAPLACIANFACES

We propose an appearance-based face recognition method called the Laplacianface approach. By using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Different from Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.

Index Terms: Face recognition, principal component analysis, linear discriminant analysis, locality preserving projections, face manifold, subspace learning.

2005 IEEE, IEEE INTERNET COMPUTING

SECURE ELECTRONIC DATA INTERCHANGE OVER THE INTERNET

Numerous retailers, manufacturers, and other companies within business supply chains are leveraging Applicability Statement #2 (AS2) and other standards developed by the IETF's Electronic Data Interchange over the Internet (EDI-INT) working group (www.). Although invisible to the consumer, standards for secure electronic communication of purchase orders, invoices, and other business transactions are helping enterprises drive down costs and offer flexibility in B2B relationships. Founded in 1996 to develop a secure transport service for EDI business documents, the EDI-INT working group began by providing the digital security and message-receipt validation for Internet communication for MIME (Multipurpose Internet Mail Extensions) packaging of EDI; it later expanded its focus to include XML and virtually any other electronic business-documentation format. EDI-INT provides digital security of and receipt validation for MIME, Web, and FTP payloads through authentication, content-integrity, and confidentiality. EDI-INT has since become the leading means of business-to-business (B2B) transport for retail and other industries.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 26, NO. 1, JANUARY 2004

ONLINE HANDWRITTEN SCRIPT RECOGNITION

Automatic identification of handwritten script facilitates many important applications such as automatic transcription of multilingual documents and search for documents on the Web containing a particular script. The increase in usage of handheld devices which accept handwritten input has created a growing demand for algorithms that can efficiently analyze and retrieve handwritten data. This paper proposes a method to classify words and lines in an online handwritten document into one of the six major scripts: Arabic, Cyrillic, Devnagari, Han, Hebrew, or Roman. The classification is based on 11 different spatial and temporal features extracted from the strokes of the words. The proposed system attains an overall classification accuracy of 87.1 percent at the word level with 5-fold cross validation on a data set containing 13,379 words. The classification accuracy improves to 95 percent as the number of words in the test sample is increased to five, and to 95.5 percent for complete text lines consisting of an average of seven words.

Index Terms: Document understanding, handwritten script identification, online document, feature design, evidence accumulation.

LOCATION-AIDED ROUTING (LAR) IN MOBILE AD HOC NETWORKS

A mobile ad hoc network consists of wireless hosts that may move often. Movement of hosts results in a change in routes, requiring some mechanism for determining new routes. Several routing protocols have already been proposed for ad hoc networks. This paper suggests an approach to utilize location information (for instance, obtained using the global positioning system) to improve performance of routing protocols for ad hoc networks. By using location information, the proposed Location-Aided Routing (LAR) protocols limit the search for a new route to a smaller "request zone" of the ad hoc network. This results in a significant reduction in the number of routing messages. We present two algorithms to determine the request zone, and also suggest potential optimizations to our algorithms.
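The first request-zone algorithm can be sketched directly from its geometric definition: the expected zone is a circle of radius speed × elapsed time around the destination's last known location, and the request zone is the smallest axis-aligned rectangle covering both the source and that circle; only nodes inside the rectangle forward the route request. This is a minimal sketch of that construction, not a full protocol implementation.

```python
def request_zone(src, dest_last, speed, elapsed):
    """Compute the rectangular request zone from the source position and the
    destination's expected zone (circle of radius speed * elapsed)."""
    r = speed * elapsed
    (xs, ys), (xd, yd) = src, dest_last
    xmin, xmax = min(xs, xd - r), max(xs, xd + r)
    ymin, ymax = min(ys, yd - r), max(ys, yd + r)
    return (xmin, ymin, xmax, ymax)

def in_zone(node, zone):
    """A node forwards the route request only if it lies inside the zone."""
    x, y = node
    xmin, ymin, xmax, ymax = zone
    return xmin <= x <= xmax and ymin <= y <= ymax
```

Nodes outside the rectangle simply drop the request, which is where the reduction in routing messages comes from.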

IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 11, NO. 4, AUGUST 2003

NOISE REDUCTION BY FUZZY IMAGE FILTERING

A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages. The first stage computes a fuzzy derivative for eight different directions. The second stage uses these fuzzy derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Both stages are based on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy noise. In particular, the shape of the membership functions is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. A statistical model for the noise distribution can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. Experimental results are obtained to show the feasibility of the proposed approach. These results are also compared to other filters by numerical measures and visual inspection.

Index Terms: Additive noise, edge-preserving filtering, fuzzy image filtering, noise reduction.
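The two-stage idea can be illustrated with a stripped-down version: the derivative toward each of the eight neighbors gets a "small" membership degree (a triangular function dropping to zero at a threshold `k`), and the correction applied to the pixel is the membership-weighted average of those derivatives. Small derivatives (noise) are smoothed; large derivatives (edges) get weight zero and are preserved. The paper's rule base and adaptive membership shapes are much richer; this is only a sketch, and `k` is an assumed parameter.

```python
def fuzzy_smooth(img, k=32.0):
    """One iteration of a simplified two-stage fuzzy filter on a 2D list."""
    h, w = len(img), len(img[0])
    small = lambda d: max(0.0, 1.0 - abs(d) / k)  # membership: "derivative is small"
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d = img[ny][nx] - img[y][x]  # directional derivative
                        wgt = small(d)
                        num += wgt * d
                        den += wgt
            if den > 0:
                out[y][x] = img[y][x] + num / den  # fuzzy-weighted correction
    return out
```

On a flat patch with one noisy pixel the outlier is pulled back to the background value, while a step edge of height 100 (far above `k`) passes through unchanged, showing the edge-preserving behavior.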

ITP: AN IMAGE TRANSPORT PROTOCOL FOR THE INTERNET

Images account for a significant and growing fraction of Web downloads. The traditional approach to transporting images uses TCP, which provides a generic reliable in-order bytestream abstraction, but which is overly restrictive for image data. We analyze the progression of image quality at the receiver with time, and show that the in-order delivery abstraction provided by a TCP-based approach prevents the receiver application from processing and rendering portions of an image when they actually arrive. The end result is that an image is rendered in bursts interspersed with long idle times rather than smoothly. This paper describes the design, implementation, and evaluation of the image transport protocol (ITP) for image transmission over loss-prone congested or wireless networks. ITP improves user-perceived latency using application-level framing (ALF) and out-of-order application data unit (ADU) delivery, achieving significantly better interactive performance as measured by the evolution of peak signal-to-noise ratio (PSNR) with time at the receiver. ITP runs over UDP, incorporates receiver-driven selective reliability, uses the congestion manager (CM) to adapt to network congestion, and is customizable for specific image formats (e.g., JPEG and JPEG2000). ITP enables a variety of new receiver post-processing algorithms such as error concealment that further improve the interactivity and responsiveness of reconstructed images. Performance experiments using our implementation across a variety of loss conditions demonstrate the benefits of ITP in improving the interactivity of image downloads at the receiver.

Index Terms: Computer networks, internetworking, transport protocols, congestion control, selective reliability, adaptation.

A REPUTATION-BASED TRUST MODEL FOR PEER-TO-PEER ECOMMERCE COMMUNITIES

Peer-to-peer eCommerce communities are commonly perceived as an environment offering both opportunities and threats. One way to minimize threats in such an open community is to use community-based reputations, which can be computed, for example, through feedback about peers' transaction histories. Such reputation information can help estimate the trustworthiness and predict the future behavior of peers. This paper presents a coherent adaptive trust model for quantifying and comparing the trustworthiness of peers based on a transaction-based feedback system. There are two main features of our model. First, we argue that trust models based solely on feedback from other peers in the community are inaccurate and ineffective. We introduce three basic trust parameters in computing trustworthiness of peers: in addition to the feedback a peer receives through its transactions with other peers, we incorporate the total number of transactions a peer performs and the credibility of the feedback sources into the model for evaluating the trustworthiness of peers. Second, we introduce two adaptive factors, the transaction context factor and the community context factor, to allow the metric to adapt to different domains and situations and to address common problems encountered in a variety of online communities. We also developed a concrete method to validate the proposed trust model and obtained initial results, showing the feasibility and benefit of our approach.
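The shape of such a metric can be sketched as a credibility-weighted aggregate of feedback, scaled by the adaptive context factors. This is an illustrative formula in the spirit of the model, not the paper's exact equation; all parameter names (`context_weight`, `community_bonus`) are assumptions for the sketch.

```python
def trustworthiness(feedback, credibility, context_weight=1.0, community_bonus=0.0):
    """Sketch of a credibility-weighted trust metric: satisfaction scores are
    weighted by the credibility of their sources and normalized, then scaled
    by a transaction-context weight plus a community-context term.

    feedback    : list of (source_peer, score in [0, 1]) pairs
    credibility : dict mapping source_peer -> credibility in [0, 1]
    """
    if not feedback:
        return community_bonus
    weighted = sum(score * credibility.get(src, 0.0) for src, score in feedback)
    norm = sum(credibility.get(src, 0.0) for src, _ in feedback)
    base = weighted / norm if norm else 0.0
    return context_weight * base + community_bonus
```

The key property the abstract argues for is visible here: a low-credibility rater ("bob" below) barely moves the score, so colluding or dishonest feedback sources are discounted rather than averaged in at face value.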

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 16, NO. 1, FEBRUARY 2008

EFFICIENT ROUTING IN INTERMITTENTLY CONNECTED MOBILE NETWORKS: THE MULTIPLE-COPY CASE

Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing schemes fail, because they try to establish complete end-to-end paths before any data is sent. To deal with such networks, researchers have suggested the use of flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family of routing schemes that "spray" a few message copies into the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray routing not only performs significantly fewer transmissions per message, but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios. Finally, we use our theoretical framework proposed in our 2004 paper to analyze the performance of spray routing. We also use this theory to show how to choose the number of copies to be sprayed and how to optimally distribute these copies to relays.

Index Terms: Ad hoc networks, delay-tolerant networks, intermittent connectivity, routing.
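One concrete way to distribute copies to relays is binary spraying: a node holding n > 1 copies hands over half to the next node it meets, and a node left with a single copy waits to deliver it directly to the destination. The sketch below models only the copy bookkeeping under an idealized assumption that every holder with spare copies meets a fresh relay each round; mobility, encounters, and delivery are abstracted away.

```python
def spray_split(copies):
    """One binary-spray encounter: hand over half the copies, keep the rest."""
    give = copies // 2
    return copies - give, give

def spray_rounds(copies):
    """Rounds of encounters until every copy holder has exactly one copy,
    assuming each holder with more than one copy meets a fresh relay per round."""
    holders = [copies]
    rounds = 0
    while any(c > 1 for c in holders):
        nxt = []
        for c in holders:
            if c > 1:
                keep, give = spray_split(c)
                nxt.extend([keep, give])
            else:
                nxt.append(c)
        holders = nxt
        rounds += 1
    return rounds, holders
```

Halving at each encounter is what makes the spray phase fast: L copies reach L distinct relays in about log2(L) encounter rounds, after which each relay performs direct delivery.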

MODELING PEER-PEER FILE SHARING SYSTEMS

Peer-peer networking has recently emerged as a new paradigm for building distributed networked applications. In this paper we develop simple mathematical models to explore and illustrate fundamental performance issues of peer-peer file sharing systems. The modeling framework introduced and the corresponding solution method are flexible enough to accommodate different characteristics of such systems. Through the specification of model parameters, we apply our framework to three different peer-peer architectures: centralized indexing, distributed indexing with flooded queries, and distributed indexing with hashing-directed queries. Using our model, we investigate the effects of system scaling, freeloaders, file popularity and availability on system performance. In particular, we observe that a system with distributed indexing and flooded queries cannot exploit the full capacity of peer-peer systems. We further show that peer-peer file sharing systems can tolerate a significant number of freeloaders without suffering much performance degradation. In many cases, freeloaders can benefit from the available spare capacity of peer-peer systems and increase overall system throughput. Our work shows that simple models coupled with efficient solution methods can be used to understand and answer questions related to the performance of peer-peer file sharing systems.

A WIRELESS DISTRIBUTED INTRUSION DETECTION SYSTEM AND A NEW ATTACK MODEL

Denial-of-service attacks, and jamming in particular, are a threat to wireless networks because they are at the same time easy to mount and difficult to detect and stop. We propose a distributed intrusion detection system in which each node monitors the traffic flow on the network and collects relevant statistics about it. By combining each node's view we are able to tell if (and which type of) an attack happened or if the channel is just saturated. We discuss the impact of the misuse on the system and the best strategies for each actor.

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 10, NO. 4, AUGUST 2002

DYNAMIC PARALLEL ACCESS TO REPLICATED CONTENT IN THE INTERNET

Popular content is frequently replicated in multiple servers or caches in the Internet to offload origin servers and improve end-user experience. However, choosing the best server is a nontrivial task and a bad choice may provide poor end-user experience. In contrast to retrieving a file from a single server, we propose a parallel-access scheme where end users access multiple servers at the same time, fetching different portions of that file from different servers and reassembling them locally. The amount of data retrieved from a particular server depends on the resources available at that server or along the path from the user to the server. Faster servers will deliver bigger portions of a file while slower servers will deliver smaller portions. If the available resources at a server or along the path change during the download of a file, a dynamic parallel access will automatically shift the load from congested locations to less loaded parts (servers and links) of the Internet. The end result is that users experience significant speedups and very consistent response times. Moreover, there is no need for complicated server selection algorithms and load is dynamically shared among all servers. The dynamic parallel-access scheme presented in this paper does not require any modifications to servers or content and can be easily included in browsers, peer-to-peer applications or content distribution networks to speed up delivery of popular content.

Index Terms: Content distribution, HTTP, Internet, mirroring, parallel access, peer-to-peer, replication, Web.
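The self-balancing behavior can be shown with a small simulation: cut the file into blocks, and let each server fetch the next unassigned block as soon as it finishes its current one, so faster servers automatically deliver more blocks without any explicit server selection. The constant per-block transfer time (1 / rate, in abstract time units) is a simplifying assumption of this sketch.

```python
import heapq

def parallel_access(num_blocks, server_rates):
    """Simulate dynamic parallel access with block-level work sharing.

    Returns (blocks delivered per server, overall completion time)."""
    # Each heap entry is (finish time of the server's current block, server id).
    heap = [(1.0 / r, s) for s, r in enumerate(server_rates)]
    heapq.heapify(heap)
    delivered = [0] * len(server_rates)
    finish = 0.0
    for _ in range(num_blocks):
        t, s = heapq.heappop(heap)       # the server that finishes next...
        delivered[s] += 1                # ...delivers one more block
        finish = max(finish, t)
        heapq.heappush(heap, (t + 1.0 / server_rates[s], s))  # and grabs the next block
    return delivered, finish
```

With rates 3:1 the fast server ends up delivering roughly three quarters of the blocks, and if a rate changed mid-download the same loop would shift blocks away from the slowed server, which is the "dynamic" part of the scheme.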

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 18, NO. 4, APRIL 2007

A FULLY DISTRIBUTED PROACTIVELY SECURE THRESHOLD-MULTISIGNATURE SCHEME

Threshold-multisignature schemes combine the properties of threshold group-oriented signature schemes and multisignature schemes to yield a signature scheme that allows a threshold or more group members to collaboratively sign an arbitrary message. In contrast to threshold group signatures, the individual signers do not remain anonymous, but are publicly identifiable from the information contained in the valid threshold-multisignature. The main objective of this paper is to propose such a secure and efficient threshold-multisignature scheme. The paper uniquely defines the fundamental properties of threshold-multisignature schemes and shows that the proposed scheme satisfies these properties and eliminates the latest attacks to which other similar schemes are subject. The efficiency of the proposed scheme is analyzed and shown to be superior to its counterparts. The paper also proposes a discrete logarithm based distributed-key management infrastructure (DKMI), which consists of a round optimal, publicly verifiable, distributed-key generation (DKG) protocol and a one round, publicly verifiable, distributed-key redistribution/updating (DKRU) protocol. The round optimal DKRU protocol solves a major problem with existing secret redistribution/updating schemes by giving group members a mechanism to identify malicious or faulty share holders in the first round, thus avoiding multiple protocol executions.

Index Terms: Security and protection, distributed systems, group-oriented cryptography, threshold-multisignature, secret sharing, distributed-key management infrastructure, publicly verifiable distributed-key generation, publicly verifiable distributed-key redistribution, publicly verifiable distributed-key update.
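The threshold property these schemes build on can be illustrated with classic Shamir (t, n) secret sharing: any t shares reconstruct the secret, while fewer reveal nothing. Note this dealer-based sketch is only the underlying primitive; the paper's DKG protocol generates the shared key without any trusted dealer, and its signatures are discrete-log based on top of such shares.

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field is an illustrative choice

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: evaluate a random degree t-1 polynomial with
    constant term `secret` at points x = 1..n over GF(P)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any t of the n share holders can jointly act (here, recover the secret; in the paper, produce a partial signature), which is exactly the "threshold or more group members" requirement in the abstract.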

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 18, NO. 3, MARCH 2007

A NEW OPERATIONAL TRANSFORMATION FRAMEWORK FOR REAL-TIME GROUP EDITORS

Group editors allow a group of distributed human users to edit a shared multimedia document at the same time over a computer network. Consistency control in this environment must not only guarantee convergence of replicated data, but also attempt to preserve intentions of operations. Operational transformation (OT) is a well-established method for optimistic consistency control in this context and has drawn continuing research attention since 1989. However, counterexamples to previous works have often been identified despite the significant progress made on this topic over the past 15 years. This paper analyzes the root of correctness problems in OT and establishes a novel operational transformation framework for developing OT algorithms and proving their correctness.

Index Terms: Consistency control, group editors, groupware, operational transformation.
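The core OT step can be shown for the simplest case, two concurrent character inserts: each replica applies its local operation immediately, then applies the remote operation transformed against it, and a site-id tiebreak on equal positions makes both replicas converge. This textbook inclusion transformation is a sketch of the general mechanism, not the framework proposed in the paper.

```python
def transform_ins(op1, op2):
    """Inclusion transformation of concurrent insert operations on a shared
    text. Each op is (position, char, site_id); the site id breaks position
    ties so both replicas converge regardless of application order."""
    p1, c1, s1 = op1
    p2, _, s2 = op2
    if p2 < p1 or (p2 == p1 and s2 < s1):
        return (p1 + 1, c1, s1)  # op2 landed at or before op1's spot: shift right
    return op1

def apply_ins(text, op):
    """Apply an insert operation to a text replica."""
    p, c, _ = op
    return text[:p] + c + text[p:]
```

Convergence is exactly the property that famously failed in early OT algorithms; the test below checks it for two sites inserting at the same position. Intention preservation and the transform for deletes are where the subtle counterexamples mentioned in the abstract arise.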

DISTRIBUTED COLLABORATIVE KEY AGREEMENT PROTOCOLS FOR DYNAMIC PEER GROUPS

We consider several distributed collaborative key agreement protocols for dynamic peer groups. This problem has several important characteristics which make it different from traditional secure group communication. They are: (1) distributed nature, in which there is no centralized key server; (2) collaborative nature, in which the group key is contributive, i.e., each group member will collaboratively contribute its part to the global group key; and (3) dynamic nature, in which existing members can leave the group while new members may join. Instead of performing individual rekey operations, i.e., recomputing the group key after every join or leave request, we consider an interval-based approach of rekeying. In particular, we consider three distributed algorithms for updating the group key: (1) the Rebuild algorithm, (2) the Batch algorithm, and (3) the Queue-batch algorithm. We analyze the performance of these distributed algorithms under different settings, including different population sizes and different join/leave probabilities. We show that these three distributed algorithms significantly outperform the individual rekey algorithm, and that the Queue-batch algorithm performs the best among the three distributed algorithms. Moreover, the Queue-batch algorithm has the intrinsic property of balancing the computation/communication workload such that the dynamic peer group can quickly begin secure group communication. This provides fundamental understanding about establishing a collaborative group key for a distributed dynamic peer group.

IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 16, NO. 1, FEBRUARY 2008

TWO TECHNIQUES FOR FAST COMPUTATION OF CONSTRAINED SHORTEST PATHS

Computing constrained shortest paths is fundamental to some important network functions such as QoS routing, MPLS path selection, ATM circuit routing, and traffic engineering. The problem is to find the cheapest path that satisfies certain constraints. In particular, finding the cheapest delay-constrained path is critical for real-time data flows such as voice/video calls. Because it is NP-complete, much research has been designing heuristic algorithms that solve the ε-approximation of the problem with an adjustable accuracy. A common approach is to discretize (i.e., scale and round) the link delay or link cost, which transforms the original problem to a simpler one solvable in polynomial time. The efficiency of the algorithms directly relates to the magnitude of the errors introduced during discretization. Reducing the overhead of computing constrained shortest paths is practically important for the successful design of a high-throughput QoS router, which is limited at both processing power and memory space. In this paper, we propose two techniques that reduce the discretization errors, which allows faster algorithms to be designed. Our simulations show that the new algorithms reduce the execution time by an order of magnitude on power-law topologies with 1000 nodes. The reduction in memory space is similar.

Index Terms: Approximation algorithms, constrained shortest paths, QoS routing.
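The scale-and-round approach can be sketched end to end: multiply each real-valued link delay by a scale factor, round up to an integer, and run a pseudo-polynomial dynamic program over (node, discretized delay budget) states to find the cheapest feasible path. This baseline illustrates the discretization the abstract refers to, not the paper's two error-reduction techniques; rounding up keeps every returned path feasible but may miss paths that are only feasible at a finer granularity, which is precisely the discretization error a larger scale (at the price of more DP states) shrinks.

```python
import math

def cheapest_delay_constrained_path(n, edges, src, dst, max_delay, scale):
    """Cheapest path from src to dst whose total delay fits the budget.

    edges: list of (u, v, cost, delay) directed links with real-valued delay.
    Delays are discretized as ceil(delay * scale); the DP then runs over
    integer delay budgets 0..max_delay*scale, Bellman-Ford style."""
    T = int(max_delay * scale)
    INF = float("inf")
    # cost[v][t] = cheapest cost to reach v with discretized delay at most t
    cost = [[INF] * (T + 1) for _ in range(n)]
    for t in range(T + 1):
        cost[src][t] = 0.0
    for _ in range(n - 1):  # relax all edges n-1 times
        for (u, v, c, d) in edges:
            dd = math.ceil(d * scale)  # discretized (rounded-up) link delay
            for t in range(dd, T + 1):
                if cost[u][t - dd] + c < cost[v][t]:
                    cost[v][t] = cost[u][t - dd] + c
    return cost[dst][T]
```

The state space is n * (max_delay * scale) entries, so the scale factor trades accuracy directly against the execution time and memory that the paper's techniques target.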
