DATA ALCOTT SYSTEMS

Ph: (0)9600095047

3rd Floor, Old No.13/1, New No.27, Brindavan Street, West Mambalam, Chennai-600033
Email: ieeeraja@gmail.com

S.No | Title | Domain | Technology

1 | A TABU SEARCH ALGORITHM FOR CLUSTER BUILDING IN WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | DOT NET
2 | ROUTE STABILITY IN MANETS UNDER THE RANDOM DIRECTION MOBILITY MODEL | MOBILE COMPUTING | DOT NET
3 | GREEDY ROUTING WITH ANTI-VOID TRAVERSAL FOR WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | DOT NET
4 | CELL BREATHING TECHNIQUES FOR LOAD BALANCING IN WIRELESS LANS | MOBILE COMPUTING | DOT NET
5 | RESEQUENCING ANALYSIS OF STOP-AND-WAIT ARQ FOR PARALLEL MULTICHANNEL COMMUNICATIONS | NETWORKING | DOT NET
6 | RESOURCE ALLOCATION IN OFDMA WIRELESS COMMUNICATIONS SYSTEMS SUPPORTING MULTIMEDIA SERVICES | NETWORKING | DOT NET
7 | ENHANCING PRIVACY AND AUTHORIZATION CONTROL SCALABILITY IN THE GRID THROUGH ONTOLOGIES | INFORMATION TECHNOLOGY IN BIOMEDICINE | JAVA
8 | COMBINATORIAL APPROACH FOR PREVENTING SQL INJECTION ATTACKS | ADVANCE COMPUTING CONFERENCE | J2EE
9 | DYNAMIC SEARCH ALGORITHM IN UNSTRUCTURED PEER-TO-PEER NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
10 | ANALYSIS OF SHORTEST PATH ROUTING FOR LARGE MULTI-HOP WIRELESS NETWORKS | NETWORKING | DOT NET
11 | SECURE AND POLICY-COMPLIANT SOURCE ROUTING | NETWORKING | DOT NET
12 | FLEXIBLE DETERMINISTIC PACKET MARKING: AN IP TRACEBACK SYSTEM TO FIND THE REAL SOURCE OF ATTACKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
13 | NODE ISOLATION MODEL AND AGE-BASED NEIGHBOR SELECTION IN UNSTRUCTURED P2P NETWORKS | NETWORKING | JAVA
14 | DISTRIBUTED ALGORITHMS FOR CONSTRUCTING APPROXIMATE MINIMUM SPANNING TREES IN WIRELESS SENSOR NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
15 | MOBILITY MANAGEMENT APPROACHES FOR MOBILE IP NETWORKS: PERFORMANCE COMPARISON AND USE RECOMMENDATIONS | NETWORKING | JAVA
16 | SINGLE-LINK FAILURE DETECTION IN ALL-OPTICAL NETWORKS USING MONITORING CYCLES AND PATHS | NETWORKING | DOT NET
17 | A FAITHFUL DISTRIBUTED MECHANISM FOR SHARING THE COST OF MULTICAST TRANSMISSIONS | PARALLEL AND DISTRIBUTED SYSTEMS | J2EE
18 | ATOMICITY ANALYSIS OF SERVICE COMPOSITION ACROSS ORGANIZATIONS | SOFTWARE ENGINEERING | J2EE
19 | DYNAMIC ROUTING WITH SECURITY CONSIDERATIONS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
20 | CAR: CONTEXT-AWARE ADAPTIVE ROUTING FOR DELAY-TOLERANT MOBILE NETWORKS | MOBILE COMPUTING | JAVA
21 | COLLUSIVE PIRACY PREVENTION IN P2P CONTENT DELIVERY NETWORKS | COMPUTERS | J2EE
22 | SPREAD SPECTRUM WATERMARKING SECURITY | INFORMATION FORENSICS AND SECURITY | DOT NET
23 | LOCAL CONSTRUCTION OF NEAR-OPTIMAL POWER SPANNERS FOR WIRELESS AD-HOC NETWORKS | MOBILE COMPUTING | DOT NET
24 | MULTIPLE ROUTING CONFIGURATIONS FOR FAST IP NETWORK RECOVERY | NETWORKING | JAVA
25 | COMPACTION OF SCHEDULES AND A TWO-STAGE APPROACH FOR DUPLICATION-BASED DAG SCHEDULING | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET
26 | THE EFFECTIVENESS OF CHECKSUMS FOR EMBEDDED NETWORKS | DEPENDABLE AND SECURE COMPUTING | DOT NET
27 | DETECTING MALICIOUS PACKET LOSSES | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
28 | VIRUS SPREAD IN NETWORKS | NETWORKING | DOT NET
29 | BIASED RANDOM WALKS IN UNIFORM WIRELESS NETWORKS | MOBILE COMPUTING | DOT NET
30 | INFORMATION CONTENT-BASED SENSOR SELECTION AND TRANSMISSION POWER ADJUSTMENT FOR COLLABORATIVE TARGET TRACKING | MOBILE COMPUTING | DOT NET
31 | PRESTO: FEEDBACK-DRIVEN DATA MANAGEMENT IN SENSOR NETWORKS | NETWORKING | DOT NET
32 | EXPLICIT LOAD BALANCING TECHNIQUE FOR NGEO SATELLITE IP NETWORKS WITH ON-BOARD PROCESSING CAPABILITIES | NETWORKING | DOT NET
33 | DELAY ANALYSIS FOR MAXIMAL SCHEDULING WITH FLOW CONTROL IN WIRELESS NETWORKS WITH BURSTY TRAFFIC | NETWORKING | DOT NET
34 | OPTIMIZED RESOURCE ALLOCATION FOR SOFTWARE RELEASE PLANNING | SOFTWARE ENGINEERING | DOT NET
35 | AUTOMATIC EXTRACTION OF HEAP REFERENCE PROPERTIES IN OBJECT-ORIENTED PROGRAMS | SOFTWARE ENGINEERING | DOT NET
36 | ENERGY MAPS FOR MOBILE WIRELESS NETWORKS: COHERENCE TIME VERSUS SPREADING PERIOD | MOBILE COMPUTING | DOT NET
37 | RANDOMCAST: AN ENERGY-EFFICIENT COMMUNICATION SCHEME FOR MOBILE AD HOC NETWORKS | MOBILE COMPUTING | DOT NET
38 | EFFICIENT RESOURCE ALLOCATION FOR WIRELESS MULTICAST | MOBILE COMPUTING | JAVA
39 | MINING FILE DOWNLOADING TIME IN STOCHASTIC PEER-TO-PEER NETWORKS | NETWORKING | DOT NET
40 | ENHANCING SEARCH PERFORMANCE IN UNSTRUCTURED P2P NETWORKS BASED ON USERS' COMMON INTEREST | NETWORKING | JAVA
41 | QUIVER: CONSISTENT OBJECT SHARING FOR EDGE SERVICES | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
42 | BRA: A BIDIRECTIONAL ROUTING ABSTRACTION FOR ASYMMETRIC MOBILE AD HOC NETWORKS | NETWORKING | JAVA
43 | AN EFFICIENT CLUSTERING SCHEME TO EXPLOIT HIERARCHICAL DATA IN NETWORK TRAFFIC ANALYSIS | KNOWLEDGE AND DATA ENGINEERING | JAVA
44 | RATE & DELAY GUARANTEES PROVIDED BY CLOS PACKET SWITCHES WITH LOAD BALANCING | NETWORKING | JAVA
45 | GEOMETRIC APPROACH TO IMPROVING ACTIVE PACKET LOSS MEASUREMENT | NETWORKING | JAVA
46 | A PRECISE TERMINATION CONDITION OF THE PROBABILISTIC PACKET MARKING ALGORITHM | DEPENDABLE AND SECURE COMPUTING | JAVA
47 | INTRUSION DETECTION IN HOMOGENEOUS & HETEROGENEOUS WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | JAVA
48 | A DISTRIBUTED AND SCALABLE ROUTING TABLE MANAGER FOR THE NEXT GENERATION OF IP ROUTERS | — | DOT NET
49 | PERFORMANCE OF A SPECULATIVE TRANSMISSION SCHEME FOR SCHEDULING LATENCY REDUCTION | NETWORKING | JAVA
50 | EFFICIENT 2-D GRAY-SCALE MORPHOLOGICAL TRANSFORMATIONS WITH ARBITRARY FLAT STRUCTURING ELEMENTS | IMAGE PROCESSING | DOT NET
51 | RATE ALLOCATION & NETWORK LIFETIME PROBLEM FOR WIRELESS SENSOR NETWORKS | NETWORKING | DOT NET
52 | VISION-BASED PROCESSING FOR REAL-TIME 3-D DATA ACQUISITION BASED ON CODED STRUCTURED LIGHT | IMAGE PROCESSING | DOT NET
53 | USING THE CONCEPTUAL COHESION OF CLASSES FOR FAULT PREDICTION IN OBJECT-ORIENTED SYSTEMS | SOFTWARE ENGINEERING | JAVA
54 | LOCATION-BASED SPATIAL QUERY PROCESSING IN WIRELESS BROADCAST ENVIRONMENTS | MOBILE COMPUTING | JAVA
55 | BANDWIDTH ESTIMATION FOR IEEE 802.11-BASED AD HOC NETWORKS | MOBILE COMPUTING | JAVA
56 | MODELING & AUTOMATED CONTAINMENT OF WORMS | DEPENDABLE AND SECURE COMPUTING | JAVA
57 | TRUSTWORTHY COMPUTING UNDER RESOURCE CONSTRAINTS WITH THE DOWN POLICY | DEPENDABLE AND SECURE COMPUTING | DOT NET
58 | BENEFIT-BASED DATA CACHING IN AD HOC NETWORKS | MOBILE COMPUTING | JAVA
59 | STATISTICAL TECHNIQUES FOR DETECTING TRAFFIC ANOMALIES THROUGH PACKET HEADER DATA | NETWORKING | DOT NET
60 | HBA: DISTRIBUTED METADATA MANAGEMENT FOR LARGE-SCALE CLUSTER-BASED STORAGE SYSTEMS | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET
61 | TEMPORAL PARTITIONING OF COMMUNICATION RESOURCES IN AN INTEGRATED ARCHITECTURE | DEPENDABLE AND SECURE COMPUTING | DOT NET
62 | THE EFFECT OF PAIRS IN PROGRAM DESIGN TASKS | SOFTWARE ENGINEERING | DOT NET
63 | CONSTRUCTING INTER-DOMAIN PACKET FILTERS TO CONTROL IP SPOOFING BASED ON BGP UPDATES | DEPENDABLE AND SECURE COMPUTING | JAVA
64 | ORTHOGONAL DATA EMBEDDING FOR BINARY IMAGES IN MORPHOLOGICAL TRANSFORM DOMAIN: A HIGH-CAPACITY APPROACH | MULTIMEDIA | DOT NET
65 | PROTECTION OF DATABASE SECURITY VIA COLLABORATIVE INFERENCE DETECTION | KNOWLEDGE AND DATA ENGINEERING | J2EE
66 | ESTIMATION OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M | SOFTWARE ENGINEERING | DOT NET
67 | ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE RETRIEVAL | IMAGE PROCESSING | DOT NET
68 | LOCALIZED SENSOR AREA COVERAGE WITH LOW COMMUNICATION OVERHEAD | MOBILE COMPUTING | DOT NET
69 | HARDWARE-ENHANCED ASSOCIATION RULE MINING WITH HASHING AND PIPELINING | KNOWLEDGE AND DATA ENGINEERING | DOT NET
70 | EFFICIENT RESOURCE ALLOCATION FOR WIRELESS MULTICAST | MOBILE COMPUTING | DOT NET
71 | EFFICIENT ROUTING IN INTERMITTENTLY CONNECTED MOBILE NETWORKS: THE MULTIPLE-COPY CASE | NETWORKING | DOT NET
72 | A NOVEL FRAMEWORK FOR SEMANTIC ANNOTATION AND PERSONALIZED RETRIEVAL OF SPORTS VIDEO | MULTIMEDIA | DOT NET
73 | TWO TECHNIQUES FOR FAST COMPUTATION OF CONSTRAINED SHORTEST PATHS | NETWORKING | JAVA
74 | WATERMARKING RELATIONAL DATABASES USING OPTIMIZATION-BASED TECHNIQUES | KNOWLEDGE AND DATA ENGINEERING | DOT NET
75 | PROBABILISTIC PACKET MARKING FOR LARGE-SCALE IP TRACEBACK | NETWORKING | DOT NET
76 | DUAL-LINK FAILURE RESILIENCY THROUGH BACKUP LINK MUTUAL EXCLUSION | NETWORKING | JAVA
77 | TRUTH DISCOVERY WITH MULTIPLE CONFLICTING INFORMATION PROVIDERS ON THE WEB | KNOWLEDGE AND DATA ENGINEERING | J2EE
78 | DYNAMIC LOAD BALANCING IN DISTRIBUTED SYSTEMS IN THE PRESENCE OF DELAYS: A REGENERATION-THEORY APPROACH | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
79 | A SEMI-FRAGILE CONTENT-BASED IMAGE WATERMARKING FOR AUTHENTICATION IN SPATIAL DOMAIN USING DISCRETE COSINE TRANSFORM | JOURNAL | JAVA
80 | OCGRR: A NEW SCHEDULING ALGORITHM FOR DIFFERENTIATED SERVICES NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
81 | AN ADAPTIVE PROGRAMMING MODEL FOR FAULT-TOLERANT DISTRIBUTED COMPUTING | DEPENDABLE AND SECURE COMPUTING | JAVA
82 | AN ACKNOWLEDGMENT-BASED APPROACH FOR THE DETECTION OF ROUTING MISBEHAVIOR IN MANETS | MOBILE COMPUTING | JAVA
83 | HYBRID INTRUSION DETECTION WITH WEIGHTED SIGNATURE GENERATION OVER ANOMALOUS INTERNET EPISODES (HIDS) | DEPENDABLE AND SECURE COMPUTING | J2EE
84 | PFUSION: A P2P ARCHITECTURE FOR INTERNET-SCALE CONTENT-BASED SEARCH AND RETRIEVAL | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET
85 | ROUTE RESERVATION IN AD HOC WIRELESS NETWORKS | MOBILE COMPUTING | JAVA
86 | DISTRIBUTED CACHE UPDATING FOR THE DYNAMIC SOURCE ROUTING PROTOCOL | MOBILE COMPUTING | JAVA
87 | DIGITAL IMAGE PROCESSING TECHNIQUES FOR THE DETECTION AND REMOVAL OF CRACKS IN DIGITIZED PAINTINGS | IMAGE PROCESSING | DOT NET
88 | NOISE REDUCTION BY FUZZY IMAGE FILTERING | FUZZY SYSTEMS | JAVA
89 | A NOVEL SECURE COMMUNICATION PROTOCOL FOR AD HOC NETWORKS [SCP] | — | JAVA
90 | FACE RECOGNITION USING LAPLACIANFACES | PATTERN ANALYSIS AND MACHINE INTELLIGENCE | JAVA
91 | PREDICTIVE JOB SCHEDULING IN A CONNECTION-LIMITED SYSTEM USING PARALLEL GENETIC ALGORITHM | INTERNATIONAL CONFERENCE ON INTELLIGENT AND ADVANCED SYSTEMS | JAVA
92 | PERSONALIZED WEB SEARCH WITH SELF-ORGANIZING MAP | INTERNATIONAL CONFERENCE ON E-TECHNOLOGY, E-COMMERCE AND E-SERVICE | J2EE
93 | A DISTRIBUTED DATABASE ARCHITECTURE FOR GLOBAL ROAMING IN NEXT-GENERATION MOBILE NETWORKS | NETWORKING | JAVA
94 | STRUCTURE AND TEXTURE FILLING-IN OF MISSING IMAGE BLOCKS IN WIRELESS TRANSMISSION AND COMPRESSION APPLICATIONS | IMAGE PROCESSING | JAVA
95 | NETWORK BORDER PATROL: PREVENTING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE INTERNET | NETWORKING | JAVA
96 | APPLICATION OF BPCS STEGANOGRAPHY TO WAVELET-COMPRESSED VIDEO | IMAGE PROCESSING | JAVA
97 | IMAGE PROCESSING FOR EDGE DETECTION | — | DOT NET
98 | DOUBLE-COVERED BROADCAST (DCB): A SIMPLE RELIABLE BROADCAST ALGORITHM IN MANETS | IEEE INFOCOM CONFERENCE | JAVA

Year (by S.No): 1-37: 2009; 38-77: 2008; 78-85: 2007; 86-89: 2006; 90-92: 2005; 93-98: 2004.

ABSTRACTS

1. A TABU SEARCH ALGORITHM FOR CLUSTER BUILDING IN WIRELESS SENSOR NETWORKS
The main challenge in wireless sensor network deployment pertains to optimizing energy consumption when collecting data from sensor nodes. This paper proposes a new centralized clustering method for a data collection mechanism in wireless sensor networks, which is based on network energy maps and Quality-of-Service (QoS) requirements. The clustering problem is modeled as hypergraph partitioning, and its resolution is based on a tabu search heuristic. Our approach defines moves using largest-size cliques in a feasibility cluster graph. Compared to other methods (a CPLEX-based method, a distributed method, and a simulated annealing-based method), the results show that our tabu search-based approach returns high-quality solutions in terms of cluster cost and execution time. As a result, this approach is suitable for handling network extensibility in a satisfactory manner.

2. ROUTE STABILITY IN MANETS UNDER THE RANDOM DIRECTION MOBILITY MODEL
A fundamental issue arising in mobile ad hoc networks (MANETs) is the selection of the optimal path between any two nodes. A method that has been advocated to improve routing efficiency is to select the most stable path so as to reduce the latency and the overhead due to route reconstruction. In this work, we study both the availability and the duration probability of a routing path that is subject to link failures caused by node mobility. In particular, we focus on the case where the network nodes move according to the Random Direction model, and we derive both exact and approximate (but simple) expressions of these probabilities. Through our results, we study the problem of selecting an optimal route in terms of path availability. Finally, we propose an approach to improve the efficiency of reactive routing protocols.

3. GREEDY ROUTING WITH ANTI-VOID TRAVERSAL FOR WIRELESS SENSOR NETWORKS
The unreachability problem (i.e., the so-called void problem) that exists in greedy routing algorithms has been studied for wireless sensor networks. Some of the current research work cannot fully resolve the void problem, while there exist other schemes that can guarantee the delivery of packets only with excessive consumption of control overhead. In this paper, a greedy anti-void routing (GAR) protocol is proposed to solve the void problem with increased routing efficiency by exploiting the boundary finding technique for the unit disk graph (UDG). The proposed rolling-ball UDG boundary traversal (RUT) is employed to completely guarantee the delivery of packets from the source to the destination node under the UDG network. The boundary map (BM) and the indirect map searching (IMS) scheme are proposed as efficient algorithms for the realization of the RUT technique. Moreover, the hop count reduction (HCR) scheme is utilized as a short-cutting technique to reduce the routing hops by listening to the neighbors' traffic, while the intersection navigation (IN) mechanism is proposed to obtain the best rolling direction for boundary traversal with the adoption of the shortest path criterion. In order to maintain the network requirement of the proposed RUT scheme under non-UDG networks, the partial UDG construction (PUC) mechanism is proposed to transform the non-UDG into a UDG setting for the portion of nodes that facilitate boundary traversal. These three schemes are incorporated within the GAR protocol to further enhance routing performance with reduced communication overhead. The proofs of correctness for the GAR scheme are also given in this paper. Compared with existing localized routing algorithms, the simulation results show that the proposed GAR-based protocols can provide better routing efficiency.

4. CELL BREATHING TECHNIQUES FOR LOAD BALANCING IN WIRELESS LANS
Maximizing network throughput while providing fairness is one of the key challenges in wireless LANs (WLANs). This goal is typically achieved when the load of access points (APs) is balanced. Recent studies on operational WLANs, however, have shown that AP load is often substantially uneven. To alleviate such load imbalance, several load balancing schemes have been proposed. These schemes commonly require proprietary software or hardware at the user side for controlling the user-AP association. In this paper we present a new load balancing technique that controls the size of WLAN cells (i.e., the AP's coverage range), which is conceptually similar to cell breathing in cellular networks. The proposed scheme requires neither modification to the users nor to the IEEE 802.11 standard; it only requires the ability to dynamically change the transmission power of the AP beacon messages. We develop a set of polynomial-time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP. We also consider the problem of network-wide min-max load balancing. Simulation results show that the performance of the proposed method is comparable with or superior to the best existing association-based methods.
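
To make the tabu search idea in abstract 1 concrete, the following minimal Java sketch applies its basic loop to a toy sensor-to-cluster-head assignment: pick the best non-tabu single reassignment, record the reverse move in a tenure-based tabu list, and keep the best solution seen. The random cost matrix, the single-reassignment move, and the tenure value are illustrative assumptions; the paper's actual model is a hypergraph partitioning with clique-based moves.

    import java.util.*;

    /** Minimal tabu-search sketch for sensor-to-cluster-head assignment.
     *  Illustrative only: costs, moves, and tenure are simplified stand-ins
     *  for the paper's hypergraph-partitioning formulation. */
    public class TabuCluster {
        public static void main(String[] args) {
            Random rnd = new Random(1);
            int sensors = 20, heads = 4, iters = 200, tenure = 7;
            double[][] cost = new double[sensors][heads];   // energy cost of sensor i using head j
            for (double[] row : cost)
                for (int j = 0; j < heads; j++) row[j] = rnd.nextDouble();

            int[] assign = new int[sensors];                // current solution
            for (int i = 0; i < sensors; i++) assign[i] = rnd.nextInt(heads);
            int[][] tabuUntil = new int[sensors][heads];    // iteration until a move stays tabu
            int[] best = assign.clone();
            double bestCost = total(cost, assign);

            for (int it = 0; it < iters; it++) {
                int mvSensor = -1, mvHead = -1;
                double mvCost = Double.MAX_VALUE;
                for (int i = 0; i < sensors; i++) {         // best non-tabu single reassignment
                    for (int j = 0; j < heads; j++) {
                        if (j == assign[i]) continue;
                        int old = assign[i];
                        assign[i] = j;
                        double c = total(cost, assign);
                        assign[i] = old;
                        // aspiration: a tabu move is allowed if it beats the best solution
                        boolean tabu = tabuUntil[i][j] > it && c >= bestCost;
                        if (!tabu && c < mvCost) { mvCost = c; mvSensor = i; mvHead = j; }
                    }
                }
                if (mvSensor < 0) break;
                tabuUntil[mvSensor][assign[mvSensor]] = it + tenure;  // forbid moving straight back
                assign[mvSensor] = mvHead;
                if (mvCost < bestCost) { bestCost = mvCost; best = assign.clone(); }
            }
            System.out.printf("best cluster cost: %.3f%n", bestCost);
        }

        static double total(double[][] cost, int[] assign) {
            double s = 0;
            for (int i = 0; i < assign.length; i++) s += cost[i][assign[i]];
            return s;
        }
    }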

5. RESEQUENCING ANALYSIS OF STOP-AND-WAIT ARQ FOR PARALLEL MULTICHANNEL COMMUNICATIONS
In this paper, we consider a multichannel data communication system in which the stop-and-wait automatic-repeat-request protocol for parallel channels with an in-sequence delivery guarantee (MSW-ARQ-inS) is used for error control. We evaluate the resequencing delay and the resequencing buffer occupancy. Under the assumption that all channels have the same transmission rate but possibly different time-invariant error rates, we compute the probability mass functions of the resequencing buffer occupancy and the resequencing delay for time-invariant channels. Then, by assuming the Gilbert-Elliott model for each channel, we extend our analysis to time-varying channels and derive the probability generating function of the resequencing buffer occupancy and the probability mass function of the resequencing delay. From numerical and simulation results, we analyze trends in the mean resequencing buffer occupancy and the mean resequencing delay as functions of system parameters. We expect that the modeling technique and analytical approach used in this paper can be applied to the performance evaluation of other ARQ protocols (e.g., selective-repeat ARQ) over multiple time-varying channels. Index Terms: in-sequence delivery, multichannel data communications, resequencing buffer occupancy, resequencing delay, SW-ARQ, modeling and performance.

6. RESOURCE ALLOCATION IN OFDMA WIRELESS COMMUNICATIONS SYSTEMS SUPPORTING MULTIMEDIA SERVICES
We design a resource allocation algorithm for the downlink of orthogonal frequency division multiple access (OFDMA) systems supporting real-time (RT) and best-effort (BE) services simultaneously over a time-varying wireless channel. The proposed algorithm aims at maximizing system throughput while satisfying the quality-of-service (QoS) requirements of the RT and BE services. We take two kinds of QoS requirements into account. One is the required average transmission rate, for both RT and BE services. The other is the tolerable average absolute deviation of transmission rate (AADTR), for the RT services only, which is used to control the fluctuation in transmission rates and to limit the RT packet delay to a moderate level. We formulate the optimization problem representing the resource allocation under consideration and solve it by using the dual optimization technique and the projection stochastic subgradient method. Simulation results show that the proposed algorithm meets the QoS requirements well with high throughput and outperforms the modified largest weighted delay first (M-LWDF) algorithm that supports similar QoS requirements.

7. ENHANCING PRIVACY AND AUTHORIZATION CONTROL SCALABILITY IN THE GRID THROUGH ONTOLOGIES
The use of data Grids for sharing relevant data has proven to be successful in many research disciplines. However, the use of these environments when personal data are involved (such as in health) is reduced due to their lack of trust. There are many approaches that provide encrypted storage and key shares to prevent access by unauthorized users; however, these approaches are additional layers that must be managed along with the authorization policies. We present in this paper a privacy-enhancing technique that uses encryption and relates to the structure of the data and its organization, providing a natural way to propagate authorization and a framework that fits many use cases. The paper describes the architecture and processes, and it also shows results obtained in a medical imaging platform.

8. COMBINATORIAL APPROACH FOR PREVENTING SQL INJECTION ATTACKS
The major issue in web application security is SQL injection, which can give attackers unrestricted access to the databases that underlie Web applications and has become increasingly frequent and serious. A combinatorial approach for protecting Web applications against SQL injection is discussed in this paper: a novel idea that incorporates the strengths of both the signature-based method and the auditing method. From the signature-based standpoint, it presents a detection mode for SQL injection using pairwise sequence alignment of amino acid code formulated from the web application form parameters sent via the web server; the signature-based method uses the Hirschberg algorithm, a divide-and-conquer approach that reduces time and space complexity. From the auditing-based standpoint, it analyzes the transactions to find malicious access. This system was able to stop all of the successful attacks and did not generate any false positives.
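
A note alongside abstract 8: the paper's defense pairs signature matching (Hirschberg-based alignment) with transaction auditing. Independently of that scheme, the usual first line of defense in J2EE code is to bind user input through PreparedStatement parameters rather than concatenating SQL strings. The sketch below shows only that standard JDBC practice, not the paper's algorithm; the table and column names are hypothetical.

    import java.sql.*;

    /** Hypothetical login helper: binds user input with a PreparedStatement
     *  instead of string concatenation. Standard JDBC practice, not the
     *  signature/auditing scheme of the paper. */
    public class SafeLogin {
        /** Returns true if the (user, pass) pair exists. The "users" table
         *  and its columns are assumptions for illustration. */
        public static boolean authenticate(Connection con, String user, String pass)
                throws SQLException {
            String sql = "SELECT 1 FROM users WHERE username = ? AND password = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, user);   // bound values are never parsed as SQL,
                ps.setString(2, pass);   // so "' OR '1'='1" stays a literal string
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }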

9. DYNAMIC SEARCH ALGORITHM IN UNSTRUCTURED PEER-TO-PEER NETWORKS
Designing efficient search algorithms is a key challenge in unstructured peer-to-peer networks. Flooding and random walk (RW) are two typical search algorithms. Flooding searches aggressively and covers the most nodes; however, it generates a large amount of query messages and, thus, does not scale. On the contrary, RW searches conservatively: it generates only a fixed number of query messages at each hop but may take a longer search time. We propose the dynamic search (DS) algorithm, which is a generalization of flooding and RW. DS takes advantage of the various contexts under which each previous search algorithm performs well: it resembles flooding for short-term search and RW for long-term search. Moreover, DS can be further combined with knowledge-based search mechanisms to improve search performance. We analyze the performance of DS based on several performance metrics, including the success rate, search time, query hits, query messages, query efficiency, and search efficiency. Numerical results show that DS provides a good tradeoff between search performance and cost. On average, DS performs about 25 times better than flooding and 58 times better than RW in power-law graphs, and about 186 times better than flooding and 120 times better than RW in bimodal topologies.

10. ANALYSIS OF SHORTEST PATH ROUTING FOR LARGE MULTI-HOP WIRELESS NETWORKS
In this paper, we analyze the impact of straight-line routing in large homogeneous multi-hop wireless networks. We estimate the nodal load, defined as the number of packets served at a node, induced by straight-line routing. For a given total offered load on the network, our analysis shows that the load at each node is a function of the node's Voronoi cell, the node's location in the network, and the traffic pattern specified by the source and destination randomness and straight-line routing. In the asymptotic regime, we show that each node's probability of serving a packet arriving to the network approaches the product of half the length of the Voronoi cell perimeter and the load density function at the node's location. The density function depends on the traffic pattern generated by straight-line routing, and it determines where the hot spot is created in the network. Hence, contrary to conventional wisdom, straight-line routing can balance the load over the network, depending on the traffic patterns.

11. SECURE AND POLICY-COMPLIANT SOURCE ROUTING
In today's Internet, inter-domain route control remains elusive; nevertheless, such control could improve the performance, reliability, and utility of the network for end users and ISPs alike. While researchers have proposed a number of source routing techniques to combat this limitation, there has thus far been no way for independent ASes to ensure that such traffic does not circumvent local traffic policies, nor to accurately determine the correct party to charge for forwarding the traffic. We present Platypus, an authenticated source routing system built around the concept of network capabilities, which allow for accountable, fine-grained path selection by cryptographically attesting to policy compliance at each hop along a source route. Capabilities can be composed to construct routes through multiple ASes and can be delegated to third parties. Platypus caters to the needs of both end users and ISPs: users gain the ability to pool their resources and select routes other than the default, while ISPs maintain control over where, when, and whose packets traverse their networks. We describe the design and implementation of an extensive Platypus policy framework that can be used to address several issues in wide-area routing at both the edge and the core, and we evaluate its performance and security. Our results show that incremental deployment of Platypus can achieve immediate gains.

12. FLEXIBLE DETERMINISTIC PACKET MARKING: AN IP TRACEBACK SYSTEM TO FIND THE REAL SOURCE OF ATTACKS
Internet Protocol (IP) traceback is the enabling technology to control Internet crime. In this paper, we present a novel and practical IP traceback system called Flexible Deterministic Packet Marking (FDPM), which provides a defense system with the ability to find the real sources of attacking packets that traverse the network. While a number of other traceback schemes exist, FDPM provides innovative features to trace the source of IP packets and can obtain better tracing capability than others. In particular, FDPM adopts a flexible mark length strategy to make it compatible with different network environments; it also adaptively changes its marking rate according to the load of the participating router using a flexible flow-based marking scheme. Evaluations of both simulation and a real system implementation demonstrate that FDPM requires a moderately small number of packets to complete the traceback process, adds little additional load to routers, and can trace a large number of sources in one traceback process with low false positive rates. The built-in overload prevention mechanism makes this system capable of achieving satisfactory traceback results even when the router is heavily loaded. The motivation of this traceback system is DDoS defense: it has been used not only to trace DDoS attacking packets but also to enhance the filtering of attacking traffic, and it has a wide array of applications in other security systems.
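
The flooding-versus-random-walk trade-off behind the dynamic search algorithm of abstract 9 can be sketched in a few lines: flood for the first few hops, then let the surviving copies continue as random walks. The toy topology, threshold, and TTL below are assumptions for illustration, not the paper's parameterization.

    import java.util.*;

    /** Toy simulation of a flooding-to-random-walk hybrid search in the
     *  spirit of Dynamic Search (DS). The graph and parameters are
     *  simplified assumptions, not the paper's experimental setup. */
    public class DynamicSearchDemo {
        static final int NODES = 500, THRESHOLD_HOPS = 2, TTL = 12;

        public static void main(String[] args) {
            Random rnd = new Random(42);
            List<List<Integer>> adj = new ArrayList<>();
            for (int i = 0; i < NODES; i++) adj.add(new ArrayList<>());
            for (int i = 0; i < NODES; i++) {            // random, roughly 4-regular graph
                for (int d = 0; d < 4; d++) {
                    int j = rnd.nextInt(NODES);
                    if (j != i) { adj.get(i).add(j); adj.get(j).add(i); }
                }
            }
            int messages = search(adj, rnd.nextInt(NODES), rnd.nextInt(NODES), rnd);
            System.out.println(messages < 0 ? "not found" : "found, messages used: " + messages);
        }

        /** Flood for THRESHOLD_HOPS hops, then continue as random walks. */
        static int search(List<List<Integer>> adj, int src, int target, Random rnd) {
            Set<Integer> frontier = new HashSet<>(List.of(src));
            int messages = 0;
            for (int hop = 1; hop <= TTL; hop++) {
                Set<Integer> next = new HashSet<>();
                for (int u : frontier) {
                    List<Integer> nbrs = adj.get(u);
                    if (nbrs.isEmpty()) continue;
                    // flooding phase: all neighbors; walk phase: one random neighbor
                    List<Integer> out = hop <= THRESHOLD_HOPS
                            ? nbrs : List.of(nbrs.get(rnd.nextInt(nbrs.size())));
                    for (int v : out) {
                        messages++;
                        if (v == target) return messages;
                        next.add(v);
                    }
                }
                frontier = next;
            }
            return -1;
        }
    }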

13. NODE ISOLATION MODEL AND AGE-BASED NEIGHBOR SELECTION IN UNSTRUCTURED P2P NETWORKS
Previous analytical studies of unstructured P2P resilience have assumed exponential user lifetimes and only considered age-independent neighbor replacement. In this paper, we overcome these limitations by introducing a general node-isolation model for heavy-tailed user lifetimes and arbitrary neighbor-selection algorithms. Using this model, we analyze two age-biased neighbor-selection strategies and show that they significantly improve the residual lifetimes of chosen users, which dramatically reduces the probability of user isolation and graph partitioning compared with uniform selection of neighbors. In fact, the second strategy, based on random walks on age-proportional graphs, demonstrates that, for lifetimes with infinite variance, the system monotonically increases its resilience as its age and size grow. Specifically, we show that the probability of isolation converges to zero as these two metrics tend to infinity. We finish the paper with simulations in finite-size graphs that demonstrate the effect of this result in practice.

14. DISTRIBUTED ALGORITHMS FOR CONSTRUCTING APPROXIMATE MINIMUM SPANNING TREES IN WIRELESS SENSOR NETWORKS
While there are distributed algorithms for the minimum spanning tree (MST) problem, these algorithms require a relatively large number of messages and a large amount of time, and they are fairly involved, making them impractical for resource-constrained networks such as wireless sensor networks. In such networks, a sensor has very limited power, and any algorithm needs to be simple, local, and energy efficient. Motivated by these considerations, we design and analyze a class of simple and local distributed algorithms called Nearest Neighbor Tree (NNT) algorithms for energy-efficient construction of an approximate MST in wireless networks. Assuming that the nodes are uniformly distributed, we show provable bounds on both the quality of the spanning tree produced and the energy needed to construct it. We show that while NNT produces a close approximation to the MST, it consumes asymptotically less energy than the classical message-optimal distributed MST algorithm due to Gallager, Humblet, and Spira. Further, the NNTs can be maintained dynamically with polylogarithmic rearrangements under node insertions and deletions. We also perform extensive simulations, which show that the bounds are much better in practice. Our results, to the best of our knowledge, demonstrate the first tradeoff between the quality of approximation and the energy required for building spanning trees on wireless networks, and they motivate similar considerations for other important problems.

15. MOBILITY MANAGEMENT APPROACHES FOR MOBILE IP NETWORKS: PERFORMANCE COMPARISON AND USE RECOMMENDATIONS
In wireless networks, efficient management of mobility is a crucial issue for supporting mobile users. The Mobile Internet Protocol (MIP) has been proposed to support global mobility in IP networks. Several mobility management strategies have been proposed which aim at reducing the signaling traffic related to Mobile Terminal (MT) registration with the Home Agents (HAs) whenever their Care-of-Addresses (CoAs) change. They use different hierarchies of Foreign Agents (FAs) and Gateway FAs (GFAs) to concentrate the registration processes. For high-mobility MTs, the Hierarchical MIP (HMIP) and Dynamic HMIP (DHMIP) strategies localize the registration in FAs and GFAs, yielding high mobility signaling. The Multicast HMIP (MHMIP) strategy limits the registration processes to the GFAs; for high-mobility MTs, it provides the lowest mobility signaling delay compared to the HMIP and DHMIP approaches, but it is a resource-consuming strategy unless MT mobility is frequent. Hence, we propose an analytic model to evaluate the mean signaling delay and the mean bandwidth per call according to the type of MT mobility. In our analysis, MHMIP outperforms the DHMIP and MIP strategies in almost all the studied cases. The main contribution of this paper is the analytic model that allows the performance evaluation of the mobility management approaches.

16. SINGLE-LINK FAILURE DETECTION IN ALL-OPTICAL NETWORKS USING MONITORING CYCLES AND PATHS
In this paper, we consider the problem of fault localization in all-optical networks. We introduce the concepts of monitoring cycles (MCs) and monitoring paths (MPs) for unique identification of single-link failures. MCs and MPs are required to pass through one or more monitoring locations and are constructed such that any single-link failure results in the failure of a unique combination of MCs and MPs passing through the monitoring location(s). For a network with only one monitoring location, we prove that three-edge connectivity is a necessary and sufficient condition for constructing MCs that uniquely identify any single-link failure in the network. For this case, we formulate the problem of constructing MCs as an integer linear program (ILP). For an arbitrary network (not necessarily three-edge connected), we describe a fault localization technique that uses both MPs and MCs and employs multiple monitoring locations. We also develop heuristic approaches for constructing MCs in the presence of one or more monitoring locations, and we provide a linear-time algorithm to compute the minimum number of required monitoring locations. Through extensive simulations, we demonstrate the effectiveness of the proposed monitoring technique.
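
The core rule of the NNT algorithms in abstract 14 is that each node connects to its nearest neighbor of higher rank, which yields a spanning tree without global coordination. The sketch below demonstrates that rule centrally on random points, using node ids as ranks; the real algorithms run distributedly with random ranks and purely local searches.

    import java.util.*;

    /** Sketch of the Nearest Neighbor Tree rule: every node links to its
     *  nearest neighbor of higher rank (here, higher id), so the edges
     *  form a spanning tree rooted at the highest-ranked node. Uniform
     *  random points stand in for a deployed sensor field. */
    public class NearestNeighborTree {
        public static void main(String[] args) {
            Random rnd = new Random(7);
            int n = 12;
            double[][] p = new double[n][2];
            for (double[] q : p) { q[0] = rnd.nextDouble(); q[1] = rnd.nextDouble(); }

            double totalLen = 0;
            for (int i = 0; i < n - 1; i++) {      // node n-1 is the root
                int parent = -1;
                double best = Double.MAX_VALUE;
                for (int j = i + 1; j < n; j++) {  // only higher-ranked candidates
                    double d = Math.hypot(p[i][0] - p[j][0], p[i][1] - p[j][1]);
                    if (d < best) { best = d; parent = j; }
                }
                totalLen += best;
                System.out.printf("node %d -> node %d (%.3f)%n", i, parent, best);
            }
            System.out.printf("total tree length: %.3f%n", totalLen);
        }
    }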

17. A FAITHFUL DISTRIBUTED MECHANISM FOR SHARING THE COST OF MULTICAST TRANSMISSIONS
The problem of sharing the cost of multicast transmissions was studied in the past, and two mechanisms, Marginal Cost (MC) and Shapley Value (SH), were proposed to solve it. Although both are strategyproof mechanisms, the distributed protocols implementing them are susceptible to manipulation by autonomous nodes. We propose a distributed Shapley Value mechanism in which the participating nodes do not have incentives to deviate from the mechanism specifications, and we show that the proposed mechanism is a faithful implementation of the Shapley Value mechanism. We experimentally investigate the performance of the existing and the proposed cost-sharing mechanisms by implementing and deploying them on PlanetLab. We compare the execution time of the MC and SH mechanisms for the Tamper-Proof and Autonomous Node models. We show that the MC mechanisms generate a smaller revenue than the SH mechanisms and are thus not attractive to the content provider. We also study the convergence and scalability of the mechanisms by varying the number of nodes and the number of users per node, and we show that increasing the number of users per node is beneficial for systems implementing the SH mechanisms from both computational and economic perspectives.

18. ATOMICITY ANALYSIS OF SERVICE COMPOSITION ACROSS ORGANIZATIONS
Atomicity is a highly desirable property for achieving application consistency in service compositions. To achieve atomicity, a service composition should satisfy the atomicity sphere, a structural criterion for the backend processes of the involved services. Existing analysis techniques for the atomicity sphere generally assume complete knowledge of all involved backend processes. Such an assumption is invalid when some service providers do not release all details of their backend processes to service consumers outside their organizations. To address this problem, we propose a process algebraic framework to publish atomicity-equivalent public views of the backend processes. These public views extract relevant task properties and reveal only the partial process details that service providers need to expose. Our framework enables the analysis of the atomicity sphere for service compositions using these public views instead of the backend processes. This allows service consumers to choose suitable services such that their composition satisfies the atomicity sphere without the providers disclosing the details of their backend processes. On the theoretical side, we present algorithms to construct atomicity-equivalent public views and to analyze the atomicity sphere for a service composition. On the practical side, two case studies from the supply chain and insurance domains are given to evaluate our proposal and demonstrate the applicability of our approach.

19. DYNAMIC ROUTING WITH SECURITY CONSIDERATIONS
Security has become one of the major issues for data communication over wired and wireless networks. Different from past work on the design of cryptography algorithms and system infrastructures, we propose a dynamic routing algorithm that randomizes delivery paths for data transmission. The algorithm is easy to implement and compatible with popular routing protocols, such as the Routing Information Protocol in wired networks and the Destination-Sequenced Distance Vector protocol in wireless networks, without introducing extra control messages. An analytic study of the proposed algorithm is presented, and a series of simulation experiments is conducted to verify the analytic results and to show the capability of the proposed algorithm.

20. CAR: CONTEXT-AWARE ADAPTIVE ROUTING FOR DELAY-TOLERANT MOBILE NETWORKS
Most existing research work in mobile ad hoc networking is based on the assumption that a path exists between the sender and the receiver. On the other hand, applications of decentralised mobile systems are often characterised by network partitions. As a consequence, delay-tolerant networking research has received considerable attention in recent years as a means to bridge the gap between ad hoc network research and real applications. In this paper we present the design, implementation and evaluation of the context-aware adaptive routing (CAR) protocol for delay-tolerant unicast communication in intermittently connected mobile ad hoc networks. The protocol is based on the idea of exploiting nodes as carriers of messages among network partitions to achieve delivery. The choice of the best carrier is made using Kalman filter based prediction techniques and utility theory. The large-scale performance of the CAR protocol is evaluated using simulations based on a social-network-founded mobility model, a purely random one, and real traces from Dartmouth College. We also discuss the implementation of CAR over an opportunistic networking framework, outlining possible applications of the general principles at the basis of the proposed approach.
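
Abstract 19's key idea, randomizing the delivery path per packet so that no single path carries all the traffic, can be illustrated with a stub like the one below. The precomputed path set is a placeholder assumption; the paper derives randomized next hops from routing-table state rather than from whole stored paths.

    import java.util.*;

    /** Minimal illustration of randomized path delivery: each packet is sent
     *  over a path drawn at random from a set of candidate paths, so an
     *  eavesdropper on any one path sees only a fraction of the traffic.
     *  The path set is invented for illustration. */
    public class RandomizedRouting {
        public static void main(String[] args) {
            List<List<String>> candidatePaths = List.of(
                    List.of("S", "A", "B", "D"),
                    List.of("S", "C", "D"),
                    List.of("S", "A", "E", "D"));
            Random rnd = new Random();
            for (int packet = 0; packet < 5; packet++) {
                List<String> path = candidatePaths.get(rnd.nextInt(candidatePaths.size()));
                System.out.println("packet " + packet + " -> " + String.join(" -> ", path));
            }
        }
    }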

21. COLLUSIVE PIRACY PREVENTION IN P2P CONTENT DELIVERY NETWORKS
Collusive piracy is the main source of intellectual property violations within the boundary of a P2P network. Paid clients (colluders) may illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the use of open P2P networks for commercial content delivery. We propose a proactive content poisoning scheme to stop colluders and pirates from alleged copyright infringements in P2P file sharing. The basic idea is to detect pirates in a timely manner with identity-based signatures and time-stamped tokens; we developed a new peer authorization protocol (PAP) to distinguish pirates from legitimate clients. Detected pirates receive poisoned chunks in their repeated attempts and are thus severely penalized, with no chance to download successfully in tolerable time. The scheme stops collusive piracy without hurting legitimate P2P clients by targeting poisoning on detected violators. Based on simulation results, we find a 99.9 percent prevention rate in Gnutella, KaZaA, and Freenet; we achieved 85-98 percent prevention rates on eMule, eDonkey, Morpheus, etc. The scheme is less effective in protecting some poison-resilient networks like BitTorrent and Azureus. Our work opens up the low-cost P2P technology for copyrighted content delivery; the advantages lie mainly in minimum delivery cost, higher content availability, and copyright compliance in exploring P2P network resources.

22. SPREAD SPECTRUM WATERMARKING SECURITY
This paper presents both theoretical and practical analyses of the security offered by watermarking and data hiding methods based on spread spectrum. In this context, security is understood as the difficulty of estimating the secret parameters of the embedding function based on the observation of watermarked signals. On the theoretical side, security is quantified from an information-theoretic point of view by means of the equivocation about the secret parameters. The main results reveal fundamental limits and bounds on security and provide insight into other properties, such as the impact of the embedding parameters and the tradeoff between robustness and security. On the practical side, workable estimators of the secret parameters are proposed and theoretically analyzed for a variety of scenarios, providing a comparison with previous approaches and showing that the security of many schemes used in practice can be fairly low.

23. LOCAL CONSTRUCTION OF NEAR-OPTIMAL POWER SPANNERS FOR WIRELESS AD-HOC NETWORKS
We present a local distributed algorithm that, given a wireless ad hoc network modeled as a unit disk graph U in the plane, constructs a planar power spanner of U whose degree is bounded by k and whose stretch factor is bounded by 1 + (2 sin(pi/k))^p, where k >= 10 is an integer parameter and p in [2, 5] is the power exponent constant. For the same degree bound k, the stretch factor of our algorithm significantly improves the previous best bounds by Song et al. We show that this bound is near-optimal by proving that the slightly smaller stretch factor of 1 + (2 sin(pi/(k+1)))^p is unattainable for the same degree bound k. In contrast to previous algorithms for the problem, the presented algorithm is local; as a consequence, it is highly scalable and robust. Finally, while the algorithm is efficient and easy to implement in practice, it relies on deep insights into the geometry of unit disk graphs and novel techniques that are of independent interest.

24. MULTIPLE ROUTING CONFIGURATIONS FOR FAST IP NETWORK RECOVERY
As the Internet takes an increasingly central role in our communications infrastructure, the slow convergence of routing protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP networks, we present a new recovery scheme called Multiple Routing Configurations (MRC). Our proposed scheme guarantees recovery in all single-failure scenarios, using one mechanism to handle both link and node failures and without knowing the root cause of the failure. MRC is strictly connectionless and assumes only destination-based hop-by-hop forwarding. It is based on keeping additional routing information in the routers and allows packet forwarding to continue on an alternative output link immediately after the detection of a failure. It can be implemented with only minor changes to existing solutions. In this paper we present MRC and analyze its performance with respect to scalability, backup path lengths, and load distribution after a failure. We also show how an estimate of the traffic demands in the network can be used to improve the distribution of the recovered traffic and thus reduce the chance of congestion when MRC is used.
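
The MRC scheme of abstract 24 keeps several precomputed routing configurations and, on detecting a failure, forwards using the first configuration that avoids the failed component. The toy lookup below captures that control flow only; the topology, tables, and failure model are invented for illustration.

    import java.util.*;

    /** Toy illustration of the MRC idea: a router holds several precomputed
     *  routing configurations; when the next hop of the normal configuration
     *  is unreachable, the packet is forwarded using the first backup
     *  configuration that avoids the failure. */
    public class MrcDemo {
        // nextHop.get(config) maps destination -> next hop in that configuration
        static List<Map<String, String>> nextHop = List.of(
                Map.of("D", "B"),     // config 0 (normal): via B
                Map.of("D", "C"));    // config 1 (backup): avoids neighbor B

        static String forward(String dst, Set<String> failedNeighbors) {
            for (int cfg = 0; cfg < nextHop.size(); cfg++) {
                String hop = nextHop.get(cfg).get(dst);
                if (hop != null && !failedNeighbors.contains(hop))
                    return hop + " (configuration " + cfg + ")";
            }
            return "drop";            // no configuration avoids the failure
        }

        public static void main(String[] args) {
            System.out.println("no failure:    next hop = " + forward("D", Set.of()));
            System.out.println("B unreachable: next hop = " + forward("D", Set.of("B")));
        }
    }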

25. COMPACTION OF SCHEDULES AND A TWO-STAGE APPROACH FOR DUPLICATION-BASED DAG SCHEDULING
Many DAG scheduling algorithms generate schedules that require a prohibitively large number of processors. To address this problem, we propose a generic algorithm, SC, that minimizes the processor requirement of any given valid schedule. SC preserves the schedule length of the original schedule and reduces the processor count by merging processor schedules and removing redundant duplicate tasks. To the best of our knowledge, this is the first algorithm to address this largely unexplored aspect of DAG scheduling. On average, SC reduced the processor requirement by 91%, 82% and 72% for schedules generated by the PLW, TCSD and CPFD algorithms, respectively. The SC algorithm has low complexity (O(|N|^3)) compared to most duplication-based algorithms. Moreover, it decouples processor economization from the schedule length minimization problem. To take advantage of these features of SC, we also propose a scheduling algorithm, SDS, with the same time complexity as SC. SDS and SC together form a two-stage scheduling algorithm that produces schedules of high quality with a low processor requirement, and that has lower complexity than comparable algorithms producing similarly high-quality results. Our experiments demonstrate that schedules generated by SDS are only 3% longer than those of CPFD (O(|N|^4)), one of the best algorithms in that respect.

26. THE EFFECTIVENESS OF CHECKSUMS FOR EMBEDDED NETWORKS
Embedded control networks commonly use checksums to detect data transmission errors. However, design decisions about which checksum to use are difficult because of a lack of information about the relative effectiveness of the available options. We study the error detection effectiveness of the following commonly used checksum computations for embedded networks: exclusive or (XOR), two's complement addition, one's complement addition, Fletcher checksum, Adler checksum, and cyclic redundancy codes (CRC). A study of error detection capabilities for random independent bit errors and burst errors reveals that the XOR, two's complement addition, and Adler checksums are suboptimal for typical application use. Instead, one's complement addition should be used for applications willing to sacrifice error detection effectiveness to reduce compute cost, the Fletcher checksum for applications looking for a balance of error detection and compute cost, and CRCs for applications willing to pay a higher compute cost for further improved error detection.

27. DETECTING MALICIOUS PACKET LOSSES
In this paper, we consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action, because normal network congestion can produce the same effect: modern networks routinely drop packets when the load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly focused attacks. We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions. We have tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate network behavior.

28. VIRUS SPREAD IN NETWORKS
We study how the spread of computer viruses, worms, and other self-replicating malware is affected by the logical topology of the network over which they propagate. We consider a model in which each host can be in one of three possible states: susceptible, infected, or removed (cured and no longer susceptible to infection). We characterize how the size of the population that eventually becomes infected depends on the network topology. Specifically, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, and the initial infected population is small, then the final infected population is also small in a sense that can be made precise. Conversely, if this ratio is smaller than the spectral radius, then we show in some graph models of practical interest (including power-law random graphs) that the final infected population is large. These results yield insights into the critical parameters determining virus spread in networks.
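
Three of the checksums compared in abstract 26 are short enough to show directly. The Java methods below are textbook 8-bit formulations of XOR, two's complement addition, and one's complement addition (with carry wrap-around); the paper additionally evaluates Fletcher, Adler, and CRC codes.

    /** Three of the checksums compared in the paper, over a small byte buffer.
     *  Textbook 8-bit formulations, shown for illustration. */
    public class ChecksumDemo {
        static int xorChecksum(byte[] data) {
            int c = 0;
            for (byte b : data) c ^= (b & 0xFF);
            return c;
        }

        static int twosComplementSum(byte[] data) {
            int c = 0;
            for (byte b : data) c = (c + (b & 0xFF)) & 0xFF;  // carries discarded
            return c;
        }

        static int onesComplementSum(byte[] data) {
            int c = 0;
            for (byte b : data) {
                c += (b & 0xFF);
                c = (c & 0xFF) + (c >> 8);                    // wrap carry back in
            }
            return c;
        }

        public static void main(String[] args) {
            byte[] frame = {0x01, 0x02, (byte) 0xFF, 0x10};
            System.out.printf("XOR: %02X, two's: %02X, one's: %02X%n",
                    xorChecksum(frame), twosComplementSum(frame), onesComplementSum(frame));
        }
    }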

29. BIASED RANDOM WALKS IN UNIFORM WIRELESS NETWORKS
A recurrent problem when designing distributed applications is searching for a node with a known property. File searching in peer-to-peer (P2P) applications, resource discovery in service-oriented architectures (SOAs), and path discovery in routing can all be cast as search problems. Random walk-based search algorithms are often suggested for tackling the search problem, especially in very dynamic systems like mobile wireless networks. The cost and effectiveness of a random walk-based search algorithm are measured by the expected number of transmissions required before hitting the target; hence, a low hitting time is a critical goal. This paper studies the effect of biasing the random walk toward the target on the hitting time. For a walk running over a network with uniform node distribution, a simple upper bound connecting the hitting time to the bias level is obtained. The key result is that even a modest bias level is able to reduce the hitting time significantly. This paper also proposes a search protocol for unstructured mobile wireless networks, whose results are interpreted in the light of the theoretical study.

30. INFORMATION CONTENT-BASED SENSOR SELECTION AND TRANSMISSION POWER ADJUSTMENT FOR COLLABORATIVE TARGET TRACKING
An energy-efficient collaborative target tracking paradigm is developed for wireless sensor networks (WSNs). For target tracking applications, wireless sensor nodes provide accurate information since they can be deployed and operated near the phenomenon, and these sensing devices can collaborate among themselves to improve the target localization and tracking accuracies. A mutual-information-based sensor selection (MISS) algorithm is adopted for participation in the fusion process. MISS allows the sensor nodes with the highest mutual information about the target state to transmit data, so that energy consumption is reduced while the desired target position estimation accuracy is met. In addition, a novel approach to energy savings in WSNs is devised in the information-controlled transmission power (ICTP) adjustment, where nodes with more information use higher transmission powers than those that are less informative when sharing their target state information with neighboring nodes. Simulations demonstrate the performance gains offered by MISS and ICTP in terms of power consumption and target localization accuracy.

31. PRESTO: FEEDBACK-DRIVEN DATA MANAGEMENT IN SENSOR NETWORKS
This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models of observed trends in the sensor data and transmit the parameters of the model to the sensors. Sensors check sensed data against model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven push approach is energy-efficient while ensuring that anomalous data trends are never missed. In addition to supporting queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at the sensors. PRESTO can adapt model and system parameters to data and query dynamics to extract further energy savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show that, in a temperature monitoring application, PRESTO yields a one to two orders of magnitude reduction in energy requirements over on-demand, proactive, or model-driven pull approaches. PRESTO also results in an order of magnitude reduction in query latency in a 1% duty-cycled, five-hop sensor network over a system that forwards all queries to remote sensor nodes.

32. EXPLICIT LOAD BALANCING TECHNIQUE FOR NGEO SATELLITE IP NETWORKS WITH ON-BOARD PROCESSING CAPABILITIES
Given the non-uniform distribution of users in satellite footprints, due to several geographical and/or climatic constraints, some Inter-Satellite Links (ISLs) are expected to be heavily loaded with data packets while others remain underutilized. Such a scenario obviously leads to congestion of the heavily loaded links and ultimately results in buffer overflows, higher queuing delays, and significant packet drops. To guarantee a better distribution of traffic among satellites, this paper proposes an explicit exchange of information on congestion status among neighboring satellites. When a satellite is about to get congested, it requests its neighboring satellites to decrease their data forwarding rates by sending them a self-status notification signaling message. In response, the neighboring satellites search for less congested paths that do not include the satellite in question and communicate a portion of the data, primarily destined to that satellite, via the retrieved paths. This operation avoids both congestion and packet drops at the satellite and ensures a better distribution of traffic over the entire satellite constellation. The proposed scheme is dubbed the "Explicit Load Balancing" (ELB) scheme. While the multi-path routing concept of ELB has many advantages, it may lead to persistent packet reordering; a solution to this issue is also incorporated in the design of ELB. The interactions of ELB with mechanisms that provide different QoS by differentiating traffic (e.g., Differentiated Services) are also discussed, along with the good performance of ELB.
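
The model-driven push at the heart of PRESTO (abstract 31) reduces, on the sensor side, to "transmit only if the reading deviates from the proxy's model prediction by more than a threshold". The sketch below uses a constant prediction and a Gaussian reading as stand-ins for the paper's time-series models.

    import java.util.Random;

    /** Sketch of PRESTO-style model-driven push: the sensor holds model
     *  parameters supplied by the proxy and reports a reading only when it
     *  deviates from the model prediction by more than a threshold. A flat
     *  prediction stands in for the paper's time-series models. */
    public class ModelDrivenPush {
        public static void main(String[] args) {
            double predicted = 22.0;          // proxy-supplied model prediction
            double threshold = 0.5;           // agreed error bound
            Random rnd = new Random(3);
            for (int t = 0; t < 10; t++) {
                double reading = 22.0 + rnd.nextGaussian() * 0.4;
                if (Math.abs(reading - predicted) > threshold) {
                    System.out.printf("t=%d push %.2f (deviation)%n", t, reading);
                } else {
                    System.out.printf("t=%d suppressed; proxy uses prediction%n", t);
                }
            }
        }
    }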

33. DELAY ANALYSIS FOR MAXIMAL SCHEDULING WITH FLOW CONTROL IN WIRELESS NETWORKS WITH BURSTY TRAFFIC
We consider the delay properties of one-hop networks with general interference constraints and multiple traffic streams with time-correlated arrivals. We first treat the case when arrivals are modulated by independent finite-state Markov chains. We show that the well-known maximal scheduling algorithm achieves average delay that grows at most logarithmically in the largest number of interferers at any link. Further, in the important special case when each Markov process has at most two states (such as bursty ON/OFF sources), we prove that the average delay is independent of the number of nodes and links in the network and is hence order-optimal. We provide tight delay bounds in terms of the individual auto-correlation parameters of the traffic sources. Our analysis treats cases both with and without flow control. These are perhaps the first order-optimal delay results for controlled queueing networks that explicitly account for such statistical information.

34. OPTIMIZED RESOURCE ALLOCATION FOR SOFTWARE RELEASE PLANNING
Release planning for incremental software development assigns features to releases such that technical, resource, risk, and budget constraints are met. Planning of software releases and allocation of resources cannot be handled in isolation: a feature can be offered as part of a release only if all of its necessary tasks are done before the given release date. We assume a given pool of human resources with different degrees of productivity for performing different types of tasks. In the context of release planning, the question studied in this paper is how to allocate these resources to the tasks of implementing features such that the value gained from the released features is maximized. To address the inherent difficulty of this process, we propose a two-phase optimization approach called OPTIMIZERASORP that combines the strengths of two existing solution methods. Phase 1 applies integer linear programming to a relaxed version of the full problem; Phase 2 uses genetic programming in a reduced search space to generate operational resource allocation plans. The method is evaluated on a series of 600 randomly generated problems with varying problem parameters, and the results are compared with a heuristic that locally allocates resources based on a greedy search.

35. AUTOMATIC EXTRACTION OF HEAP REFERENCE PROPERTIES IN OBJECT-ORIENTED PROGRAMS
We present a new technique for helping developers understand heap referencing properties of object-oriented programs and how the actions of the program affect these properties. Our dynamic analysis uses the aliasing properties of objects to synthesize a set of roles; each role represents an abstract object state intended to be of interest to the developer. We allow the developer to customize the analysis to explore the object states and behavior of the program at multiple, potentially complementary levels of abstraction. The analysis uses roles as the basis for three abstractions: role transition diagrams, which present the observed transitions between roles and the methods responsible for them; role relationship diagrams, which present the observed referencing relationships between objects playing different roles; and enhanced method interfaces, which present the observed roles of method parameters. Together, these abstractions provide useful information about important object and data structure properties and how the actions of the program affect them. We have implemented the role analysis and have used this implementation to explore the behavior of several Java programs. Our experience indicates that, when combined with a powerful graphical user interface, roles are a useful abstraction for helping developers explore and understand the behavior of object-oriented programs.

36. ENERGY MAPS FOR MOBILE WIRELESS NETWORKS: COHERENCE TIME VERSUS SPREADING PERIOD
We show that even though mobile networks are highly unpredictable when viewed at the individual node scale, the end-to-end quality-of-service (QoS) metrics can be stationary when the mobile network is viewed in the aggregate. We define the coherence time as the maximum duration for which the end-to-end QoS metric remains roughly constant, and the spreading period as the minimum duration required to spread QoS information to all the nodes. We show that if the coherence time is greater than the spreading period, the end-to-end QoS metric can be tracked. We focus on energy consumption as the end-to-end QoS metric and describe a novel method by which an energy map can be constructed and refined in the joint memory of the mobile nodes. Finally, we show how energy maps can be utilized by an application that aims to minimize a node's total energy consumption over its near-future trajectory.
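
For contrast with OPTIMIZERASORP in abstract 34, the greedy baseline the paper compares against can be sketched as a value-per-effort knapsack heuristic. The feature names, values, and efforts below are invented; only the shape of the greedy allocation matters.

    import java.util.*;

    /** A greedy baseline in the spirit of the heuristic the paper compares
     *  against: repeatedly pick the feature with the best value-per-effort
     *  ratio that still fits the remaining release capacity. Feature data
     *  are invented; OPTIMIZERASORP uses ILP plus genetic programming. */
    public class GreedyReleasePlan {
        record Feature(String name, double value, double effort) {}

        public static void main(String[] args) {
            List<Feature> backlog = new ArrayList<>(List.of(
                    new Feature("search", 30, 10),
                    new Feature("export", 12, 3),
                    new Feature("sso", 25, 12),
                    new Feature("themes", 6, 4)));
            double capacity = 20;             // person-weeks left in the release

            backlog.sort(Comparator
                    .comparingDouble((Feature f) -> f.value() / f.effort())
                    .reversed());
            double used = 0, value = 0;
            for (Feature f : backlog) {
                if (used + f.effort() <= capacity) {
                    used += f.effort();
                    value += f.value();
                    System.out.println("schedule " + f.name());
                }
            }
            System.out.printf("capacity used %.0f/%.0f, value %.0f%n", used, capacity, value);
        }
    }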

Peer-to-peer (P2P) networks establish loosely coupled application-level overlays on top of the Internet to facilitate efficient sharing of resources. They can be roughly classified as either structured or unstructured networks. Without stringent constraints over the network topology, unstructured P2P networks can be constructed very efficiently and are therefore considered suitable for the Internet environment. However, the random search strategies adopted by these networks usually perform poorly with a large network size. In this paper, we seek to enhance the search performance in unstructured P2P networks by exploiting users' common interest patterns, captured within a probability-theoretic framework termed the user interest model (UIM). A search protocol and a routing table updating protocol are further proposed in order to expedite the search process by self-organizing the P2P network into a small world. Both theoretical and experimental analyses are conducted and demonstrate the effectiveness and efficiency of our approach.

We propose a bandwidth-efficient multicast mechanism for heterogeneous wireless networks. We reduce the bandwidth cost of an Internet Protocol (IP) multicast tree by adaptively selecting the cell and the wireless technology for each mobile host to join the multicast group. We formulate the selection of the cell and the wireless technology for each mobile host in the heterogeneous wireless networks as an optimization problem. We use Integer Linear Programming to model the problem and show that the problem is NP-hard. To solve the problem, we propose a distributed algorithm based on Lagrangean relaxation and a network protocol based on the algorithm. Our mechanism enables more mobile hosts to cluster together and leads to the use of fewer cells to save the scarce wireless bandwidth. Besides, the paths in the multicast tree connecting to the selected cells share more common links to save the wireline bandwidth. Moreover, our mechanism requires no modification to the current IP multicast routing protocols. Our mechanism supports dynamic group membership and offers mobility of group members. The simulation results show that our mechanism can effectively save the wireless and wireline bandwidth as compared to the traditional IP multicast.

In mobile ad hoc networks (MANETs), every node overhears every data transmission occurring in its vicinity and thus consumes energy unnecessarily. In the IEEE 802.11 Power Saving Mechanism (PSM), a packet must be advertised before it is actually transmitted; when a node receives an advertised packet that is not destined to itself, it switches to a low-power sleep state during the data transmission period and thus avoids overhearing and conserves energy. However, since some MANET routing protocols such as Dynamic Source Routing (DSR) collect route information via overhearing, they would suffer if they were used in combination with 802.11 PSM-based schemes. Allowing no overhearing may critically deteriorate the performance of the underlying routing protocol, while unconditional overhearing may offset the advantage of using PSM. This paper proposes a new communication mechanism, called RandomCast, via which a sender can specify the desired level of overhearing, making a prudent balance between energy and routing performance. In addition, it reduces redundant rebroadcasts for a broadcast packet and thus saves more energy. Extensive simulation using ns-2 shows that RandomCast is highly energy-efficient compared to conventional 802.11 as well as 802.11 PSM-based schemes, in terms of total energy consumption, energy goodput, and energy balance.

On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In this paper, we propose proactively disseminating the broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
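
The proactive cache-update idea above can be pictured in a few lines. The following Java fragment is only an illustrative sketch, not the authors' implementation; the String link labels and the per-link neighbor sets are assumptions made for the example. Each node remembers which neighbors learned a cached link from it, so a detected link failure is propagated only to nodes that may actually hold the stale link.

    import java.util.*;

    class CacheTableSketch {
        // For every cached link "A-B", remember which neighbors we forwarded it to.
        private final Map<String, Set<String>> learnedBy = new HashMap<>();

        void recordRoute(String link, String neighbor) {
            learnedBy.computeIfAbsent(link, k -> new HashSet<>()).add(neighbor);
        }

        // Called when this node detects that 'link' is broken: remove it locally
        // and return the neighbors that must be notified (they cached it from us).
        Set<String> onLinkFailure(String link) {
            Set<String> toNotify = learnedBy.remove(link);
            return toNotify == null ? Collections.<String>emptySet() : toNotify;
        }
    }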

There is significant interest in the data mining and network management communities in improving existing techniques for clustering multivariate network traffic flow records so that we can quickly infer underlying traffic patterns. In this paper, we investigate the use of clustering techniques to identify interesting traffic patterns from network traffic data in an efficient manner. We develop a framework to deal with mixed-type attributes, including numerical, categorical, and hierarchical attributes, for a one-pass hierarchical clustering algorithm. We demonstrate the improved accuracy and efficiency of our approach in comparison to previous work on clustering network traffic.

Wireless links are often asymmetric due to heterogeneity in the transmission power of devices, non-uniform environmental noise, and other signal propagation phenomena. Unfortunately, routing protocols for mobile ad hoc networks typically work well only in bidirectional networks. This paper first presents a simulation study quantifying the impact of asymmetric links on network connectivity and routing performance. It then presents a framework called BRA that provides a bidirectional abstraction of the asymmetric network to routing protocols. BRA works by maintaining multi-hop reverse routes for unidirectional links and provides three new abilities: improved connectivity by taking advantage of the unidirectional links, reverse route forwarding of control packets to enable off-the-shelf routing protocols, and detection of packet loss on unidirectional links. Extensive simulations of AODV layered on BRA show that packet delivery increases substantially (two-fold in some instances) in asymmetric networks compared to regular AODV, which only routes on bidirectional links.

We present Quiver, a system that coordinates service proxies placed at the "edge" of the Internet to serve distributed clients accessing a service involving mutable objects. Quiver enables these proxies to perform consistent accesses to shared objects by migrating the objects to proxies performing operations on those objects. These migrations dramatically improve performance when operations involving an object exhibit geographic locality, since migrating this object into the vicinity of proxies hosting these operations will benefit all such operations. Other workloads benefit from Quiver dispersing the computation load across the proxies and saving the costs of sending operation parameters over the wide area when these are large. Quiver also supports optimizations for single-object reads that do not involve migrating the object. In this system, operations are performed in first-in-first-out order, and two consistency semantics, serializability and strict serializability, are handled for durability in consistent object sharing; performing all operations at the proxies themselves reduces the workload on the server. We detail the protocols for implementing object operations and for accommodating the addition, involuntary disconnection, and voluntary departure of proxies. Finally, we discuss the use of Quiver to build an e-commerce application and a distributed network traffic modeling service.

Measurement and estimation of packet loss characteristics are challenging due to the relatively rare occurrence and typically short duration of packet loss episodes. While active probe tools are commonly used to measure packet loss on end-to-end paths, there has been little analysis of the accuracy of these tools or their impact on the network. The objective of our study is to understand how to measure packet loss episodes accurately with end-to-end probes. We begin by testing the capability of standard Poisson-modulated end-to-end measurements of loss in a controlled laboratory environment using IP routers and commodity end hosts. Our tests show that loss characteristics reported from such Poisson-modulated probe tools can be quite inaccurate over a range of traffic conditions. Motivated by these observations, we introduce a new algorithm for packet loss measurement that is designed to overcome the deficiencies in standard Poisson-based tools. Specifically, our method entails probe experiments that follow a geometric distribution to 1) enable an explicit trade-off between accuracy and impact on the network, and 2) enable more accurate measurements than standard Poisson probing at the same rate. We evaluate the capabilities of our methodology experimentally by developing and implementing a prototype tool, called BADABING. The experiments demonstrate the trade-offs between impact on the network and measurement accuracy. We show that BADABING reports loss characteristics far more accurately than traditional loss measurement tools.
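
The geometric probing idea behind BADABING can be illustrated in a few lines of Java. This is a hedged sketch, not the BADABING tool itself: the number of slots, the per-slot probe probability p, and the fixed seed are assumed values for the example. Starting a probe in each time slot independently with probability p makes the gaps between probes geometrically distributed, and p is the knob that trades measurement accuracy against load on the network.

    import java.util.*;

    class GeometricProbeSchedule {
        public static void main(String[] args) {
            double p = 0.2;              // per-slot probe probability (assumed)
            int slots = 50;              // length of the experiment in slots
            Random rng = new Random(7);  // fixed seed for a repeatable example
            List<Integer> probeSlots = new ArrayList<>();
            for (int t = 0; t < slots; t++) {
                if (rng.nextDouble() < p) probeSlots.add(t); // gaps are geometric
            }
            System.out.println("probe at slots: " + probeSlots);
        }
    }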

The probabilistic packet marking (PPM) algorithm is a promising way to discover the Internet map or an attack graph that the attack packets traversed during a distributed denial-of-service attack. However, the PPM algorithm is not perfect, as its termination condition is not well defined in the literature. More importantly, without a proper termination condition, the attack graph constructed by the PPM algorithm would be wrong. In this work, we provide a precise termination condition for the PPM algorithm and name the new algorithm the Rectified PPM (RPPM) algorithm. The most significant merit of the RPPM algorithm is that when the algorithm terminates, the algorithm guarantees that the constructed attack graph is correct, with a specified level of confidence. We carry out simulations on the RPPM algorithm and show that the RPPM algorithm can guarantee the correctness of the constructed attack graph under 1) different probabilities that a router marks the attack packets and 2) different structures of the network graph. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm.
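
As a rough illustration of a termination condition, the Java sketch below stops packet collection only once every edge observed so far has been marked at least k times. This is a stand-in heuristic chosen for the example, not the RPPM algorithm's actual confidence computation; the real test derives the stopping point from the specified confidence level.

    class RppmTerminationSketch {
        // Illustrative only: a stand-in for RPPM's confidence-based termination.
        // edgeCounts maps an observed graph edge to the number of marked packets
        // that reported it; k is an assumed repetition threshold.
        static boolean shouldTerminate(java.util.Map<String, Integer> edgeCounts, int k) {
            if (edgeCounts.isEmpty()) return false;
            for (int c : edgeCounts.values())
                if (c < k) return false;   // some edge is still weakly supported
            return true;
        }
    }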

Intrusion detection in a Wireless Sensor Network (WSN) is of practical interest in many applications, such as detecting an intruder in a battlefield. Intrusion detection is defined as a mechanism for a WSN to detect the existence of inappropriate, incorrect, or anomalous moving attackers. In this paper, we consider this issue according to heterogeneous WSN models. Furthermore, we consider two sensing detection models: single-sensing detection and multiple-sensing detection... Our simulation results show the advantage of multiple-sensing heterogeneous WSNs.
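
A minimal way to picture the two sensing models is a k-out-of-n test over sensor positions. The Java sketch below is an assumed toy model, not the paper's formulation: an intruder is declared detected when at least m heterogeneous sensors (each with its own radius) cover its position, with m = 1 corresponding to single-sensing and m > 1 to multiple-sensing detection.

    class MultiSensingSketch {
        // sensors[i] = {xi, yi}; r[i] = sensing radius of sensor i (heterogeneous).
        static boolean detected(double[][] sensors, double[] r,
                                double x, double y, int m) {
            int hits = 0;
            for (int i = 0; i < sensors.length; i++) {
                double dx = sensors[i][0] - x, dy = sensors[i][1] - y;
                if (dx * dx + dy * dy <= r[i] * r[i]) hits++; // inside radius i
            }
            return hits >= m;   // m = 1: single-sensing; m > 1: multiple-sensing
        }
    }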

In recent years, the exponential growth of Internet users with increased bandwidth requirements has led to the emergence of the next generation of IP routers. Distributed architecture is one of the promising trends providing petabit routers with a large switching capacity and high-speed interfaces. Distributed routers are designed with an optical switch fabric interconnecting line and control cards. Computing and memory resources are available on both control and line cards to perform routing and forwarding tasks. This new hardware architecture is not efficiently utilized by the traditional software models in which a single control card is responsible for all routing and management operations. The routing table manager plays an extremely critical role by managing routing information and, in particular, a forwarding information table. This article presents a distributed architecture set up around a distributed and scalable routing table manager. This architecture also provides improvements in robustness and resiliency.

This work was motivated by the need to achieve low latency in an input-queued, centrally scheduled cell switch for high-performance computing applications; specifically, the aim is to reduce the latency incurred between issuance of a request and arrival of the corresponding grant. We introduce a speculative transmission scheme to significantly reduce the average latency by allowing cells to proceed without waiting for a grant. It operates in conjunction with any centralized matching algorithm to achieve a high maximum utilization. An analytical model is presented to investigate the efficiency of the speculative transmission scheme employed in a non-blocking N x NR input-queued crossbar switch with R receivers per output. The results demonstrate that the request-grant latency can be almost entirely eliminated for loads up to 50%. Our simulations confirm the analytical results.

An efficient algorithm is presented for the computation of grayscale morphological operations with arbitrary 2-D flat structuring elements (S.E.). The required computing time is independent of the image content and of the number of gray levels used. It always outperforms the only existing comparable method, which was proposed in the work by Van Droogenbroeck and Talbot, by a factor between 3.5 and 35.1, depending on the image type and shape of S.E. So far, filtering using multiple S.E.s is always done by performing the operator for each size and shape of the S.E. separately. With our method, filtering with multiple S.E.s can be performed by a single operator for a slightly reduced computational cost per size or shape, which makes this method more suitable for use in granulometries, dilation-erosion scale spaces, and template matching using the hit-or-miss transform. The discussion focuses on erosions and dilations, from which other transformations can be derived.
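
For reference, grayscale erosion with an arbitrary flat S.E. is defined as a minimum over the translated structuring element, as in the naive Java sketch below. This direct implementation runs in time proportional to image size times S.E. size; the algorithm described above computes the same operator in time independent of the image content, so the sketch only pins down the operator being accelerated, not the paper's method.

    class ErosionSketch {
        // img: grayscale image; se: list of {dy, dx} offsets forming the flat S.E.
        static int[][] erode(int[][] img, int[][] se) {
            int h = img.length, w = img[0].length;
            int[][] out = new int[h][w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    int min = Integer.MAX_VALUE;
                    for (int[] d : se) {
                        int yy = y + d[0], xx = x + d[1];
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w)
                            min = Math.min(min, img[yy][xx]); // min over the S.E.
                    }
                    out[y][x] = min;
                }
            return out;
        }
    }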

In this paper, we consider an overarching problem that encompasses both performance metrics. In particular, we study the network capacity problem under a given network lifetime requirement, specifically for a wireless sensor network where each node is provisioned with an initial energy and all nodes are required to live up to a certain lifetime criterion. Since the objective of maximizing the sum of rates of all the nodes in the network can lead to a severe bias in rate allocation among the nodes, we advocate the use of lexicographical max-min (LMM) rate allocation. To calculate the LMM rate allocation vector, we develop a polynomial-time algorithm by exploiting the parametric analysis (PA) technique from linear programming (LP), which we call serial LP with Parametric Analysis (SLP-PA). We show that SLP-PA can also be employed to address the LMM node lifetime problem much more efficiently than a state-of-the-art algorithm proposed in the literature. More importantly, we show that there exists an elegant duality relationship between the LMM rate allocation problem and the LMM node lifetime problem. Therefore, it is sufficient to solve only one of the two problems. Important insights can be obtained by inferring duality results for the other problem.
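
For clarity, the lexicographic max-min objective can be stated as follows; the notation here is ours, not the paper's.

\[
\text{For a feasible rate vector } r=(r_1,\dots,r_n), \text{ let } \hat{r}=(\hat{r}_1,\dots,\hat{r}_n) \text{ be } r \text{ sorted so that } \hat{r}_1 \le \cdots \le \hat{r}_n.
\]
\[
r^{*} \text{ is LMM-optimal} \iff \hat{r} \preceq_{\mathrm{lex}} \hat{r}^{*} \text{ for every feasible } r,
\]
\[
\text{where } \hat{r} \preceq_{\mathrm{lex}} \hat{r}^{*} \text{ means either } \hat{r}=\hat{r}^{*}, \text{ or } \hat{r}_k < \hat{r}^{*}_k \text{ at the first index } k \text{ with } \hat{r}_k \neq \hat{r}^{*}_k.
\]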

The structured light vision system is successfully used for the measurement of 3-D surfaces. A limitation of conventional schemes is that tens of pictures must be captured to recover one 3-D scene. This paper presents an idea for real-time acquisition of 3-D surface data by a specially coded vision system. To achieve 3-D measurement for a dynamic scene, the data acquisition must be performed with only a single image. A principle of uniquely color-encoded pattern projection is proposed to design a color matrix for improving the reconstruction efficiency. The matrix is produced by a special code sequence and a number of state transitions. A color projector is controlled by a computer to generate the desired color patterns in the scene. The unique indexing of the light codes is crucial here for color projection, since it is essential that each light grid be uniquely identified by incorporating local neighborhoods, so that 3-D reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe such a vision processing method for fast 3-D data acquisition. Practical experimental results are provided to analyze the efficiency of the proposed methods.

High cohesion is a desirable property in software systems to achieve reusability and maintainability. Measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. In existing approaches, cohesion is calculated from structural information, for example, method attributes and references. In this project, we compute the conceptual cohesion of classes from the unstructured information embedded in the source code, such as comments and identifiers, and Latent Semantic Indexing (LSI) is used to retrieve this unstructured information. A large case study on three open source software systems is presented, which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. In our project, we thereby achieve high cohesion and predict faults in Object-Oriented systems.
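
The LSI-based measure can be pictured as comparing the textual vectors of a class's methods. The Java sketch below is a simplified stand-in for the measure described above: it assumes each method has already been mapped to an LSI vector and scores the class by the average pairwise cosine similarity of those vectors, so a class whose methods share vocabulary scores as more conceptually cohesive.

    class ConceptualCohesionSketch {
        // methodVectors[i] = LSI vector of method i of one class (assumed given).
        static double cohesion(double[][] methodVectors) {
            int n = methodVectors.length;
            double sum = 0; int pairs = 0;
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++, pairs++)
                    sum += cosine(methodVectors[i], methodVectors[j]);
            return pairs == 0 ? 1.0 : sum / pairs;   // average pairwise similarity
        }
        static double cosine(double[] a, double[] b) {
            double dot = 0, na = 0, nb = 0;
            for (int k = 0; k < a.length; k++) {
                dot += a[k] * b[k]; na += a[k] * a[k]; nb += b[k] * b[k];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12); // guard zero norm
        }
    }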

Location-based spatial queries (LBSQs) refer to spatial queries whose answers rely on the location of the inquirer. Efficient processing of LBSQs is of critical importance with the ever-increasing deployment and use of mobile technologies. We show that LBSQs have certain unique characteristics that traditional spatial query processing in centralized databases does not address. For example, a significant challenge is presented by wireless broadcasting environments, which have excellent scalability but often exhibit high-latency database access. In this paper, we present a novel query processing technique that, while maintaining high scalability and accuracy, manages to reduce the latency considerably in answering LBSQs. Our approach is based on peer-to-peer sharing, which enables us to process queries without delay at a mobile host by using query results cached in its neighboring mobile peers. We demonstrate the feasibility of our approach through a probabilistic analysis, and we illustrate the appeal of our technique through extensive simulation results.

Since 2005, IEEE 802.11-based networks have been able to provide a certain level of quality of service (QoS) by means of service differentiation, thanks to the IEEE 802.11e amendment. However, no mechanism or method has been standardized to accurately evaluate the amount of resources remaining on a given channel. Such an evaluation would, however, be a good asset for bandwidth-constrained applications. In multihop ad hoc networks, such an evaluation becomes even more difficult. Consequently, despite the various contributions around this research topic, the estimation of the available bandwidth still represents one of the main issues in this field. In this paper, we propose an improved mechanism to estimate the available bandwidth in IEEE 802.11-based ad hoc networks. Through simulations, we compare the accuracy of our estimation to the estimation performed by other state-of-the-art QoS protocols, BRuIT, AAC, and QoS-AODV.

Self-propagating codes, called worms, such as Code Red, Nimda, and Slammer, have drawn significant attention due to their enormously adverse impact on the Internet. Thus, there is great interest in the research community in modeling the spread of worms and in providing adequate defense mechanisms against them. In this paper, we present a (stochastic) branching process model for characterizing the propagation of Internet worms. The model is developed for uniform scanning worms and then extended to preference scanning worms. This model leads to the development of an automatic worm containment strategy that prevents the spread of a worm beyond its early stage. Specifically, for uniform scanning worms, we are able to determine whether the worm spread will eventually stop. We then extend our results to contain preference scanning worms. Our automatic worm containment schemes effectively contain both uniform scanning worms and local preference scanning worms, and they are validated through simulations and real trace data to be non-intrusive.
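
The containment intuition can be shown with a one-line branching-process check. The Java sketch below is a toy model with assumed parameters, not the paper's analysis: if each infected host is stopped after at most scanLimit scans and each scan finds a vulnerable host with probability pVulnerable, the expected number of new infections per host is their product, and the spread dies out when that mean falls below 1 (a subcritical branching process).

    class WormBranchingSketch {
        // scanLimit and pVulnerable are illustrative parameters, not measured data.
        static boolean containedEventually(int scanLimit, double pVulnerable) {
            double meanOffspring = scanLimit * pVulnerable; // expected new victims
            return meanOffspring < 1.0;                     // subcritical -> dies out
        }
        public static void main(String[] args) {
            System.out.println(containedEventually(100, 0.005)); // mean 0.5 -> true
        }
    }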

In this project we present a simple way to resolve a complicated network security problem. This is done in the following two ways. The first is the decrypt-only-when-necessary (DOWN) policy, which can substantially improve the ability of low-cost secure coprocessors to protect their secrets. The DOWN policy relies on the ability to operate with fractional parts of secrets. We discuss the feasibility of extending the DOWN policy to various asymmetric and symmetric cryptographic primitives. The second is cryptographic authentication strategies which employ only symmetric cryptographic primitives, based on novel ID-based key pre-distribution schemes that demand very low complexity of operations to be performed by the secure coprocessors (SCPs) and can take good advantage of the DOWN policy.

Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the access latency and bandwidth usage. However, designing efficient distributed caching algorithms is non-trivial when network nodes have limited memory. In this article, we consider the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. The above optimization problem is known to be NP-hard. Defining benefit as the reduction in total access cost, we present a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least one-fourth (one-half for uniform-size data items) of the optimal benefit; a sketch of one greedy placement step follows at the end of this passage. The approximation algorithm is amenable to localized distributed implementation, which is shown via simulations to perform close to the approximation algorithm. Our distributed algorithm naturally extends to networks with mobile nodes. We simulate our distributed algorithm using a network simulator (ns2) and demonstrate that it significantly outperforms another existing caching technique (by Yin and Cao [30]) in all important performance metrics. The performance differential is particularly large in more challenging scenarios, such as higher access frequency and smaller memory.

In this paper, we also study the possibilities of traffic-analysis-based mechanisms for attack and anomaly detection. The motivation for this work came from a need to reduce the likelihood that an attacker may hijack campus machines to stage an attack on a third party. A campus may want to prevent or limit misuse of its machines in staging attacks, and possibly limit the liability from such attacks. In particular, we study the utility of observing packet header data of outgoing traffic, such as destination addresses, port numbers, and the number of flows, in order to detect attacks/anomalies originating from the campus at the edge of the campus. Detecting anomalies/attacks close to the source allows us to limit the potential damage close to the attacking machines. Traffic monitoring close to the source may enable the network operator to identify potential anomalies more quickly and allow better control of an administrative domain's resources, and attack propagation could be slowed through early detection, before attacks have had much time to propagate across the network. Our approach passively monitors network traffic at regular intervals and analyzes it to find any abnormalities in the aggregated traffic. By observing the traffic and correlating it to previous states of traffic, it may be possible to see whether the current traffic is behaving in a similar (i.e., correlated) manner. The network traffic could look different because of flash crowds, changing access patterns, infrastructure problems such as router failures, and DoS attacks. In the case of bandwidth attacks, the usage of the network may increase and abnormalities may show up in traffic volume. Flash crowds could be observed through a sudden increase in traffic volume to a single destination, while a sudden increase of traffic on a certain port could signify the onset of an anomaly such as a worm.
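
One greedy selection step of the benefit-based cache placement described above can be sketched as follows. This Java fragment is illustrative only: the benefit matrix (reduction in total access cost from caching item j at node i) and the per-node free-memory counters are assumed inputs, and in a real run the benefits must be recomputed after every placement because earlier choices change later gains.

    class GreedyCachePlacementSketch {
        // Returns {nodeIndex, itemIndex} of the best placement, or null if none.
        static int[] bestPlacement(double[][] benefit, int[] freeMemory) {
            int bi = -1, bj = -1; double best = 0;
            for (int i = 0; i < benefit.length; i++) {
                if (freeMemory[i] <= 0) continue;            // memory constraint
                for (int j = 0; j < benefit[i].length; j++)
                    if (benefit[i][j] > best) { best = benefit[i][j]; bi = i; bj = j; }
            }
            return bi < 0 ? null : new int[]{bi, bj};
        }
    }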

Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better coordination of application subsystems compared to federated systems. An integrated architecture shares the system's communication resources by using a single physical network for exchanging messages of multiple application subsystems. Similarly, the computational resources (for example, memory and CPU time) of each node computer are available to multiple software components. In order to support a seamless system integration without unintended side effects in such an integrated architecture, it is important to ensure that the software components do not interfere through the use of these shared resources. For this reason, the DECOS integrated architecture encapsulates application subsystems and their constituting software components. At the level of the communication system, virtual networks on top of an underlying time-triggered physical network exhibit predefined temporal properties (that is, bandwidth, latency, and latency jitter). Due to encapsulation, the temporal properties of messages sent by a software component are independent from the behavior of other software components, in particular from those within other application subsystems.

The Distributed Denial-of-Service (DDoS) attack is a serious threat to the legitimate use of the Internet. Prevention mechanisms are thwarted by the ability of attackers to forge or spoof the source addresses in IP packets. By employing IP spoofing, attackers can evade detection and put a substantial burden on the destination network for policing attack packets. In this paper, we propose an inter-domain packet filter (IDPF) architecture that can mitigate the level of IP spoofing on the Internet. A key feature of our scheme is that it does not require global routing information. IDPFs are constructed from the information implicit in Border Gateway Protocol (BGP) route updates and are deployed in network border routers. We establish the conditions under which the IDPF framework correctly works in that it does not discard packets with valid source addresses. Based on extensive simulation studies, we show that, even with partial deployment on the Internet, IDPFs can proactively limit the spoofing capability of attackers. In addition, they can help localize the origin of an attack packet to a small number of candidate networks.

In this project, the efficiency of pairs in program design tasks is identified by using the pair programming concept. Pair programming involves two developers simultaneously collaborating with each other on the same programming task to design and code a solution. Algorithm design and its implementation are normally merged, and implementation provides feedback to enhance the design. Previous controlled pair programming experiments did not explore the efficacy of pairs against individuals in program design-related tasks. Variations in programmer skills in a particular language or an integrated development environment, and differences in the understanding of programming instructions, can mask the skill of subjects in program design-related tasks. Programming aptitude tests (PATs) have been shown to correlate with programming performance, and PATs do not require understanding of programming instructions or skill in any specific computer language. We therefore conducted two controlled experiments, with full-time professional programmers being the subjects, who worked on increasingly complex programming aptitude tasks related to problem solving and algorithmic design. In both experiments, pairs significantly outperformed individuals, providing evidence of the value of pairs in program design-related tasks.

An efficient and distributed scheme for file mapping or file lookup is critical in decentralizing metadata management within a group of metadata servers. Here, a technique called Hierarchical Bloom Filter Arrays (HBA) is used to map filenames to the metadata servers holding their metadata. Bloom filter arrays with different levels of accuracy are used on each metadata server: the first, with lower accuracy, captures the destination metadata server information of frequently accessed files, while the other array maintains the destination metadata information of all files. HBA also reduces metadata operations by using a single metadata architecture instead of 16 metadata servers. Simulation results show our HBA design to be highly effective and efficient in improving the performance and scalability of file systems in clusters with 1,000 to 10,000 nodes (or superclusters) and with the amount of data in the petabyte scale or higher.
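
The HBA lookup path can be pictured with a miniature Bloom filter per metadata server, as in the Java sketch below. The filter size and the three hash values derived from the filename hash are arbitrary choices for the example, not the paper's parameters; the essential property is that a membership test may return a false positive but never a false negative, so a filename's metadata server can be narrowed to the few servers whose filters answer "maybe".

    import java.util.BitSet;

    class BloomArraySketch {
        // One Bloom filter standing in for a server's entry in the filter array.
        private final BitSet bits = new BitSet(1 << 16);

        void add(String filename) {
            for (int h : hashes(filename)) bits.set(h);
        }

        boolean mayContain(String filename) {
            for (int h : hashes(filename))
                if (!bits.get(h)) return false;  // definitely not on this server
            return true;                          // possibly here (false positive ok)
        }

        private int[] hashes(String s) {          // three cheap derived hashes
            int h1 = s.hashCode();
            int h2 = Integer.reverse(h1) * 0x9E3779B9;
            return new int[]{h1 & 0xffff, h2 & 0xffff, (h1 ^ h2) & 0xffff};
        }
    }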

This paper proposes a data-hiding technique for binary images in the morphological transform domain for authentication purposes. We view flipping an edge pixel in binary images as shifting the edge location one pixel horizontally and vertically. Since it is difficult to use the detail coefficients directly as a location map to determine the data-hiding locations, we propose an interlaced morphological binary wavelet transform to track the shifted edges, which thus facilitates blind watermark extraction and incorporation of a cryptographic signature. Unlike the existing block-based approach, in which the block size is constrained to 3x3 pixels or larger, we process an image in 2x2 pixel blocks. This allows flexibility in tracking the edges and also achieves low computational complexity. The two processing cases in which flipping the candidates of one does not affect the flippability conditions of the other are employed for orthogonal embedding. A novel and effective Backward-Forward Minimization method is proposed, which considers both backwardly those neighboring processed embeddable candidates and forwardly those unprocessed flippable candidates that may be affected by flipping the current pixel. In this way, the total visual distortion can be minimized. Experimental results demonstrate the validity of our arguments.

Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, a lot of extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process RETIN. First, the criterion of generalization error to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Second, a batch processing of images is proposed. Third, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.

An accurate prediction of the number of defects in a software product during system testing contributes not only to the management of the system testing process but also to the estimation of the product's required maintenance. Here, a new approach, called Estimation of Defects based on Defect Decay Model (ED3M), is presented that computes an estimate of the defects in an ongoing testing process. ED3M is based on estimation theory. Unlike many existing approaches, the technique presented here does not depend on historical data from previous projects or on any assumptions about the requirements and/or testers' productivity. It is a completely automated approach that relies only on the data collected during an ongoing testing process, while using only defect data as the input. This is a key advantage of the ED3M approach, as it makes it widely applicable in different testing environments. The ED3M approach has been evaluated using five data sets from large industrial projects and two data sets from the literature. In addition, a performance analysis has been conducted using simulated data sets to explore its behavior under different models for the input data. The results are very promising; they indicate that the ED3M approach provides accurate estimates with as fast or better convergence time in comparison to well-known alternative techniques.

Malicious users can exploit the correlation among data to infer sensitive information from a series of seemingly innocuous data accesses. Thus, we develop an inference violation detection system to protect sensitive data content. Based on data dependency, database schema, and semantic knowledge, we constructed a semantic inference model (SIM) that represents the possible inference channels from any attribute to the pre-assigned sensitive attributes. The SIM is then instantiated to a semantic inference graph (SIG) for query-time inference violation detection. For a single-user case, when a user poses a query, the detection system will examine his/her past query log and calculate the probability of inferring sensitive information; the query request will be denied if the inference probability exceeds the pre-specified threshold. For multi-user cases, the users may share their query answers to increase the inference probability. Therefore, we develop a model to evaluate collaborative inference based on the query sequences of collaborators and their task-sensitive collaboration levels. Experimental studies reveal that information authoritativeness, communication fidelity, and honesty in collaboration are three key factors that affect the level of achievable collaboration. An example is given to illustrate the use of the proposed technique to prevent multiple collaborative users from deriving sensitive information via inference.
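
The query-time check described above can be reduced to a running probability estimate per sensitive attribute. The Java sketch below is deliberately simplistic: it assumes per-query leakage values are given and combines them under an independence assumption, whereas the SIM/SIG model derives the inference probability from actual data dependencies; only the threshold-and-deny behavior is the point here.

    class InferenceGuardSketch {
        private double inferProb = 0.0;   // accumulated inference probability

        // queryLeakage: assumed per-query contribution toward the sensitive
        // attribute; threshold: pre-specified denial threshold.
        boolean allow(double queryLeakage, double threshold) {
            double updated = 1 - (1 - inferProb) * (1 - queryLeakage);
            if (updated > threshold) return false;  // deny; state stays unchanged
            inferProb = updated;                     // accept and record the query
            return true;
        }
    }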

We propose several localized sensor area coverage protocols for heterogeneous sensors, each with arbitrary sensing and transmission radii. Each sensor has a time-out period and listens to messages sent by respective nodes before the time-out expires. Covered nodes decide to sleep, with or without transmitting a withdrawal message to inform neighbors about their status. After hearing from more neighbors, inactive sensors may observe that they have become covered and may decide to alter their original decision and transmit a retreat message. In our approach, a sensor decides to sleep only if a neighbor sensor is active or not covered. Sensor nodes whose sensing area is not fully covered (or fully covered but with a disconnected set of active sensors) when the deadline expires decide to remain active for the considered round and transmit an activity message announcing it.

Data mining techniques have been widely used in various applications, and one of the most important data mining applications is association rule mining. In Apriori-based association rule mining in hardware, one has to load candidate itemsets and a database into the hardware. Since the capacity of the hardware architecture is fixed, if the number of candidate itemsets or the number of items in the database is larger than the hardware capacity, the items are loaded into the hardware separately. The time complexity of those steps that need to load candidate itemsets or database items into the hardware is in proportion to the number of candidate itemsets multiplied by the number of items in the database, so too many candidate itemsets and a large database would create a performance bottleneck. In this paper, we propose a HAsh-based and PiPelIned (abbreviated as HAPPI) architecture for hardware-enhanced association rule mining. With this architecture, we can effectively reduce the frequency of loading the database into the hardware; HAPPI thus solves the bottleneck problem in Apriori-based hardware schemes.

Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing schemes fail, because they try to establish complete end-to-end paths before any data is sent. To deal with such networks, researchers have suggested using flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family of routing schemes that "spray" a few message copies into the network and then route each copy independently towards the destination. We show that, if carefully designed, spray routing...
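
A common concrete instance of spraying is the binary-spray rule sketched below in Java; the rule shown is one member of the family described above, under the assumption that the source is given an initial copy budget. A node holding more than one copy hands half of its copies to the node it encounters, and a node left with a single copy stops spraying and waits to deliver directly to the destination.

    class SprayRoutingSketch {
        // Returns {copiesKept, copiesGiven} when a node holding myCopies message
        // copies encounters another relay. The initial copy budget is assumed.
        static int[] onEncounter(int myCopies) {
            if (myCopies <= 1) return new int[]{myCopies, 0}; // "wait" phase
            int give = myCopies / 2;                           // binary spray: split
            return new int[]{myCopies - give, give};
        }
    }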

Networks employ link protection to achieve fast recovery from link failures. While the first link failure can be protected using link protection, there are several alternatives for protecting against the second failure. This paper formally classifies the approaches to dual-link failure resiliency. One of the strategies to recover from dual-link failures is to employ link protection for the two failed links independently, which requires that two links may not use each other in their backup paths if they may fail simultaneously. Such a requirement is referred to as the backup link mutual exclusion (BLME) constraint, and the problem of identifying a backup path for every link that satisfies this requirement is referred to as the BLME problem. This paper develops the necessary theory to establish the sufficient conditions for existence of a solution to the BLME problem. Solution methodologies for the BLME problem are developed using two approaches: 1) formulating the backup path selection as an integer linear program and 2) developing a polynomial-time heuristic based on minimum cost path routing. The ILP formulation and heuristic are applied to six networks, and their performance is compared with approaches that assume precise knowledge of dual-link failure. It is observed that a solution exists for all of the six networks considered. The heuristic approach is shown to obtain feasible solutions that are resilient to most dual-link failures, although the backup path lengths may be significantly higher than optimal. In addition, the paper illustrates the significance of the knowledge of failure location by showing that a network with higher connectivity may require less capacity than one with lower connectivity to recover from dual-link failures.

Sports video annotation is important for sports video semantic analysis such as event detection and personalization. We propose a novel approach for sports video semantic annotation and personalized retrieval. Different from state-of-the-art sports video analysis methods, which heavily rely on audio/visual features, the proposed approach incorporates web-casting text into sports video analysis. Compared with previous approaches, the contributions of our approach include the following. 1) The event detection accuracy is significantly improved due to the incorporation of web-casting text analysis. 2) The proposed approach is able to detect exact event boundaries and extract event semantics that are very difficult or impossible to handle by previous approaches. 3) The proposed method is able to create a personalized summary from both a general and a specific point of view related to a particular game, event, player, or team according to the user's preference. We present the framework of our approach and details of text analysis, video analysis, text/video alignment, and personalized retrieval. The experimental results on event boundary detection in sports video are encouraging and comparable to the manually selected events. The evaluation on personalized retrieval is effective in helping meet users' expectations.

Proving ownership rights on outsourced relational databases is a crucial issue in today's Internet-based application environments and in many content distribution applications. In this paper, we present a mechanism for proof of ownership based on the secure embedding of a robust imperceptible watermark in relational data. We formulate the watermarking of relational databases as a constrained optimization problem and discuss efficient techniques to solve the optimization problem and to handle the constraints. Our watermarking technique is resilient to watermark synchronization errors because it uses a partitioning approach that does not require marker tuples, and it thereby overcomes a major weakness in previously proposed watermarking techniques. Watermark decoding is based on a threshold-based technique characterized by an optimal threshold that minimizes the probability of decoding errors. We implemented a proof of concept of our watermarking technique and showed by experimental results that our technique is resilient to tuple deletion, alteration, and insertion attacks.

An approach to IP traceback based on the probabilistic packet marking paradigm is presented. Our approach, which we call randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages.

Computing constrained shortest paths is fundamental to some important network functions such as QoS routing, MPLS path selection, ATM circuit routing, and traffic engineering. The problem is to find the cheapest path that satisfies certain constraints. In particular, finding the cheapest delay-constrained path is critical for real-time data flows such as voice/video calls. Because the problem is NP-complete, much research has gone into designing heuristic algorithms that solve the ε-approximation of the problem with an adjustable accuracy. A common approach is to discretize (i.e., scale and round) the link delay or link cost, which transforms the original problem into a simpler one solvable in polynomial time and allows faster algorithms to be designed. The efficiency of the algorithms directly relates to the magnitude of the errors introduced during discretization. In this paper, we propose two techniques that reduce the discretization errors. Reducing the overhead of computing constrained shortest paths is practically important for the successful design of a high-throughput QoS router, which is limited in both processing power and memory space. Our simulations show that the new algorithms reduce the execution time by an order of magnitude on power-law topologies with 1000 nodes.
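
The scale-and-round step at the heart of these approximation schemes is tiny, as the Java sketch below shows; the delay quantum is an assumed parameter, not a value from the paper. Rounding a real-valued link delay up to an integer number of quanta keeps every returned path feasible but overestimates delay by up to one quantum per hop, and that accumulated rounding is exactly the discretization error the two proposed techniques reduce.

    class DelayDiscretizationSketch {
        // Map a real link delay onto an integer number of delay units so that
        // dynamic programming over total delay becomes feasible. Rounding up
        // preserves feasibility at the cost of discretization error.
        static int discretize(double delay, double unit) {
            return (int) Math.ceil(delay / unit);
        }
        public static void main(String[] args) {
            double unit = 0.25;                          // delay quantum (assumed)
            System.out.println(discretize(1.3, unit));   // prints 6
        }
    }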

We propose a new fair scheduling technique, called OCGRR (Output Controlled Grant-based Round Robin), for the support of DiffServ traffic in a core router. We define a stream to be the same-class packets from a given immediate upstream router destined to an output port of the core router. At each output port, streams may be isolated in separate buffers before being scheduled in a frame. A frame may have a number of small rounds for each class. Each stream within a class can transmit a number of packets in the frame based on its available grant, but only one packet per small round, thus reducing the intertransmission time from the same stream and achieving a smaller jitter and startup latency. The grant can be adjusted in a way to prevent the starvation of lower-priority classes. The sequence of traffic transmission in a frame starts from higher-priority traffic and goes down to lower-priority traffic. We also verify and demonstrate the good performance of our scheduler by simulation and comparison with other algorithms in terms of queuing delay, jitter, and start-up latency.

The capability of dynamically adapting to distinct runtime conditions is an important issue when designing distributed systems where negotiated quality of service (QoS) cannot always be delivered between processes. Providing fault tolerance for such dynamic environments is a challenging task. Considering such a context, this paper proposes an adaptive programming model for fault-tolerant distributed computing, which provides upper-layer applications with process state information according to the current system synchrony (or QoS). The underlying system model is hybrid, composed of a synchronous part (where there are time bounds on processing speed and message delay) and an asynchronous part (where there is no time bound). However, such a composition can vary over time, and the system may become totally asynchronous (e.g., when the underlying system QoS degrades) or totally synchronous. Moreover, processes are not required to share the same view of the system synchrony at a given time. To illustrate what can be done in this programming model and how to use it, the consensus problem is taken as a benchmark problem. This paper also presents an implementation of the model that relies on a negotiated quality of service (QoS) for communication channels.

A regeneration-theory approach is undertaken to analytically characterize the average overall completion time in a distributed system. The approach considers the heterogeneity in the processing rates of the nodes as well as the randomness in the delays imposed by the communication medium. The optimal one-shot load balancing policy is developed and subsequently extended to develop an autonomous and distributed load-balancing policy that can dynamically reallocate incoming external loads at each node. This adaptive and dynamic load-balancing policy is implemented and evaluated in a two-node distributed system. The performance of the proposed dynamic load-balancing policy is compared to that of static policies as well as existing dynamic load-balancing policies by considering the average completion time per task and the system processing rate in the presence of random arrivals of the external loads.

A new semi-fragile method for embedding watermark data into gray-scale images is proposed. In the proposed scheme, the cover image is partitioned into non-overlapping blocks of size 8 x 8 pixels. The discrete cosine transform of each block is computed, each block is scaled, and a quantization function is used to construct the watermark bit from each block. The gray threshold value of the image in the spatial domain is computed, and the watermark is embedded in all pixels whose intensity is less than the gray threshold. The image is reconstructed by computing the inverse cosine transform. The proposed scheme has been designed as public watermarking, i.e., it does not require the original image to verify its integrity, and it can be used to authenticate digital documents of high significance. Experimental results prove the efficiency of the scheme.

The World Wide Web has become the most important information source for most of us. Unfortunately, there is no guarantee for the correctness of information on the Web. Moreover, different websites often provide conflicting information on a subject, such as different specifications for the same product. In this paper, we propose a new problem, called Veracity, that is, conformity to truth, which studies how to find true facts from a large amount of conflicting information on many subjects that is provided by various websites. We design a general framework for the Veracity problem and invent an algorithm, called TRUTHFINDER, which utilizes the relationships between websites and their information, i.e., a website is trustworthy if it provides many pieces of true information, and a piece of information is likely to be true if it is provided by many trustworthy websites. An iterative method is used to infer the trustworthiness of websites and the correctness of information from each other. Our experiments show that TRUTHFINDER successfully finds true facts among conflicting information and identifies trustworthy websites better than the popular search engines.
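
The mutual-reinforcement loop of TRUTHFINDER can be sketched as follows. The Java fragment is a simplified skeleton rather than the published algorithm: it scores a fact as true unless all of its providers are untrustworthy and scores a site by the average confidence of its facts, while the real TRUTHFINDER additionally handles similarity between facts and uses dampened combination formulas.

    class TruthFinderSketch {
        // siteTrust[s]: trustworthiness of site s; factConf[f]: confidence of
        // fact f; provides[s][f] == 1 iff site s provides fact f.
        static void iterate(double[] siteTrust, double[] factConf,
                            int[][] provides, int rounds) {
            for (int r = 0; r < rounds; r++) {
                for (int f = 0; f < factConf.length; f++) {      // update facts
                    double allWrong = 1.0;
                    for (int s = 0; s < provides.length; s++)
                        if (provides[s][f] == 1) allWrong *= (1 - siteTrust[s]);
                    factConf[f] = 1 - allWrong;   // true unless every provider fails
                }
                for (int s = 0; s < siteTrust.length; s++) {     // update sites
                    double sum = 0; int n = 0;
                    for (int f = 0; f < factConf.length; f++)
                        if (provides[s][f] == 1) { sum += factConf[f]; n++; }
                    if (n > 0) siteTrust[s] = sum / n;  // average fact confidence
                }
            }
        }
    }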

This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of the low false-positive rate of a signature-based intrusion detection system (IDS) and the ability of an anomaly detection system (ADS) to detect novel unknown attacks. By mining anomalous traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of signature-based SNORT or Bro systems. A weighted signature generation scheme is developed to integrate ADS with SNORT by extracting signatures from detected anomalies: HIDS extracts signatures from the output of ADS and adds them into the SNORT signature database for fast and accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed with 10 days of the Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set, our experimental results show a 60 percent detection rate of the HIDS, compared with 30 percent and 22 percent when using the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3 percent false alarms. The signatures generated by ADS upgrade the SNORT performance by 33 percent. The HIDS approach proves the vitality of detecting intrusions and anomalies simultaneously, by automated data mining and signature generation over Internet connection episodes.

The emerging Peer-to-Peer (P2P) model has become a very powerful and attractive paradigm for developing Internet-scale systems for sharing resources, including files and documents. The distributed nature of these systems, where nodes are typically located across different networks and domains, inherently hinders the efficient retrieval of information. In this paper, we consider the effects of topologically aware overlay construction techniques on efficient P2P keyword search algorithms. We present the Peer Fusion (pFusion) architecture, which aims to efficiently integrate heterogeneous information that is geographically scattered on peers of different networks. Our approach builds on work in unstructured P2P systems and uses only local knowledge. Our empirical results, using the pFusion middleware architecture and data sets from Akamai's Internet mapping infrastructure (AKAMAI), the Active Measurement Project (NLANR), and the Text Retrieval Conference (TREC), show that the architecture we propose is both efficient and practical.

This paper investigates whether and when route reservation-based (RB) communication can yield better delay performance than non-reservation-based (NRB) communication in ad hoc wireless networks. In addition to posing this fundamental question, the requirements (in terms of route discovery, medium access control (MAC) protocol, pipelining, etc.) for making RB switching superior to NRB switching are also identified. A novel analytical framework is developed, and the network performance under both RB and NRB schemes is quantified. It is shown that if the aforementioned requirements are met, then RB schemes can indeed yield better delay performance than NRB schemes. This advantage, however, comes at the expense of lower throughput and goodput compared to NRB schemes.

We study routing misbehavior in MANETs (Mobile Ad Hoc Networks) in this paper. In general, routing protocols for MANETs are designed based on the assumption that all participating nodes are fully cooperative. However, due to the open structure and scarcely available battery-based energy, node misbehaviors may exist. One such routing misbehavior is that some selfish nodes will participate in the route discovery and maintenance processes but refuse to forward data packets. In this paper, we propose the 2ACK scheme, which serves as an add-on technique for routing schemes to detect routing misbehavior and to mitigate its adverse effect. The main idea of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the routing path. In order to reduce additional routing overhead, only a fraction of the received data packets are acknowledged in the 2ACK scheme. Analytical and simulation results are presented to evaluate the performance of the proposed scheme.
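
The bookkeeping a 2ACK observer needs is small, as the hedged Java sketch below shows; the acknowledgment ratio, timeout handling, and miss threshold are assumed parameters for the example, not the paper's values. Only a sampled fraction of data packets request a two-hop acknowledgment, and the next-hop link is flagged as misbehaving when the fraction of missing acknowledgments among the sampled packets exceeds the threshold.

    class TwoAckSketch {
        int requested = 0, missing = 0;

        // Decide per data packet whether to request a two-hop acknowledgment.
        boolean maybeRequestAck(double ackRatio, java.util.Random rng) {
            if (rng.nextDouble() >= ackRatio) return false; // unsampled packet
            requested++;
            return true;
        }

        void onAckTimeout() { missing++; }   // expected 2ACK never arrived

        boolean linkMisbehaves(double missThreshold) {
            return requested > 0 && (double) missing / requested > missThreshold;
        }
    }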

We propose an appearance-based face recognition method called the Laplacianface approach. By using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Different from Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can all be obtained from different graph models. We compare the proposed Laplacianface approach with the Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.

Principal Component Analysis (PCA) is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of feature space (independent variables) that is needed to describe the data economically; this is the case when there is a strong correlation between observed variables. The jobs which PCA can do are prediction, recognition, redundancy removal, feature extraction, data compression, and so on. Because PCA is a known, powerful technique for these tasks, another possible application would be to integrate this technology into an artificial intelligence system for more realistic interaction with humans.

A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages: the first stage computes a fuzzy derivative for eight different directions, and the second stage uses these fuzzy derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Both stages are based on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy noise. In particular, the shape of the membership functions is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. A statistical model for the noise distribution can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. Experimental results are obtained to show the feasibility of the proposed approach; these results are also compared to other filters by numerical measures and visual inspection.

An integrated methodology for the detection and removal of cracks on digitized paintings is presented in this project. The cracks are detected by thresholding the output of the morphological top-hat transform. Afterward, the thin dark brush strokes that have been misidentified as cracks are removed using either a median radial basis function neural network on hue and saturation data or a semi-automatic procedure based on region growing. Finally, crack filling using order statistics filters or controlled anisotropic diffusion is performed. The methodology has been shown to perform very well on digitized paintings suffering from cracks.

An ad hoc network is a self-organized entity with a number of mobile nodes and no centralized access point; it also suffers from a topology control problem, which leads to high power consumption, and from a lack of security while routing packets between mobile hosts. Authentication is one of the important security requirements of a communication network, and the common authentication schemes are not applicable in ad hoc networks. In this paper, we propose a secure communication protocol for communication between two nodes in an ad hoc network; this is achieved by using clustering techniques. We present a novel secure communication framework for ad hoc networks (SCP), which describes authentication and confidentiality when packets are distributed between hosts within a cluster and between clusters. The cluster head nodes (CHs) execute administrative functions, hold the network key used for certification, and perform the major operations of the SCP framework with the help of the Kerberos authentication application and a symmetric-key cryptography technique, making the framework secure, reliable, transparent, and scalable, with low overhead.

Job scheduling is the key feature of any computing environment, and the efficiency of computing depends largely on the scheduling technique used. Intelligence is the key factor lacking in today's job scheduling techniques: the existing algorithms are non-predictive and employ greedy algorithms or variants of them. Genetic algorithms are powerful search techniques based on the mechanisms of natural selection and natural genetics, and the efficiency of the job scheduling process would increase if previous experience and genetic algorithms were used. In this paper, we propose a model of the scheduling algorithm in which the scheduler can learn from previous experiences, so that effective job scheduling is achieved as time progresses. Multiple jobs are handled by the scheduler, and the resources a job needs are in remote locations; here we assume that the resources a job needs are in one location and not split over nodes, and that each node holding a resource runs a fixed number of jobs.
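As a concrete illustration of the genetic-algorithm idea for scheduling, here is a minimal Java sketch under simplified assumptions that are not from the project: jobs have fixed run times, a chromosome assigns each job to a node, and fitness is the makespan of the assignment.

```java
import java.util.Arrays;
import java.util.Random;

// Minimal GA sketch for assigning jobs to nodes. Assumptions (not from
// the project): fixed job run times, tournament selection, one-point
// crossover, and makespan (max node load) as the fitness to minimize.
public class GaScheduler {
    static final Random RNG = new Random(42);

    // Makespan: finish time of the most loaded node (lower is better).
    static double makespan(int[] assignment, double[] jobTime, int nodes) {
        double[] load = new double[nodes];
        for (int j = 0; j < assignment.length; j++) load[assignment[j]] += jobTime[j];
        return Arrays.stream(load).max().orElse(0);
    }

    static int[] evolve(double[] jobTime, int nodes, int popSize, int generations) {
        int jobs = jobTime.length;
        int[][] pop = new int[popSize][jobs];
        for (int[] c : pop) for (int j = 0; j < jobs; j++) c[j] = RNG.nextInt(nodes);

        for (int g = 0; g < generations; g++) {
            int[][] next = new int[popSize][];
            for (int i = 0; i < popSize; i++) {
                int[] a = pop[RNG.nextInt(popSize)], b = pop[RNG.nextInt(popSize)];
                int[] p1 = makespan(a, jobTime, nodes) < makespan(b, jobTime, nodes) ? a : b; // tournament
                int[] p2 = pop[RNG.nextInt(popSize)];
                int cut = RNG.nextInt(jobs);                 // one-point crossover
                int[] child = new int[jobs];
                for (int j = 0; j < jobs; j++) child[j] = j < cut ? p1[j] : p2[j];
                if (RNG.nextDouble() < 0.1) child[RNG.nextInt(jobs)] = RNG.nextInt(nodes); // mutation
                next[i] = child;
            }
            pop = next;
        }
        return Arrays.stream(pop).min((x, y) ->
                Double.compare(makespan(x, jobTime, nodes), makespan(y, jobTime, nodes))).get();
    }

    public static void main(String[] args) {
        double[] jobTime = {3, 1, 4, 1, 5, 9, 2, 6};
        int[] best = evolve(jobTime, 3, 40, 200);
        System.out.println(Arrays.toString(best) + " makespan=" + makespan(best, jobTime, 3));
    }
}
```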

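Returning to the two-stage fuzzy filter above, here is a deliberately simplified Java sketch of the smoothing stage: each neighbor's contribution is weighted by a membership value that is large when the derivative toward that neighbor is small. The real filter's directional fuzzy derivatives and adaptive membership functions are not reproduced; the triangular membership and its width are assumptions.

```java
// Simplified sketch of fuzzy-weighted smoothing for one pixel. A single
// triangular membership over the absolute pixel difference stands in for
// the project's fuzzy rules and adaptive membership functions.
public class FuzzySmoothSketch {
    static final double K = 40.0; // assumed membership half-width (gray levels)

    // Triangular membership: 1 for identical pixels, 0 beyond K gray levels.
    static double membership(double diff) {
        return Math.max(0.0, 1.0 - Math.abs(diff) / K);
    }

    // Smooths pixel (x, y) from its 8 neighbors, weighting each by membership.
    static double smoothPixel(double[][] img, int x, int y) {
        double num = 0, den = 0;
        for (int dx = -1; dx <= 1; dx++) {
            for (int dy = -1; dy <= 1; dy++) {
                if (dx == 0 && dy == 0) continue;
                double neighbor = img[x + dx][y + dy];
                double w = membership(neighbor - img[x][y]); // small derivative -> big weight
                num += w * neighbor;
                den += w;
            }
        }
        return den > 0 ? num / den : img[x][y];
    }
}
```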
The next-generation mobile network will support terminal mobility, personal mobility, and service provider portability, making global roaming seamless. A location-independent personal telecommunication number (PTN) scheme is conducive to implementing such a global mobile system. However, the non-geographic PTNs, coupled with the anticipated large number of mobile users in future mobile networks, may introduce very large centralized databases. This necessitates research into the design and performance of high-throughput database technologies used in mobile systems to ensure that future systems will be able to carry the anticipated loads efficiently. This paper proposes a scalable, robust, and efficient location database architecture based on location-independent PTNs. The proposed multi-tree database architecture consists of a number of database subsystems, each of which is a three-level tree structure and is connected to the others only through its root. By exploiting the localized nature of calling and mobility patterns, the proposed architecture effectively reduces the database loads as well as the signaling traffic incurred by the location registration and call delivery procedures. In addition, two memory-resident database indices, the memory-resident direct file and the T-tree, are proposed for the location databases to further improve their throughput. An analysis model and numerical results are presented to evaluate the efficiency of the proposed database architecture. The results reveal that the proposed architecture for location management can effectively support the anticipated high user density in future mobile networks.

This paper presents a steganography method using lossy compressed video, which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. In wavelet-based video compression methods such as the 3-D set partitioning in hierarchical trees (SPIHT) algorithm and motion-JPEG2000, wavelet coefficients of the discrete-wavelet-transformed video are quantized into a bit-plane structure, and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and motion-JPEG2000-BPCS steganography, the integration of 3-D SPIHT video coding with BPCS steganography and of motion-JPEG2000 with BPCS, respectively, are presented and tested. Experimental results show that 3-D SPIHT-BPCS is superior to motion-JPEG2000-BPCS with regard to embedding performance.

An approach for filling in blocks of missing data in wireless image transmission is presented in this paper. When compression algorithms such as JPEG are used as part of the wireless transmission process, images are first tiled into blocks of 8 x 8 pixels; when such images are transmitted over fading channels, the effects of noise can destroy entire blocks of the image. Instead of using common retransmission query protocols, we aim to reconstruct the lost data using the correlation between the lost block and its neighbors. If the lost block contained structure, it is reconstructed using an image inpainting algorithm, while texture synthesis is used for the textured blocks. The switch between the two schemes is done in a fully automatic fashion based on the surrounding available blocks. The performance of this method is tested for various images and combinations of lost blocks, and the viability of this method for image compression, in association with lossy JPEG, is also discussed.

The widely used web search engines give different users the same answer set, although the users may have different preferences. Personalized web search carries out the search for each user according to his preference. In order to reduce the time wasted on browsing unnecessary documents, this paper presents an intelligent Personal Agent for Web Search (PAWS). PAWS intelligently utilizes the Self-Organizing Map (SOM) as the user's profile and is therefore capable of providing a high-quality answer set to the user.

The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion-avoidance mechanism called network border patrol (NBP). NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network. NBP is complemented with the proposed enhanced core-stateless fair queueing (ECSFQ) mechanism, which provides fair bandwidth allocations to competing flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing complexity toward the edges of the network whenever possible. Simulation results show that NBP effectively eliminates congestion collapse and that, when combined with ECSFQ, approximately max-min fair bandwidth allocations can be achieved for competing flows.
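A minimal sketch of the border-policing idea behind NBP (illustrative only: the real mechanism derives each flow's allowed rate from ingress/egress feedback exchanged between border routers, which is reduced here to a fixed-rate per-flow token bucket at the ingress; all names and rates are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative per-flow rate limiter at an ingress border router.
// NBP itself would set each flow's rate from egress feedback; here the
// allowed rate is simply a fixed parameter.
public class BorderPatrolSketch {
    static class TokenBucket {
        double tokens, ratePerMs, capacity;
        long last = System.currentTimeMillis();
        TokenBucket(double ratePerMs, double capacity) {
            this.ratePerMs = ratePerMs; this.capacity = capacity; this.tokens = capacity;
        }
        boolean admit(int packetBytes) {
            long now = System.currentTimeMillis();
            tokens = Math.min(capacity, tokens + (now - last) * ratePerMs);
            last = now;
            if (tokens >= packetBytes) { tokens -= packetBytes; return true; }
            return false; // an unresponsive flow exceeding its share is held back
        }
    }

    private final Map<String, TokenBucket> flows = new HashMap<>();

    // Admits or drops a packet of the given flow before it enters the core.
    public boolean onPacketArrival(String flowId, int packetBytes) {
        return flows.computeIfAbsent(flowId, id -> new TokenBucket(125.0, 15000.0))
                    .admit(packetBytes); // assumed 125 bytes/ms (~1 Mbps), 15 KB burst
    }
}
```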

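Returning to the block-loss concealment approach above, a deliberately simplified Java sketch: a lost 8 x 8 block is filled by interpolating from the pixels bordering it. The project's structure/texture classification, inpainting, and texture synthesis are far more involved; this only illustrates exploiting neighbor correlation instead of retransmission.

```java
// Simplified block-filling sketch: a lost 8x8 block is reconstructed by
// bilinear interpolation from the pixels just outside it. This stands in
// for the project's inpainting / texture-synthesis switch.
public class BlockConcealment {
    static final int B = 8; // JPEG-style block size

    // Fills img[bx..bx+7][by..by+7] from the bordering rows and columns.
    // Assumes the lost block does not touch the image border.
    static void fillLostBlock(double[][] img, int bx, int by) {
        for (int i = 0; i < B; i++) {
            for (int j = 0; j < B; j++) {
                double top = img[bx - 1][by + j];
                double bottom = img[bx + B][by + j];
                double left = img[bx + i][by - 1];
                double right = img[bx + i][by + B];
                // Weight each border pixel by its proximity to (i, j).
                double wv = (i + 1) / (double) (B + 1);
                double wh = (j + 1) / (double) (B + 1);
                double vertical = (1 - wv) * top + wv * bottom;
                double horizontal = (1 - wh) * left + wh * right;
                img[bx + i][by + j] = (vertical + horizontal) / 2;
            }
        }
    }
}
```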
Mobile ad hoc networks (MANETs) suffer from a high transmission error rate because of the nature of radio communications, and the broadcast operation, as a fundamental service in MANETs, is prone to the broadcast storm problem if forward nodes are not carefully designated. The objective of reducing broadcast redundancy while still providing a high delivery ratio for each broadcast packet is a major challenge in a dynamic environment. In this paper, we propose a simple, reliable broadcast algorithm, called double-covered broadcast (DCB), that takes advantage of broadcast redundancy to improve the delivery ratio in an environment with a rather high transmission error rate. Among the 1-hop neighbors of the sender, only selected forward nodes retransmit the broadcast message. Forward nodes are selected in such a way that (1) the sender's 2-hop neighbors are covered and (2) the sender's 1-hop neighbors are either forward nodes, or non-forward nodes covered by at least two forwarding neighbors. The retransmissions of the forward nodes are received by the sender as confirmation of their receiving the packet, while the non-forward 1-hop neighbors of the sender do not acknowledge the reception of the broadcast. If the sender does not detect all its forward nodes' retransmissions, it resends the packet until the maximum number of retries is reached. Simulation results show that the algorithm provides good performance for a broadcast operation in a high-transmission-error-rate environment.

Edge detection is a fundamental tool used in most image processing applications to obtain information from the frames as a precursor step to feature extraction and object segmentation. This process detects the outlines of an object and the boundaries between objects and the background in the image. An edge-detection filter can also be used to improve the appearance of blurred or anti-aliased image streams. The basic edge-detection operator is a matrix-area gradient operation that determines the level of variance between different pixels: the operator is calculated by forming a matrix centered on a pixel chosen as the center of the matrix area, and if the value of this matrix area is above a given threshold, the middle pixel is classified as an edge. Examples of gradient-based edge detectors are the Roberts, Prewitt, and Sobel operators. All the gradient-based algorithms have kernel operators that calculate the strength of the slope in directions orthogonal to each other, commonly vertical and horizontal; the Prewitt operator, for example, measures two such components. The contributions of the different slope components are then combined to give the total value of the edge strength.
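A minimal Java sketch of such a gradient-based detector, using the Sobel kernels as one common choice (the threshold value is up to the caller): the horizontal and vertical kernel responses are combined into a gradient magnitude, and the center pixel is classified as an edge when the magnitude exceeds the threshold.

```java
// Gradient-based edge detection with the Sobel kernels. The two kernels
// measure orthogonal (horizontal/vertical) slope components, which are
// combined into a total edge strength and thresholded.
public class SobelEdgeDetector {
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    // Returns true where the gradient magnitude exceeds the threshold.
    static boolean[][] detect(int[][] img, double threshold) {
        int h = img.length, w = img[0].length;
        boolean[][] edges = new boolean[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                // 3x3 matrix centered on the pixel (y, x)
                for (int i = -1; i <= 1; i++) {
                    for (int j = -1; j <= 1; j++) {
                        gx += GX[i + 1][j + 1] * img[y + i][x + j];
                        gy += GY[i + 1][j + 1] * img[y + i][x + j];
                    }
                }
                double magnitude = Math.sqrt((double) gx * gx + (double) gy * gy);
                edges[y][x] = magnitude > threshold; // middle pixel classified as edge
            }
        }
        return edges;
    }
}
```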

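And for the double-covered broadcast (DCB) algorithm above, a sender-side sketch of the retry loop (hypothetical names; forward-node selection and the acknowledgment wait window are assumed to be handled elsewhere):

```java
import java.util.HashSet;
import java.util.Set;

// Sender-side sketch of DCB: the sender treats each forward node's
// retransmission as an implicit ACK and resends the packet until all
// forward nodes are confirmed or the retry limit is reached.
public class DcbSender {
    static final int MAX_RETRIES = 3; // assumed retry limit

    private final Set<String> forwardNodes;         // chosen to cover 2-hop neighbors
    private final Set<String> confirmed = new HashSet<>();

    public DcbSender(Set<String> forwardNodes) {
        this.forwardNodes = forwardNodes;
    }

    // Called when this node overhears a forward node retransmit the packet.
    public void onRetransmissionHeard(String nodeId) {
        if (forwardNodes.contains(nodeId)) confirmed.add(nodeId);
    }

    // Broadcasts with retries; transmit(...) hands the packet to the MAC layer.
    public boolean broadcast(byte[] packet, java.util.function.Consumer<byte[]> transmit) {
        for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
            transmit.accept(packet);
            // ... wait for the acknowledgment window here (omitted) ...
            if (confirmed.containsAll(forwardNodes)) return true;
        }
        return false; // some forward node never confirmed reception
    }
}
```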