3rd Floor, Old No.13/1, New No.27, Brindavan Street, West Mambalam, Chennai - 600033

S.No | Title | Domain | Language
1 | A TABU SEARCH ALGORITHM FOR CLUSTER BUILDING IN WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | -
2 | ROUTE STABILITY IN MANETS UNDER THE RANDOM DIRECTION MOBILITY MODEL | - | -
3 | GREEDY ROUTING WITH ANTI-VOID TRAVERSAL FOR WIRELESS SENSOR NETWORKS | - | -
4 | CELL BREATHING TECHNIQUES FOR LOAD BALANCING IN WIRELESS LANS | MOBILE COMPUTING | -
5 | RESEQUENCING ANALYSIS OF STOP-AND-WAIT ARQ FOR PARALLEL MULTICHANNEL COMMUNICATIONS | - | -
6 | RESOURCE ALLOCATION IN OFDMA WIRELESS COMMUNICATIONS SYSTEMS SUPPORTING MULTIMEDIA SERVICES | NETWORKING | -
7 | ENHANCING PRIVACY AND AUTHORIZATION SCALABILITY IN THE GRID THROUGH ONTOLOGIES | INFORMATION TECHNOLOGY IN BIOMEDICINE | -
8 | A COMBINATORIAL APPROACH FOR SQL INJECTION ATTACKS | ADVANCE CONFERENCE | -
9 | DYNAMIC SEARCH ALGORITHM IN UNSTRUCTURED PEER-TO-PEER NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
10 | ANALYSIS OF SHORTEST PATH ROUTING FOR LARGE MULTI-HOP WIRELESS NETWORKS | NETWORKING | -
11 | SECURE AND POLICY-COMPLIANT SOURCE ROUTING | - | -
12 | FLEXIBLE DETERMINISTIC PACKET MARKING: AN IP TRACEBACK SYSTEM TO FIND THE REAL SOURCE OF ATTACKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
13 | DISTRIBUTED ALGORITHMS FOR CONSTRUCTING APPROXIMATE MINIMUM SPANNING TREES IN WIRELESS SENSOR NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
14 | MOBILITY MANAGEMENT APPROACHES FOR MOBILE IP NETWORKS: PERFORMANCE COMPARISON AND USE RECOMMENDATIONS | NETWORKING | JAVA
15 | SINGLE-LINK FAILURE DETECTION IN ALL-OPTICAL NETWORKS USING MONITORING CYCLES AND PATHS | NETWORKING | DOT NET
16 | A FAITHFUL DISTRIBUTED MECHANISM FOR SHARING THE COST OF MULTICAST TRANSMISSIONS | PARALLEL AND DISTRIBUTED SYSTEMS | J2EE
17 | ATOMICITY ANALYSIS OF SERVICE COMPOSITION ACROSS ORGANIZATIONS | SOFTWARE ENGINEERING | J2EE
18 | DYNAMIC ROUTING WITH SECURITY CONSIDERATIONS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
19 | COLLUSIVE PIRACY PREVENTION IN P2P CONTENT DELIVERY NETWORKS | COMPUTERS | J2EE
20 | SPREAD SPECTRUM WATERMARKING SECURITY | INFORMATION FORENSICS AND SECURITY | DOT NET
21 | LOCAL CONSTRUCTION OF NEAR-OPTIMAL POWER SPANNERS FOR WIRELESS AD-HOC NETWORKS | MOBILE COMPUTING | DOT NET
22 | MULTIPLE ROUTING CONFIGURATIONS FOR FAST IP NETWORK RECOVERY | NETWORKING | JAVA
23 | COMPACTION OF SCHEDULES AND A TWO-STAGE APPROACH FOR DUPLICATION-BASED DAG SCHEDULING | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET
24 | THE EFFECTIVENESS OF CHECKSUMS FOR EMBEDDED NETWORKS | DEPENDABLE AND SECURE COMPUTING | DOT NET
25 | DETECTING MALICIOUS PACKET LOSSES | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
26 | VIRUS SPREAD IN NETWORKS | NETWORKING | DOT NET
27 | BIASED RANDOM WALKS IN UNIFORM WIRELESS NETWORKS | MOBILE COMPUTING | DOT NET
28 | INFORMATION CONTENT-BASED SENSOR SELECTION AND TRANSMISSION POWER ADJUSTMENT FOR COLLABORATIVE TARGET TRACKING | MOBILE COMPUTING | DOT NET
29 | PRESTO: FEEDBACK-DRIVEN DATA MANAGEMENT IN SENSOR NETWORKS | NETWORKING | DOT NET
30 | EXPLICIT LOAD BALANCING TECHNIQUE FOR NGEO SATELLITE IP NETWORKS WITH ON-BOARD PROCESSING CAPABILITIES | NETWORKING | DOT NET
31 | DELAY ANALYSIS FOR MAXIMAL SCHEDULING WITH FLOW CONTROL IN WIRELESS NETWORKS WITH BURSTY TRAFFIC | NETWORKING | DOT NET
32 | ENERGY MAPS FOR MOBILE WIRELESS NETWORKS: COHERENCE TIME VERSUS SPREADING PERIOD | MOBILE COMPUTING | DOT NET
33 | RANDOMCAST: AN ENERGY-EFFICIENT COMMUNICATION SCHEME FOR MOBILE AD HOC NETWORKS | MOBILE COMPUTING | DOT NET
34 | MINIMIZING FILE DOWNLOADING TIME IN STOCHASTIC PEER-TO-PEER NETWORKS | NETWORKING | DOT NET
35 | QUIVER: CONSISTENT OBJECT SHARING FOR EDGE SERVICES | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
36 | RATE & DELAY GUARANTEES PROVIDED BY CLOS PACKET SWITCHES WITH LOAD BALANCING | NETWORKING | JAVA
37 | GEOMETRIC APPROACH TO IMPROVING ACTIVE PACKET LOSS MEASUREMENT | NETWORKING | JAVA
38 | A PRECISE TERMINATION CONDITION OF THE PROBABILISTIC PACKET MARKING ALGORITHM | DEPENDABLE AND SECURE COMPUTING | JAVA
39 | INTRUSION DETECTION IN HOMOGENEOUS AND HETEROGENEOUS WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | JAVA
40 | A DISTRIBUTED AND SCALABLE ROUTING TABLE MANAGER FOR THE NEXT GENERATION OF IP ROUTERS | - | DOT NET
41 | PERFORMANCE OF A SPECULATIVE TRANSMISSION SCHEME FOR SCHEDULING-LATENCY REDUCTION | NETWORKING | JAVA
42 | EFFICIENT 2-D GRAY-SCALE MORPHOLOGICAL TRANSFORMATIONS WITH ARBITRARY FLAT STRUCTURING ELEMENTS | IMAGE PROCESSING | DOT NET
43 | RATE ALLOCATION & NETWORK LIFETIME PROBLEM FOR WIRELESS SENSOR NETWORKS | NETWORKING | DOT NET
44 | VISION-BASED PROCESSING FOR REAL-TIME 3-D DATA ACQUISITION BASED ON CODED STRUCTURED LIGHT | IMAGE PROCESSING | DOT NET
45 | USING THE CONCEPTUAL COHESION OF CLASSES FOR FAULT PREDICTION IN OBJECT-ORIENTED SYSTEMS | SOFTWARE ENGINEERING | JAVA
46 | LOCATION-BASED SPATIAL QUERY PROCESSING IN WIRELESS BROADCAST ENVIRONMENTS | MOBILE COMPUTING | JAVA
47 | BANDWIDTH ESTIMATION FOR IEEE 802.11-BASED AD HOC NETWORKS | MOBILE COMPUTING | JAVA
48 | MODELING & AUTOMATED CONTAINMENT OF WORMS | DEPENDABLE AND SECURE COMPUTING | JAVA
49 | TRUSTWORTHY COMPUTING UNDER RESOURCE CONSTRAINTS WITH THE DOWN POLICY | DEPENDABLE AND SECURE COMPUTING | DOT NET
50 | BENEFIT-BASED DATA CACHING IN AD HOC NETWORKS | MOBILE COMPUTING | JAVA
51 | STATISTICAL TECHNIQUES FOR DETECTING TRAFFIC ANOMALIES THROUGH PACKET HEADER DATA | NETWORKING | DOT NET
52 | HBA: DISTRIBUTED METADATA MANAGEMENT FOR LARGE-SCALE CLUSTER-BASED STORAGE SYSTEMS | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET
53 | TEMPORAL PARTITIONING OF COMMUNICATION RESOURCES IN AN INTEGRATED ARCHITECTURE | DEPENDABLE AND SECURE COMPUTING | DOT NET
54 | THE EFFECT OF PAIRS IN PROGRAM DESIGN TASKS | SOFTWARE ENGINEERING | DOT NET
55 | CONSTRUCTING INTER-DOMAIN PACKET FILTERS TO CONTROL IP SPOOFING BASED ON BGP UPDATES | DEPENDABLE AND SECURE COMPUTING | JAVA
56 | ORTHOGONAL DATA EMBEDDING FOR BINARY IMAGES IN MORPHOLOGICAL TRANSFORM DOMAIN: A HIGH-CAPACITY APPROACH | MULTIMEDIA | DOT NET
57 | PROTECTION OF DATABASE SECURITY VIA COLLABORATIVE INFERENCE DETECTION | KNOWLEDGE AND DATA ENGINEERING | J2EE
58 | ESTIMATION OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M | SOFTWARE ENGINEERING | DOT NET
59 | ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE RETRIEVAL | IMAGE PROCESSING | DOT NET
60 | LOCALIZED SENSOR AREA COVERAGE WITH LOW COMMUNICATION OVERHEAD | MOBILE COMPUTING | DOT NET
61 | HARDWARE-ENHANCED ASSOCIATION RULE MINING WITH HASHING AND PIPELINING | KNOWLEDGE AND DATA ENGINEERING | DOT NET
62 | EFFICIENT RESOURCE ALLOCATION FOR WIRELESS MULTICAST | MOBILE COMPUTING | DOT NET
63 | EFFICIENT ROUTING IN INTERMITTENTLY CONNECTED MOBILE NETWORKS: THE MULTIPLE-COPY CASE | NETWORKING | DOT NET
64 | A NOVEL FRAMEWORK FOR SEMANTIC ANNOTATION AND PERSONALIZED RETRIEVAL OF SPORTS VIDEO | MULTIMEDIA | DOT NET
65 | TWO TECHNIQUES FOR FAST COMPUTATION OF CONSTRAINED SHORTEST PATHS | NETWORKING | JAVA
66 | WATERMARKING RELATIONAL DATABASES USING OPTIMIZATION-BASED TECHNIQUES | KNOWLEDGE AND DATA ENGINEERING | DOT NET
67 | PROBABILISTIC PACKET MARKING FOR LARGE-SCALE IP TRACEBACK | NETWORKING | DOT NET
68 | DUAL-LINK FAILURE RESILIENCY THROUGH BACKUP LINK MUTUAL EXCLUSION | NETWORKING | JAVA
69 | TRUTH DISCOVERY WITH MULTIPLE CONFLICTING INFORMATION PROVIDERS ON THE WEB | KNOWLEDGE AND DATA ENGINEERING | J2EE
70 | DYNAMIC LOAD BALANCING IN DISTRIBUTED SYSTEMS IN THE PRESENCE OF DELAYS: A REGENERATION-THEORY APPROACH | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA
71 | A SEMI-FRAGILE CONTENT-BASED IMAGE WATERMARKING FOR AUTHENTICATION IN SPATIAL DOMAIN USING DISCRETE COSINE TRANSFORM | JOURNAL | JAVA
72 | AN ADAPTIVE PROGRAMMING MODEL FOR FAULT-TOLERANT DISTRIBUTED COMPUTING | DEPENDABLE AND SECURE COMPUTING | JAVA
73 | AN ACKNOWLEDGMENT-BASED APPROACH FOR THE DETECTION OF ROUTING MISBEHAVIOR IN MANETS | MOBILE COMPUTING | JAVA
74 | HYBRID INTRUSION DETECTION WITH WEIGHTED SIGNATURE GENERATION OVER ANOMALOUS INTERNET EPISODES (HIDS) | DEPENDABLE AND SECURE COMPUTING | J2EE
75 | PFUSION: A P2P ARCHITECTURE FOR INTERNET-SCALE CONTENT-BASED SEARCH AND RETRIEVAL | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET
76 | ROUTE RESERVATION IN AD HOC WIRELESS NETWORKS | MOBILE COMPUTING | JAVA
77 | DISTRIBUTED CACHE UPDATING FOR THE DYNAMIC SOURCE ROUTING PROTOCOL | MOBILE COMPUTING | JAVA
78 | DIGITAL IMAGE PROCESSING TECHNIQUES FOR THE DETECTION AND REMOVAL OF CRACKS IN DIGITIZED PAINTINGS | IMAGE PROCESSING | DOT NET
79 | NOISE REDUCTION BY FUZZY IMAGE FILTERING | FUZZY SYSTEMS | JAVA
80 | A NOVEL SECURE COMMUNICATION PROTOCOL FOR AD HOC NETWORKS [SCP] | - | JAVA
81 | FACE RECOGNITION USING LAPLACIANFACES | PATTERN ANALYSIS AND MACHINE INTELLIGENCE | JAVA
82 | PREDICTIVE JOB SCHEDULING IN A CONNECTION LIMITED SYSTEM USING PARALLEL GENETIC ALGORITHM | INTERNATIONAL CONFERENCE ON INTELLIGENT AND ADVANCED SYSTEMS | JAVA
83 | PERSONALIZED WEB SEARCH WITH SELF-ORGANIZING MAP | INTERNATIONAL CONFERENCE ON E-TECHNOLOGY, E-COMMERCE AND E-SERVICE | J2EE
84 | A DISTRIBUTED DATABASE ARCHITECTURE FOR GLOBAL ROAMING IN NEXT-GENERATION MOBILE NETWORKS | NETWORKING | JAVA
85 | STRUCTURE AND TEXTURE FILLING-IN OF MISSING IMAGE BLOCKS IN WIRELESS TRANSMISSION AND COMPRESSION APPLICATIONS | IMAGE PROCESSING | JAVA
86 | NETWORK BORDER PATROL: PREVENTING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE INTERNET | NETWORKING | JAVA
87 | APPLICATION OF BPCS STEGANOGRAPHY TO WAVELET COMPRESSED VIDEO | IMAGE PROCESSING | JAVA
88 | IMAGE PROCESSING FOR EDGE DETECTION | - | DOT NET
89 | DOUBLE-COVERED BROADCAST (DCB): A SIMPLE RELIABLE BROADCAST ALGORITHM IN MANETS | CONFERENCE (IEEE INFOCOM) | JAVA
Year: the projects above span 2009, 2008, 2007, 2006, 2005, and 2004, listed most recent first.
Abstracts

The main challenge in wireless sensor network deployment pertains to optimizing energy consumption when collecting data from sensor nodes. This paper proposes a new centralized clustering method for a data collection mechanism in wireless sensor networks, which is based on network energy maps and Quality-of-Service (QoS) requirements. The clustering problem is modeled as a hypergraph partitioning and its resolution is based on a tabu search heuristic. Our approach defines moves using largest size cliques in a feasibility cluster graph. Compared to other methods (CPLEX-based method, distributed method, simulated annealing-based method), the results show that our tabu search-based approach returns high-quality solutions in terms of cluster cost and execution time. Moreover, this approach is suitable for handling network extensibility in a satisfactory manner.

A fundamental issue arising in mobile ad hoc networks (MANETs) is the selection of the optimal path between any two nodes. A method that has been advocated to improve routing efficiency is to select the most stable path so as to reduce the latency and the overhead due to route reconstruction. In this work, we study both the availability and the duration probability of a routing path that is subject to link failures caused by node mobility. In particular, we focus on the case where the network nodes move according to the Random Direction model, and we derive both exact and approximate (but simple) expressions of these probabilities. Through our results, we study the problem of selecting an optimal route in terms of path availability. Finally, we propose an approach to improve the efficiency of reactive routing protocols.

The unreachability problem (i.e., the so-called void problem) that exists in the greedy routing algorithms has been studied for the wireless sensor networks. Some of the current research work cannot fully resolve the void problem, while there exist other schemes that can guarantee the delivery of packets with the excessive consumption of control overheads. In this paper, a greedy anti-void routing (GAR) protocol is proposed to solve the void problem with increased routing efficiency by exploiting the boundary finding technique for the unit disk graph (UDG). The proposed rolling-ball UDG boundary traversal (RUT) is employed to completely guarantee the delivery of packets from the source to the destination node under the UDG network. The boundary map (BM) and the indirect map searching (IMS) scheme are proposed as efficient algorithms for the realization of the RUT technique. The proofs of correctness for the GAR scheme are also given in this paper. Moreover, the hop count reduction (HCR) scheme is utilized as a short-cutting technique to reduce the routing hops by listening to the neighbor's traffic, while the intersection navigation (IN) mechanism is proposed to obtain the best rolling direction for boundary traversal with the adoption of the shortest path criterion. In order to maintain the network requirement of the proposed RUT scheme under non-UDG networks, the partial UDG construction (PUC) mechanism is proposed to transform the non-UDG into UDG setting for a portion of nodes that facilitate boundary traversal. These three schemes are incorporated within the GAR protocol to further enhance the routing performance with reduced communication overhead. Comparing with the existing localized routing algorithms, the simulation results show that the proposed GAR-based protocols can provide better routing efficiency.

Maximizing network throughput while providing fairness is one of the key challenges in wireless LANs (WLANs). This goal is typically achieved when the load of access points (APs) is balanced. Recent studies on operational WLANs, however, have shown that AP load is often substantially uneven. To alleviate such imbalance of load, several load balancing schemes have been proposed. These schemes commonly require proprietary software or hardware at the user side for controlling the user-AP association. In this paper we present a new load balancing technique by controlling the size of WLAN cells (i.e., an AP's coverage range), which is conceptually similar to cell breathing in cellular networks. The proposed scheme does not require any modification to the users nor to the IEEE 802.11 standard. It only requires the ability of dynamically changing the transmission power of the AP beacon messages. We develop a set of polynomial time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP. We also consider the problem of network-wide min-max load balancing. Simulation results show that the performance of the proposed method is comparable with or superior to the best existing association-based methods.
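The cell-breathing idea above reduces the load of the most congested AP by tuning beacon powers. The following is a minimal, illustrative sketch (not the paper's algorithm): it brute-forces a small set of candidate beacon power levels for two APs and picks the setting that minimizes the maximum AP load under a toy path-loss and strongest-signal association model; the power levels, distances, and propagation constants are assumptions.

```java
import java.util.*;

/** Minimal sketch, assuming a toy path-loss model: search beacon power settings
 *  that minimize the load of the most congested AP. */
public class CellBreathingSketch {
    static final double[] LEVELS = {10, 15, 20};       // candidate beacon powers (dBm), assumed
    static double rssi(double txPower, double dist) {  // toy log-distance path-loss model
        return txPower - 40 - 30 * Math.log10(Math.max(dist, 1.0));
    }
    // load of the most congested AP when each user joins the AP with the strongest beacon
    static int maxLoad(double[][] apUserDist, double[] power) {
        int[] load = new int[power.length];
        int users = apUserDist[0].length;
        for (int u = 0; u < users; u++) {
            int best = 0;
            for (int a = 1; a < power.length; a++)
                if (rssi(power[a], apUserDist[a][u]) > rssi(power[best], apUserDist[best][u])) best = a;
            load[best]++;
        }
        return Arrays.stream(load).max().getAsInt();
    }
    public static void main(String[] args) {
        // distances from 2 APs to 6 users (metres), illustrative
        double[][] dist = {
            {5, 10, 15, 20, 40, 60},
            {70, 55, 45, 30, 12, 8}
        };
        double[] best = null;
        int bestLoad = Integer.MAX_VALUE;
        for (double p0 : LEVELS) for (double p1 : LEVELS) {   // exhaustive search over settings
            double[] cand = {p0, p1};
            int l = maxLoad(dist, cand);
            if (l < bestLoad) { bestLoad = l; best = cand; }
        }
        System.out.println("beacon powers " + Arrays.toString(best) + ", max AP load " + bestLoad);
    }
}
```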
modeling and performance. which is a novel idea of incorporating the uniqueness of Signature based method and auditing method. Through examples. The use of data Grids for sharing relevant data has proven to be successful in many research disciplines. multichannel data communications. We evaluate the resequencing delay and the resequencing buffer occupancy. The paper describes the architecture and processes. We design a resource allocation algorithm for down-link of orthogonal frequency division multiple access (OFDMA) systems supporting real-time (RT) and best-effort (BE) services simultaneously over a time-varying wireless channel.In this paper. we derive the probability generating function of the resequencing buffer occupancy and the probability mass function of the resequencing delay. we consider a multichannel data communication system in which the stop-and-wait automatic-repeat request protocol for parallel channels with an in-sequence delivery guarantee (MSW-ARQ-inS) is used for error control. by assuming the Gilbert–Elliott model for each channel. Simulation results show that the proposed algorithm well meets the QoS requirements with the high throughput and outperforms the modified largest weighted delay first (M-LWDF) algorithm that supports similar QoS requirements. we compute the probability mass functions of the resequencing buffer occupancy and the resequencing delay for time-invariant channels. In signature based method It uses an approach called Hirschberg algorithm. which is used to control the fluctuation in transmission rates and to limit the RT packet delay to a moderate level.. Then. However. SW-ARQ. From numerical and simulation results. The other is the tolerable average absolute deviation of transmission rate (AADTR) just for the RT services. resequencing delay. From signature based method standpoint of view. One is the required average transmission rate for both RT and BE services. and also shows results obtained in a medical imaging platform. There are many approaches that provide encrypted storages and key shares to prevent the access from unauthorized users. resequencing buffer occupancy. The major issue of web application security is the SQL Injection. On the other hand from the Auditing based method standpoint of view. This system was able to stop all of the successful attacks and did not generate any false positives. the use of these environments when personal data are involved (such as in health) is reduced due to its lack of trust. we extend our analysis to time-varying channels. We present in this paper a privacy-enhancing technique that uses encryption and relates to the structure of the data and their organizations. Index Terms—In-sequence delivery. Under the assumption that all channels have the same transmission rate but possibly different time-invariant error rates. . respectively. it is a divide and conquer approach to reduce the time and space complexity. the selective-repeat ARQ) over multiple time-varying channels. it analyzes the transaction to find out the malicious access. it presents a detection mode for SQL injection using pair wise sequence alignment of amino acid code formulated from web application form parameter sent via web server. these approaches are additional layers that should be managed along with the authorization policies. which can give the attackers unrestricted access to the database that underlie Web applications and has become increasingly frequent and serious. We take two kinds of QoS requirements into account. 
we analyze trends in the mean resequencing buffer occupancy and the mean resequencing delay as functions of system parameters. However. We expect that the modeling technique and analytical approach used in this paper can be applied to the performance evaluation of other ARQ protocols (e. We formulate the optimization problem representing the resource allocation under consideration and solve it by using the dual optimization technique and the projection stochastic subgradient method. The proposed algorithm aims at maximizing system throughput while satisfying quality of service (QoS) requirements of the RT and BE services. A combinatorial approach for protecting Web applications against SQL injection is discussed in this paper.g. providing a natural way to propagate authorization and also a framework that fits with many use cases.
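The signature-based detection above scores how closely a submitted form parameter aligns with known injection signatures. Below is a minimal, illustrative sketch using a plain dynamic-programming global alignment rather than the space-saving Hirschberg variant named in the abstract; the scoring values and the signature string are assumptions.

```java
/** Minimal sketch, assuming a simple match/mismatch/gap scoring scheme:
 *  global alignment of a form parameter against an injection signature. */
public class AlignmentSketch {
    static int align(String a, String b) {
        int match = 2, mismatch = -1, gap = -1;
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 1; i <= a.length(); i++) dp[i][0] = i * gap;
        for (int j = 1; j <= b.length(); j++) dp[0][j] = j * gap;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int diag = dp[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? match : mismatch);
                dp[i][j] = Math.max(diag, Math.max(dp[i - 1][j] + gap, dp[i][j - 1] + gap));
            }
        return dp[a.length()][b.length()];
    }
    public static void main(String[] args) {
        String signature = "' or '1'='1";                 // known injection pattern (assumed)
        String benign = "john.smith";
        String suspicious = "x' or '1'='1";
        System.out.println("benign score:     " + align(signature, benign));
        System.out.println("suspicious score: " + align(signature, suspicious));
        // a score close to the signature's self-alignment flags the request for auditing
    }
}
```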
Designing efficient search algorithms is a key challenge in unstructured peer-to-peer networks. Flooding and random walk (RW) are two typical search algorithms. Flooding searches aggressively and covers the most nodes. However, it generates a large amount of query messages and, thus, does not scale. On the contrary, RW searches conservatively. It only generates a fixed amount of query messages at each hop but would take longer search time. We propose the dynamic search (DS) algorithm, which is a generalization of flooding and RW. DS takes advantage of various contexts under which each previous search algorithm performs well. It resembles flooding for short-term search and RW for long-term search. Moreover, DS could be further combined with knowledge-based search mechanisms to improve the search performance. We analyze the performance of DS based on some performance metrics including the success rate, search time, query hits, query messages, query efficiency, and search efficiency. Numerical results show that DS provides a good tradeoff between search performance and cost. On average, DS performs about 25 times better than flooding and 58 times better than RW in power-law graphs, and about 186 times better than flooding and 120 times better than RW in bimodal topologies.

In this paper, we analyze the impact of straight line routing in large homogeneous multi-hop wireless networks. We estimate the nodal load, which is defined as the number of packets served at a node, induced by straight line routing. For a given total offered load on the network, our analysis shows that the nodal load at each node is a function of the node's Voronoi cell, the node's location in the network, and the traffic pattern specified by the source and destination randomness and straight line routing. In the asymptotic regime, we show that each node's probability that the node serves a packet arriving to the network approaches the product of half the length of the Voronoi cell perimeter and the load density function that a packet goes through the node's location. The density function depends on the traffic pattern generated by straight line routing, and determines where the hot spot is created in the network. Hence, contrary to conventional wisdom, straight line routing can balance the load over the network, depending on the traffic patterns.

In today's Internet, inter-domain route control remains elusive; nevertheless, such control could improve the performance, reliability, and utility of the network for end users and ISPs alike. While researchers have proposed a number of source routing techniques to combat this limitation, there has thus far been no way for independent ASes to ensure that such traffic does not circumvent local traffic policies, nor to accurately determine the correct party to charge for forwarding the traffic. We present Platypus, an authenticated source routing system built around the concept of network capabilities, which allow for accountable, fine-grained path selection by cryptographically attesting to policy compliance at each hop along a source route. Capabilities can be composed to construct routes through multiple ASes and can be delegated to third parties. Platypus caters to the needs of both end users and ISPs: users gain the ability to pool their resources and select routes other than the default, while ISPs maintain control over where, when, and whose packets traverse their networks. We describe the design and implementation of an extensive Platypus policy framework that can be used to address several issues in wide-area routing at both the edge and the core, and evaluate its performance and security. Our results show that incremental deployment of Platypus can achieve immediate gains.

Internet Protocol (IP) traceback is the enabling technology to control Internet crime. In this paper, we present a novel and practical IP traceback system called Flexible Deterministic Packet Marking (FDPM) which provides a defense system with the ability to find out the real sources of attacking packets that traverse through the network. While a number of other traceback schemes exist, FDPM provides innovative features to trace the source of IP packets and can obtain better tracing capability than others. In particular, FDPM adopts a flexible mark length strategy to make it compatible with different network environments; it also adaptively changes its marking rate according to the load of the participating router by a flexible flow-based marking scheme. Evaluations on both simulation and real system implementation demonstrate that FDPM requires a moderately small number of packets to complete the traceback process, adds little additional load to routers, and can trace a large number of sources in one traceback process with low false positive rates. The built-in overload prevention mechanism makes this system capable of achieving a satisfactory traceback result even when the router is heavily loaded. The motivation of this traceback system is from DDoS defense. It has been used to not only trace DDoS attacking packets but also enhance filtering of attacking traffic. It has a wide array of applications for other security systems.
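The dynamic search (DS) abstract above combines flooding for the first few hops with random walks afterwards. The sketch below is a minimal illustration of that two-phase idea on a toy topology; the graph, the two-hop flooding threshold, and the walk length are assumptions, not the paper's parameters.

```java
import java.util.*;

/** Minimal sketch, assuming a small fixed topology: flood for the first
 *  floodHops hops, then continue each frontier node as a random walk. */
public class DynamicSearchSketch {
    static Map<Integer, List<Integer>> g = new HashMap<>();
    static void edge(int a, int b) {
        g.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
        g.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
    }
    public static void main(String[] args) {
        int[][] edges = {{0,1},{0,2},{1,3},{2,4},{3,5},{4,5},{5,6}};
        for (int[] e : edges) edge(e[0], e[1]);
        int source = 0, target = 6, floodHops = 2, walkSteps = 10;
        Random rnd = new Random(1);

        // Phase 1: flood up to floodHops hops from the source.
        Set<Integer> frontier = new HashSet<>(List.of(source)), covered = new HashSet<>(frontier);
        for (int h = 0; h < floodHops; h++) {
            Set<Integer> next = new HashSet<>();
            for (int v : frontier) next.addAll(g.get(v));
            next.removeAll(covered);
            covered.addAll(next);
            frontier = next;
        }
        // Phase 2: each frontier node continues as an independent random walk.
        boolean found = covered.contains(target);
        for (int v : frontier) {
            int cur = v;
            for (int s = 0; s < walkSteps && !found; s++) {
                List<Integer> nbrs = g.get(cur);
                cur = nbrs.get(rnd.nextInt(nbrs.size()));
                found = found || cur == target;
            }
        }
        System.out.println("target found: " + found + ", nodes covered by flooding: " + covered.size());
    }
}
```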
While there are distributed algorithms for the minimum spanning tree (MST) problem, these algorithms require a relatively large number of messages and time, and are fairly involved, making them impractical for resource-constrained networks such as wireless sensor networks. In such networks, a sensor has very limited power, and any algorithm needs to be simple, local, and energy efficient. Motivated by these considerations, we design and analyze a class of simple and local distributed algorithms called Nearest Neighbor Tree (NNT) algorithms for energy-efficient construction of an approximate MST in wireless networks. Assuming that the nodes are uniformly distributed, we show provable bounds on both the quality of the spanning tree produced and the energy needed to construct them. We show that while NNT produces a close approximation to the MST, it consumes asymptotically less energy than the classical message-optimal distributed MST algorithm due to Gallager, Humblet, and Spira. Further, the NNTs can be maintained dynamically with polylogarithmic rearrangements under node insertions/deletions. We also perform extensive simulations, which show that the bounds are much better in practice. Our results, to the best of our knowledge, demonstrate the first tradeoff between the quality of approximation and the energy required for building spanning trees on wireless networks, and motivate similar considerations for other important problems.

In wireless networks, efficient management of mobility is a crucial issue to support mobile users. The Mobile Internet Protocol (MIP) has been proposed to support global mobility in IP networks. Several mobility management strategies have been proposed which aim at reducing the signaling traffic related to the Mobile Terminals (MTs) registration with the Home Agents (HAs) whenever their Care-of-Addresses (CoAs) change. They use different Foreign Agents (FAs) and Gateway FAs (GFAs) hierarchies to concentrate the registration processes. For high-mobility MTs, the Hierarchical MIP (HMIP) and Dynamic HMIP (DHMIP) strategies localize the registration in FAs and GFAs, yielding high mobility signaling. The Multicast HMIP strategy limits the registration processes to the GFAs. For high-mobility MTs, it provides the lowest mobility signaling delay compared to the HMIP and DHMIP approaches. However, it is a resource-consuming strategy unless the MT mobility is frequent. Hence, we propose an analytic model to evaluate the mean signaling delay and the mean bandwidth per call according to the type of MT mobility. In our analysis, the MHMIP outperforms the DHMIP and MIP strategies in almost all the studied cases. The main contribution of this paper is the analytic model that allows the performance evaluation of the mobility management approaches.

In this paper, we consider the problem of fault localization in all-optical networks. We introduce the concept of monitoring cycles (MCs) and monitoring paths (MPs) for unique identification of single-link failures. MCs and MPs are required to pass through one or more monitoring locations. They are constructed such that any single-link failure results in the failure of a unique combination of MCs and MPs that pass through the monitoring location(s). For a network with only one monitoring location, we prove that three-edge connectivity is a necessary and sufficient condition for constructing MCs that uniquely identify any single-link failure in the network. For this case, we formulate the problem of constructing MCs as an integer linear program (ILP). We also develop heuristic approaches for constructing MCs in the presence of one or more monitoring locations. For an arbitrary network (not necessarily three-edge connected), we describe a fault localization technique that uses both MPs and MCs and that employs multiple monitoring locations. We also provide a linear-time algorithm to compute the minimum number of required monitoring locations. Through extensive simulations, we demonstrate the effectiveness of the proposed monitoring technique.

The problem of sharing the cost of multicast transmissions was studied in the past, and two mechanisms, Marginal Cost (MC) and Shapley Value (SH), were proposed to solve it. Although both of them are strategy-proof mechanisms, the distributed protocols implementing them are susceptible to manipulation by autonomous nodes. We propose a distributed Shapley Value mechanism in which the participating nodes do not have incentives to deviate from the mechanism specifications. We show that the proposed mechanism is a faithful implementation of the Shapley Value mechanism. We experimentally investigate the performance of the existing and the proposed cost-sharing mechanisms by implementing and deploying them on PlanetLab. We compare the execution time of the MC and SH mechanisms for the Tamper-Proof and Autonomous Node models. We show that the MC mechanisms generate a smaller revenue compared to the SH mechanisms, and thus, they are not attractive to the content provider. We also study the convergence and scalability of the mechanisms by varying the number of nodes and the number of users per node. We also show that increasing the number of users per node is beneficial for the systems implementing the SH mechanisms from both computational and economic perspectives.
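The Nearest Neighbor Tree abstract above builds an approximate MST by having every node connect to a nearby node of higher rank. The sketch below illustrates that local rule on a handful of randomly placed nodes; the random ranks, coordinates, and the quadratic nearest-neighbour scan are illustrative simplifications, not the paper's distributed implementation.

```java
import java.util.*;

/** Minimal sketch, assuming random ranks and positions: each node connects to
 *  its nearest node of strictly higher rank, yielding a tree rooted at the
 *  highest-ranked node. */
public class NntSketch {
    record Node(int id, double x, double y, double rank) {}
    static double dist(Node a, Node b) { return Math.hypot(a.x() - b.x(), a.y() - b.y()); }
    public static void main(String[] args) {
        Random rnd = new Random(7);
        List<Node> nodes = new ArrayList<>();
        for (int i = 0; i < 8; i++)
            nodes.add(new Node(i, rnd.nextDouble(), rnd.nextDouble(), rnd.nextDouble()));
        for (Node n : nodes) {
            Node parent = null;
            for (Node m : nodes)                               // nearest node of strictly higher rank
                if (m.rank() > n.rank() && (parent == null || dist(n, m) < dist(n, parent)))
                    parent = m;
            System.out.println(parent == null
                    ? "node " + n.id() + " is the root"
                    : "node " + n.id() + " -> node " + parent.id());
        }
    }
}
```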
Atomicity is a highly desirable property for achieving application consistency in service compositions. To achieve atomicity, a service composition should satisfy the atomicity sphere, a structural criterion for the backend processes of the involved services. Existing analysis techniques for the atomicity sphere generally assume complete knowledge of all involved backend processes. Such an assumption is invalid when some service providers do not release all details of their backend processes to service consumers outside the organizations. To address this problem, we propose a process algebraic framework to publish atomicity-equivalent public views from the backend processes. These public views extract relevant task properties and reveal only partial process details that service providers need to expose. Our framework enables the analysis of the atomicity sphere for service compositions using these public views instead of their backend processes. This allows service consumers to choose suitable services such that their composition satisfies the atomicity sphere without disclosing the details of their backend processes. On the practical side, we present algorithms to construct atomicity-equivalent public views and to analyze the atomicity sphere for a service composition. Two case studies from the supply chain and insurance domains are given to evaluate our proposal and demonstrate the applicability of our approach.

Security has become one of the major issues for data communication over wired and wireless networks. Different from the past work on the designs of cryptography algorithms and system infrastructures, we will propose a dynamic routing algorithm that could randomize delivery paths for data transmission. The algorithm is easy to implement and compatible with popular routing protocols, such as the Routing Information Protocol in wired networks and the Destination-Sequenced Distance Vector protocol in wireless networks, without introducing extra control messages. An analytic study on the proposed algorithm is presented, and a series of simulation experiments are conducted to verify the analytic results and to show the capability of the proposed algorithm.

Collusive piracy is the main source of intellectual property violations within the boundary of a P2P network. Paid clients (colluders) may illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the use of open P2P networks for commercial content delivery. We propose a proactive content poisoning scheme to stop colluders and pirates from alleged copyright infringements in P2P file sharing. The basic idea is to detect pirates in a timely manner with identity-based signatures and time-stamped tokens. We developed a new peer authorization protocol (PAP) to distinguish pirates from legitimate clients. Detected pirates will receive poisoned chunks in their repeated attempts. Pirates are thus severely penalized with no chance to download successfully in tolerable time. The scheme stops collusive piracy without hurting legitimate P2P clients by targeting poisoning on detected violators, exclusively. Based on simulation results, we find a 99.9 percent prevention rate in Gnutella, KaZaA, and Freenet. We achieved an 85-98 percent prevention rate on eMule, eDonkey, Morpheus, etc. The scheme is shown less effective in protecting some poison-resilient networks like BitTorrent and Azureus. The advantage lies mainly in minimum delivery cost, higher content availability, and copyright compliance in exploring P2P network resources. Our work opens up the low-cost P2P technology for copyrighted content delivery.

This paper presents both theoretical and practical analyses of the security offered by watermarking and data hiding methods based on spread spectrum. In this context, security is understood as the difficulty of estimating the secret parameters of the embedding function based on the observation of watermarked signals. On the theoretical side, the security is quantified from an information-theoretic point of view by means of the equivocation about the secret parameters. The main results reveal fundamental limits and bounds on security and provide insight into other properties, such as the impact of the embedding parameters and the tradeoff between robustness and security. On the practical side, workable estimators of the secret parameters are proposed and theoretically analyzed for a variety of scenarios, providing a comparison with previous approaches, and showing that the security of many schemes used in practice can be fairly low.
We present a local distributed algorithm that, given a wireless ad hoc network modeled as a unit disk graph U in the plane, constructs a planar power spanner of U whose degree is bounded by k and whose stretch factor is bounded by 1 + (2 sin(π/k))^p, where k ≥ 10 is an integer parameter and p ∈ [2, 5] is the power exponent constant. For the same degree bound k, the stretch factor of our algorithm significantly improves the previous best bounds by Song et al. We show that this bound is near-optimal by proving that the slightly smaller stretch factor of 1 + (2 sin(π/(k+1)))^p is unattainable for the same degree bound k. In contrast to previous algorithms for the problem, the presented algorithm is local. As a consequence, the algorithm is highly scalable and robust. Finally, while the algorithm is efficient and easy to implement in practice, it relies on deep insights on the geometry of unit disk graphs and novel techniques that are of independent interest.
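For concreteness, the bound quoted above can be evaluated numerically. The short program below is an illustrative worked example, not part of the paper: it prints the stretch-factor bound 1 + (2 sin(π/k))^p for a few values of the degree bound k and the power exponent p.

```java
/** Worked evaluation of the stretch-factor bound 1 + (2 sin(pi/k))^p. */
public class SpannerStretchBound {
    static double bound(int k, double p) {
        return 1.0 + Math.pow(2.0 * Math.sin(Math.PI / k), p);
    }
    public static void main(String[] args) {
        int[] ks = {10, 12, 14};
        double[] ps = {2.0, 5.0};
        for (int k : ks)
            for (double p : ps)
                System.out.printf("k = %d, p = %.0f  ->  stretch <= %.4f%n", k, p, bound(k, p));
    }
}
```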
As the Internet takes an increasingly central role in our communications infrastructure, the slow convergence of routing protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP networks, we present a new recovery scheme called Multiple Routing Configurations (MRC). Our proposed scheme guarantees recovery in all single failure scenarios, using a single mechanism to handle both link and node failures, and without knowing the root cause of the failure. MRC is strictly connectionless, and assumes only destination based hop-by-hop forwarding. MRC is based on keeping additional routing information in the routers, and allows packet forwarding to continue on an alternative output link immediately after the detection of a failure. It can be implemented with only minor changes to existing solutions. In this paper we present MRC, and analyze its performance with respect to scalability, backup path lengths, and load distribution after a failure. We also show how an estimate of the traffic demands in the network can be used to improve the distribution of the recovered traffic, and thus reduce the chances of congestion when MRC is used.
Many DAG scheduling algorithms generate schedules that require a prohibitively large number of processors. To address this problem, we propose a generic algorithm, SC, to minimize the processor requirement of any given valid schedule. SC preserves the schedule length of the original schedule and reduces processor count by merging processor schedules and removing redundant duplicate tasks. To the best of our knowledge, this is the first algorithm to address this highly unexplored aspect of DAG scheduling. On average, SC reduced the processor requirement by 91%, 82%, and 72% for schedules generated by the PLW, TCSD, and CPFD algorithms, respectively. The SC algorithm has a low complexity (O(|N|^3)) compared to most duplication-based algorithms. Moreover, it decouples processor economization from the schedule length minimization problem. To take advantage of these features of SC, we also propose a scheduling algorithm, SDS, having the same time complexity as SC. Our experiments demonstrate that schedules generated by SDS are only 3% longer than those of CPFD (O(|N|^4)), one of the best algorithms in that respect. SDS and SC together form a two-stage scheduling algorithm that produces schedules with high quality and low processor requirement, and has lower complexity than the comparable algorithms that produce similar high-quality results.
Embedded control networks commonly use checksums to detect data transmission errors. However, design decisions about which checksum to use are difficult because of a lack of information about the relative effectiveness of available options. We study the error detection effectiveness of the following commonly used checksum computations for embedded networks: exclusive or (XOR), two's complement addition, one's complement addition, Fletcher checksum, Adler checksum, and cyclic redundancy codes (CRC). A study of error detection capabilities for random independent bit errors and burst errors reveals that XOR, two's complement addition, and Adler checksums are suboptimal for typical application use. Instead, one's complement addition should be used for applications willing to sacrifice error detection effectiveness to reduce compute cost, Fletcher checksum for applications looking for a balance of error detection and compute cost, and CRCs for applications willing to pay a higher compute cost for further improved error detection.
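Two of the options compared above are easy to show in code. The sketch below gives a 16-bit one's complement addition checksum and a bitwise CRC-32 (reflected polynomial 0xEDB88320, checked against java.util.zip.CRC32); the input frame is illustrative, and the CRC polynomial is the common IEEE 802.3 one rather than anything specific to the paper.

```java
/** Minimal sketches of two checksum families: one's complement addition and CRC-32. */
public class ChecksumSketch {
    static int onesComplement16(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? data[i + 1] & 0xFF : 0;
            sum += (hi << 8) | lo;
        }
        while ((sum >> 16) != 0) sum = (sum & 0xFFFF) + (sum >> 16);  // fold carries back in
        return (int) (~sum & 0xFFFF);
    }
    static long crc32(byte[] data) {
        long crc = 0xFFFFFFFFL;
        for (byte b : data) {
            crc ^= (b & 0xFF);
            for (int i = 0; i < 8; i++)
                crc = ((crc & 1) != 0) ? (crc >>> 1) ^ 0xEDB88320L : crc >>> 1;
        }
        return (~crc) & 0xFFFFFFFFL;
    }
    public static void main(String[] args) {
        byte[] frame = "embedded network frame".getBytes();      // illustrative payload
        System.out.printf("one's complement checksum: 0x%04X%n", onesComplement16(frame));
        System.out.printf("CRC-32 (bitwise):          0x%08X%n", crc32(frame));
        java.util.zip.CRC32 ref = new java.util.zip.CRC32();     // cross-check against the JDK
        ref.update(frame);
        System.out.printf("java.util.zip.CRC32:       0x%08X%n", ref.getValue());
    }
}
```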
In this paper, we consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action because normal network congestion can produce the same effect. Modern networks routinely drop packets when the load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly focused attacks. We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions. We have tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate network behavior.
We study how the spread of computer viruses, worms, and other self-replicating malware is affected by the logical topology of the network over which they propagate. We consider a model in which each host can be in one of 3 possible states - susceptible, infected or removed (cured and no longer susceptible to infection). We characterize how the size of the population that eventually becomes infected depends on the network topology. Specifically, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, and the initial infected population is small, then the final infected population is also small in a sense that can be made precise. Conversely, if this ratio is smaller than the spectral radius, then we show in some graph models of practical interest (including power law random graphs) that the final infected population is large. These results yield insights into what the critical parameters are in determining virus spread in networks.
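The threshold condition above compares the cure-to-infection rate ratio with the spectral radius of the network's adjacency matrix. The sketch below estimates the spectral radius by power iteration on a small illustrative topology and reports which side of the threshold the assumed rates fall on; the graph and the rates are assumptions.

```java
/** Minimal sketch, assuming a 4-node ring and illustrative rates: compare the
 *  cure/infection ratio with the spectral radius estimated by power iteration. */
public class EpidemicThresholdSketch {
    static double spectralRadius(double[][] a, int iters) {
        int n = a.length;
        double[] x = new double[n];
        java.util.Arrays.fill(x, 1.0);
        double lambda = 0;
        for (int it = 0; it < iters; it++) {
            double[] y = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) y[i] += a[i][j] * x[j];
            double norm = 0;
            for (double v : y) norm = Math.max(norm, Math.abs(v));
            lambda = norm;
            for (int i = 0; i < n; i++) x[i] = y[i] / norm;
        }
        return lambda;
    }
    public static void main(String[] args) {
        double[][] adj = {              // a 4-node ring
            {0, 1, 0, 1},
            {1, 0, 1, 0},
            {0, 1, 0, 1},
            {1, 0, 1, 0}
        };
        double cureRate = 0.5, infectionRate = 0.2;           // illustrative
        double rho = spectralRadius(adj, 100);
        double ratio = cureRate / infectionRate;
        System.out.printf("spectral radius ~ %.3f, cure/infection ratio = %.3f%n", rho, ratio);
        System.out.println(ratio > rho ? "ratio above threshold: outbreak expected to die out"
                                       : "ratio below threshold: infection may spread widely");
    }
}
```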
A recurrent problem when designing distributed applications is to search for a node with a known property. File searching in peer-to-peer (P2P) applications, resource discovery in service-oriented architectures (SOAs), and path discovery in routing can all be cast as a search problem. Random walk-based search algorithms are often suggested for tackling the search problem, especially in very dynamic systems, like mobile wireless networks. The cost and the effectiveness of a random walk-based search algorithm are measured by the expected number of transmissions required before hitting the target. Hence, having a low hitting time is a critical goal. This paper studies the effect of biasing the random walk toward the target on the hitting time. For a walk running over a network with uniform node distribution, a simple upper bound that connects the hitting time to the bias level is obtained. The key result is that even a modest bias level is able to reduce the hitting time significantly. This paper also proposes a search protocol for mobile wireless networks, whose results are interpreted in the light of the theoretical study. The proposed solution is for unstructured wireless mobile networks.
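The effect described above, that even a modest bias sharply reduces the hitting time, can be reproduced with a toy simulation. The sketch below runs a walk on a ring of nodes that steps toward the target with a given probability and uniformly otherwise; the ring topology, sizes, and bias levels are assumptions made for illustration.

```java
import java.util.*;

/** Minimal sketch, assuming a ring topology: mean hitting time of a walk that
 *  steps toward the target with probability `bias` and uniformly otherwise. */
public class BiasedWalkSketch {
    static int hit(int n, double bias, Random rnd) {
        int pos = 0, target = n / 2, steps = 0;
        while (pos != target) {
            int towards = Math.floorMod(target - pos, n) <= n / 2 ? 1 : -1;   // shorter direction
            int step = rnd.nextDouble() < bias ? towards : (rnd.nextBoolean() ? 1 : -1);
            pos = Math.floorMod(pos + step, n);
            steps++;
        }
        return steps;
    }
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int n = 200, trials = 2000;
        for (double bias : new double[]{0.0, 0.1, 0.25, 0.5}) {
            long total = 0;
            for (int t = 0; t < trials; t++) total += hit(n, bias, rnd);
            System.out.printf("bias %.2f -> mean hitting time %.1f steps%n", bias, (double) total / trials);
        }
    }
}
```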
For target tracking applications, wireless sensor nodes provide accurate information since they can be deployed and operated near the phenomenon. These sensing devices have the opportunity of collaboration among themselves to improve the target localization and tracking accuracies. An energy-efficient collaborative target tracking paradigm is developed for wireless sensor networks (WSNs). A mutual-information-based sensor selection (MISS) algorithm is adopted for participation in the fusion process. MISS allows the sensor nodes with the highest mutual information about the target state to transmit data so that the energy consumption is reduced while the desired target position estimation accuracy is met. In addition, a novel approach to energy savings in WSNs is devised in the information-controlled transmission power (ICTP) adjustment, where nodes with more information use higher transmission powers than those that are less informative to share their target state information with the neighboring nodes. Simulations demonstrate the performance gains offered by MISS and ICTP in terms of power consumption and target localization accuracy.
This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models of observed trends in the sensor data and transmit the parameters of the model to sensors. Sensors check sensed data with model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven push approach is energy-efficient, while ensuring that anomalous data trends are never missed. In addition to supporting queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at sensors. PRESTO can adapt model and system parameters to data and query dynamics to further extract energy savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show that in a temperature monitoring application, PRESTO yields one to two orders of magnitude reduction in energy requirements over on-demand, proactive or model-driven pull approaches. PRESTO also results in an order of magnitude reduction in query latency in a 1% duty-cycled five hop sensor network over a system that forwards all queries to remote sensor nodes.

Non-geostationary (NGEO) satellite IP networks offer shorter propagation delays than their geostationary counterparts. They are seen as an integral part of next generation ubiquitous communication systems. Given the non-uniform distribution of users in satellite footprints, due to several geographical and/or climatic constraints, some Inter-Satellite Links (ISLs) are expected to be heavily loaded with data packets while others remain underutilized. Such a scenario obviously leads to congestion of the heavily loaded links. It ultimately results in buffer overflows, higher queuing delays, and significant packet drops. To guarantee a better distribution of traffic among satellites, this paper proposes an explicit exchange of information on congestion status among neighboring satellites. Indeed, a satellite notifies its congestion status to its neighboring satellites. When it is about to get congested, it requests its neighboring satellites to decrease their data forwarding rates by sending them a self status notification signaling message. In response, the neighboring satellites search for less congested paths that do not include the satellite in question and communicate a portion of data, primarily destined to the satellite, via the retrieved paths. This operation avoids both congestion and packet drops at the satellite. It also ensures a better distribution of traffic over the entire satellite constellation. The proposed scheme is dubbed the "Explicit Load Balancing" (ELB) scheme. While the multi-path routing concept of ELB has many advantages, it may lead to persistent packet reordering. In case of connection-oriented protocols, this phenomenon results in unnecessary shrinkage of the data transmission rate. A solution to this issue is also incorporated in the design of ELB. The interactions of ELB with mechanisms that provide different QoS by differentiating traffic (e.g., Differentiated Services) are also discussed. The good performance of ELB, in terms of better traffic distribution, higher throughput, and lower packet drops, is verified via a set of simulations.

We consider the delay properties of one-hop networks with general interference constraints and multiple traffic streams with time-correlated arrivals. We first treat the case when arrivals are modulated by independent finite state Markov chains.
We show that the well known maximal scheduling algorithm achieves average delay that grows at most logarithmically in the largest number of interferers at any link. Further, in the important special case when each Markov process has at most two states (such as bursty ON/OFF sources), we prove that average delay is independent of the number of nodes and links in the network, and hence is order-optimal. We provide tight delay bounds in terms of the individual auto-correlation parameters of the traffic sources. These are perhaps the first order-optimal delay results for controlled queueing networks that explicitly account for such statistical information. Our analysis treats cases both with and without flow control.
We show that even though mobile networks are highly unpredictable when viewed at the individual node scale, the end-to-end quality-of-service (QoS) metrics can be stationary when the mobile network is viewed in the aggregate. We define the coherence time as the maximum duration for which the end-to-end QoS metric remains roughly constant, and the spreading period as the minimum duration required to spread QoS information to all the nodes. We show that if the coherence time is greater than the spreading period, the end-to-end QoS metric can be tracked. We focus on the energy consumption as the end-to-end QoS metric, and describe a novel method by which an energy map can be constructed and refined in the joint memory of the mobile nodes. Finally, we show how energy maps can be utilized by an application that aims to minimize a node's total energy consumption over its near-future trajectory.
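The energy-map idea above maintains residual-energy information in the joint memory of the mobile nodes. The sketch below is a much-simplified illustration: each node keeps a map of (node id, last known energy, timestamp) and, when two nodes meet, the maps are merged by keeping the fresher entry for every node; the field names, units, and the meeting pattern are assumptions.

```java
import java.util.*;

/** Minimal sketch, assuming freshest-entry-wins merging of per-node energy maps. */
public class EnergyMapSketch {
    record Entry(double energyJoules, long timestamp) {}
    static Map<Integer, Entry> merge(Map<Integer, Entry> a, Map<Integer, Entry> b) {
        Map<Integer, Entry> merged = new HashMap<>(a);
        b.forEach((node, entry) ->
            merged.merge(node, entry, (x, y) -> x.timestamp() >= y.timestamp() ? x : y));
        return merged;
    }
    public static void main(String[] args) {
        Map<Integer, Entry> mapAtNode1 = new HashMap<>(Map.of(
                1, new Entry(8.2, 100), 2, new Entry(5.0, 90)));
        Map<Integer, Entry> mapAtNode2 = new HashMap<>(Map.of(
                2, new Entry(4.6, 120), 3, new Entry(7.1, 110)));
        System.out.println("merged energy map: " + merge(mapAtNode1, mapAtNode2));
    }
}
```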
In mobile ad hoc networks (MANETs), every node overhears every data transmission occurring in its vicinity and thus consumes energy unnecessarily. In the IEEE 802.11 Power Saving Mechanism (PSM), a packet must be advertised before it is actually transmitted. When a node receives an advertised packet that is not destined to itself, it switches to a low-power sleep state during the data transmission period, and thus avoids overhearing and conserves energy. However, since some MANET routing protocols such as Dynamic Source Routing (DSR) collect route information via overhearing, they would suffer if they are used in combination with 802.11 PSM. Allowing no overhearing may critically deteriorate the performance of the underlying routing protocol, while unconditional overhearing may offset the advantage of using PSM. This paper proposes a new communication mechanism, called RandomCast, via which a sender can specify the desired level of overhearing, making a prudent balance between energy and routing performance. In addition, it reduces redundant rebroadcasts for a broadcast packet and thus saves more energy. Extensive simulation using ns-2 shows that RandomCast is highly energy-efficient compared to conventional 802.11 as well as 802.11 PSM-based schemes, in terms of total energy consumption, energy goodput, and energy balance.

We present Quiver, a system that coordinates service proxies placed at the "edge" of the Internet to serve distributed clients accessing a service involving mutable objects. Quiver enables these proxies to perform consistent accesses to shared objects by migrating the objects to the proxies performing operations on those objects. These migrations dramatically improve performance when operations involving an object exhibit geographic locality, since migrating this object into the vicinity of proxies hosting these operations will benefit all such operations. This system reduces the workload on the server: it performs all operations in the proxies themselves, and the operations are performed in a First-In-First-Out process. It handles two-process serializability and strict serializability for durability in the consistent object sharing. Other workloads benefit from Quiver by dispersing the computation load across the proxies and saving the costs of sending operation parameters over the wide area when these are large. Quiver also supports optimizations for single-object reads that do not involve migrating the object. We detail the protocols for implementing object operations and for accommodating the addition, involuntary disconnection, and voluntary departure of proxies. Finally, we discuss the use of Quiver to build an e-commerce application and a distributed network traffic modeling service.

In this paper, we study the network capacity problem under a given network lifetime requirement. Specifically, for a wireless sensor network where each node is provisioned with an initial energy, if all nodes are required to live up to a certain lifetime criterion, we consider an overarching problem that encompasses both performance metrics. Since the objective of maximizing the sum of rates of all the nodes in the network can lead to a severe bias in rate allocation among the nodes, we advocate the use of lexicographical max-min (LMM) rate allocation. To calculate the LMM rate allocation vector, we develop a polynomial-time algorithm by exploiting the parametric analysis (PA) technique from linear programming (LP), which we call serial LP with Parametric Analysis (SLP-PA). We show that the SLP-PA can also be employed to address the LMM node lifetime problem much more efficiently than a state-of-the-art algorithm proposed in the literature. More important, we show that there exists an elegant duality relationship between the LMM rate allocation problem and the LMM node lifetime problem. Therefore, it is sufficient to solve only one of the two problems. Important insights can be obtained by inferring duality results for the other problem.

On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In this paper, we propose proactively disseminating the broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
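The cache-update abstract above evicts cached routes that contain a broken link. The sketch below is a much-simplified, centralized illustration of that eviction step: it scans each node's cached source routes and drops any route traversing the failed link; the node names, routes, and the use of a plain map in place of the paper's cache table are assumptions.

```java
import java.util.*;

/** Minimal sketch, assuming a centralized view of two nodes' route caches:
 *  evict every cached route that traverses a broken link. */
public class CacheUpdateSketch {
    record Link(String a, String b) {}
    public static void main(String[] args) {
        // cached source routes at two nodes (illustrative)
        Map<String, List<List<String>>> caches = new HashMap<>();
        caches.put("N1", List.of(List.of("N1", "A", "B", "D"), List.of("N1", "C", "D")));
        caches.put("N2", List.of(List.of("N2", "A", "B", "E")));

        Link broken = new Link("A", "B");                       // link failure detected
        for (var entry : caches.entrySet()) {
            List<List<String>> kept = new ArrayList<>();
            for (List<String> route : entry.getValue()) {
                boolean hasLink = false;
                for (int i = 0; i + 1 < route.size(); i++)
                    if (route.get(i).equals(broken.a()) && route.get(i + 1).equals(broken.b()))
                        hasLink = true;
                if (hasLink) System.out.println(entry.getKey() + " evicts stale route " + route);
                else kept.add(route);
            }
            entry.setValue(kept);
        }
        System.out.println("remaining caches: " + caches);
    }
}
```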
Intrusion detection in a Wireless Sensor Network (WSN) is of practical interest in many applications, such as detecting an intruder in a battlefield. Intrusion detection is defined as a mechanism for a WSN to detect the existence of inappropriate, incorrect, or anomalous moving attackers. In this paper, we consider this issue according to heterogeneous WSN models. Furthermore, we consider two sensing detection models: single-sensing detection and multiple-sensing detection. Our simulation results show the advantage of multiple-sensor heterogeneous WSNs.
In recent years, the exponential growth of Internet users with increased bandwidth requirements has led to the emergence of the next generation of IP routers. Distributed architecture is one of the promising trends providing petabit routers with a large switching capacity and high-speed interfaces. Distributed routers are designed with an optical switch fabric interconnecting line and control cards. Computing and memory resources are available on both control and line cards to perform routing and forwarding tasks. This new hardware architecture is not efficiently utilized by the traditional software models, where a single control card is responsible for all routing and management operations. This article presents a distributed architecture set up around a distributed and scalable routing table manager. The routing table manager plays an extremely critical role by managing routing information and, in particular, a forwarding information table. This architecture also provides improvements in robustness and resiliency.
The probabilistic packet marking (PPM) algorithm is a promising way to discover the Internet map or an attack graph that the attack packets traversed during a distributed denial-of-service attack. However, the PPM algorithm is not perfect, as its termination condition is not well defined in the literature. More importantly, without a proper termination condition, the attack graph constructed by the PPM algorithm would be wrong. In this work, we provide a precise termination condition for the PPM algorithm and name the new algorithm the Rectified PPM (RPPM) algorithm. The most significant merit of the RPPM algorithm is that, when the algorithm terminates, it guarantees that the constructed attack graph is correct. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm. We carry out simulations on the RPPM algorithm and show that it can guarantee the correctness of the constructed attack graph under 1) different probabilities that a router marks the attack packets and 2) different structures of the network graph.
This work was motivated by the need to achieve low latency in an input-queued, centrally scheduled cell switch for high-performance computing applications; specifically, the aim is to reduce the latency incurred between issuance of a request and arrival of the corresponding grant. We introduce a speculative transmission scheme to significantly reduce the average latency by allowing cells to proceed without waiting for a grant. It operates in conjunction with any centralized matching algorithm to achieve a high maximum utilization. An analytical model is presented to investigate the efficiency of the speculative transmission scheme employed in a non-blocking N*NR input-queued crossbar switch with R receivers per output. The results demonstrate that the latency can be almost entirely eliminated for loads up to 50%. Our simulations confirm the analytical results.
Measurement and estimation of packet loss characteristics are challenging due to the relatively rare occurrence and typically short duration of packet loss episodes. While active probe tools are commonly used to measure packet loss on end-to-end paths, there has been little analysis of the accuracy of these tools or their impact on the network. The objective of our study is to understand how to measure packet loss episodes accurately, with a specified level of confidence, using end-to-end probes. We begin by testing the capability of standard Poisson-modulated end-to-end measurements of loss in a controlled laboratory environment using IP routers and commodity end hosts. Our tests show that loss characteristics reported from such Poisson-modulated probe tools can be quite inaccurate over a range of traffic conditions. Motivated by these observations, we introduce a new algorithm for packet loss measurement that is designed to overcome the deficiencies in standard Poisson-based tools. Specifically, our method entails probe experiments that follow a geometric distribution to 1) enable an explicit trade-off between accuracy and impact on the network and 2) enable more accurate measurements than standard Poisson probing at the same rate. We evaluate the capabilities of our methodology experimentally by developing and implementing a prototype tool, called BADABING. We show that BADABING reports loss characteristics far more accurately than traditional loss measurement tools. The experiments demonstrate the trade-offs between impact on the network and measurement accuracy.
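The geometric probing idea above can be illustrated with a small sketch. The Java fragment below is a minimal, hypothetical illustration (class and parameter names are our own, not taken from the BADABING tool): inter-probe gaps are drawn from a geometric distribution over discretized time slots, which is what lets the probe rate be traded off against impact on the network.

```java
import java.util.Random;

/** Minimal sketch of geometric probe scheduling (illustrative only). */
public class GeometricProber {
    private final Random rng = new Random();
    private final double p;          // per-slot probe probability (assumed value)
    private final long slotMicros;   // discretized time-slot length

    public GeometricProber(double p, long slotMicros) {
        this.p = p;
        this.slotMicros = slotMicros;
    }

    /** Number of slots until the next probe, geometrically distributed. */
    private long nextGapSlots() {
        long gap = 1;
        while (rng.nextDouble() > p) gap++;
        return gap;
    }

    /** Emit probe send times (in microseconds) for a measurement run. */
    public void run(int probes) {
        long t = 0;
        for (int i = 0; i < probes; i++) {
            t += nextGapSlots() * slotMicros;
            System.out.println("send probe at t=" + t + "us");
            // a real tool would time-stamp replies here and estimate
            // loss-episode frequency and duration from probe outcomes
        }
    }

    public static void main(String[] args) {
        new GeometricProber(0.1, 5000).run(20);  // roughly one probe per 50 ms on average
    }
}
```

Lowering p reduces the load the measurement places on the path at the cost of coarser estimates, which is exactly the trade-off the abstract describes.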
High cohesion is a desirable property in software systems to achieve reusability and maintainability. In this project, measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. In existing approaches the cohesion is calculated from structural information, for example method attributes and references. In the conceptual cohesion of classes, we also use the unstructured information embedded in the source code, such as comments and identifiers; to retrieve the unstructured information from the source code, Latent Semantic Indexing is used. A large case study on three open source software systems is presented, which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. In our project we aim to achieve high cohesion and to predict faults in Object-Oriented systems.
This paper presents an idea for real-time acquisition of 3-D surface data by a specially coded vision system. Structured light vision systems have been used successfully for the measurement of 3-D surfaces, but there is a limitation in such schemes: tens of pictures are captured to recover a 3-D scene. To achieve 3-D measurement for a dynamic scene, the data acquisition must be performed with only a single image. A principle of uniquely color-encoded pattern projection is proposed to design a color matrix for improving the reconstruction efficiency. The matrix is produced by a special code sequence and a number of state transitions, and a color projector is controlled by a computer to generate the desired color patterns in the scene. The unique indexing of the light codes is crucial here for color projection, since it is essential that each light grid be uniquely identified by incorporating local neighborhoods, so that 3-D reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe such a vision processing method for fast 3-D data acquisition.
For a wireless sensor network where each node is provisioned with an initial energy, the objective of maximizing the sum of rates of all the nodes in the network can lead to a severe bias in rate allocation among the nodes; we therefore advocate the use of lexicographical max-min (LMM) rate allocation. To calculate the LMM rate allocation vector, we develop a polynomial-time algorithm by exploiting the parametric analysis (PA) technique from linear programming (LP), which we call serial LP with Parametric Analysis (SLP-PA). We show that the SLP-PA can also be employed to address the LMM node lifetime problem much more efficiently than a state-of-the-art algorithm proposed in the literature. More important, we show that there exists an elegant duality relationship between the LMM rate allocation problem and the LMM node lifetime problem; therefore, it is sufficient to solve only one of the two problems, and important insights can be obtained by inferring duality results for the other problem. We also consider an overarching problem that encompasses both performance metrics: in particular, if all nodes are required to live up to a certain lifetime criterion, we study the network capacity problem under the given network lifetime requirement.
In this paper, an efficient algorithm is presented for the computation of grayscale morphological operations with arbitrary 2-D flat structuring elements (S.E.s). The required computing time is independent of the image content and of the number of gray levels used. So far, filtering using multiple S.E.s has always been done by performing the operator for each size and shape of the S.E. separately. With our method, filtering with multiple S.E.s can be performed by a single operator for a slightly reduced computational cost per size or shape, which makes this method more suitable for use in granulometries, dilation-erosion scale spaces, and template matching using the hit-or-miss transform. The discussion focuses on erosions and dilations, from which other transformations can be derived. The method always outperforms the only existing comparable method, which was proposed in the work by Van Droogenbroeck and Talbot, by a factor between 3.5 and 35.1, depending on the image type and shape of the S.E. Practical experimental performance is provided to analyze the efficiency of the proposed methods.
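To make the operation concrete, the sketch below implements a straightforward, non-optimized grayscale erosion with an arbitrary flat structuring element. It is only a baseline for comparison, not the fast algorithm discussed above; the class name, anchor convention, and border handling are our own assumptions.

```java
/** Naive flat grayscale erosion: baseline sketch, not the optimized algorithm. */
public class FlatErosion {

    /** se[dy][dx] == true marks pixels belonging to the structuring element. */
    public static int[][] erode(int[][] img, boolean[][] se, int anchorY, int anchorX) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int min = Integer.MAX_VALUE;
                for (int dy = 0; dy < se.length; dy++) {
                    for (int dx = 0; dx < se[0].length; dx++) {
                        if (!se[dy][dx]) continue;
                        int yy = y + dy - anchorY, xx = x + dx - anchorX;
                        if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue; // ignore outside pixels
                        min = Math.min(min, img[yy][xx]);
                    }
                }
                out[y][x] = min;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {{9, 9, 9}, {9, 1, 9}, {9, 9, 9}};
        boolean[][] cross = {{false, true, false}, {true, true, true}, {false, true, false}};
        System.out.println(java.util.Arrays.deepToString(erode(img, cross, 1, 1)));
    }
}
```

The dual dilation is obtained by replacing the minimum with a maximum; running this baseline once per S.E. size is exactly the repeated cost the paper's single-operator method is designed to avoid.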
AAC. Thus. manages to reduce the latency considerably in answering LBSQ s. Efficient processing of LBSQ s is of critical importance with the ever-increasing deployment and use of mobile technologies. In this paper. This model leads to the development of an automatic worm containment strategy that prevents the spread of a worm beyond its early stage.11-based networks have been able to provide a certain level of quality of service (QoS) by the means of service differentiation. such as Code Red. Self-propagating codes. a significant challenge is presented by wireless broadcasting environments. first is the decrypt only when necessary (DOWN) policy. such evaluation becomes even more difficult. BRuIT. We discuss the feasibility of extending the DOWN policy to various asymmetric and symmetric cryptographic primitives. Such an evaluation would. Through simulations. we compare the accuracy of the estimation we propose to the estimation performed by other state-of-the-art QoS protocols. IEEE 802. They are as follows. and it is validated through simulations and real trace data to be non intrusive. . which can substantially improve the ability of low-cost to protect the secrets. no mechanism or method has been standardized to accurately evaluate the amount of resources remaining on a given channel. We show that LBSQ s has certain unique characteristics that the traditional spatial query processing in centralized databases does not address. though maintaining high scalability and accuracy.11-based ad hoc networks. due to the IEEE 802. This is done by the following two ways.11e amendment. have drawn significant attention due to their enormously adverse impact on the Internet. however. and we illustrate the appeal of our technique through extensive simulation results. called worms. Specifically. and Slammer.Location-based spatial queries (LBSQ s) refer to spatial queries whose answers rely on the location of the inquirer. The second is cryptographic authentication strategies which employ only symmetric cryptographic primitives. and QoS-AODV. However. In this paper. which enables us to process queries without delay at a mobile host by using query results cached in its neighboring mobile peers. we are able to determine whether the worm spread will eventually stop. In multihop ad hoc networks. We demonstrate the feasibility of our approach through a probabilistic analysis. for uniform scanning worms. The model is developed for uniform scanning worms and then extended to preference scanning worms. be a good asset for bandwidth-constrained applications. we present a (stochastic) branching process model for characterizing the propagation of Internet worms. Our automatic worm containment schemes effectively contain both uniform scanning worms and local preference scanning worms. there is great interest in the research community in modeling the spread of worms and in providing adequate defense mechanisms against them. the estimation of the available bandwidth still represents one of the main issues in this field. Our approach is based on peer-to-peer sharing. Consequently. For example. In this paper. we propose an improved mechanism to estimate the available bandwidth in IEEE 802. despite the various contributions around this research topic. Nimda. We then extend our results to contain uniform scanning worms. Since 2005. The DOWN policy relies on the ability to operate with fractional parts of secrets. In this project we present a simple way to resolve a complicated network security. 
Wireless broadcasting environments, in particular, have excellent scalability but often exhibit high-latency database access. The cryptographic authentication strategies mentioned above are based on novel ID-based key pre-distribution schemes that demand very low complexity of operations to be performed by the secure coprocessors (ScP) and can take good advantage of the DOWN policy.
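The uniform-scanning worm model described earlier can be illustrated with a toy Monte Carlo simulation. The following Java sketch is an assumption-laden illustration (address-space size, number of vulnerable hosts, and scan cap are invented values, not from the paper): capping the scans each infected host may issue pushes the mean number of new infections per host below one, so the branching process dies out and the spread stops.

```java
import java.util.Random;

/** Toy simulation of a uniform-scanning worm with a per-host scan cap (containment). */
public class WormSim {
    public static void main(String[] args) {
        final int addressSpace = 1_000_000;   // total scannable addresses (assumed)
        final int vulnerable   = 10_000;      // vulnerable hosts, placed at low addresses (assumed)
        final int scanCap      = 50;          // containment: 50 scans * 1% hit rate => mean offspring 0.5 < 1
        Random rng = new Random(42);

        boolean[] infected = new boolean[vulnerable];
        int[] scansUsed = new int[vulnerable];
        infected[0] = true;                   // patient zero
        int infectedCount = 1, totalScans = 0;

        // Each round, every infected host with remaining budget probes one random address.
        for (int round = 0; round < 100_000; round++) {
            boolean progress = false;
            for (int h = 0; h < vulnerable; h++) {
                if (!infected[h] || scansUsed[h] >= scanCap) continue;
                scansUsed[h]++;
                totalScans++;
                progress = true;
                int target = rng.nextInt(addressSpace);
                if (target < vulnerable && !infected[target]) {
                    infected[target] = true;
                    infectedCount++;
                }
            }
            if (!progress) break;             // all scan budgets exhausted: the spread has stopped
        }
        System.out.println("infected=" + infectedCount + " scans=" + totalScans);
    }
}
```

Raising the scan cap above the critical value (here, above roughly 100) makes the same simulation go supercritical, matching the branching-process intuition of the abstract.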
we consider the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity.000 to 10. which is shown via simulations to perform close to the approximation algorithm. changing access patterns. Our approach relies on analyzing packet header data in order to provide indications of Possible An efficient and distributed scheme for file mapping or file lookup is critical in decentralizing metadata management within a group of metadata servers. port numbers and the number of flows. At the level of the communication system. anomalies and take action to suppress them before they have had much time to propagate across the network. Our approach passively monitors network traffic at regular intervals and analyzes it to find any abnormalities in the aggregated traffic. We simulate our distributed algorithm using a network simulator (ns2). memory and CPU time) of each node computer are available to multiple software components. The approximation algorithm is amenable to localized distributed implementation. In the case of bandwidth attacks. here the technique used called HIERARCHICAL BLOOM FILTER ARRAYS (HBA) to map filenames to the metadata servers holding their metadata. The above optimization problem is known to be NP-hard. Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better coordination of application subsystems compared to federated systems. HBA is reducing metadata operation by using the single metadata architecture instead of 16 metadata server. In this paper. the DECOS integrated architecture encapsulates application subsystems and their constituting software components.e. Similarly. we present a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least onefourth (one-half for uniform-size data items) of the optimal benefit. latency. The motivation for this work came from a need to reduce the likelihood that an attacker may hijack the campus machines to stage an attack on a third party. it may be possible to see whether the current traffic is behaving in a similar (i. For this reason.000 nodes (or super clusters) and with the amount of data in the petabyte scale or higher. it is important to ensure that the software components do not interfere through the use of these shared resources. Flash crowds could be observed through sudden increase in traffic volume to a single destination. designing efficient distributed caching algorithms is non-trivial when network nodes have limited memory. it could become possible to detect the attacks. the computational resources (for example. The other array is used to maintain the destination metadata information of all files. tools were available. Sudden increase of traffic on a certain port could signify the onset of an anomaly such as worm propagation. However. we study the utility of observing packet header data of outgoing traffic. correlated) manner. The first one with low accuracy and used to capture the destination metadata server information of frequently accessed files. A campus may want to prevent or limit misuse of its machines in staging attacks. and DoS attacks. such as destination addresses. 
and demonstrate that it significantly outperforms another existing caching technique (by Yin and Cao ) in all important performance metrics.Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the access latency and bandwidth usage.. Due to encapsulation. In this article. An integrated architecture shares the system’s communication resources by using a single physical network for exchanging messages of multiple application subsystems. virtual networks on top of an underlying time-triggered physical network exhibit predefined temporal properties (that is. In order to support a seamless system integration without unintended side effects in such an integrated architecture. in order to detect attacks/anomalies originating from the campus at the edge of a campus. the temporal properties of messages sent by a software component are independent from the behavior of other software components. Defining benefit as the reduction in total access cost. and latency jitter). the usage of network may be increased and abnormalities may show up in traffic volume. we study the possibilities of trafficanalysis based mechanisms for attack and anomaly detection. The Bloom filter arrays with different levels of accuracies are used on each metadata server. By observing the traffic and correlating it to previous states of traffic. In particular. bandwidth. The network traffic could look different because of flash crowds. infrastructure problems such as router failures. The performance differential is particularly large in more challenging scenarios. in particular from those within other application subsystems . Detecting anomalies/attacks close to the source allows us to limit the potential damage close to the attacking machines. Attack propagation could be slowed through early detection. Simulation results show our HBA design to be highly effective and efficient in improving the performance and scalability of file systems in clusters with 1. such as higher access frequency and smaller memory. and possibly limit the liability from such attacks. Traffic monitoring close to the source may enable the network operator quicker identification of potential anomalies and allow better control of administrative domain’s resources. Our distributed algorithm naturally extends to networks with mobile nodes.
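The Bloom-filter-array idea behind the HBA scheme above can be sketched in a few lines of Java. This is a minimal illustration under our own assumptions (class names, filter sizes, and hash choices are invented): each metadata server keeps a Bloom filter of the filenames it owns, a client tests a filename against every filter to find candidate servers, and false positives are tolerated because a second, more accurate level or a fallback broadcast resolves them.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

/** Minimal Bloom-filter array for mapping filenames to metadata servers (sketch). */
public class BloomArray {
    static class BloomFilter {
        private final BitSet bits;
        private final int size, hashes;
        BloomFilter(int size, int hashes) { this.bits = new BitSet(size); this.size = size; this.hashes = hashes; }
        private int hash(String key, int seed) {
            int h = seed;
            for (int i = 0; i < key.length(); i++) h = h * 31 + key.charAt(i);
            return Math.floorMod(h, size);
        }
        void add(String key) { for (int s = 1; s <= hashes; s++) bits.set(hash(key, s)); }
        boolean mightContain(String key) {
            for (int s = 1; s <= hashes; s++) if (!bits.get(hash(key, s))) return false;
            return true;
        }
    }

    public static void main(String[] args) {
        int servers = 4;
        List<BloomFilter> array = new ArrayList<>();
        for (int i = 0; i < servers; i++) array.add(new BloomFilter(1 << 16, 3));

        // Each metadata server registers the files whose metadata it holds.
        array.get(2).add("/home/alice/report.pdf");
        array.get(1).add("/var/log/syslog");

        // Client lookup: test the filename against each server's filter.
        String wanted = "/home/alice/report.pdf";
        for (int i = 0; i < servers; i++)
            if (array.get(i).mightContain(wanted))
                System.out.println("candidate metadata server: " + i);
    }
}
```

Keeping the filters small and memory-resident is what lets a lookup avoid both a broadcast to all servers and a large centralized table.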
we develop a model to evaluate collaborative inference based on the query sequences of collaborators and their task-sensitive collaboration levels. A key feature of our scheme is that it does not require global routing information. we develop an inference violation detection system to protect sensitive data content. the detection system will examine his/her past query log and calculate the probability of inferring sensitive information. we propose an interlaced morphological binary wavelet transform to track the shifted edges.In this project efficiency of pairs in program design tasks is identified by using pair programming concept. IDPFs are constructed from the information implicit in Border Gateway Protocol (BGP) route updates and are deployed in network border routers. we view flipping an edge pixel in binary images as shifting the edge location one pixel horizontally and vertically. In this way. In both experiments. A novel effective Backward-Forward Minimization method is proposed. Hence. The query request will be denied if the inference probability exceeds the pre specified threshold. By employing IP spoofing. Based on data dependency. attackers can evade detection and put a substantial burden on the destination network for policing attack packets. pairs significantly outperformed individuals. Experimental studies reveal that information authoritativeness. which considers both backwardly those neighboring processed embeddable candidates and forwardly those unprocessed flippable candidates that may be affected by flipping the current pixel. we show that. The SIM is then instantiated to a semantic inference graph (SIG) for query-time inference violation detection. The Distributed Denial-of-Service (DDoS) attack is a serious threat to the legitimate use of the Internet. Thus. communication fidelity and honesty in collaboration are three key factors that affect the level of achievable collaboration. they can help localize the origin of an attack packet to a small number of candidate networks. Based on extensive simulation studies. Pair programming involves two developers simultaneously collaborating with each other on the same programming task to design and code a solution. Experimental results demonstrate the validity of our arguments. with full-time professional programmers being the subjects who worked on increasingly complex programming aptitude tasks related to problem solving and algorithmic design. which thus facilitates blind watermark extraction and incorporation of cryptographic signature. Variations in programmer skills in a particular language or an integrated development environment and the understanding of programming instructions can cover the skill of subjects in program design-related tasks. Unlike existing block-based approach. PATs do not require understanding of programming instructions and do not require a skill in any specific computer language. we propose an inter-domain packet filter (IDPF) architecture that can mitigate the level of IP spoofing on the Internet. In this paper. the total visual distortion can be minimized. By conducting two controlled experiments. in which the block size is constrained by 3times3 pixels or larger. An example is given to illustrate the use of the proposed technique to prevent multiple collaborative users from deriving sensitive information via inference. we constructed a semantic inference model (SIM) that represents the possible inference channels from any attribute to the pre-assigned sensitive attributes. 
Based on this observation. This paper proposes a data-hiding technique for binary images in morphological transform domain for authentication purpose. the users may share their query answers to increase the inference probability. Therefore. For multi-user cases. In addition. when a user poses a query. We establish the conditions under which the IDPF framework correctly works in that it does not discard packets with valid source addresses. IDPFs can proactively limit the spoofing capability of attackers. providing evidence of the value of pairs in program design-related tasks. Malicious users can exploit the correlation among data to infer sensitive information from a series of seemingly innocuous data accesses. Programming aptitude tests (PATs) have been shown to correlate with programming performance. even with partial deployment on the Internet. which renders more suitable candidates can be identified such that a larger capacity can be achieved. Prevention mechanisms are thwarted by the ability of attackers to forge or spoof the source addresses in IP packets. This allows flexibility in tracking the edges and also achieves low computational complexity. The two processing cases that flipping the candidates of one does not affect the flippability conditions of another are employed for orthogonal embedding. . Algorithm design and its implementation are normally merged and it provides feedback to enhance the design. For a single user case. Previous controlled pair programming experiments did not explore the efficacy of pairs against individuals in program design-related tasks. we process an image in 2times2 pixel blocks. To achieve blind watermark extraction. database schema and semantic knowledge. it is difficult to use the detail coefficients directly as a location map to determine the data-hiding locations.
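The query-time inference check described earlier in this section can be reduced to a very small guard. The chain-combination rule below (treating inference channels as independent and combining their probabilities) is a simplifying assumption of ours, not the SIM/SIG formulation itself; names and thresholds are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy query-time inference guard: deny a query once the inference probability exceeds a threshold. */
public class InferenceGuard {
    private final double threshold;
    private final List<Double> channelProbs = new ArrayList<>(); // P(sensitive | released attribute)

    public InferenceGuard(double threshold) { this.threshold = threshold; }

    /** Probability that the sensitive attribute can be inferred from all answers released so far. */
    private double inferenceProbability() {
        double notInferred = 1.0;
        for (double p : channelProbs) notInferred *= (1.0 - p);  // assumed independent channels
        return 1.0 - notInferred;
    }

    /** Returns true if the query may be answered; records it in the user's query log if so. */
    public boolean permit(String attribute, double linkProbToSensitive) {
        channelProbs.add(linkProbToSensitive);
        if (inferenceProbability() > threshold) {
            channelProbs.remove(channelProbs.size() - 1);        // deny: do not release the answer
            System.out.println("DENIED  " + attribute);
            return false;
        }
        System.out.println("allowed " + attribute);
        return true;
    }

    public static void main(String[] args) {
        InferenceGuard guard = new InferenceGuard(0.6);
        guard.permit("zipcode", 0.3);
        guard.permit("birthdate", 0.3);
        guard.permit("gender", 0.3);   // pushes the combined probability past 0.6 and is denied
    }
}
```

For the multi-user case the same accumulator would be kept per collaboration group rather than per user, reflecting the shared query answers discussed above.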
Initially developed within a classification framework. we propose a HAsh-based and PiPelIned (abbreviated as HAPPI) architecture for hardware enhanced association rule mining. The results are very promising. . Each sensor has a time out period and listens to messages sent by respective nodes before the time out expires. The time complexity of those steps that need to load candidate item sets or database items into the hardware is in proportion to the number of candidate item sets multiplied by the number of items in the database. Data mining techniques have been widely used in various applications. while only using defect data as the input. One of the most important data mining applications is association rule mining. In addition. Sensor nodes whose sensing area is not fully covered (or fully covered but with a disconnected set of active sensors) when the deadline expires decide to remain active for the considered round and transmit an activity message announcing it. The limitations of this approach for CBIR are emphasized before presenting our new active selection process RETIN. In our approach. as any active method is sensitive to the boundary estimation between classes. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). the ED3M approach has been evaluated using five data sets from large industrial projects and two data sets from the literature. The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Second. Here. After hearing from more neighbors. Unlike many existing approaches. called Estimation of Defects based on Defect Decay Model (ED3M) is presented that computes an estimate the defects in an ongoing testing process. Since the capacity of the hardware architecture is fixed. active learning strategy is then described. In this paper. This is a key advantage of the ED3M approach as it makes it widely applicable in different testing environments. a batch processing of images is proposed. Apriori-based association rule mining in hardware. HAPPI solves the bottleneck problem in a priori-based hardware schemes. the technique presented here does not depend on historical data from previous projects or any assumptions about the requirements and/or testers’ productivity. sensor decides to sleep only if neighbor sensor is active or not covered. Covered nodes decide to sleep. Therefore.An accurate prediction of the number of defects in a software product during system testing contributes not only to the management of the system testing process but also to the estimation of the product’s required maintenance. they indicate the ED3M approach provides accurate estimates with as fast or better convergence time in comparison to well-known alternative techniques. a performance analysis has been conducted using simulated data sets to explore its behavior using different models for the input data. Active learning methods have been considered with increased interest in the statistical learning community. we can effectively reduce the frequency of loading the database into the hardware. inactive sensors may observe that they became covered and may decide to alter their original decision and transmit a retreat message. the RETIN strategy carries out a boundary correction to make the retrieval process more robust. each with arbitrary sensing and transmission radii. 
This paper provides algorithms within a statistical framework to extend active learning for online contentbased image retrieval (CBIR). ED3M is based on estimation theory. We propose several localized sensor area coverage protocols for heterogeneous sensors. a new approach. a lot of extensions are now being proposed to handle multimedia applications. First. It is a completely automated approach that relies only on the data collected during an ongoing testing process. the criterion of generalization error to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. the items are loaded into the hardware separately. Focusing on interactive methods. one has to load candidate item sets and a database into the hardware. Too many candidate item sets and a large database would create a performance bottleneck. if the number of candidate item sets or the number of items in the database is larger than the hardware capacity. with or without transmitting a withdrawal message to inform neighbors about the status. Third. Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies. Here.
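HAPPI accelerates a-priori-based mining in hardware; the Java sketch below shows the software step it speeds up, namely counting the support of candidate itemsets over a transaction database and keeping the frequent ones. The transactions and the minimum-support value are illustrative assumptions.

```java
import java.util.*;

/** Support counting for candidate itemsets (the step a hardware miner offloads). */
public class SupportCount {
    public static void main(String[] args) {
        List<Set<Integer>> transactions = List.of(
                Set.of(1, 2, 3), Set.of(1, 2), Set.of(2, 3), Set.of(1, 3), Set.of(1, 2, 3));
        List<Set<Integer>> candidates = List.of(
                Set.of(1, 2), Set.of(1, 3), Set.of(2, 3), Set.of(1, 2, 3));
        int minSupport = 3;

        Map<Set<Integer>, Integer> support = new HashMap<>();
        for (Set<Integer> c : candidates) support.put(c, 0);

        // Stream the database once; for each transaction, bump every contained candidate.
        for (Set<Integer> t : transactions)
            for (Set<Integer> c : candidates)
                if (t.containsAll(c)) support.merge(c, 1, Integer::sum);

        for (Map.Entry<Set<Integer>, Integer> e : support.entrySet())
            if (e.getValue() >= minSupport)
                System.out.println("frequent " + e.getKey() + " support=" + e.getValue());
    }
}
```

The inner double loop is exactly where the candidate-times-database cost appears; when candidates or items exceed the hardware capacity they must be streamed in batches, which is the bottleneck HAPPI's pipelining addresses.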
Different from the state of the art sports video analysis methods which heavily rely on audio/visual features. player or team according to user’s preference. before any data is sent. and then route each copy independently towards the destination. 2) The proposed approach is able to detect exact event boundary and extract event semantics that are very difficult or impossible to be handled by previous approaches. We show that. we propose a bandwidth-efficient multicast mechanism for heterogeneous wireless networks. scale and round) the link delay or link cost. The evaluation on personalized retrieval is effective in helping meet users’ expectations. Moreover. event.In this paper. In this paper. finding the cheapest delay-constrained path is critical for real-time data flows such as voice/video calls. video analysis. which is limited at both processing power and memory space. which transforms the original problem to a simpler one solvable in polynomial time. While flooding-based schemes have a high probability of delivery. we propose two techniques that reduce the discretization errors. The simulation results show that our mechanism can effectively save the wireless and wireline bandwidth as compared to the traditional IP multicast.e. We propose a novel approach for sports video semantic annotation and personalized retrieval. To deal with such networks researchers have suggested to use flooding-based routing schemes. Besides. we propose an distributed algorithm based on Lagrangean relaxation and a network protocol based on the algorithm. much research has been designing heuristic algorithms that solve the -approximation of the problem with an adjustable accuracy. etc. Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. we introduce a new family of routing schemes that “spray” a few message copies into the network. The efficiency of the algorithms directly relates to the magnitude of the errors introduced during discretization. Our mechanism enables more mobile hosts to cluster together and lead to the use of fewer cells to save the scarce wireless bandwidth. There are many real networks that follow this model. for example. Compared with previous approaches. which allow faster algorithms to be designed. vehicular ad hoc networks. 3) The proposed method is able to create personalized summary from both general and specific point of view related to particular game. our mechanism requires no modification on the current IP multicast routing protocols. and personalized retrieval. the contributions of our approach include the following. spray routing Sports video annotation is important for sports video semantic analysis such as event detection and personalization. text/video alignment. In particular. A common approach is to discretize (i. The problem is to find the cheapest path that satisfies certain constraints. The experimental results on event boundary detection in sports video are encouraging and comparable to the manually selected events. Furthermore. conventional routing schemes fail. and traffic engineering. 1) The event detection accuracy is significantly improved due to the incorporation of web-casting text analysis. because they try to establish complete end-to-end paths. To solve the problem. wildlife tracking sensor networks. if carefully designed. 
Our simulations show that the new algorithms reduce the execution time by an order of magnitude on power-law topologies with 1000 nodes. MPLS path selection. Reducing the overhead of computing constrained shortest paths is practically important for the successful design of a high-throughput QoS router. ATM circuit routing. the paths in the multicast tree connecting to the selected cells share more common links to save the wireline bandwidth. Computing constrained shortest paths is fundamental to some important network functions such as QoS routing. Because it is NP-complete. military networks. We reduce the bandwidth cost of a IP multicast tree by adaptively selecting the cell and the wireless technology for each mobile host to join the multicast group. proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. the proposed approach incorporates web-casting text into sports video analysis. We formulate the selection of the cell and the wireless technology for each mobile host in the heterogeneous wireless networks as an optimization problem. In this context. . they waste a lot of energy and suffer from severe contention which can significantly degrade their performance.. We present the framework of our approach and details of text analysis. We use Integer Linear Programming to model the problem and show that the problem is NP-hard. Our mechanism supports the dynamic group membership and offers mobility of group members. With this in mind.
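The discretization idea for constrained shortest paths can be sketched with a small dynamic program: link delays are scaled and rounded to integer units, and the cheapest cost to reach each node within each discretized delay budget is computed by relaxation. The graph, the granularity, and the API below are our own illustrative choices, not the specific algorithms evaluated in the paper.

```java
import java.util.Arrays;

/** Delay-constrained cheapest path via delay discretization plus dynamic programming. */
public class ConstrainedPath {
    // edges[i] = {from, to, cost, delayMs}
    static int[][] edges = {{0,1,1,40},{1,3,1,40},{0,2,5,10},{2,3,5,10},{0,3,20,15}};

    public static void main(String[] args) {
        int nodes = 4, src = 0, dst = 3;
        double delayBound = 60.0, granularity = 10.0;        // round delays to 10 ms steps (assumed)
        int budget = (int) Math.floor(delayBound / granularity);

        long INF = Long.MAX_VALUE / 4;
        // best[v][d] = min cost to reach v using at most d discretized delay units
        long[][] best = new long[nodes][budget + 1];
        for (long[] row : best) Arrays.fill(row, INF);
        for (int d = 0; d <= budget; d++) best[src][d] = 0;

        for (int iter = 0; iter < nodes - 1; iter++)          // Bellman-Ford style relaxation
            for (int[] e : edges) {
                int w = (int) Math.ceil(e[3] / granularity);  // discretized (rounded-up) edge delay
                for (int d = w; d <= budget; d++)
                    if (best[e[0]][d - w] + e[2] < best[e[1]][d])
                        best[e[1]][d] = best[e[0]][d - w] + e[2];
            }

        System.out.println(best[dst][budget] >= INF
                ? "no path within delay bound"
                : "cheapest feasible cost = " + best[dst][budget]);
    }
}
```

Coarser granularity shrinks the budget dimension and speeds the computation, at the price of the rounding error the two error-reduction techniques above are designed to control.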
called Veracity.Proving ownerships rights on outsourced relational database is a crucial issue in today's internet based application environments and in many content distribution applications. a website is trustworthy if it provides many pieces of true information. which requires that two links may not use each other in their backup paths if they may fail simultaneously. i. i. In this paper. we present a mechanism for proof of ownership based on the secure embedding of a robust imperceptible watermark in relational data. which we call randomize-and-link. Our approach. This paper develops the necessary theory to establish the sufficient conditions for existence of a solution to the BLME problem. Solution methodologies for the BLME problem is developed using two approaches by: 1) formulating the backup path selection as an integer linear program. conformity to truth. This paper formally classifies the approaches to dual-link failure resiliency. One of the strategies to recover from dual-link failures is to employ link protection for the two failed links independently. A regeneration-theory approach is undertaken to analytically characterize the average overall completion time in a distributed system. The ILP formulation and heuristic are applied to six networks and their performance is compared with approaches that assume precise knowledge of dual-link failure.e. It is observed that a solution exists for all of the six networks considered. which studies how to find true facts from a large amount of conflicting information on many subjects that is provided by various websites. called TRUTHFINDER. different websites often provide conflicting information on a subject. alteration and insertion attacks An approach to IP traces back based on the probabilistic packet marking paradigm. Networks employ link protection to achieve fast recovery from link failures. Watermark decoding is based on a threshold-based technique characterized by an optimal threshold that minimizes the probability of decoding errors. In addition. We implemented a proof of concept implementation of our watermarking technique and showed by experimental results that our technique is resilient to tuple deletion. In this paper. We formulate the watermarking of relational databases as a constrained optimization problem and discus efficient techniques to solve the optimization problem and to handle the onstraints. although the backup path lengths may be significantly higher than optimal. Moreover.. such as different specifications for the same product. The approach considers the heterogeneity in the processing rates of the nodes as well as the randomness in the delays imposed by the communication medium. 2) developing a polynomial time heuristic based on minimum cost path routing. This adaptive and dynamic load balancing policy is implemented and evaluated in a two-node distributed system. An iterative method is used to infer the trustworthiness of websites and the correctness of information from each other. Our approach overcomes a major weakness in previously proposed watermarking techniques. Our experiments show that TRUTHFINDER successfully finds true facts among conflicting information and identifies trustworthy websites better than the popular search engines. The optimal one-shot load balancing policy is developed and subsequently extended to develop an autonomous and distributed load-balancing policy that can dynamically reallocate incoming external loads at each node. 
Such a requirement is referred to as backup link mutual exclusion (BLME) constraint and the problem of identifying a backup path for every link that satisfies the above requirement is referred to as the BLME problem. there is no guarantee for the correctness of information on the Web. and a piece of information is likely to be true if it is provided by many trustworthy websites. which utilizes the relationships between websites and their information.e.. Unfortunately. the paper illustrates the significance of the knowledge of failure location by illustrating that network with higher connectivity may require lesser capacity than one with a lower connectivity to recover from dual-link failures. we propose a new problem. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages. While the first link failure can be protected using link protection. We design a general framework for the Veracity problem and invent an algorithm. The performance of the proposed dynamic loadbalancing policy is compared to that of static policies as well as existing dynamic load-balancing policies by considering the average completion time per task and the system processing rate in the presence of random arrivals of the external loads . The heuristic approach is shown to obtain feasible solutions that are resilient to most dual-link failures. there are several alternatives for protecting against the second failure. for the checksums serve both as associative addresses and data integrity verifiers. Our watermarking technique is resilient to watermark synchronization errors because it uses a partioning approach that does not require marker tuple. uses large checksum cords to “link” message fragments in a way that is highly scalable. The World Wide Web has become the most important information source for most of us.
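The iterative intuition behind TRUTHFINDER can be shown in a short sketch: the confidence of a fact is computed from the trustworthiness of the websites asserting it, and a website's trustworthiness is recomputed as the average confidence of its facts, repeating until the values stabilize. The update rules below are a simplified assumption of ours (conflicting facts are not penalized), not the exact formulas of the paper.

```java
import java.util.*;

/** Simplified trust/confidence iteration in the spirit of TRUTHFINDER. */
public class TrustIteration {
    public static void main(String[] args) {
        // site -> facts it provides (facts about the same subject may conflict)
        Map<String, List<String>> claims = Map.of(
                "siteA", List.of("height=8848", "capital=Canberra"),
                "siteB", List.of("height=8848"),
                "siteC", List.of("height=8000", "capital=Sydney"));

        Map<String, Double> trust = new HashMap<>();           // site trustworthiness
        claims.keySet().forEach(s -> trust.put(s, 0.8));        // uninformed prior (assumed)
        Map<String, Double> conf = new HashMap<>();             // fact confidence

        for (int iter = 0; iter < 20; iter++) {
            // fact confidence: 1 - product over providers of (1 - provider trust)
            conf.clear();
            for (var e : claims.entrySet())
                for (String fact : e.getValue())
                    conf.merge(fact, 1 - trust.get(e.getKey()), (acc, t) -> acc * t);
            conf.replaceAll((f, v) -> 1 - v);
            // site trust: average confidence of the facts it provides
            for (var e : claims.entrySet())
                trust.put(e.getKey(),
                        e.getValue().stream().mapToDouble(conf::get).average().orElse(0));
        }
        System.out.println("trust = " + trust);
        System.out.println("confidence = " + conf);
    }
}
```

After a few iterations the sites agreeing on the widely supported fact gain trust, and the fact they share gains confidence, which is the mutual reinforcement the abstract describes.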
simultaneously. The underlying system model is hybrid. The proposed scheme has been designed as public watermarking. This hybrid system combines the advantages of low false-positive rate of signature-based intrusion detection system (IDS) and the ability of anomaly detection system (ADS) to detect novel unknown attacks. inherently hinders the efficient retrieval of information. Providing fault tolerance for such dynamic environments is a challenging task. the Active Measurement Project (NLANR). The image is reconstructed by computing the inverse cosine transform. In this paper. By testing our HIDS scheme over real-life Internet trace data mixed with 10 days of Massachusetts Institute of Technology/ Lincoln Laboratory (MIT/LL) attack data set. One such routing misbehavior is that some selsh nodes will participate in the route discovery and maintenance processes but refuse to forward data packets. our experimental results show a 60 percent detection rate of the HIDS. composed by a synchronous part (where there are time bounds on processing speed and message delay) and an asynchronous part (where there is no time bound). This paper also presents an implementation of the model that relies on a negotiated quality of service (QOS) for communication channels. the consensus problem is taken as a benchmark problem. we propose the 2ACK scheme that serves as an add-on technique for routing schemes to detect routing misbehavior and to mitigate their adverse effect. By mining anomalous traffic episodes from Internet connections. node misbehaviors may exist. in particular. To illustrate what can be done in this programming model and how to use it. and the Text Retrieval Conference (TREC) show that the architecture we propose is both efficient and practical . processes are not required to share the same view of the system synchrony at a given time.. when the underlying system QOS degrade) or totally synchronous. The to authenticate digital documents of distinct runtime conditions is an important issue when designing distributed systems where negotiated quality of service (QOS) cannot always be delivered between processes. Moreover. The main idea of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the routing path. only a fraction of the received data packets are acknowledged in the 2ACK scheme. The watermark is embedded to all pixels whose pixel intensity is less than the gray threshold. In this paper. this paper proposes an adaptive programming model for fault-tolerant distributed computing. The discrete cosine transform of each block is computed. due to the open structure and scarcely available battery-based energy. Considering such a context. using the pFusion middleware architecture and data sets from Akamai’s Internet mapping infrastructure (AKAMAI). we build an ADS that detects anomalies beyond the capabilities of signature-based SNORT or Bro systems. Each block is scaled and a quantization function is used to construct the watermark bit from each block. routing protocols for MANETs are designed based on the assumption that all participating nodes are fully cooperative. Our empirical results. Experimental results prove the efficiency of the scheme. where nodes are typically located across different networks and domains. it does not require the original image to verify its integrity. which provides upper-layer applications with process state information according to the current system synchrony (or QOS). 
the cover image is partitioned into non-overlapping blocks of size 8 x 8 pixels. and. The signatures generated by ADS upgrade the SNORT performance by 33 percent. such a composition can vary over time. The distributed nature of these systems. the system may become totally asynchronous (e. by automated data mining and signature generation over Internet connection episodes. The gray threshold value of the image in spatial domain is computed. we consider the effects of topologically aware overlay construction techniques on efficient P2P keyword search algorithms. This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). respectively. Our approach builds on work in unstructured P2P systems and uses only local knowledge. including files and documents. The emerging Peer-to-Peer (P2P) model has become a very powerful and attractive paradigm for developing Internetscale systems for sharing resources. In the proposed scheme. HIDS extracts signatures from the output of ADS and adds them into the SNORT signature database for fast and accurate intrusion detection. The proposed scheme can be usedcapability of dynamically adapting to high significance. This sharp increase in detection rate is obtained with less than 3 percent false alarms.A new semi fragile method for embedding watermark data into gray scale images has been proposed. We study routing misbehavior in MANETs (Mobile Ad Hoc Networks) in this paper. A weighted signature generation scheme is developed to integrate ADS with SNORT by extracting signatures from anomalies detected. In order to reduce additional routing overhead. However. However. compared with 30 percent and 22 percent in using the SNORT and Bro systems.g. In general. We present the Peer Fusion (pFusion) architecture that aims to efficiently integrate heterogeneous information that is geographically scattered on peers of different networks. The HIDS approach proves the vitality of detecting intrusions and anomalies. Analytical and simulation results are presented to evaluate the performance of the proposed scheme.
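A minimal sketch of the 2ACK bookkeeping described above: the node two hops downstream acknowledges only a fraction Rack of the data packets, and the observing node flags the next-hop link as misbehaving when the observed acknowledgment fraction falls well below Rack. The simulation parameters and the detection margin are our own assumptions.

```java
import java.util.Random;

/** Toy 2ACK bookkeeping: only a fraction of data packets is acknowledged two hops back. */
public class TwoAckMonitor {
    public static void main(String[] args) {
        double rAck = 0.2;       // fraction of forwarded packets the 2-hop node acknowledges (assumed)
        double dropRate = 0.7;   // fraction of packets the next hop silently drops (misbehaving)
        int sent = 2000, acks = 0;
        Random rng = new Random(7);

        for (int i = 0; i < sent; i++) {
            boolean forwarded = rng.nextDouble() >= dropRate;   // did the next hop actually forward it?
            boolean selected  = rng.nextDouble() < rAck;        // receiver picks a fraction to 2ACK
            if (forwarded && selected) acks++;                  // the 2ACK travels back two hops
        }

        double observed = (double) acks / sent;                 // close to rAck when the next hop is honest
        System.out.printf("observed 2ACK fraction = %.3f (expected about %.3f)%n", observed, rAck);
        if (observed < 0.5 * rAck)                              // generous margin for normal losses
            System.out.println("next-hop link reported as misbehaving");
    }
}
```

Acknowledging only a fraction of the packets is what keeps the additional routing overhead low while still exposing a node that drops most of the traffic.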
In this paper. Each node maintains in its cache table the information necessary for cache updates. This is achieved by using clustering techniques. then RB schemes can indeed yield better delay performance than NRB schemes. however. etc. These results are also compared to other filters by numerical measures and visual inspection. Finally. which describes authentication and confidentiality when packets are distributed between hosts with in the cluster and between the clusters. In particular. We present a novel secure communication framework for ad hoc networks (SCP). cached routes easily become stale. A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. comes at the expense of lower throughput and goodput compared to NRB schemes. thus making route caches fully adaptive to topology changes. This advantage. In this paper. The second stage uses these fuzzy derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. we propose proactively disseminating the broken link information to the nodes that have that link in their caches. . medium access control (MAC) protocol. To address the cache staleness issue. On-demand routing protocols use route caches to make routing decisions. The filter consists of two stages. Experimental results are obtained to show the feasibility of the proposed approach.) for making RB switching superior to NRB switching are also identified. The common authentication schemes are not applicable in Ad hoc networks. we propose a secure communication protocol for communication between two nodes in ad hoc networks. the requirements (in terms of route discovery. The methodology has been shown to perform very well on digitized paintings suffering from cracks. When a link failure is detected. Both stages are based on fuzzy rules which make use of membership functions. Afterward. The cracks are detected by threshold the output of the morphological top-hat transform. heuristics cannot accurately estimate timeouts because topology changes are unpredictable. making use of the distribution of the homogeneity in the image. The filter can be applied iteratively to effectively reduce heavy noise. the shape of the membership functions is adapted according to the remaining noise level after each iteration. We define a new cache structure called a cache table and present a distributed cache update algorithm. In addition to posing this fundamental question. while routing the packets between mobile hosts. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility. Authentication is one of the important security requirements of a communication network. the thin dark brush strokes which have been misidentified as cracks are removed using either a median radial basis function neural network on hue and saturation data or a semi-automatic procedure based on region growing. An integrated methodology for the detection and removal of cracks on digitized paintings is presented in this project. and pipelining. We show that the algorithm outperforms DSR with path caches and with Link-Max Life. A novel analytical framework is developed and the network performance under both RB and NRB schemes is quantified. Due to mobility. A statistical model for the noise distribution can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. 
An ad hoc network is a self organized entity with a number of mobile nodes without any centralized access point and also there is a topology control problem which leads to high power consumption and no security. an adaptive timeout mechanism for link caches. prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. It is shown that if the aforementioned requirements are met. the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc parameters. crack filling using order statistics filters or controlled anisotropic diffusion is performed. However. The first stage computes a fuzzy derivative for eight different directions. The cluster head nodes (CHs) perform the major operations to achieve our SCP framework with help of Kerberos authentication application and symmetric key cryptography technique which will be secure reliable transparent and scalable and will have less over head.This paper investigates whether and when route reservation-based (RB) communication can yield better delay performance than non-reservation-based (NRB) communication in ad hoc wireless networks. These cluster head nodes execute administrative functions and network key used for certification.
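A small sketch of the proactive cache-update idea above: each node records, for every cached link, which neighbors learned routes containing that link from it, and on a link failure the notification is propagated so that every reachable node holding the stale link purges it. The data structures and the propagation strategy below are our own simplifications of the cache-table algorithm.

```java
import java.util.*;

/** Toy distributed cache invalidation: notify reachable nodes that cached a broken link. */
public class CacheUpdate {
    // node -> set of links (encoded "a-b") currently in its route cache
    static Map<String, Set<String>> cache = new HashMap<>();
    // node -> neighbors that learned routes containing the link from it (may also cache it)
    static Map<String, Set<String>> learnedFrom = new HashMap<>();

    static void notifyFailure(String node, String brokenLink, Set<String> visited) {
        if (!visited.add(node)) return;                       // already notified
        Set<String> c = cache.getOrDefault(node, Set.of());
        if (!c.contains(brokenLink)) return;                  // nothing cached here, stop propagating
        c.remove(brokenLink);                                 // purge the stale link
        System.out.println(node + " removed stale link " + brokenLink);
        for (String nb : learnedFrom.getOrDefault(node, Set.of()))
            notifyFailure(nb, brokenLink, visited);           // notify nodes that may also cache it
    }

    public static void main(String[] args) {
        cache.put("A", new HashSet<>(List.of("C-D", "A-B")));
        cache.put("B", new HashSet<>(List.of("C-D")));
        cache.put("E", new HashSet<>(List.of("A-B")));
        learnedFrom.put("A", Set.of("B"));                    // B learned the C-D route via A
        learnedFrom.put("B", Set.of());
        notifyFailure("A", "C-D", new HashSet<>());           // A detects that link C-D broke
    }
}
```

Because the notification follows the recorded learning relationships rather than a timeout, the caches adapt to the actual topology change instead of relying on ad hoc lifetime parameters.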
the unwanted variations resulting from changes in lighting. while texture synthesis is used for the textured blocks. images are first tiled into blocks of 8 x 8 pixels. and obtains a face subspace that best detects the essential face manifold structure. The existing algorithms used are non predictive and employs greedy based algorithms or a variant of it. The proposed multi tree database architecture consists of a number of database subsystems. Genetic algorithms are powerful search techniques based on the mechanisms of natural selection and natural genetics. LPP finds an embedding that preserves local information. The viability of this method for image compression. The switch between the two schemes is done in a fully automatic fashion based on the surrounding available blocks. redundancy removal. robust. although the users may have different preferences. The next-generation mobile network will support terminal mobility. The widely used web search engines give different users the same answer set. In addition. and pose may be eliminated or reduced. in association with loss JPEG. and service provider portability. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. A location-independent personal telecommunication number (PTN) scheme is conducive to implementing such a global mobile system. each of which is a three-level tree structure and is connected to the others only through its root. which are needed to describe the data economically. In order to reduce the wastage of time on browsing unnecessary documents. Results have revealed that the proposed database architecture for location management can effectively support the anticipated high user density in the future mobile networks. Instead of using common retransmission query protocols. Intelligence is the key factor which is lacking in the job scheduling techniques of today. memory-resident direct file and T-tree. Another possible application would be to integrate this technology into an artificial intelligence system for more realistic interaction with humans. The PAWS intelligently utilizes the Self-Organizing Map (SOM) as the user’s profile and therefore. The performance of this method is tested for various images and combinations of lost blocks. By exploiting the localized nature of calling and mobility patterns. This necessitates research into the design and performance of high-throughput database technologies used in mobile systems to ensure that future systems will be able to carry efficiently the anticipated loads. The efficiency of the job scheduling process would increase if previous experience and the genetic algorithms are used. By using Locality Preserving Projections (LPP). feature extraction. . Personalized web search carry out the search for each user with his preference. the face images are mapped into a face subspace for analysis. personal mobility. making global roaming seamless. applications having linear models are suitable. data compression. This is the case when there is a strong correlation between observed variables. the non-geographic PTNs coupled with the anticipated large number of mobile users in future mobile networks may introduce very large centralized databases. The Laplacian faces are the optimal linear approximations to the eigen functions of the Laplace Beltrami operator on the face manifold. etc. Multiple jobs are handled by the scheduler and the resource the job needs are in remote locations. 
We propose an appearance-based face recognition method called the Laplacianface approach. The jobs which PCA can do are prediction. Analysis model and numerical results are presented to evaluate the efficiency of the proposed database architecture. If the lost block contained structure. However. When compression algorithms such as JPEG are used as part of the wireless transmission process. Because PCA is a known powerful technique which can do something in the linear domain. is capable of providing high quality answer set to the user. it is reconstructed using an image in painting algorithm. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of feature space (independent variables). are proposed for the location databases to further improve their throughput. we propose a model of the scheduling algorithm where the scheduler can learn from previous experiences and an effective job scheduling is achieved as time progresses. In this way. two memory-resident database indices. Principal Component Analysis (PCA) is a statistical method under the broad title of factor analysis. In this paper. Here we assume that the resource a job needs are in a location and not split over nodes and each node that has a resource runs a fixed number of jobs. is also discussed. Theoretical analysis shows that PCA. efficient location database architecture based on the location-independent PTNs. An approach for filling-in blocks of missing data in wireless image transmission is presented in this paper. the proposed architecture effectively reduces the database loads as well as the signaling traffic incurred by the location registration and call delivery procedures. and LPP can be obtained from different graph models. LDA. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition. this paper presents an intelligent Personal Agent forWeb Search (PAWS). facial expression. we aim to reconstruct the lost data using correlation between the lost block and its neighbors. the effects of noise can destroy entire blocks of the image. When such images are transmitted over fading channels. Different from Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) which effectively see only the Euclidean structure of face space. such as signal Job scheduling is the key feature of any computing environment and the efficiency of computing depends largely on the scheduling technique used.authentication system could be put in place to allow computer access or access to a specific room using face recognition. This paper proposes a scalable.
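The role PCA plays above, reducing a set of observed variables to a smaller set of uncorrelated components, can be sketched with a power iteration that extracts the top principal component of a tiny data set. The data and the iteration count are illustrative assumptions, and the full Laplacianface pipeline is not reproduced here.

```java
/** Top principal component of a small data set via covariance matrix plus power iteration. */
public class PcaSketch {
    public static void main(String[] args) {
        double[][] x = {{2.5, 2.4}, {0.5, 0.7}, {2.2, 2.9}, {1.9, 2.2}, {3.1, 3.0}, {2.3, 2.7}};
        int n = x.length, d = x[0].length;

        // Center the data.
        double[] mean = new double[d];
        for (double[] row : x) for (int j = 0; j < d; j++) mean[j] += row[j] / n;
        for (double[] row : x) for (int j = 0; j < d; j++) row[j] -= mean[j];

        // Sample covariance matrix.
        double[][] cov = new double[d][d];
        for (double[] row : x)
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++) cov[i][j] += row[i] * row[j] / (n - 1);

        // Power iteration for the dominant eigenvector (the first principal component).
        double[] v = {1, 1};
        for (int iter = 0; iter < 100; iter++) {
            double[] w = new double[d];
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++) w[i] += cov[i][j] * v[j];
            double norm = Math.sqrt(w[0] * w[0] + w[1] * w[1]);
            for (int i = 0; i < d; i++) v[i] = w[i] / norm;
        }
        System.out.printf("first principal component = (%.3f, %.3f)%n", v[0], v[1]);
        // Projecting each centered sample onto v gives its 1-D reduced representation.
    }
}
```

Repeating the extraction on the deflated covariance matrix yields further components; LPP differs from this in that it preserves local neighborhood structure rather than global variance.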
it will resend the packet until the maximum times of retry is reached. Later. as a fundamental service in MANETs. This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. Mobile ad hoc networks (MANETs) suffer from high transmission error rate because of the nature of radio communications.The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. NBP is complemented with the proposed enhanced core-stateless fair queueing (ECSFQ) mechanism. In this paper. This process detects outlines of an object and boundaries between objects and the background in the image. which are the integration of 3-D SPIHT video coding and BPCS steganography and that of motion-JPEG2000 and BPCS. Examples of gradient-based edge detectors are Roberts. The Prewitt operator measures two components. Moreover. we propose a simple. Among 1-hop neighbors of the sender. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and motion-JPEG2000. Experimental results show that 3-D SPIHT-BPCS is superior to motion-JPEG2000-BPCS with regard to embedding performance. The retransmissions of the forward nodes are received by the sender as confirmation of their receiving the packet. 3-D SPIHT-BPCS steganography and motion-JPEG2000-BPCS steganography are presented and tested. Forward nodes are selected in such a way that (1) the sender’s 2-hop neighbors are covered and (2) the sender’s 1-hop neighbors are either a forward node. Simulation results show that the algorithm provides good performance for a broadcast operation under high transmission error rate environment . the contributions of the different components of the slopes are combined to give the total value of the edge strength. NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network. respectively. thereby preventing congestion within the network. and Sobel operators. commonly vertical and horizontal. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. All the gradient-based algorithms have kernel operators that calculate the strength of the slope in directions. The objective of reducing the broadcast redundancy while still providing high delivery ratio for each broadcast packet is a major challenge in a dynamic environment. wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. only selected forward nodes retransmit the broadcast message. is prone to the broadcast storm problem if forward nodes are not carefully designated. are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. which are orthogonal to each other. End-to-end congestion control algorithms alone. which provides fair bandwidth allocations to competing flows. Edge detection is a fundamental tool used in most image processing applications to obtain information from the frames as a precursor step to feature extraction and object segmentation. If the sender does not detect all its forward nodes’ retransmissions. reliable broadcast algorithm. 
The edge-detection operator is calculated by forming a matrix centered on a pixel chosen as the center of the matrix area. however. Prewitt. that takes advantage of broadcast redundancy to improve the delivery ratio in the environment that has rather high transmission error rate. approximately max-min fair bandwidth allocations can be achieved for competing flows. called double-covered broadcast (DCB). The non-forward 1-hop neighbors of the sender do not acknowledge the reception of the broadcast. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing complexity toward the edges of the network whenever possible. An edge-detection filter can also be used to improve the appearance of blurred or anti-aliased image streams. we propose and investigate a novel congestion-avoidance mechanism called network border patrol (NBP). or a non-forward node but covered by at least two forwarding neighbors. when combined with ECSFQ. If the value of this matrix area is above a given threshold. The basic edge-detection operator is a matrix area gradient operation that determines the level of variance between different pixels. The broadcast operation. Simulation results show that NBP effectively eliminates congestion collapse and that. then the middle pixel is classified as an edge. To address these maladies.
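The gradient-operator description above translates directly into code. The sketch below applies the two Sobel kernels to a grayscale image and marks a pixel as an edge when the combined gradient magnitude exceeds a threshold; the image and threshold value are arbitrary assumptions.

```java
/** Sobel edge detection: horizontal and vertical gradients combined into edge strength. */
public class SobelEdge {
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    public static boolean[][] edges(int[][] gray, double threshold) {
        int h = gray.length, w = gray[0].length;
        boolean[][] edge = new boolean[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        gx += GX[dy + 1][dx + 1] * gray[y + dy][x + dx];
                        gy += GY[dy + 1][dx + 1] * gray[y + dy][x + dx];
                    }
                // total edge strength from the two orthogonal components
                edge[y][x] = Math.sqrt((double) gx * gx + (double) gy * gy) > threshold;
            }
        }
        return edge;
    }

    public static void main(String[] args) {
        int[][] img = {
            {10, 10, 10, 200, 200},
            {10, 10, 10, 200, 200},
            {10, 10, 10, 200, 200},
            {10, 10, 10, 200, 200},
            {10, 10, 10, 200, 200}};
        boolean[][] e = edges(img, 100);
        System.out.println("edge at (2,2)? " + e[2][2] + ", edge at (2,3)? " + e[2][3]);
    }
}
```

Swapping the kernels for the Prewitt or Roberts masks changes only the two constant arrays, which is why these operators are usually discussed together.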