CHAPTER 1

INTRODUCTION

INTRODUCTION ABOUT THE PROJECT


Wireless sensor networks (WSNs) combine two technologies: computation and communication. A WSN consists of a large number of sensing devices that monitor physical and environmental conditions such as humidity, temperature, pressure, and sound. The data collected by the sensing devices are transmitted to a destination known as the base station or sink. WSNs face more security challenges than traditional networks because the sensor nodes generally lack hardware support for tamper resistance and are often deployed in insecure environments, where they are vulnerable to capture and compromise. Once an attacker has compromised a node, he/she can fabricate replicas of it, and these replicas can be used to launch various stealth attacks depending on the attacker's motives, such as eavesdropping on network communications or controlling the target areas. This type of attack is known as a replica attack. Accordingly, without using additional hardware such as a GPS receiver, we design a low-priced replica detection solution for static wireless sensor networks by using a Bloom filter and a sequential delivery algorithm. Neighboring node IDs are represented with a constant size using a Bloom filter, and the Bloom filter output (BFO) is used as a proof. This method slowly increases traffic between the neighboring nodes and randomly selected nodes, whereas the existing system generates heavy traffic by transmitting proofs from the beginning. The overall results show that the proposed solution is more energy efficient than the existing system. The contributions of the proposed solution are as follows: 1) Low-priced solution: the proposed solution reduces the cost of building replica detection into a wireless sensor network. 2) Energy-efficient detection: energy efficiency is important in wireless sensor networks; we consider that nodes in the field are often non-rechargeable, and hence their availability depends on energy efficiency, which supports large-scale deployment.
Replica Attack and Detection Scenario [1]: An attacker captures one or more nodes deployed in the network and then obtains secret information from them. Next, the attacker makes multiple replicas by using this information and then deploys them into targeted areas.
CHAPTER 2
LITERATURE SURVEY

2.1 SURVEY INTERNET OF THINGS: VISION, APPLICATIONS AND RESEARCH CHALLENGES
The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it, leading to innovative services and increases in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges, and future research areas in the domain of IoT. IoT definitions from different perspectives in the academic and industry communities are also discussed and compared. Finally, some major issues of future research in IoT are identified and discussed briefly.

During the past few years, in the area of wireless communications and networking, a novel paradigm named the Internet of Things (IoT), first introduced by Kevin Ashton in the year 1999, has gained increasingly more attention in academia and industry [1]. By embedding short-range mobile transceivers into a wide array of additional gadgets and everyday items, enabling new forms of communication between people and things, and between things themselves, IoT would add a new dimension to the world of information and communication. Unquestionably, the main strength of the IoT vision is the high impact it will have on several aspects of everyday life and the behavior of potential users. From the point of view of a private user, the most obvious effects of the IoT will be visible in both the working and domestic fields. In this context, assisted living, smart homes and offices, e-health, and enhanced learning are only a few examples of possible application scenarios in which the new paradigm will play a leading role in the near future [2]. Similarly, from the perspective of business users, the most apparent consequences will be equally visible in fields such as automation and industrial manufacturing, logistics, business process management, and intelligent transportation of people and goods.

However, many challenging issues still need to be addressed, and both technological and social knots need to be untied before the vision of IoT becomes a reality. The central issues are how to achieve full interoperability between interconnected devices, and how to provide them with a high degree of smartness by enabling their adaptation and autonomous behavior, while guaranteeing trust, security, and privacy of the users and their data. Moreover, IoT will pose several new problems concerning the efficient utilization of resources in low-powered, resource-constrained objects. Several industrial, standardization, and research bodies are currently involved in developing solutions to fulfill the technological requirements of IoT. The objective of this paper is to provide the reader with a comprehensive discussion on the current state of the art of IoT, with particular focus on what has been done in the areas of protocol, algorithm, and system design and development, and on the future research and technology trends.

2.2 SECURITY AND PRIVACY CHALLENGES IN THE INTERNET OF THINGS


In the past decade, the internet of things (IoT) has been a focus of research. Security and privacy are the key issues for IoT applications, which still face some enormous challenges. In order to facilitate this emerging domain, we briefly review the research progress of IoT and pay attention to security. By deeply analyzing the security architecture and features, the security requirements are given. On this basis, we discuss the research status of key technologies including encryption mechanisms, communication security, protection of sensor data, and cryptographic algorithms, and briefly outline the challenges.

The low-cost, off-the-shelf hardware components in unshielded sensor network nodes leave them vulnerable to compromise. With little effort, an adversary may capture nodes, analyse and replicate them, and surreptitiously insert these replicas at strategic locations within the network. Such attacks may have severe consequences; they may allow the adversary to corrupt network data or even disconnect significant parts of the network.
Previous node replication detection schemes depend primarily on centralized mechanisms with single points of failure, or on neighbourhood voting protocols that fail to detect distributed replications. To address these fundamental limitations, we propose two new algorithms based on emergent properties (Gligor (2004)), i.e., properties that arise only through the collective action of multiple nodes. Randomized multicast distributes node location information to randomly selected witnesses, exploiting the birthday paradox to detect replicated nodes, while line-selected multicast uses the topology of the network to detect replication. Both algorithms provide globally aware, distributed node replica detection, and line-selected multicast displays particularly strong performance characteristics.

We show that emergent algorithms represent a promising new approach to sensor network security; moreover, our results naturally extend to other classes of networks in which nodes can be captured, replicated, and reinserted by an adversary. Wireless Sensor Networks (WSNs) are often deployed in hostile environments where an adversary can physically capture some of the nodes, reprogram them, and then replicate them in a large number of clones, easily taking control over the network.

2.3 DISTRIBUTED DETECTION OF NODE REPLICATION ATTACKS IN SENSOR NETWORKS
A few distributed solutions to address this fundamental problem have been recently proposed. However, these solutions are not satisfactory. First, they are energy and memory demanding: a serious drawback for any protocol to be used in the resource-constrained WSN environment. Further, they are vulnerable to the specific adversary models introduced in this paper. The contributions of this work are threefold. First, we analyze the desirable properties of a distributed mechanism for the detection of node replication attacks. Second, we show that the known solutions for this problem do not completely meet our requirements. Third, we propose a new self-healing, Randomized, Efficient, and Distributed (RED) protocol for the detection of node replication attacks, and we show that it satisfies the introduced requirements. Finally, extensive simulations show that our protocol is highly efficient in communication, memory, and computation; is much more effective than competing solutions in the literature; and is resistant to the new kind of attacks introduced in this paper, while other solutions are not.

Sensor nodes that are deployed in hostile environments are vulnerable to capture and compromise. An adversary may obtain private information from these sensors, clone them, and intelligently deploy the clones in the network to launch a variety of insider attacks. This attack process is broadly termed a clone attack. Currently, the defences against clone attacks are not only very few, but also suffer from selective interruption of detection and high overhead (computation and memory).
2.4 SET: DETECTING NODE CLONES IN SENSOR NETWORKS
In this paper, we propose a new effective and efficient scheme, called SET, to detect such clone attacks. The key idea of SET is to detect clones by computing set operations (intersection and union) of exclusive subsets in the network. First, SET securely forms exclusive unit subsets among one-hop neighbours in the network in a distributed way. This secure subset formation also provides the authentication of nodes' subset membership. SET then employs a tree structure to compute non-overlapped set operations and integrates interleaved authentication to prevent unauthorized falsification of subset information during forwarding. Randomization is used to further make the exclusive subset and tree formation unpredictable to an adversary. We show the reliability and resilience of SET by analyzing the probability that an adversary may effectively obstruct the set operations. Performance analysis and simulations also demonstrate that the proposed scheme is effective and efficient.

Wireless sensor networks are vulnerable to the node clone attack because of low-cost, resource-constrained sensor nodes and the uncontrolled environments in which they are left unattended. Several distributed protocols have been proposed for detecting clones. However, some protocols rely on an implicit assumption that every node is aware of all other nodes' existence; other protocols, using a geographic hash table, require that nodes know the general network deployment graph. Those assumptions hardly hold for many sensor networks. In this paper, we present a novel node clone detection protocol based on a Distributed Hash Table (DHT). DHT provides good distributed properties, and our protocol is practical for every kind of sensor network. We analyse the protocol performance theoretically. Moreover, we implement our protocol in the OMNeT++ simulation framework. The extensive simulation results show that our protocol can detect clones efficiently and holds strong resilience against adversaries.
CHAPTER 3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

WSNs have encountered a variety of security challenges, as compared to traditional networks, because the sensor nodes generally lack hardware support for tamper-resistance and are often deployed in physically insecure environments, where they are vulnerable to capture and compromise by attackers. A harmful consequence of a node compromise attack is that once an attacker has acquired the credentials of a sensor, he/she can fabricate replicas with these credentials and then surreptitiously insert them at selected target positions within the network.

DISADVANTAGES OF EXISTING SYSTEM

Replicas can be used to launch various stealth attacks depending on the attacker's motives, such as eavesdropping on network communications or controlling the target areas. This type of attack is called a replica attack.

3.2 PROPOSED SYSTEM

In spite of the low price of a sensor node, most existing schemes in static WSNs assume that a sensor node has expensive hardware, i.e., a global positioning system (GPS) receiver for acquiring the location information of a sensor node, which is used as proof of identification. This cost-intensive approach greatly increases the unit price of a sensor; hence, it is not suitable for resource-limited sensor applications. Accordingly, without using additional hardware, we design a low-priced replica detection solution for static WSNs by using Bloom filter and sequential delivery approaches. The proposed solution uses neighboring node IDs, instead of location information, in order to detect replicas. Neighboring node IDs are presented with a constant size using a Bloom filter. The Bloom filter output (BFO) is used as a proof. A newly deployed node generates different proofs according to the collected neighboring node IDs, until it has collected all of the neighboring node IDs. The proofs are delivered to a randomly selected node in the network. Here, the delivery frequency increases in proportion to the number of collected neighboring node IDs. The strategy slowly increases traffic between the neighboring nodes and their randomly selected nodes; in contrast, existing schemes generate heavy traffic by transmitting several proofs from the beginning.
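To make the proof construction concrete, the following is a minimal Java sketch of how neighboring node IDs could be folded into a constant-size Bloom filter output (BFO). The filter size, the number of hash functions, and the hash function itself are illustrative assumptions of ours, not values specified by the scheme.

import java.util.BitSet;

/*
 * Minimal Bloom filter sketch for encoding neighboring node IDs into a
 * constant-size proof (the BFO). Sizes and hashes are assumptions.
 */
public class NeighborBloomFilter {
    private static final int FILTER_BITS = 256; // constant proof size (assumed)
    private static final int HASH_COUNT = 3;    // hash functions per ID (assumed)

    private final BitSet bits = new BitSet(FILTER_BITS);

    /* Register a newly collected neighboring node ID. */
    public void add(int nodeId) {
        for (int i = 0; i < HASH_COUNT; i++) {
            bits.set(hash(nodeId, i));
        }
    }

    /* Test whether a node ID was (probably) registered. */
    public boolean mightContain(int nodeId) {
        for (int i = 0; i < HASH_COUNT; i++) {
            if (!bits.get(hash(nodeId, i))) {
                return false; // definitely not registered
            }
        }
        return true; // registered, up to a small false-positive rate
    }

    /* The constant-size Bloom filter output (BFO) used as a proof. */
    public byte[] toProof() {
        return bits.toByteArray();
    }

    /* Simple seeded integer hash; a real node would use a stronger hash. */
    private int hash(int nodeId, int seed) {
        int h = nodeId * 0x9E3779B9 + seed * 0x85EBCA6B;
        h ^= (h >>> 16);
        return Math.abs(h % FILTER_BITS);
    }
}

A newly deployed node would call add() for each neighboring node ID it collects and transmit toProof() as its current BFO; the output stays the same size no matter how many IDs are registered, which is what keeps the proof constant-sized.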

ADVANTAGES OF PROPOSED SYSTEM

 The strategy disperses traffic over the entire network, resulting in small packet loss and
considerable energy saving.
 We show that the proposed solution provides a high detection ratio as well as a short detection time for detecting replicas without the use of GPS, as compared to existing schemes.
 The proposed solution is more energy-efficient than existing schemes.

3.3 FIVE COMMON FACTORS


Technology and System Feasibility

The assessment is based on an outline design of system requirements in terms of Input, Processes, Output, Fields, Programs, and Procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc., in order to estimate whether the new system will perform adequately or not; in other words, the feasibility study is based on this outline design.

Economic Feasibility

Economic analysis is the most frequently used method for evaluating the effectiveness of
a new system. More commonly known as cost/benefit analysis, the procedure is to determine the
benefits and savings that are expected from a candidate system and compare them with costs. If
benefits outweigh costs, then the decision is made to design and implement the system. An
entrepreneur must accurately weigh the costs versus the benefits before taking action.

Time based: In contrast to a manual system, management can generate any report with just a single click.

Cost based: No special investment is needed to manage the tool, and no specific training is required for employees to use it. Investment is required only once, at the time of installation. The software used in this project is freeware, so the cost of developing the tool is minimal.

Operational Feasibility

Operational feasibility is mainly concerned with issues such as whether the system will be used if it is developed and implemented, and whether there will be resistance from users that will affect the possible application benefits. The essential questions that help in testing the operational feasibility of a system are the following:

 Does management support the project?

 Are the users not happy with current business practices? Will it reduce the time
(operation) considerably? If yes, then they will welcome the change and the new system.

 Have the users been involved in the planning and development of the project? Early
involvement reduces the probability of resistance towards the new system.

 Will the proposed system really benefit the organization? Does the overall response increase? Will accessibility of information be lost? Will the system affect the customers in a considerable way?
3.4 SYSTEM CONFIGURATION

HARDWARE REQUIREMENT:

 Processor : Pentium IV
 Processor Speed : 2.80GHz
 Main Storage : 512MB RAM
 Hard Disk Capacity : 20GB
 Keyboard : Default
 Mouse : Default
 Scanner : Normal

SOFTWARE REQUIREMENT

 Operating System : Windows XP


 Front end : JAVA
CHAPTER 4

SOFTWARE DESCRIPTION

4.1 SOFTWARE FEATURES

The Java Language

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes: the platform-independent codes interpreted by the Java interpreter. With an interpreter, each Java byte code instruction is parsed and run on the computer. Compilation happens just once; interpretation occurs each time the program is executed.

Java byte codes can be considered as the machine code instructions for the Java Virtual Machine
(Java VM). Every Java interpreter, whether it's a Java development tool or a Web browser that can run
Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware.

Java byte codes help make "write once, run anywhere" possible. The Java program can be
compiled into byte codes on any platform that has a Java compiler. The byte codes can then be
run on any implementation of the Java VM. For example, the same Java program can run on
Windows NT, Solaris, and Macintosh.
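As a small illustration of this compile-once, run-anywhere cycle, the following program (the file and class names are our own) is compiled once with javac into platform-independent byte codes and then interpreted by the Java VM on any of those platforms with the java command.

// HelloWSN.java
// Compile once:  javac HelloWSN.java  (produces platform-independent byte codes)
// Run anywhere:  java HelloWSN        (any Java VM interprets the same .class file)
public class HelloWSN {
    public static void main(String[] args) {
        System.out.println("Same byte codes, any Java VM");
    }
}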
The Java Platform
A platform is the hardware or software environment in which a program runs. The Java
platform differs from most other platforms in that it's a software-only platform that runs on top
of other, hardware-based platforms. Most other platforms are described as a combination of
hardware and operating system.

The Java platform has two components:

 The Java Virtual Machine (Java VM)


 The Java Application Programming Interface (Java API)

The Java API is a large collection of ready-made software components that provide many
useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into
libraries (packages) of related components.

A Java program, such as an application or applet, runs on top of the Java platform. The Java API and the Java Virtual Machine insulate the Java program from hardware dependencies.
As a platform-independent environment, Java can be a bit slower than native code.
However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can
bring Java's performance close to that of native code without threatening portability.

How does the Java API support all of these kinds of programs? With packages of
software components that provide a wide range of functionality. The core API is the API
included in every full implementation of the Java platform. The core API gives you the following
features:

 The Essentials: Objects, strings, threads, numbers, input and output, data structures,
system properties, date and time, and so on.
 Applets: The set of conventions used by Java applets.
 Networking: URLs, TCP and UDP sockets, and IP addresses.
 Internationalization: Help for writing programs that can be localized for users
worldwide. Programs can automatically adapt to specific locales and be displayed in the
appropriate language.
 Security: Both low-level and high-level, including electronic signatures, public/private
key management, access control, and certificates.
 Software components: Known as JavaBeans, these can plug into existing component architectures such as Microsoft's OLE/COM/Active-X architecture, OpenDoc, and Netscape's LiveConnect.
 Object serialization: Allows lightweight persistence and communication via Remote
Method Invocation (RMI).
 Java Database Connectivity (JDBC): Provides uniform access to a wide range of
relational databases.
Java not only has a core API, but also standard extensions. The standard extensions
define APIs for 3D, servers, collaboration, telephony, speech, animation, and more.

Applets

A Web page received from the Web tier can include an embedded applet. An applet is a
small client application written in the Java programming language that executes in the Java
virtual machine installed in the Web browser. However, client systems will likely need the Java
Plug-in and possibly a security policy file in order for the applet to successfully execute in the
Web browser.
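For illustration, a minimal applet looks like the following (the class name is our own): the browser's Java VM instantiates the class named in the page's applet tag and calls its paint() method whenever the applet area must be drawn.

import java.applet.Applet;
import java.awt.Graphics;

// A minimal applet: the browser's Java VM loads this class from the
// Web page and invokes paint() to render the applet's display area.
public class HelloApplet extends Applet {
    public void paint(Graphics g) {
        g.drawString("Running inside the browser's Java VM", 20, 20);
    }
}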

Web components are the preferred API for creating a Web client program because no
plug-ins or security policy files are needed on the client systems. Also, Web components enable
cleaner and more modular application design because they provide a way to separate
applications programming from Web page design. Personnel involved in Web page design thus
do not need to understand Java programming language syntax to do their jobs.

Database Access

The relational database provides persistent storage for application data. A J2EE
implementation is not required to support a particular type of database, which means that the
database supported by different J2EE products can vary. See the Release Notes included with
the J2EE SDK download for a list of the databases currently supported by the reference
implementation.

JDBC API 2.0

The JDBC API lets you invoke SQL commands from Java programming language
methods. You use the JDBC API in an enterprise bean when you override the default container-
managed persistence or have a session bean access the database. With container-managed
persistence, database access operations are handled by the container, and your enterprise bean
implementation contains no JDBC code or SQL commands. You can also use the JDBC API
from a servlet or JSP page to access the database directly without going through an enterprise
bean.

The JDBC API has two parts: an application-level interface used by the application
components to access a database, and a service provider interface to attach a JDBC driver to the
J2EE platform.
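As a brief sketch of the application-level interface, the following program issues an SQL command from the Java programming language. The database URL, table, columns, and credentials are placeholders of our own, and a suitable JDBC driver (here assumed to be the MySQL driver) must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // Register the JDBC driver (driver class name assumed).
        Class.forName("com.mysql.jdbc.Driver");
        // Open a connection through the application-level JDBC interface.
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/sensordb", "user", "password");
        try {
            Statement st = con.createStatement();
            // Invoke an SQL command (hypothetical table and columns).
            ResultSet rs = st.executeQuery("SELECT node_id, status FROM nodes");
            while (rs.next()) {
                System.out.println(rs.getInt("node_id") + " " + rs.getString("status"));
            }
        } finally {
            con.close();
        }
    }
}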

MYSQL

MySQL is a relational database management system (RDBMS) which has more than 11
million installations. The program runs as a server providing multi-user access to a number of
databases.

MySQL is owned and sponsored by a single for-profit firm, the Swedish company
MySQL AB, now a subsidiary of Sun Microsystems, which holds the copyright to most of the
code base. The project's source code is available under terms of the GNU General Public
License, as well as under a variety of proprietary agreements.

"MySQL" is officially pronounced (My S Q L), not "My sequel" This adheres to the
official ANSI pronunciation; SEQUEL was an earlier IBM database language, a predecessor to
the SQL language. The company does not take issue with the pronunciation "My sequel" or
other local variations.

Uses

MySQL is popular for web applications and acts as the database component of the
LAMP, BAMP, MAMP, and WAMP platforms (Linux/BSD/Mac/Windows-Apache-MySQL-
PHP/Perl/Python), and for open-source bug tracking tools like Bugzilla. Its popularity for use
with web applications is closely tied to the popularity of PHP and Ruby on Rails, which are often
combined with MySQL. PHP and MySQL are essential components for running popular content
management systems such as Drupal, e107, Joomla!, WordPress and some BitTorrent trackers.
Wikipedia runs on MediaWiki software, which is written in PHP and uses a MySQL database.
Platforms and interfaces


MySQL is written in C and C++. The SQL parser uses yacc and a home-brewed lexer.
MySQL works on many different system platforms, including AIX, BSDi, FreeBSD, HP-UX,
i5/OS, Linux, Mac OS X, NetBSD, Novell NetWare, OpenBSD, eComStation, OS/2 Warp,
QNX, IRIX, Solaris, SunOS, SCO OpenServer, SCO UnixWare, Sanos, Tru64, Windows 95,
Windows 98, Windows ME, Windows NT, Windows 2000, Windows XP, and Windows Vista.
A port of MySQL to OpenVMS is also available.

Libraries for accessing MySQL databases are available in all major programming
languages with language-specific APIs. In addition, an ODBC interface called MyODBC allows
additional programming languages that support the ODBC interface to communicate with a
MySQL database, such as ASP or ColdFusion. The MySQL server and official libraries are
mostly implemented in ANSI C/ANSI C++.

To administer MySQL databases one can use the included command-line tool
(commands: mysql and mysqladmin). Also downloadable from the MySQL site are GUI
administration tools: MySQL Administrator and MySQL Query Browser. Both of the GUI tools
are now included in one package called MySQL GUI Tools.

In addition to the above-mentioned tools developed by MySQL AB, there are several
other commercial and non-commercial tools available. Examples include phpMyAdmin, a free Web-based administration interface implemented in PHP, and SQLyog Community Edition, a free desktop-based GUI tool.

Issues

There has been some controversy regarding the distribution of GPL licensed MySQL library files
with other open source applications. The biggest controversy arose with PHP, which has a
license incompatible with the GPL. This was later resolved when MySQL created a license
exception that explicitly allows the inclusion of the MySQL client library in open source projects
that are licensed under a number of OSI-compliant Open Source licenses, including the PHP
License.

In September 2005, MySQL AB and SCO forged a partnership for "joint certification,
marketing, sales, training and business development work for a commercial version of the
database for SCO's new Open Server 6 version of Unix". SCO raised controversy beginning in
2003 with a number of high-profile lawsuits related to the Linux Operating System. Various
MySQL employees expressed that the company was committed to serving its end users,
regardless of their operating system choice, that the company would leave it to the courts to
resolve the SCO licensing controversy, and that other common open source databases have also
been ported to, and support, SCO OpenServer.

In October 2005, Oracle Corporation acquired Innobase OY, the Finnish company that
developed the InnoDB storage engine that allows MySQL to provide such functionality as
transactions and foreign keys. A press release issued by Oracle after the acquisition mentioned that the contracts that make the company's software available to MySQL AB would be due for renewal (and presumably renegotiation) some time in 2006. During the MySQL Users Conference in April 2006, MySQL issued a press release confirming that MySQL and Innobase OY had agreed to a multi-year extension of their licensing agreement.

In February 2006, Oracle Corporation acquired Sleepycat Software, makers of the Berkeley
DB, a database engine onto which another MySQL storage engine was built.

Criticism

MySQL's divergence from the SQL standard on the subject of treatment of NULL values and
default values has been criticized. Its handling of dates in versions prior to 5.0 allows storing a
date with a day beyond the last day of a month with fewer than 31 days, and arithmetic
operations are vulnerable to either integer overflow or floating point truncation. Since version 5
of the server, the treatment of illegal values varies according to use of the "SQL Mode" set in the
server, which is by default set to the unusually tolerant state that critics dislike.
When the beta version of MySQL 5.0 was released in March 2005, David Axmark, a co-founder of MySQL, said that "People have been criticizing MySQL since we started for not having stored procedures, triggers and views" and "We're fixing 10 years of criticism in one release." MySQL 5.0's 13 October build 5.0.15 was released for production use on 24 October 2005, after more than two million downloads in the 5.0 beta cycle.

Critical bugs sometimes do not get fixed for long periods of time. One example is a bug of critical status that had existed since 2003.

MySQL shows poor performance when used for data warehousing; this is partly due to its inability to utilize multiple CPU cores for processing a single query.
CHAPTER 5

PROJECT DESCRIPTION
5.1 PROJECT OVERVIEW
A wireless sensor network consists of hundreds or even thousands of small nodes which are distributed over the network. These nodes sense sensitive data at their locations and send the sensed messages to the base station. The base station verifies the data and the ID that are sent by the sensor nodes. These sensor nodes are deployed in hostile environments and are left unattended, which makes it possible for an adversary to compromise the sensor nodes and make many replicas of them. These replica nodes are dangerous to network communication. Advances in robotics have made it possible to develop a variety of new architectures for autonomous wireless sensor networks. Mobile nodes in network communication are useful for network repair and event detection. These advanced sensor network architectures could be used in a variety of applications, including intruder detection, border monitoring, and military patrols. Compromised mobile nodes inject fake data, disrupt network operations, and eavesdrop on network communications. A particularly dangerous attack is the compromised node attack, in which the adversary takes the secret keying materials from a compromised node and generates a large number of attacker-controlled replicas throughout the network. An adversary can take a single sensor's ID and make many replicas of it [7]. The time and effort needed to inject these replica nodes into the network should be much less than the effort needed to capture and compromise the equivalent number of original nodes. The replica nodes are controlled by the adversary. One solution to stop replica node attacks is to prevent the adversary from extracting the secret key materials from the mobile nodes by using tamper-resistant hardware, which makes extraction significantly harder and more time consuming.

5.2 MODULES

 Node Formation
 Find Attacker
 Replica Attack and Detection Using Bloom Filter
 Validation of Node
5.3 MODULES DESCRIPTION:

1. Node Formation

Neighboring node IDs are presented with a constant size using a Bloom filter. The Bloom filter output (BFO) is used as a proof. A newly deployed node generates different proofs according to the collected neighboring node IDs, until it has collected all of the neighboring node IDs. The proofs are delivered to a randomly selected node in the network. Here, the delivery frequency increases in proportion to the number of collected neighboring node IDs. The strategy slowly increases traffic between the neighboring nodes and their randomly selected nodes.

2. Find Attacker

With regard to this attack, it is assumed that an attacker captures only a small fraction of the nodes in the network, because capturing a large fraction may not require replicas any more, and it may be more costly and detectable. It is reasonable to assume that an attacker captures only a few nodes and obtains secret information from the captured nodes. The attacker then makes replicas by storing this secret information in a large number of commodity sensor nodes. The replicas are evenly deployed in the network so as to achieve his/her objectives, such as eavesdropping on network communications or controlling the target areas. Since the attacker already knows the secret information of the captured node, it is futile to employ existing cryptographic solutions, whose security depends on that secret information.

3. Replica Attack and Detection Using Bloom Filter

         An attacker captures one or more nodes deployed in the network and then obtains secret
information from them. Next, the attacker makes multiple replicas by using this information and
then deploys them into targeted areas. Here, the neighboring nodes recognize replicas as newly
deployed nodes. For obtaining useful information from the neighboring nodes in the target areas
or controlling the neighboring nodes, replicas should prove that they are legitimate nodes with
valid secret information. However, since replicas already know the secret information, they can
prove it to the neighboring nodes without difficulty.
4. Validation of Node

The RDB-R scheme consists of three stages: proof generation, proof delivery, and proof validation. Henceforth, we explain the three stages with a newly deployed node A, a neighboring node C, and a witness node U. In the first stage, a proof for identifying a replica is created and updated in the newly added node A, which may be a replica. The second stage checks whether neighboring node IDs are registered in a proof (i.e., BFO_A, the BFO of node A) or whether the received IDs belong to a two-hop neighbor list. The final stage determines whether the source node, i.e., the node generating the proof, is a replica, through a subset checking method.
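A minimal witness-side sketch of the validation stage in Java follows. It encodes one plausible reading of the subset checking method: since a legitimate node only ever adds neighboring node IDs, every bit set in its earlier BFO should still be set in a later one, so a report that clears previously set bits is flagged. The class and method names, and this exact flagging rule, are our own illustrative assumptions rather than the scheme's definitive logic.

import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Witness node U stores the latest proof (BFO) per source node ID and
// applies a subset check against each newly delivered proof.
public class ProofValidator {
    private final Map<Integer, BitSet> storedProofs = new HashMap<Integer, BitSet>();

    // Returns true if the source node is flagged as a replica.
    public boolean validate(int sourceNodeId, byte[] bfo) {
        BitSet received = BitSet.valueOf(bfo);
        BitSet previous = storedProofs.get(sourceNodeId);
        if (previous != null) {
            BitSet lostBits = (BitSet) previous.clone();
            lostBits.andNot(received); // bits set before but missing now
            if (!lostBits.isEmpty()) {
                return true; // conflicting proofs under one ID: likely replica
            }
        }
        storedProofs.put(sourceNodeId, received);
        return false;
    }
}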
CHAPTER 6

SYSTEM TESTING AND IMPLEMENTATION

6.1 SYSTEM IMPLEMENTATION

Implementation is the process of converting a new or revised system design into an operational one. Implementation is the final and most important phase. It involves user training, system testing, and the successful running of the developed system. The users test the developed system, and changes are made according to their needs. The testing phase involves testing the developed system using various kinds of data.

An elaborate set of test data is prepared, and the system is tested using that test data. The corrections are also noted for future use. The users are trained to operate the developed system. Both the hardware and software securities are put in place to run the developed system successfully in the future.

Education of the user should really have taken place much earlier in the project, when the users were being involved in the investigation and design work. Training has to be given to the users regarding the new system. Once the users have been trained, the system can be tested, and the hardware and software securities are put in place to run the developed system successfully in the future.

6.2 REQUIREMENTS GATHERING

The first phase of a software project is to gather requirements. Gathering software requirements begins as a creative brainstorming process in which the goal is to develop an idea for a new product that no other software vendor has thought of. New software product ideas normally materialize as a result of analyzing market data and interviewing customers about their product needs.
The main function of the requirements gathering phase is to take an abstract idea that fills
a particular need or that solves a particular problem and create a real world project with a
particular set of objectives, a budget, a timeline and a team.

6.3 DESIGN

The design phase is the one where the technical problems are really solved and the project becomes a reality. In this phase, the relationships among the code, database, user interface, and classes begin to take shape in the minds of the project team. During the design phase, the project team is responsible for seven deliverables:

 Data model design


 User interface design
 Functional specifications
 Documentation plan
 Software Quality Assurance (SQA) test plan
 Test cases
 Detailed design specifications.

Data model or schema

The primary objective in designing the data model or schema is to meet the high level
software specifications that the requirement document outlines. Usually the database
administrator (DBA) designs the data model for the software project.

User interface

The user interface is the first part of the software application that is visible to the user.
The UI provides the user with the capability of navigating through the software application. The
UI is often known in the software industry as the look and feel aspect of the software application.
The design of the UI must be such that the software application provides an interface that is as user-friendly and as cosmetically attractive as possible.
Prototype

After the data model and UI design are ready, the project team can design the prototype for the project. Sales and marketing teams generally cannot wait to get the prototype in hand to show it off to sales prospects and at industry trade shows.

Functional specification

It provides the definitive overview of what is included in the project. This deliverable incorporates many of the documents prepared up to this point, gathering them into one place for easy reference.

6.4 TESTING

Testing is not isolated to only one phase of the project but should be exercised in all
phases of the project. After developing each unit of the software product, the developers go
through an extensive testing process of the software. After the development of software modules,
developers perform a thorough unit testing of each software component. They also perform
integration testing of all combined modules.

6.5 INTEGRATION TESTING

When the individual components are working correctly and meeting the specified objectives, they are combined into a working system. This integration is planned and coordinated so that when a failure occurs, there is some idea of what caused it. In addition, the order in which components are tested affects the choice of test cases and tools. The test strategy explains why and how the components are combined to test the working system. It affects not only the integration timing and coding order, but also the cost and thoroughness of the testing.
6.5.1 BOTTOM-UP INTEGRATION

One popular approach for merging components to the larger system is bottom-up testing.
When this method is used, each component at the lowest level of the system hierarchy is tested
individually. Then, the next components to be tested are those that call the previously tested
ones. This approach is followed repeatedly until all components are included in the testing.

The bottom-up method is useful when many of the low-level components are general-purpose utility routines that are invoked often by others, when the design is object-oriented, or when the system is integrated using a large number of stand-alone reused components.

6.5.2 TOP-DOWN INTEGRATION

Many developers prefer to use a top-down approach, which in many ways is the reverse
of bottom-up. The top level, usually one controlling component, is tested by itself. Then, all
components called by the tested components are combined and tested as a larger unit. This
approach is reapplied until all components are incorporated.

A component being tested may call another that is not yet tested, so we write a stub, a special-purpose program that simulates the activity of the missing component. The stub answers the calling sequence and passes back output data that lets the testing process continue.

For example, if a component is called to calculate the next available address but that component is not yet tested, then a stub is created for it that may pass back a fixed address, which allows testing to proceed. As with drivers, stubs need not be complex or logically complete.
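As a minimal sketch of such a stub in Java (the class and method names are our own, invented for this example):

// Stub standing in for an untested address-calculation component.
// It answers the calling sequence and passes back a fixed address so
// that testing of its callers can proceed; the real component, with
// actual allocation logic, replaces it later.
public class AddressCalculatorStub {
    public int nextAvailableAddress() {
        return 0x1000; // fixed value; deliberately not logically complete
    }
}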
6.5.3 BIG-BANG INTEGRATION

When all components are tested in isolation, it is tempting to mix them together as the
final system and see if it works the first time. Many programmers use the big-bang approach for
small systems, but it is not practical for large ones.

In fact, since big-bang testing has several disadvantages, it is not recommended for any system. First, it requires both stubs and drivers to test the independent components. Second, because all components are merged at once, it is difficult to find the cause of any failure. Finally, interface faults cannot be distinguished easily from other types of faults.

6.6 BLACK BOX TESTING

Black box testing involves testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge about the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used.
Also, due to the nature of black box testing, the test planning can begin as soon as the
specifications are written.  The opposite of this would be glass box testing where test data are
derived from direct examination of the code to be tested.  For glass box testing, the test cases
cannot be determined until the code has actually been written.  Both of these testing techniques
have advantages and disadvantages, but when combined, they help to ensure thorough testing of
the product.

6.7 WHITE BOX TESTING

White box testing uses an internal perspective of the system to design test cases based on internal structure. It is also known as glass box, structural, clear box, and open box testing. It requires programming skills to identify all paths of the software. The tester chooses test case inputs to exercise all paths and to determine the appropriate outputs. The analogue in electrical hardware testing is that every node in a circuit may be probed and measured, e.g., in-circuit testing (ICT).

Since the tests are based on the actual implementation, when the implementation changes the tests will probably also change. For instance, ICT needs an update if a component value changes, and needs a modified or new fixture if the circuit changes. This adds financial resistance to the change process, so buggy products may stay buggy. Automated Optical Inspection (AOI) offers similar component-level correctness checking without the cost of ICT fixtures; however, changes still require test updates.

While white box testing is applicable at the unit, integration, and system levels of the software testing process, it is typically applied to the unit. Though it normally tests paths within a unit, it can also test paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover an overwhelming number of test cases, it might not detect unimplemented parts of the specification or missing requirements. It does, however, ensure that all paths through the test object are executed.

Typical white box test design techniques include:

 Control flow testing


 Data flow testing

6.7.1 WHITE BOX TESTING STRATEGY

The white box testing strategy deals with the internal logic and structure of the code. Tests written based on the white box testing strategy incorporate coverage of the code written: branches, paths, statements, and the internal logic of the code.

In order to implement white box testing, the tester has to deal with the code and hence should possess knowledge of coding and logic, i.e., the internal working of the code. White box testing also needs the tester to look into the code and find out which unit, statement, or chunk of the code is malfunctioning.
6.8 SCREENSHOTS
CHAPTER 7
CONCLUSION
In this paper, we proposed a low-priced and energy-efficient solution for detecting duplicate nodes in static wireless sensor networks. The proposed solution does not use any additional hardware, whereas existing systems need expensive hardware such as a GPS receiver. The proposed solution exhibits better duplicate node detection performance than existing schemes: one or more replicas are detected within a short time, and high performance is achieved with less energy. We also conclude that duplicate nodes can be detected by using a testing technique called sequential probability testing, through which compromised sensor nodes are detected efficiently in mobile sensor networks.
REFERENCES

[1] C.P. Mayer, "Security and Privacy Challenges in the Internet of Things," Electronic Comm. EASST, vol. 17, pp. 1-12, 2009.

[2] D. Miorandi, S. Sicari, F. De Pellegrini, and I. Chlamtac, "Internet of Things: Vision, Applications and Research Challenges," J. Ad Hoc Networks, vol. 10, no. 7, pp. 1497-1516, Sept. 2012.

[3] B. Parno, A. Perrig, and V. Gligor, "Distributed Detection of Node Replication Attacks in Sensor Networks," Proc. IEEE Symp. Security and Privacy, pp. 49-63, 2005.

[4] M. Conti, R.D. Pietro, L. Mancini, and A. Mei, "Distributed Detection of Clone Attacks in Wireless Sensor Networks," IEEE Trans. Dependable and Secure Computing, vol. 8, no. 5, pp. 685-698, Sept. 2011.

[5] C.A. Melchor, B. Ait Salem, and P. Gaborit, "Active Detection of Node Replication Attacks," Int'l J. Computer Science and Network Security, vol. 9, no. 2, pp. 13-21, 2009.

[6] H. Choi, S. Zhu, and T.F.L. Porta, "SET: Detecting Node Clones in Sensor Networks," Proc. Third Int'l Conf. Security and Privacy in Comm. Networks and the Workshops (SecureComm '07), pp. 341-350, 2007.

[7] Z. Li and G. Gong, "DHT-Based Detection of Node Clone in Wireless Sensor Networks," Proc. First Int'l Conf. Ad Hoc Networks, pp. 240-255, 2009.

[8] K. Xing, F. Liu, X. Cheng, and D.H.C. Du, "Real-Time Detection of Clone Attacks in Wireless Sensor Networks," Proc. 28th Int'l Conf. Distributed Computing Systems (ICDCS '08), pp. 3-10, 2008.

[9] Y. Zeng, J. Cao, S. Zhang, S. Guo, and L. Xie, "Random-Walk Based Approach to Detect Clone Attacks in Wireless Sensor Networks," IEEE J. Selected Areas Comm., vol. 28, no. 5, pp. 677-691, June 2010.

[10] B. Zhu, S. Setia, S. Jajodia, S. Roy, and L. Wang, "Localized Multicast: Efficient and Distributed Replica Detection in Large-Scale Sensor Networks," IEEE Trans. Mobile Computing, vol. 9, no. 7, pp. 913-926, July 2010.
