A PROJECT REPORT
Submitted for the partial fulfillment of the requirement for the Degree of Master of
Computer Application
By
BALAJI A
REG No: A20105PCA6214
JUNE 2022
BONAFIDE CERTIFICATE
Dr. R. Velmurugan, M.C.A., M.Phil., Ph.D., M.B.A.          Dr. S. Sasikala, M.C.A., M.Phil., Ph.D.
Assistant Professor, PG and Research Dept. of Computer Science,          Associate Professor, Dept. of MCA,
Presidency College, Chennai          IDE, University of Madras.
Date:06/08/2022
Examiners:
1. Name :
Signature :
2. Name :
Signature :
ACKNOWLEDGEMENT
I wish to thank here the various eminent personalities whose indomitable spirit helped
me to taste the fruits of success.
I would like to express my sincere thanks to all the staff members of our
department for their support throughout the project.
Last but not least, I bow my head to my parents for their inspiration and
encouragement, without which I could not have completed this project successfully.
CONSULTANT OFFICE:
USA
6 LEVAN CT,
BRIDGEWATER,
NEW JERSEY - 08807
PHONE: 908-393-2156
Date: 27 - June - 2022
(HR MANAGER)
ADDRESS: 13/6, KUMARAN COLONY, 3RD STREET, VADAPALANI, CHENNAI, INDIA - 600 026.
FAX: 9144-45032415 | HTTP://WWW.LUMINOSOFT.NET | E-MAIL: tffi@LUMINOSOFT.NET
INDEX
1. INTRODUCTION
A wireless sensor network (WSN) is composed of a large number of sensor nodes with
limited resources. Since WSNs can be deployed in an unattended or hostile
environment, great importance attaches to data authenticity and integrity in WSNs. In
this respect, many authentication schemes have been proposed; although their
feasibility has been demonstrated, the computation overhead is still rather high.
Sensor networks are vulnerable to the false data injection attack, by which the
adversary injects false data, attempting to deceive the base station (BS, or data
sink), and to the path-based denial of service (PDoS) attack, by which the adversary
sends bogus messages to randomly selected nodes so as to waste the energy of
forwarding nodes. Several so-called en-route filtering schemes have been proposed to
quickly discover and remove the bogus event reports injected by the adversary. Here,
“en-route filtering” means that not only the destination node but also the
intermediate nodes can check the authenticity of a message, in order to reduce the
number of hops a bogus message travels and, thereby, conserve energy. Hence, it is
especially useful in mitigating the false data injection attack and the PDoS attack [13].
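The per-hop check described above can be sketched in a language-agnostic way (Python here, since the report's implementation is not shown at this point); the single shared verification key and the message format are illustrative assumptions of this sketch, not the CFA construction itself:

```python
import hmac
import hashlib

# Hypothetical illustration of en-route filtering: every forwarding node
# holds a verification key and re-checks the report's MAC, so a bogus
# report is dropped at the first hop instead of traveling to the sink.

KEY = b"shared-verification-key"  # assumption: key distribution is not shown

def make_report(event: bytes, key: bytes = KEY) -> tuple:
    """Source node attaches a MAC to the sensed event."""
    return event, hmac.new(key, event, hashlib.sha256).digest()

def forward(report: tuple, key: bytes = KEY) -> bool:
    """Each intermediate node verifies before forwarding; False = filtered."""
    event, mac = report
    return hmac.compare_digest(mac, hmac.new(key, event, hashlib.sha256).digest())

genuine = make_report(b"temperature=37C")
bogus = (b"temperature=99C", b"\x00" * 32)  # injected false data

hops_genuine = [forward(genuine) for _ in range(5)]  # passes every hop
filtered_at_first_hop = not forward(bogus)           # dropped immediately
```

Because the bogus report fails verification at the very first forwarding node, the downstream nodes never spend energy relaying it, which is exactly the saving en-route filtering aims for.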
1.1 Company Profile
The company focuses on providing innovative business solutions that can be delivered
quickly and cost-effectively. The company provides solutions for sales analytics and
sales forecasting that fit multiple business areas, including banking, insurance,
manufacturing, and supply chain. Technology should free you, not tie you down; it is
this philosophy that has been the driving force behind all our innovations and
product developments. Having experienced resources, we help you run the business in
real time, besides addressing the entire business cycle from transaction to
analytics. You can easily meet various business demands across several industries
and verticals.
All of these solutions are configurable, extendable, and scalable to meet the needs
of the business.
We bring the best people and the best ideas together to deliver value to our
customers. We believe that companies can innovate and deliver outstanding service
only if they tap the commitment, energy, and imagination of their employees. Value
must therefore be created for those employees in order to motivate and enable them.
Value for employees includes being treated respectfully and being involved in
decision making.
We let employees chart their own career path to suit their interests and inherent
strengths, with opportunities for continuous growth and learning and for acquiring
new credentials.
1.2 Project Overview
Network security consists of the provisions and policies adopted by the network
administrator to prevent and monitor unauthorized access, misuse, modification, or
denial of the computer network and network-accessible resources. Sensor networks are
vulnerable to the false data injection attack and the path-based denial of service
(PDoS) attack. In contrast to most methods, which rely on complicated security
associations among sensor nodes, our design, which directly exploits an en-route
filtering hash function, appears to be novel. In the proposed system, an en-route
filtering scheme, enabling each forwarding node to check the authenticity of the
received message, acts as a defense against these two attacks.
1.3 ABSTRACT
Sensor networks are vulnerable to the false data injection attack, by which the
adversary injects false data, attempting to deceive the base station (BS, or data
sink), and to the path-based denial of service (PDoS) attack, by which the adversary
sends bogus messages to randomly selected nodes so as to waste the energy of
forwarding nodes. While conventional authentication alone is insufficient for
resolving these security conflicts, an en-route filtering scheme, enabling each
forwarding node to check the authenticity of the received message, acts as a defense
against these two attacks.
2. SYSTEM ANALYSIS
Our project meets the performance metrics by achieving the following feasibility
studies:
• Technical Feasibility
This is concerned with specifying equipment and software that will successfully
satisfy the user requirements. The technical needs of the system may vary
considerably from one project to another. This project produces its output in
milliseconds, thereby avoiding time complexity. Here, we switch the sensor nodes on
and off under specific conditions.
• Facility to communicate data to distant locations.
Our project deals with Wi-Fi nodes, which can communicate with distant nodes. The
configuration of the system is of greater importance than the actual make of the
hardware.
• Economic Feasibility
Our project reduces cost and increases the lifetime of the Wi-Fi nodes by making
them go into a semi-sleep state, which reduces power consumption.
• Operational Feasibility
Only minor changes were made to the system, which will continue to run as before.
• What new skills will be required? Do the existing staff members have these skills?
If not, can they be trained?
The concept in our project runs automatically, so no new skills are required.
This feasibility study is carried out by a small group of people who are familiar
with information system techniques. Proposed projects are beneficial only if they
can be turned into information systems that will meet the operating requirements of
the organization. This test of feasibility asks whether the system will work when it
is developed and installed.
In the existing system, sensor networks are vulnerable to the false data injection
attack, by which the adversary injects false data, attempting to deceive the base
station (BS, or data sink), and to the path-based denial of service (PDoS) attack,
by which the adversary sends bogus messages to randomly selected nodes so as to
waste the energy of forwarding nodes. While conventional authentication alone is
insufficient for resolving these security conflicts, an en-route filtering scheme,
enabling each forwarding node to check the authenticity of the received message,
acts as a defense against these two attacks.
Together with the redundancy property of sensor networks, which means that an event
can be simultaneously observed by multiple sensor nodes, the proposed CFA scheme can
be extended into an en-route filtering scheme. In addition to the resilience against
false data injection and PDoS attacks, and in contrast to most methods, which rely
on complicated security associations among sensor nodes, our design, which directly
exploits an en-route filtering hash function, appears to be novel. We examine the
CFA and CFAEF schemes from both the theoretical and numerical aspects to demonstrate
their efficiency and effectiveness.
The scheme has the following properties: 1) resilience to node compromise (RNC),
which means that the compromised nodes cannot forge the messages sent from the
genuine nodes; 2) immediate authentication (IA), which can be thought of as a
synonym for en-route filtering and means that each node can authenticate a message
as soon as it is received; 3) independence of network setting (INS), which means
that CFA can be applied to networks with different network settings; and
4) efficiency (EFF), which means that CFA incurs low computation and communication
overhead. A CFA-based en-route filtering (CFAEF) scheme can be constructed in such a
way that the source node sends the destination node a message together with the
corresponding MAC. The advantages of applying CFA to MAC generation are that the
filtering capability can be improved, resilience against the FEDoS attack can be
achieved, and the impractical assumptions of prior schemes can be avoided.
3. SYSTEM CONFIGURATION
Front-end : .NET (C#)
OPERATING SYSTEM
An operating system (OS) is an interface between hardware and user; it is
responsible for the management and coordination of activities and the sharing of the
resources of the computer. The operating system acts as a host for applications that
are run on the machine. As a host, one of the purposes of an operating system is to
handle the details of the operation of the hardware. This relieves application
programs from having to manage these details and makes it easier to write
applications.
Almost all computers, including handheld computers, desktop computers,
supercomputers, and even video game consoles, use an operating system of some type.
Some of the oldest models may, however, use an embedded operating system that may be
contained on a compact disc or other data storage device. Operating systems offer a
number of services to application programs and users. Applications access these
services through application programming interfaces (APIs) or system calls.
By invoking these interfaces, the application can request a service from the
operating system, pass parameters, and receive the results of the operation.
Users may also interact with the operating system with some kind of software user
interface (UI), like typing commands using a command-line interface (CLI) or using a
graphical user interface (GUI). For hand-held and desktop computers, the user
interface is generally considered part of the operating system. On large multi-user
systems like UNIX and Unix-like systems, the user interface is generally implemented
as an application program that runs outside the operating system. (Whether the user
interface should be included as part of the operating system is a point of
contention.)
Common contemporary operating systems include Microsoft Windows, Mac
OS, Linux, BSD and Solaris. Microsoft Windows has a significant majority of market
share in the desktop and notebook computer markets, while servers generally run on
Unix or Unix-like systems. Embedded device markets are split amongst several
operating systems.
Windows XP is an operating system produced by Microsoft for use on personal
computers running x86 and IA-64 processors, including home and business desktops,
notebook computers, and media centers. The name "XP" is short for "experience".
Windows XP is the successor to both Windows 2000 Professional and Windows Me, and is
the first consumer-oriented operating system produced by Microsoft to be built on
the Windows NT kernel and architecture.
Windows XP was first released on 25 October 2001, and over 400 million
copies were in use in January 2006, according to an estimate in that month by an IDC
analyst. It was succeeded by Windows Vista, which was released to volume license
customers on 8 November 2006, and worldwide to the general public on 30 January
2007.
Direct OEM and retail sales of Windows XP ceased on 30 June 2008, although it
remained available through certain channels for some time afterward.
The increased functionality of the Encrypting File System (EFS) has significantly
enhanced the power of Windows XP, providing additional flexibility for corporate
users when deploying security solutions based on encrypted data files:
• Multi-user support for encrypted files in the shell user interface (UI)
NEW FEATURES
Because Windows XP is built on a Windows 2000 base, you will find that most of the
security features of Windows 2000 are present. Windows XP also goes beyond Windows
NT 4.0 and Windows 2000 in a number of ways. Whether you are using a standalone or a
networked computer, you will find that authentication is even more manageable than
before. Here are some highlights of authentication features and configuration:
• Anonymous Group. Previously, the Anonymous Group was granted access to any
resource to which the Everyone Group was granted access, even though anonymous users
are not required to supply usernames and passwords for authentication. In Windows
XP, anonymous users are no longer members of the Everyone Group.
• Guest-Only Network Logons. By default, computers that are not joined to a domain
are configured to use Guest-only network logons. All users, including anonymous
users, accessing resources on such a computer over the network are forced to use the
Guest account for authentication and are subsequently given the same access rights
and privileges as the Guest account.
• Service Accounts. Two new service accounts have been added to Windows XP to
enhance the granularity of service account access: Local Service for services that
run locally, and Network Service for services that run on the network. The Local
System account remains available as well, and is the only account with full system
privileges.
• Blank Password Restrictions. By default, restrictions on computers that are not
joined to a domain prevent users with blank passwords from logging on over the
network. This limits the exposure of such accounts on computers connected to the
Internet. All blank-password access is restricted to local logons only.
• Password Reset Wizard. This Wizard is provided in the event a user forgets his or
her password. It creates a disk that can be used to reset a local account password
(it cannot be used to reset a domain account password).
• Stored User Names and Passwords. Credentials entered on one computer are not
automatically valid on another workstation, even if the username and password are
the same. Users can store username and password combinations for access to other
resources, such as secured Web sites. This information is stored in the user's
profile and can travel around the network with the user if roaming profiles are
enabled.
• Fast User Switching. Fast User Switching allows multiple users to share the same
computer, switching between sessions without logging off another user who is
currently logged on to the system. Fast User Switching uses Terminal Services
technology to provide this ability. This feature is only available on computers that
are not joined to a domain and cannot be used remotely.
The .NET Framework is designed to ensure that code based on the .NET Framework can
integrate with any other code. The common language runtime provides core services
such as memory management, thread management, and remoting, and also ensures
security and robustness. Code that targets the runtime is known as managed code,
while code that does not target the runtime is known as unmanaged code.
The .NET Framework can be hosted by unmanaged components that load the common
language runtime into their processes and initiate the execution of managed code,
thereby creating a software environment that can exploit both managed and unmanaged
features. The .NET Framework not only provides several runtime hosts, but also
supports the development of third-party runtime hosts.
Internet Explorer is an example of an unmanaged application that hosts the runtime
(in the form of a MIME type extension). Using Internet Explorer to host the runtime
enables you to embed managed components or Windows Forms controls in HTML documents.
The common language runtime manages memory, thread execution, code execution, code
safety verification, compilation, and other system services, and provides the
following benefits:
• Security.
• Robustness.
• Productivity.
• Performance.
SECURITY
The runtime enforces code access security. Managed components are awarded varying
degrees of trust, depending on a number of factors that include their origin (such
as the Internet, enterprise network, or local computer). This means that a managed
component might or might not be able to perform sensitive functions such as
file-access operations, even when it is being used in the same active application.
ROBUSTNESS:
The runtime enforces code robustness by implementing a strict type- and
code-verification infrastructure called the common type system (CTS). The CTS
ensures that all managed code is self-describing. The managed environment of the
runtime also eliminates many common software issues, such as memory leaks and
invalid memory references.
PRODUCTIVITY:
Programmers can write applications in their development language of choice, yet
take full advantage of the runtime, the class library, and components written in
other languages by other developers.
PERFORMANCE:
Although the common language runtime provides many standard runtime services,
managed code is never interpreted. A feature called just-in-time (JIT) compiling
enables all managed code to run in the native machine language of the system on
which it is executing.
The .NET Framework is designed to fulfill the following objectives: to provide a
consistent object-oriented programming environment, and to provide a code-execution
environment that ensures that code based on the .NET Framework can integrate with
any other code. The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. The common language runtime is the
foundation of the .NET Framework. It manages code at execution time, while also
enforcing strict type safety and other forms of code accuracy that ensure security
and robustness. In fact, the concept of code management is a fundamental principle
of the runtime. Code that targets the runtime is known as managed code, while code
that does not target the runtime is known as unmanaged code. The class library, the
other main component of the .NET Framework, is a comprehensive, object-oriented
collection of reusable types that you can use to develop applications ranging from
traditional command-line or GUI applications to applications based on the latest
innovations provided by ASP.NET, such as Web Forms and XML Web services.
For example, ASP.NET hosts the runtime to provide a scalable, server-side
environment for managed code. ASP.NET works directly with the runtime to enable Web
Forms applications and XML Web services, both of which are discussed later in this
topic. Internet Explorer's hosting of the runtime makes distributing managed
components or Windows Forms controls through HTML documents possible, but with
significant improvements that only managed code can offer, such as semi-trusted
execution and secure isolated file storage. The common language runtime manages
memory, thread execution, code execution, code safety verification, compilation,
and other system services. These features are intrinsic to the managed code that
runs on the common language runtime. The following sections describe the main
components and features of the .NET Framework. With regard to security, managed
components are awarded varying degrees of trust, depending on a number of factors
that include their origin (such as the Internet, enterprise network, or local
computer). This means that a managed component might or might not be able to
perform file-access operations, registry-access operations, or other sensitive
functions, even if it is being used in the same active application.
The runtime also enforces code robustness by implementing a strict type- and
code-verification infrastructure called the common type system (CTS). The CTS
ensures that all managed code is self-describing. The various Microsoft and third-party
language compilers generate managed code that conforms to the CTS. This means that
managed code can consume other managed types and instances, while strictly
enforcing type fidelity and type safety.
In addition, the managed environment of the runtime eliminates many common
software issues. For example, the runtime automatically handles object layout and
manages references to objects, releasing them when they are no longer being used.
This automatic memory management resolves the two most common application
errors: memory leaks and invalid memory references. The runtime also accelerates
developer productivity. For example, programmers can write applications in their
development language of choice, yet take full advantage of the runtime, the class
library, and components written in other languages by other developers. Any compiler
vendor who chooses to target the runtime can do so. Language compilers that target
the .NET Framework make the features of the .NET Framework available to existing
code written in that language, greatly easing the migration process for existing
applications.
While the runtime is designed for the software of the future, it also supports
software of today and yesterday. Interoperability between managed and unmanaged
code enables developers to continue to use necessary COM components and DLLs.
Although the runtime provides many standard runtime services, managed code is never
interpreted. A feature called just-in-time (JIT) compiling enables all managed code
to run in the native machine language of the system on which it is executing.
Finally, the runtime can
be hosted by high-performance, server-side applications, such as Microsoft® SQL
Server™ and Internet Information Services (IIS).
The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object oriented,
providing types from which your own managed code can derive functionality. This not
only makes the .NET Framework types easy to use, but also reduces the time
associated with learning new features of the .NET Framework. In addition, third-party
components can integrate seamlessly with classes in the .NET Framework. For
example, the .NET Framework collection classes implement a set of interfaces that
you can use to develop your own collection classes. Your collection classes will blend
seamlessly with the classes in the .NET Framework. As you would expect from an
object-oriented class library, the .NET Framework types enable you to accomplish a
range of common programming tasks, including tasks such as string management, data
collection, database connectivity, and file access. In addition to these common tasks,
the class library includes types that support a variety of specialized development
scenarios. For example, you can use the .NET Framework to develop console
applications, Windows GUI applications (Windows Forms), ASP.NET applications, XML
Web services, and Windows services. The Windows Forms classes, for instance, are a
comprehensive set of reusable types
that vastly simplify Windows GUI development. If you write an ASP.NET Web Form
application, you can use the Web Forms classes. Client applications are the closest
to a traditional style of application in Windows-based programming. These are the
types of applications that display windows or forms on the desktop, enabling a user
to perform a task. Client applications include applications such as word processors
and spreadsheets, as well as custom business applications such as data-entry tools,
reporting tools, and so on. Client applications usually employ windows, menus,
buttons, and other GUI elements, and they likely access local resources such as the file
system and peripherals such as printers. Another kind of client application is the
traditional ActiveX control (now replaced by the managed Windows Forms control)
deployed over the Internet as a Web page. This application is much like other
client applications: it is executed natively, has access to local resources, and
includes graphical elements. In the past, developers created such applications
using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a
rapid application development (RAD) environment such as Microsoft® Visual Basic®.
The .NET Framework incorporates aspects of these existing products into a single,
consistent development environment that drastically simplifies the development of
client applications.
The Windows Forms classes contained in the .NET Framework are designed to
be used for GUI development. You can easily create command windows, buttons,
menus, toolbars, and other screen elements with the flexibility necessary to
accommodate shifting business needs. For example, the .NET Framework provides
simple properties to adjust visual attributes associated with forms.
In some cases the underlying operating system does not support changing these
attributes directly, and in these cases the .NET Framework automatically recreates the
forms. This is one of many ways in which the .NET Framework integrates the
developer interface, making coding simpler and more consistent. Unlike ActiveX
controls, Windows Forms controls have semi-trusted access to a user's computer. This
means that binary or natively executing code can access some of the resources on the
user's system (such as GUI elements and limited file access) without being able to
access or compromise other resources. Because of code access security, many
applications that once needed to be installed on a user's system can now be safely
deployed through the Web. Your applications can implement the features of a local
application while being deployed like a Web page.
You create a new thread in Visual Basic .NET by declaring a variable of type
System.Threading.Thread and calling the constructor with the AddressOf statement
and the name of the procedure or method you want to execute on the new thread.
To start the execution of a new thread, use the Start method, as in the following
code:
MyThread.Start()
To stop the execution of a thread, use the Abort method, as in the following
code:
MyThread.Abort()
Besides starting and stopping threads, you can also pause threads by calling the Sleep
or Suspend methods, resume a suspended thread with the Resume method, and destroy a
thread with the Abort method, as in the following code:
Thread.Sleep(1000) ' Sleep is a shared method; it pauses the current thread
MyThread.Suspend()
MyThread.Abort()
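For readers outside the .NET world, the same start/sleep/stop lifecycle can be sketched in Python. Note that, unlike Thread.Abort and Thread.Suspend, Python threads are stopped cooperatively, so the Event flag below is an assumption of this sketch rather than a direct equivalent:

```python
import threading
import time

# Cross-language illustration of the thread lifecycle described above
# (the report's own snippets are Visual Basic .NET).

stop_flag = threading.Event()
ticks = []

def worker():
    while not stop_flag.is_set():   # cooperative stand-in for Abort
        ticks.append(time.time())
        time.sleep(0.01)            # analogue of Thread.Sleep

t = threading.Thread(target=worker)
t.start()                           # analogue of MyThread.Start()
time.sleep(0.05)                    # let the worker run briefly
stop_flag.set()                     # ask the worker to stop
t.join()                            # wait for it to finish
```

The worker checks the flag on every iteration, so stopping is prompt without the unsafe mid-operation termination that Abort can cause.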
Thread Priorities:
Every thread has a Priority property that determines how large a time slice it gets
to execute. The operating system allocates longer time slices to high-priority
threads than it does to low-priority threads. New threads are created with the value
Normal, but you can adjust the Priority property to any of the other values in the
ThreadPriority enumeration. Threads can run as either foreground or background
threads; background threads are identical to foreground threads, except that the
process exits once the last foreground thread has stopped. You can use the
IsBackground property to determine or change the background status of a thread.
Once a thread has started, you can call its methods to change its state. For
example, you can cause a thread to pause for a fixed number of milliseconds by
calling Thread.Sleep. The Sleep method takes as a parameter a timeout, which is the
number of milliseconds that the thread remains blocked. Calling Thread.Interrupt
wakes the destination thread out of any wait it may be in and causes an exception to
be raised. You can also pause a thread by calling Thread.Suspend. When a thread
calls Thread.Suspend on itself, the call blocks until another thread resumes it by
calling Thread.Resume. When a thread calls Thread.Suspend on another thread, the
call is non-blocking and causes the other thread to pause. Calling Thread.Resume
breaks another thread out of its suspended state and causes it to resume execution.
Note, however, that a suspended thread does not pause until the common language
runtime determines that it has reached a safe point. The Thread.Abort method stops a
running thread by raising a ThreadAbortException in the thread on which it is
invoked.
Overview of ADO.NET:
ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as
well as data sources exposed via OLE DB and XML. Data-sharing
consumer applications can use ADO.NET to connect to these data sources and
retrieve, manipulate, and update data.
ADO.NET cleanly factors data access from data manipulation into discrete
components that can be used separately or in tandem. ADO.NET includes .NET data
providers for connecting to databases, executing commands, and retrieving results.
Those results are either processed directly, or placed in an ADO.NET DataSet object
in order to be exposed to the user in an ad-hoc manner, combined with data from
multiple sources, or remoted between tiers. The ADO.NET DataSet object can also be
used independently of a .NET data provider to manage data local to the application
or sourced from XML.
The ADO.NET classes are found in System.Data.dll, and are integrated with the
XML classes found in System.Xml.dll. When compiling code that uses the System.Data
namespace, reference both System.Data.dll and System.Xml.dll. As application
development has evolved, new applications have become loosely coupled based on the
Web application model. More and more of today's applications use XML to encode data
to be passed over network connections. Web applications use
HTTP as the fabric for communication between tiers, and therefore must explicitly
handle maintaining state between requests. This new model is very different from the
connected, tightly coupled style of programming that characterized the client/server
era, where a connection was held open for the duration of the program's lifetime and
where no special handling of state was required.
Microsoft recognized that an entirely new programming model for data access was
needed, one that is built upon the .NET Framework. Building on the .NET Framework
ensures that the data access technology would be uniform: components would share a
common type system, design patterns, and naming conventions.
ADO.NET was designed to meet the needs of this new programming model: disconnected data
architecture, tight integration with XML, common data representation with the ability
to combine data from multiple and varied data sources, and optimized facilities for
interacting with a database, all native to the .NET Framework. Microsoft ADO.NET is
designed to leverage this new application development model. At the same time, the
programming model stays as similar as possible to ADO, so current ADO developers do
not have to start from the beginning.
While most new .NET-based applications will be written using ADO.NET, ADO remains
available to the .NET programmer through COM interoperability services. The Web is
the environment for which many new applications are written, and the concept of
working with a disconnected set of data has become a focal point in the programming
model.
XML and data access are intimately tied — XML is all about encoding data,
and data access is increasingly becoming all about XML. The .NET Framework does not
just support Web standards; it is built entirely on top of them. XML support is
built into ADO.NET at a very fundamental level. The XML classes in the .NET
Framework and ADO.NET are part of the same architecture: they integrate at many
different levels. You no longer have to choose between the data access set of
services and their XML counterparts; the ability to cross over between them exists
throughout the design of both.
ADO.NET Architecture:
ADO.NET was designed hand-in-hand with the XML classes in the .NET Framework
— both are components of a single architecture. ADO.NET and the XML classes in the
.NET Framework converge in the DataSet object. The DataSet can be populated with
data from an XML source, whether it is a file or an XML stream. The DataSet can be
written as World Wide Web Consortium (W3C) compliant XML, including its schema
as XML Schema definition language (XSD) schema, regardless of the source of the
data in the DataSet. Because the native serialization format of the DataSet is XML, it
is an excellent medium for moving data between tiers, making the DataSet an optimal
choice for remoting data and schema context to and from an XML Web service.
Ado.Net Components:
The ADO.NET components have been designed to factor data access from data
manipulation. There are two central components of ADO.NET that accomplish this:
the DataSet, and the .NET data provider, which is a set of components including the
Connection, Command, DataReader, and DataAdapter objects. The ADO.NET DataSet is
the core component of the disconnected architecture of ADO.NET. The DataSet is
explicitly designed for data access independent of any data source. As a result, it
can be
used with multiple and differing data sources, used with XML data, or used to manage
data local to the application. The DataSet contains a collection of one or more
DataTable objects made up of rows and columns of data, as well as primary key,
foreign key, constraint, and relation information about the data in the DataTable
objects. The other core element of the ADO.NET architecture is the .NET data
provider. The Connection object provides connectivity to a data source, and the
Command object enables access to database commands to return data,
modify data, run stored procedures, and send or retrieve parameter information. The
DataReader provides a high-performance stream of data from the data source. Finally,
the DataAdapter provides the bridge between the DataSet object and the data source.
The DataAdapter uses Command objects to execute SQL commands at the data source
to both load the DataSet with data, and reconcile changes made to the data in the
DataSet back to the data source. You can write .NET data providers for any data
source.
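The disconnected fill-edit-update cycle that the DataSet and DataAdapter implement can be illustrated with a minimal sketch. Python and sqlite3 stand in for ADO.NET here purely as an analogy, and the table and values are hypothetical:

```python
import os
import sqlite3
import tempfile

# Sketch of the disconnected pattern: fill a local structure, close the
# connection, edit offline, then reconnect briefly to reconcile changes.

db = os.path.join(tempfile.mkdtemp(), "demo.db")

# Seed a demo table (playing the role of the backing data source).
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE readings (node_id INTEGER PRIMARY KEY, temp REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", [(1, 36.5), (2, 37.1)])
conn.commit()

# "Fill": copy the rows into a plain in-memory dataset, then disconnect.
dataset = {row[0]: row[1] for row in conn.execute("SELECT * FROM readings")}
conn.close()

# Work on the local copy while holding no connection or database locks.
dataset[2] = 38.0

# "Update": reconnect briefly and reconcile the changes back to the source.
conn = sqlite3.connect(db)
conn.executemany("UPDATE readings SET temp = ? WHERE node_id = ?",
                 [(temp, node_id) for node_id, temp in dataset.items()])
conn.commit()
reconciled = dict(conn.execute("SELECT node_id, temp FROM readings"))
conn.close()
```

The key point mirrors the text above: between the fill and the update the application holds no open connection, which is what lets the pattern scale.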
Figure: ADO.NET Architecture
The design of the DataSet enables you to easily transport data to clients over the
Web using XML Web services, as well as allowing you to marshal data between .NET
components using .NET Remoting services. You can also remote a strongly typed
DataSet in this fashion. Note that DataTable objects can also be used with remoting
services, but cannot be transported via an XML Web service.
ASP.NET
ASP.NET is a unified Web development platform that provides the services necessary
for developers to build enterprise-class Web applications. While ASP.NET is largely
syntax-compatible with ASP, it also provides a new programming model and
infrastructure for more secure, scalable, and stable applications. ASP.NET is a
compiled, .NET-based environment; you can author applications in any .NET-compatible
language, including Visual Basic .NET, C#, and JScript .NET. Developers can easily
access the benefits of these technologies, which include the managed common language
runtime environment (CLR), type safety, inheritance, and so on.
ASP.NET has been designed to work seamlessly with WYSIWYG HTML editors and other
programming tools, including Microsoft Visual Studio .NET. Not only does this make
Web development easier, but it also provides all the benefits that these tools have
to offer, including a GUI that developers can use to drop server controls onto a Web
page, and fully integrated debugging support.
Developers can choose from the following two features when creating an ASP.NET
application, Web Forms and XML Web services, or combine these in any way they see
fit. Each is supported by the same infrastructure that allows you to use
authentication schemes, cache frequently used data, or customize your application's
configuration, to name only a few possibilities.
Web Forms allow us to build powerful forms-based Web pages. When building these
pages, we can use ASP.NET server controls to create common UI elements and program
them for common tasks. These controls allow us to rapidly build a Web Form out of
reusable built-in or custom components, simplifying the code of a page.
An XML Web service provides the means to access server functionality remotely.
Using Web services, businesses can expose programmatic interfaces to their data or
business logic, which in turn can be obtained and manipulated by client and server
applications. XML Web services enable the exchange of data in client-server or
server-server scenarios, using standards like HTTP and XML messaging to move data
across firewalls. XML Web services are not tied to a particular component technology
or object-calling convention. As a result, programs written in any language, using
any component model, and running on any operating system can access XML Web
services.
Each of these models can take full advantage of all ASP.NET features, as
well as the power of the .NET Framework and .NET Framework common language
runtime.
Accessing databases is a frequently used technique for displaying data to Web site
visitors. ASP.NET makes it easier than ever to access databases for this purpose. It
also allows us to manage the database from your code.
ASP.NET provides a simple model that enables Web developers to write logic that runs
at the application level. Developers can write this code in the Global.asax text
file or in a compiled class deployed as an assembly. This logic can include
application-level events, and developers can easily extend this model to suit the
needs of their Web application. These application APIs are familiar to ASP
developers and are readily compatible with all other .NET Framework APIs.
Implementing the IHttpHandler interface gives you a means of interacting with the
low-level request and response services of the IIS Web server and provides
functionality much like ISAPI extensions, but with a simpler programming model.
Implementing the IHttpModule interface allows you to include custom events that
participate in every request made to your application. ASP.NET takes advantage of
performance enhancements found in the .NET Framework and common language runtime.
Additionally, it has been designed to offer significant performance improvements
over ASP and other Web development platforms. All ASP.NET code is compiled, rather
than interpreted, which allows early binding, strong typing, and just-in-time (JIT)
compilation to native code, to name only
a few of its benefits. ASP.NET is also easily factorable, meaning that developers can
remove modules (a session module, for instance) that are not relevant to the
application they are developing. ASP.NET also provides extensive caching services
(both built-in services and caching APIs), and it ships with performance counters
that developers and
system administrators can monitor to test new applications and gather metrics on
existing applications.
Writing custom debug statements to your Web page can help immensely in
troubleshooting your application's code. However, they can cause embarrassment if
they are not removed. The problem is that removing the debug statements from your
pages when your application is ready to be ported to a production server can require
significant effort. ASP.NET offers the TraceContext class, which allows us to write
custom debug statements to our pages as we develop them. They appear only when you have
enabled tracing for a page or entire application. Enabling tracing also appends details
about a request to the page, or, if you so specify, to a custom trace viewer that is stored
ASP.NET provides default authorization and authentication schemes for Web
applications. We can easily remove, add to, or replace these schemes, depending on
the needs of our application. ASP.NET configuration settings are stored in XML-based
files, which are human-readable and writable. Each of our applications can have a
distinct configuration file, and we can extend the configuration scheme to suit our
requirements.
ADO.NET relies heavily on XML to meet today's requirements for working with data. You might never need to directly edit an XML
file containing data - but it is very useful to understand the data architecture in
ADO.NET.
• Interoperability
• Maintainability
• Programmability
• Performance
• Scalability
INTEROPERABILITY:
ADO.NET applications can take advantage of the flexibility and broad acceptance of
XML. Because XML is the format for transmitting datasets across the network, any
component that can read the XML format can process the data. The receiving
component need not be an ADO.NET component at all: the transmitting component
can simply transmit the dataset to its destination without regard to how the receiving
component is implemented. The destination component might be a Visual Studio
application or any other application implemented with any tool whatsoever. The only
requirement is that the receiving component be able to read XML.
MAINTAINABILITY:
In the life of a deployed system, modest changes are possible, but substantial,
architectural changes are rarely attempted because they are so difficult. As the
performance load on a deployed application grows, system resources can
become scarce and response time or throughput can suffer. Faced with this problem,
software architects can choose to divide the server's business-logic processing and
user-interface processing onto separate tiers on separate machines.
In effect, the application server tier is replaced with two tiers, alleviating
the shortage of system resources.

PROGRAMMABILITY:
ADO.NET data components in Visual Studio encapsulate data-access
functionality in various ways that help you program more quickly and with fewer
mistakes.
PERFORMANCE:
For disconnected applications, ADO.NET datasets can offer performance
advantages, because data is transmitted as XML and no special data-type
conversions are required.

SCALABILITY:
Because ADO.NET encourages a disconnected model of data access, an application
does not retain database locks or active database connections for long durations.
Visual Studio .NET is a complete set of development tools for building ASP
Web applications, XML Web services, desktop applications, and mobile applications.
Visual Basic .NET, Visual C++ .NET, and Visual C# .NET all use the same
integrated development environment (IDE), which allows them to share tools and
facilitates the creation of mixed-language solutions. These languages
leverage the functionality of the .NET Framework, which simplifies the development of
ASP Web applications and XML Web services. The Framework provides a common
language runtime and unified programming classes; ASP.NET uses these components
to create ASP Web applications and XML Web services. Visual Studio .NET also
includes the MSDN Library, which contains all the documentation for these
development tools.
XML WEB SERVICES
XML Web services are applications that can receive requests and data using
XML over HTTP. XML Web services are not tied to a particular component
technology, object model, or operating system. In Visual Studio .NET, you can quickly
create and consume XML Web services using Visual Basic, Visual C#, JScript, or Managed
Extensions for C++.
XML SUPPORT:
Extensible Markup Language (XML) provides a method for describing structured
data. XML is a subset of SGML that is optimized for delivery over the Web. The
World Wide Web Consortium (W3C) defines XML standards so that structured
data will be uniform and independent of applications. Visual Studio .NET fully
supports XML, providing the XML Designer to make it easier to edit XML and
create XML schemas.
Visual Basic .NET, the latest version of Visual Basic, includes many new
features. Earlier versions of Visual Basic supported interfaces but not implementation
inheritance; Visual Basic .NET supports implementation inheritance as well as
overloading. In addition, Visual Basic .NET supports multithreading. It is
compliant with the CLS (Common Language Specification) and supports structured
exception handling. CLS is a set of rules and constructs supported by the common
language runtime (CLR), the runtime environment provided by the .NET Framework.
The CLR manages the execution of the code and also makes the development process
easier by providing services. Objects, classes, and
components created in Visual Basic .NET can be used in any other CLS-compliant
language.
In addition, we can use objects, classes, and components created in other CLS-compliant
languages in a Visual Basic .NET application.
IMPLEMENTATION INHERITANCE:
While creating applications in Visual Basic .NET, we can derive a class from another
class, known as the base class. The derived class inherits all the methods and properties of the
base class. In the derived class, we can either use the existing code of the base class or
override the existing code. Therefore, implementation inheritance makes code
reusable.

CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to
destroy them. In other words, destructors are used to release the resources allocated to
the object. In Visual Basic .NET the Sub Finalize procedure is available. The Sub
Finalize procedure is used to complete the tasks that must be performed when an object
is destroyed, and it can be called only from the class it belongs to or from derived classes.
GARBAGE COLLECTION:
Garbage collection is another feature of Visual Basic .NET: the .NET Framework
automatically releases memory for reuse by destroying objects
that are no longer in use. In Visual Basic.NET, the garbage collector checks for the
objects that are not currently in use by applications. When the garbage collector comes
across an object that is marked for garbage collection, it releases the memory occupied
by the object.
OVERLOADING:
Overloading enables us to define multiple procedures with the same name, where each procedure has a different
set of arguments. Besides using overloading for procedures, we can use it for
constructors and properties in a class.
MULTITHREADING:
Multithreading allows an application to contain multiple threads of execution that
perform different tasks simultaneously; to keep the interface responsive, we
must ensure that a separate thread in the application handles user interaction.

STRUCTURED EXCEPTION HANDLING:
With structured exception handling, we
can create robust and effective exception handlers to improve the performance of our
application.
FEATURES OF SQL SERVER
The OLAP Services feature available in SQL Server version 7.0 is now
called SQL Server 2000 Analysis Services. The term OLAP Services has been
replaced with the term Analysis Services. Analysis Services also includes a new data
mining component. The Repository component available in SQL Server version 7.0 is
now called Microsoft SQL Server 2000 Meta Data Services. References to the
component now use the term Meta Data Services; the term repository is used only in
reference to the repository engine within Meta Data Services. A database consists of
several types of objects. They are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
6. MODULE
TABLE:
A table is a collection of data about a specific topic, organized into rows
(records) and columns (fields).

VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View

Design View
Design view is used to create or modify the structure of a table: its fields, their
data types, and their properties.

Datasheet View
Datasheet view is used to enter, edit, and view the records of a table in a row-and-column
view mode.
QUERY:
A query is a question that is asked of the data. Access gathers the data
that answers the question from one or more tables. The data that make up the answer
is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time
we run a query, we get the latest information in the dynaset. Access either displays the
dynaset or snapshot for us to view, or performs an action on it.
FORMS:
A form is used to view and edit information in the database record by record. A form
displays only the information we want to see, in the way we want to see it. Forms use
familiar controls such as textboxes and checkboxes, which makes the data easy to view
and enter.
Views of Form:
We can work with forms in several views; primarily there are two.
They are,
1. Design View
2. Form View
Design View
To build a form, we work with it in design view. We can add controls to the form
that are bound to fields in a table or query.

Form View
Form view displays the whole design of the form with its data.
REPORT:
A report is used to view and print information from the database. A report can
group records into many levels and compute totals and averages by checking values
from many records at once. A report can also be made attractive and distinctive,
because we have control over the size and appearance of everything on it.
MACRO:
A macro is a set of actions. Each action in a macro performs an operation, such as opening a form or printing a report.
MODULE:
Modules are units of code written in the Access Basic language. We can write and use
modules to automate and customize the database in sophisticated ways.
UML DIAGRAM:
Unified Modeling Language (UML): UML is a method for describing a system's
architecture in detail, using a notation built on engineering practices that have proven
successful in the modeling of large and complex systems. The UML is a very important
part of developing object-oriented
software and the software development process. The UML uses mostly graphical
notations to express the design of software projects. Using the UML helps project
teams communicate, explore potential designs, and validate the architectural design of
the software. The UML‘s four structural diagrams exist to visualize, specify, construct
and document the static aspects of a system. We can view the static parts of a system
using one of these four structural diagrams.
Definition:
UML is a language for visualizing, specifying, constructing, and documenting the
artifacts of a software-intensive system.
UML Specifying:
Specifying means building models that are precise, unambiguous and complete.
In particular, the UML addresses the specification of all the important analysis, design,
and implementation decisions that must be made in developing and deploying a
software-intensive system.
UML Visualization:
The UML includes both graphical and textual representations. It makes it easy to
visualize the system and how its components fit together.
UML Constructing:
UML models can be directly connected to a variety of programming languages, and
the notation is sufficiently expressive and free from ambiguity to permit the direct
execution of models.
UML Documenting:
The UML provides a means for documenting a system's architecture,
requirements, tests, and project-planning artifacts.

Goal of UML:
• Provide users a ready-to-use, expressive visual modeling language to develop
and exchange meaningful models.
• Provide extensibility and specialization mechanisms to extend the core
concepts.
• Be independent of particular programming languages and development
processes.
• Integrate best practices.
Uses of UML:
The UML is intended primarily for software-intensive systems. It has been used
effectively in domains such as:
• Telecommunications
• Transportation
• Defense/Aerospace
• Retail
UML DIAGRAM
DOTNET
.NET is a major technology change for Microsoft and the software world. Just
as the computer world moved from DOS to Windows, it is now moving to .NET.
But don't be surprised if you find someone saying "I do not like .NET and I would
stick with the good old COM and C++"; there are still a lot of people who like to use
the older technologies. So why is everyone moving to .NET?
The simple answer is: it is the technology from Microsoft on which all other
Microsoft technologies will be based in the future.
.NET technology was introduced by Microsoft to catch the market from
Sun's Java. A few years back, Microsoft had only VC++ and VB to compete with Java,
but Java was catching the market very fast. With the world depending more and more
on the Internet/Web, and Java-related tools becoming the best choice for web
applications, most programmers moved to Java from VC++ and VB. This was
alarming for Microsoft, and many Microsoft fans kept asking, "Is Microsoft sleeping?"
And Microsoft had the answer. One fine morning, they announced: "We are not
sleeping. We have the answer for Java."
But Microsoft has a wonderful history of starting late but catching up quickly.
This is true in case of .NET too. Microsoft put their best men at work for a secret
project called Next Generation Windows Services (NGWS) under the direct
supervision of Mr. Bill Gates. The outcome of the project is what we now know as
.NET. Even though .NET has borrowed most of its ideas from Sun's J2EE, it has added
many improvements of its own.
What is .NET?
➢ .NET provides a common set of class libraries, which can be accessed from any
.NET-based programming language. There will not be a separate set of classes
and libraries for each language. If you know any one .NET language, you can
work in any other .NET language, because the class libraries are the same.
DOTNET Architecture
The full form of CLR is Common Language Runtime, and it forms the heart of the
.NET framework. All languages have a runtime, and it is the responsibility of the
runtime to take care of the code execution of the program. For example, VC++ has
MSCRT40.DLL, VB6 has MSVBVM60.DLL, and Java has the Java Virtual Machine.
• Garbage Collection :-
When objects are no longer referenced, the GC automatically releases that memory,
providing efficient memory management.
• Code Access Security (CAS) :-
CAS grants rights to a program depending on the security configuration of
the machine. For example, the program may have rights to edit or create a new file,
but the security configuration of the machine does not allow the program to delete a
file. CAS will take care that the code runs under the environment of the machine's
security configuration.
• Code Verification :-
This ensures proper code execution and type safety while the code runs. It
prevents the source code from performing illegal operations such as accessing invalid
memory locations.
• IL to native translation :-
The CLR uses the JIT compiler to compile the IL code to machine code and then
executes it.
• Common Type System (CTS) :-
In order that two languages communicate smoothly, the CLR has the CTS (Common
Type System). For example, in VB you have "Integer" and in C++ you have "long";
these data types are not compatible, so interfacing between them is complicated. To
communicate, the "Integer" data type in VB and the "int" data type in C++ both
convert to System.Int32, which is a data type of the CTS.
• Common Language Specification (CLS) :-
This is a subset of the CTS which all .NET languages are expected to support.
Microsoft has a vision of bringing all languages under one umbrella, and CLS is one
step towards that. Microsoft has defined the CLS as guidelines that a language should
follow so that it can communicate with other .NET languages in a seamless
manner.
• Managed Code
Managed code runs inside the environment of the CLR, i.e. the .NET runtime;
in short, all IL is managed code. But if you are using a third-party component, for
example a VB6 or VC++ component, it is unmanaged code, as the .NET runtime
(CLR) does not have control over the execution of that code.
A database management system (DBMS) stores data and helps users transform the
data into information. Such database management systems include dBase, Paradox,
IMS, and SQL Server. These systems
allow users to create, update and extract information from their database.
A database is a structured collection of data: the characteristics of people, things and
events. SQL Server stores each data item in its
own fields. In SQL Server, the fields relating to a particular person, thing or event
are bundled together to form a single complete unit of data, called a record (it can
also be referred to as a row or an occurrence). Each record is made up of a number of
fields. No two fields in a record can have the same field name.
During an SQL Server Database design project, the analysis of your business
needs identifies all the fields or attributes of interest. If your business needs change
over time, you define any additional fields or change the definition of existing fields.
tables are created for the various groups of information. Related tables are grouped
together to form a database.
PRIMARY KEY
Every table in SQL Server has a field, or a combination of fields, that uniquely
identifies each record in the table. The unique identifier is called the
Primary Key, or simply the Key. The primary key provides the means to distinguish
one record from all others in a table. It allows the user and the database system to
identify, locate and refer to one particular record in the database.
RELATIONAL DATABASE
Sometimes all the information of interest to a business operation can be
stored in one table. SQL Server makes it very easy to link the data in multiple
tables. Matching an employee to the department in which they work is one
example. This is what makes SQL Server a relational database management system,
or RDBMS. It stores data in two or more tables and enables you to define
relationships between the tables.
FOREIGN KEY
When a field in one table matches the primary key of another table, the field is
referred to as a foreign key. A foreign key is a field, or a group of fields, in one table
whose values match the primary key of another table.
REFERENTIAL INTEGRITY
Not only does SQL Server allow you to link multiple tables, it also
maintains consistency between them. Ensuring that the data among related tables is
correctly matched is referred to as maintaining referential integrity.
DATA ABSTRACTION
A major purpose of a database system is to provide users with an abstract view of the
data. The system hides certain details of how the data is stored and maintained. There
are three levels of abstraction:
Physical level: This is the lowest level of abstraction, at which one describes how
the data are actually stored.
Conceptual level: At this level of abstraction, one describes what data are actually
stored in the database and the relationships that exist among them.
View level: This is the highest level of abstraction, at which one describes only part
of the entire database.
ADVANTAGES OF RDBMS
• Conflicting requirements can be balanced
DISADVANTAGES OF DBMS
The hardware may have to be upgraded to allow for the extensive programs and the
workspace required for their execution and storage. While centralization reduces
duplication, the lack of duplication requires that the database be adequately backed
up so that data can be recovered after a failure.
From complex decision support systems (DSS) to the most rigorous online transaction
processing (OLTP) applications, even applications that require simultaneous DSS and
OLTP access to the same critical data, SQL Server leads the industry in both
performance and capability.
SQL SERVER is a truly portable, distributed, and open DBMS that delivers
unmatched performance and continuous operation. It is specially designed for online
transaction processing and for handling large database applications.
SQL SERVER with the transaction-processing option offers features which
contribute to a very high level of transaction-processing throughput. Its distributed
DBMS capability enables all the systems in the organization to be linked into a single,
unified computing resource.
PORTABILITY
SQL SERVER is fully portable to more than 80 distinct hardware and operating
system platforms.
OPEN SYSTEMS
SQL SERVER offers a leading implementation of industry-standard SQL. SQL
Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs
with the industry's most comprehensive collection of tools, applications, and
third-party products, and provides transparent access to data from other relational
databases and even non-relational databases.
DISTRIBUTED DATA SHARING
SQL Server's networking and distributed database capabilities let you access data
stored on a remote server with the same ease as if the information were stored on a
single local computer. A single SQL statement can access data at multiple sites. You
can store data where system requirements such as performance, security or
availability dictate.
UNMATCHED PERFORMANCE
In many systems, performance is limited not by CPU power or by disk I/O, but by
users waiting on one another for data access. SQL Server employs full, unrestricted
row-level locking and contention-free queries to minimize, and in many cases
entirely eliminate, contention for data.
NO I/O BOTTLENECKS
SQL Server's fast-commit, group-commit, and deferred-write technologies
dramatically reduce disk I/O bottlenecks. While some databases write whole data
blocks to disk at commit time, SQL Server commits transactions with at most one
sequential write to the log file on disk. On high-throughput systems, one sequential
write typically group-commits multiple transactions. Data read by a transaction
remains in shared memory so that other transactions may access that data without
reading it again from disk. Since fast commits write all the data necessary for
recovery to the log file, modified blocks are written back to the database only when
needed.
ALGORITHMS CONCEPTS:
As the exchange of data over the Internet and other media grows, the search for the
best protection of data against security attacks, together with methods to deliver the
data without much delay, is a matter of discussion among security-related
communities. Cryptography is one such method that provides this protection; the
word derives from the Greek for "the study of secrets", which is closely attached to
the definition of encryption. The two main characteristics that identify and
differentiate one encryption algorithm from another are its capability to secure the
protected data against attacks and its speed and effectiveness in doing so. This report
provides a comparative study of four widely used encryption algorithms, DES, 3DES,
AES and RSA, on the basis of their ability to secure and protect data against attacks
and their speed of encryption and decryption.
Basics of RSA algorithm
The RSA algorithm was developed by Ron Rivest, Adi Shamir and Leonard Adleman
in 1977. It is the most popular asymmetric-key cryptographic algorithm, and it may
be used to provide both secrecy and digital signatures [2]. It uses two large prime
numbers to generate the public and private keys, based on the mathematical fact that
multiplying large numbers together is easy while factoring the product is hard. It
operates on blocks of data in which plaintext and ciphertext are integers between
0 and n-1 for some n; the size of n is typically 1024 bits, or 309 decimal digits. Two
different keys are used for encryption and decryption: the sender knows the
encryption key and the receiver knows the decryption key.
Algorithm
➢ Choose two distinct large prime numbers p and q
➢ Compute n = p*q
➢ Compute φ(n) = (p-1)*(q-1)
➢ Choose the public key e such that gcd(φ(n), e) = 1; 1 < e < φ(n)
➢ Compute the private key d such that d*e ≡ 1 mod φ(n)

Encryption
C = M^e mod n
Decryption
M = C^d mod n

The algorithm is named after its inventors, Ron Rivest, Adi Shamir and Leonard
Adleman. RSA implements two important ideas:
1. Public-key encryption: encryption keys are published while the decryption keys are
kept private, so only the person with the correct decryption key can decipher an
encrypted message. Everyone has their own encryption and corresponding decryption
keys, and the keys are made in such a way that the decryption key cannot be easily
deduced from the public encryption key.
2. Digital signatures: the receiver may need to verify that a message actually
originated from the sender, and didn't just come from an impostor; this is called
authentication. It is done with the help of the sender's private (decryption) key, and
later the signature can be verified by anyone using the corresponding public
encryption key. Signatures therefore cannot be copied, and no signer can later deny
having signed the message.
Finding ‘n’ is the first step of the algorithm, where ‘n’ is the product of
two prime numbers ‘p’ & ‘q’. The number ‘n’ will be revealed in the encryption
and decryption keys, but the numbers 'p' and 'q' will not be explicitly shown. The
prime numbers 'p' and 'q' should be large, so that it will be very difficult to derive
them from 'n'. Choose a number 'e' such that 'e' is co-prime to φ(n), where φ(n) is
the Euler's totient function, which counts the number of positive integers less than
n that are relatively prime to n. Determine the private key 'd' such that 'd' is the
multiplicative inverse of e modulo φ(n).
Encryption
Let 'm' be the message (an integer) that is to be encrypted using the public
key 'e' to give the encrypted message 'c', where 'c' is calculated as c = m^e mod n.
Symmetric Encryption
In symmetric encryption, the same cryptographic key is used for both encryption
and decryption. Symmetric algorithms are simpler and faster, but their main drawback
is that the two users need to transfer the shared key in a secure way.
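As a toy illustration of the defining property of symmetric encryption (a repeating-key XOR, not a real cipher such as DES, 3DES or AES), the sketch below shows the same key performing both encryption and decryption:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the repeating key; applying the same
    # operation again with the same key restores the original bytes.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"                    # both parties must hold this key
ciphertext = xor_cipher(b"attack at dawn", key)
plaintext = xor_cipher(ciphertext, key)   # decrypt with the SAME key
```

Because encryption and decryption use one key, that key must itself be exchanged over a secure channel, which is exactly the drawback noted above.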
Asymmetric Encryption
In asymmetric encryption, two keys are used. It is also known as
Public Key Cryptography (PKC), because users hold two keys: a public key,
which is known to the public, and a private key, which is known only to the user.
Decryption
The decrypted message 'm' is recovered using the private key 'd' and is
calculated as m = c^d mod n.
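The whole scheme can be sketched with deliberately tiny primes. The values p = 61, q = 53, e = 17 and m = 65 are illustrative only; real keys use primes hundreds of digits long.

```python
from math import gcd

p, q = 61, 53             # toy primes, far too small for real use
n = p * q                 # modulus, revealed in both keys
phi = (p - 1) * (q - 1)   # Euler's totient of n

e = 17                    # public exponent, co-prime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)       # private exponent: inverse of e modulo phi

m = 65                    # message as an integer, 0 <= m < n
c = pow(m, e, n)          # encryption: c = m^e mod n
recovered = pow(c, d, n)  # decryption: m = c^d mod n
```

Anyone may learn (n, e) and compute c, but recovering m without d requires factoring n, which is infeasible for properly sized primes.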
Encrypted and Decrypted Data
HASHING ALGORITHMS
digest" of the input …The MD5 algorithm is intended for digital signature
being encrypted with a private (secret) key under a public-key cryptosystem such
as RSA.”
MD5 is a widely used cryptographic hash function that produces a 128-bit hash value.
MD5 has been employed in a wide variety of security applications, and is also
commonly used to check the integrity of files. An MD5 hash is typically expressed
as a 32-character hexadecimal number.
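For example, Python's standard `hashlib` module computes such a digest; the input string below is a common MD5 test vector:

```python
import hashlib

message = b"The quick brown fox jumps over the lazy dog"
digest = hashlib.md5(message).hexdigest()   # 128 bits as 32 hex characters

# Even a tiny change to the input yields a completely different digest,
# which is what makes the hash useful for integrity checking.
digest2 = hashlib.md5(message + b".").hexdigest()
```

Comparing stored and freshly computed digests is how MD5 is used to detect accidental corruption of files.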
IMPLEMENTATION STEPS:
• Step1. Append padding bits. The message is padded so that its length (in bits) is
congruent to 448 mod 512. Padding is always performed, even if the message length
is already congruent to 448 mod 512. A single "1" bit is appended to the
message, and then "0" bits are appended so that the length in bits of the padded
message becomes congruent to 448 mod 512. At least one bit and at most 512 bits are
appended.
• Step2. Append length A 64-bit representation of the length of the message is
appended to the result of step1. If the length of the message is greater than 2^64, only
the low-order 64 bits will be used. The resulting message (after padding with bits
and with b) has a length that is an exact multiple of 512 bits. The input message will
be processed in 16-word (512-bit) blocks.
• Step3. Initialize the MD buffer. A four-word buffer (A, B, C, D) is used to compute
the message digest, where each word is a 32-bit register initialized (low-order bytes
first) to:
word A: 01 23 45 67
word B: 89 ab cd ef
word C: fe dc ba 98
word D: 76 54 32 10
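The padding rule of the first two steps can be sketched as a small routine (`md5_pad` is a hypothetical helper name, following the rule described in RFC 1321):

```python
def md5_pad(message: bytes) -> bytes:
    """Pad a message per the MD5 rule: a single 1 bit, then zeros until the
    length is 448 mod 512 bits, then the original bit length as a 64-bit
    little-endian integer."""
    bit_len = len(message) * 8
    padded = message + b"\x80"                 # one "1" bit, then seven "0" bits
    while (len(padded) * 8) % 512 != 448:
        padded += b"\x00"                      # "0" bits up to 448 mod 512
    padded += (bit_len % 2**64).to_bytes(8, "little")  # low-order bytes first
    return padded
```

After padding, the total length is always an exact multiple of 512 bits, so the message splits cleanly into 16-word blocks.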
• Step4. Process message in 16-word blocks Four functions will be defined such
that each function takes an input of three 32-bit words and produces a 32-bit word
output.
• The function g in round 2 was changed from (XY v XZ v YZ) to (XZ v Y
not(Z)).
• The order in which input words are accessed in rounds 2 and 3 is changed.
• The shift amounts in each round have been optimized; the shifts in different
rounds are distinct. The algorithm still produces an output of 128 bits.
STEPS:
1. The input message is broken up into 512-bit blocks (sixteen 32-bit little-endian
integers); the message is padded so that its length is divisible by 512.
2. The padding works as follows: first a single bit, 1, is appended to the end of the
message.
3. This is followed by as many zeros as are required to bring the length of the
message up to 64 bits fewer than a multiple of 512.
4. The remaining bits are filled up with a 64-bit little-endian integer representing the
length of the original message, in bits.
5. The MD5 algorithm uses 4 state variables, each of which is a 32-bit integer (an
unsigned long on most systems). These variables are sliced and diced throughout the
computation and are initialized as follows:
A = 0x67452301
B = 0xEFCDAB89
C = 0x98BADCFE
D = 0x10325476.
6. Now on to the actual meat of the algorithm: the main part of the algorithm uses four
functions to thoroughly mix the above state variables. Those functions are as
follows:
F(X,Y,Z) = (X & Y) | (~(X) & Z)
G(X,Y,Z) = (X & Z) | (Y & ~(Z))
H(X,Y,Z) = X ^ Y ^ Z
I(X,Y,Z) = Y ^ (X | ~(Z))
Where &, |, ^, and ~ are the bit-wise AND, OR, XOR, and NOT operators,
respectively.
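The four round functions can be written directly as 32-bit operations. The sketch below masks each result to 32 bits, since Python integers are unbounded:

```python
MASK = 0xFFFFFFFF  # keep every result within 32 bits

def F(x, y, z): return ((x & y) | (~x & z)) & MASK  # y where x has 1-bits, z elsewhere
def G(x, y, z): return ((x & z) | (y & ~z)) & MASK  # x where z has 1-bits, y elsewhere
def H(x, y, z): return (x ^ y ^ z) & MASK           # bitwise parity of the three words
def I(x, y, z): return (y ^ (x | ~z)) & MASK

# Each function maps three 32-bit words to one 32-bit word, e.g. applied to
# the initial state values A, B, C:
a, b, c = 0x67452301, 0xEFCDAB89, 0x98BADCFE
mixed = F(a, b, c)
```

F acts as a bitwise "if x then y else z" selector, which is why changing any input bit perturbs the output.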
7. These functions, using the state variables and the message as input, are used to
transform the state variables from their initial state into what will become the message
digest. For each 512-bit block of the message, the four rounds are performed. After
this step, the message digest is stored in the state
variables (A, B, C, and D). To get it into the hexadecimal form you are used to seeing,
output the hex values of each of the state variables, least significant byte first. For
example, after processing, the state variables might hold:
A = 0x01234567;
B = 0x89ABCDEF;
C = 0x1337D00D;
D = 0xA5510101;
The resulting digest is always 128 bits in length.
• MD5 has been used heavily by large corporations such as IBM and Cisco.
4.SYSTEM DESIGN
The importance of software design can be stated with a single word: "Quality". Design
is the place where quality is fostered in software development. Design provides us with
representations of software that can be assessed for quality. Design is the only way that we
can accurately translate a customer’s view into a finished software product or system.
Software design serves as a foundation for all the software engineering steps that
follow. Without a strong design we risk building an unstable system – one that will be
difficult to test, one whose quality cannot be assessed until the last stage.
4.1 Normalization
N/A
4.2 Input design
In the input design, user-oriented inputs are converted into a computer based system
format. It also includes determining the record media, method of input, speed of
capture and entry on to the screen. Online data entry accepts commands and data
through a keyboard. The major approach to input design is the menu and the prompt
design. In each alternative, the user’s options are predefined. The data flow diagram
indicates logical data flow, data stores, source and destination. Input data are collected
and organized into a group of similar data. Once identified input media are selected for
processing.
Also the important input format is designed in such a way that accidental errors
are avoided. The user has to input only just the minimum data required, which also
helps in avoiding the errors that the users may make. Accurate designing of the input
format is very important in developing efficient software. The goal of input design is
to make data entry as easy, logical and error-free as possible.
4.3 TABLE DESIGN
INPUT FORMAT

Input                   Format
Network Wi-Fi nodes     Alphanumeric
Available paths         Alphanumeric
File                    Any file format (txt, doc, jpg, etc.)
4.4 SFD/DFD
A data flow diagram is a graphical tool used to describe and analyze the movement
of data through a system. These are the central tool and the basis from which the other
components are developed. The transformation of data from input to output, through
processes, may be described logically and independently of the physical components
associated with the system. These are known as the logical data flow diagrams. The
physical data flow diagrams show the actual implements and movement of data
between people, departments and workstations. A full description of a system actually
consists of a set of data flow diagrams. The data flow diagrams are developed using
two familiar notations: Yourdon, and Gane and Sarson. Each component in a DFD is
labeled with a descriptive name. Process is further identified with a number that will
be used for identification purpose. The development of DFD’S is done in several
levels. Each process in lower level diagrams can be broken down into a more detailed
DFD in the next level. The top-level diagram is often called the context diagram. It
consists of a single process, which plays a vital role in studying the current system. The
process in the context level diagram is exploded into other process at the first level
DFD.
The idea behind the explosion of a process into more processes is that
understanding at one level of detail is exploded into greater detail at the next level.
This is done until no further explosion is necessary and an adequate amount of detail
is described for the analyst to understand the process.
Larry Constantine first developed the DFD as a way of expressing system
requirements in a graphical form; this led to modular design.
A DFD, also known as a "bubble chart", has the purpose of clarifying system
requirements and identifying major transformations that will become programs in
system design. So it is the starting point of the design to the lowest level of detail. A
DFD consists of a series of bubbles joined by data flows in the system.
DFD SYMBOLS:
In the DFD, there are four symbols
1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow. It is the pipeline through which the information
flows
3. A circle or a bubble represents a process that transforms incoming data flow into
outgoing data flows.
4. An open rectangle is a data store, data at rest or a temporary repository of data
CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFD’S:
1. Process should be named and numbered for an easy reference. Each name should
be representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data
traditionally flow from source to the destination although they may flow back to
the source. One way to indicate this is to draw long flow line back to a source. An
alternative way is to repeat the source symbol as a destination. Since it is used
more than once in the DFD it is marked with a short diagonal.
3. When a process is exploded into lower level details, they are numbered.
4. The names of data stores and destinations are written in capital letters. Process and
data flow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of data store. Each data store
should contain all the data elements that flow in and out.
Questionnaires should contain all the data elements that flow in and out; missing
interfaces, redundancies and the like are then accounted for, often through
interviews.
CURRENT PHYSICAL:
In the current physical DFD, process labels include the names of people or their
positions, or the names of computer systems, that might provide some of the overall
system processing; the label includes an identification of the technology used to
process the data. Similarly, data flows and data stores are often labelled with the
names of the actual physical media on which data are stored, such as file folders,
computer files, business forms or computer tapes.
CURRENT LOGICAL:
The physical aspects of the system are removed as much as possible, so that the
current system is reduced to its essence: the data and the processes that transform
them, regardless of their actual physical form.
NEW LOGICAL:
This is exactly like the current logical model if the user were completely happy
with the functionality of the current system but had problems with how it was
implemented. Typically, the new logical model will differ from the current logical
model by having additional functions, obsolete functions removed, and inefficient
flows reorganized.
NEW PHYSICAL:
The new physical represents only the physical implementation of the new
system.
PROCESS
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a
sink.
3) A process has a verb phrase label.
DATA STORE
1) Data cannot move directly from one data store to another data store, a process must
move data.
2) Data cannot move directly from an outside source to a data store; a process must
receive the data from the source and place it into the data store.
3) A data store has a noun phrase label.
SOURCE OR SINK
The origin and / or destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.
DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both
directions between a process and a data store to show a read before an update; the
latter is usually indicated, however, by two separate arrows, since the read and the
update happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more
different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at
least one other process that handles the data flow, produces some other data flow,
and returns the original data into the beginning process.
4) A Data flow to a data store means update (delete or change).
5) A data Flow from a data store means retrieve or use.
6) A data flow has a noun phrase label; more than one data flow noun phrase can
appear on a single arrow, as long as all of the flows on the same arrow move together
as one package.
Data Flow Diagram
Level 0
Level 1
Level 2
Level 3
Class Diagram :
Use Case
5. SYSTEM DESCRIPTION
Modules
• Authentication
• Source
• Destination
Modules Description
• Authentication
This module registers new users; previously registered users can log in to the
application. Only registered users can enter the proposed process of the project;
other users can only view the existing part of the project.
• Source
Before transmitting, the source user checks for hackers; if any are found, their
access is blocked and their requests are denied.
• Destination
In this module the user transmits the file to the receiver in the network.
Testing is the process of executing software with the intent of finding any error or
anomaly in it. Thus a series of tests are performed on the proposed system.
A good test case is one that has a high probability of finding an as yet undiscovered
error.
Testing Objectives:
➢ A good test case is one that has a high probability of finding an as yet undiscovered
error.
Testing Principles:
➢ Testing should begin "in the small" and progress towards testing "in the large".
The primary objective of test case design is to derive a set of tests that has the
highest likelihood of uncovering errors in the software. To accomplish this, two
different categories of test case design techniques are used. They are:
White-box Testing:
White-box testing focuses on the program control structure. Test cases are
derived to ensure that all statements in the program have been executed at least once
during testing and that all logical conditions have been executed.
Black-box Testing:
Black-box testing focuses on the functional requirements of the software without
regard to the internal workings of a program. It concentrates on the
information domain of the software, deriving test cases by partitioning input and
output in a manner that provides thorough test coverage. Incorrect and missing
functions, interface errors, errors in data structures, and errors in functional logic are the
kinds of errors that black-box testing attempts to find.
Testing Strategies:
A strategy for software testing must accommodate low-level tests that are
necessary to verify that all small source code segments have been correctly
implemented, as well as high-level tests that validate major system functions against
customer requirements.
Testing Fundamentals:
A good test case is one that has a high probability of finding an undiscovered error. If
testing is conducted successfully, it uncovers errors in the software. Testing cannot
show the absence of defects; it can only show that software defects are present.
Information flow for testing follows a standard pattern: two classes of input are
provided to the test process, the software configuration and the test configuration.
The test configuration includes the test plan, test cases, and test tools. Tests are
conducted and all the results are evaluated; that is, the test results are compared with the
expected results. When erroneous data are uncovered, an error is implied and
debugging commences.
Unit Testing:
Unit testing is essential for the verification of the code produced during the
coding phase, and hence the goal is to test the internal logic of the modules. Using the
detailed design description as a guide, important paths are tested to uncover errors
within the boundary of the modules. These tests were carried out during the
coding phase itself.
Integration Testing:
Integration testing focuses on unit-tested modules and builds the program
structure that is dictated by the design.
System Testing:
System testing tests the integration of each module in the system. It also
tests to find discrepancies between the system and its original objectives, current
specifications, and system documentation, beyond what is covered by testing the
individual modules. Whether the entire system works properly is tested here: whether the
specified path and ODBC connection are correct, and whether output is produced, are
verified by giving input values to the system and comparing the results with the
expected output. Top-down testing is implemented here.
Acceptance Testing:
This testing is done to verify the readiness of the system for
implementation. Acceptance testing begins when the system is complete. Its purpose is
to provide the end user with the confidence that the system is ready for use. It involves
planning and execution of functional tests, performance tests, and stress tests in order
to demonstrate that the implemented system satisfies its requirements. Tools that
support this testing include:
Test coverage analyzer – records the control paths followed for each test case.
Timing analyzer – also called a profiler, reports the time spent in various regions of
the code.
Coding standards – static analyzers and standards checkers are used to inspect code for
deviations from standards and guidelines.
If all inputs to the system are correct, the goal will be successfully achieved. System
testing is utilized to check how the portions of the system link with one
another and interact in a total system. Each portion of the system is tested to see
whether it conforms to related programs in the system. Each portion of the entire
system is tested against the entire module with both test and live data before the entire
system is put to use. The first test of a system is to see whether it produces correct output.
1. Online - Response
When the mouse is clicked on a router, the statistics of that router for the
selected algorithm have to be displayed on the screen. The router must crash
2. Stress Testing
The purpose of stress testing is to prove that the system does not
malfunction under peak loads. In the simulator we test it with a greater number of
nodes and verify that correct results are obtained for each router under the different
routing algorithms. All the routers are purposely crashed to generate a peak load.
The usability test verifies the user-friendly nature of the system. The user is
asked to use only the documentation and procedures as a guide to determine whether
the system can be operated without assistance.
Test case:
Implementation
The term implementation has different meanings, ranging from the conversion
of a basic application to a complete replacement of a computer system. The procedures,
however, are virtually the same. Implementation includes all those activities that take
place to convert from the old system to the new.
The new system may be totally new, replacing an existing manual or automated
system, or it may be a major modification to an existing system. The method of
implementation and the time scale to be adopted are found out initially. Next, the system is
tested properly, and at the same time the users are trained in the new procedures. Proper
implementation is essential to provide a reliable system that meets the organization's
requirements.
Successful and efficient utilization of the system can be achieved only through
proper implementation of the system in the organization. So the implementation phase is
as important as the other phases, such as analysis, design, coding and testing.
• Careful planning
Sender
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Diagnostics;
using System.Net.NetworkInformation;
using System.Data.SqlClient;
namespace TADR
{
public partial class Form1 : Form
{
public delegate void MyPing(int id);
public const string tempadd = @"C:\WINDOWS\Temp";
public Form1()
{
InitializeComponent();
}
{
openFileDialog1.ShowDialog();
label2.Text = openFileDialog1.FileName;
}
{
outputFile.Write(buffer, 0, bytesRead);
}
outputFile.Close();
}
fs.Close();
backgroundWorker1.RunWorkerAsync();
string s = openFileDialog1.FileName.ToString();
//label1.Visible = true;
FolderBrowserDialog ff = new FolderBrowserDialog();
sys.SendFile(s);
label2.Visible = false;
{
string ls = myipsplit + id;
var pingSender = new Ping();
PingReply reply = pingSender.Send(ls, 100);
if (reply != null)
if (reply.Status == IPStatus.Success)
results[id] = reply.Address.ToString();
};
var asyncResults = new IAsyncResult[100];
for (int i = 1; i < 100; i++)
{
asyncResults[i] = ping.BeginInvoke(i, null, null);
}
for (int i = 1; i < 100; i++)
{
ping.EndInvoke(asyncResults[i]);
}
Trace.WriteLine(DateTime.Now);
var sb = new StringBuilder();
System.DirectoryServices.DirectoryEntry entryPC;
entryPC = new System.DirectoryServices.DirectoryEntry();
entryPC.Path = "WinNT://Workgroup";
for (int i = 1; i < 100; i++)
{
if (results[i] != null)
listView1.Items.Add(results[i]);
sb.AppendFormat("{0}", results[i]);
}
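The loop above fans the 99 pings out in parallel through delegate `BeginInvoke`/`EndInvoke` and then collects the addresses that replied. Delegate `BeginInvoke` is unavailable on modern .NET, so as a hedged sketch the same fan-out pattern can be expressed with tasks; the `probe` predicate below is a hypothetical stand-in for the project's `Ping.Send` call with its 100 ms timeout:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

static class FanOutDemo
{
    // Runs one probe per host id (1..99) in parallel and returns the
    // "192.168.1.x" addresses whose probe answered, mirroring the
    // report's results[] array.
    public static string[] AliveHosts(Func<int, bool> probe)
    {
        var tasks = Enumerable.Range(1, 99)
            .Select(id => Task.Run(() => probe(id) ? "192.168.1." + id : null))
            .ToArray();
        Task.WaitAll(tasks);
        return tasks.Select(t => t.Result).Where(r => r != null).ToArray();
    }

    static void Main()
    {
        // Hypothetical probe: pretend the even-numbered hosts reply.
        // In the project the probe is pingSender.Send(address, 100).
        string[] alive = AliveHosts(id => id % 2 == 0);
        Console.WriteLine(alive.Length); // 49
    }
}
```

The task array plays the role of the `asyncResults` array: start everything first, then wait, so slow hosts overlap instead of running one after another.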
listView1.Items.Clear();
for (int i = 1; i < 7; i++)
{
listView1.Items.Add("192.168.1." + i).ImageIndex = 1;
}
{
IPAddress[] ipAddress = Dns.GetHostAddresses("localhost");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1000);
fileName = fileName.Replace("\\", "/");
while (fileName.IndexOf("/") > -1)
{
filePath += fileName.Substring(0, fileName.IndexOf("/") + 1);
fileName = fileName.Substring(fileName.IndexOf("/") + 1);
}
byte[] fileNameByte = Encoding.ASCII.GetBytes(fileName);
curMsg = "Buffering ...";
byte[] fileData = File.ReadAllBytes(filePath + fileName);
// Guard on the file contents, not the file-name bytes.
if (fileData.Length > 850 * 1024)
{
curMsg = "File size is more than 850kb, please try with a smaller file.";
return;
}
byte[] clientData = new byte[4 + fileNameByte.Length +
fileData.Length];
byte[] fileNameLen = BitConverter.GetBytes(fileNameByte.Length);
fileNameLen.CopyTo(clientData, 0);
fileNameByte.CopyTo(clientData, 4);
fileData.CopyTo(clientData, 4 + fileNameByte.Length);
}
}
}
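The `clientData` buffer built above is framed as a 4-byte name length, the ASCII file name, then the file bytes. A minimal sketch of how a receiving side can unpack such a frame (the `UnpackFrame` helper is illustrative, not part of this project's listing):

```csharp
using System;
using System.Text;

static class FrameDemo
{
    // Splits a frame of [4-byte name length][ASCII name][payload] back apart,
    // the inverse of the sender's CopyTo sequence.
    public static (string name, byte[] data) UnpackFrame(byte[] frame)
    {
        int nameLen = BitConverter.ToInt32(frame, 0);           // bytes 0..3
        string name = Encoding.ASCII.GetString(frame, 4, nameLen);
        byte[] data = new byte[frame.Length - 4 - nameLen];
        Array.Copy(frame, 4 + nameLen, data, 0, data.Length);   // remaining bytes
        return (name, data);
    }

    static void Main()
    {
        // Build a frame exactly the way the sender does.
        byte[] nameBytes = Encoding.ASCII.GetBytes("report.txt");
        byte[] payload = { 1, 2, 3 };
        byte[] frame = new byte[4 + nameBytes.Length + payload.Length];
        BitConverter.GetBytes(nameBytes.Length).CopyTo(frame, 0);
        nameBytes.CopyTo(frame, 4);
        payload.CopyTo(frame, 4 + nameBytes.Length);

        var (name, data) = UnpackFrame(frame);
        Console.WriteLine(name);        // report.txt
        Console.WriteLine(data.Length); // 3
    }
}
```

The length prefix is what lets the receiver know where the file name ends and the file data begins inside a single byte array.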
class attackdenied
{
cliendSock.Connect(ipEnd);
cliendSock.Close();
curMsg = "Attacker Blocked";
}
catch (Exception ex)
{
MessageBox.Show(ex.Message, "Information", MessageBoxButtons.OK,
MessageBoxIcon.Exclamation);
}
}
int numberOfFiles = c;
int sizeOfEachFile = (int)Math.Ceiling((double)fs.Length / numberOfFiles);
Receiver
namespace receiver
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
reciver1.receivedPath = "";
}
MessageBox.Show("Please select file receiving path", "Information",
MessageBoxButtons.OK, MessageBoxIcon.Information);
}
reciver1 obj = new reciver1();
class reciver1
{
IPEndPoint ipEnd;
Socket sock;
public reciver1()
{
IPAddress[] ipAddress = Dns.GetHostAddresses("localhost");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1000);
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
ProtocolType.IP);
sock.Bind(ipEnd);
}
public static string fileName;
public static string receivedPath;
public static string curMsg = "Stopped";
public void StartServer()
{
try
{
curMsg = "Starting...";
sock.Listen(100);
File.Delete(receivedPath + "/" + fileName);
clientSock.Close();
}
catch (Exception ex)
{
curMsg = "File Receiving error." + ex;
}
}
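`StartServer` above binds port 1000 and listens; the accept-and-read step is elided in the listing. A self-contained loopback sketch of that accept/receive pattern, using `TcpListener` on an ephemeral port rather than the project's fixed port 1000 (the `RoundTrip` helper is illustrative, not from this project):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

static class LoopbackDemo
{
    // Sends msg over a loopback TCP connection and returns what the
    // listening side read: the same accept-then-read shape as StartServer.
    public static string RoundTrip(string msg)
    {
        var listener = new TcpListener(IPAddress.Loopback, 0); // port 0 = ephemeral
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var serverTask = Task.Run(() =>
        {
            using (TcpClient client = listener.AcceptTcpClient()) // like sock.Accept()
            {
                var buf = new byte[1024];
                int n = client.GetStream().Read(buf, 0, buf.Length);
                return Encoding.ASCII.GetString(buf, 0, n);
            }
        });

        using (var sender = new TcpClient())
        {
            sender.Connect(IPAddress.Loopback, port);
            byte[] bytes = Encoding.ASCII.GetBytes(msg);
            sender.GetStream().Write(bytes, 0, bytes.Length);
        }

        string received = serverTask.Result;
        listener.Stop();
        return received;
    }

    static void Main()
    {
        Console.WriteLine(RoundTrip("hello")); // hello
    }
}
```

Running the listener on a background task mirrors how the receiver form keeps its UI responsive while waiting for a connection.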
public void c()
{
const string tempadd = @"C:\";
int c;
string inputFile = reciver1.receivedPath + "/" + reciver1.fileName; // Substitute this with your input file
FileStream fs = new FileStream(inputFile, FileMode.Open,
FileAccess.Read);
if (fs.Length > 500)
c = 10;
else
c = 5;
int numberOfFiles = c;
int sizeOfEachFile = (int)Math.Ceiling((double)fs.Length / numberOfFiles);
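The chunk size computed above is the ceiling of the file length divided by the number of parts, so every part except possibly the last holds `sizeOfEachFile` bytes. A small sketch of that arithmetic (the `ChunkSize` helper name is illustrative):

```csharp
using System;

static class SplitDemo
{
    // Ceiling division: the number of bytes each of `parts` pieces must hold,
    // matching (int)Math.Ceiling((double)fs.Length / numberOfFiles) above.
    public static int ChunkSize(long totalBytes, int parts) =>
        (int)Math.Ceiling((double)totalBytes / parts);

    static void Main()
    {
        // A 1234-byte file split into 10 parts needs 124-byte chunks:
        // nine full chunks (1116 bytes) plus a final 118-byte chunk.
        int size = ChunkSize(1234, 10);
        Console.WriteLine(size);            // 124
        Console.WriteLine(1234 - size * 9); // 118
    }
}
```

This is why the receiver's `c()` picks 10 parts for files over 500 bytes and 5 parts otherwise: the part count, not the part size, is fixed first.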
private void groupBox1_Enter(object sender, EventArgs e)
{
if (!prevFileName.Equals(baseFileName))
{
if (outputFile != null)
{
outputFile.Flush();
outputFile.Close();
}
outputFile = new FileStream(outPath + baseFileName + extension,
FileMode.OpenOrCreate, FileAccess.Write);
}
int bytesRead = 0;
byte[] buffer = new byte[1024];
FileStream inputTempFile = new FileStream(tempFile,
FileMode.OpenOrCreate, FileAccess.Read);
inputTempFile.Close();
File.Delete(tempFile);
prevFileName = baseFileName;
}
outputFile.Close();
7. CONCLUSION AND FUTURE SCOPE
CFAEF scheme to simultaneously defend against false data injection, PDoS, and
8. FORMS AND REPORTS
Sender
This form shows the sender, which sends the files to the receiver.
Receiver:
This form shows the receiver page, which receives the files from the sender.
Attack:
This form shows the sender finding the attacker while sending files.
Sender and Receiver
This form shows the attacker and the message requesting the sender to find the
attacker attempting to attack the files.
Attackers Found
This form shows the sender detecting the hacker while sending files.
This form shows the attacker and the message requesting the sender to find the
attacker attempting to attack the system.
Block
9. BIBLIOGRAPHY