
CONSTRAINED FUNCTION-BASED MESSAGE AUTHENTICATION

FOR SENSOR NETWORKS

A PROJECT REPORT

Submitted for the Partial Fulfillment of the Requirement for the Degree of Master of
Computer Application

By

BALAJI A
REG No: A20105PCA6214

Under the Guidance of

Dr. R. VELMURUGAN M.C.A., M.Phil., Ph.D., M.B.A


PG AND RESEARCH DEPARTMENT OF COMPUTER SCIENCE

PRESIDENCY COLLEGE CHENNAI.

INSTITUTE OF DISTANCE EDUCATION


UNIVERSITY OF MADRAS

CHENNAI - 600 005

JUNE 2022

BONAFIDE CERTIFICATE

This is to certify that the project entitled CONSTRAINED FUNCTION-BASED
MESSAGE AUTHENTICATION FOR SENSOR NETWORKS, submitted
to the University of Madras, Chennai by BALAJI A (Reg. No. A20105PCA6214)
in partial fulfillment of the requirements for the award of the degree of Master of Computer
Application, is a bonafide record of work carried out by him under my guidance
and supervision.

Name of the Guide:
Dr. R. Velmurugan, M.C.A., M.Phil., Ph.D., M.B.A.,
Assistant Professor, PG and Research Department of Computer Science,
Presidency College, Chennai.

Co-ordinator:
Dr. S. Sasikala, M.C.A., M.Phil., Ph.D.,
Associate Professor, Department of MCA,
IDE, University of Madras.

Date: 06/08/2022

Submitted for the Viva-Voce Examination held on 06/08/2022 at Centre: IDE, University of Madras.

Examiners:

1. Name :

Signature :

2. Name :

Signature :

ACKNOWLEDGEMENT

I list here the various eminent personalities whose indomitable spirit helped
me to taste the fruits of success.

My heartfelt thanks to “Dr. K. RAVICHANDRAN, Ph.D, The Director,


Correspondence Education, University of Madras” for giving me this
opportunity.

I express my sincere thanks to Dr. S. Sasikala, M.C.A., M.Phil., Ph.D.,
Associate Professor and Co-ordinator, for giving me an opportunity to do a real-time
project during the course of my Master of Computer Applications degree.

I express my sincere thanks to my guide Dr. R. VELMURUGAN, M.C.A.,
M.Phil., Ph.D., M.B.A., for suggesting the problem and offering guidance and
discussions throughout the course of the project.

I would like to express my sincere thanks to all the staff members of our
department for their support throughout the project.

Last but not least, I bow my head to my parents for their inspiration and
encouragement, without which I might not have completed this project productively.

CONSULTANT OFFICE:

USA
6 LEVAN CT, BRIDGEWATER,
NEW JERSEY - 08807
908-393-2156

Date: 27 - June - 2022

Subject: Project Completion Letter


TO WHOMSOEVER IT MAY CONCERN

This is to certify that Mr BALAJI A (Reg No: A20105PCA6214), Final Year
MCA (Computer Application), INSTITUTE OF DISTANCE EDUCATION,
UNIVERSITY OF MADRAS, CHENNAI, has successfully completed his
project work in our organization from February 2022 to June 2022.

PROJECT TOPIC: CONSTRAINED FUNCTION-BASED MESSAGE


AUTHENTICATION FOR SENSOR NETWORKS

(HR MANAGER)

ADDRESS: 13/6, KUMARAN COLONY, 3RD STREET, VADAPALANI, CHENNAI, INDIA - 600 026.
FAX: 9144-45032415 | HTTP://WWW.LUMINOSOFT.NET | E-MAIL: tffi@LUMINOSOFT.NET

INDEX

S.No. CONTENT Page No.


1. INTRODUCTION 6
1.1 COMPANY PROFILE 7
1.2 PROJECT OVERVIEW 8
1.3 ABSTRACT 9
2. SYSTEM ANALYSIS 10
2.1 FEASIBILITY STUDY 10
2.2 EXISTING SYSTEM 12
2.3 PROPOSED SYSTEM 12
3. SYSTEM CONFIGURATION 14
3.1 HARDWARE SPECIFICATION 14
3.2 SOFTWARE SPECIFICATION 14
3.3 ABOUT THE SOFTWARE 20
4. SYSTEM DESIGN 78
4.1 NORMALIZATION 78
4.2 TABLE DESIGN 79
4.3 INPUT DESIGN 79
4.4 SFD/DFD 80
5. SYSTEM DESCRIPTION 87
6. TESTING AND IMPLEMENTATION 88
7. CONCLUSION AND FUTURE SCOPE 112
8. FORMS AND REPORT 113
9. BIBLIOGRAPHY 119

1. INTRODUCTION

A WIRELESS sensor network (WSN) is composed of a large number of

sensor nodes with limited resources. Since WSNs can be deployed in an unattended or

hostile environment, the design of an efficient authentication scheme is of great

importance to the data authenticity and integrity in WSNs. In this respect, many

authentication schemes have been proposed. The most straightforward way to

guarantee data authenticity is to use conventional public-key cryptography-based

digital signature techniques. Although the use of public-key cryptography on WSNs
has been demonstrated to be feasible, the computation overhead is still rather
high for resource-constrained devices.

Authentication Problem: Sensor networks are vulnerable to false data
injection attack, by which the adversary injects false data attempting to deceive
the base station (BS, or data sink), and path-based denial of service (PDoS) attack, by
which the adversary sends bogus messages to randomly selected nodes so as to waste
the energy of forwarding nodes. Several so-called en-route filtering schemes have
been proposed to quickly discover and remove the bogus event reports injected by the
adversary. Here, “en-route filtering” means that not only the destination node but also
the intermediate nodes can check the authenticity of the message, in order to reduce the
number of hops the bogus message travels and, thereby, conserve energy. Hence, it is
especially useful in mitigating false data injection attack and PDoS attack [13],
because the falsified messages will be filtered out as soon as possible.
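
To make the en-route check concrete, the following is a minimal sketch in VB.NET, the language of the code samples later in this report. It assumes a per-hop key shared with the sender and uses HMAC-SHA256 purely as a stand-in for the authentication function; ComputeMac and ShouldForward are illustrative names, not part of any cited scheme.

Imports System.Linq
Imports System.Security.Cryptography

Module EnRouteFilterSketch
    ' HMAC-SHA256 stands in for the message authentication function;
    ' the key is assumed to be shared with the sender of the report.
    Function ComputeMac(key As Byte(), message As Byte()) As Byte()
        Using hmac As New HMACSHA256(key)
            Return hmac.ComputeHash(message)
        End Using
    End Function

    ' An intermediate node verifies the MAC before forwarding, so a
    ' bogus report is dropped after one hop instead of travelling all
    ' the way to the base station.
    Function ShouldForward(key As Byte(), message As Byte(), mac As Byte()) As Boolean
        Return ComputeMac(key, message).SequenceEqual(mac)
    End Function
End Module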

1.1 Company Profile

Apex Global Solutions is a software development and services company

focusing on software product development, IT consulting, IT infrastructure setup and

managed services business.

The company focuses on CRM-based sales analytics product development
and on providing innovative business solutions that can be delivered quickly and
cost-effectively in complex environments.

The company provides solutions for sales analytics and sales forecasting which fit
multiple business areas, including banking, insurance, manufacturing, supply chain,
healthcare, retail and more.

We believe that technology should simplify businesses, not complicate them; it

should free you, not tie you down. It is this philosophy that has been the driving force

behind all our innovations and product developments. Having experienced resources

and functional experts, S cube Technologies is proficient enough to handle a varied

range of business requirements.

Our products are architected on our proprietary platforms enabling you to

accommodate changes on the go; these platforms render complete transformation of

the business in real time, besides addressing the entire business cycle from transaction

to analytics. You can easily meet various business demands, across several industries

and verticals.

All of these solutions are configurable, extendable and scalable to meet the

customer's changing business needs and to automate the business, end-to-end.

We bring the best people and the best ideas together to deliver value to our

customers. We believe that companies can innovate and deliver outstanding service

only if they tap the commitment, energy, and imagination of their employees. Value

must therefore be created for those employees in order to motivate and enable them.

Value for employees includes being treated respectfully and being involved in

decision making.

Employees also value meaningful work, excellent career opportunities, and
continued training and development. S cube Technologies is committed to helping its
employees chart their own career path to suit their interests and inherent strengths. It
offers an opportunity for continuous growth and learning, and for acquiring new
credentials that help them gain competencies and prepare for further challenges.

1.2 Project Overview

Network security consists of the provisions and policies adopted by the
network to prevent and monitor unauthorized access, misuse, modification, or denial
of the computer network and network-accessible resources. Nowadays, sensor
networks are vulnerable to false data injection attack and path-based denial of service
(PDoS) attack. In contrast to most existing methods, which rely on complicated
security associations among sensor nodes, our design, which directly exploits an
en-route filtering hash function, appears to be novel. The proposed en-route filtering
scheme, which enables each forwarding node to check the authenticity of the received
message, acts as a defense against these two attacks. The resulting CFA-based
en-route filtering (CFAEF) scheme is inherently resilient against false endorsement
path-based denial of service attack.

1.3 ABSTRACT

Sensor networks are vulnerable to false data injection attack, by which the
adversary injects false data attempting to deceive the base station (BS, or data
sink), and path-based denial of service (PDoS) attack, by which the adversary sends
bogus messages to randomly selected nodes so as to waste the energy of forwarding
nodes. While conventional authentication schemes are insufficient for solving these
security conflicts, an en-route filtering scheme, enabling each forwarding node to
check the authenticity of the received message, acts as a defense against these two
attacks.

2. SYSTEM ANALYSIS

2.1 Feasibility Study

Our project meets the performance metrics, as established by the following feasibility
study:

• Technical Feasibility

This is concerned with specifying equipment and software that will successfully
satisfy the user requirements. The technical needs of the system may vary considerably,
but might include:

• The facility to produce outputs in a given time.

This project produces its output in milliseconds, thereby avoiding time
complexity issues.

• Response time under certain conditions.

In this project we switch the sensor nodes on and off under specific conditions,
thereby providing good system response.

• Ability to process a certain volume of transactions at a particular speed.

The transaction speed of our system depends on the hardware and the
network connection in use.

• Facility to communicate data to distant locations.

Our project deals with Wi-Fi nodes, which can communicate with distant
nodes.

In examining technical feasibility, the configuration of the system is given more
importance than the actual make of the hardware. The configuration gives the
complete picture of the project.

• Economic Feasibility

Our project reduces cost and increases the lifetime of the Wi-Fi nodes by
making them enter a semi-sleep state, which lowers power consumption
and cost, thereby achieving economic feasibility.

• Operational Feasibility

• What changes will be brought with the system?

Only minor changes are brought to the system, and they run
automatically without affecting any other aspect.

• What organizational structure is disturbed?

There is no change or disturbance to the organizational structure in our project.

• What new skills will be required? Do the existing staff members have these skills? If

not, can they be trained in due course of time?

The concept in our project runs automatically, so no new skills are
required.

This feasibility study is carried out by a small group of people who are familiar
with information system techniques and are skilled in the system analysis
and design process.

Proposed projects are beneficial only if they can be turned into information
systems that will meet the operating requirements of the organization. This test of
feasibility asks whether the system will work when it is developed and installed.

2.2 Existing System

In the existing system, sensor networks are vulnerable to false data
injection attack, by which the adversary injects false data attempting to
deceive the base station (BS, or data sink), and path-based denial of service (PDoS)
attack, by which the adversary sends bogus messages to randomly selected nodes so
as to waste the energy of forwarding nodes. While conventional authentication
schemes are insufficient for solving these security conflicts, an en-route filtering
scheme, enabling each forwarding node to check the authenticity of the received
message, acts as a defense against these two attacks.

2.3 Proposed System

We propose a Constrained Function-based message Authentication (CFA) scheme, which
can be thought of as a hash function directly supporting the en-route filtering
functionality. Together with the redundancy property of sensor networks, which
means that an event can be simultaneously observed by multiple sensor nodes, the
devised CFA scheme is used to construct a CFA-based en-route filtering (CFAEF)
scheme. In addition to its resilience against false data injection and PDoS attacks,
CFAEF is inherently resilient against false endorsement-based DoS attack. In contrast
to most of the existing methods, which rely on complicated security associations
among sensor nodes, our design, which directly exploits an en-route filtering hash
function, appears to be novel. We examine the CFA and CFAEF schemes from both
the theoretical and numerical aspects to demonstrate their efficiency and effectiveness.

CFA has the following characteristics:

1) Resilience to node compromise (RNC): compromised nodes cannot forge messages
sent from the genuine nodes.

2) Immediate authentication (IA): a synonym for en-route filtering; falsified messages
are filtered out as soon as possible to conserve energy.

3) Independence of network setting (INS): CFA can be applied to networks with
different network settings.

4) Efficiency (EFF): CFA has low computational and communication overhead.

With these characteristics, a CFA-based en-route filtering (CFAEF) scheme can be
constructed in such a way that the source node sends the destination node a message
together with the corresponding CFA-based endorsements generated by the
neighboring nodes; each forwarding node verifies these endorsements before relaying
the message, as sketched below. The advantages of applying CFA to MAC generation
are that the filtering capability can be improved, resilience against FEDoS attack can
be achieved, and the impractical assumptions previously made in the literature are no
longer required.
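
The following is a minimal sketch, again in VB.NET, of how such endorsements might be generated and checked at a forwarding node. HMAC-SHA256 once more stands in for the CFA construction, and the per-node key lookup is an assumption made purely for illustration.

Imports System.Collections.Generic
Imports System.Linq
Imports System.Security.Cryptography

Module CfaefSketch
    ' Each neighbouring node that observes the event endorses the
    ' report with its own key (HMAC-SHA256 is a stand-in for CFA).
    Function Endorse(nodeKey As Byte(), report As Byte()) As Byte()
        Using hmac As New HMACSHA256(nodeKey)
            Return hmac.ComputeHash(report)
        End Using
    End Function

    ' A forwarding node accepts the report only if every attached
    ' endorsement verifies; otherwise the report is filtered en route.
    Function VerifyEndorsements(keys As List(Of Byte()), report As Byte(), endorsements As List(Of Byte())) As Boolean
        If keys.Count <> endorsements.Count Then Return False
        For i = 0 To keys.Count - 1
            If Not Endorse(keys(i), report).SequenceEqual(endorsements(i)) Then
                Return False
            End If
        Next
        Return True
    End Function
End Module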

3. SYSTEM CONFIGURATION

3.1 Hardware Specification:

System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
RAM : 512 MB

3.2 Software Specification:

Operating system : Windows XP Professional
Front-end : .NET (C#)
Back-end : SQL Server 2008

OPERATING SYSTEM

An operating system (commonly abbreviated to either OS or O/S) is an interface

between hardware and user; it is responsible for the management and coordination of

activities and the sharing of the limited resources of the computer.

The operating system acts as a host for applications that are run on the machine.

As a host, one of the purposes of an operating system is to handle the details of the

operation of the hardware.

This relieves application programs from having to manage these details and

makes it easier to write applications.

Almost all computers, including handheld computers, desktop computers,

supercomputers, and even video game consoles, use an operating system of some type.

Some of the oldest models may, however, use an embedded operating system that
may be contained on a compact disc or other data storage device.

Operating systems offer a number of services to application programs and users.

Applications access these services through application programming interfaces (APIs)

or system calls.

By invoking these interfaces, the application can request a service from the

operating system, pass parameters, and receive the results of the operation.

Users may also interact with the operating system with some kind of

software user interface (UI) like typing commands by using command line interface

(CLI) or using a graphical user interface (GUI, commonly pronounced “gooey”).

For hand-held and desktop computers, the user interface is generally considered

part of the operating system.

On large multi-user systems like UNIX and Unix-like systems, the user

interface is generally implemented as an application program that runs outside the

operating system. (Whether the user interface should be included as part of the

operating system is a point of contention.)

Common contemporary operating systems include Microsoft Windows, Mac

OS, Linux, BSD and Solaris. Microsoft Windows has a significant majority of market

share in the desktop and notebook computer markets, while servers generally run on

Unix or Unix-like systems. Embedded device markets are split amongst several

operating systems.

OVERVIEW OF WINDOWS XP PROFESSIONAL

Windows XP is a line of operating systems produced by Microsoft for use on

personal computers running x86 and IA-64 processors, including home and business

desktops, notebook computers, and media centers.

The name "XP" is short for "experience". Windows XP is the successor to both

Windows 2000 Professional and Windows Me, and is the first consumer-oriented

operating system produced by Microsoft to be built on the Windows NT kernel and

architecture.

Windows XP was first released on 25 October 2001, and over 400 million

copies were in use in January 2006, according to an estimate in that month by an IDC

analyst. It was succeeded by Windows Vista, which was released to volume license

customers on 8 November 2006, and worldwide to the general public on 30 January

2007.

Direct OEM and retail sales of Windows XP ceased on 30 June 2008, although
it was still possible to obtain Windows XP from System Builders (smaller OEMs who
sell assembled computers) until 31 July 2009, or by purchasing Windows Vista
Ultimate or Business and then downgrading to Windows XP.

The increased functionality of EFS has significantly enhanced the power of the

Windows XP Professional client. Windows XP Professional now provides additional

flexibility for corporate users when deploying security solutions based on encrypted

data files and folders. These new features include:

• Full support for revocation checking on certificates used by the system

• Alternate color support (green) for encrypted files

• Support for encrypted offline folders

• Multi-user support for encrypted files in the shell user interface (UI)

• Support for the Microsoft-enhanced cryptographic service provider (CSP)

• Additional support for FIPS 140-1 Level 1 compliant symmetric algorithms

(3DES [Data Encryption Standard])

• End-to-end encryption using EFS over WebDAV

• Enhanced recovery policy flexibility

• Additional security features for protecting EFS data.

NEW FEATURES

Because Windows XP is built on a Windows 2000 base, you find that there are a

number of familiar secure authentication features. Windows XP, however, goes

beyond Windows NT 4.0 and Windows 2000 in a number of ways. Whether you are

connected to a domain, configured as a part of a workgroup, or are using a stand-alone

computer, you find that authentication is even more manageable than before. Here are

some of the biggest changes and additions to authentication processes, management,

and configuration:

• Everyone Group. By default, the Everyone Group no longer includes the

Anonymous Group. Previously, the Anonymous Group was granted access to any

resource to which the Everyone Group was granted access, even though

anonymous users are not required to supply usernames and passwords for

authentication.

• Guest Account. By default, Windows XP workstations not joined to a domain

are configured to use Guest only network logons. All users, including anonymous

users, accessing resources on a computer from over a network with this default

setting, are forced to use the Guest Account for authentication and are subsequently

given all the same access rights and privileges as the Guest Account.

• Service Accounts. Two new service accounts have been added to Windows XP

to enhance the granularity of service account access: Local Service for services

that run locally, and Network Service for services that run on the network. The

Local System account remains available as well, and is the only account that

has Act as part of the operating system rights by default.

• Blank Passwords. By default, Windows XP workstations that are not part of a

domain prevent users with blank passwords from logging on over the network. This

is especially helpful for preventing unauthorized access to home workstations

connected to the Internet. All blank password access is restricted to local logons

only.

• Password Reset Wizard. Windows XP supplies a recovery mechanism for use in

the event a user forgets his or her password. This Wizard creates a disk that can be

used to reset a local account password (it cannot be used to reset a domain

password). This disk is computer specific so it cannot be used on another

workstation, even if the username and password are the same. Others can use this

disk without proper authorization to access a local account, so it is a good idea to

keep this disk in a safe location.

• Stored User Names. Windows XP allows a user to store frequently used

username and password combinations for access to other resources, such as secured

Web sites or computers in an untrusted domain. This information becomes part of

the user's profile and can travel around the network with the user if roaming

profiles have been enabled.

• Fast User Switching. Fast user switching allows multiple users of the same

computer to log on without shutting down applications that may be in use by

another user who is currently logged on to the system. Fast User Switching uses

Terminal Services technology to provide this ability. This feature is only available

on computers that are not connected to a domain.

3.3 ABOUT THE SOFTWARE


SOFTWARE PROFILE
LANGUAGE SPECIFICATION
FEATURES OF .NET

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application

development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK:

1. To provide a consistent object-oriented programming environment, whether
object code is stored and executed locally, executed locally but Internet-distributed,
or executed remotely.

2. To provide a code-execution environment that minimizes software deployment
and versioning conflicts and guarantees safe execution of code.

3. To eliminate the performance problems of scripted or interpreted environments.

There are different types of applications, such as Windows-based applications and
Web-based applications. The framework builds all communication on industry
standards to ensure that code based on the .NET Framework can integrate with any
other code.

COMPONENTS OF .NET FRAMEWORK

THE COMMON LANGUAGE RUNTIME (CLR):

The common language runtime is the foundation of the .NET Framework. It
manages code at execution time, providing important services such as memory
management, thread management, and remoting, and also ensures security and
robustness. The concept of code management is a fundamental principle of the
runtime. Code that targets the runtime is known as managed code, while code that
does not target the runtime is known as unmanaged code.

THE .NET FRAME WORK CLASS LIBRARY:

It is a comprehensive, object-oriented collection of reusable types used to

develop applications ranging from traditional command-line or graphical user

interface (GUI) applications to applications based on the latest innovations provided

by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the

common language runtime into their processes and initiate the execution of managed

code, thereby creating a software environment that can exploit both managed and

unmanaged features. The .NET Framework not only provides several runtime hosts,

but also supports the development of third-party runtime hosts.

Internet Explorer is an example of an unmanaged application that hosts

the runtime (in the form of a MIME type extension). Using Internet Explorer to host

the runtime enables you to embed managed components or Windows Forms controls in

HTML documents.

FEATURES OF THE COMMON LANGUAGE RUNTIME:

The common language runtime manages memory, thread execution,
code execution, code safety verification, compilation, and other system services.
Among the qualities it provides are:

• Security.

• Robustness.

• Productivity.

• Performance.

SECURITY

The runtime enforces code access security. The security features of the
runtime thus enable legitimate Internet-deployed software to be exceptionally feature
rich. With regard to security, managed components are awarded varying degrees of
trust, depending on a number of factors that include their origin; this determines
whether a component may perform file-access operations, registry-access operations,
or other sensitive functions.

ROBUSTNESS:

The runtime also enforces code robustness by implementing a strict

type- and code-verification infrastructure called the common type system (CTS). The

CTS ensures that all managed code is self-describing. The managed environment of

the runtime eliminates many common software issues.

PRODUCTIVITY:

The runtime also accelerates developer productivity. For example,

programmers can write applications in their development language of choice, yet take

full advantage of the runtime, the class library, and components written in other

languages by other developers.

PERFORMANCE:

The runtime is designed to enhance performance. Although the

common language runtime provides many standard runtime services, managed code is

never interpreted. A feature called just-in-time (JIT) compiling enables all managed

code to run in the native machine language of the system on which it is executing.

Finally, the runtime can be hosted by high-performance, server-side applications, such

as Microsoft® SQL Server™ and Internet Information Services (IIS).

Overview Of The .Net Framework:

The .NET Framework is a new computing platform that simplifies

application development in the highly distributed environment of the Internet. The

.NET Framework is designed to fulfill the following objectives: To provide a

consistent object-oriented programming environment whether object code is stored

and executed locally, executed locally but Internet-distributed, or executed remotely.

To provide a code-execution environment that minimizes software deployment and

versioning conflicts. To provide a code-execution environment that guarantees safe

execution of code, including code created by an unknown or semi-trusted third party.

To provide a code-execution environment that eliminates the performance problems of

scripted or interpreted environments. To make the developer experience consistent

across widely varying types of applications, such as Windows-based applications and

Web-based applications. To build all communication on industry standards to ensure

that code based on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language runtime

and the .NET Framework class library. The common language runtime is the

foundation of the .NET Framework; it manages code at execution time, providing
core services such as memory management, thread management, and remoting, while
also enforcing strict type safety and other forms of code accuracy that ensure security
and robustness. In fact, the concept of

code management is a fundamental principle of the runtime. Code that targets the

runtime is known as managed code, while code that does not target the runtime is

known as unmanaged code. The class library, the other main component of the .NET

Framework, is a comprehensive, object-oriented collection of reusable types that you

can use to develop applications ranging from traditional command-line or graphical

user interface (GUI) applications to applications based on the latest innovations

provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the

common language runtime into their processes and initiate the execution of managed

code, thereby creating a software environment that can exploit both managed and

unmanaged features. The .NET Framework not only provides several runtime hosts,

but also supports the development of third-party runtime hosts. For example,

ASP.NET hosts the runtime to provide a scalable, server-side environment for

managed code. ASP.NET works directly with the runtime to enable Web Forms

applications and XML Web services, both of which are discussed later in this topic.
Hosting the runtime in this way makes managed mobile code
possible, but with significant improvements that only managed code can offer, such as

semi-trusted execution and secure isolated file storage. The following illustration

shows the relationship of the common language runtime and the class library to your

applications and to the overall system.

Features Of The Common Language Runtime:

The common language runtime manages memory, thread execution, code

execution, code safety verification, compilation, and other system services. These

features are intrinsic to the managed code that runs on the common language runtime.

The following sections describe the main components and features of the .NET

Framework in greater detail. With regards to security, managed components are

awarded varying degrees of trust, depending on a number of factors that include their

origin (such as the Internet, enterprise network, or local computer). This means that a

managed component might or might not be able to perform file-access operations,

registry-access operations, or other sensitive functions, even if it is being used in the

same active application. The runtime enforces code access security.

The runtime also enforces code robustness by implementing a strict type- and

code-verification infrastructure called the common type system (CTS). The CTS

ensures that all managed code is self-describing. The various Microsoft and third-party

language compilers generate managed code that conforms to the CTS. This means that

managed code can consume other managed types and instances, while strictly

enforcing type fidelity and type safety.

[Figure: Architecture of the .NET Framework]

In addition, the managed environment of the runtime eliminates many common

software issues. For example, the runtime automatically handles object layout and

manages references to objects, releasing them when they are no longer being used.

This automatic memory management resolves the two most common application

errors, memory leaks and invalid memory references. The runtime also accelerates

developer productivity. For example, programmers can write applications in their

development language of choice, yet take full advantage of the runtime, the class

library, and components written in other languages by other developers. Any compiler

vendor who chooses to target the runtime can do so. Language compilers that target

the .NET Framework make the features of the .NET Framework available to existing

code written in that language, greatly easing the migration process for existing

applications.

While the runtime is designed for the software of the future, it also supports

software of today and yesterday. Interoperability between managed and unmanaged

code enables developers to continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language

runtime provides many standard runtime services, managed code is never interpreted.

A feature called just-in-time (JIT) compiling enables all managed code to run in the

native machine language of the system on which it is executing. Meanwhile, the

memory manager removes the possibilities of fragmented memory and increases

memory locality-of-reference to further increase performance. Finally, the runtime can

be hosted by high-performance, server-side applications, such as Microsoft® SQL

Server™ and Internet Information Services (IIS).

Net Framework Class Library:

The .NET Framework class library is a collection of reusable types that tightly

integrate with the common language runtime. The class library is object oriented,

providing types from which your own managed code can derive functionality. This not

only makes the .NET Framework types easy to use, but also reduces the time

associated with learning new features of the .NET Framework. In addition, third-party

components can integrate seamlessly with classes in the .NET Framework. For

example, the .NET Framework collection classes implement a set of interfaces that

you can use to develop your own collection classes. Your collection classes will blend

seamlessly with the classes in the .NET Framework. As you would expect from an

object-oriented class library, the .NET Framework types enable you to accomplish a

range of common programming tasks, including tasks such as string management, data

collection, database connectivity, and file access. In addition to these common tasks,

the class library includes types that support a variety of specialized development

scenarios. For example, you can use the .NET Framework to develop the following

types of applications and services: Console applications, Windows GUI applications

(Windows Forms), ASP.NET applications, XML Web services, Windows services.

For example, the Windows Forms classes are a comprehensive set of reusable types

that vastly simplify Windows GUI development. If you write an ASP.NET Web Form

application, you can use the Web Forms classes. Client applications are the closest to a

traditional style of application in Windows-based programming. These are the types of

applications that display windows or forms on the desktop, enabling a user to perform

a task.

Client applications include applications such as word processors and

spreadsheets, as well as custom business applications such as data-entry tools,

reporting tools, and so on. Client applications usually employ windows, menus,

buttons, and other GUI elements, and they likely access local resources such as the file

system and peripherals such as printers. Another kind of client application is the

traditional ActiveX control (now replaced by the managed Windows Forms control)

deployed over the Internet as a Web page. This application is much like other client

applications: it is executed natively, has access to local resources, and includes

graphical elements. In the past, developers created such applications using C/C++ in

conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application

development (RAD) environment such as Microsoft® Visual Basic®. The .NET

Framework incorporates aspects of these existing products into a single, consistent

development environment that drastically simplifies the development of client

applications.

The Windows Forms classes contained in the .NET Framework are designed to

be used for GUI development. You can easily create command windows, buttons,

menus, toolbars, and other screen elements with the flexibility necessary to

accommodate shifting business needs. For example, the .NET Framework provides

simple properties to adjust visual attributes associated with forms.

In some cases the underlying operating system does not support changing these

attributes directly, and in these cases the .NET Framework automatically recreates the

forms. This is one of many ways in which the .NET Framework integrates the

developer interface, making coding simpler and more consistent. Unlike ActiveX

controls, Windows Forms controls have semi-trusted access to a user's computer. This

means that binary or natively executing code can access some of the resources on the

user's system (such as GUI elements and limited file access) without being able to

access or compromise other resources.

Because of code access security, many applications that once needed to be

installed on a user's system can now be safely deployed through the Web. Your

applications can implement the features of a local application while being deployed

like a Web page.

Creating and Using Threads:

You create a new thread in Visual Basic .NET by declaring a variable of type
System.Threading.Thread and calling the constructor with the AddressOf operator
and the name of the procedure or method you want to execute on the new thread.
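
For example (a minimal, self-contained sketch; Worker is an illustrative procedure name):

Imports System.Threading

Module ThreadDemo
    Sub Worker()
        Console.WriteLine("Running on a separate thread.")
    End Sub

    Sub Main()
        ' Pass the procedure to execute with the AddressOf operator.
        Dim myThread As New Thread(AddressOf Worker)
        myThread.Start()
        myThread.Join()   ' wait for the worker thread to finish
    End Sub
End Module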

Starting and Stopping Threads:

To start the execution of a new thread, use the Start method, as in the following

code:

MyThread.Start()

To stop the execution of a thread, use the Abort method, as in the following

code:

MyThread.Abort()

Besides starting and stopping threads, you can also pause threads by calling the Sleep

or Suspend methods, resume a suspended thread with the Resume method, and destroy

a thread using the Abort method, as in the following code:

Thread.Sleep(1000)   ' Sleep is a shared method; it pauses the current thread

MyThread.Suspend()

MyThread.Abort()

Thread Priorities:

Every thread has a Priority property that determines how large a time slice
it gets to execute. The operating system allocates longer time slices to high-priority
threads than it does to low-priority threads. New threads are created with the value
Normal, but you can adjust the Priority property to any of the other values in the
System.Threading.ThreadPriority enumeration. See the ThreadPriority enumeration
for a detailed description of the various thread priorities.
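
For instance, assuming a Worker procedure as in the earlier sketch:

' New threads start at Normal; adjust Priority before starting.
Dim worker As New System.Threading.Thread(AddressOf Worker)
worker.Priority = System.Threading.ThreadPriority.BelowNormal
worker.Start()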

Foreground and Background Threads:

A foreground thread runs indefinitely, while a background thread terminates

once the last foreground thread has stopped. You can use the IsBackground property to

determine or change the background status of a thread.

Changing Thread States:

Once a thread has started, you can call its methods to change its state. For

example, you can cause a thread to pause for a fixed number of milliseconds by calling

Thread.Sleep. The Sleep method takes as a parameter a timeout, which is the number

of milliseconds that the thread remains blocked. Calling

Thread.Sleep(System.Threading.Timeout.Infinite) causes a thread to sleep until it is

interrupted by another thread that calls Thread.Interrupt. The Thread.Interrupt method

wakes the destination thread out of any wait it may be in and causes an exception to be

raised. You can also pause a thread by calling Thread.Suspend. When a thread calls

Thread.Suspend on itself, the call blocks until another thread resumes it by calling

Thread.Resume. When a thread calls Thread.Suspend on another thread, the call is

non-blocking and causes the other thread to pause. Calling Thread.Resume breaks

another thread out of its suspended state and causes it to resume execution.

Unlike Thread.Sleep, Thread.Suspend does not immediately stop a thread; the

suspended thread does not pause until the common language runtime determines that it

has reached a safe point. The Thread.Abort method stops a running thread by raising a

ThreadAbortException exception that causes the thread to die.
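
A small sketch of the Sleep/Interrupt interaction described above (names are illustrative):

Imports System.Threading

Module SleepInterruptDemo
    Sub Sleeper()
        Try
            ' Block indefinitely until another thread calls Interrupt.
            Thread.Sleep(Timeout.Infinite)
        Catch ex As ThreadInterruptedException
            Console.WriteLine("Woken by Thread.Interrupt.")
        End Try
    End Sub

    Sub Main()
        Dim t As New Thread(AddressOf Sleeper)
        t.Start()
        Thread.Sleep(500)   ' give the sleeper time to block
        t.Interrupt()       ' raises ThreadInterruptedException in it
        t.Join()
    End Sub
End Module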

Overview Of Ado.Net:

ADO.NET provides consistent access to data sources such as Microsoft SQL

Server, as well as data sources exposed via OLE DB and XML. Data-sharing

consumer applications can use ADO.NET to connect to these data sources and

retrieve, manipulate, and update data.

ADO.NET cleanly factors data access from data manipulation into discrete

components that can be used separately or in tandem. ADO.NET includes .NET data

providers for connecting to a database, executing commands, and retrieving results.

Those results are either processed directly, or placed in an ADO.NET DataSet object

in order to be exposed to the user in an ad-hoc manner, combined with data from

multiple sources, or remoted between tiers. The ADO.NET DataSet object can also be

used independently of a .NET data provider to manage data local to the application or

sourced from XML.

The ADO.NET classes are found in System.Data.dll, and are integrated with the

XML classes found in System.Xml.dll. When compiling code that uses the

System.Data namespace, reference both System.Data.dll and System.Xml.dll.

ADO.NET provides functionality to developers writing managed code similar to the

functionality provided to native COM developers by ADO.

Design Goals for ADO.NET

As application development has evolved, new applications have become loosely

coupled based on the Web application model. More and more of today's applications

use XML to encode data to be passed over network connections. Web applications use

HTTP as the fabric for communication between tiers, and therefore must explicitly

handle maintaining state between requests. This new model is very different from the

connected, tightly coupled style of programming that characterized the client/server

era, where a connection was held open for the duration of the program's lifetime and

no special handling of state was required.

In designing tools and technologies to meet the needs of today's developer,

Microsoft recognized that an entirely new programming model for data access was

needed, one that is built upon the .NET Framework. Building on the .NET Framework

ensures that the data access technology would be uniform — components would share

a common type system, design patterns, and naming conventions. ADO.NET was

designed to meet the needs of this new programming model: disconnected data

architecture, tight integration with XML, common data representation with the ability

to combine data from multiple and varied data sources, and optimized facilities for

interacting with a database, all native to the.NET Framework.

Leverage Current Ado Knowledge:

The design for ADO.NET addresses many of the requirements of today's

application development model. At the same time, the programming model stays as

similar as possible to ADO, so current ADO developers do not have to start from the

beginning in learning a brand-new data access technology.

ADO.NET is an intrinsic part of the .NET Framework without seeming

completely foreign to the ADO programmer. ADO.NET coexists with ADO. While

most new .NET-based applications will be written using ADO.NET, ADO remains

available to the .NET programmer through .NET COM interoperability services.

Support The N-Tier Programming Model:

ADO.NET provides first-class support for the disconnected, n-tier programming

environment for which many new applications are written. The concept of working

with a disconnected set of data has become a focal point in the programming model.

The ADO.NET solution for n-tier programming is the DataSet.

Integrate XML Support

XML and data access are intimately tied — XML is all about encoding data,

and data access is increasingly becoming all about XML. The .NET Framework does

not just support Web standards; it is built entirely on top of them. XML support is

built into ADO.NET at a very fundamental level.

The XML classes in the .NET Framework and ADO.NET are part of the same

architecture — they integrate at many different levels. You no longer have to choose

between the data access set of services and their XML counterparts; the ability to cross

over from one to the other is inherent in the design of both.

Ado.Net Architecture:

Data processing has traditionally relied primarily on a connection-based, two-

tier model. As data processing increasingly uses multi-tier architectures, programmers

are switching to a disconnected approach to provide better scalability for their

applications.

Xml And Ado.Net:

ADO.NET leverages the power of XML to provide disconnected access to data.

ADO.NET was designed hand-in-hand with the XML classes in the .NET Framework

— both are components of a single architecture. ADO.NET and the XML classes in the

.NET Framework converge in the DataSet object. The DataSet can be populated with

data from an XML source, whether it is a file or an XML stream. The DataSet can be

written as World Wide Web Consortium (W3C) compliant XML, including its schema

as XML Schema definition language (XSD) schema, regardless of the source of the

data in the DataSet. Because the native serialization format of the DataSet is XML, it

is an excellent medium for moving data between tiers making the DataSet an optimal

choice for remoting data and schema context to and from an XML Web service.
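
As a small illustration of this convergence (the file name and table layout are arbitrary), a DataSet can be written out and read back as XML:

Imports System.Data

Module DataSetXmlDemo
    Sub Main()
        Dim ds As New DataSet("Readings")
        Dim table As DataTable = ds.Tables.Add("Sensor")
        table.Columns.Add("Id", GetType(Integer))
        table.Columns.Add("Value", GetType(Double))
        table.Rows.Add(1, 23.5)

        ' The native serialization format of the DataSet is XML.
        ds.WriteXml("readings.xml", XmlWriteMode.WriteSchema)

        Dim copy As New DataSet()
        copy.ReadXml("readings.xml")
        Console.WriteLine(copy.Tables("Sensor").Rows.Count)   ' prints 1
    End Sub
End Module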

Ado.Net Components:

The ADO.NET components have been designed to factor data access from data

manipulation. There are two central components of ADO.NET that accomplish this:

the DataSet, and the .NET data provider, which is a set of components including the

Connection, Command, DataReader, and DataAdapter objects. The ADO.NET DataSet

is the core component of the disconnected architecture of ADO.NET. The DataSet is

explicitly designed for data access independent of any data source. As a result it can be

used with multiple and differing data sources, used with XML data, or used to manage

data local to the application. The DataSet contains a collection of one or more

DataTable objects made up of rows and columns of data, as well as primary key,

foreign key, constraint, and relation information about the data in the DataTable

objects. The Connection object provides connectivity to a data source.

The Command object enables access to database commands to return data,

modify data, run stored procedures, and send or retrieve parameter information. The

DataReader provides a high-performance stream of data from the data source. Finally,

the DataAdapter provides the bridge between the DataSet object and the data source.

The DataAdapter uses Command objects to execute SQL commands at the data source

to both load the DataSet with data, and reconcile changes made to the data in the

DataSet back to the data source. You can write .NET data providers for any data

source.
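
A minimal sketch tying these objects together; the connection string, database, and table names are illustrative assumptions:

Imports System.Data
Imports System.Data.SqlClient

Module AdoNetDemo
    Sub Main()
        Dim connStr As String = "Data Source=.;Initial Catalog=SensorDb;Integrated Security=True"
        Using conn As New SqlConnection(connStr)
            ' The DataAdapter bridges the data source and the DataSet.
            Dim adapter As New SqlDataAdapter("SELECT Id, Value FROM Readings", conn)
            Dim ds As New DataSet()
            adapter.Fill(ds, "Readings")   ' opens and closes the connection itself

            For Each row As DataRow In ds.Tables("Readings").Rows
                Console.WriteLine("{0}: {1}", row("Id"), row("Value"))
            Next
        End Using
    End Sub
End Module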

[Figure: ADO.NET Architecture]

Remoting Or Marshaling Data Between Tiers And Clients:

The design of the DataSet enables you to easily transport data to clients over the

Web using XML Web services, as well as allowing you to marshal data between .NET

components using .NET Remoting services. You can also remote a strongly typed

DataSet in this fashion. Note that DataTable objects can also be used with remoting

services, but cannot be transported via an XML Web service.

ASP.NET

ASP.NET is the next version of Active Server Pages (ASP); it is a

unified Web development platform that provides the services necessary for developers

to build enterprise-class Web applications. While ASP.NET is largely syntax

compatible, it also provides a new programming model and infrastructure for more

secure, scalable, and stable applications.

ASP.NET is a compiled, .NET-based environment; we can author applications in
any .NET compatible language, including Visual Basic .NET, C#, and JScript .NET.

Additionally, the entire .NET Framework is available to any ASP.NET application.

Developers can easily access the benefits of these technologies, which include the

managed common language runtime environment (CLR), type safety, inheritance, and

so on.

ASP.NET has been designed to work seamlessly with WYSIWYG HTML

editors and other programming tools, including Microsoft Visual Studio .NET. Not

only does this make Web development easier, but it also provides all the benefits that

these tools have to offer, including a GUI that developers can use to drop server

controls onto a Web page and fully integrated debugging support.

Developers can choose from the following two features when creating an

ASP.NET application, Web Forms and Web services, or combine these in any way

they see fit. Each is supported by the same infrastructure that allows you to use

authentication schemes, cache frequently used data, or customize your application's

configuration, to name only a few possibilities.

Web Forms allows us to build powerful forms-based Web pages. When building

these pages, we can use ASP.NET server controls to create common UI elements, and

program them for common tasks. These controls allow us to rapidly build a Web

Form out of reusable built-in or custom components, simplifying the code of a page.

An XML Web service provides the means to access server functionality remotely.

Using Web services, businesses can expose programmatic interfaces to their data or

business logic, which in turn can be obtained and manipulated by client and server

applications. XML Web services enable the exchange of data in client-server or

server-server scenarios, using standards like HTTP and XML messaging to move data

across firewalls. XML Web services are not tied to a particular component technology

or object-calling convention. As a result, programs written in any language, using any

component model, and running on any operating system can access XML Web

services.
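
A minimal ASMX-style XML Web service sketch (class and method names are illustrative):

Imports System.Web.Services

Public Class ReadingService
    Inherits WebService

    ' Exposed over HTTP; callable from any platform that can speak XML.
    <WebMethod()> Public Function GetReading(id As Integer) As Double
        Return 23.5   ' placeholder value for illustration
    End Function
End Class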

Each of these models can take full advantage of all ASP.NET features, as

well as the power of the .NET Framework and .NET Framework common language

runtime.

Accessing databases from ASP.NET applications is an often-used

technique for displaying data to Web site visitors. ASP.NET makes it easier than ever

to access databases for this purpose. It also allows us to manage the database from

our code.

ASP.NET provides a simple model that enables Web developers to

write logic that runs at the application level. Developers can write this code in the

global.asax text file or in a compiled class deployed as an assembly. This logic can

include application-level events, but developers can easily extend this model to suit

the needs of their Web application.

ASP.NET provides easy-to-use application and session-state facilities that

are familiar to ASP developers and are readily compatible with all other .NET

Framework APIs.

ASP.NET offers the IHttpHandler and IHttpModule interfaces.

Implementing the IHttpHandler interface gives you a means of interacting with the

low-level request and response services of the IIS Web server and provides

functionality much like ISAPI extensions, but with a simpler programming model.

Implementing the IHttpModule interface allows you to include custom events that

participate in every request made to your application.

ASP.NET takes advantage of performance enhancements found in the .NET

Framework and common language runtime. Additionally, it has been designed to offer

significant performance improvements over ASP and other Web development

platforms. All ASP.NET code is compiled, rather than interpreted, which allows early

binding, strong typing, and just-in-time (JIT) compilation to native code, to name only

a few of its benefits. ASP.NET is also easily factorable, meaning that developers can

remove modules (a session module, for instance) that are not relevant to the

application they are developing.

ASP.NET provides extensive caching services (both built-in services and

caching APIs). ASP.NET also ships with performance counters that developers and

system administrators can monitor to test new applications and gather metrics on

existing applications.

Writing custom debug statements to your Web page can help immensely in

troubleshooting your application's code. However, it can cause embarrassment if it is

not removed. The problem is that removing the debug statements from your pages

when your application is ready to be ported to a production server can require

significant effort.

ASP.NET offers the TraceContext class, which allows us to write custom

debug statements to our pages as we develop them. They appear only when you have

enabled tracing for a page or entire application. Enabling tracing also appends details

about a request to the page, or, if you so specify, to a custom trace viewer that is stored

in the root directory of your application.
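
For example, in a Web Forms code-behind (the messages appear only while tracing is enabled for the page or application):

Protected Sub Page_Load(sender As Object, e As EventArgs) Handles Me.Load
    ' Written through the page's TraceContext (the Trace property).
    Trace.Write("Diagnostics", "Page_Load entered")
    Trace.Warn("Diagnostics", "Highlighted trace message")
End Sub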

The .NET Framework and ASP.NET provide default authorization and

authentication schemes for Web applications. We can easily remove, add to, or replace

these schemes, depending upon the needs of our application.

ASP.NET configuration settings are stored in XML-based files, which are

human readable and writable. Each of our applications can have a distinct

configuration file and we can extend the configuration scheme to suit our

requirements.

DATA ACCESS WITH ADO.NET

As you develop applications using ADO.NET, you will have different

requirements for working with data. You might never need to directly edit an XML

file containing data - but it is very useful to understand the data architecture in

ADO.NET.

ADO.NET offers several advantages over previous versions of ADO:

• Interoperability

• Maintainability

• Programmability

• Performance

• Scalability

INTEROPERABILITY:

ADO.NET applications can take advantage of the flexibility and broad

acceptance of XML. Because XML is the format for transmitting datasets across the

network, any component that can read the XML format can process data. The

receiving component need not be an ADO.NET component.

The transmitting component can simply transmit the dataset to its destination

without regard to how the receiving component is implemented. The destination

component might be a Visual Studio application or any other application implemented

with any tool whatsoever.

The only requirement is that the receiving component be able to read XML; indeed,
XML was designed with exactly this kind of interoperability in mind.

MAINTAINABILITY:

In the life of a deployed system, modest changes are possible, but substantial
architectural changes are rarely attempted because they are so difficult. As the

performance load on a deployed application server grows, system resources can

become scarce and response time or throughput can suffer. Faced with this problem,

software architects can choose to divide the server's business-logic processing and

user-interface processing onto separate tiers on separate machines.

In effect, the application server tier is replaced with two tiers, alleviating

the shortage of system resources. If the original application is implemented in

ADO.NET using datasets, this transformation is made easier.

ADO.NET data components in Visual Studio encapsulate data access

functionality in various ways that help you program more quickly and with fewer

mistakes.

PERFORMANCE:

ADO.NET datasets offer performance advantages over ADO disconnected

record sets. In ADO.NET data-type conversion is not necessary.

SCALABILITY:

ADO.NET accommodates scalability by encouraging programmers to conserve

limited resources. Any ADO.NET application employs disconnected access to data; it

does not retain database locks or active database connections for long durations.

VISUAL STUDIO .NET

Visual Studio .NET is a complete set of development tools for building ASP

Web applications, XML Web services, desktop applications, and mobile applications.

In addition to building high-performing desktop applications, you can use Visual

Studio's powerful component-based development tools and other technologies to

simplify team-based design, development, and deployment of Enterprise solutions.

Visual Basic .NET, Visual C++ .NET, and Visual C# .NET all use the same

integrated development environment (IDE), which allows them to share tools and

facilitates in the creation of mixed-language solutions. In addition, these languages

leverage the functionality of the .NET Framework and simplify the development of

ASP Web applications and XML Web services.

Visual Studio supports the .NET Framework, which provides a common

language runtime and unified programming classes; ASP.NET uses these components

to create ASP Web applications and XML Web services. Also it includes MSDN

Library, which contains all the documentation for these development tools.

XML WEB SERVICES

XML Web services are applications that can receive the requested data using

XML over HTTP. XML Web services are not tied to a particular component

technology or object-calling convention; they can be accessed from any language,

component model, or operating system. In Visual Studio .NET, you can quickly create

and include XML Web services using Visual Basic, Visual C#, JScript, Managed

Extensions for C++, or ATL Server.

XML SUPPORT:

Extensible Markup Language (XML) provides a method for describing structured

data. XML is a subset of SGML that is optimized for delivery over the Web. The

World Wide Web Consortium (W3C) defines XML standards so that structured

data will be uniform and independent of applications. Visual Studio .NET fully

supports XML, providing the XML Designer to make it easier to edit XML and

create XML schemas.

VISUAL BASIC .NET

Visual Basic.NET, the latest version of Visual Basic, includes many new
features. Earlier versions of Visual Basic supported interfaces but not implementation
inheritance.

Visual basic.net supports implementation inheritance, interfaces and

46
overloading. In addition, Visual Basic .NET supports multithreading concept.

COMMON LANGUAGE SPECIFICATION (CLS):

Visual Basic .NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime).

CLR is the runtime environment provided by the .NET Framework; it manages

the execution of the code and also makes the development process easier by providing

services.

Visual Basic .NET is a CLS-compliant language. Any objects, classes, or components created in Visual Basic .NET can be used in any other CLS-compliant language.

In addition, we can use objects, classes, and components created in other CLS-compliant languages in Visual Basic .NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

IMPLEMENTATION INHERITANCE:

Visual Basic .NET supports implementation inheritance. This means that, while creating applications in Visual Basic .NET, we can derive from another class, which is known as the base class; the derived class inherits all the methods and properties of the base class. In the derived class, we can either use the existing code of the base class or override it. Therefore, with the help of implementation inheritance, code can be reused.
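Since this project's own code is written in C#, the idea can be sketched there as well (Sensor and TemperatureSensor are illustrative names only):

using System;

// Base class whose methods and properties the derived class inherits.
class Sensor
{
    public virtual string Read() { return "raw reading"; }
}

// Derived class reuses the base code and overrides one method.
class TemperatureSensor : Sensor
{
    public override string Read() { return base.Read() + " in celsius"; }
}

class Demo
{
    static void Main()
    {
        Sensor s = new TemperatureSensor();
        Console.WriteLine(s.Read()); // prints "raw reading in celsius"
    }
}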

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In Visual Basic .NET the Sub Finalize procedure is available. The Sub Finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when the object is destroyed. In addition, the Sub Finalize procedure can be called only from the class it belongs to or from derived classes.
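A minimal C# analog, in which a finalizer plays the role of VB.NET's Sub Finalize (the Connection class is illustrative):

using System;

class Connection
{
    // Constructor: initializes the object when it is created.
    public Connection() { Console.WriteLine("resource acquired"); }

    // Finalizer: called automatically by the runtime when the object
    // is destroyed, and used to release the resources it held.
    ~Connection() { Console.WriteLine("resource released"); }
}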

GARBAGE COLLECTION:

Garbage Collection is another new feature in Visual Basic.NET. The .NET

Framework monitors allocated resources, such as objects and variables. In addition,

the .NET Framework automatically releases memory for reuse by destroying objects

that are no longer in use. In Visual Basic.NET, the garbage collector checks for the

objects that are not currently in use by applications. When the garbage collector comes

across an object that is marked for garbage collection, it releases the memory occupied

by the object.

OVERLOADING:

Overloading is another feature in Visual Basic.NET. Overloading enables us to

define multiple procedures with the same name, where each procedure has a different

set of arguments. Besides using overloading for procedures, we can use it for

constructors and properties in a class.
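For example, in the C# sketch below (the Logger class is illustrative), three procedures share one name but differ in their argument lists, and the compiler selects the right one at each call:

using System;

class Logger
{
    public void Log(string message)
    {
        Console.WriteLine(message);
    }

    public void Log(string message, int severity)   // same name, extra argument
    {
        Console.WriteLine("[{0}] {1}", severity, message);
    }

    public void Log(Exception ex)                   // same name, different type
    {
        Log(ex.Message, 3);
    }
}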

MULTITHREADING:

Visual Basic .NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously. We can use multithreading to decrease the time taken by an application to respond to user interaction.

To decrease the time taken by an application to respond to user interaction, we

must ensure that a separate thread in the application handles user interaction.
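A minimal C# sketch of this pattern (the two-second delay merely simulates slow work):

using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        // Run the slow task on a separate thread so the main thread
        // stays free to respond to user interaction.
        Thread worker = new Thread(() =>
        {
            Thread.Sleep(2000); // simulate slow work
            Console.WriteLine("background task finished");
        });
        worker.IsBackground = true;
        worker.Start();

        Console.WriteLine("main thread is still responsive");
        worker.Join(); // wait for the worker before exiting
    }
}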

STRUCTURED EXCEPTION HANDLING:

Visual Basic .NET supports structured exception handling, which enables us to detect and remove errors at runtime. In Visual Basic .NET, we use Try…Catch…Finally

statements to create exception handlers. Using Try…Catch…Finally statements, we

can create robust and effective exception handlers to improve the performance of our

application.
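A minimal C# sketch of the same Try…Catch…Finally pattern (the configuration-file path is illustrative):

using System;
using System.IO;

class ExceptionDemo
{
    static string ReadConfig(string path)
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(path);
            return reader.ReadToEnd();
        }
        catch (FileNotFoundException ex)
        {
            Console.WriteLine("Config missing: " + ex.Message);
            return string.Empty;
        }
        finally
        {
            // Runs whether or not an exception was thrown.
            if (reader != null) reader.Close();
        }
    }
}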

FEATURES OF SQL SERVER

The OLAP Services feature available in SQL Server version 7.0 is now

called SQL Server 2000 Analysis Services. The term OLAP Services has been

replaced with the term Analysis Services. Analysis Services also includes a new data

mining component. The Repository component available in SQL Server version 7.0 is

now called Microsoft SQL Server 2000 Meta Data Services. References to the

component now use the term Meta Data Services. The term repository is used only in

reference to the repository engine within Meta Data Services.

A SQL Server database consists of six types of objects. They are:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

6. MODULE
TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. We can specify what kind of data each field will hold.

Datasheet View

To add, edit or analyse the data itself, we work in the table's datasheet view.

QUERY:

A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables.

The data that make up the answer is either a dynaset (if you can edit it) or a snapshot (if it cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.

FORMS:

A form is used to view and edit information in the database record by record. A form displays only the information we want to see, in the way we want to see it. Forms use familiar controls such as textboxes and checkboxes. This makes viewing and entering data easy.

Views of Form:

We can work with forms in several views. Primarily there are two:

1. Design View

2. Form View

Design View

To build or modify the structure of a form, we work in the form design view. We can add controls to the form that are bound to fields in a table or query, including textboxes, option buttons, graphs and pictures.

Form View

Form view displays the whole design of the form.

REPORT:

A report is used to view and print information from the database. The report can group records into many levels and compute totals and averages by checking values from many records at once. Also, the report is attractive and distinctive because we have control over its size and appearance.

MACRO:

A macro is a set of actions. Each action in a macro does something, such as opening a form or printing a report. We write macros to automate common tasks.

MODULE:

Modules are units of code written in the Access Basic language. We can write and use modules to automate and customize the database in very sophisticated ways.

UML DIAGRAM:

Unified Modeling Language (UML): UML is a method for describing the system architecture in detail using blueprints. UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software. The UML's four structural diagrams exist to visualize, specify, construct and document the static aspects of a system. We can view the static parts of a system using one of the following diagrams.

Definition:

UML is a general-purpose visual modeling language that is used to specify,

visualize, construct, and document the artifacts of the software system.

UML is a language:

It provides a vocabulary and rules for communication, and functions on conceptual and physical representations. So it is a modeling language.

UML Specifying:

Specifying means building models that are precise, unambiguous and complete. In particular, the UML addresses the specification of all the important analysis, design and implementation decisions that must be made in developing and deploying a software-intensive system.

UML Visualization:

The UML includes both graphical and textual representations. It makes it easy to visualize the system and supports better understanding.

UML Constructing:

UML models can be directly connected to a variety of programming languages

and it is sufficiently expressive and free from any ambiguity to permit the direct

execution of models.

UML Documenting:

UML provides a variety of documents in addition to raw executable code.

Goal of UML:

The primary goals in the design of the UML were:

• Provide users with a ready-to-use, expressive visual modeling language

so they can develop and exchange meaningful models.

• Provide extensibility and specialization mechanisms to extend the core

concepts.

• Be independent of particular programming languages and development

processes.

• Provide a formal basis for understanding the modeling language.

• Encourage the growth of the OO tools market.

• Support higher-level development concepts such as collaborations,

frameworks, patterns and components.

• Integrate best practices.

Uses of UML:

The UML is intended primarily for software-intensive systems. It has been used effectively for such domains as:

• Enterprise Information System

• Banking and Financial Services

• Telecommunications

• Transportation

• Defense/Aerospace

• Retail

UML DIAGRAM

DOTNET

.NET is a major technology change for Microsoft and the software world. Just

like the computer world moved from DOS to Windows, now they are moving to .NET.

But don't be surprised if you find anyone saying "I do not like .NET and I would stick with the good old COM and C++". There are still a lot of people who like to use the bullock-cart instead of the latest Honda car. The simple answer is: it is the technology from Microsoft on which all other Microsoft technologies will depend in the future.

.NET technology was introduced by Microsoft to catch the market from Sun's Java. A few years back, Microsoft had only VC++ and VB to compete with Java, but Java was catching the market very fast. With the world depending more and more on the Internet/Web, and Java-related tools becoming the best choice for web applications, Microsoft seemed to be losing the battle. Thousands of programmers moved to Java from VC++ and VB. This was alarming for Microsoft, and many Microsoft fans kept asking "is Microsoft sleeping?" And Microsoft had the answer. One fine morning, they announced: "We are not sleeping. We have the answer for you." And that answer was .NET.

But Microsoft has a wonderful history of starting late but catching up quickly. This is true in the case of .NET too. Microsoft put their best men at work on a secret project called Next Generation Windows Services (NGWS), under the direct supervision of Mr. Bill Gates. The outcome of the project is what we now know as .NET. Even though .NET has borrowed most of its ideas from Sun's J2EE, it has really outperformed its competitors.

What is .NET?

➢ It is a platform neutral framework.

➢ It is a layer between the operating system and the programming language.

➢ It supports many programming languages, including VB.NET, C# etc.

➢ .NET provides a common set of class libraries, which can be accessed from any

.NET based programming language. There will not be separate set of classes

and libraries for each language. If you know any one .NET language, you can

write code in any .NET language!!

In future versions of Windows, .NET will be freely distributed as part of the operating system and users will never have to install .NET separately.

DOTNET Architecture

Common Language Runtime (CLR)

The full form of CLR is Common Language Runtime, and it forms the heart of the .NET Framework. All languages have a runtime, and it is the responsibility of the runtime to take care of the code execution of the program. For example, VC++ has MSCRT40.DLL, VB6 has MSVBVM60.DLL, and Java has the Java Virtual Machine. Similarly, .NET has the CLR. The following are the responsibilities of the CLR.

• Garbage Collection :-

The CLR automatically manages memory, thus eliminating memory leaks. When objects are no longer referenced, the GC automatically releases their memory, thus providing efficient memory management.

• Code Access Security :-

CAS grants rights to a program depending on the security configuration of the machine. For example, the program may have rights to edit or create a new file, but the security configuration of the machine may not allow the program to delete a file. CAS takes care that the code runs under the environment of the machine's security configuration.

• Code Verification :-

This ensures proper code execution and type safety while the code runs. It prevents the code from performing illegal operations such as accessing invalid memory locations.

• IL (Intermediate Language)-to-native translators and optimizers:-

The CLR uses JIT compilation to compile the IL code to machine code and then executes it. Depending on the platform, the CLR also determines the optimized way of running the IL code.

• CTS (Common Type System):-

In order for two languages to communicate smoothly, the CLR has the CTS (Common Type System). For example, in VB you have "Integer" and in C++ you have "long"; these data types are not compatible, so interfacing between them is very complicated. In order that two different languages can communicate, Microsoft introduced the Common Type System: the "Integer" data type in VB6 and the "int" data type in C++ both convert to System.Int32, which is a data type of the CTS.
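As a small illustration of the shared type underneath both languages:

class CtsDemo
{
    static void Main()
    {
        // VB.NET "Integer" and C# "int" are both aliases for System.Int32,
        // the shared CTS type, so values pass between the languages unchanged.
        System.Int32 total = 42;
        int same = total; // "int" is just the C# alias for System.Int32
        System.Console.WriteLine(same);
    }
}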

➢ CLS(Common Language Specification) :-

This is a subset of the CTS which all .NET languages are expected to support. It was always a dream of Microsoft to unite all the different languages under one umbrella, and the CLS is one step towards that. Microsoft has defined the CLS, which is nothing but a set of guidelines that a language must follow so that it can communicate with other .NET languages in a seamless manner.

• Managed Code

Managed code runs inside the environment of the CLR, i.e. the .NET runtime. In short, all IL code is managed code. But if you are using some third-party software, for example a VB6 or VC++ component, it is unmanaged code, as the .NET runtime (CLR) does not have control over the source code execution of that language.

SQL SERVER 2005

A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS and SQL Server. These systems allow users to create, update and extract information from their databases.

A database is a structured collection of data. Data refers to the

characteristics of people, things and events. SQL Server stores each data item in its

own fields. In SQL Server, the fields relating to a particular person, thing or event

are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of

fields. No two fields in a record can have the same field name.

During an SQL Server Database design project, the analysis of your business

needs identifies all the fields or attributes of interest. If your business needs change

over time, you define any additional fields or change the definition of existing fields.

SQL SERVER TABLES

SQL Server stores records relating to each other in a table. Different

tables are created for the various groups of information. Related tables are grouped

together to form a database.

PRIMARY KEY

Every table in SQL Server has a field or a combination of fields that

uniquely identifies each record in the table. The unique identifier is called the Primary Key, or simply the Key. The primary key provides the means to distinguish one record from all others in a table. It allows the user and the database system to

identify, locate and refer to one particular record in the database.

RELATIONAL DATABASE

Sometimes all the information of interest to a business operation can be

stored in one table. SQL Server makes it very easy to link the data in multiple

tables. Matching an employee to the department in which they work is one

example. This is what makes SQL Server a relational database management system,

or RDBMS. It stores data in two or more tables and enables you to define relationships between the tables.

FOREIGN KEY

When a field in one table matches the primary key of another table, the field is referred to as a foreign key. A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table.

REFERENTIAL INTEGRITY

Not only does SQL Server allow you to link multiple tables, it also

maintains consistency between them. Ensuring that the data among related tables is

correctly matched is referred to as maintaining referential integrity.
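As an illustrative sketch (the Customers and Orders tables and the connection string are examples, not this project's schema), the REFERENCES clause below makes SQL Server enforce referential integrity between the primary key of one table and the foreign key of another:

using System.Data.SqlClient;

class SchemaDemo
{
    static void CreateTables(string connectionString)
    {
        const string ddl = @"
            CREATE TABLE Customers (
                Id   INT         NOT NULL PRIMARY KEY,  -- primary key
                Name VARCHAR(50) NOT NULL);
            CREATE TABLE Orders (
                OrderId    INT NOT NULL PRIMARY KEY,
                CustomerId INT NOT NULL
                    REFERENCES Customers(Id));          -- foreign key";
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            new SqlCommand(ddl, conn).ExecuteNonQuery();
        }
    }
}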

DATA ABSTRACTION

A major purpose of a database system is to provide users with an

abstract view of the data. The system hides certain details of how the data is stored and maintained. Data abstraction is divided into three levels.

Physical level: This is the lowest level of abstraction, at which one describes how the data are actually stored.

Conceptual level: At this level of database abstraction, all the attributes that are actually stored, and the entities and relationships among them, are described.

View level: This is the highest level of abstraction at which one describes only

part of the database.

ADVANTAGES OF RDBMS

• Redundancy can be avoided

• Inconsistency can be eliminated

• Data can be shared

• Standards can be enforced

• Security restrictions can be applied

• Integrity can be maintained

• Conflicting requirements can be balanced

• Data independence can be achieved.

DISADVANTAGES OF DBMS

A significant disadvantage of a DBMS is cost. In addition to the cost of purchasing or developing the software, the hardware has to be upgraded to allow for the extensive programs and the workspace required for their execution and storage. While centralization reduces duplication, the lack of duplication requires that the database be adequately backed up so that in case of failure the data can be recovered.

FEATURES OF SQL SERVER (RDBMS)

SQL SERVER is one of the leading database management systems (DBMS) because it meets the uncompromising requirements of today's most demanding information systems. From complex decision support systems (DSS) to the most rigorous online transaction processing (OLTP) applications, even applications that require simultaneous DSS and OLTP access to the same critical data, SQL Server leads the industry in both performance and capability.

SQL SERVER is a truly portable, distributed, and open DBMS that delivers

unmatched performance, continuous operation and support for every database.

SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS which is specially designed for online transaction processing and for handling large database applications.

SQL SERVER with the transaction processing option offers features which contribute to a very high level of transaction-processing throughput, such as the row-level lock manager.

ENTERPRISE WIDE DATA SHARING

The unrivaled portability and connectivity of the SQL SERVER DBMS enable all the systems in the organization to be linked into a singular, integrated computing resource.

PORTABILITY

SQL SERVER is fully portable to more than 80 distinct hardware and operating

systems platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of

proprietary platforms. This portability gives complete freedom to choose the

database server platform that meets the system requirements.

OPEN SYSTEMS

SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry's most comprehensive collection of tools, applications, and third-party products. SQL Server's open architecture provides transparent access to data from other relational databases and even non-relational databases.

DISTRIBUTED DATA SHARING

SQL Server's networking and distributed database capabilities allow access to data stored on a remote server with the same ease as if the information were stored on a single local computer. A single SQL statement can access data at multiple sites. You can store data where system requirements such as performance, security or availability dictate.

UNMATCHED PERFORMANCE

The most advanced architecture in the industry allows the SQL

SERVER DBMS to deliver unmatched performance.

SOPHISTICATED CONCURRENCY CONTROL

Real-world applications demand access to critical data. With most database systems, applications become "contention bound", where performance is limited not by CPU power or by disk I/O, but by users waiting on one another for data access. SQL Server employs full, unrestricted row-level locking and contention-free queries to minimize, and in many cases entirely eliminate, contention wait times.

NO I/O BOTTLENECKS

SQL Server's fast-commit, group-commit and deferred-write technologies dramatically reduce disk I/O bottlenecks. While some databases write whole data blocks to disk at commit time, SQL Server commits transactions with at most one sequential write to the log file on disk. On high-throughput systems, one sequential write typically group-commits multiple transactions. Data read by a transaction remains in shared memory so that other transactions may access that data without reading it again from disk. Since fast commits write all the data necessary for recovery to the log file, modified blocks are written back to the database independently of the transaction commit.

ALGORITHMS CONCEPTS:

In this project, the RSA algorithm is used for security, as the basis of encryption and decryption.

In today's world, the importance of exchanging data over the Internet and other media is evident; the search for the best protection of data against security attacks, and for a method to deliver the data in time without much delay, is a matter of discussion among security-related communities. Cryptography is one such method that provides the security mechanism in a timely driven fashion. Cryptography is usually referred to as "the study of secrets", which is closely attached to the definition of encryption. The two main characteristics that identify and differentiate one encryption algorithm from another are their capability to secure the protected data against attacks, and their speed and effectiveness in securing the data. A comparative study can be made between four such widely used encryption algorithms, DES, 3DES, AES and RSA, on the basis of their ability to secure and protect data against attacks and their speed of encryption and decryption.

Basics of RSA algorithm

This is a public-key encryption algorithm developed by Ron Rivest, Adi Shamir and Len Adleman in 1977. It is the most popular asymmetric-key cryptographic algorithm. It may be used to provide both secrecy and digital signatures [2]. It uses prime numbers to generate the public and private keys, based on the mathematical fact that multiplying large numbers together is easy but factoring the product is hard. It operates on blocks of data in which the plaintext and ciphertext are integers between 0 and n−1 for some n. The size of n is typically 1024 bits, or about 309 decimal digits. Two different keys are used for encryption and decryption: the sender knows the encryption key and the receiver knows the decryption key.

Algorithm

➢ Choose large prime numbers p and q such that p ≠ q.

➢ Compute n = p*q.

➢ Compute φ(n) = (p−1)*(q−1).

➢ Choose the public key e such that gcd(φ(n), e) = 1 and 1 < e < φ(n).

➢ Select the private key d such that d*e mod φ(n) = 1.

So in the RSA algorithm, encryption and decryption are performed as follows.

Encryption: calculate the ciphertext C from the plaintext message M as

C = M^e mod n

Decryption:

M = C^d mod n = M^(e*d) mod n
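The toy C# sketch below follows exactly these steps, using the small textbook primes p = 61 and q = 53 (real keys use primes that are hundreds of digits long):

using System;
using System.Numerics;

class RsaDemo
{
    static void Main()
    {
        BigInteger p = 61, q = 53;
        BigInteger n = p * q;                // n = 3233
        BigInteger phi = (p - 1) * (q - 1);  // φ(n) = 3120
        BigInteger e = 17;                   // public key: gcd(e, φ(n)) = 1
        BigInteger d = ModInverse(e, phi);   // private key: d*e mod φ(n) = 1

        BigInteger m = 65;                         // plaintext, 0 <= m < n
        BigInteger c = BigInteger.ModPow(m, e, n); // C = M^e mod n
        BigInteger r = BigInteger.ModPow(c, d, n); // M = C^d mod n
        Console.WriteLine("cipher = {0}, recovered = {1}", c, r);
    }

    // Extended Euclidean algorithm for the modular multiplicative inverse.
    static BigInteger ModInverse(BigInteger a, BigInteger m)
    {
        BigInteger m0 = m, x0 = 0, x1 = 1;
        while (a > 1)
        {
            BigInteger q = a / m;
            (a, m) = (m, a % m);
            (x0, x1) = (x1 - q * x0, x0);
        }
        return x1 < 0 ? x1 + m0 : x1;
    }
}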

The RSA algorithm is a cryptographic algorithm published in 1978 by Ron Rivest, Adi Shamir and Leonard Adleman. RSA implements the following two important ideas:

1. Public-key encryption. In RSA, encryption keys are made public while

the decryption keys are kept private, so only the person with the correct decryption

key can decipher an encrypted message. Everyone has their own encryption and

corresponding decryption keys. The keys are made in such a way that the decryption

key cannot be easily deduced from the public encryption key.

2. Digital signatures. The receiver may need to verify that a transmitted message actually originated from the sender (authentication), and was not forged. This is done with the help of the sender's decryption key, and later the signature can be verified by anyone using the corresponding public encryption key. Signatures therefore cannot be forged. Also, no signer can later deny having signed the message.

The various steps involved in the RSA algorithm are:

Finding large prime numbers

Finding 'n' is the first step of the algorithm, where 'n' is the product of two prime numbers 'p' and 'q'. The number 'n' will be revealed in the encryption and decryption keys, but the numbers 'p' and 'q' will not be explicitly shown. The prime numbers 'p' and 'q' should be large, such that it will be very difficult to derive them from 'n'.

n = p*q -------------------- (1)

Finding the public key (e)

Choose a number 'e' such that 'e' is co-prime to φ(n), where φ(n) is Euler's totient function, which counts the number of positive integers less than or equal to 'n' that are relatively prime to 'n', i.e. φ(n) = (p−1)*(q−1).

Determine the private key (d)

Determine the private key 'd' such that 'd' is the multiplicative inverse of the public key 'e' modulo φ(n), i.e. d*e mod φ(n) = 1.

Encryption

Let 'm' be the message (an integer) that is to be encrypted using the public key 'e' to give the encrypted message 'c', where 'c' is calculated as c = m^e mod n.

Symmetric Encryption
In the case of symmetric encryption, the same cryptographic keys are used for encryption of the plaintext and decryption of the ciphertext [2]. Symmetric-key encryption is simpler and faster, but its main drawback is that the two users need to transfer the key between them in a secure way.

Asymmetric Encryption
In the case of asymmetric encryption, two keys are used. It is also known as Public Key Cryptography (PKC), because users employ two keys: a public key, which is known to the public, and a private key, which is known only to the user.

Decryption

The decrypted message 'm' is recovered using the private key 'd' and is calculated as m = c^d mod n.

Encrypted and Decrypted Data

HASHING ALGORITHMS

MD5 algorithm was developed by Professor Ronald L. Rivest in 1991.

According to RFC 1321, “MD5 message-digest algorithm takes as input a message

of arbitrary length and produces as output a 128-bit "fingerprint" or "message

digest" of the input …The MD5 algorithm is intended for digital signature

applications, where a large file must be "compressed" in a secure manner before

being encrypted with a private (secret) key under a public-key cryptosystem such

as RSA.”

MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash function with a 128-bit hash value. MD5 has been employed in a wide variety of security applications, and is also commonly used to check the integrity of files. An MD5 hash is typically expressed as a 32-digit hexadecimal number.

MD5 Algorithm Structure

IMPLEMENTATION STEPS:

• Step 1. Append padding bits: The input message is "padded" (extended) so that its length (in bits) is congruent to 448 mod 512. Padding is always performed, even if the length of the message is already 448 mod 512.

Padding is performed as follows: a single "1" bit is appended to the

message, and then "0" bits are appended so that the length in bits of the padded

message becomes congruent to 448 mod 512. At least one bit and at most 512 bits are

appended.

• Step 2. Append length: A 64-bit representation of the length of the message is appended to the result of step 1. If the length of the message is greater than 2^64 bits, only the low-order 64 bits of the length are used. The resulting message (after padding with the bits and with the length) has a length that is an exact multiple of 512 bits; equivalently, the message has a length that is an exact multiple of 16 (32-bit) words.

• Step 3. Initialize MD buffer: A four-word buffer (A, B, C, D) is used to compute the message digest. Each of A, B, C, D is a 32-bit register. These registers are initialized to the following values (in hexadecimal, low-order bytes first):

word A: 01 23 45 67

word B: 89 ab cd ef

word C: fe dc ba 98

word D: 76 54 32 10

• Step 4. Process message in 16-word blocks: Four functions are defined, each taking three 32-bit words as input and producing a 32-bit word as output:

F(X, Y, Z) = XY v not(X)Z

G(X, Y, Z) = XZ v Y not(Z)

H(X, Y, Z) = X xor Y xor Z

I(X, Y, Z) = Y xor (X v not(Z))

MD5 vs. MD4

• A fourth round has been added.

• Each step has a unique additive constant.

• The function g in round 2 was changed from (XY v XZ v YZ) to (XZ v Y

not(Z)).

• Each step adds in the result of the previous step.

• The order in which input words are accessed in rounds 2 and 3 is changed.

• The shift amounts in each round have been optimized. The shifts in

different rounds are distinct.

ALGORITHM: MD5 processes a variable-length message into a fixed-length output of 128 bits.

STEPS:

1. The input message is broken up into chunks of 512-bit blocks (sixteen 32-bit little-endian integers); the message is padded so that its length is divisible by 512.

2. The padding works as follows: first a single bit, 1, is appended to the end of the

message.

3. This is followed by as many zeros as are required to bring the length of the

message up to 64 bits fewer than a multiple of 512.

4. The remaining bits are filled up with a 64-bit integer representing the length of the

original message, in bits.

5. The MD5 algorithm uses 4 state variables, each of which is a 32 bit integer (an

unsigned long on most systems). These variables are sliced and diced and are

(eventually) the message digest.

The variables are initialized as follows:

A = 0x67452301

B = 0xEFCDAB89

C = 0x98BADCFE

D = 0x10325476.

6. Now on to the actual meat of the algorithm: the main part of the algorithm uses four functions to thoroughly scramble the above state variables. Those functions are as follows:

F(X,Y,Z) = (X & Y) | (~(X) & Z)

G(X,Y,Z) = (X & Z) | (Y & ~(Z))

H(X,Y,Z) = X ^ Y ^ Z

I(X,Y,Z) = Y ^ (X | ~(Z))

Where &, |, ^, and ~ are the bit-wise AND, OR, XOR, and NOT operators

7. These functions, using the state variables and the message as input, are used to transform the state variables from their initial state into what will become the message digest. For each 512-bit block of the message, a series of rounds built from these functions is performed (the reference gives pseudo-code for these rounds, which is omitted here). After this step, the message digest is stored in the state variables (A, B, C, and D). To get it into the hexadecimal form you are used to seeing, output the hex values of each of the state variables, least significant byte first. For example, if after the digest:

A = 0x01234567;

B = 0x89ABCDEF;

C = 0x1337D00D

D = 0xA5510101

Then the message digest would be: 67452301EFCDAB890DD03713010151A5

(required hash value of the input value).

• Comparing to other digest algorithms, MD5 is simple to implement, and

provides a "fingerprint" or message digest of a message of arbitrary

length.

• It performs very fast on 32-bit machines.

• MD5 is used heavily by large corporations, such as IBM and Cisco Systems, as well as by individual programmers.

• MD5 is considered one of the most efficient algorithms currently available.
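As a sketch of how such a digest is obtained in practice with the .NET class library (the input string is arbitrary):

using System;
using System.Security.Cryptography;
using System.Text;

class Md5Demo
{
    static void Main()
    {
        byte[] message = Encoding.ASCII.GetBytes("The quick brown fox");
        using (MD5 md5 = MD5.Create())
        {
            byte[] digest = md5.ComputeHash(message); // 16 bytes = 128 bits
            StringBuilder sb = new StringBuilder();
            foreach (byte b in digest)
                sb.Append(b.ToString("x2")); // the usual 32-digit hex form
            Console.WriteLine(sb.ToString());
        }
    }
}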

4. SYSTEM DESIGN

Software design sits at the technical kernel of the software engineering


process and is applied regardless of the development paradigm and area of application.
Design is the first step in the development phase for any engineered product or system. The designer's goal is to produce a model or representation of an entity that will later be built. Once system requirements have been specified and analyzed, system design is the first of the three technical activities (design, code and test) required to build and verify the software.

The importance of design can be stated with a single word: "quality". Design is the place where quality is fostered in software development. Design provides us with representations of software that we can assess for quality. Design is the only way that we can accurately translate a customer's view into a finished software product or system. Software design serves as the foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system, one that will be difficult to test and whose quality cannot be assessed until the last stage.

During design, progressive refinement of data structure, program


structure, and procedural details are developed reviewed and documented. System
design can be viewed from either technical or project management perspective. From
the technical point of view, design is comprised of four activities – architectural
design, data structure design, interface design and procedural design.

4.1 Normalization
N/A

4.2 Input design

In the input design, user-oriented inputs are converted into a computer based system
format. It also includes determining the record media, method of input, speed of
capture and entry on to the screen. Online data entry accepts commands and data
through a keyboard. The major approach to input design is the menu and the prompt
design. In each alternative, the user’s options are predefined. The data flow diagram
indicates logical data flow, data stores, source and destination. Input data are collected
and organized into a group of similar data. Once identified input media are selected for
processing.

In this software, importance is given to developing a Graphical User Interface (GUI), which is an important factor in efficient and user-friendly software. For inputting user data, attractive forms are designed. The user can also select desired options from the menu, which provides all possible facilities.

Also, the input format is designed in such a way that accidental errors are avoided. The user has to input only the minimum data required, which also helps in avoiding the errors that users may make. Accurate design of the input format is very important in developing efficient software. The goal of input design is to make data entry as easy, logical and error-free as possible.
4.3 TABLE DESIGN

INPUT               | FORMAT
Network Wi-Fi nodes | Alphanumeric
Available paths     | Alphanumeric
File                | Any file format (txt, doc, jpg, etc.)

4.4 SFD/DFD
A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. These are the central tool, and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams. The physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams. The diagrams are developed using two familiar notations: Yourdon, and Gane and Sarson. Each component in a DFD is labeled with a descriptive name. Processes are further identified with a number that will be used for identification purposes. DFDs are developed in several levels. Each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes at the first-level DFD.
The idea behind the explosion of a process into more processes is that understanding at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design.

A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying major transformations that will become programs in system design. So it is the starting point of the design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.
DFD SYMBOLS:

In the DFD, there are four symbols:

1. A square defines a source (originator) or destination of system data.

2. An arrow identifies data flow; it is the pipeline through which information flows.

3. A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.

4. An open rectangle is a data store: data at rest, or a temporary repository of data.

CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFDs:

1. Processes should be named and numbered for easy reference. Each name should be representative of the process.

2. The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source. An alternative way is to repeat the source symbol as a destination; since it is used more than once in the DFD, it is marked with a short diagonal.
3. When a process is exploded into lower level details, they are numbered.

4. The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of a data store. Each data store should contain all the data elements that flow in and out. Missing interfaces, redundancies and the like are then accounted for, often through interviews.

SALIENT FEATURES OF DFDS


1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process: whether the data flow takes place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS


1. Current Physical
2. Current Logical
3. New Logical
4. New Physical

CURRENT PHYSICAL:
In a current physical DFD, process labels include the names of people or their positions, or the names of computer systems, that might provide some of the overall system processing; the label includes an identification of the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms or computer tapes.
CURRENT LOGICAL:

The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of their actual physical form.
NEW LOGICAL:
This is exactly like the current logical model if the user is completely happy with the functionality of the current system but has problems with how it is implemented. Typically, the new logical model will differ from the current logical model in having additional functions, obsolete functions removed, and inefficient flows reorganized.
NEW PHYSICAL:
The new physical represents only the physical implementation of the new
system.

RULES GOVERNING THE DFD’S

PROCESS
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs then it must be a sink.
3) A process has a verb phrase label.

DATA STORE
1) Data cannot move directly from one data store to another data store, a process must
move data.
2) Data cannot move directly from an outside source to a data store, a process, which
receives, must move data from the source and place the data into data store
3) A data store has a noun phrase label.
SOURCE OR SINK
The origin and/or destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.

DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update; the latter is usually indicated, however, by two separate arrows, since these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data into the beginning process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.
A data flow has a noun phrase label. More than one data flow noun phrase can appear on a single arrow, as long as all of the flows on the same arrow move together as one package.
Data Flow Diagram

Level 0

Level 1

Level 2

Level 3

Class Diagram :

Use Case

5. SYSTEM DESCRIPTION

Modules

• Authentication

• Source

• Find Hackers and Denied services

• Destination

Modules Description

• Authentication

This module registers new users; previously registered users can log in to the application. Only registered users can enter the proposed process of the project; other users can only view the existing system.

• Source

The source module lets the user select the file to be transmitted to the receiver side.

• Find Hackers and Denied services

Before transmitting, the source checks for hackers; if any hackers are found, their access is blocked and their requests are denied.

• Destination

In this module the user transmits the file to the receiver in the network.

6. TESTING AND IMPLEMENTATION

Testing is a process of executing a program with the intent of finding an error. Testing is a crucial element of software quality assurance and represents the ultimate review of specification, design and coding.

System Testing is an important phase. Testing represents an interesting

anomaly for the software. Thus a series of testing are performed for the proposed

system before the system is ready for user acceptance testing.

A good test case is one that has a high probability of finding an as-yet-undiscovered error. A successful test is one that uncovers such an error.

Testing Objectives:

➢ Testing is a process of executing a program with the intent of finding an error

➢ A good test case is one that has a probability of finding an as yet undiscovered

error

➢ A successful test is one that uncovers an undiscovered error

Testing Principles:

➢ All tests should be traceable to end user requirements

➢ Tests should be planned long before testing begins

➢ Testing should begin on a small scale and progress towards testing in large

➢ Exhaustive testing is not possible

➢ To be most effective, testing should be conducted by an independent third party

The primary objective of test case design is to derive a set of tests that has the highest likelihood of uncovering defects in software. To accomplish this objective, two different categories of test case design techniques are used. They are:

➢ White box testing.

➢ Black box testing.

White-box Testing:

White box testing focus on the program control structure. Test cases are

derived to ensure that all statements in the program have been executed at least once

during testing and that all logical conditions have been executed.

Black-box Testing:

Black box testing is designed to validate functional requirements without

regard to the internal workings of a program. Black box testing mainly focuses on the

information domain of the software, deriving test cases by partitioning input and output in a manner that provides thorough test coverage. Incorrect and missing

functions, interface errors, errors in data structures, error in functional logic are the

errors falling in this category.

Testing Strategies:

A strategy for software testing must accommodate low-level tests that are

necessary to verify that all small source code segments have been correctly implemented, as well as high-level tests that validate major system functions against

customer requirements.

Testing Fundamentals:

Testing is a process of executing a program with the intent of finding errors. A good test case is one that has a high probability of finding an undiscovered error. If testing is conducted successfully, it uncovers the errors in the software. Testing cannot show the absence of defects; it can only show that software defects are present.

Testing Information Flow:

Information flow for testing follows a standard pattern. Two classes of input are provided to the test process. The software configuration includes a software requirements specification, a design specification and source code.

The test configuration includes a test plan, test cases and test tools. Tests are conducted and all the results are evaluated: test results are compared with expected results. When erroneous data are uncovered, an error is implied and debugging commences.

Unit Testing:

Unit testing is essential for the verification of the code produced during the

coding phase and hence the goal is to test the internal logic of the modules. Using the

detailed design description as a guide, important paths are tested to uncover errors

within the boundary of the modules. These tests were carried out during the

programming stage itself. All units of the system were successfully tested.

Integration Testing:

Integration testing focuses on unit tested modules and build the program

structure that is dictated by the design phase.

System Testing:

System testing tests the integration of each module in the system. It also tests to find discrepancies between the system and its original objective, current specification and system documentation. The primary concern is the compatibility of individual modules. Whether the entire system works properly is tested here, along with whether the specified path and ODBC connection are correct and whether output is produced. These verifications and validations are done by giving input values to the system and comparing the results with the expected output. Top-down testing is implemented here.

Acceptance Testing:

This testing is done to verify the readiness of the system for the

implementation. Acceptance testing begins when the system is complete. Its purpose is

to provide the end user with the confidence that the system is ready for use. It involves

planning and execution of functional tests, performance tests and stress tests in order to demonstrate that the implemented system satisfies its requirements. Tools of special importance during acceptance testing include:

Test coverage Analyzer – records the control paths followed for each test case.

Timing Analyzer – also called a profiler; reports the time spent in various regions of the code, which are the areas to concentrate on to improve system performance.

Coding standards – static analyzers and standard checkers are used to inspect code for

deviations from standards and guidelines.

System testing makes a logical assumption that if all parts of the

system are correct, the goal will be successfully achieved. System testing is utilized as a user-oriented vehicle before implementation. Programs are invariably related to one

another and interact in a total system. Each portion of the system is tested to see

whether it conforms to related programs in the system. Each portion of the entire

system is tested against the entire module with both test and live data before the entire

system is ready to be tested.

The first test of a system is to see whether it produces correct output. The

other tests that are conducted are:

1. Online - Response

When the mouse is clicked on the router the statistics of the router for the

selected algorithm have to be displayed on the screen. The router must crash immediately when the "Down" button is clicked in the popup menu.

2. Stress Testing

The purpose of stress testing is to prove that the system does not

malfunction under peak loads. In the simulator we test it with the greater number of

nodes and getting the correct results for each and every router applying different

routing algorithms. All the routers are purposely crashed to generate a peak load

condition and the working is tested.

3. Usability Documentation and Procedure

The usability test verifies the user-friendly nature of the system. The user is asked to use only the documentation and procedures as a guide, to determine whether the system can be run smoothly.

Test case:

Condition   | Message                     | Reason                        | Pass/Fail
Source file | File selected successfully  | Paths are available           | Pass
Source file | Path not found              | Paths are not available       | Fail
Receiver    | Stopped                     | Connections are not available | Fail
Receiver    | Running successfully        | Connections are available     | Pass
Receiver    | IP already in use           | Another receiver is running   | Pass
Receiver    | IP is not in use            | Receiver is waiting           | Fail

Implementation

The term implementation has different meanings, ranging from the conversion of a basic application to a complete replacement of a computer system. The procedures, however, are virtually the same. Implementation includes all those activities that take place to convert from the old system to the new.

The new system may be totally new, replacing an existing manual or automated system, or it may be a major modification to an existing system. The method of implementation and the time scale to be adopted are found out initially. Next, the system is tested properly, and at the same time the users are trained in the new procedures. Proper implementation is essential to provide a reliable system that meets organization requirements.
Successful and efficient utilization of the system can be achieved only through proper implementation of the system in the organization. So the implementation phase is as important as the other phases such as analysis, design, coding and testing.
• Careful planning

• Investigation of the system and its constraints

• Design the methods to achieve the change over

• Training the staff in the changed phase

• Ensuring the user has understood and accepted the changes

Getting complete feedback during the test run and ensuring everything is perfect for the final changeover.

Sender

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Diagnostics;
using System.Net.NetworkInformation;
using System.Data.SqlClient;

namespace TADR
{
public partial class Form1 : Form
{
public delegate void MyPing(int id);
public const string tempadd = @"C:\WINDOWS\Temp";
public Form1()
{
InitializeComponent();
}

private void button1_Click(object sender, EventArgs e)

{
openFileDialog1.ShowDialog();
label2.Text = openFileDialog1.FileName;
}

private void button2_Click(object sender, EventArgs e)


{
int c;
string inputFile = openFileDialog1.FileName; // substitute this with your input file
FileStream fs = new FileStream(inputFile, FileMode.Open, FileAccess.Read);
if (fs.Length > 500)
c = 10;
else
c = 5;
int numberOfFiles =c;
int sizeOfEachFile = (int)Math.Ceiling((double)fs.Length / numberOfFiles);

for (int i = 1; i <= numberOfFiles; i++)


{
string baseFileName = Path.GetFileNameWithoutExtension(inputFile);
string extension = Path.GetExtension(inputFile);
FileStream outputFile = new FileStream(tempadd + "\\" + baseFileName +
"." + i.ToString().PadLeft(5, Convert.ToChar("0")) + extension + ".pak",
FileMode.Create, FileAccess.Write);
int bytesRead = 0;
byte[] buffer = new byte[sizeOfEachFile];

if ((bytesRead = fs.Read(buffer, 0, sizeOfEachFile)) > 0)

{
outputFile.Write(buffer, 0, bytesRead);
}
outputFile.Close();
}
fs.Close();
backgroundWorker1.RunWorkerAsync();
string s = openFileDialog1.FileName.ToString();
//label1.Visible = true;
FolderBrowserDialog ff = new FolderBrowserDialog();
sys.SendFile(s);
label2.Visible = false;
}

private void Form1_Load(object sender, EventArgs e)


{
string myipsplit = string.Empty;
string localhostname = Dns.GetHostName();
IPAddress[] paddresses = Dns.GetHostAddresses("localhost");
string myip = paddresses.Where(ip => ip.AddressFamily ==
AddressFamily.InterNetwork).FirstOrDefault().ToString();
string[] myiparray = myip.Split(new[] { '.' });
for (int j = 1; j < myiparray.Length; j++)
myipsplit += myiparray[j - 1] + ".";
Trace.WriteLine(DateTime.Now);
var results = new string[100];
MyPing ping =
id =>

{
string ls = myipsplit + id;
var pingSender = new Ping();
PingReply reply = pingSender.Send(ls, 100);
if (reply != null)
if (reply.Status == IPStatus.Success)
results[id] = reply.Address.ToString();
};
var asyncResults = new IAsyncResult[100];
for (int i = 1; i < 100; i++)
{
asyncResults[i] = ping.BeginInvoke(i, null, null);
}
for (int i = 1; i < 100; i++)
{
ping.EndInvoke(asyncResults[i]);
}
Trace.WriteLine(DateTime.Now);
var sb = new StringBuilder();

System.DirectoryServices.DirectoryEntry entryPC;
entryPC = new System.DirectoryServices.DirectoryEntry();
entryPC.Path = "WinNT://Workgroup";
for (int i = 1; i < 100; i++)
{
if (results[i] != null)
listView1.Items.Add(results[i]);
sb.AppendFormat("{0}", results[i]);
}

listView1.Items.Clear();
for (int i = 1; i < 7; i++)
{
listView1.Items.Add("192.168.1." + i).ImageIndex = 1;
}

private void timer1_Tick(object sender, EventArgs e)


{
label2.Text = attackdenied.curMsg;
// label1.Visible = true;
label2.Visible = true;
}
class sys
{
public static string curMsg;
public static void SendFile(string fileName)
{
try
{

{
IPAddress[] ipAddress = Dns.GetHostAddresses("localhost");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1000);

Socket cliendSock = new Socket(AddressFamily.InterNetwork,


SocketType.Stream, ProtocolType.IP);
string filePath = "";

fileName = fileName.Replace("\\", "/");
while (fileName.IndexOf("/") > -1)
{
filePath += fileName.Substring(0, fileName.IndexOf("/") + 1);
fileName = fileName.Substring(fileName.IndexOf("/") + 1);
}
byte[] fileNameByte = Encoding.ASCII.GetBytes(fileName);
// refuse files larger than 850 KB
if (new FileInfo(filePath + fileName).Length > 850 * 1024)
{
curMsg = "File size is more than 850 KB, please try a smaller file.";
return;
}
curMsg = "Buffering ...";
byte[] fileData = File.ReadAllBytes(filePath + fileName);
byte[] clientData = new byte[4 + fileNameByte.Length +
fileData.Length];
byte[] fileNameLen = BitConverter.GetBytes(fileNameByte.Length);
fileNameLen.CopyTo(clientData, 0);
fileNameByte.CopyTo(clientData, 4);
fileData.CopyTo(clientData, 4 + fileNameByte.Length);

curMsg = "Connection to server ...";


cliendSock.Connect(ipEnd);
curMsg = "File sending...";
cliendSock.Send(clientData);
curMsg = "Disconnecting...";
cliendSock.Close();
curMsg = "File transferred.";
}

}

catch (Exception ex)


{
if (ex.Message == "No connection could be made because the target
machine actively refused it")
curMsg = "File Sending fail. Because server not running.";
else
{
curMsg = "File Sending fail." + ex.Message;
}
}

}
}
class attackdenied
{

public static string curMsg;


public static void requist()
{
try
{
IPAddress[] ipAddress = Dns.GetHostAddresses("localhost");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 1001);
Socket cliendSock = new Socket(AddressFamily.InterNetwork,
SocketType.Stream, ProtocolType.IP);

cliendSock.Connect(ipEnd);

cliendSock.Close();
curMsg = "Attacker Blocked";
}
catch (Exception ex)
{
MessageBox.Show(ex.Message, "Information", MessageBoxButtons.OK,
MessageBoxIcon.Exclamation);
}
}
}

private void button3_Click(object sender, EventArgs e)


{
listView1.Items.Remove(listView1.SelectedItems[0]);
}

private void button2_Click_1(object sender, EventArgs e)


{
int c;
string inputFile = openFileDialog1.FileName; // substitute this with your input file
FileStream fs = new FileStream(inputFile, FileMode.Open, FileAccess.Read);
if (fs.Length > 500)
c = 10;
else
c = 5;

103
int numberOfFiles = c;
int sizeOfEachFile = (int)Math.Ceiling((double)fs.Length / numberOfFiles);

for (int i = 1; i <= numberOfFiles; i++)


{
string baseFileName = Path.GetFileNameWithoutExtension(inputFile);
string extension = Path.GetExtension(inputFile);
FileStream outputFile = new FileStream(tempadd + "\\" + baseFileName +
"." + i.ToString().PadLeft(5, Convert.ToChar("0")) + extension + ".pak",
FileMode.Create, FileAccess.Write);
int bytesRead = 0;
byte[] buffer = new byte[sizeOfEachFile];

if ((bytesRead = fs.Read(buffer, 0, sizeOfEachFile)) > 0)


{
outputFile.Write(buffer, 0, bytesRead);
}
outputFile.Close();
}
fs.Close();
backgroundWorker1.RunWorkerAsync();
string s = openFileDialog1.FileName.ToString();
//label1.Visible = true;
FolderBrowserDialog ff = new FolderBrowserDialog();
sys.SendFile(s);
label2.Visible = false;
}
}
}

104
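As a worked example of the split arithmetic: a 1,234-byte file exceeds the 500-byte threshold, so numberOfFiles is 10 and sizeOfEachFile = ceil(1234 / 10) = 124. The first nine chunks carry 124 bytes each, and the final read returns only the remaining 118 bytes, since fs.Read stops at end of file.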
Receiver
namespace receiver
{
    public partial class Form1 : Form
    {
        reciver1 obj = new reciver1();

        public Form1()
        {
            InitializeComponent();
            reciver1.receivedPath = "";
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            // Default folder for received files.
            reciver1.receivedPath = @"C:\";
        }

        private void button1_Click(object sender, EventArgs e)
        {
            // Body omitted in the source listing; presumably this lets the
            // user choose reciver1.receivedPath via a folder dialog.
        }

        private void button2_Click(object sender, EventArgs e)
        {
            // Start the listening server on a background thread once a
            // receive path has been set.
            if (reciver1.receivedPath.Length > 0)
                backgroundWorker1.RunWorkerAsync();
            else
                MessageBox.Show("Please select file receiving path",
                    "Information", MessageBoxButtons.OK,
                    MessageBoxIcon.Information);
        }

        private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            obj.StartServer();
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            // Poll the server status for display.
            label4.Text = reciver1.curMsg;
        }
class reciver1
{
    IPEndPoint ipEnd;
    Socket sock;

    public reciver1()
    {
        // Bind a TCP socket to port 1000 on the local machine. (The
        // original declared a local ipEnd here, shadowing the field;
        // the field is assigned instead.)
        IPAddress[] ipAddress = Dns.GetHostAddresses("localhost");
        ipEnd = new IPEndPoint(ipAddress[0], 1000);
        sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
            ProtocolType.IP);
        sock.Bind(ipEnd);
    }

    public static string fileName;
    public static string receivedPath;
    public static string curMsg = "Stopped";
    public void StartServer()
    {
        try
        {
            curMsg = "Starting...";
            sock.Listen(100);
            curMsg = "Running and waiting to receive file.";
            Socket clientSock = sock.Accept();

            // Single read into a roughly 5 MB buffer; see the note after
            // this class for a loop that handles larger transfers.
            byte[] clientData = new byte[1024 * 5000];
            int receivedBytesLen = clientSock.Receive(clientData);
            curMsg = "Receiving data...";

            // Unpack the frame: 4-byte name length, then the name,
            // then the file payload.
            int fileNameLen = BitConverter.ToInt32(clientData, 0);
            fileName = Encoding.ASCII.GetString(clientData, 4, fileNameLen);
            BinaryWriter bWrite = new BinaryWriter(File.Open(receivedPath +
                "/" + fileName, FileMode.Append));
            bWrite.Write(clientData, 4 + fileNameLen,
                receivedBytesLen - 4 - fileNameLen);
            curMsg = "Saving file...";
            bWrite.Close();

            // Split the saved file into .pak chunks, then discard the original.
            c();
            File.Delete(receivedPath + "/" + fileName);
            clientSock.Close();
            curMsg = "Received & saved file.";

            StreamWriter sw = new StreamWriter(@"C:\selectfile.txt");
            sw.WriteLine(reciver1.receivedPath + "\\" + fileName);
            sw.Close();
            System.Diagnostics.Process.Start(@"G:\PROJ\Traffic Aware Dynamic Routing to Alleviate Congestion in wireless Sensors Networks\receiver\receiver\monitoroff.exe");
        }
        catch (Exception ex)
        {
            curMsg = "File receiving error. " + ex;
        }
    }
    public void c()
    {
        // Re-split the received file into 5 or 10 ".pak" chunks, mirroring
        // the sender's split logic.
        const string tempadd = @"C:\";
        int c;
        string inputFile = reciver1.receivedPath + "/" + reciver1.fileName;
        FileStream fs = new FileStream(inputFile, FileMode.Open,
            FileAccess.Read);
        if (fs.Length > 500)
            c = 10;
        else
            c = 5;
        int numberOfFiles = c;
        int sizeOfEachFile = (int)Math.Ceiling((double)fs.Length / numberOfFiles);

        for (int i = 1; i <= numberOfFiles; i++)
        {
            string baseFileName = Path.GetFileNameWithoutExtension(inputFile);
            string extension = Path.GetExtension(inputFile);
            FileStream outputFile = new FileStream(tempadd + "\\" + baseFileName +
                "." + i.ToString().PadLeft(5, '0') + extension + ".pak",
                FileMode.Create, FileAccess.Write);
            int bytesRead = 0;
            byte[] buffer = new byte[sizeOfEachFile];
            if ((bytesRead = fs.Read(buffer, 0, sizeOfEachFile)) > 0)
            {
                outputFile.Write(buffer, 0, bytesRead);
            }
            outputFile.Close();
        }
        fs.Close();
    }
}
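
One caveat in StartServer: a single Socket.Receive call is not guaranteed to return the whole transfer in one read. A minimal sketch of a safer loop, assuming the same clientSock and clientData buffer as above, would be:

// Keep reading until the sender closes its half of the connection.
int total = 0, read;
while ((read = clientSock.Receive(clientData, total,
        clientData.Length - total, SocketFlags.None)) > 0)
    total += read;
// 'total' then plays the role of receivedBytesLen above.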

private void button3_Click(object sender, EventArgs e)
{
    this.Close();
}

private void groupBox1_Enter(object sender, EventArgs e)
{
}
private void button1_Click_1(object sender, EventArgs e)
{
    // Reassemble the ".pak" chunks in C:\ back into the original file.
    const string tempadd = @"C:\";
    string outPath = tempadd;
    string[] tmpFiles = Directory.GetFiles(tempadd, "*.pak");
    // Directory.GetFiles does not guarantee ordering, so sort to keep
    // the chunks (.00001, .00002, ...) in sequence.
    Array.Sort(tmpFiles);
    FileStream outputFile = null;
    string prevFileName = "";

    foreach (string tempFile in tmpFiles)
    {
        // "base.00001.txt.pak" yields baseFileName "base", extension ".txt".
        string fileName = Path.GetFileNameWithoutExtension(tempFile);
        string baseFileName = fileName.Substring(0, fileName.IndexOf('.'));
        string extension = Path.GetExtension(fileName);

        // Start a new output file whenever the base name changes.
        if (!prevFileName.Equals(baseFileName))
        {
            if (outputFile != null)
            {
                outputFile.Flush();
                outputFile.Close();
            }
            outputFile = new FileStream(outPath + baseFileName + extension,
                FileMode.OpenOrCreate, FileAccess.Write);
        }

        // Append this chunk to the output, then delete it.
        int bytesRead = 0;
        byte[] buffer = new byte[1024];
        FileStream inputTempFile = new FileStream(tempFile,
            FileMode.Open, FileAccess.Read);
        while ((bytesRead = inputTempFile.Read(buffer, 0, 1024)) > 0)
            outputFile.Write(buffer, 0, bytesRead);
        inputTempFile.Close();
        File.Delete(tempFile);
        prevFileName = baseFileName;
    }
    if (outputFile != null)
        outputFile.Close();
}
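For example, after a transfer of report.txt the folder holds report.00001.txt.pak through report.00010.txt.pak; the handler strips the .pak extension, takes "report" as the base name and ".txt" as the extension, and appends the chunks in sorted order to produce C:\report.txt.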
private void pictureBox2_Click(object sender, EventArgs e)
{
}

private void label3_Click(object sender, EventArgs e)
{
}
}
}
7. CONCLUSION AND FUTURE SCOPE

A CFA scheme, which can be viewed as a hash function that directly supports
en-route filtering, was proposed. Building on CFA, a CFAEF scheme was
constructed to defend simultaneously against false data injection, PDoS, and
FEDoS attacks. Theoretical and numerical analyses were provided to demonstrate
the efficiency and effectiveness of CFAEF.
8. FORMS AND REPORTS

Sender

This form shows the sender, which sends files to the receiver.

Receiver

This form shows the receiver page, which receives files from the sender.

Attack

This form shows the attack screen, where the sender identifies the attacker
by IP address before sending files.

Sender and Receiver

This form shows the attacker, along with the message prompting the sender to
detect the attack before the files are sent.

Attackers Found

This form shows the sender detecting the attacker while sending files.

System Block

This form shows the message prompting the sender to detect the attacker,
after which the attacking system is blocked.
9. BIBLIOGRAPHY

[1] T. C. Aysal and K. E. Barner, "Sensor data cryptography in wireless sensor networks," IEEE Trans. Inf. Forensics Security, vol. 3, no. 2, pp. 273–289, Jun. 2008.

[2] M. Albrecht, C. Gentry, S. Halevi, and J. Katz, "Attacking cryptographic schemes based on 'perturbation polynomials'," in Proc. ACM Conf. Computer and Communications Security (CCS), Chicago, IL, 2010.

[3] C. Blundo, A. De Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung, "Perfectly-secure key distribution for dynamic conferences," in Proc. Int. Cryptology Conf. (CRYPTO), Santa Barbara, CA, 1993.

[4] M. Cagalj, S. Capkun, and J.-P. Hubaux, "Wormhole-based antijamming techniques in sensor networks," IEEE Trans. Mobile Comput., vol. 6, no. 1, pp. 100–114, Jan. 2007.

[5] M. Ceberio and V. Kreinovich, "Greedy algorithms for optimizing multivariate Horner schemes," ACM SIGSAM Bull., vol. 38, no. 1, pp. 8–15, 2004.