
1. INTRODUCTION

1.1 ABOUT THE PROJECT

A mobile ad hoc network (MANET) is a collection of mobile wireless nodes that
establish communication without any centralized control or fixed infrastructure. Since the
radio transmission range of each node is limited, a packet may be forwarded over multiple
hops to reach its destination. This limitation also introduces the potential for spatial channel
reuse. Most medium access control (MAC) protocols attempt to exploit this potential in order
to minimize delay and maximize throughput on a per-hop basis. Scheduled approaches to
channel access provide deterministic rather than probabilistic delay guarantees, which is
important for applications sensitive to maximum delay. Furthermore, the control overhead
and carrier sensing associated with contention MAC protocols can be considerable in terms
of time and energy. The challenge with scheduling is to achieve a reasonable throughput
objective. Two approaches have emerged to exploit spatial reuse in response to topology
changes. Topology-dependent protocols alternate between a contention phase, in which
neighbour information is collected, and a scheduled phase, in which nodes follow a schedule
constructed using the neighbour information. In contrast, the idea in topology-transparent
protocols is to design schedules that are independent of the detailed network topology.
Specifically, the schedules do not depend on the identity of a node’s neighbours, but rather on
how many of them are transmitting. Even if a node’s neighbours change, its schedule does
not; if the number of neighbours does not exceed the designed bound, then the schedule still
succeeds. Chlamtac et al. gave a construction based on Galois fields and finite geometries, using the
algebraic property that polynomials of bounded degree cannot have many roots in common;
informally, their intersection is small.
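To make the intersection property concrete, here is a minimal sketch in Python of a Galois-field schedule construction of this flavour. The prime p, the degree bound k, and the slot mapping are illustrative assumptions, not the design values used in the literature.

```python
from itertools import product

# Illustrative parameters (assumptions): a prime p and a degree bound k.
# A frame consists of p subframes of p slots each.
p, k = 5, 1

def schedule(coeffs):
    """Map a degree-<=k polynomial f over GF(p) to its transmit slots:
    the node transmits in slot f(i) of subframe i (global slot i*p + f(i))."""
    return {i * p + sum(c * i ** e for e, c in enumerate(coeffs)) % p
            for i in range(p)}

# Two distinct polynomials of degree <= k agree on at most k points of GF(p),
# so any two schedules collide in at most k of the p subframes.
polys = list(product(range(p), repeat=k + 1))
worst = max(len(schedule(a) & schedule(b))
            for a in polys for b in polys if a != b)
print(worst)  # 1, i.e. at most k collisions per frame
```

Each node gets one slot per subframe, and any two distinct schedules share at most k slots per frame, which is exactly the "small intersection" property described above.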

This project maintains its records in five modules:

1. Login
2. Node accessing
3. Sending Files
4. Receiving Files
5. Error Correction

The network administrator can log in on the server and access all the files in the network.
The administrator can set user priorities, which is useful for finding which nodes are ready to
receive data.

Objective

The main objective of this project is to make two contributions towards the practical deployment of
topology-transparent scheduling. 1) We generalize the combinatorial requirement on
topology-transparent schedules and establish that the solution is a well-known object called a
cover-free family. Thus a wealth of combinatorial tools is available for schedule construction.
2) We demonstrate, via simulation for both static and mobile ad hoc networks, that the
expected throughput using rateless forward error correction (RFEC) closely matches the
theoretical bound that assumes immediate feedback availability. Thus unicast can be
effectively implemented with low computational overhead.

The schedules derived from the polynomials share the same intersection property and
do not overlap in too many slots. In their scheme, if a node has at most a bounded number of
neighbours, there is at least one collision-free slot to each neighbour within a frame. Their focus was on parameters to
minimize schedule length. Ju et al. argued that the parameters satisfying the condition on
delay do not maximize the minimum throughput. They showed it is possible to achieve
higher minimum throughput at the expense of longer frame length. Intuitively, while
Chlamtac et al. strive to get one free slot to each neighbour per frame, Ju et al. aim to get many
slots to the same neighbour per frame. There are complex trade-offs between the design
parameters and the delay and throughput characteristics of the resulting schedules. Since its
introduction, topology-transparent scheduling has remained a theoretical curiosity. The
reasons for this relate to the following assumptions: 1) suitable design parameters (the number
of nodes and the maximum node degree) can be selected; 2) given these parameters, a
construction for schedules exists; 3) a method for frame synchronization exists; 4) a method to
distribute schedules to nodes exists; 5) the neighbourhood bound is not exceeded; and 6) feedback
on the outcome of a slot is available at the end of that slot (for unicast, this is required by the
transmitter to decide whether to retransmit the packet). While it is arguable whether the
assumptions are strong or weak, the final assumption, which underlies the analysis, is wrong
for a MANET. Only the receiver, and not the transmitter, can determine the outcome of a
transmission. Hence, if the transmitter is to know the outcome, it must gain this knowledge
from the receiver. We make two contributions towards the practical

deployment of topology-transparent scheduling. We generalize the combinatorial
requirement on topology-transparent schedules and establish that the solution is a well-known
object called a cover-free family. Thus a wealth of combinatorial tools is available for
schedule construction. We demonstrate, via simulation for both static and mobile ad hoc
networks, that the expected throughput using rateless forward error correction (RFEC)
closely matches the theoretical bound that assumes immediate feedback availability. Thus
unicast can be effectively implemented with low computational overhead. We use LT coding,
an RFEC scheme that does not require knowledge of the loss rate on the channel. It permits
fast encoding and decoding algorithms; decoding succeeds in recovering the original
message once an amount of data only marginally larger than the original data is received. For
our purposes, any scheme offering these features would suffice. In fact, there are a number of
schemes that could be used; see Richardson and Urbanke for a thorough discussion of
low-density parity-check (LDPC) codes.
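As a hedged illustration of the rateless idea (not the project's actual implementation, and using a uniform degree distribution rather than LT's robust soliton distribution), a toy encoder and peeling decoder over small integer blocks might look like this:

```python
import random

def lt_encode(blocks, n_packets, seed=0):
    """Produce n_packets encoded packets as (source-index set, XOR of blocks).
    Assumption: degrees drawn uniformly for simplicity; real LT coding uses
    the robust soliton distribution for its decoding guarantees."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        idxs = set(rng.sample(range(k), rng.randint(1, k)))
        val = 0
        for i in idxs:
            val ^= blocks[i]
        packets.append((idxs, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve packets with one unknown block."""
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idxs, val in packets:
            pending = idxs - known.keys()
            if len(pending) == 1:
                for j in idxs & known.keys():
                    val ^= known[j]      # strip already-recovered blocks
                known[pending.pop()] = val
                progress = True
    return [known.get(i) for i in range(k)]

# A hand-built packet stream where peeling clearly succeeds:
pkts = [({0}, 5), ({0, 1}, 5 ^ 7), ({1, 2}, 7 ^ 9), ({0, 1, 2, 3}, 5 ^ 7 ^ 9 ^ 11)]
print(lt_decode(pkts, 4))  # [5, 7, 9, 11]
```

The key rateless property this sketches is that the receiver only needs *enough* packets, not any particular ones, so the transmitter never needs per-packet feedback.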

In particular, in cases with bit flips or low loss rates, coding schemes other than LT, such
as Raptor codes, may be more suitable. In our context, it is assumed that bit errors are
corrected by the protocols independently of collisions. Moreover, collisions, which are the
only cause of erasures, are frequent events. Hence, while LT may not be the best selection, it is
anticipated to be a reasonable choice. Naturally, better schemes would only improve the
performance of RFEC. The rest of this paper is organized as follows. Section II defines a
cover-free family and examines orthogonal arrays as an important class of this family. We
also derive the bound on expected throughput. Section III discusses acknowledgment
schemes including RFEC for this purpose and overviews the LT process. Section IV
describes an experiment that makes a direct comparison between the proposed RFEC scheme
and the ideal scheme in which the transmitter is omniscient, in the sense that it receives
acknowledgments instantaneously. Comparisons for achieved delay and throughput are
presented. Section V addresses the greatest challenge for scheduling in dynamic
environments, namely adaptation to changing network conditions. Finally in Section VI, we
examine the potential use of topology transparent schemes in light of the practical
acknowledgment scheme developed and discuss remaining limitations.

2. SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

The existing topology-transparent protocols depend on two parameters: the number of
nodes in the network and the maximum node degree. The construction is based on Galois
fields and finite geometries, using the algebraic property that polynomials of bounded degree
cannot have many roots in common; informally, their intersection is small.
The schedules derived from the polynomials share the same intersection property and
do not overlap in too many slots. In their scheme, if a node has at most a bounded number of
neighbours, there is at least one collision-free slot to each neighbour within a frame. Their
focus was on parameters to minimize schedule length.

2.1.1. Drawbacks

 Topology-transparent scheduling has suffered from many drawbacks.
 Protocols attempt to exploit spatial reuse in order to minimize delay and maximize
throughput on a per-hop basis.
 A malfunction on any given node will affect the whole system.
 A drawback of active replication is that, in practice, most real-world servers are
non‐deterministic.

2.2 PROPOSED SYSTEM
In the proposed system, we make two contributions towards the practical deployment
of topology-transparent scheduling: 1) we generalize the combinatorial requirement on
topology-transparent schedules and establish that the solution is a well-known object called a
cover-free family, so that a wealth of combinatorial tools is available for schedule construction;
2) we demonstrate, via simulation for both static and mobile ad hoc networks, that the
expected throughput using Rateless Forward Error Correction (RFEC) closely matches the
theoretical bound that assumes immediate feedback availability. Thus unicast can be
effectively implemented with low computational overhead. We have established that the
combinatorial construction of such schemes can be done much more generally than
previously suggested. The combinatorial characterization leads not only to more general
construction schemes but also to analytic results suggesting that topology-transparent
schemes retain strong throughput and delay performance even in an environment with
neighbourhoods larger than anticipated.

2.2.1 Advantages

 Retransmission of data can often be avoided, at the cost of higher average bandwidth
requirements.
 An advantage of forward error correction is that a back-channel is not required.
 The expected throughput closely matches the theoretical bound that assumes immediate
feedback availability.

2.3 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put
forth with a very general plan for the project and some cost estimates. During system analysis,
the feasibility study of the proposed system is to be carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are:

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

2.3.1 ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on
the organization. The amount of funding that the company can pour into the research and
development of the system is limited, so the expenditures must be justified. The developed
system was well within budget, mainly because most of the technologies used are freely
available; only the customized products had to be purchased.

2.3.2 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the
available technical resources, as this would lead to high demands being placed on the client.

2.3.3 SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user.
This includes the process of training the user to use the system efficiently. The user must not
feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by
the users depends solely on the methods that are employed to educate the user about the
system and to make the user familiar with it.

3. SYSTEM SPECIFICATION

3.1 HARDWARE REQUIREMENTS

This section gives the details and specification of the hardware on which the system
is expected to work.

PROCESSOR : PENTIUM III 866 MHz

RAM : 128 MB SDRAM

MONITOR : 15” COLOR

HARD DISK : 20 GB

KEYBOARD : STANDARD 102 KEYS

MOUSE : 3 BUTTONS

3.2 SOFTWARE REQUIREMENTS

Software programs are designed to run on personal computers, and the software
specification is an important part of the system specification.

OPERATING SYSTEM : WINDOWS 2000 PROFESSIONAL


ENVIRONMENT : VISUAL STUDIO .NET 2005
.NET FRAMEWORK : VERSION 2.0
FRONT END : VB.NET
WEB TECHNOLOGY : ACTIVE SERVER PAGES.NET
BACK END : SQLSERVER 2005

3.3 MODULE DESCRIPTION

Login

The network administrator can log in on the server and access all the files in the network.
The administrator can set user priorities.

Node access

In this module we find which nodes are active and which are inactive. This is
useful for finding which nodes are ready to receive data.
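One plausible way to probe node activity (an illustrative assumption; the project's actual mechanism is not specified, and the addresses and port below are placeholders) is a TCP connection attempt to a known port:

```python
import socket

def is_active(host, port, timeout=0.5):
    """Return True if a TCP connection to (host, port) succeeds within
    timeout. Assumes each node listens on a known port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical node table: name -> (address, port)
nodes = {"node-a": ("192.0.2.10", 9000), "node-b": ("192.0.2.11", 9000)}
active = [name for name, addr in nodes.items() if is_active(*addr)]
```

A node that accepts the connection is marked active; a timeout or refused connection marks it inactive.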

Sending Data

Data can be sent from one node to another if the receiving node is active. The sender
calculates the number of lines in the file and keeps the count in a separate file for error
correction. The sender also calculates the size of the file in order to estimate the transfer time.

Receiving Files

A node can receive data if it is active, and the receiving node calculates the number of
lines, which is kept in a separate file for error correction. If the file is received correctly,
the receiver sends back an acknowledgement of receipt.

Error Correction

The sending node compares the files recorded by the sender and the receiver. If the
receiver is not connected, the sender checks its state for a certain period; if the system becomes
connected before the period expires, the sender resends the file, and if not, it displays a message
that the system is not connected. Finally, the time taken to send the file is displayed in a graph.
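The line-count comparison and the connect-and-retry behaviour described above can be sketched as follows (the function names, retry period, and polling interval are illustrative assumptions):

```python
import time

def count_lines(path):
    """Line count kept in a separate file on each side for error correction."""
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

def transfer_ok(sender_count, receiver_count):
    """The sender compares its recorded count with the receiver's count."""
    return sender_count == receiver_count

def send_with_retry(send_fn, is_connected, period=30.0, interval=0.5):
    """Poll the receiver's state for up to `period` seconds; resend when it
    connects, otherwise report that the system is not connected."""
    deadline = time.monotonic() + period
    while time.monotonic() < deadline:
        if is_connected():
            return send_fn()
        time.sleep(interval)
    return "system is not connected"
```

If the counts disagree, the transfer is treated as erroneous and the file is resent once the receiver is reachable again.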

3.4 SOFTWARE DESCRIPTION

.NET Framework

Since its initial announcement, the .NET Framework has taken on many new and different
meanings to different people. To a developer, .NET means a great environment for creating
robust distributed applications. To an IT manager, .NET means simpler deployment of
applications to end users, tighter security, and simpler management. To a CTO or CIO, .NET
means happier developers using state-of-the-art development technologies and a smaller
bottom line. To understand why all these statements are true, you need to get a grip on what
the .NET Framework consists of, and how it's truly a revolutionary step forward for
application architecture, development, and deployment.

.NET Framework

Now that you are familiar with the major goals of the .NET Framework, let's briefly
examine its architecture. The .NET Framework sits on top of the operating system, which can
be one of a few different flavors of Windows, and consists of a number of components. .NET is
essentially a system application that runs on Windows.

Conceptually, the CLR and the JVM are similar in that they are both runtime
infrastructures that abstract the underlying platform differences. However, while the JVM
officially supports only the Java language, the CLR supports any language that can be
represented in its Common Intermediate Language (CIL). The JVM executes bytecode, so it
can, in principle, support many languages, too.

Unlike Java's bytecode, though, CIL is never interpreted. Another conceptual
difference between the two infrastructures is that Java code runs on any platform with a JVM,
whereas .NET code runs only on platforms that support the CLR. In April 2003, the
International Organization for Standardization and the International Electrotechnical
Commission (ISO/IEC) recognized a functional subset of the CLR, known as the Common
Language Infrastructure (CLI), as an international standard.

The .NET Framework Class Library

The second most important piece of the .NET Framework is the .NET Framework
class library (FCL). As we have seen, the common language runtime handles the dirty work of
actually running the code you write. But to write that code, you need a foundation of available
classes to access the resources of the operating system, database server, or file server. The FCL
is made up of a hierarchy of namespaces that expose classes, structures, interfaces, enumerations,
and delegates that give you access to these resources.

The namespaces are logically defined by functionality. For example, the System.Data
namespace contains all the functionality for accessing databases. This namespace is further
broken down into System.Data.SqlClient, which exposes functionality specific to SQL Server,
and System.Data.OleDb, which exposes functionality for accessing OLE DB data sources.

The bounds of a namespace aren't necessarily defined by specific assemblies within
the FCL; rather, they're focused on functionality and logical grouping. In total, there are more
than 20,000 classes in the FCL, all logically grouped in a hierarchical manner. Figure 1.8
shows where the FCL fits into the .NET Framework and the logical grouping of namespaces.

To use an FCL class in your application, you use the Imports statement in Visual
Basic .NET or the using statement in C#. When you reference a namespace in Visual Basic
.NET or C#, you also get the convenience of auto-complete and auto-list members when you
access the objects' types using Visual Studio .NET. This makes it very easy to determine
what types are available for each class in the namespace you're using. As you'll see over the
next several weeks, it's very easy to start coding in Visual Studio .NET.

The Structure of a .NET Application

To understand how the common language runtime manages code execution, you must
examine the structure of a .NET application. The primary unit of a .NET application is the
assembly. An assembly is a self-describing collection of code, resources, and metadata. The
assembly manifest contains information about what is contained within the assembly. The
assembly manifest provides:

 Identity information, such as the assembly’s name and version number
 A list of all types exposed by the assembly
 A list of other assemblies required by the assembly
 A list of code access security instructions, including permissions required by the
assembly and permissions to be denied the assembly

Each assembly has one and only one assembly manifest, and it contains all the
description information for the assembly. However, the assembly manifest can be contained
in its own file or within one of the assembly’s modules.

Introduction to Object-Oriented Programming

Programming in the .NET Framework environment is done with objects. Objects are
programmatic constructs that represent packages of related data and functionality. Objects are
self-contained and expose specific functionality to the rest of the application environment
without detailing the inner workings of the object itself. Objects are created from a template
called a class. The .NET base class library provides a set of classes from which you can
create objects in your applications. You also can use the Microsoft Visual Studio
programming environment to create your own classes. This section introduces the
concepts associated with object-oriented programming.

Visual Basic .NET

When an instance of a class is created, a copy of the instance data defined by that
class is created in memory and assigned to the reference variable. Individual instances of a
class are independent of one another and represent separate programmatic constructs. There
is generally no limit to how many copies of a single class can be instantiated at any time. To
use a real-world analogy, if a car is an object, the plans for the car are the class. The plans can
be used to make any number of cars, and changes to a single car do not, for the most part,
affect any other cars.

Objects are composed of members. Members are properties, fields, methods, and
events, and they represent the data and functionality that comprise the object. Fields and
properties represent data members of an object. Methods are actions the object can perform,
and events are notifications an object receives from or sends to other objects when activity
happens in the application.

To continue with the real-world example of a car, consider that a Car object has fields
and properties, such as Color, Make, Model, Age, Gas Level, and so on. These are the data
that describe the state of the object. A Car object might also expose several methods, such as
Accelerate, Shift Gears, or Turn. The methods represent behaviors the object can execute.
And events represent notifications. For example, a Car object might receive an Engine
Overheating event from its Engine object, or it might raise a Crash event when interacting
with a Tree object.

Object Models

Simple objects might consist of only a few properties, methods, and perhaps an event
or two. More complex objects might require numerous properties and methods and possibly
even subordinate objects. Objects can contain and expose other objects as members. For
example, the Textbox control exposes a Font property, which consists of a Font object.
Similarly, every instance of the Form class contains and exposes a Controls collection that
comprises all of the controls contained by the form. The object model defines the hierarchy of
contained objects that form the structure of an object.

An object model is a hierarchical organization of subordinate objects contained and
exposed within a main object. To illustrate, let’s revisit the example of a car as an object. A
car is a single object, but it also consists of subordinate objects. A Car object might contain
an Engine object, four Wheel objects, a Transmission object, and so on. The composition of
these subordinate objects directly affects how the Car object functions as a whole.

Encapsulation

Encapsulation is the concept that the implementation of an object is independent of its
interface. Put another way, an application interacts with an object through its interface, which
consists of its public properties and methods. As long as this interface remains constant, the
application can continue to interact with the component, even if the implementation of the
interface was completely rewritten between versions.

Objects should only interact with other objects through their public methods and
properties. Thus, objects should contain all of the data they require, as well as all of the
functionality that works with that data. The internal data of an object should never be exposed
in the interface; thus, fields rarely should be Public (public).
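As a minimal sketch of this idea (written in Python for brevity, although the project itself uses VB.NET), a Car's fuel state can be hidden behind a public property and method:

```python
class Car:
    """The fuel state is internal; callers use the public interface only."""

    def __init__(self):
        self._gas_level = 0.0  # internal field, never exposed directly

    @property
    def gas_level(self):
        """Read-only public view of the internal field."""
        return self._gas_level

    def refuel(self, litres):
        """Public method that validates input before touching internal state."""
        if litres < 0:
            raise ValueError("cannot refuel a negative amount")
        self._gas_level += litres

car = Car()
car.refuel(20.0)
print(car.gas_level)  # 20.0
```

Because callers never touch `_gas_level` directly, the internal representation could change without breaking the application, which is exactly the point of encapsulation.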

Polymorphism

Polymorphism is the ability of different classes to provide different implementations
of the same public interfaces. In other words, polymorphism allows methods and properties
of an object to be called without regard for the particular implementation of those members.
For example, a Driver object can interact with a Car object through the Car public interface.
If another object, such as a Truck object or a SportsCar object, exposes the same public
interface, the Driver object can interact with it without regard to the specific
implementation of that interface. There are two principal ways through which polymorphism
can be provided: interface polymorphism and inheritance polymorphism.
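The Driver/Car/Truck example above can be sketched as follows (in Python for brevity; the project itself is VB.NET), showing a caller that depends only on the shared public interface:

```python
class Car:
    def accelerate(self):
        return "the car accelerates"

class Truck:
    def accelerate(self):
        return "the truck accelerates"

class Driver:
    def drive(self, vehicle):
        # The driver calls accelerate() without regard to which
        # concrete class implements it.
        return vehicle.accelerate()

driver = Driver()
print(driver.drive(Car()))    # the car accelerates
print(driver.drive(Truck()))  # the truck accelerates
```

Any new class exposing the same `accelerate` method could be handed to `Driver.drive` without changing the driver's code.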

FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL
Server 2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL
Server 2000 Meta Data Services. References to the component now use the term Meta Data
Services. The term repository is used only in reference to the repository engine within Meta
Data Services. The database consists of several types of objects, including the following:

Query

A query is a question asked of the data. Access gathers the data that answers
the question from one or more tables. The data that makes up the answer is either a dynaset (if
you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest
information in the dynaset. Access either displays the dynaset or snapshot for us to view, or
performs an action on it, such as deleting or updating.

Forms

A form is used to view and edit information in the database record by record. A form
displays only the information we want to see, in the way we want to see it. Forms use
familiar controls such as textboxes and checkboxes, which makes viewing and entering data
easy.

Views of Form:

We can work with forms in several views; primarily, there are two:

1. Design View

2. Form View

Design View

To build or modify the structure of a form, we work in the form's design view. We can
add controls to the form that are bound to fields in a table or query, including textboxes, option
buttons, graphs, and pictures.

Report

A report is used to view and print information from the database. A report can
group records into many levels and compute totals and averages by checking values from
many records at once. The report is also attractive and distinctive because we have control
over its size and appearance.

Macro

A macro is a set of actions, each of which does something, such as opening a
form or printing a report. We write macros to automate common tasks, which makes the work
easy and saves time.

Module

Modules are units of code written in the Access Basic language. We can write and use
modules to automate and customize the database in very sophisticated ways.

4. SYSTEM DESIGN

Design is a multi-step process that focuses on data structures, software architecture,
procedural details (algorithms, etc.), and the interfaces between modules. The design process also
translates the requirements into a representation of the software that can be assessed for quality
before coding begins. Computer software design changes continuously as new methods, better
analysis, and broader understanding evolve. Software design is at a relatively early stage in its
evolution.

Therefore, software design methodology lacks the depth, flexibility, and quantitative
nature normally associated with more classical engineering disciplines. However,
techniques for software design do exist, criteria for design quality are available, and design
notation can be applied.

4.1 INPUT DESIGN


Input design is the process of converting user-originated inputs to a computer-based
format. Input design is one of the most expensive phases of the operation of a computerized
system and is often a major problem area of a system.

In this project, the input design is implemented in various Windows forms using various
methods. For example, in the Admin and user forms, an empty username or password is not
allowed. If the username already exists in the database, the input is considered invalid and is
not accepted.
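The checks described above might be sketched as a single validation routine (in Python for brevity; the function and message names are illustrative assumptions, not the project's actual code):

```python
def validate_registration(username, password, existing_usernames):
    """Reject empty credentials and duplicate usernames, as in the
    Admin and user forms described above."""
    if not username.strip():
        return "username must not be empty"
    if not password:
        return "password must not be empty"
    if username in existing_usernames:
        return "username already exists"
    return "ok"

print(validate_registration("alice", "s3cret", {"bob"}))  # ok
print(validate_registration("bob", "s3cret", {"bob"}))    # username already exists
```

Performing these checks before touching the database keeps invalid input out of the system at the point of entry.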

4.2 OUTPUT DESIGN


Output design generally refers to the results and information that are generated by the
system. For many end-users, output is the main reason for developing the system and the basis
on which they evaluate the usefulness of the application.

4.3 CODE DESIGN


Codes facilitate easier identification, simpler handling, and faster retrieval of items
while consuming less storage space. The codes are designed in such a manner that they are
easily understood by the user. Codes are also generated automatically by the system.

4.4 Dataflow Diagram

Level 0

[Level 0 data flow: User → login as the specified user → check login details.]

Level 1

[Level 1 data flow: Sending data → is the node active? If no, stop; if yes, the receiver
receives the data and stores the file → the sender and receiver files are matched for error
correction and the transfer time is calculated → Stop.]

5. SYSTEM TESTING

Procedure-level testing is performed first. By giving improper inputs, the errors
that occur are noted and eliminated. In computer programming, unit testing is a procedure used
to validate that individual units of source code are working properly. A unit is the smallest
testable part of an application. In procedural programming a unit may be an individual
program, function, procedure, etc., while in object-oriented programming, the smallest unit is
a method, which may belong to a base/super class, abstract class or derived/child class.

TESTING TYPES

5.1 BLACK BOX TESTING

Internal system design is not considered in this type of testing. Tests are based on
requirements and functionality.

5.2 WHITE BOX TESTING

This testing is based on knowledge of the internal logic of an application’s code. Also
known as Glass box Testing. Internal software and code working should be known for this
type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.

5.3 UNIT TESTING

Unit testing focuses verification efforts on the smallest unit of software design, the module.
This is also known as “module testing”. The modules are tested separately, and this testing is
carried out during the programming stage itself. In this testing step, each module is found to
work satisfactorily with regard to the expected output from the module.
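A minimal unit test for one small unit, such as the line-count helper used in the error-correction module, might look like this (Python's unittest is used for illustration; the project itself would test its VB.NET modules):

```python
import unittest

def count_lines_in_text(text):
    """Unit under test: line counting as used by the error-correction step."""
    if text == "":
        return 0
    return text.count("\n") + (0 if text.endswith("\n") else 1)

class CountLinesTest(unittest.TestCase):
    def test_empty(self):
        self.assertEqual(count_lines_in_text(""), 0)

    def test_trailing_newline(self):
        self.assertEqual(count_lines_in_text("a\nb\n"), 2)

    def test_missing_trailing_newline(self):
        self.assertEqual(count_lines_in_text("a\nb"), 2)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CountLinesTest))
print(result.wasSuccessful())  # True
```

Each test exercises the unit in isolation with one kind of input, which is exactly the scope described above for module testing.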

5.4 INTEGRATION TESTING

Integration testing is a systematic technique for constructing tests to uncover errors
associated with the interfaces between modules. In this project, all the modules are combined
and then the entire program is tested as a whole. In the integration testing step, all the errors
uncovered are corrected before the next testing steps.

5.5 VALIDATION TESTING

Validation testing is where the requirements established as part of software requirements
analysis are validated against the software that has been constructed. This test provides the final
assurance that the software meets all functional, behavioural, and performance requirements.
The errors that were not uncovered during integration testing are found and corrected
during this phase.

The purpose of integration testing is to verify the functional, performance, and reliability
requirements placed on major design items. These "design items", i.e. assemblages (or groups
of units), are exercised through their interfaces using black box testing, with success and error
cases being simulated via appropriate parameter and data inputs.

Test cases are constructed to test that all components within assemblages interact
correctly, for example across procedure calls or process activations, and this is done after
testing individual modules, i.e. unit testing.

The overall idea is a "building block" approach, in which verified assemblages are
added to a verified base which is then used to support the integration testing of further
assemblages.

6. SYSTEM IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned out
into a working system. Thus it can be considered to be the most critical stage in achieving a
successful new system and in giving the user, confidence that the new system will work and
be effective.

The implementation stage involves careful planning, investigation of the existing
system and its constraints on implementation, design of methods to achieve the changeover,
and evaluation of changeover methods.

Implementation is the process of converting a new system design into operation. It is
the phase that focuses on user training, site preparation, and file conversion for installing a
candidate system. The important factor that should be considered here is that the conversion
should not disrupt the functioning of the organization.

The application is implemented in the Internet Information Services 5.0 web server
under the Windows 2000 Professional and accessed from various clients.

Implementation is the most crucial stage in achieving a successful system and giving
the users confidence that the new system is workable and effective. Here, a modified
application replaces an existing one. This type of conversion is relatively easy to
handle, provided there are no major changes in the system.

Each program was tested individually at the time of development using test data, and it
was verified that the programs link together in the way specified in the program specifications.
The computer system and its environment were tested to the satisfaction of the user. The system
that has been developed is accepted and proved to be satisfactory for the user, and so the
system will be implemented soon. A simple operating procedure is included so
that the user can understand the different functions clearly and quickly.

7. SYSTEM MAINTENANCE

The objective of this maintenance work is to make sure that the system keeps working
at all times without any bugs. Provision must be made for environmental changes that may
affect the computer or software system; this is called the maintenance of the system.
Nowadays there is rapid change in the software world, and the system should be capable of
adapting to these changes. In our project, processes can be added without affecting other
parts of the system.

Maintenance plays a vital role. The system is liable to accept modifications after its
implementation, and it has been designed to accommodate new changes without affecting
its performance or accuracy.

System testing in the project was carried out as follows. Procedure-level testing was
performed first: improper inputs were supplied, and the resulting errors were noted and
eliminated. Web-form-level testing was then performed.

In the forms, zero-length usernames and passwords were entered and checked, as were
duplicate usernames. Client-side validations were also carried out.

Dates were entered in the wrong format and checked, and invalid e-mail IDs were
given and checked. This is the final step in the system life cycle: the tested, error-free
system is implemented in the real-life environment, running online, and necessary changes
are made as required. System maintenance is then performed monthly or yearly, based on
company policy, checking for errors such as runtime errors and long-run errors, and
carrying out other maintenance such as table verification and reports.
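The form-level checks described above can be sketched as a single validation routine. This is an illustrative Python sketch only (the report's actual forms are C# Windows Forms); the function name, field names, and the dd-mm-yyyy date format are assumptions, not taken from the project code.

```python
import re
from datetime import datetime

def validate_registration(username, password, email, date_str, existing_users):
    """Collects validation errors mirroring the tests described in the report:
    zero-length username/password, duplicate username, bad e-mail, bad date.
    All names and formats here are illustrative assumptions."""
    errors = []
    # Zero-length username/password checks.
    if len(username) == 0:
        errors.append("username must not be empty")
    if len(password) == 0:
        errors.append("password must not be empty")
    # Duplicate-username check (against an in-memory set for this sketch;
    # the real system would query its user table).
    if username in existing_users:
        errors.append("username already taken")
    # Simple e-mail shape check (not a full RFC 5322 validator).
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid e-mail id")
    # Date entered in the wrong manner.
    try:
        datetime.strptime(date_str, "%d-%m-%Y")
    except ValueError:
        errors.append("date must be in dd-mm-yyyy format")
    return errors

# Example: empty password, duplicate name, bad e-mail, bad date all flagged.
print(validate_registration("alice", "", "alice@", "2024-01-05", {"alice"}))
```

Running each check independently and collecting all failures, rather than stopping at the first, matches the testing procedure above, where every class of improper input is exercised and noted.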

8. CONCLUSION

Topology-transparent scheduling has suffered from many drawbacks. In this paper, we
have established that the combinatorial construction of such schemes can be done much more
generally than previously suggested. The combinatorial characterization leads not only to
more general construction schemes but also to analytic results suggesting that topology-
transparent schemes retain strong throughput and delay performance even in an
environment with neighbourhoods larger than anticipated.

The fundamental problem, from the beginning, has been to develop a realistic
acknowledgment model that realizes the performance indicated by a theory based on
omniscient acknowledgment (OMN), in which collision is the only cause of erasures.
Rateless forward error correction (RFEC) has been proposed here as a solution, and a
practical implementation using LT codes has been described. To validate this solution,
experiments were conducted using topology-transparent schedules based on orthogonal
arrays, comparing OMN with RFEC and exploring the analytical model developed earlier.
The computational results are compelling, showing that RFEC has no observable negative
effect on throughput and only a small impact on delay.
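The conclusion names LT codes as the rateless FEC mechanism but does not reproduce them. The sketch below (in Python, as a language-neutral illustration rather than the report's C# implementation) shows the core idea: each coded symbol is the XOR of a pseudo-random subset of source blocks, and a peeling decoder recovers blocks from degree-1 symbols. The degree distribution used here is a crude stand-in for the robust soliton distribution of an actual LT code, and all names are illustrative.

```python
import random

def lt_encode(blocks, seed):
    """One LT-coded symbol: the XOR of a pseudo-random subset of source blocks.

    A real LT code draws the degree from the robust soliton distribution;
    the fixed mix below is only a stand-in to keep the sketch short. The
    seed stands in for a symbol ID that the receiver also knows, so both
    sides can reproduce the same neighbour set."""
    rng = random.Random(seed)
    degree = min(len(blocks), rng.choice([1, 1, 2, 2, 3, 4]))
    chosen = rng.sample(range(len(blocks)), degree)
    symbol = 0
    for i in chosen:
        symbol ^= blocks[i]
    return set(chosen), symbol

def lt_decode(k, coded):
    """Peeling decoder: repeatedly release source blocks from degree-1 symbols."""
    coded = [[set(nbrs), val] for nbrs, val in coded]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for sym in coded:
            nbrs = sym[0]
            # Subtract the contribution of already-recovered blocks.
            for i in [j for j in nbrs if j in recovered]:
                nbrs.discard(i)
                sym[1] ^= recovered[i]
            # A degree-1 symbol directly reveals one source block.
            if len(nbrs) == 1:
                i = next(iter(nbrs))
                if i not in recovered:
                    recovered[i] = sym[1]
                    progress = True
    return recovered

# Toy run: four source blocks and four coded symbols that peel fully.
blocks = [5, 9, 12, 7]
coded = [({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2}, 9 ^ 12), ({2, 3}, 12 ^ 7)]
print(lt_decode(4, coded))  # recovers {0: 5, 1: 9, 2: 12, 3: 7}
```

The rateless property is what makes this attractive for acknowledgment in a slotted schedule: the sender can keep emitting fresh coded symbols until the receiver signals successful decoding, so a slot lost to collision costs only a few extra symbols rather than a retransmission round.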

9. FUTURE ENHANCEMENTS

The project has covered almost all of the requirements. Further requirements and
improvements can easily be accommodated, since the coding is mainly structured and
modular in nature. Improvements can be made by changing the existing modules or adding
new ones.

10. BIBLIOGRAPHY

BOOK REFERENCES

1. A. Woo and D. E. Culler, “A transmission control scheme for media access in sensor
networks,” in Proc. MobiCom ’01, Jul. 2001, pp. 221–235.

2. I. Chlamtac and S. S. Pinter, “Distributed node organization algorithm for channel access
in a multihop dynamic radio network,” IEEE Trans. Comput., vol. 36, pp. 728–737, Jun.
1987.

3. C. Zhu and S. Corson, “A five-phase reservation protocol (FPRP) for mobile ad hoc
networks,” in Proc. IEEE INFOCOM, 1998, pp. 322–331.

4. I. Chlamtac and A. Faragó, “Making transmission schedules immune to topology changes
in multi-hop packet radio networks,” IEEE/ACM Trans. Networking, vol. 2, no. 1, pp. 23–29,
Feb. 1994.

5. J.-H. Ju and V. O. K. Li, “An optimal topology-transparent scheduling method in multihop
packet radio networks,” IEEE/ACM Trans. Networking, vol. 6, no. 3, pp. 298–306, Jun. 1998.

6. C. J. Colbourn, A. C. H. Ling, and V. R. Syrotiuk, “Cover-free families and topology-
transparent scheduling for MANETs,” Designs, Codes and Cryptography, vol. 32, no. 1–3,
pp. 35–65, May 2004.

WEBSITES

1. http://www9.limewire.com/developer/gnutella_protocol_0.4.pdf

2. http://www.darkridge.com/~jpr5/doc/gnutella.html

3. http://people.cs.uchicago.edu

4. http://www.pcquest.com/content/p2p/102091205.asp

11. APPENDIX

11.1 SAMPLE CODING

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.IO;
using System.Data.SqlClient;

namespace CodeTreeView
{
    public partial class regform : Form
    {
        SqlConnection con;
        SqlCommand cmd;
        SqlCommand cmd1;
        SqlDataReader dr;

        public regform()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            con = new SqlConnection("server=.;database=estimation;user id=sa;password=");
            con.Open();

            // Parameterised query: avoids SQL injection from the login fields.
            cmd = new SqlCommand(
                "select * from login where username=@username and password=@password", con);
            cmd.Parameters.Add("@username", SqlDbType.VarChar, 20).Value = username.Text;
            cmd.Parameters.Add("@password", SqlDbType.VarChar, 20).Value = password.Text;

            // Record the login attempt in the report table.
            cmd1 = new SqlCommand("insert into rreport(userreport) values(@userreport)", con);
            cmd1.Parameters.Add("@userreport", SqlDbType.VarChar, 20).Value = username.Text;
            cmd1.ExecuteNonQuery();

            dr = cmd.ExecuteReader();
            if (dr.Read())
            {
                // Credentials matched: open the browse window.
                browse bbb = new browse();
                bbb.Show();
            }
            else
            {
                label4.Text = "please enter a correct username and password";
            }
            dr.Close();
            con.Close();
        }

        private void linkLabel1_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
        {
            registration rrr = new registration();
            rrr.Show();
        }

        private void button2_Click(object sender, EventArgs e)
        {
            Application.Exit();
        }
    }
}

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;
using System.IO;

namespace PathFinderApp
{
    public partial class registration : Form
    {
        SqlConnection con;
        SqlCommand cmd;

        public registration()
        {
            InitializeComponent();
        }

        private void registration_Load(object sender, EventArgs e)
        {
        }

        private void glassButton1_Click(object sender, EventArgs e)
        {
            con = new SqlConnection("server=.;database=manet;user id=sa;password=");
            con.Open();

            // Parameterised insert: avoids SQL injection from the registration fields.
            cmd = new SqlCommand(
                "insert into multicast(username,password,confirm,emailid) " +
                "values(@username,@password,@confirm,@emailid)", con);
            cmd.Parameters.Add("@username", SqlDbType.VarChar, 20).Value = textBox1.Text;
            cmd.Parameters.Add("@password", SqlDbType.VarChar, 20).Value = textBox2.Text;
            cmd.Parameters.Add("@confirm", SqlDbType.VarChar, 20).Value = textBox3.Text;
            cmd.Parameters.Add("@emailid", SqlDbType.VarChar, 50).Value = textBox4.Text;
            cmd.ExecuteNonQuery();
            con.Close();

            // Registration complete: return to the login window.
            login eer = new login();
            eer.Show();
        }
    }
}

11.2 SCREEN SHOTS

Login

Registration

Connected IPs

Node access

Choose File

Open The File

Send File

Acknowledgement Receive

Time Calculation
