Network Border Patrol

Dept. of CSE, KVSRIT

ABSTRACT
Network Border Patrol
The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion-avoidance mechanism called network border patrol (NBP). NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network. Moreover, NBP is complemented with the proposed enhanced core-stateless fair queuing (ECSFQ) mechanism, which provides fair bandwidth allocations to competing flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing complexity toward the edges of the network whenever possible. Simulation results show that NBP effectively eliminates congestion collapse and that, when combined with ECSFQ, approximately max-min fair bandwidth allocations can be achieved for competing flows. NBP itself is realized through a feedback control algorithm and a rate control algorithm running at the edge routers.


Project Synopsis


EXISTING SYSTEM
The present system relies on TCP congestion control, which is implemented primarily through algorithms operating at the end systems. Unfortunately, TCP congestion control also illustrates some of the shortcomings of the end-to-end argument. As a result of its strict adherence to end-to-end congestion control, the current Internet suffers from several maladies.

DISADVANTAGES:
- Congestion is unavoidable as network traffic increases.
- Packet loss may occur due to congestion collapse.
- Packets may arrive at various speeds, which can lead to collisions.
- Security is low.
- Flexibility and scalability are limited.

PROPOSED SYSTEM:
The proposed system introduces a novel congestion-avoidance mechanism called network border patrol (NBP). NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network.

ADVANTAGES:
- Network Border Patrol is placed at the network border to control the speed of the packets.
- Congestion is avoided as the speed decreases.
- Packet loss is controlled.
- Security is higher.
- Flexibility and scalability are improved.


System Analysis


MODULES:
- REGISTRATION MODULE
- CLIENT MODULE
- SERVER MODULE
- FEEDBACK MODULE
- RATE CONTROL MODULE

REGISTRATION MODULE: Every user must register with the server. Users are certified by the server before they can enter the network, which prevents unauthorized users from accessing the server. Client and server each have an authentication entry, which also serves to differentiate between them. If a user enters incorrect credentials, the server generates a warning for every mismatched input. The server matches the supplied user name and password against its database; if they match, the user is granted a connection, otherwise the server rejects the unauthorized user.
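As a rough illustration of the credential check described above, the sketch below matches a submitted user name and password against a database table through JDBC. The data source name (nbp), the users table and its columns, and the use of the JDBC-ODBC bridge (available on older JDKs such as the Java 1.3 platform listed in the requirements) are assumptions for illustration, not the project's actual code.

    import java.sql.*;

    public class LoginCheck {
        // Returns true when the username/password pair exists in the users table.
        public static boolean isValidUser(String user, String pwd) throws Exception {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");    // JDBC-ODBC bridge (older JDKs)
            Connection con = DriverManager.getConnection("jdbc:odbc:nbp");   // assumed DSN
            PreparedStatement ps = con.prepareStatement(
                    "SELECT username FROM users WHERE username = ? AND password = ?");
            ps.setString(1, user);
            ps.setString(2, pwd);
            ResultSet rs = ps.executeQuery();
            boolean valid = rs.next();                        // a matching row means the credentials are correct
            con.close();
            return valid;
        }
    }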

CLIENT MODULE: In this module, the user first supplies a user name and password for security. The user then sends a request to retrieve a file and receives the file from the server.


If the server communicates with more than one user, it acts as a supervisor between those users, and the users share files through the server. Every user accesses the server through a port and is identified by its port number. Clients are the users that request files or data from the server. If the requested file is available, the request is turned into a response and the server replies; otherwise it displays File Not Found.

SERVER MODULE: In this module, the server is the administrator of all the processes. Only the server checks whether a user is authorized. In this group communication system, processing is carried out with the help of a cryptographic procedure. Based on the user-defined query, the server searches for the corresponding file in its database and sends that file to the particular user. The server keeps the requested files in its folder, and its database holds the user name, password, status and sender IP address. The server's response to a request is sent to the client. The packets sent by the server may travel at different speeds; if the speeds vary, congestion, collisions or packet loss may occur.
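A minimal sketch of the request/response behaviour described above, assuming a plain line-oriented protocol in which the client sends a file name and the server either streams the file back or replies File Not Found. The port number, folder location and message format are illustrative assumptions.

    import java.io.*;
    import java.net.*;

    public class FileServer {
        public static void main(String[] args) throws IOException {
            ServerSocket listener = new ServerSocket(5000);          // assumed port
            File folder = new File("serverfiles");                   // assumed folder holding shared files
            while (true) {
                Socket client = listener.accept();                   // wait for the next client
                BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                String requestedName = in.readLine();                // client sends the file name it wants
                File requested = new File(folder, requestedName);
                if (requested.exists()) {
                    BufferedReader file = new BufferedReader(new FileReader(requested));
                    String line;
                    while ((line = file.readLine()) != null) {
                        out.println(line);                           // stream the file contents as the response
                    }
                    file.close();
                } else {
                    out.println("File Not Found");                   // matches the behaviour described above
                }
                client.close();
            }
        }
    }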

FEEDBACK MODULE: An algorithm is used to avoid congestion and to check for collisions while packets travelling at different speeds are transferred. A feedback algorithm monitors the speeds of the packets and determines which packets must be slowed down. Normally, packets traverse the network at various speeds, and travelling at different speeds creates a chance of congestion or packet loss. This problem plays an important part in the current network topology. To overcome it, we follow a patrol-type method that controls the speed of each packet to a certain level. To obtain the speed, an algorithm is followed, i.e., the feedback control algorithm in NBP determines the speed of the packet flows.


RATE CONTROL MODULE: Network Border Patrol comes into effect in this module: after the feedback is obtained, the rate is reduced by the patrol near the entry point of the network router. The feedback algorithm indicates which packets' speed has to be reduced, and the rate is then reduced and controlled by the rate control algorithm. To overcome the congestion problem, we follow a patrol-type method that controls the speed of the packets to a certain level. After the feedback is examined, if a flow is found to be too fast, the rate of its packets is reduced to a certain limit. The packets are patrolled in such a manner that congestion is reduced, i.e., the NBP rate control algorithm regulates the rate at which each flow is allowed to enter the network.

SOFTWARE REQUIREMENTS:
- Java 1.3 or higher
- Swing
- Windows 98 or higher

HARDWARE REQUIREMENTS:
- Processor: Intel P-III based system
- Processor Speed: 250 MHz to 833 MHz
- RAM: 64 MB to 256 MB
- Hard Disk: 2 GB to 30 GB


Proposed System Description


Network Border Patrol is a network-layer congestion-avoidance protocol that is aligned with the core-stateless approach. The core-stateless approach, which has recently received a great deal of research attention, allows routers on the borders (or edges) of a network to perform flow classification and maintain per-flow state, but does not allow routers at the core of the network to do so. The figure below illustrates this architecture. As in other work on core-stateless approaches, we draw a further distinction between two types of edge routers. Depending on which flow it is operating on, an edge router may be viewed as an ingress or an egress router. An edge router operating on a flow passing into a network is called an ingress router, whereas an edge router operating on a flow passing out of a network is called an egress router. Note that a flow may pass through more than one egress (or ingress) router if the end-to-end path crosses multiple networks.

Fig 1
NBP prevents congestion collapse through a combination of per-flow rate monitoring at egress routers and per-flow rate control at ingress routers. Rate monitoring allows an egress router to determine how rapidly each flow's packets are leaving the network, whereas rate control allows an ingress router to police the rate at which each flow's packets enter the network. Linking these two functions together are the feedback packets exchanged between ingress and egress routers: ingress routers send egress routers forward feedback packets to inform them about the flows that are being rate controlled, and egress routers send ingress routers backward feedback packets to inform them about the rates at which each flow's packets are leaving the network. By matching the ingress rate and egress rate of each flow, NBP prevents congestion collapse within the network. This section describes three important aspects of the NBP mechanism: the modified edge routers, which must be present in the network; the feedback control algorithm, which determines how and when information is exchanged between edge routers; and the rate control algorithm, which uses the information carried in feedback packets to regulate flow transmission rates and thereby prevent congestion collapse in the network.

The Feedback Control Algorithm


The feedback control algorithm in NBP determines how and when feedback packets are exchanged between edge routers. Feedback packets take the form of ICMP packets and are necessary in NBP for three reasons. First, forward feedback packets allow egress routers to discover which ingress routers are acting as sources for each of the flows they are monitoring. Second, backward feedback packets allow egress routers to communicate per-flow bit rates to ingress routers. Third, forward and backward feedback packets allow ingress routers to detect incipient network congestion by monitoring edge-to-edge round trip times.

The contents of feedback packets are shown in Figure 5. Contained within the forward feedback packet generated at an ingress router are a time stamp and a list of flow specifications for flows originating at the ingress router. The time stamp field is used to calculate the round trip time between two edge routers, and the list of flow specifications indicates to an egress router the identities of active flows originating at the ingress router. A flow specification is a value uniquely identifying a flow, assigned by the ingress router's flow classifier. An ingress router adds a flow to its list of active flows whenever a packet from a new flow arrives; it removes a flow when the flow becomes inactive. In the event that the network's maximum transmission unit size is not sufficient to hold an entire list of flow specifications, multiple forward feedback packets are used.

When an egress router receives a forward feedback packet, it immediately generates a backward feedback packet and returns it to the ingress router. Contained within the backward feedback packet are the forward feedback packet's original time stamp, a hop count, and a list of observed bit rates, called egress rates, collected by the egress router for each flow listed in the forward feedback packet. The hop count, which is used by the ingress router's rate control algorithm, indicates how many routers are in the path between the ingress and the egress router. The egress router determines the hop count by examining the time to live (TTL) field of arriving forward feedback packets. When the backward feedback packet arrives at the ingress router, its contents are passed to the ingress router's rate controller, which uses them to adjust the parameters of each flow's traffic shaper.

In order to determine how often to generate forward feedback packets, an ingress router keeps a byte transmission counter for each flow it monitors. Whenever a flow's byte transmission counter exceeds a threshold, denoted NBP's transmission counter threshold (Tx), the ingress router generates and transmits a forward feedback packet to the flow's egress router and resets the byte transmission counters of all flows included in the feedback packet. Using a byte transmission counter for each flow ensures that forward feedback packets are generated more frequently when flows transmit at higher rates, thereby allowing ingress routers to respond more quickly to impending congestion collapse. To maintain a frequent flow of feedback between edge routers even when data transmission rates are low, ingress routers also generate forward feedback packets whenever a time-out interval is exceeded. A rough estimate of the amount of overhead that NBP feedback packets create is provided in the Appendix of this paper.


The Rate Control Algorithm

The NBP rate control algorithm regulates the rate at which each flow is allowed to enter the network. Its primary goal is to converge on a set of per-flow transmission rates (hereinafter called ingress rates) that prevents congestion collapse due to undelivered packets. It also attempts to lead the network to a state of maximum link utilization and low router buffer occupancies, and it does this in a manner that is similar to TCP. In the NBP rate control algorithm, shown in Figure 6, a flow may be in one of two phases, slow start or congestion avoidance, similar to the phases of TCP congestion control. The desirable stability characteristics of the slow start and congestion avoidance algorithms have been proven in TCP congestion control, and NBP expects to benefit from their well-known stability features.

on arrival of backward feedback packet p from egress router e:
    currentRTT = currentTime - p.timestamp;
    if (currentRTT < e.baseRTT)
        e.baseRTT = currentRTT;
    deltaRTT = currentRTT - e.baseRTT;
    RTTsElapsed = (currentTime - e.lastFeedbackTime) / currentRTT;
    e.lastFeedbackTime = currentTime;
    for each flow f listed in p:
        rateQuantum = min(MSS / currentRTT, f.egressRate / QF);
        if (f.phase == SLOW_START)
            if (deltaRTT * f.ingressRate < MSS * e.hopcount)
                f.ingressRate = f.ingressRate * 2^RTTsElapsed;
            else
                f.phase = CONGESTION_AVOIDANCE;
        if (f.phase == CONGESTION_AVOIDANCE)
            if (deltaRTT * f.ingressRate < MSS * e.hopcount)
                f.ingressRate = f.ingressRate + rateQuantum * RTTsElapsed;
            else
                f.ingressRate = f.egressRate - rateQuantum;

Figure 6: Pseudocode for the ingress router rate control algorithm

In NBP, new flows entering the network start in the slow start phase and proceed to the congestion avoidance phase only after the flow has experienced incipient congestion. The rate control algorithm is invoked whenever a backward feedback packet arrives at an ingress router. Recall that backward feedback packets contain a timestamp and a list of flows arriving at the egress router from the ingress router, as well as the monitored egress rates for each flow. Upon the arrival of a backward feedback packet, the algorithm calculates the current round trip time (currentRTT in Figure 6) between the edge routers and updates the base round trip time (e.baseRTT), if necessary. The base round trip time (e.baseRTT) reflects the best observed round trip time between the two edge routers. The algorithm then calculates deltaRTT, which is the difference between the current round trip time (currentRTT) and the base round trip time (e.baseRTT). A deltaRTT value greater than zero indicates that packets are requiring a longer time to traverse the network than they once did, and this can only be due to the buffering of packets within the network. NBP's rate control algorithm decides that a flow is experiencing incipient congestion whenever it estimates that the network has buffered the equivalent of more than one of the flow's packets at each router hop. To do this, the algorithm first computes the product of the flow's ingress rate (f.ingressRate) and deltaRTT (i.e., f.ingressRate * deltaRTT). This value provides an estimate of the amount of the flow's data that is buffered somewhere in the network. If this amount (i.e., f.ingressRate * deltaRTT) is greater than the number of router hops between the ingress and the egress routers (e.hopcount) multiplied by the size of the largest possible packet (MSS) (i.e., MSS * e.hopcount), then the flow is considered to be experiencing incipient congestion. The rationale for determining incipient congestion in this manner is to maintain both high link utilization and low queueing delay. Ensuring there is always at least one packet buffered for transmission on a network link is the simplest way to achieve full utilization of the link, and deciding that congestion exists when more than one packet is buffered at the link keeps queueing delays low.
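As a purely illustrative calculation with assumed numbers (not taken from the project's simulations): suppose MSS = 1500 bytes, the path between the edge routers crosses e.hopcount = 4 routers, a flow's ingress rate is 250,000 bytes/s (2 Mb/s), and deltaRTT = 30 ms. The buffered-data estimate is f.ingressRate * deltaRTT = 250,000 * 0.03 = 7,500 bytes, while the threshold is MSS * e.hopcount = 1,500 * 4 = 6,000 bytes; since 7,500 > 6,000, the flow is judged to be experiencing incipient congestion and its ingress rate is reduced. Had deltaRTT been 20 ms instead, the estimate would be 5,000 bytes and the flow's rate could still be increased.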


Therefore, NBP's rate control algorithm allows the equivalent of e.hopcount packets to be buffered in flow f's path before it reacts to congestion by monitoring deltaRTT. A similar approach is used in the DECbit congestion avoidance mechanism. Furthermore, the approach used by NBP's rate control algorithm to detect congestion, by estimating whether the network has buffered the equivalent of more than one of the flow's packets at each router hop, has the advantage that, when congestion occurs, flows with higher ingress rates detect congestion first. This is because the condition f.ingressRate * deltaRTT < MSS * e.hopcount fails first for flows f with a large ingress rate, detecting that the path is congested due to ingress flow f. When the rate control algorithm determines that a flow is not experiencing congestion, it increases the flow's ingress rate. If the flow is in the slow start phase, its ingress rate is doubled for each round trip time that has elapsed since the last backward feedback packet arrived (f.ingressRate * 2^RTTsElapsed). The estimated number of round trip times since the last feedback packet arrived is denoted as RTTsElapsed. Doubling the ingress rate during slow start allows a new flow to rapidly capture available bandwidth when the network is underutilized. If, on the other hand, the flow is in the congestion avoidance phase, then its ingress rate is conservatively incremented by one rateQuantum value for each round trip that has elapsed since the last backward feedback packet arrived (f.ingressRate + rateQuantum * RTTsElapsed). This is done to avoid the creation of congestion. The rate quantum is computed as the maximum segment size divided by the current round trip time between the edge routers. This results in rate growth behavior that is similar to TCP in its congestion avoidance phase. Furthermore, the rate quantum is not allowed to exceed the flow's current egress rate divided by a constant quantum factor (QF). This guarantees that rate increments are not excessively large when the round trip time is small.

When the rate control algorithm determines that a flow is experiencing incipient congestion, it reduces the flow's ingress rate. If a flow is in the slow start phase, it enters the congestion avoidance phase. If a flow is already in the congestion avoidance phase, its ingress rate is reduced to the flow's egress rate decremented by a constant value. In other words, an observation of incipient congestion forces the ingress router to send the flow's packets into the network at a rate slightly lower than the rate at which they are leaving the network.

NBP's rate control algorithm is designed to have minimum impact on TCP flows. The rate at which NBP regulates each flow (f.ingressRate) is primarily a function of the round trip time between the flow's ingress and egress routers (currentRTT). In NBP, the initial ingress rate for a new flow is set to MSS/e.baseRTT, following TCP's initial rate of one segment per round trip time. NBP's currentRTT is always smaller than TCP's end-to-end round trip time, since the distance between ingress and egress routers (the currentRTT in NBP) is shorter than the end-to-end distance (TCP's RTT). As a result, f.ingressRate is normally larger than TCP's transmission rate when the network is not congested, since the TCP transmission window increases at a rate slower than f.ingressRate increases. Therefore, NBP normally does not regulate TCP flows. However, when congestion occurs, NBP reacts first by reducing f.ingressRate and, therefore, reducing the rate at which TCP packets are allowed to enter the network. TCP eventually detects the congestion (either by losing packets or due to longer round trip times) and then promptly reduces its transmission rate. From this point on, f.ingressRate becomes greater than TCP's transmission rate, and therefore NBP's congestion control does not regulate TCP sources until congestion happens again.
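The feedback control section above notes that the ingress router's rate controller adjusts the parameters of each flow's traffic shaper. The project does not spell out the shaper itself, so the following is only a minimal token-bucket style sketch of how a per-flow shaper could enforce f.ingressRate; the class name, bucket depth and units are assumptions.

    // Minimal token-bucket style shaper: admits a packet only when enough transmission
    // credit (in bytes) has accumulated at the flow's current ingress rate.
    class TrafficShaper {
        private double rateBytesPerSec;      // set from f.ingressRate by the rate controller
        private double tokens;               // accumulated transmission credit, in bytes
        private final double bucketDepth;    // maximum burst size, in bytes (assumed)
        private long lastRefill = System.nanoTime();

        TrafficShaper(double rateBytesPerSec, double bucketDepth) {
            this.rateBytesPerSec = rateBytesPerSec;
            this.bucketDepth = bucketDepth;
            this.tokens = bucketDepth;
        }

        // Called whenever a backward feedback packet produces a new ingress rate for this flow.
        synchronized void setRate(double newRateBytesPerSec) {
            rateBytesPerSec = newRateBytesPerSec;
        }

        // Returns true if a packet of the given size may enter the network now;
        // otherwise the ingress router should queue or delay the packet.
        synchronized boolean admit(int packetBytes) {
            long now = System.nanoTime();
            tokens = Math.min(bucketDepth, tokens + rateBytesPerSec * (now - lastRefill) / 1e9);
            lastRefill = now;
            if (tokens >= packetBytes) {
                tokens -= packetBytes;
                return true;
            }
            return false;
        }
    }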


System Design


Data Flow Diagrams


AUTHENTICATION MODULE (data flow diagram): Start -> enter username & password -> is valid? -> register with the server.


CLIENT (data flow diagram): The client registers with the server and then requests a file from the server.


SERVER (data flow diagram): The server checks for the requested file; if it is available, it responds to the client and the packet is sent to the appropriate client.


FEEDBACK (data flow diagram): The server activates NBP at the router, and the packet speed is monitored.


RATE CONTROL MODULE (data flow diagram): Based on the predefined value for the packet speed, the server controls the speed of the packets.


OVERALL FLOW (data flow diagram): Start -> enter username & password -> if not valid, re-enter; if valid, the client requests a file -> packets are sent from the server -> if the packet speed is high, NBP is activated and the speed is reduced to avoid congestion -> packets are delivered successfully to the client -> Stop.


UML Diagrams

Sequence Diagram

Sequence diagram: the source sends data packets to the ingress router, which forwards them through the core router to the egress router and on to the destination; along the same path the ingress router sends forward feedback packets toward the egress router, and the egress router returns backward feedback packets to the ingress router.


Software Overview


FEATURES OF THE LANGUAGE USED

Front-end tool: JAVA 1.3


About Java: Initially the language was called Oak, but it was renamed Java in 1995. The primary motivation for this language was the need for a platform-independent (i.e., architecture-neutral) language that could be used to create software to be embedded in various consumer electronic devices.
Java is a programmer's language. Java is cohesive and consistent. Except for those constraints imposed by the Internet environment, Java gives the programmer full control. Finally, Java is to Internet programming what C was to systems programming.

Applications and Applets: An application is a program that runs on our computer under the operating system of that computer. It is more or less like one created using C or C++. Java's ability to create applets makes it important. An applet is an application designed to be transmitted over the Internet and executed by a Java-compatible web browser. An applet is actually a tiny Java program, dynamically downloaded across the network, just like an image. But the difference is that it is an intelligent program, not just a media file; it can react to user input and change dynamically.

FEATURES OF JAVA
Security: Every time you download a normal program, you risk a viral infection. Prior to Java, most users did not download executable programs frequently, and those who did scanned them for viruses prior to execution. Most users still worry about the possibility of infecting their systems with a virus. In addition, another type of malicious program exists that must be guarded against. This type of program can gather private information, such as credit card numbers, bank account balances, and passwords. Java answers both of these concerns by providing a firewall between a networked application and your computer. When you use a Java-compatible Web browser, you can safely download Java applets without fear of virus infection or malicious intent.

Portability: For programs to be dynamically downloaded to all the various types of platforms connected to the Internet, some means of generating portable executable code is needed. As you will see, the same mechanism that helps ensure security also helps create portability. Indeed, Java's solution to these two problems is both elegant and efficient.

The Byte Code: The key that allows Java to solve the security and portability problems is that the output of the Java compiler is byte code. Byte code is a highly optimized set of instructions designed to be executed by the Java run-time system, which is called the Java Virtual Machine (JVM). That is, in its standard form, the JVM is an interpreter for byte code. Translating a Java program into byte code makes it much easier to run the program in a wide variety of environments: once the run-time package exists for a given system, any Java program can run on it. Although Java was designed for interpretation, there is technically nothing about Java that prevents on-the-fly compilation of byte code into native code. Sun has completed its Just In Time (JIT) compiler for byte code. When the JIT compiler is part of the JVM, it compiles byte code into executable code in real time, on a piece-by-piece, demand basis. It is not possible to compile an entire Java program into executable code all at once, because Java performs various run-time checks that can be done only at run time. The JIT compiles code as it is needed, during execution.

Java Virtual Machine (JVM): Beyond the language, there is the Java virtual machine. The Java virtual machine is an important element of the Java technology.


The virtual machine can be embedded within a web browser or an operating system. Once a piece of Java code is loaded onto a machine, it is verified. As part of the loading process, a class loader is invoked and byte code verification makes sure that the code that has been generated by the compiler will not corrupt the machine it is loaded on. Byte code verification takes place after compilation, when the code is loaded, to make sure that it is all accurate and correct. So byte code verification is integral to the compiling and executing of Java code.

Java Source (.java) -> javac -> Java Byte Code (.class) -> Java Virtual Machine

The picture above shows the development process a typical Java programmer uses to produce byte codes and execute them. The Java source code is placed in a .java file that is processed with the Java compiler, javac. The compiler produces a .class file, which contains the byte code. The class file is then loaded, across the network or locally on your machine, into the execution environment, the Java virtual machine, which interprets and executes the byte code.

Java Architecture: Java architecture provides a portable, robust, high-performing environment for development. Java provides portability by compiling the byte codes for the Java Virtual Machine, which are then interpreted on each platform by the run-time environment. Java is a dynamic system, able to load code when needed from a machine in the same room or across the planet.

Compilation of Code: When you compile the code, the Java compiler creates machine code (called byte code) for a hypothetical machine called the Java Virtual Machine (JVM). The JVM executes the byte code, and it was created to overcome the issue of portability.


The code is written and compiled for one machine and interpreted on all machines; this hypothetical machine is the Java Virtual Machine.
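For example, assuming an illustrative source file named Hello.java, the compile-and-run cycle described above looks like this (file name, class name and output are only for illustration):

    // Hello.java
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello, NBP");
        }
    }

    javac Hello.java     produces Hello.class, which contains the byte code
    java Hello           the JVM loads, verifies and interprets the byte code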

Compiling and interpreting Java source code: source code written on any machine (PC, Macintosh, or SPARC) is compiled into platform-independent Java byte code, and the byte code is then executed by the Java interpreter for that platform (PC, Macintosh, or SPARC).

During run time, the Java interpreter makes the byte code file behave as if it were running on a real Java Virtual Machine. In reality, the host could be an Intel Pentium machine running Windows 95, a Sun SPARCstation running Solaris, or an Apple Macintosh, and all of them can receive code from any computer through the Internet and run the applets.

SIMPLE: Java was designed to be easy for the professional programmer to learn and to use effectively. If you are an experienced C++ programmer, learning Java will be even easier, because Java inherits the C/C++ syntax and many of the object-oriented features of C++. Most of the confusing concepts from C++ are either left out of Java or implemented in a cleaner, more approachable manner.


In Java there are a small number of clearly defined ways to accomplish a given task.

Object-Oriented: Java was not designed to be source-code compatible with any other language. This allowed the Java team the freedom to design with a blank slate. One outcome of this was a clean, usable, pragmatic approach to objects. The object model in Java is simple and easy to extend, while simple types, such as integers, are kept as high-performance non-objects.

Robust: The multi-platform environment of the Web places extraordinary demands on a program, because the program must execute reliably in a variety of systems. The ability to create robust programs was given a high priority in the design of Java. Java is a strictly typed language; it checks your code at compile time and at run time. Java virtually eliminates the problems of memory management and de-allocation, which is completely automatic. In a well-written Java program, all run-time errors can and should be managed by the program.

SWING: Swing is a set of classes that provides more powerful and flexible components than are possible with the AWT. In addition to the familiar components such as buttons, check boxes and labels, Swing supplies several exciting additions, including tabbed panes, scroll panes, trees and tables. Even familiar components such as buttons have more capabilities in Swing. For example, a button may have both an image and a text string associated with it, and the image can change as the state of the button changes. Unlike AWT components, Swing components are not implemented by platform-specific code; instead they are written entirely in Java and, therefore, are platform-independent. The term lightweight is used to describe such elements. The number of classes and interfaces in the Swing packages is substantial. Some of the Swing component classes are listed below.


SWING COMPONENT CLASSES

Class            Description
AbstractButton   Abstract superclass for Swing buttons
ButtonGroup      Encapsulates a mutually exclusive set of buttons
ImageIcon        Encapsulates an icon
JApplet          The Swing version of Applet
JButton          The Swing push button class
JCheckBox        The Swing check box class
JComboBox        Encapsulates a combo box
JLabel           The Swing version of a label
JRadioButton     The Swing version of a radio button
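A minimal Swing sketch using a few of the classes listed above, illustrating the earlier point that a button can carry both an image and a text string and can react to user input. The frame title, button label, image file name and layout are illustrative assumptions, not part of the project code.

    import javax.swing.*;

    public class SwingDemo {
        public static void main(String[] args) {
            JFrame frame = new JFrame("NBP Demo");                                // top-level window
            JButton send = new JButton("Send", new ImageIcon("send.gif"));        // text plus an image
            JLabel status = new JLabel("Ready");
            send.addActionListener(e -> status.setText("Request sent"));          // react to user input
            JPanel panel = new JPanel();
            panel.add(send);
            panel.add(status);
            frame.getContentPane().add(panel);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(300, 120);
            frame.setVisible(true);
        }
    }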


Networking Classes in the JDK

Through the classes in java.net, Java programs can use TCP or UDP to communicate over the Internet. The URL, URLConnection, Socket, and ServerSocket classes all use TCP to communicate over the network. The DatagramPacket, DatagramSocket, and MulticastSocket classes are for use with UDP.

What Is a URL?

If you have been surfing the Web, you have undoubtedly heard the term URL and have used URLs to access HTML pages from the Web. It is often easiest, although not entirely accurate, to think of a URL as the name of a file on the World Wide Web, because most URLs refer to a file on some machine on the network. However, remember that URLs also can point to other resources on the network, such as database queries and command output.

Definition: URL is an acronym for Uniform Resource Locator and is a reference (an address) to a resource on the Internet. The following is an example of a URL, which addresses the Java Web site hosted by Sun Microsystems:

http://java.sun.com/index.html

As the example shows, a URL has two main components:

- Protocol identifier
- Resource name

Note that the protocol identifier and the resource name are separated by a colon and two forward slashes. The protocol identifier indicates the name of the protocol to be used to fetch the resource. The example uses the Hypertext Transfer Protocol (HTTP), which is typically used to serve up hypertext documents.


HTTP is just one of many different protocols used to access different types of resources on the net. Other protocols include File Transfer Protocol (FTP), Gopher, File, and News. The resource name is the complete address to the resource. The format of the resource name depends entirely on the protocol used, but for many protocols, including HTTP, the resource name contains one or more of the following components:

- Host Name: the name of the machine on which the resource lives.
- Filename: the pathname to the file on the machine.
- Port Number: the port number to which to connect (typically optional).
- Reference: a reference to a named anchor within a resource that usually identifies a specific location within a file (typically optional).

For many protocols, the host name and the filename are required, while the port number and reference are optional. For example, the resource name for an HTTP URL must specify a server on the network (host name) and the path to the document on that machine (filename); it also can specify a port number and a reference. In the URL for the Java Web site, java.sun.com is the host name, and the trailing slash is shorthand for the file named index.html.
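A small sketch using java.net.URL to pull apart the components discussed above; the URL is simply the example from the text, and the expected printed values are noted in the comments.

    import java.net.URL;

    public class UrlParts {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://java.sun.com/index.html");
            System.out.println("protocol : " + url.getProtocol());   // http
            System.out.println("host     : " + url.getHost());       // java.sun.com
            System.out.println("file     : " + url.getFile());       // /index.html
            System.out.println("port     : " + url.getPort());       // -1 when no explicit port is given
            System.out.println("ref      : " + url.getRef());        // null when there is no anchor
        }
    }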

Sequence of socket calls for a connection-oriented protocol:

Socket - create a descriptor for use in network communication. On success, the socket system call returns a small integer value similar to a file descriptor.

Bind - bind a local IP address and protocol port to a socket. When a socket is created it has no notion of endpoint addresses. An application calls bind to specify the local endpoint address for a socket. For TCP/IP protocols, the endpoint address uses the socket address structure. Servers use bind to specify the well-known port at which they will await connections.

Connect - connect to a remote server. After creating a socket, a client calls connect to establish an actual connection to a remote server. An argument to connect allows the client to specify the remote endpoint, which includes the remote machine's IP address and the protocol port number. Once a connection has been made, the client can transfer data across it.

Accept - accept the next incoming connection. Accept creates a new socket for each new connection request and returns the descriptor of the new socket to its caller. The server uses the new socket only for the new connection; it uses the original socket to accept additional connection requests. Once it has accepted a connection, the server can transfer data on the new socket. This call returns up to three values: an integer return code that is either an error indication or a new socket descriptor, the address of the client process, and the size of this address.

Listen - place the socket in passive mode and set the number of incoming TCP connections the system will enqueue. The backlog argument specifies how many connection requests can be queued by the system while it waits for the server to execute the accept system call. Listen is usually executed after both the socket and bind system calls, and immediately before the accept system call.

Send, sendto, recv and recvfrom - these system calls are similar to the standard read and write system calls, but additional arguments are required.

Close - terminate communication and de-allocate a descriptor. The normal UNIX close system call is also used to close a socket.
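In Java, these calls are wrapped by the ServerSocket class (socket, bind and listen are performed by its constructor, and accept returns a new Socket) and by the Socket class (socket plus connect). The following minimal sketch shows the sequence on both sides; the port number and the echo-style messages are illustrative assumptions.

    import java.io.*;
    import java.net.*;

    public class SocketSequence {
        // Server side: socket + bind + listen happen in the ServerSocket constructor;
        // accept() returns a new Socket for the client connection.
        static void server() throws IOException {
            ServerSocket listener = new ServerSocket(6000);          // bind to assumed well-known port 6000
            Socket conn = listener.accept();                         // blocks until a client connects
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
            out.println("echo: " + in.readLine());                   // read the request, send a reply
            conn.close();                                            // close terminates communication
            listener.close();
        }

        // Client side: the Socket constructor performs socket + connect to the remote
        // endpoint (IP address or host name, plus port number).
        static void client() throws IOException {
            Socket conn = new Socket("localhost", 6000);
            PrintWriter out = new PrintWriter(conn.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            out.println("hello");                                    // send
            System.out.println(in.readLine());                       // receive: "echo: hello"
            conn.close();
        }

        public static void main(String[] args) throws Exception {
            Thread serverThread = new Thread(() -> {
                try { server(); } catch (IOException e) { e.printStackTrace(); }
            });
            serverThread.start();
            Thread.sleep(200);                                       // give the server a moment to start
            client();
            serverThread.join();
        }
    }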

Back-end tool: MS-ACCESS


It minimizes data contention and guarantees data concurrency. Access maintains the preceding features with a high degree of overall system performance, so database users do not suffer from slow processing. Access also offers a heterogeneous option that allows users to access data on some non-Oracle databases transparently.


While most software development, web page design and similar work nowadays is done using graphical user interfaces, experienced Unix/Linux people appreciate the simplicity and efficiency of the powerful text-based tools available on those platforms. If you are familiar, for example, with one of the powerful text editors, such as Emacs or vi, it is very easy to work remotely (over the Internet or an intranet) on other computers using Telnet or SSH connections.


Testing & Debugging Strategies


Overview

Software testing identifies errors at an early stage and hence prevents a breakdown at a later stage, where it would be costlier. It is not unusual for a software development organization to expend 40% of the total project effort on testing. In the extreme, the testing of mission-critical software (e.g., flight control, nuclear reactor monitoring) can cost three to five times as much as all other software engineering activities combined. Planned testing identifies the differences between the expected results and the actual results. Since the main objective of software testing is to find errors, the person who does it must work to prove that the software fails. A successful test is one that uncovers as-yet-undiscovered errors, which helps make the software more rugged and reliable. However, it should be clear that testing cannot show the absence of defects; it can only show that defects are present in the software.

Levels of Testing

Testing is applied at different levels in the software development life cycle, but the testing done is different in nature and has different objectives at each level. The focus of all testing is to find errors, but different types of errors are looked for at each level. The levels of testing in a development project could be:

- Unit Testing: to test individual software items
- Integration Testing: to test the interfaces between the software items
- System Testing: to validate the entire product
- Acceptance Testing: to test the product for acceptance

Further levels of testing could include:

- Alpha Testing: testing by users at the developer's site or on a test environment
- Beta Testing: testing by a large number of users at the user site
- Field Testing: to test the system on the target environment

All levels of testing should be planned in advance and conducted systematically. The activities to be carried out include test planning, test case preparation and test execution.


Unit Testing (White Box Testing)

At the lowest level, the function of the basic unit of software is tested in isolation. This is where the most detailed investigation of the internal workings of individual units is carried out. Unit testing is often performed by the programmer who wrote the code. The purpose of unit testing is to find errors in the individual units, which could be data-related or logic-related errors. The tests can be derived from the program specification or the design document. Units which cannot be tested in isolation may require the creation of small test programs known as test harnesses; a sketch of such a harness is shown after the system testing description below.

Activities to be followed for unit testing:
- Ensure that all paths are traversed and branching takes place properly.
- Ensure that instructions related to a test case are executed properly.
- Verify operation in the normal value range.
- Verify operation outside the range of values.
- Verify program execution at boundary conditions.
- Check the calls to all programs.
- Ensure all data structures are handled properly.
- Check file handling.
- Ensure that all loops terminate normally; identify and remove abnormal termination of loops.
- Ensure all errors are trapped.
- Check the values returned from called programs (stubs may be used if a called program is not yet ready).

Integration Testing (Interface Testing)

When two or more tested units are combined, the tests should look for errors in two ways: in the interfaces between the units, and in the functions which can be performed by the integrated unit but could not be assessed during unit testing. The tests are derived from the design document.

System Testing (Functionality Testing)

After integration testing is completed, the entire system is tested as a whole. System testing looks for errors in the end-to-end functionality of the system and also for errors in non-functional quality attributes such as performance, reliability, volume, usability, maintainability, security, etc. System testing can be carried out by independent personnel. The tests are derived from the functional specification.
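As a small illustration of the test harness idea mentioned under unit testing above, the sketch below drives a stand-in credential check with expected-pass and expected-fail cases. The method under test and the test data are assumptions for illustration only.

    public class LoginCheckHarness {
        // Stand-in for the unit under test; in the real project this would be the
        // server's credential check.
        static boolean isValidUser(String user, String pwd) {
            return "admin".equals(user) && "secret".equals(pwd);
        }

        // Compares the actual result with the expected result and reports PASS or FAIL.
        static void check(String name, boolean actual, boolean expected) {
            System.out.println((actual == expected ? "PASS " : "FAIL ") + name);
        }

        public static void main(String[] args) {
            check("valid credentials accepted", isValidUser("admin", "secret"), true);
            check("wrong password rejected",    isValidUser("admin", "wrong"),  false);
            check("unknown user rejected",      isValidUser("guest", "secret"), false);
        }
    }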

Regression Testing

Regression testing is a testing process that is applied after the programs are modified. Regression testing is a major component of maintenance and conversion projects. Modifying a program involves creating new logic to correct an error or implement a change, and incorporating that logic into an existing program. The new logic may involve minor modifications, such as adding, deleting or rewriting a few lines of code, or major modifications, such as adding, deleting or replacing one or more modules or subsystems. Regression testing aims to check the correctness of the new logic, to ensure the continued working of the unmodified portions of the program, and to validate that the program as a whole still functions correctly.

Acceptance Testing (Black Box Testing)

After system testing or regression testing is completed, the system is handed over to the customer or user, and acceptance testing marks the transition from ownership by the developers to ownership by the users. The acceptance test is different in nature in three ways:
- It is the responsibility of the accepting organization rather than the developing organization.
- The purpose of acceptance testing is to give confidence that the system is working, rather than to find errors.
- Acceptance testing is a demonstration rather than a test.
Acceptance testing also includes testing of the user organization's working practices, to ensure the computer system will fit the operational procedures. The acceptance test gives the users confidence that the system is ready for operational use.

General tips
- For every testing activity, planning should be done. A test plan should be prepared, reviewed and approved before testing commences. The test plan should be documented, and testing should be done as per the approved plan.


- Test cases should be prepared, reviewed and approved prior to testing, and must be as per the test plan.
- A good test should have a reasonable probability of catching an error. It should not be redundant and should make a program failure obvious, without being too simple or too complex.
- Tests must be documented with their description, expected results and actual results. If a bug is discovered, its nature, severity and the fix applied should also be documented.
- Testing never ends; it just gets transferred from the developer to the customer. Every time the customer uses the program, a test gets conducted. The testing process during development should therefore terminate when the number of errors found per unit time falls to a very low level.
- For system or regression testing it is preferable to install the system afresh.
- For multi-user systems, aspects such as security measures, integrity and recovery measures must be thoroughly tested.
- The test plan should indicate the test coverage (statement, path, branch and usage profile).


Output Screens


Conclusion & Recommendations


In this project, we have presented a novel congestion-avoidance mechanism for the Internet, called NBP, together with the ECSFQ mechanism. Unlike existing Internet congestion control approaches, which rely solely on end-to-end control, NBP is able to prevent congestion collapse from undelivered packets. ECSFQ complements NBP by providing fair bandwidth allocations in a core-stateless fashion. NBP ensures at the border of the network that each flow's packets do not enter the network faster than they are able to leave it, while ECSFQ ensures, at the core of the network, that flows transmitting at a rate lower than their fair share experience no congestion.


Bibliography

1. The Complete Reference Java 2, Patrick Naughton and Herbert Schildt, Tata McGraw-Hill, 1994.
2. Java Certification, Jaworsick, Tech.Media Publication, 1998.
3. Complete Java 2, Ernest and Robert Hellers, DPB Publications, 1998.
4. Cryptography Demystified, John E. Hershey, Tata McGraw-Hill, 2004.
5. Mastering Java Security, Rich Helton and Johennie Helton, Wiley Dreamtech, 2002.
6. Java Network Programming, O'Reilly Publications, 2004.
7. Software Engineering, Roger Pressman, Tata McGraw-Hill, 2004.
