PRIORITY BASED MULTIPLE BUFFER MANAGEMENT FOR WEIGHTED PACKETS

1. A. Rabiyathul Basariya, M.E. (Second Year), 2. Mrs. P. Parvathi, M.Tech., Sudharsan Engineering College

Abstract

Motivated by providing differentiated services on the Internet, we consider efficient online algorithms for buffer management in network switches. The online buffer management problem formulates the queuing policies of network switches supporting QoS (Quality of Service) guarantees. We study a FIFO buffering model in which unit-length packets arrive in an online manner and each packet is associated with a value (weight) representing its priority. The order in which packets are sent must comply with the order of their arrival times. The buffer size is finite, and at most one packet can be sent in each time step. This paper proposes the concept of multi-buffer management for efficient packet delivery: it maintains multiple buffers and allows the prioritized delivery of packets. Our objective is to maximize weighted throughput, defined as the total value of the packets sent.

1. Introduction

Network switches inside routers of the IP-based network infrastructure are critical in implementing the Internet's functionalities. Packets arrive at the network switches in an online manner. The buffer management policy in a network switch is in charge of two tasks: packet queuing and packet delivery. When new packets arrive, the buffer management decides which ones to accept and queue for potential delivery. Since the buffer has finitely many slots, it may not be able to accommodate all arriving packets, so some packets already in the buffer might have to be dropped. Without loss of generality, time is assumed to be discrete. In each time step, the buffer management selects a pending packet in the buffer to send. The buffer management can thus be regarded as an online scheduling algorithm processing prioritized packets. Fig. 1 illustrates the buffer structure inside a network switch.

Figure 1. The buffer management inside a network switch.

The buffer size is considered finite, and packets are dropped if overflow occurs. This paper proposes the concept of multiple buffers for managing a large number of packets. Using multiple buffers, the switch is able to handle a large number of packets in the network, and each packet is delivered based on its priority.
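The multiple-buffer scheme with priority-based delivery outlined above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and method names are invented here, and a per-buffer max-heap keyed on packet weight stands in for whatever internal ordering the actual scheme uses.

```python
import heapq

class MultiBufferManager:
    """Sketch of priority-based multiple buffer management.

    One bounded buffer per client request; within a buffer, packets
    are delivered highest weight first. All names are illustrative.
    """

    def __init__(self, buffer_slots):
        self.buffer_slots = buffer_slots   # fixed number of slots per buffer
        self.buffers = {}                  # client id -> max-heap of (-weight, packet)

    def enqueue(self, client, weight, packet):
        """Admit a packet, evicting the lowest-weight packet on overflow."""
        buf = self.buffers.setdefault(client, [])
        if len(buf) < self.buffer_slots:
            heapq.heappush(buf, (-weight, packet))   # max-heap via negated weight
            return True
        # Buffer overflow: drop the arriving packet unless it outweighs
        # the lowest-priority packet already queued.
        min_entry = max(buf)                         # largest negated weight = smallest weight
        if -min_entry[0] < weight:
            buf.remove(min_entry)
            heapq.heapify(buf)
            heapq.heappush(buf, (-weight, packet))
            return True
        return False                                 # arriving packet dropped

    def deliver(self, client):
        """Send the highest-priority pending packet for a client, if any."""
        buf = self.buffers.get(client)
        if not buf:
            return None
        neg_weight, packet = heapq.heappop(buf)
        return (-neg_weight, packet)
```

With two slots per buffer, admitting packets of weights 1 and 5 and then a weight-3 packet evicts the weight-1 packet, and delivery proceeds in the order 5, then 3.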

2. Competitive FIFO Buffer Management for Weighted Packets

Fei Li [2] considers efficient online algorithms for buffer management in network switches, studying a FIFO buffering model in which unit-length packets arrive in an online manner and each packet is associated with a value (weight) representing its priority. The order in which packets are sent must comply with the order of their arrival times. The buffer size is finite, and at most one packet can be sent in each time step. The paper designs competitive online FIFO buffering algorithms, where competitive ratios measure an online algorithm's performance against the worst-case scenario. It first provides an online algorithm ON with a constant competitive ratio of 2, then studies the experimental performance of ON on real Internet packet traces and compares it with all other known competitive online FIFO buffering algorithms.

2.1. A Competitive Online Algorithm ON

Fei Li [2] describes an algorithm called ROS (Relaxed Online Optimal Solution). ROS works in a relaxed model in which the FIFO order constraint on sending packets is not enforced. A useful property of ROS is that its packet-sending sequence can be permuted to obtain an optimal offline algorithm OPT for the FIFO buffering model; ROS is therefore used to bound ON's competitive ratio.

2.2. An Optimal Offline Algorithm OPT

In each time step, ROS greedily accepts arriving packets and drops the minimum-value one whenever overflow happens; it then sends a pending packet with the maximum value. Since the sent packets need not obey FIFO order in the delivery sequence, at the end of each time step all unsent packets (if any) are kept in the buffer.

3. Competitive Queuing Policies for QoS Switches

Andelman, Mansour, and Zhu [3] study packet scheduling in a network providing differentiated services, where each packet is assigned a value, and consider various queueing models for supporting QoS (Quality of Service). In the non-preemptive model, packets accepted to the queue are eventually transmitted and cannot be dropped. The FIFO preemptive model allows packets accepted to the queue to be preempted (dropped) prior to their departure, while ensuring that transmitted packets are sent in order of arrival. In the bounded-delay model, each packet must be transmitted before a certain deadline, otherwise it is lost (while the transmission ordering may be arbitrary). In all models the goal of the buffer policy is to maximize the total value of the accepted packets. Let α be the ratio between the maximal and minimal packet value. For the non-preemptive model they derive an O(log α) competitive ratio, exhibiting both a buffer policy and a general lower bound.

4. Competitive Queue Policies for Differentiated Services

In this setting, packets are tagged as either high- or low-priority, and outgoing links in the network are serviced by a single FIFO queue. The model [4] assigns a benefit of α ≥ 1 to each high-priority packet and a benefit of 1 to each low-priority packet. A queue policy controls which of the arriving packets are dropped and which enter the queue; once a packet enters the queue, it is eventually sent. The aim of a queue policy is to maximize the sum of the benefits of all the packets it delivers. W. Aiello, Y. Mansour, S. Rajagopolan, and A. Rosen analyze and compare different queue policies for this problem using the competitive-analysis approach, where the benefit of the online policy is compared to the benefit of an optimal offline policy. They derive both upper and lower bounds for the policies, and in most cases the bounds are tight.

5. Random Early Detection Gateways for Congestion Avoidance

Floyd and Jacobson [5] propose Random Early Detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size, and can notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their windows at the same time.
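The RED policy described above — a drop/mark probability that is a function of the average queue size — can be sketched as follows. The linear ramp between two thresholds follows the common RED formulation; the parameter names and the EWMA weight value are conventional choices, not taken from this paper.

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """RED marking probability as a function of the average queue size.

    Sketch of the policy described in Section 5; parameter names follow
    the common RED formulation (min/max thresholds, max probability).
    """
    if avg_queue < min_th:
        return 0.0                      # no incipient congestion: accept every packet
    if avg_queue >= max_th:
        return 1.0                      # sustained congestion: drop/mark every arrival
    # Probability grows linearly between the two thresholds.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def update_avg_queue(avg, sample, weight=0.002):
    """Exponentially weighted moving average of the instantaneous queue size,
    which lets the gateway absorb short bursts without reacting to them."""
    return (1.0 - weight) * avg + weight * sample
```

Because the gateway reacts to the averaged rather than the instantaneous queue size, a short burst barely moves the drop probability, which is how RED avoids a bias against bursty traffic.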

Simulations of a TCP/IP network are used to illustrate the performance of RED gateways. Random Early Detection gateways are an effective mechanism for congestion avoidance at the gateway, in cooperation with network transport protocols. If RED gateways drop packets when the average queue size exceeds the maximum threshold, rather than simply setting a bit in packet headers, then RED gateways control the calculated average queue size. This provides an upper bound on the average delay at the gateway. The probability that the RED gateway chooses a particular connection to notify during congestion is roughly proportional to that connection's share of the bandwidth at the gateway, which avoids a bias against bursty traffic at the gateway.

6. Proposed Work

In this paper we propose the concept of implementing multiple buffers for managing a large number of packets; a single buffer cannot manage that many packets.

6.1 Packet Analysis

Packet analysis, often referred to as packet sniffing or protocol analysis, describes the process of capturing and interpreting live data as it flows across a network in order to better understand what is happening on that network. Packet analysis can help us understand network characteristics, learn who is on a network, determine who or what is utilizing available bandwidth, identify peak network usage times, identify possible attacks or malicious activity, and find unsecured and bloated applications.

6.1.1 Viewing Endpoints

An endpoint is the place where communication ends on a particular protocol. For instance, there are two

endpoints in TCP/IP communication: the IP addresses of the systems sending and receiving data, e.g., 192.168.1.25 and 192.168.1.30. An example on Layer 2 would be the communication taking place between two physical NICs and their MAC addresses. If the NICs sending and receiving data have the addresses 00:ff:ac:ce:0b:de and 00:ff:ac:e0:dc:0f, those addresses are the endpoints of the communication.
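Viewing endpoints amounts to tallying the source and destination addresses seen in captured packets, at either Layer 3 (IP) or Layer 2 (MAC). A minimal sketch, assuming each captured packet is a record shaped like a row of Table 6 (the dictionary keys here are invented for illustration):

```python
from collections import Counter

def endpoints(packets, layer="ip"):
    """Tally conversation endpoints from captured packet records.

    `packets` is a list of dicts with illustrative keys src_ip/dst_ip
    and src_mac/dst_mac; returns a Counter of address -> packet count.
    """
    counts = Counter()
    src_key, dst_key = ("src_ip", "dst_ip") if layer == "ip" else ("src_mac", "dst_mac")
    for pkt in packets:
        counts[pkt[src_key]] += 1   # sending endpoint
        counts[pkt[dst_key]] += 1   # receiving endpoint
    return counts
```

Running this over a capture immediately shows which hosts (or NICs) are party to the most traffic, which is the information an endpoint view in an analyzer presents.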

Fig 6.1 Viewing end points

No | Source IP    | Destination IP | Source MAC        | Destination MAC   | Captured Time                | Captured Length
0  | 192.168.0.11 | 192.168.0.255  | 00:26:18:51:3b:2f | ff:ff:ff:ff:ff:ff | Thu Nov 19 09:47:02 IST 2009 | 209
1  | 192.168.0.11 | 203.145.184.32 | 00:26:18:51:3b:2f | 00:08:5c:8c:e7:06 | Thu Nov 19 09:47:02 IST 2009 | —

Table 6. Packet Analysis

6.2 Multiple Buffer Management

Multiple buffers are maintained between clients in order to handle the large number of incoming packets. For each request a separate buffer is maintained. Each buffer has a fixed number of slots. All arriving packets are fed into the buffers and delivered to the client requests.

Fig 6.2 Multiple Buffer Management

6.3 Packet Priority

When a network segment becomes congested, the hub-and-switch workload results in the delay or dropping of packets. On a network using packet-priority values, a packet with a higher priority receives preferential treatment and is serviced before a packet with a lower priority. Priority is given to application packets based on the type of data packet, so each data packet has a different delivery priority. Priority is set based on the delay value of the arriving packets: each data packet has a delay value, and while packets are in the buffer their delay values are analyzed; based on this delay constraint, packets are delivered to the clients. This allows the prioritized delivery of packets.

Conclusion

We conclude that for practical use of these priority algorithms, thorough experiments need to be performed

and analyzed to pick the best algorithm in real applications. The performance of the multi-buffering model is measured experimentally using Internet traces.

References

[1] The Internet Traffic Archive. http://ita.ee.lbl.gov.

[2] F. Li. Competitive FIFO buffer management for weighted packets. In Proceedings of the 7th Annual Communication Networks and Services Research Conference (CNSR), 2009.
[3] N. Andelman, Y. Mansour, and A. Zhu. Competitive queueing policies for QoS switches. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 761–770, 2003.
[4] W. Aiello, Y. Mansour, S. Rajagopolan, and A. Rosen. Competitive queue policies for differentiated services. In Proceedings of the 19th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), pages 431–440, 2000.
[5] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, pages 397–413, 1993.
