
Switch Buffers

Names of the interns


Abdullah Ahmed Abdul Razzaq
Zainab Matar Radi
Saad Mutlaq Abdul Latif
Sana Muhammad Hadi
Hussein Thamer Sadiq
Group category and number
Network projects
The first group
Class code: ia7fcoq
Date: 28/10/2020
Contents

Abstract
The Objective
    General Objectives
    Specific Objectives
Planning
Methodology
    Packet Buffers in Cloud Network Switches
    A Simple Rack-Top Switch Model
    The TCP/IP Bandwidth Capture Effect
Challenges
Conclusion
References
Abstract
As a TCP source sends more and more data (its window size can be thought of as a proxy for bandwidth), it increases its bandwidth up to the point where a packet is dropped due to congestion. That packet drop is how congestion is signaled back to the source, which then cuts its window size to half of what it was before. The window keeps increasing again until another packet gets dropped, then comes down again: increase and drop, over and over. The points where packets get dropped are the points where a buffer overflows; once the drop is detected, the source stops sending data for a period of time to bring the effective bandwidth down, and the buffer must be sized so that it does not drain to the point of being empty during that pause.
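This sawtooth pattern can be sketched in a few lines of code. The following toy model is illustrative only: the bottleneck buffer, drop detection, and recovery pause are collapsed into a single made-up threshold of 20 packets.

# Toy sketch of the TCP congestion sawtooth described above.
# The buffer and drop detection are collapsed into one made-up
# threshold; this illustrates the pattern, it is not a TCP model.
cwnd = 1.0           # congestion window, in packets (a proxy for bandwidth)
threshold = 20.0     # window size at which the bottleneck buffer overflows

for rtt in range(60):
    cwnd += 1.0              # additive increase, one packet per round trip
    if cwnd >= threshold:    # buffer overflow -> a packet is dropped
        cwnd /= 2            # multiplicative decrease to half the window
    print(rtt, cwnd)         # printed values trace out the sawtooth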

Background
The idea of switching small blocks of information was first discussed independently by Paul Baran at the RAND Corporation in the US during the early 1960s and by Donald Davies at the National Physical Laboratory (NPL) in the UK in 1965. At the National Physical Laboratory, research into packet switching began with a proposal for a wide-area network in 1965 and a local-area network in 1966. ARPANET funding was secured by Bob Taylor in 1966, and when he hired Larry Roberts, planning began in 1967. In 1969, the NPL network, the ARPANET, and the SITA HLN became operational. By 1973, about twenty different network technologies had been developed; they differed in two basic ways in the division of functions and activities between the hosts at the edge of the network and the core of the network. In the late 1950s, the US Air Force had established a wide area network for the Semi-Automatic Ground Environment (SAGE) radar defense system. They sought a system that could survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first-strike advantage to enemies (see mutual assured destruction). Baran developed the concept of distributed adaptive message block switching in support of the Air Force initiative. The concept was first presented to the Air Force in the summer of 1961 as briefing B-265, later published as RAND report P-2626 in 1962, and eventually in report RM-3420 in 1964. Report P-2626 described a general architecture for a large-scale, distributed, survivable communications network. The work focused on three key ideas: use of a decentralized network with multiple paths between any two points, dividing user messages into message blocks, and delivery of these messages by store-and-forward switching. In 1965, a similar message routing theory was developed independently by Davies. He coined the term packet switching and proposed the creation of a national network in the UK. In 1966, he gave a lecture on the idea, after which he was told about Baran's work by a person from the Ministry of Defence (MoD). Roger Scantlebury, a member of the Davies team, met Lawrence Roberts and proposed packet switching for use in the ARPANET at the 1967 Symposium on Operating Systems Principles. For his original network design, Davies chose some of the same parameters as Baran did, such as a packet size of 1024 bits. In 1966, to meet NPL requirements and demonstrate the feasibility of packet switching, Davies suggested that a network should be installed in the laboratory. He assumed that "all network users must provide themselves with some kind of error management," inventing what became known as the end-to-end principle, in order to cope with
packet permutations (due to dynamically changed route preferences) and datagram losses (inevitable when fast sources send to slow destinations). In 1968, Lawrence Roberts contracted with Leonard Kleinrock at UCLA to conduct theoretical work to model the performance of the ARPANET, which underpinned the growth of the network in the early 1970s. The NPL team also conducted simulation work on packet networks, including datagram networks. Built by Louis Pouzin in the early 1970s, the French CYCLADES network was the first to implement Davies's end-to-end concept and to make the hosts responsible for the reliable transmission of data on a packet-switched network, rather than this being a service of the network itself. His team was also the first to solve the highly complex problem of offering a stable virtual circuit infrastructure to user applications on top of a best-effort network service, an early contribution to what would become TCP. In May 1974, Vint Cerf and Bob Kahn defined the Transmission Control Program, an internetworking protocol for sharing resources using packet switching between nodes. The TCP specification was then published in RFC 675 (Specification of Internet Transmission Control Program), written in December 1974 by Vint Cerf, Yogen Dalal, and Carl Sunshine. This monolithic protocol was later layered as TCP, the Transmission Control Protocol, atop the Internet Protocol, IP. Complementary metal-oxide-semiconductor (CMOS) VLSI (very large-scale integration) technology led to the growth of high-speed broadband packet switching during the 1980s and 1990s. The NPL Data Communications Network entered operation in 1970 after a pilot project in 1969. In the datagram method, the hosts are responsible for ensuring the orderly delivery of packets, working according to the end-to-end principle. In the virtual call system, by contrast, the network guarantees sequenced data delivery to the host; this results in a simpler host interface but complicates the network. This form of network is used by the X.25 protocol suite. Switch buffers enter the picture here as well: switches provide error checks on the received data, and their buffers help prevent premature packet loss when network congestion occurs; if speed or duplex auto-negotiation fails, they can provide additional memory for a specific port.

The Objective
The purpose of this project is to describe switch buffers; the description should be clear, realistic, and achievable within the duration of the project. The proposal should be aligned, consistent, and understandable. With this in mind, the goals of this project are:
General Objectives
The proposal seeks to investigate what a switch buffer is and the history of how switch buffers were created, thus improving knowledge about switch buffers and enhancing the understanding of them.
Specific Objectives
• To analyze how buffer space works in the network.
• To help clarify thoughts about switch buffers.
• To realize the role of buffer space in switches.
• To understand the importance of buffer space.
• To know how packets behave during times of congestion.
• To explore the interrelations between buffer space and packets throughout the network.

Planning
Project proposal

Tasks                    Date        Duration  Name
Topic research           9/10/2020   2         All
Planning, Methodology    11/10/2020  1         Zainab Matar Radi, Saad Mutlaq Abdul Latif
Overview, Abstract       16/10/2020  4         Abdullah Ahmed Abdul Razzaq
Background, Challenges   14/10/2020  3         Sana Muhammad Hadi
Objective, Conclusion    12/10/2020  2         Hussein Thamer Sadiq

Methodology
Today’s cloud data applications, including Hadoop, Big Data, Search or Storage, are distributed
applications running on server clusters with many-to-many communication patterns. The key to
achieving predictable performance for these distributed applications is to provide consistent
network bandwidth and latency to the various traffic flows since in most cases it is the slowest
flow or query completing last that determines the overall performance.
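As a minimal illustration of this point (the flow names and bandwidth numbers below are made up), the completion time of a query that must wait on all of its flows is set by its slowest flow:

# Hypothetical per-flow bandwidths for one distributed query.
flow_mbps = {"flow-a": 200, "flow-b": 180, "flow-c": 15}
bytes_per_flow = 50e6    # assume each flow must move 50 MB

# Seconds to finish each flow: bits to move divided by flow bandwidth.
completion_s = {f: bytes_per_flow * 8 / (bw * 1e6)
                for f, bw in flow_mbps.items()}
print(completion_s)                # flow-a: 2.0 s, flow-b: ~2.2 s, flow-c: ~26.7 s
print(max(completion_s.values()))  # the query finishes with its slowest flow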
We demonstrate that without sufficient packet buffer memory in the switches, network
bandwidth is allocated grossly unfairly among different flows, resulting in unpredictable
completion times for distributed applications. This is the result of packets on certain flows
getting dropped more often than on other flows, the so-called TCP/IP Bandwidth Capture effect.
We present simulation data that show that in heavily loaded networks, query completion times
are dramatically shorter with big buffer switches compared to small buffer switches.
Packet Buffers in Cloud Network Switches
The performance of distributed Big Data applications depends critically on the packet buffer size
of the datacenter switches in the network fabric. In this paper we compare the performance of
both individual switches and network fabrics built with a leaf spine architecture with switches of
various packet buffer sizes under various levels of network load. The results show a dramatic
improvement at the application level under load when using big buffer switches. There are two
types of switch chips commonly used in datacenter switches. The first type uses on-chip shared
SRAM buffers, which today are typically 12 Mbytes of packet buffer memory for 128 10G ports,
or approximately 100 Kbytes per 10GigE port. The second type of switch chip uses external
DRAM buffer, which typically provides 100 Mbytes of packet buffer per 10GigE port, or 1000X
more than the SRAM switch. At Arista, we build datacenter switches with both types of silicon.
The small buffer switch silicon is used in the Arista 7050X, 7250X, 7060X, 7260X and
7300X/7320X switches, while the large buffer switch silicon is used in the Arista 7048T, 7280E/7280R, and 7500E/7500R switches. Both types of switch product lines have been widely deployed. In this paper we will discuss how the difference in buffer size affects application performance in distributed cloud applications.

Figure 1: Difference in buffer size.
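These per-port numbers follow directly from the figures quoted above; a quick back-of-the-envelope check:

# Back-of-the-envelope check of the buffer-per-port figures quoted above.
sram_total_bytes = 12e6          # 12 Mbytes of on-chip shared SRAM
ports = 128                      # 10G ports sharing that SRAM
sram_per_port = sram_total_bytes / ports
print(sram_per_port)             # 93750.0 bytes, i.e. roughly 100 Kbytes/port

dram_per_port = 100e6            # ~100 Mbytes per 10GigE port with external DRAM
print(dram_per_port / sram_per_port)   # ~1067, i.e. roughly 1000x more buffer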

A Simple Rack-Top Switch Model


To understand the interaction of multiple TCP/IP flows and switch buffer size it helps to start
with devices attached to a single switch. For this, we assumed 20 servers connected with 10
Gigabit Ethernet to a rack top switch with one 40 Gigabit Ethernet uplink. Each server has 10
threads, resulting in a total of 200 flows (20 servers x 10 flows/server) sharing the 40G uplink
(5:1 oversubscription). The question is what bandwidth is seen by each flow.
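The fair-share arithmetic for this model is straightforward:

# Fair share of the uplink in the 20-server model described above.
servers, flows_per_server = 20, 10
uplink_mbps = 40_000                     # one 40 Gigabit Ethernet uplink
flows = servers * flows_per_server       # 200 flows in total
print(uplink_mbps / flows)               # 200.0 Mbps fair share per flow
print(servers * 10_000 / uplink_mbps)    # 5.0 -> the 5:1 oversubscription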

Figure 2: Single switch with 40 Gigabit Ethernet uplink connected with 10Gig to 20 servers with
10 flows each
We modeled the network shown in Figure 2 with the NS-2 network simulator, using standard TCP/IP settings and two types of switches: (1) a large buffer switch with 256 Mbytes of shared packet buffer and perfect per-flow queuing behavior, and (2) a small buffer switch with 4 Mbytes of shared packet buffer. The resulting bandwidth per flow is shown in Figure 3 below.

Figure 3: Distribution of bandwidth across different flows in a small buffer versus big buffer
switch.
As can be seen, the ideal large buffer switch delivers 200 Mbps to every flow. With the small buffer switch, however, the bandwidth per flow is distributed along a Gaussian curve. Roughly half the flows receive more bandwidth than the mean, while the other flows receive less bandwidth than what would have been fair, and some flows receive barely any bandwidth at all. The result of this unfair bandwidth allocation between different flows is a long tail: some flows take substantially longer to complete.
The TCP/IP Bandwidth Capture Effect
Why would small buffer switches create such a wide range of bandwidth for different flows?
The answer to this is inherent in the way TCP/IP works and TCP/IP flows interact when packets
are dropped. The TCP/IP protocol relies on ACK packets from the receiver to pace the speed of
transmission of packets by adjusting the sender bandwidth to the available network bandwidth. If
there are insufficient buffers in the network, packets are dropped to signal the sender to reduce
its rate of packet transmission. When many flows pass through a congested switch with limited
buffer resources, which packet is dropped and which flow is impacted is a function of whether a
packet buffer was available at the precise moment when that packet arrived, and is therefore a
function of chance. TCP flows with packets that are dropped will back off and get less share of
the overall network bandwidth, with some really “unlucky” flows getting their packets dropped
all the time and receiving barely any bandwidth at all. In the meantime, the “lucky” flows that by
chance have packets arriving when packet buffer space is available do not drop packets and
instead of slowing down will increase their share of bandwidth. The result is a Poisson-like
distribution of bandwidth per flow that can vary by more than an order of magnitude between the
top 5% and the bottom 5% of flows. We call this behavior the “TCP/IP Bandwidth Capture
Effect”, meaning in a congested network with limited buffer resources certain flows will capture
more bandwidth than other flows. The TCP/IP Bandwidth Capture Effect is conceptually similar

to the Ethernet CSMA/CD bandwidth capture effect in shared Ethernet, where stations that
collide with other stations on the shared LAN keep backing off and as a result receive less
bandwidth than other stations that were not colliding [7][8]. The Ethernet CSMA/CD bandwidth
capture effect was solved with the introduction of full duplex Ethernet and Ethernet switches that
eliminated the CSMA/CD access method. The TCP/IP Bandwidth Capture Effect can be solved
by switches that have sufficient buffering such that they don’t cause TCP retransmission
timeouts. Note that the overall throughput of the network is not impacted by the TCP bandwidth
capture effect: when certain flows time out, other flows pick up the slack. Thus one does not need large packet buffers to saturate a bottleneck link, assuming a sufficient number of flows; however, that says nothing about how the bandwidth of the bottleneck link is allocated to the various flows. Without sufficient buffering, the allocation of bandwidth will be very uneven, which can have a very significant impact on distributed applications that
depend on all flows completing. In summary, the TCP/IP Bandwidth Capture Effect is inherent
in the way the TCP/IP protocol interacts with networks that have insufficient buffer resources
and drop packets under load. In contrast, large buffer switches drop virtually no packets,
enabling the network to provide predictable and fair bandwidth across all flows.
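The mechanism can be made concrete with a minimal Monte Carlo sketch. This is a deliberate simplification with made-up constants, not the NS-2 model behind Figure 3: it keeps only additive increase, multiplicative decrease, and a drop victim chosen purely by chance, and it ignores timeouts and slow start.

# Minimal Monte Carlo sketch of the TCP/IP Bandwidth Capture Effect.
import random

random.seed(1)

N_FLOWS = 200
CAPACITY = 40_000.0                      # shared uplink capacity, in Mbps
rates = [CAPACITY / N_FLOWS] * N_FLOWS   # every flow starts at the fair share

for step in range(5_000):
    # While offered load exceeds the link, the buffer overflows and a packet
    # is lost; which flow it belongs to is purely a matter of chance.
    while sum(rates) > CAPACITY:
        victim = random.randrange(N_FLOWS)
        rates[victim] /= 2               # multiplicative decrease on a drop
    for i in range(N_FLOWS):
        rates[i] += 0.05                 # additive increase each step

rates.sort()
print(f"bottom 5% of flows: <= {rates[N_FLOWS // 20]:.1f} Mbps")
print(f"top 5% of flows:    >= {rates[-(N_FLOWS // 20)]:.1f} Mbps")

Even this toy model leaves the per-flow rates widely spread: a flow that happens to be picked repeatedly keeps getting halved, while luckier flows keep growing.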

Challenges
We use switches to store or drop traffic. Switches are self-learning and simple to use: just connect them to the server and the protocols work without issues. Switches split a network into smaller networks, and traffic is divided the same way, meaning that the extent of each segment can be managed and network congestion is minimized by this subdivision. It is difficult, however, to identify and appreciate the degree to which switches aid performance, because they introduce a small latency delay while forwarding incoming packets, so switches can also hinder network performance. Among the variables determining the network's performance are the number of packets and nodes and the reliability of the network itself. As far as judging the network by its rate of usage is concerned, utilization is a statistic that reveals the health of network performance. Switches are fast, and if they have enough capacity to process and handle Ethernet traffic so that they keep up with it, they operate at wire speed. Data, for example, may be stored on a server cluster across several nodes. If one of the nodes requests data from all the other nodes at the same time, the responses arrive concurrently, meaning that if the data is too much to collect and transmit at once, it is held in the temporary buffers installed in the switches, waiting its turn. This buffering absorbs the pressure coming from packets and nodes, keeping the problem contained in one part of the network instead of spreading it to the other components.
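A rough sketch of that many-to-one case, with hypothetical numbers (16 responders, 256 KB responses, a 10G egress port), shows how much data the switch must be able to buffer while the egress link drains the burst:

# Hypothetical incast burst: N nodes answer one requester at the same time.
responders = 16
response_bytes = 256 * 1024      # 256 KB from each responding node
egress_gbps = 10                 # line rate of the port toward the requester

burst_bytes = responders * response_bytes          # arrives "at once"
drain_ms = burst_bytes * 8 / (egress_gbps * 1e9) * 1e3
print(burst_bytes)               # 4194304 bytes (~4 MB) to queue in buffers
print(round(drain_ms, 2))        # ~3.36 ms to drain the burst at line rate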
The effect on query completion times (QCT) as a function of network load (Figure 4) is similarly dramatic. Under 90% network load, and for 90% query completion, the small buffer network took over 1 second. In comparison, under equal loading conditions, the QCT with big buffer switches was merely 20 ms, a 50:1 performance advantage. In fact, we find that at 90 percent load, packet loss at the spine layer is similarly disruptive to spine traffic as packet loss at the leaf layer. This was not observed in earlier studies because they did not simulate high traffic load at the spine layer and therefore did not stress the spine switch.

Figure 4: Query completion times with small and large buffer switches under 90% network load.

Conclusion
In networking, researchers are trying to eliminate bottlenecks in the Internet by providing buffer memory in the switches. The amount of buffer memory required by a port is dynamically allocated during times of congestion. To prevent packets from being dropped at those times, they are temporarily held (buffered) in the memory, because they cannot be processed or transmitted at the rate at which they arrive. The buffers are thus essential to make sure there are no dropped packets in the network.

References
• M. G. Hluchyj and M. J. Karol, "Queueing in high-performance packet switching," IEEE J. Select. Areas Commun., vol. 6, pp. 1587–1597, 1988.
• D. K. Hunter, M. C. Chia, and I. Andonovic, "Buffering in optical packet switches," 1998.
• A. Bechtolsheim and L. Dale, Arista Networks, "Why Big Data Needs Big Buffer Switches."
• T. Edsall, CTO of the Insieme business unit at Cisco, 2018.
