As a variable-sized packet arrives, it is written as a linked list into shared memory segments. After a forwarding decision is made, the selected egress port reads the packet out of shared memory and sends it on its way. If a large packet arrives, the head of the packet can start egress transmission before the tail of the packet has been written into memory. This is known as cut-through operation, and it provides low-latency transmission independent of packet size. As we mentioned in the last section, IOQ architectures can also operate in cut-through mode, but by storing the packet only once, the output-queued shared memory architecture has lower intrinsic latency.
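As a rough sketch of the storage scheme described above, the following models a shared pool of fixed-size cells, where each variable-sized packet is written as a linked list of cells and freed as it is read out. All names, the cell size, and the pool structure are illustrative assumptions, not details from the text.

```python
CELL_SIZE = 64  # bytes per shared-memory cell (assumed for illustration)

class CellPool:
    """A pool of fixed-size cells; each cell holds data and a next-cell index."""

    def __init__(self, num_cells):
        self.data = [None] * num_cells   # cell payloads
        self.next = [None] * num_cells   # linked-list pointers between cells
        self.free = list(range(num_cells))

    def write_packet(self, packet):
        """Write a variable-sized packet as a linked list; return the head cell."""
        head = prev = None
        for off in range(0, len(packet), CELL_SIZE):
            cell = self.free.pop()
            self.data[cell] = packet[off:off + CELL_SIZE]
            if prev is None:
                head = cell
            else:
                self.next[prev] = cell  # link the previous cell to this one
            prev = cell
        return head

    def read_packet(self, head):
        """Follow the linked list, freeing each cell as it is read out."""
        out = bytearray()
        cell = head
        while cell is not None:
            out += self.data[cell]
            nxt = self.next[cell]
            self.next[cell] = None
            self.free.append(cell)  # cell returns to the free pool
            cell = nxt
        return bytes(out)
```

In a real cut-through device the egress side can begin following this list while the ingress side is still appending cells to its tail; the sketch above only shows the store-then-read case.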
The egress side of the chip provides a great deal of functional flexibility in this type of architecture. Each egress port can independently select packets from memory for transmission, acting as its own egress scheduler. Packets can be divided into traffic classes, and a list of packets available for transmission can be provided to each egress scheduler. The scheduler can then use mechanisms such as strict priority or deficit weighted round robin to select packets for transmission.
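To make the deficit weighted round robin mechanism concrete, here is a minimal sketch of a per-port scheduler over per-class queues. The class and method names, and the idea of tracking only packet lengths, are illustrative assumptions rather than details from the text.

```python
from collections import deque

class DwrrScheduler:
    """Deficit weighted round robin over per-class packet queues (sketch)."""

    def __init__(self, quanta):
        # quanta[c] = bytes of credit class c earns per round (its weight)
        self.quanta = quanta
        self.queues = {c: deque() for c in quanta}
        self.deficit = {c: 0 for c in quanta}

    def enqueue(self, cls, pkt_len):
        """A packet of pkt_len bytes becomes available for class cls."""
        self.queues[cls].append(pkt_len)

    def serve_round(self):
        """One DWRR round: each backlogged class sends up to its credit."""
        sent = []
        for c, q in self.queues.items():
            if not q:
                self.deficit[c] = 0  # idle classes do not hoard credit
                continue
            self.deficit[c] += self.quanta[c]
            while q and q[0] <= self.deficit[c]:
                length = q.popleft()
                self.deficit[c] -= length
                sent.append((c, length))
        return sent
```

Over many rounds, each backlogged class receives bandwidth in proportion to its quantum; a strict-priority scheduler would instead always drain the highest-priority non-empty queue first.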
Traffic shaping can also be applied. Multicast packets can be stored once in shared memory and simply read from memory multiple times, once for each requested egress port, allowing full-bandwidth multicast. Because of this simpler design, less overall on-chip memory is required than in the IOQ architecture.
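One common way to realize store-once multicast is a reference count per stored packet: the buffer is freed only after the last requested egress port has read its copy. The following is a minimal sketch under that assumption; the names and the handle scheme are illustrative.

```python
class MulticastBuffer:
    """Store a packet once; free it after every egress port has read it."""

    def __init__(self):
        self.packets = {}     # handle -> [data, remaining_reads]
        self.next_handle = 0

    def store(self, data, fanout):
        """Store one copy of the packet; fanout = number of egress ports."""
        h = self.next_handle
        self.next_handle += 1
        self.packets[h] = [data, fanout]
        return h

    def read(self, handle):
        """One egress port reads its copy; free memory on the last read."""
        data, remaining = self.packets[handle]
        if remaining == 1:
            del self.packets[handle]       # last reader frees the packet
        else:
            self.packets[handle][1] = remaining - 1
        return data
```

Because every egress port reads the same single copy, no per-port replication of the packet data is needed, which is what permits full-bandwidth multicast.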
Link-level flow control can also be easily implemented in this type of
design. Figure 3.18 shows a functional view of link-level flow control in
an output-queued shared memory switch.
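As a rough illustration of how such link-level flow control can be driven by buffer occupancy, the sketch below tracks per-port cell usage against XOFF/XON thresholds, in the spirit of Ethernet PAUSE-style flow control. The thresholds and names are assumptions for illustration, not values from the figure.

```python
class IngressPortBuffer:
    """Per-port buffer accounting with XOFF/XON thresholds (sketch)."""

    XOFF = 80  # cells in use: at or above this, ask the link partner to pause
    XON = 40   # cells in use: at or below this, allow it to resume

    def __init__(self):
        self.used = 0
        self.paused = False

    def on_cell_written(self):
        """A cell of this port's traffic was written into shared memory."""
        self.used += 1
        if not self.paused and self.used >= self.XOFF:
            self.paused = True   # would trigger a pause frame to the partner

    def on_cell_freed(self):
        """A cell was read out for transmission and returned to the pool."""
        self.used -= 1
        if self.paused and self.used <= self.XON:
            self.paused = False  # would trigger a resume frame
```

The gap between the XOFF and XON thresholds provides hysteresis, so the link does not oscillate between paused and unpaused on every cell.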
