
LAN Switching Technologies and Virtual LAN

Course

INFS612: Data Communication and Distributed Processing

Dr. Schneider

Fall Semester, 2001


Table of Contents

1. Introduction to LAN
1.1 Background
1.2 History
1.3 LAN Switches
1.4 Bandwidth Problem in LANs
1.5 Possible Solutions
1.6 How can LAN Switching help?
1.7 Bridges and Routers

2. Switch Features
2.1 Full Duplex
2.2 Flow Control
2.3 Static and Dynamic Switching
2.4 Cut-Through Versus Store-and-Forward Switching
2.5 Address Resolution
2.6 Multiple LAN Technologies
2.7 Network Management
2.8 Multilayer Switching
2.9 Switching in Ethernet environment

3. Switch Architecture
3.1 Frame Versus Cell Switching
3.2 Blocking and non-blocking switch architecture

4. Switched LAN Topology


4.1 LAN Switch and OSI model
4.2 ATM and Ethernet Switching

5. VLANs
5.1 Introduction to VLAN’s
5.2 What are VLAN's?
5.3 Why use VLAN's?

6. How VLAN's work


6.1 Implementation of Virtual LAN
6.2 Types of Connections
6.3 Frame Processing

7. Summary
8. References

1. Introduction
Local Area Network (LAN) technology has made a significant impact on almost every
industry. The operations of these industries depend on computers and networking. Data is
now stored on computers rather than on paper, and the dependence on networking is so high that
banks, airlines, insurance companies and many government organizations would stop
functioning if there were a network failure. Since the reliance on networks is so high and
network traffic keeps increasing, we have to address some of the bandwidth problems this
has caused and find ways to tackle them.

1.1 Background

A LAN switch is a device that provides much higher port density at a lower cost than
traditional bridges. For this reason, LAN switches can accommodate network designs
featuring fewer users per segment, thereby increasing the average available bandwidth
per user. This section provides a summary of general LAN switch operation and maps
LAN switching to the OSI reference model.
The trend toward fewer users per segment is known as microsegmentation.
Microsegmentation allows the creation of private or dedicated segments, that is, one user per
segment. Each user receives instant access to the full bandwidth and does not have to
contend for available bandwidth with other users. As a result, collisions (a normal
phenomenon in shared-medium networks employing hubs) do not occur. A LAN switch
forwards frames based on either the frame's Layer 2 address (Layer 2 LAN switch), or in
some cases, the frame's Layer 3 address (multi-layer LAN switch). A LAN switch is also
called a frame switch because it forwards Layer 2 frames, whereas an ATM switch
forwards cells. Although Ethernet LAN switches are most common, Token Ring and
FDDI LAN switches are becoming more prevalent as network utilization increases.
Figure 1 illustrates a LAN switch providing dedicated bandwidth to devices, and it
illustrates the relationship of Layer 2 LAN switching to the OSI data link layer:

Figure 1: A LAN switch is a data link layer device.

1.2 History
The earliest LAN switches were developed in 1990. They were Layer 2 devices dedicated
to solving bandwidth issues. Recent LAN switches are evolving to multi-layer devices
capable of handling protocol issues involved in high-bandwidth applications that
historically have been solved by routers. Today, LAN switches are being used to replace
hubs in the wiring closet because user applications are demanding greater bandwidth.

1.3 LAN Switches

A LAN switch is a device that typically consists of many ports that connect LAN
segments (Ethernet and Token Ring) and a high-speed port (such as 100-Mbps Ethernet,
Fiber Distributed Data Interface [FDDI], or 155-Mbps ATM). The high-speed port, in
turn, connects the LAN switch to other devices in the network.
A LAN switch has dedicated bandwidth per port, and each port represents a different
segment. For best performance, network designers often assign just one host to a port,
giving that host dedicated bandwidth of 10 Mbps, as shown in Figure 2, or 16 Mbps for
Token Ring networks.

Figure 2: Sample LAN switch configuration.

When a LAN switch first starts up and as the devices that are connected to it request
services from other devices, the switch builds a table that associates the MAC address of
each local device with the port number through which that device is reachable. That way,
when Host A on Port 1 needs to transmit to Host B on Port 2, the LAN switch forwards
frames from Port 1 to Port 2, thus sparing other hosts on Port 3 from responding to
frames destined for Host B. If Host C needs to send data to Host D at the same time that
Host A sends data to Host B, it can do so because the LAN switch can forward frames
from Port 3 to Port 4 at the same time it forwards frames from Port 1 to Port 2.
Whenever a device connected to the LAN switch sends a packet to an address that is not
in the LAN switch's table (for example, to a device that is beyond the LAN switch), or
whenever the device sends a broadcast or multicast packet, the LAN switch sends the
packet out all ports (except for the port from which the packet originated)---a technique
known as flooding.
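
The learning-and-flooding behaviour just described can be sketched in a few lines of code. The following Python fragment is only an illustration under assumed names (mac_table, ALL_PORTS, handle_frame); it is not taken from any particular switch implementation:

    # Minimal sketch of transparent learning and forwarding in a LAN switch.
    ALL_PORTS = [1, 2, 3, 4]      # ports on the switch (assumed example)
    mac_table = {}                # learned MAC address -> port number

    def handle_frame(in_port, src_mac, dst_mac, broadcast_or_multicast=False):
        mac_table[src_mac] = in_port                  # learn the source address
        if broadcast_or_multicast or dst_mac not in mac_table:
            # Unknown or broadcast destination: flood out every other port.
            return [p for p in ALL_PORTS if p != in_port]
        out_port = mac_table[dst_mac]
        # Destination on the same port: filter (do not forward at all).
        return [] if out_port == in_port else [out_port]

For instance, the first frame from Host A to Host B is flooded; once Host B has transmitted, later frames are forwarded only to Host B's port.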

Because they work like traditional "transparent" bridges, LAN switches dissolve
previously well-defined workgroup or department boundaries. A network built and
designed only with LAN switches appears as a flat network topology consisting of a
single broadcast domain. Consequently, these networks are liable to suffer the problems
inherent in flat (or bridged) networks---that is, they do not scale well. Note, however, that
LAN switches that support VLANs are more scalable than traditional bridges.

1.4 Bandwidth Problem in LANs

Local Area Networks in many organizations have to deal with increased bandwidth
demands. More and more users are being added to the existing LANs. If this was the only
problem, it could be solved by upgrading the backbone that connects various LANs.
Bridges and routers can be used to keep the number of users per LAN at an optimal
number. However, with the increase in workstation speed, the bandwidth requirement of
each machine has grown more than five times in the last few years. Coupled with
bandwidth-hungry multimedia applications and unmanaged, bursty traffic, this
problem is further aggravated.

With the increasing use of client-server architecture in which most of the software is
stored in the server, the traffic from workstations to server has increased. Further, the use
of a large number of GUI applications means more pictures and graphics files need to be
transferred to the workstations. This is another cause of increased traffic per workstation.
LAN switching is a fast-growing market, with virtually every network vendor marketing
its own products. Besides LAN switches, switching routers and switching hubs are also sold.
Different vendors add new features to their products to keep them competitive. At
present, one can get switches that link both similar and dissimilar LAN technologies.

1.5 Possible Solutions

The conventional approach would be to install a faster network technology, for example
replacing Ethernet with Asynchronous Transfer Mode (ATM), Fiber Distributed Data
Interface (FDDI) or Fast Ethernet. Although these are capable technologies, such a move is
expensive: it requires new equipment and staff training, and the network downtime also takes its
toll. Another approach would be to segment the network into smaller parts using bridges
and routers. This too is expensive, although not as much as a complete migration to a new
networking technology, and it would only work if the traffic between segments is low.
Otherwise, the bridges and routers would become network bottlenecks and frame loss might
occur.

LAN switching is considered to be a solution to this problem and has been adopted by
many organizations. Besides making more bandwidth available, it can also form an
intermediate step in moving to faster networks such as ATM.

1.6 How can LAN Switching help?

The reason it works is simple. Ethernet, Token Ring and FDDI all use shared media, and
conventional Ethernet segments are interconnected by bridges or routers. A shared 100 Mbps
Ethernet has to divide its bandwidth over all attached users because access to the medium is
shared. With a switched network, however, each port forms its own segment, so bandwidth is
shared only among the users of the workgroup connected to that port. Since there is less media
sharing, more bandwidth is available per user. Switches can also maintain multiple simultaneous
connections between ports.

1.7 Bridges and Routers

Conventional Ethernet uses bridges and hubs that work in half-duplex mode. Using the
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, the sender
senses the channel before transmitting. Collisions can occur if stations start transmitting
at the same time, which causes delay and increases transmission time. Transmission time
also increases because each station has to wait until transmission by others is complete.

A bridge divides the network into two collision domains, reducing congestion because only
frames that need to be forwarded are sent across. Routers divide the network into different
broadcast domains and help similarly. Problems remain, however, ranging from the time needed
to gain access to the media to the latency introduced by bridges and routers. Also, a longer bus
implies more propagation delay. This architecture is thus not scalable. In contrast, switches have
much lower latency and a scalable architecture. More features are listed in the next section
[Christensen, 1995].

2. Switch Features

Switches normally have higher port counts than bridges and divide the network into several
dedicated, parallel channels. These multiple independent data paths increase
the throughput capacity of a switch. There is no contention to gain access, and the LAN
switch architecture is scalable. Another advantage of switches is that most of them are
self-configuring, minimizing network downtime, although means for manual configuration
are also available.

If a segment is attached to a port of a switch, CSMA/CD is used for media access within
that segment. However, if the port has only one station attached, there is no need for
any media access protocol. The basic operation of a switch is like that of a multiport bridge:
the source and destination Medium Access Control (MAC) addresses of an incoming frame are
looked up, and if the frame is to be forwarded, it is sent to the destination port. Although
this is mostly what all switches do, a variety of features distinguish them, such as
the following.

2.1 Full Duplex

Full duplex mode of Ethernet allows simultaneous flow of traffic between two stations
without collisions. Ethernet in full duplex mode therefore does not require collision
detection when only one station is attached to each port. There is no contention
between stations to transmit over the medium, and a station can transmit whenever a frame
is queued in its adapter, while receiving at the same time. This has the
potential to double the performance of the server. The effective aggregate bandwidth is the
number of switched ports times the media bit rate divided by two for half duplex, and the
number of switched ports times the media bit rate for full duplex. One catch
is that, while a client can send and receive frames at the same time, at peak
loads the server might be overburdened. This may lead to frame loss and eventual loss of
connection to the server. To avoid such a situation, flow control at the client level may be
used.
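
As a rough numerical illustration of the aggregate-bandwidth expressions above (a back-of-the-envelope sketch in Python; the 12-port figure is an assumed example, not a product specification):

    ports = 12                    # assumed example switch
    bit_rate_mbps = 10            # 10 Mbps Ethernet per port

    half_duplex_aggregate = ports * bit_rate_mbps / 2   # 60 Mbps
    full_duplex_aggregate = ports * bit_rate_mbps       # 120 Mbps
    print(half_duplex_aggregate, full_duplex_aggregate)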

Another big advantage of full duplex is that, since collisions cannot occur, there is no
MAC-layer limitation on distance (e.g., 2500 m for Ethernet). One can have a 100 km Ethernet
link using single-mode fiber; the limitation is now at the physical layer. Media-speed rates can
thus be sustained, depending on the station and the switch to which it is attached. The user is
unaware of full duplex operation, and no new software applications are needed for this
enhancement.

2.2 Flow Control

Flow control is necessary when the destination port is receiving more traffic than it can
handle. Since the buffers are only meant for absorbing peak traffic, frames may be dropped
under excessive load. Dropping a frame is costly, as the recovery delay is of the order of
seconds for each dropped frame.

Traditional networks do not have a layer 2 flow control mechanism, and rely mainly on
higher layers for this. Switches come with various flow control strategies depending on
the vendors. Some switches, upon finding that the destination port is overloaded, will send
a jam message to the sender. Since the decoding of MAC addresses is fast and a switch can
respond with a jam message in very little time, collisions and packet loss can be avoided.
To the sender, the jam packet looks like a virtual collision, so it will wait a random time before
retransmitting. This strategy works because only those frames that go to the overloaded
destination port are jammed and not the others.

2.3 Static and Dynamic Switching

• Static Switching

The functionality is similar to that of a hub as the traffic goes to all other ports in
the group. Since individual hubs are cheaper, they are normally preferred.

• Dynamic Switching

These switches learn on which port a station is attached by examining the frames
that the station transmits. Once learned, frames are forwarded only to the
destination station, saving the bandwidth of the other stations. Stations are relearned
continually, so a station that moves from one port to another is automatically
reconfigured.
2.4 Cut-Through Versus Store-and-Forward Switching

Cut-through switching

Marked by low latency, these switches begin transmission of the frame to the destination
port even before the whole frame is received. Thus frame latency is about 1/20th of that
in store-and-forward switches (explained later). Cut-through switches with runt (collision
fragments) detection will store the frame in the buffer and begin transmission as soon as
the possibility of runt is eliminated and it can grab the outgoing channel. Filtering of
runts is important as they seriously waste the bandwidth of the network. The delay in
these switches is about 60 microseconds. Compare this with store-and-forward switches,
where every frame is buffered (delay: 0.8 microseconds per byte); the delay for a 1500-byte
frame is thus 1200 microseconds. No Cyclic Redundancy Check (CRC) verification is
done in these switches. Figure 3 shows a frame being forwarded from port 1 to port 4
without being stored in buffer.
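
The latency figures quoted above can be reproduced with a short calculation (a Python sketch assuming 10 Mbps Ethernet, where buffering a byte takes 0.8 microseconds):

    US_PER_BYTE = 0.8                        # store-and-forward buffering delay at 10 Mbps

    def store_and_forward_delay_us(frame_bytes):
        return frame_bytes * US_PER_BYTE

    print(store_and_forward_delay_us(1500))  # 1200.0 microseconds for a 1500-byte frame
    cut_through_delay_us = 60                # roughly constant, independent of frame size
    print(store_and_forward_delay_us(1500) / cut_through_delay_us)  # about 20x slower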

Figure 3: Cut-through switching.

Store-and-forward switching

These switches receive the whole frame before forwarding it. While the frame is
being received, processing is done. Upon complete arrival of the frame, CRC is verified
and the frame is directly forwarded to the output port. Even though there are some
disadvantages of store-and-forward switches, in certain cases they are essential: for
example, when a slow port transmits to a fast port, the frame must be
buffered and transmitted only when it has been completely received. Another advantage would
be in high traffic conditions, when the frames have to be buffered since the output port
may be busy. As traffic increases the chances of a certain output port being busy
obviously increase, so even cut-through switches may need to buffer the frames. Thus, in
some cases store-and-forward switching has its obvious advantage.

2.5 Address Resolution

To allow forwarding and filtering of packets at wire speed, LAN switches should be able
to decode MAC addresses very quickly. Since Central Processing Unit (CPU) based
lookups are expensive, hardware solutions may be used. Switches maintain address tables
just like transparent bridges. They learn the addresses of their neighbors, and when a
frame is to be forwarded, they first look up the address table and flood the frame only if no
entry corresponding to that destination is found. Stations that have not transmitted
recently are aged out. This way a small address table can be maintained and the switch
can relearn if a station starts transmitting again.
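
A minimal sketch of this ageing behaviour follows; the data structure, the helper names and the 300-second timeout are assumptions for illustration (300 s is a commonly used bridge default), not a description of any particular switch:

    import time

    AGEING_TIME_S = 300.0          # assumed timeout; typically configurable per switch
    mac_table = {}                 # MAC address -> (port, time last seen)

    def learn(src_mac, port):
        mac_table[src_mac] = (port, time.time())

    def age_out():
        # Drop stations that have not transmitted recently; the switch simply
        # relearns them (after a flood) if they start transmitting again.
        now = time.time()
        for mac in list(mac_table):
            port, last_seen = mac_table[mac]
            if now - last_seen > AGEING_TIME_S:
                del mac_table[mac]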

2.6 Multiple LAN Technologies

Switches can support ports of a single LAN technology or of several. But
according to [Buyer's Guide: Network World, June'96], no vendor supported all six LAN
technologies, namely Ethernet, 100Base-T, FDDI, Token Ring, ATM and 100VG-
AnyLAN. The reasons for this are that:

• the vendors don't have the resources for this, and

• most mixed LANs don't use more than two or three different LAN technologies.

2.7 Network Management

This is an important feature, as it allows the network administrator to detect a problem
even before it occurs. Most switch vendors provide some kind of network management.
Monitoring is mainly through the Simple Network Management Protocol (SNMP) or Remote
Monitoring (RMON), while diagnostics are mostly proprietary.
RMON is used for real-time performance and error statistics. When implemented in three
stages, it consists of:

1. Statistics, History, Event and Alarms as the first four groups.

2. Hosts, HostTopN, Matrix and Filter as the next four groups:
Hosts group: gives statistics stored for all MAC addresses.
HostTopN: ranks stations based on traffic and error statistics.
Matrix: gives information about who communicates with whom.
Filter: provides the user with the ability to select specific packets.

3. Finally, Capture: it has a significant influence on cost, being the most memory-intensive
of the three.

Since many vendors offer RMON support or at least promise to do so in the near future, it
will become a standard feature except in some cheaper switches.
Mirror ports can be used to monitor traffic through other ports. Most of the vendors
provide support for this, and others plan to do so in the near future. HP, IBM, Ornet and UB
designs have designated specific ports for mirroring. When such a port is not being used for
mirroring, it can be used for other traffic.

2.8 Multilayer Switching

Multilayer switching has been described as the next architectural generation of LAN
switching [Communications Week, 1997]. Multilayer switches are important for
networks using ATM and gigabit Ethernet. Although the definition of a multilayer
switch is not standardized, it can be described as a switch that, besides MAC-layer
(Layer 2) forwarding, has some routing-layer functionality such as multicast and broadcast
containment, some VLAN services, and packet filtering and firewalling between VLANs. Such
switches may also support Transmission Control Protocol/Internet Protocol (TCP/IP) and
Internetwork Packet Exchange (IPX) routing. Many of these switches provide support for both
frame and cell switching.

One of the most important features is gigabit-level scaling. This makes it easier and cheaper
to upgrade the network in the future when demand on the network increases. Using
policy-based VLANs, support for various classes of service and Quality of Service can be
provided, offering features that were once available only in ATM networks.

LAN switching products have gone down in cost in the last couple of years, and the
price/performance ratio is favorable. The advantage for a network manager is that it
provides better service at a lower cost. Further, at times of network upgrade, fewer staff
need to be retrained, as the network is scalable. According to one study, network managers
were able to spend 30% more time on design and performance-trending activities in a
switch-based environment. Multilayer switching is
increasingly being used in data-center implementations. These switches provide high
network capacity along with greater internetworking functionality using the VLAN
services.

2.9 Switching in the Ethernet Environment

The most common LAN media is traditional Ethernet, which has a maximum bandwidth
of 10 Mbps. Traditional Ethernet is a half-duplex technology. Each Ethernet host checks
the network to determine whether data is being transmitted before it transmits and defers
transmission if the network is in use. In spite of transmission deferral, two or more
Ethernet hosts can transmit at the same time, which results in a collision. When a
collision occurs, the hosts enter a back-off phase and retransmit later. As more hosts are
added to the network, hosts must wait more often before they can begin transmitting, and
collisions are more likely to occur because more hosts are trying to transmit. Today,
throughput on traditional Ethernet LANs suffers even more because users are running
network-intensive software, such as client-server applications, which cause hosts to
transmit more often and for longer periods of time.
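
The back-off phase mentioned above follows the truncated binary exponential backoff rule of CSMA/CD. A minimal Python sketch (the slot time of 51.2 microseconds corresponds to 512 bit times at 10 Mbps):

    import random

    SLOT_TIME_US = 51.2            # 512 bit times at 10 Mbps

    def backoff_delay_us(collision_count):
        # After the n-th consecutive collision, wait a random number of slot
        # times drawn uniformly from 0 .. 2^min(n, 10) - 1; give up after 16 attempts.
        if collision_count > 16:
            raise RuntimeError("excessive collisions: frame discarded")
        k = min(collision_count, 10)
        return random.randint(0, 2 ** k - 1) * SLOT_TIME_US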

An Ethernet LAN switch improves bandwidth by separating collision domains and
selectively forwarding traffic to the appropriate segments. Figure 4 shows the topology of
a typical Ethernet network in which a LAN switch has been installed.

Figure 4: Ethernet switching.

In Figure 4, each Ethernet segment is connected to a port on the LAN switch. If Server A
on port 1 needs to transmit to Client B on port 2, the LAN switch forwards Ethernet
frames from port 1 to port 2, thus sparing port 3 and port 4 from frames destined for
Client B. If Server C needs to send data to Client D at the same time that Server A sends
data to Client B, it can do so because the LAN switch can forward frames from port 3 to
port 4 at the same time it is forwarding frames from port 1 to port 2. If Server A needs to
send data to Client E, which also resides on port 1, the LAN switch does not need to
forward any frames.

Performance improves in LANs in which LAN switches are installed because the LAN
switch creates isolated collision domains. By spreading users over several collision
domains, collisions are avoided and performance improves. Many LAN switch
installations assign just one user per port, which gives that user an effective bandwidth of
10 Mbps.

3. Switch Architecture

Various kinds of switch architectures have been developed, so we need a way of determining
which is better. In industry, the performance-to-cost ratio is used to
determine the optimal architecture for a particular application. Switch cost is measured on a
per-port basis, obtained by dividing the cost of the switch by the number of ports.
Switching fabrics use single-stage solutions such as a Time Division Multiplexing (TDM) bus
(a high-speed bus), or space-division methods that are either multistage (multistage switch
array) or meshed (cross-bar). [Christensen, K. J.]

3.1 Frame Versus Cell Switching

A frame can traverse the fabric of a switch using the frame switching model,
in which the whole frame is sent as such to the destination port, or using cell switching,
in which each frame is broken down into equal-sized cells. In cell switching, a frame is broken
into cells at the input port and reassembled at the output port. In frame
switching, the time the transmission path between the input and output port is occupied
depends upon the size of the frame: at 10 Mbps, the transmission time
would be 51.2 microseconds for a 64-byte frame and as high as 1.21 milliseconds for a
1518-byte frame, so the latency depends on the frame size. In cell switching,
since a cell is of constant size, the transmission time per cell is the same; cell-switching
performance therefore does not depend on the data traffic, the type of data or the number of
ports. Currently, frame switching seems to be the general trend among LAN switching vendors,
and neither approach has been proved better than the other.
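
The transmission times quoted above follow directly from the 10 Mbps media rate; the short calculation below reproduces them (a worked example, not measured data):

    MEDIA_RATE_BPS = 10_000_000        # 10 Mbps Ethernet

    def tx_time_us(frame_bytes):
        return frame_bytes * 8 / MEDIA_RATE_BPS * 1e6

    print(tx_time_us(64))      # 51.2 microseconds for a minimum-size frame
    print(tx_time_us(1518))    # about 1214 microseconds (roughly 1.21 ms) for a maximum-size frame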

3.2 Blocking and non-blocking switch architecture

Non-blocking architecture means that a frame being forwarded from port 1 to port 5 cannot
block the forwarding of another frame from port 2 to port 4; examples are the multistage
Banyan and crossbar architectures (see Figure 5). In a blocking architecture, internal collisions
can occur; the switch may retry internally and discard the frame after a certain number of
retries. Non-blocking architectures provide higher internal bandwidth, and are thus more
expensive.

Figure 5: Cross-bar switch with 5 ports.

4. Switched LAN Topology

High-speed interfaces are needed in certain regions of high traffic. These may be at
workgroup level or at the backbone level. Following are certain regions where such
interfaces may be essential.

• Workgroup level: Clients working with multimedia or any other bandwidth-hungry
application will consume a lot of bandwidth, so the workgroup should be
appropriately segmented to keep the traffic low in that particular LAN segment.
Replacing concentrators, repeaters, or hubs in a workgroup with LAN switches
can substantially increase the effective transmit bandwidth to each user. Full
duplex operation would further enhance the bandwidth availability. Workgroup
switches should be provided with high-speed ports for connecting to servers.

• Backbone level: One may need a fast 100Base-TX interface for the connection to a
server, since most of the traffic from the workstations has to go to the server.
Since this can be a bottleneck, high-speed interfaces are essential for achieving
good performance. The server is connected to a high-speed port to allow it to
work at full efficiency. Another place where such an interface might be required
is the connection between switches; this may be essential when a switch
connecting a group of workstations links to a switch connecting a group of servers.
The link acts like a high-speed collapsed backbone.

4.1 LAN Switch and the OSI Model

LAN switches can be categorized according to the OSI layer at which they filter and
forward, or switch, frames. These categories are: Layer 2, Layer 2 with Layer 3 features,
or multi-layer.

A Layer 2 LAN switch is operationally similar to a multiport bridge but has a much
higher capacity and supports many new features, such as full-duplex operation. A Layer 2
LAN switch performs switching and filtering based on the OSI data link layer (Layer 2)
MAC address. As with bridges, it is completely transparent to network protocols and user
applications.

A Layer 2 LAN switch with Layer 3 features can make switching decisions based on
more information than just the Layer 2 MAC address. Such a switch might incorporate
some Layer 3 traffic-control features, such as broadcast and multicast traffic
management, security through access lists, and IP fragmentation.

A multi-layer switch makes switching and filtering decisions on the basis of OSI data link
layer (Layer 2) and OSI network-layer (Layer 3) addresses. This type of switch
dynamically decides whether to switch (Layer 2) or route (Layer 3) incoming traffic. A
multi-layer LAN switch switches within a workgroup and routes between different
workgroups.
4.2 ATM and Ethernet switching

LAN switching makes the entire bandwidth available to each connected end station. With
structured cabling, users can get communication services such as voice, video and data at the
workstation. However, when 10 or 100 Mbps is too little for a
station, the network manager should consider Asynchronous Transfer Mode
(ATM). This technology promises a great deal of functionality and flexibility, and has been
called the networking technology of the future. LAN Emulation (LANE) is used when
ATM technology is introduced into traditional LANs without any change in applications at the
workstations. If the cable structure is ready, migrating to another technology will be less
expensive.

A number of issues have slowed migration to ATM. There is a big initial cost
for replacing the existing equipment and the Network Interface Cards (NICs) in the current
workstations. Some feel that ATM is a new technology and does not have good
management tools. Besides high bandwidth, Quality of Service (QoS) is one of the
important things ATM provides that will be essential when dealing with time critical
multimedia traffic. For LANs, protocols are being worked out to support multimedia
traffic. So, LAN switching is in a way slowing the adoption of ATM.

5. Virtual LAN (VLAN)

While bandwidth alone may be reason enough to adopt switching, Virtual LAN (VLAN)
support may also be attractive. A VLAN is a logical grouping of ports into workgroups.
With VLAN support, network managers can define workgroups independent of the
underlying network topology.

VLANs are becoming popular because of the flexibility they offer. Users can physically
move but stay on the same VLAN. Other benefits are discussed in Section 5.3.

5.1 Introduction to VLAN’s

A Local Area Network (LAN) was originally defined as a network of computers located
within the same area. Today, Local Area Networks are defined as a single broadcast
domain. This means that if a user broadcasts information on his/her LAN, the broadcast
will be received by every other user on the LAN. Broadcasts are prevented from leaving a
LAN by using a router. The disadvantage of this method is that routers usually take more time
to process incoming data than a bridge or a switch does. More importantly, the
formation of broadcast domains depends on the physical connection of the devices in the
network. Virtual Local Area Networks (VLAN's) were developed as an alternative
solution to using routers to contain broadcast traffic.

Here, we define VLAN's and examine the difference between a LAN and a VLAN.
This is followed by a discussion of the advantages VLAN's bring to a network.
Finally, we explain how VLAN's work based on the current draft standards.
5.2 What are VLAN's?

In a traditional LAN, workstations are connected to each other by means of a hub or a
repeater. These devices propagate any incoming data throughout the network. However,
if two people attempt to send information at the same time, a collision will occur and all
the transmitted data will be lost. Once the collision has occurred, it will continue to be
propagated throughout the network by hubs and repeaters. The original information will
therefore need to be resent after waiting for the collision to be resolved, thereby incurring
a significant wastage of time and resources. To prevent collisions from traveling through
all the workstations in the network, a bridge or a switch can be used. These devices will
not forward collisions, but will allow broadcasts (to every user in the network) and
multicasts (to a pre-specified group of users) to pass through. A router may be used to
prevent broadcasts and multicasts from traveling through the network.

The workstations, hubs, and repeaters together form a LAN segment. A LAN segment is
also known as a collision domain since collisions remain within the segment. The area
within which broadcasts and multicasts are confined is called a broadcast domain or
LAN. Thus a LAN can consist of one or more LAN segments. Defining broadcast and
collision domains in a LAN depends on how the workstations, hubs, switches, and routers
are physically connected together. This means that everyone on a LAN must be located in
the same area (see Figure 6).

Figure 6: Physical view of a LAN.

VLAN's allow a network manager to logically segment a LAN into different broadcast
domains (see Figure 7). Since this is a logical segmentation and not a physical one,
workstations do not have to be physically located together. Users on different floors of
the same building, or even in different buildings can now belong to the same LAN.
VLAN's also allow broadcast domains to be defined without using routers. Bridging
software is used instead to define which workstations are to be included in the broadcast
domain. Routers would only have to be used to communicate between two VLAN's.

Figure 7: Physical view and logical view of a VLAN.

5.3 Why use VLAN's?

VLAN's offer a number of advantages over traditional LAN's. They are:


1) Performance
In networks where traffic consists of a high percentage of broadcasts and
multicasts, VLAN's can reduce the need to send such traffic to unnecessary
destinations. For example, in a broadcast domain consisting of 10 users, if the
broadcast traffic is intended only for 5 of the users, then placing those 5 users on a
separate VLAN can reduce traffic [Passmore et al (3Com report)].
Compared to switches, routers require more processing of incoming traffic. As the
volume of traffic passing through the routers increases, so does the latency in the
routers, which results in reduced performance. The use of VLAN's reduces the
number of routers needed, since VLAN's create broadcast domains using switches
instead of routers.

2) Formation of Virtual Workgroups

Nowadays, it is common to find cross-functional product development teams with
members from different departments such as marketing, sales, accounting, and
research. These workgroups are usually formed for a short period of time. During
this period, communication between members of the workgroup will be high. To
contain broadcasts and multicasts within the workgroup, a VLAN can be set up
for them. With VLAN's it is easier to place members of a workgroup together.
Without VLAN's, the only way this would be possible is to physically move all
the members of the workgroup closer together.

However, virtual workgroups do not come without problems. Consider the
situation where one user of the workgroup is on the fourth floor of a building, and
the other workgroup members are on the second floor. Resources such as a printer
would be located on the second floor, which would be inconvenient for the lone
fourth-floor user.

Another problem with setting up virtual workgroups is the implementation of
centralized server farms, which are essentially collections of servers and major
resources for operating a network at a central location. The advantages here are
numerous, since it is more efficient and cost-effective to provide better security,
uninterrupted power supply, consolidated backup, and a proper operating
environment in a single area than if the major resources were scattered in a
building. Centralized server farms can cause problems when setting up virtual
workgroups if servers cannot be placed on more than one VLAN. In such a case,
the server would be placed on a single VLAN and all other VLAN's trying to
access the server would have to go through a router; this can reduce performance.

3) Simplified Administration

Seventy percent of network costs are a result of adds, moves, and changes of users
in the network [Buerger]. Every time a user is moved in a LAN, recabling, new
station addressing, and reconfiguration of hubs and routers becomes necessary.
Some of these tasks can be simplified with the use of VLAN's. If a user is moved
within a VLAN, reconfiguration of routers is unnecessary. In addition, depending
on the type of VLAN, other administrative work can be reduced or eliminated
[Cisco white paper]. However, the full power of VLAN's will only really be felt
when good management tools are created that allow network managers to
drag and drop users into different VLAN's or to set up aliases.
Despite this saving, VLAN's add a layer of administrative complexity, since it
now becomes necessary to manage virtual workgroups [Passmore et al (3Com
report)].

4) Reduced Cost
VLAN's can be used to create broadcast domains, which eliminate the need for
expensive routers.

5) Security
Periodically, sensitive data may be broadcast on a network. In such cases, placing
only those users who can have access to that data on a VLAN can reduce the
chances of an outsider gaining access to the data. VLAN's can also be used to
control broadcast domains, set up firewalls, restrict access, and inform the
network manager of an intrusion [Passmore et al (3Com report)].

6. How VLAN's Work

When a LAN bridge receives data from a workstation, it tags the data with a VLAN
identifier indicating the VLAN from which the data came. This is called explicit tagging.
It is also possible to determine to which VLAN the data received belongs using implicit
tagging. In implicit tagging the data is not tagged, but the VLAN from which the data
came is determined based on other information like the port on which the data arrived.
Tagging can be based on the port from which it came, the source Media Access Control
(MAC) field, the source network address, or some other field or combination of fields.
VLAN's are classified based on the method used. To be able to do the tagging of data
using any of the methods, the bridge would have to keep an updated database containing
a mapping between VLAN's and whichever field is used for tagging. For example, if
tagging is by port, the database should indicate which ports belong to which VLAN. This
database is called a filtering database. Bridges would have to be able to maintain this
database and also to make sure that all the bridges on the LAN have the same information
in each of their databases. The bridge determines where the data is to go next based on
normal LAN operations. Once the bridge determines where the data is to go, it now needs
to determine whether the VLAN identifier should be added to the data and sent. If the
data is to go to a device that knows about VLAN implementation (VLAN-aware), the
VLAN identifier is added to the data. If it is to go to a device that has no knowledge of
VLAN implementation (VLAN-unaware), the bridge sends the data without the VLAN
identifier.
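
The decision described above, tag the frame for VLAN-aware neighbours and strip the tag for VLAN-unaware ones, can be sketched as follows. The data structures and names in this Python fragment are illustrative assumptions, not part of the 802.1Q text:

    # Port-based (implicit) classification: port -> VLAN ID (assumed example values)
    port_vlan = {1: 10, 2: 10, 3: 20}
    # Ports that lead to VLAN-aware devices, e.g. other VLAN bridges (assumed)
    vlan_aware_ports = {4, 5}

    def classify(in_port, frame_vid=None):
        # Explicitly tagged frames already carry a VLAN ID; otherwise fall back
        # to the VLAN associated with the receiving port (implicit tagging).
        return frame_vid if frame_vid is not None else port_vlan.get(in_port)

    def prepare_for_output(out_port, vid, payload):
        if out_port in vlan_aware_ports:
            return ("tagged", vid, payload)   # keep or add the VLAN identifier
        return ("untagged", payload)          # VLAN-unaware device: send without a tag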

In order to understand how VLAN's work, we need to look at the types of VLAN's, the
types of connections between devices on VLAN's, the filtering database which is used to
send traffic to the correct VLAN, and tagging, a process used to identify the VLAN
originating the data.

VLAN Standard: IEEE 802.1Q Draft Standard


There has been a recent move towards building a set of standards for VLAN products.
The Institute of Electrical and Electronics Engineers (IEEE) is currently working on a
draft standard 802.1Q for VLAN's. Up to this point, products have been proprietary,
implying that anyone wanting to install VLAN's would have to purchase all products
from the same vendor. Once the standards have been written and vendors create products
based on these standards, users will no longer be confined to purchasing products from a
single vendor. The major vendors have supported these standards and are planning on
releasing products based on them. It is anticipated that these standards will be ratified
later this year.

6.1 Implementation of a Virtual LAN

There are several ways to implement and define end-user membership in a VLAN.
Accordingly, VLANs can be divided into five general types:

Membership by Switch Port Group:

In the initial implementations of VLANs, a group of switch ports makes up a VLAN
(for example, ports 2, 3, 5 and 8 make up VLAN A, while ports 1, 4, 6 and 7 make up
VLAN B). A later generation of VLANs was implemented by grouping together
ports from different switches (for example, ports 2 and 5 from switch 1 and
ports 1, 3 and 6 from switch 2 make up VLAN A, while ports 1, 3 and 6 from
switch 1 and ports 2, 4, 7 and 8 from switch 2 make up VLAN B). This is
the most common method of defining and implementing a VLAN. Its only limitation is
that the network manager has to reconfigure VLAN membership every
time a user moves from one port to another.

Figure 8: VLANs Defined by Switch Port Group
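
The port-group membership described above can be written down as a simple mapping; the sketch below uses the switch and port numbers from the example in the text, while the data structure itself is an assumption for illustration:

    # (switch number, port number) -> VLAN, following the example above
    vlan_membership = {
        (1, 2): "A", (1, 5): "A", (2, 1): "A", (2, 3): "A", (2, 6): "A",
        (1, 1): "B", (1, 3): "B", (1, 6): "B", (2, 2): "B", (2, 4): "B",
        (2, 7): "B", (2, 8): "B",
    }

    def move_user(switch, old_port, new_port):
        # The network manager must reconfigure membership every time a user
        # moves from one port to another; this is the limitation noted above.
        vlan = vlan_membership.pop((switch, old_port))
        vlan_membership[(switch, new_port)] = vlan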

Membership by MAC Address:

The MAC-layer address of each workstation is built into its network interface card (NIC). This
allows network managers to move a workstation to a different location while the user's
VLAN membership remains the same. This method requires that all users in the network be
configured initially into at least one VLAN. The result of this initial configuration is that
thousands of users will be assigned to the same VLAN; a serious performance
degradation is the biggest disadvantage of this initial configuration. Another limitation of
this method arises when an organization uses a significant number of laptops: the MAC
address is in the docking station, so every time a laptop is moved to a different docking
station, the VLAN membership has to be updated.

Membership by Network Layer Information:

These types of VLANs (also known as layer-3 based VLANs) take the protocol type into
consideration. The switches inspect the packet's IP address and then determine the VLAN
membership of the particular workstation. We should note here that this process does not
become a routing process. Defining a VLAN based on layer 3 information has several
advantages:

• It allows for partitioning by protocol type, which is a good feature for network
managers committed to service- or application-based VLAN implementations.
• TCP/IP users have the benefit of physically moving their workstation without
having to reconfigure the network address.
• There is no need for frame tagging for communication between switches, which
reduces transport overhead.

The main limitation of this method is its performance, due to the slow process of
inspecting layer 3 addresses in packets. VLANs defined at layer 3 were found to be more
effective with protocols such as TCP/IP and less effective with protocols like IPX, DECnet,
and AppleTalk.

Figure 10: Layer 3 VLAN (IP)


Multicast Group: Workstations can be joined together as an IP multicast group. These
groups of IPs are established dynamically. The workstation will be given a chance to join
the IP multicast group for a certain period of time. Therefore, a VLAN is established for
each IP multicast group, and when a packet is sent it will be directed to the specific group
of IP addresses. This method of defining VLANs has a very high degree of flexibility and
application sensitivity.

Higher Layer VLAN:

Membership in another type of VLAN can be defined based on applications or services,
or a combination of both. For example, File Transfer Protocol (FTP) applications can be
grouped together in one VLAN, and Telnet applications in another VLAN. This type of
VLAN is more complex to manage and needs a high level of automated
configuration features.

Figure 11: Service-based VLAN

6.2 Types of Connections

Devices on a VLAN can be connected in three ways based on whether the connected
devices are VLAN-aware or VLAN-unaware. Recall that a VLAN-aware device is one
which understands VLAN memberships (i.e. which users belong to a VLAN) and VLAN
formats.

1) Trunk Link
All the devices connected to a trunk link, including workstations, must be VLAN-aware.
All frames on a trunk link must have a special header attached. These special frames are
called tagged frames (see Figure 12).
Figure 12: Trunk link between two VLAN-aware bridges.

2) Access Link
An access link connects a VLAN-unaware device to the port of a VLAN-aware bridge.
All frames on access links must be implicitly tagged (untagged) (see Figure 13). The
VLAN-unaware device can be a LAN segment with VLAN-unaware workstations or it
can be a number of LAN segments containing VLAN-unaware devices (legacy LAN).

Figure 13: Access link between a VLAN-aware bridge and a VLAN-unaware device.

3) Hybrid Link
This is a combination of the previous two links. This is a link where both VLAN-aware
and VLAN-unaware devices are attached (see Figure 14). A hybrid link can have both
tagged and untagged frames, but all the frames for a specific VLAN must be either
tagged or untagged.

Figure 14: Hybrid link containing both VLAN-aware and VLAN-unaware devices.
It must also be noted that the network can have a combination of all three types of links.
6.3 Frame Processing

A bridge on receiving data determines to which VLAN the data belongs either by implicit
or explicit tagging. In explicit tagging a tag header is added to the data. The bridge also
keeps track of VLAN members in a filtering database which it uses to determine where
the data is to be sent. Following is an explanation of the contents of the filtering database
and the format and purpose of the tag header [802.1Q].

1) Filtering Database
Membership information for a VLAN is stored in a filtering database. The
filtering database consists of the following types of entries:
i) Static Entries
Static information is added, modified, and deleted by management only.
Entries are not automatically removed after some time (ageing), but must
be explicitly removed by management. There are two types of static
entries:
a) Static Filtering Entries: which specify for every port whether
frames to be sent to a specific MAC address or group address and
on a specific VLAN should be forwarded or discarded, or should
follow the dynamic entry, and
b) Static Registration Entries: which specify whether frames to be
sent to a specific VLAN are to be tagged or untagged and which
ports are registered for that VLAN.
ii) Dynamic Entries
Dynamic entries are learned by the bridge and cannot be created or
updated by management. The learning process observes the port from
which a frame, with a given source address and VLAN ID (VID), is
received, and updates the filtering database. The entry is updated only if
all the following three conditions are satisfied:
a) This port allows learning,
b) The source address is a workstation address and not a group
address, and
c) There is space available in the database.
Entries are removed from the database by the ageing-out process: an entry that
is not refreshed within a certain amount of time specified by management
(10 sec to 1,000,000 sec) is deleted. Ageing allows automatic reconfiguration of the
filtering database if the topology of the network changes. There are three types of
dynamic entries:
a) Dynamic Filtering Entries: which specify whether frames to be
sent to a specific MAC address and on a certain VLAN should be
forwarded or discarded.
b) Group Registration Entries: which indicate for each port
whether frames to be sent to a group MAC address on a certain
VLAN should be forwarded or discarded. These entries are added and
deleted using the GARP Multicast Registration Protocol (GMRP). This
allows multicasts to be sent on a single VLAN without affecting
other VLAN's.

c) Dynamic Registration Entries: which specify which ports are
registered for a specific VLAN. Entries are added and deleted
using the GARP VLAN Registration Protocol (GVRP), where GARP
is the Generic Attribute Registration Protocol.

GVRP is used not only to update dynamic registration entries, but also to
communicate the information to other VLAN-aware bridges.
In order for VLAN's to forward information to the correct destination, all the
bridges in the VLAN should contain the same information in their respective
filtering databases. GVRP allows both VLAN-aware workstations and bridges to
issue and revoke VLAN memberships. VLAN-aware bridges register and
propagate VLAN membership to all ports that are a part of the active topology of
the VLAN. The active topology of a network is determined when the bridges are
turned on or when a change in the state of the current topology is perceived.
The active topology is determined using a spanning tree algorithm, which
prevents the formation of loops in the network by disabling ports. Once an active
topology for the network (which may contain several VLAN's) is obtained, the
bridges determine an active topology for each VLAN. This may result in a
different topology for each VLAN or a common one for several VLAN's. In either
case, the VLAN topology will be a subset of the active topology of the network
(see Figure 15).
Figure 15: Active topology of network and VLAN A using spanning tree algorithm.
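
A highly simplified model of the filtering database described above is sketched below. The structure, field names and helper functions are assumptions for illustration and are much cruder than the 802.1Q data model:

    import time

    # (VLAN ID, MAC address) -> entry with member ports, static flag and timestamp
    filtering_db = {}

    def add_static_entry(vid, mac, ports):
        # Static entries are created and removed by management only; no ageing.
        filtering_db[(vid, mac)] = {"ports": set(ports), "static": True, "ts": None}

    def learn_dynamic(vid, src_mac, in_port):
        # Dynamic entries are learned from received frames and aged out later.
        filtering_db[(vid, src_mac)] = {"ports": {in_port}, "static": False,
                                        "ts": time.time()}

    def lookup_ports(vid, dst_mac, vlan_member_ports, in_port):
        entry = filtering_db.get((vid, dst_mac))
        if entry is None:
            # Unknown destination on this VLAN: flood, but only to ports that
            # belong to the VLAN's active topology.
            return [p for p in vlan_member_ports if p != in_port]
        return [p for p in entry["ports"] if p != in_port]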

2) Tagging

When frames are sent across the network, there needs to be a way of indicating to
which VLAN the frame belongs, so that the bridge will forward the frames only
to those ports that belong to that VLAN, instead of to all output ports as would
normally have been done. This information is added to the frame in the form of a
tag header. In addition, the tag header:

i) Allows user priority information to be specified,


ii) Allows source routing control information to be specified, and
iii) Indicates the format of MAC addresses.

Frames in which a tag header has been added are called tagged frames. Tagged
frames convey the VLAN information across the network.
The tagged frames that are sent across hybrid and trunk links contain a tag header.
There are two formats of the tag header:

i) Ethernet Frame Tag Header: The Ethernet frame tag header (see Figure
16) consists of a tag protocol identifier (TPID) and tag control information
(TCI).

Figure 16: Ethernet frame tag header.


ii) Token Ring and Fiber Distributed Data Interface (FDDI) tag header:
The tag headers for both token ring and FDDI networks consist of a
SNAP-encoded TPID and TCI.

Figure 17: Token ring and FDDI tag header.


TPID is the tag protocol identifier, which indicates that a tag header is
following and TCI (see Figure 18) contains the user priority, canonical
format indicator (CFI), and the VLAN ID.

Figure 18: Tag control information (TCI).

User priority is a 3-bit field, which allows priority information to be
encoded in the frame. Eight levels of priority are allowed, where zero is
the lowest priority and seven is the highest priority. How this field is used
is described in the supplement 802.1p.
The CFI bit is used to indicate that all MAC addresses present in the MAC
data field are in canonical format. This field is interpreted differently
depending on whether it is an Ethernet-encoded tag header or a SNAP-
encoded tag header. In a SNAP-encoded TPID the field indicates the
presence or absence of the canonical format of addresses. In an Ethernet-
encoded TPID, it indicates the presence of the Source-Routing
Information (RIF) field after the length field. The RIF field indicates
routing on Ethernet frames. The VID field is used to uniquely identify the
VLAN to which the frame belongs. There can be a maximum of (2^12 - 1) = 4095
VLAN's. Zero is used to indicate no VLAN ID, but that user priority
information is present. This allows priority to be encoded in non-priority
LAN's.
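
The layout of the TCI field, a 3-bit user priority, a 1-bit CFI and a 12-bit VID, can be illustrated with a small bit-packing sketch (illustrative only; building a complete tagged frame involves more than this):

    def build_tci(priority, cfi, vid):
        # Pack 3-bit user priority | 1-bit CFI | 12-bit VLAN ID into 16 bits.
        assert 0 <= priority <= 7 and cfi in (0, 1) and 0 <= vid <= 4095
        return (priority << 13) | (cfi << 12) | vid

    tci = build_tci(priority=5, cfi=0, vid=100)
    print(hex(tci))                 # 0xa064
    # In an Ethernet-encoded tag header the TPID value 0x8100 precedes the TCI.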

7. Summary
For bandwidth-starved networks, switching offers an opportunity to solve the current
problems and keep us prepared for future technologies. It promises higher performance,
scalability and improved manageability. Switching is available in Ethernet, FDDI and
token ring, and can be used to boost performance. Since the underlying technology is the
same, new software is not needed and all this will make the migration cheaper and easier
with minimal training requirements.

One must do a thorough analysis of the network before deciding on which technology to
use. While choosing a switch, one must emphasize the features that are going to offer
substantial benefits, such as RMON, mirror ports, varied uplink support, and economical cost.
One must realize, though, that LAN switching is a quick-fix kind of solution. It can
alleviate or eliminate current bandwidth problems, but in the long run one must think about
alternative technologies like ATM.

As we have seen there are significant advances in the field of networks in the form of
VLAN’s, which allow the formation of virtual workgroups, better security, improved
performance, simplified administration, and reduced costs. VLAN's are formed by the
logical segmentation of a network and can be classified into Layer 1, 2, 3 and higher
layers. Only Layer 1 and 2 VLAN's are specified in the draft standard 802.1Q. Tagging and the
filtering database allow a bridge to determine the source and destination VLAN for
received data. VLAN's, if implemented effectively, show considerable promise in future
networking solutions.

8. References

1. Ethernet Switching: An Anixter Technology White Paper


http://www.anixter.com/techlib/whiteppr/network/anixeswp.htm
A good paper covering various aspects of switching, implementations concerns,
and some suggestions on switch selection.
2. Christensen, K. J., "Local Area Networks-evolving from shared to switched
access"
IBM Systems Journal v34 n3 (`95) p347-74
A detailed reference that covers topics from first-generation LANs to fourth-generation
LANs (switched LANs).

3. Edwin Meir, "Buyer's Guide: LAN switches take it all on"


Network World, June'96
Discusses LAN switching technologies, VLANs and the pros and cons of
switching.

4. Kevin Tolly, "Comparing LAN switch contenders: Beyond Performance"


Network World, Jul'97
Discusses the architecture of switches, flow control, protocol and VLAN support,
fault tolerance and pricing of LAN switches.

5. Migration to Switched Ethernet LANs: A Technical White Paper


http://www.networking.ibm.com/mse/mse0c01.html
LAN features and lab tests carried out on different LAN setups that include shared
and switched Ethernet, and fast Ethernet and ATM.

6. Robin Layland, "Time to Move On? The Price of Ethernet Switching"


Data Communications, Jul 1997
http://www.data.com/business_case/time.html
Gives a cost comparison of a switched LAN solution for Ethernet and Token
Ring. Judges switched Ethernet to be a better and cheaper solution and
discourages investing in a dead technology like Token Ring.

7. John Morency and Wendy Micheal, "Evaluating the next generation of multilayer
switching"
Communications Week, Issue 663, May, 1997
Discusses LAN switches that have Layer 3 features incorporated.

8. LAN Backbone Switching: An Anixter Technology/Business White Paper


http://www.anixter.com/techlib/whiteppr/network/m6317100.htm
Discusses LAN switching solutions for backbone layer congestion problems. It
compares different technologies like 100Base-T, 100VG-AnyLAN, FDDI/FDDI
II, HIPPI, etc. looking at their advantages and disadvantages.

9. Ethernet switches: Does It Belong on the Backbone?


Data Communications
http://www.data.com/Lab_Tests/Backbone.html
The author emphasizes the difference between workgroup and backbone switches,
and discusses some of the common features of switches.

10. FDDI Switches: Immediate Relief for Backbones Under Pressure


Data Communications, Nov 1995
http://www.data.com/Roundups/FDDI_Switches.html
This paper provides solutions for congested LANs without moving to ATM. In
most cases FDDI backbone already exists, so FDDI switches can be used to
improve LAN performance.

11. INTERNET-DRAFT: Benchmarking Terminology for LAN Switching Devices


(Mar 97) ftp://ftp.isi.edu/internet-drafts/draft-ietf-bmwg-lanswitch-05.txt
This provides benchmarking terminology used for LAN switching, and also
defines terms related to latency, forwarding performance, address handling and
filtering.

12. With Continuing Innovations in Ethernet, Who Needs ATM?


Data Communications, Aug 96
http://www.ddx.com/ether1.shtml
Discusses how new developments in Ethernet, i.e. Gigabit Ethernet and a Gigabit
version of AnyLAN, might solve most of the problems in LANs so that one may
never need ATM.

13. Al Chiang, "Parallel paths emerge for fast Ethernet and ATM"
Telecommunications (Americas Edition) v30 n3 Mar 1996 p38-39
According to the author, Fast Ethernet and ATM will both be used in the future, with Fast
Ethernet replacing existing 10Base-T Ethernet and ATM used in the areas where it is
really needed.

14. Bob Gohn, "Applications dictate LAN switch architectures"


Computer Design v34 (Aug 95) p78

15. David Newman, "LAN switches leave users looking for trouble"
Data Communications v24 n3 Mar 1995 4pp

16. David Passmore, John Freeman, "The Virtual LAN Technology Report," March
7, 1997, http://www.3com.com/nsc/200374.html
A very good overview of VLAN's, their strengths, weaknesses, and implementation problems.

17. IEEE, "Draft Standard for Virtual Bridge Local Area Networks," P802.1Q/D1,
May 16, 1997.
This is the draft standard for VLAN's, which covers implementation issues of Layer 1
and 2 VLAN's.

18. Mathias Hein, David Griffiths, Orna Berry, "Switching Technology in the Local
Network: From LAN to Switched LAN to Virtual LAN," February 1997.
A textbook explanation of what VLAN's are and their types.

19. Susan Biagi, "Virtual LANs," Network VAR v4 n1 p. 10-12, January 1996.
An overview of VLAN's, advantages, and disadvantages.
20. David J. Buerger, "Virtual LAN cost savings will stay virtual until networking's
next era," Network World, March 1995.
A short summary on VLAN's.

21. IEEE, "Traffic Class Expediting and Dynamic Multicast Filtering," 802.1p/D6,
April 1997.
This is the standard for implementing priority and dynamic multicasts. Implementation
of priority in VLAN's is based on this standard.

22) IEEE, "Draft Standard P802.1Q/D11, IEEE Standard for Local and Metropolitan
Area Networks: Virtual Bridged Local Area Networks," July 30, 1998, ftp://p8021:-
go_wildcats@p8021.hep.net/8021/q-drafts/d11/q-d11.pdf

23. IEEE 802.1Q Virtual Bridged Local Area Networks,


http://grouper.ieee.org/groups/802/1/vlan.html

24) IEEE, "Information technology - Telecommunications and information exchange


between systems - Local and metropolitan area networks - common specifications -
Part 3: Media Access Control (MAC) Bridges: Revision, (Incorporating IEEE 802.1p:
Traffic Class Expediting and Dynamic Multicast Filtering)," IEEE P802.1D/D17,
May 25, 1998, ftp://p8021:-go_wildcats@p8021.hep.net/8021/d-drafts/d17/fdis-
15802-3.pdf

Books

1. Switching Technology in the Local Network: From LAN to Switched LAN to


Virtual LAN
Authors: Mathias Hein, David Griffiths, Orna Berry
Publisher: Thomson Executive Pr
Publication date: February 1997

2. Switched and Fast Ethernet


Authors: Robert Breyer, Sean Riley
Publisher: Ziff-Davis Press
Publication Date: 1996

3. Darryl P. Black, "Building Switched Networks: Multilayer Switching, Qos, Ip


Multicast, Network Policy, and Service-Level Agreements," 1999
