
STP (Spanning Tree Protocol) – STP ensures that there is only one logical path between all destinations on the network by intentionally blocking redundant paths that could cause a loop. A port is considered blocked when network traffic is prevented from entering or leaving that port. This does not include bridge protocol data unit (BPDU) frames that are used by STP to prevent loops.

STA (Spanning Tree Algorithm) – STP uses the Spanning Tree Algorithm (STA) to determine which switch ports
on a network need to be configured for blocking to prevent loops from occurring. The STA designates a single
switch as the root bridge and uses it as the reference point for all path calculations. All switches participating in
STP exchange BPDU frames to determine which switch has the lowest bridge ID (BID) on the network. The
switch with the lowest BID automatically becomes the root bridge for the STA calculations. The STA considers
both path and port costs when determining which path to leave unblocked. The path costs are calculated using port
cost values associated with port speeds for each switch port along a given path. The sum of the port cost values
determines the overall path cost to the root bridge. If there is more than one path to choose from, STA chooses the
path with the lowest path cost. When the root bridge has been designated for the spanning-tree instance, the STA
starts the process of determining the best paths to the root bridge from all destinations in the broadcast domain.
The path information is determined by summing up the individual port costs along the path from the destination
to the root bridge. The default port costs are defined by the speed at which the port operates. Path cost is the sum of
all the port costs along the path to the root bridge. The paths with the lowest path cost become the preferred path,
and all other redundant paths are blocked.
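The cost arithmetic above can be sketched in a few lines (the switch topology is hypothetical, but the port costs are the classic IEEE 802.1D-1998 defaults):

```python
# IEEE 802.1D-1998 default port costs by link speed (Mbps)
PORT_COST = {10: 100, 100: 19, 1000: 4, 10000: 2}

def path_cost(link_speeds_mbps):
    """Path cost = sum of the port costs along the path to the root."""
    return sum(PORT_COST[s] for s in link_speeds_mbps)

# Two candidate paths from a switch to the root bridge:
path_a = [100, 100]         # two FastEthernet hops -> cost 19 + 19 = 38
path_b = [1000, 1000, 100]  # two Gigabit hops plus one FastEthernet -> 4 + 4 + 19 = 27

best = min([path_a, path_b], key=path_cost)
# path_b wins despite having more hops, because its total cost (27) is lower
```

Note how the STA prefers the longer but faster path; hop count alone is never the tiebreaker.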

BPDU (Bridge Protocol Data Unit) – Bridges exchange BPDU messages with other bridges to detect loops. The
BPDU is the message frame exchanged by switches for STP. Each BPDU contains a BID that identifies the switch
that sent the BPDU. The BID contains a priority value, the MAC address of the sending switch, and an optional
extended system ID. The lowest BID value is determined by the combination of these three fields.
When a switch receives a configuration BPDU that contains superior information (lower bridge ID, lower path
cost, and so forth), it stores the information for that port. If this BPDU is received on the root port of the switch,
the switch also forwards it with an updated message to all attached LANs for which it is the designated switch.
If a switch receives a configuration BPDU that contains inferior information to that currently stored for that port,
it discards the BPDU. If the switch is a designated switch for the LAN from which the inferior BPDU was
received, it sends that LAN a BPDU containing the up-to-date information stored for that port. In this way, inferior
information is discarded, and superior information is propagated on the network.
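The superior/inferior comparison can be sketched as an ordered tuple comparison (the field values here are illustrative, not taken from a real BPDU capture):

```python
# A configuration BPDU is compared field by field, in order:
# root BID, root path cost, sender BID, sender port ID. Lower is superior.

def is_superior(new_bpdu, stored_bpdu):
    """Return True if new_bpdu carries superior (lower) information."""
    return new_bpdu < stored_bpdu  # Python tuple comparison is already ordered

stored = (4096, 38, 8192, 1)  # (root BID, path cost, sender BID, port ID)
newer  = (4096, 27, 8192, 1)  # same root, lower path cost -> superior
worse  = (8192, 10, 4096, 1)  # higher root BID -> inferior regardless of cost

assert is_superior(newer, stored)      # stored info is replaced and propagated
assert not is_superior(worse, stored)  # inferior BPDU is discarded
```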

BPDU Timers - The amount of time that a port stays in the various port states depends on the BPDU timers. Only
the switch in the role of root bridge may send information through the tree to adjust the timers. The following
timers determine STP performance and state changes:
Hello time – The hello time is the time between each BPDU frame that is sent on a port. This is equal to 2 seconds
by default, but can be tuned to be between 1 and 10 seconds.
Forward delay - The forward delay is the time spent in the listening and learning state. This is by default equal to
15 seconds for each state, but can be tuned to be between 4 and 30 seconds.
Maximum age - The max age timer controls the maximum length of time a switch port saves configuration BPDU
information. This is 20 seconds by default, but can be tuned to be between 6 and 40 seconds.

BID (Bridge ID) - The BID is made up of a priority value, an extended system ID, and the MAC address of the
switch. Each switch in the broadcast domain initially assumes that it is the root bridge for the spanning-tree
instance, so the BPDU frames sent contain the BID of the local switch as the root ID. By default, BPDU frames are
sent every 2 seconds after a switch is booted. As the switches forward their BPDU frames, adjacent switches in the
broadcast domain read the root ID information from the BPDU frame. If the root ID from the BPDU received is
lower than the root ID on the receiving switch, the receiving switch updates its root ID identifying the adjacent
switch as the root bridge. When adjacent switches receive a BPDU frame, they compare the root ID from the
BPDU frame with the local root ID. Note: It may not be an adjacent switch, but any other switch in the broadcast
domain. The switch then forwards new BPDU frames with the lower root ID to the other adjacent switches.
Eventually, the switch with the lowest BID ends up being identified as the root bridge for the spanning-tree
instance. Priority is the initial deciding factor when choosing a root bridge. If the priority of all the switches was
the same, the MAC address would be the deciding factor.
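A minimal sketch of the election logic, assuming made-up priorities and MAC addresses:

```python
# BID = (priority, MAC address); the lowest tuple wins, so priority is
# compared first and the MAC address breaks ties.

switches = {
    "S1": (32768, "00:0A:00:00:00:01"),
    "S2": (32768, "00:0A:00:00:00:02"),
    "S3": (4096,  "00:0A:00:00:00:03"),  # administratively lowered priority
}

root = min(switches, key=lambda name: switches[name])
# S3 becomes root because its priority (4096) is lowest, even though
# its MAC address is the highest of the three
```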

Root ports - Switch ports closest to the root bridge. Every spanning-tree instance (switched LAN or broadcast
domain) has a switch designated as the root bridge. The root bridge serves as a reference point for all spanning-tree
calculations to determine which redundant paths to block. An election process determines which switch becomes
the root bridge. The root port exists on non-root bridges and is the switch port with the best path to the root bridge.
Root ports forward traffic toward the root bridge. Frames received on the root port can populate the MAC table
with their source MAC addresses. Only one root port is allowed per bridge.

Designated ports - All non-root ports that are still permitted to forward traffic on the network. The designated port
exists on root and non-root bridges. For root bridges, all switch ports are designated ports. For non-root bridges, a
designated port is the switch port that receives and forwards frames toward the root bridge as needed. Only one
designated port is allowed per segment. If multiple switches exist on the same segment, an election process
determines the designated switch, and the corresponding switch port begins forwarding frames for the segment.
Designated ports are capable of populating the MAC table.

Non-designated ports - All ports configured to be in a blocking state to prevent loops. The non-designated port is
a switch port that is blocked, so it is not forwarding data frames and not populating the MAC address table with
source addresses. A non-designated port is not a root port or a designated port. For some variants of STP, the non-
designated port is called an alternate port.

Port States - STP determines the logical loop-free path throughout the broadcast domain. The spanning tree is
determined through the information learned by the exchange of the BPDU frames between the interconnected
switches. To facilitate the learning of the logical spanning tree, each switch port transitions through five possible
port states, governed by three BPDU timers.

The spanning tree is determined immediately after a switch is finished booting up. If a switch port were to
transition directly from the blocking to the forwarding state, the port could temporarily create a data loop if the
switch was not aware of all topology information at the time. For this reason, STP introduces five port states.
The table summarizes what each port state does. The following provides some additional information on how the
port states ensure that no loops are created during the creation of the logical spanning tree.
Blocking - The port is a non-designated port and does not participate in frame forwarding. The port receives
BPDU frames to determine the location and root ID of the root bridge switch and what port roles each switch port
should assume in the final active STP topology.
Listening - STP has determined that the port can participate in frame forwarding according to the BPDU frames
that the switch has received thus far. At this point, the switch port is not only receiving BPDU frames, it is also
transmitting its own BPDU frames and informing adjacent switches that the switch port is preparing to participate
in the active topology.
Learning - The port prepares to participate in frame forwarding and begins to populate the MAC address table.
Forwarding - The port is considered part of the active topology and forwards frames and also sends and receives
BPDU frames.
Disabled - The Layer 2 port does not participate in spanning tree and does not forward frames. The disabled state
is set when the switch port is administratively disabled.

When STP is enabled, every switch port in the network goes through the blocking state and the transitory states of
listening and learning at power up. The ports then stabilize to the forwarding or blocking state. During a topology
change, a port temporarily implements the listening and learning states for a specified period called the forward
delay interval. These values allow adequate time for convergence in a network with a switch diameter of seven. To
review, switch diameter is the number of switches a frame has to traverse to travel from the two farthest points on
the broadcast domain. A seven-switch diameter is the largest diameter that STP permits because of convergence
times. Convergence in relation to spanning tree is the time it takes to recalculate the spanning tree if a switch or a
link fails.
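With the default timers, the worst-case delay before a recovering port can forward is easy to compute (a rough back-of-the-envelope sketch, ignoring topology-change notification details):

```python
# Default 802.1D timers (seconds)
HELLO, FORWARD_DELAY, MAX_AGE = 2, 15, 20

# A port that must first age out stale BPDU information, then transit the
# listening and learning states before forwarding:
worst_case = MAX_AGE + 2 * FORWARD_DELAY  # 20 + 15 + 15 = 50 seconds
```

This roughly 50-second figure is why classic STP convergence is considered slow, and why the seven-switch diameter limit exists.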

PortFast Technology - PortFast is a Cisco technology. When an access port is configured with PortFast, that port
transitions from the blocking to the forwarding state immediately, bypassing the typical STP
listening and learning states. You can use PortFast on access ports, which are connected to a single workstation or
to a server, to allow those devices to connect to the network immediately rather than waiting for spanning tree to
converge. If an interface configured with PortFast receives a BPDU frame, spanning tree can put the port into the
blocking state using a feature called BPDU guard.

Metrics - Routing protocols must have some way to decide which route is best when a router learns of more than
one route to reach a subnet. To that end, each routing protocol defines a metric that gives an objective numeric
value to the “goodness” of each route. The lower the metric, the better the route.
RIP (Distance Vector) uses a metric called hop count, which measures the number of routers (hops) between a
router and a subnet.
OSPF (Link State) uses a metric called cost, which is derived from the bandwidth of each outgoing interface along
the route.
EIGRP (Balanced Hybrid or Advanced Distance Vector) uses a metric that considers both the interface bandwidth
and interface delay settings as input into a mathematical formula to calculate the metric.

Distance Vector logic – Distance vector routing algorithms call for each router to send its entire routing table in
each update, but only to its neighbors. Distance vector routing algorithms can be prone to routing loops but are
computationally simpler than link-state routing algorithms.
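One round of distance-vector processing can be sketched as follows (hypothetical subnets and costs; real protocols add split horizon, triggered updates, and other safeguards):

```python
# A router adds the link cost to each metric a neighbor advertises and
# keeps the candidate route only if it beats what it already has.

def process_update(my_table, neighbor_table, link_cost, next_hop):
    for subnet, metric in neighbor_table.items():
        candidate = metric + link_cost
        if subnet not in my_table or candidate < my_table[subnet][0]:
            my_table[subnet] = (candidate, next_hop)

table = {"10.1.1.0/24": (0, "connected")}
process_update(table, {"10.2.2.0/24": 1, "10.1.1.0/24": 2}, 1, "R2")
# learns 10.2.2.0/24 at metric 2 via R2; keeps the connected route (0 < 3)
```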

Link-State logic - Link-state protocols build a detailed database that lists links (subnets) and their state (up,
down), from which the best routes can then be calculated.

OSPF (Open Shortest Path First) – OSPF features can be broken down into three categories: neighbors, database
exchange, and route calculation. OSPF routers first form a neighbor relationship that provides a foundation for
communications. After routers become neighbors, they exchange the contents of their LSDBs through a process
called database exchange. When a router has topology information in its link-state database (LSDB), it uses the
Dijkstra Shortest Path First (SPF) algorithm to calculate the current best routes and adds those to the IP routing
table. OSPF calculates the metric for each possible route by adding up the outgoing interfaces' OSPF costs. The
OSPF cost for an interface can be manually configured, or a router can calculate the cost based on the interface's
bandwidth setting.
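The cost calculation can be sketched numerically (using the classic default reference bandwidth of 100 Mbps; the interface speeds are made up):

```python
# OSPF interface cost derived from bandwidth (default reference: 100 Mbps)
REF_BW_MBPS = 100

def ospf_cost(bw_mbps):
    return max(1, REF_BW_MBPS // bw_mbps)  # cost never drops below 1

# Route metric = sum of the outgoing interface costs along the path:
route_metric = ospf_cost(10) + ospf_cost(100)  # 10 + 1 = 11
```

Note that with the default reference bandwidth, any interface of 100 Mbps or faster rounds to cost 1, which is why the reference bandwidth is often raised on modern networks.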
OSPF Neighbor table – OSPF routers send messages to learn about neighbors, then list those neighbors in the
OSPF neighbor table.
Neighbor States - Down, Init, Two-way, ExStart, Exchange, Loading, Full.
LSDB – OSPF topology table or OSPF database. Consists of lists of subnet numbers (called links), lists of routers,
along with the links (subnets) to which each router is connected. Each router independently uses this information
to run the SPF algorithm to compute the best routes to all the subnets from that router.
Router LSA – Includes a number to identify the router (router ID), the router’s interface IP address and masks, the
state (up or down) of each interface and the cost (metric) associated with the interface.
Link LSA – Identifies each link (subnet) and the routers that are attached to that link. It also identifies the link’s
state (up or down).
IP Routing table – To fill this table, each router runs the Dijkstra SPF algorithm against the LSDB to choose the
best route.
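A toy SPF run over a three-router LSDB might look like this (router names and costs are invented):

```python
import heapq

lsdb = {  # adjacency view of the LSDB: router -> [(neighbor, interface cost), ...]
    "R1": [("R2", 10), ("R3", 1)],
    "R2": [("R1", 10), ("R3", 1)],
    "R3": [("R1", 1), ("R2", 1)],
}

def spf(root):
    """Dijkstra's algorithm: best total cost from root to every router."""
    dist, pq = {root: 0}, [(0, root)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in lsdb[node]:
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                heapq.heappush(pq, (d + cost, nbr))
    return dist

# From R1, the best path to R2 goes via R3 (cost 2), not the direct link (10)
```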
DR (Designated Router) – OSPF dictates that a subnet either should or should not use a designated router based on
the OSPF interface type or OSPF network type (point-to-point or broadcast). These types can be configured with
the ip ospf network type command. With a DR, the topology exchange happens between the DR and every other
router, but not between each pair of routers. All routers learn all the topology info from all routers but only through
the DR. When a DR is required, the neighboring routers hold an election by looking at two fields inside the Hello
packets they receive. The router sending the Hello with the highest OSPF priority setting becomes the DR. In case
of a tie, the router sending the Hello with the highest RID wins. A priority setting of 0 means that the router does
not participate in the election and can never become the DR or BDR. Configurable priority values range from 0 to 255.
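A sketch of the election rule on a made-up router list:

```python
# Highest priority wins; priority 0 never participates; the higher RID breaks ties.

routers = [
    {"rid": "1.1.1.1", "priority": 1},
    {"rid": "3.3.3.3", "priority": 0},   # excluded from the election
    {"rid": "2.2.2.2", "priority": 1},
]

candidates = [r for r in routers if r["priority"] > 0]
dr = max(candidates,
         key=lambda r: (r["priority"], tuple(map(int, r["rid"].split(".")))))
# priorities tie at 1, so 2.2.2.2 (higher RID) becomes the DR
```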
BDR (Backup Designated Router) – The runner-up in the DR election; it monitors the DR and takes over the DR
role if the DR fails.
ABR (Area Border Router) – Router with interfaces connected to the backbone area and to at least one other area.
ASBR (Autonomous System Border Router) – Router that connects to routers that do not use OSPF.
Backbone Router – A router with at least one interface connected to the backbone area (area 0).
Internal Router – A router in a single non-backbone area.
Area – A set of routers that share the same LSDB information.
Backbone Area – A special OSPF area to which all other areas must connect. Area 0.
External Route – Route learned from outside the OSPF domain and then advertised into the OSPF domain.
Intra-area Route – Route to a subnet inside the same area as the router.
Interarea Route – Route to a subnet in an area of which the router is not a part.
Autonomous System – Reference to a set of routers that use OSPF.

EIGRP (Enhanced Interior Gateway Routing Protocol) (Cisco-proprietary protocol) – As soon as an EIGRP
router is discovered and passes the basic verification checks, it becomes a neighbor: it must pass the
authentication process, it must use the same configured Autonomous System Number (ASN), and the source IP
address used in its Hellos must be in the same subnet. EIGRP's metric gives it the ability to choose routes that
include more router hops but faster links.
Neighbor Discovery – EIGRP routers send Hello messages to discover neighboring EIGRP routers.
Topology Exchange – Neighbors exchange full topology updates when the routers first boot up. After that, only
partial updates are exchanged based on changes to the network topology.

Choosing Routes – Each router analyzes its respective EIGRP topology tables to choose the lowest metric route.
EIGRP uses a composite metric, calculated as a function of bandwidth and delay. EIGRP calculates the metric for
each possible route by inserting the values of the composite metric into a formula.

If multiple routes exist and the metric for each is a tie, a router places up to four equal-metric routes into the
routing table by default (configurable to a maximum of 16), sending some traffic over each route.
Variance allows routes whose metrics are relatively close in value to be considered equal, allowing multiple
unequal-metric routes to the same subnet to be added to the routing table.
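The classic composite metric and the variance check can be sketched together (default K values, so only bandwidth and delay matter; the link values are a typical T1 example):

```python
# EIGRP classic composite metric with default K values (K1 = K3 = 1, others 0):
#   metric = 256 * (10^7 / min_bandwidth_kbps + total_delay_usec / 10)

def eigrp_metric(min_bw_kbps, total_delay_usec):
    return 256 * (10**7 // min_bw_kbps + total_delay_usec // 10)

best = eigrp_metric(1544, 20000)  # T1 path: 256 * (6476 + 2000) = 2,169,856
alt  = eigrp_metric(1544, 40000)  # higher-delay path

# variance 2 lets the alternate route be installed alongside the best one
# if its metric is within 2x the best metric (and it is a feasible successor)
variance = 2
load_balance = alt <= variance * best
```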
RTP (Reliable Transport Protocol) – EIGRP Update Messages are sent using RTP.
Successor - For a particular subnet, the route with the best metric is called the successor. The routing table is
updated with this route.
Feasible Successor – Alternative, immediately usable loop free backup routes.
FD (Feasible Distance) – The metric of the best route to reach a subnet, as calculated on a router.
RD (Reported Distance) – The metric as calculated on a neighboring router and then reported and learned in an
EIGRP update.
When a route fails and has no feasible successor, EIGRP uses an algorithm called DUAL (Diffusing Update
Algorithm) that sends queries looking for a loop-free route to the affected subnet.
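The feasibility condition itself is a one-line comparison (the metric values are made-up examples):

```python
# A backup route is a feasible successor only if the neighbor's reported
# distance (RD) is lower than this router's feasible distance (FD) --
# the neighbor being strictly closer to the subnet guarantees no loop.

def is_feasible_successor(rd, fd):
    return rd < fd

fd = 2169856  # metric of the successor route on this router
assert is_feasible_successor(1794560, fd)      # closer neighbor -> usable backup
assert not is_feasible_successor(2681856, fd)  # might loop back -> DUAL must query
```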
The EIGRP network command used without a wildcard mask must use a classful network address. Use a wildcard
mask to match classless (subnet-specific) address ranges.

EIGRP supports MD5 authentication.


RIPv2 (Routing Information Protocol) – The RIPv2 configuration process takes only the following three required
steps, with the possibility that the third step might need to be repeated:
Step 1 Use the router rip configuration command to move into RIP configuration mode.
Step 2 Use the version 2 RIP subcommand to tell the router to use RIP Version 2 exclusively.
Step 3 Use one or more network net-number RIP subcommands to enable RIP on the correct interfaces.
Step 4 (Optional) As needed, use the passive-interface type number RIP subcommand to disable the sending of
RIP updates on an interface.
Each RIP network command enables RIP on a set of interfaces. The RIP network command takes only a classful
network number as its one parameter. The router multicasts routing updates to a reserved multicast IP address,
224.0.0.9, and listens for incoming updates on that same interface. The router advertises the subnet connected to
the interface.
Split-horizon – Routing technique in which information about routes is prevented from exiting the router interface
through which that information was received to help in preventing loops.
Route Poisoning (advertising failed routes) – Advertising the failed route with an infinite metric, as opposed
to simply ceasing to advertise the route. For RIP, 16 hops is infinity, since 15 hops is the maximum valid metric.
Counting to Infinity – Distance vector routing protocols risk causing routing loops during the time between when
the first router realizes a route has failed until all the routers learn of the failed route. Packets may loop around the
network while the routers count to their version of infinity. The counting to infinity process may take several
minutes with the bandwidth of the looping packets crippling the network during the process. RIP routers add 1 to
the metric before advertising the route. The process continues through each periodic update cycle, with all routers
eventually reaching metric 16. At that point, the routes are removed from the routers' routing tables.
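The count-to-infinity behavior can be simulated in a few lines (a deliberately simplified two-router scenario):

```python
# RIP treats 16 as infinity; 15 is the maximum usable hop count.
INFINITY = 16

def advertise(metric):
    """A RIP router adds 1 to the metric before advertising, capping at infinity."""
    return min(metric + 1, INFINITY)

# A failed route ping-pongs between two routers, each believing the other
# still has a path, until the metric counts up to 16:
metric, updates = 1, 0
while metric < INFINITY:
    metric = advertise(metric)
    updates += 1
# after 15 update exchanges the route reaches metric 16 and is removed
```

With 30-second periodic updates and no loop-prevention features, those exchanges are what stretch convergence into minutes.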
Loop prevention features:
Poison Reverse – When a router learns of a failed route, it suspends the split-horizon rule for that route and
advertises the poisoned route (with the special infinite metric) back toward the router from which the failed route
was learned, even though split horizon would normally suppress that advertisement.
Triggered Updates – When a route fails, don’t wait for the next periodic update. Instead, send an immediate
update listing the poisoned route.
Holddown Process – Tells a router to ignore new information about the failed route for a time period called the
holddown time, as counted using the holddown timer to give the routers time to make sure every router knows that
the route has failed. Until the timer expires, routers do not believe any other information about the failed route.
Holddown Timer – The default is 180 seconds with RIP.

BGP (Border Gateway Protocol) – The path-vector exterior gateway protocol used to exchange routing
information between autonomous systems, most notably on the Internet.

There are three popular WAN data link layer protocols: Point-to-Point Protocol (PPP), High-Level Data
Link Control (HDLC), and Frame Relay.

PPP (Point to Point Protocol) – to encapsulate datagrams. PPP has several components:
 A method for encapsulating multiple protocol datagrams.
 The Link Control Protocol (LCP) must be used to establish communications over a PPP link. Each link
end sends LCP packets to configure and test the data link connection. Subsequently, when the link is
established, the peer may be verified by authentication.
 Once the link has been made, a Network Control Protocol (NCP) is used to establish and configure one or
more network layer protocols that will be used for the link. Datagrams from those network-layer
protocols can then be sent over the link connection. The link continues until closed.
PPP provides router-to-router and host-to-network connections over synchronous and asynchronous circuits. PPP
is a successor to SLIP, which was designed to work only with IP; PPP was designed to work with several network
layer protocols, such as IP, IPX, and ARA. PPP also has built-in authentication using CHAP and PAP. PPP relies
on two protocols: it defines an extensible Link Control Protocol (LCP) and proposes a family of Network Control
Protocols (NCP) for establishing and configuring different network-layer protocols.
LCP – LCP is used to automatically agree upon the encapsulation format options, handle varying limits on sizes of
packets, detect a looped-back link and other common misconfiguration errors, and terminate the link. Other
optional facilities provided are authentication of the identity of its peer on the link, and determination when a link
is functioning properly and when it is failing.
This protocol is used to establish, configure and test the data-link connection for a PPP link.
In order to establish communications over a point-to-point link, each end of the PPP link MUST first send LCP
packets to configure and test the data link. After the link has been established, the peer MAY be authenticated.

NCP - The Network Control Protocol (NCP), a protocol in the Point-to-Point Protocol (PPP) suite, provides
services in the PPP link connection process to establish and configure different network-layer protocols such as IP,
IPX or AppleTalk. After an NCP has reached the opened state, PPP will carry the corresponding network-layer
protocol packets. Any supported network-layer protocol packets received when the corresponding NCP is not in
the opened state must be silently discarded. The most commonly used NCPs are IP Control Protocol (IPCP) and
IPv6CP.
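The silently-discard rule can be sketched as a simple gate (IPCP and IPv6CP are real NCPs; the state table and function are illustrative):

```python
# Network-layer packets are carried only when the matching NCP has
# reached the Opened state; otherwise they are silently discarded.

ncp_state = {"IPCP": "Opened", "IPv6CP": "Closed"}

def deliver(protocol, packet, received):
    if ncp_state.get(protocol) == "Opened":
        received.append(packet)  # PPP carries the packet
    # else: silently discard, per the NCP rule

inbox = []
deliver("IPCP", "ipv4-datagram", inbox)
deliver("IPv6CP", "ipv6-datagram", inbox)
# only the IPv4 datagram is delivered, because IPv6CP is not Opened
```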

HDLC (High-Level Data Link Control) - Because point-to-point links are relatively simple, HDLC has only a
small amount of work to do. In particular, HDLC needs to determine if the data passed the link without any errors;
HDLC discards the frame if errors occurred. Additionally, HDLC needs to identify the type of packet inside the
HDLC frame so the receiving device knows the packet type. To achieve the main goal of delivering data across the
link and to check for errors and identify the packet type, HDLC defines framing. The HDLC header includes an
Address field and a Protocol Type field, with the trailer containing a frame check sequence (FCS) field.
HDLC defines a 1-byte Address field, although on point-to-point links, it is not really
needed. On point-to-point WAN links, the router on one end of the link knows that there is only one possible
recipient of the data—the router on the other end of the link—so the address does not really matter. HDLC
performs error detection just like Ethernet—it uses an FCS field in the HDLC trailer. And just like Ethernet, if a
received frame has errors in it, the device receiving the frame discards the frame, with no error recovery performed
by HDLC. HDLC also performs the function of identifying the encapsulated data, just like Ethernet. When a router
receives an HDLC frame, it wants to know what type of packet is held inside the frame. The Cisco implementation
of HDLC includes a Protocol Type field that identifies the type of packet inside the frame. Cisco uses the same
values in its 2-byte HDLC Protocol Type field as it does in the Ethernet Protocol Type field. The original HDLC
standards did not include a Protocol Type field, so Cisco added one to support the first serial links on Cisco
routers, back in the early days of Cisco in the latter 1980s. By adding something to the HDLC header, Cisco made
its version of HDLC proprietary. So, the Cisco implementation of HDLC will not work when connecting a Cisco
router to another vendor’s router.

Frame Relay - Service providers offer a class of WAN services, different from leased lines, that can be
categorized as packet-switching services. In a packet-switching service, physical WAN connectivity exists, similar
to a leased line. However, a company can connect a large number of routers to the packet-switching service, using
a single serial link from each router into the packet-switching service. Once connected, each router can send
packets to all the other routers—much like all the devices connected to an Ethernet hub or switch can send data
directly to each other. Two types of packet-switching service are very popular today, Frame Relay and
Asynchronous Transfer Mode (ATM), with Frame Relay being much more common. Frame Relay has many
advantages over point-to-point links, particularly when you connect many sites via a WAN. For a Frame Relay
service, a leased line is installed between each router and a nearby Frame Relay switch; these links are called
access links. The access links run at the same speed and use the same signaling standards as do point-to-point
leased lines. However, instead of extending from one router to the other, each leased line runs from one router to a
Frame Relay switch. The difference between Frame Relay and point-to-point links is that the equipment in the
telco actually examines the data frames sent by the router. Frame Relay defines its own data-link header and
trailer. Each Frame Relay header holds an address field called a data-link connection identifier (DLCI). The WAN
switch forwards the frame based on the DLCI, sending the frame through the provider’s network until it gets to the
remote-site router on the other side of the Frame Relay cloud. The terms DCE and DTE actually have a second set
of meanings in the context of any packet-switching or frame-switching service. With Frame Relay, the Frame
Relay switches are called DCE, and the customer equipment—routers, in this case—are called DTE. In this case,
DCE refers to the device providing the service, and the term DTE refers to the device needing the frame-switching
service. At the same time, the CSU/DSU provides clocking to the router, so from a Layer 1 perspective, the
CSU/DSU is still the DCE and the router is still the DTE. It is just two different uses of the same terms. The
logical path that a frame travels between each pair of routers is called a Frame Relay VC. Typically, the service
provider preconfigures all the required details of a VC; these VCs are called permanent virtual circuits (PVC).
When R1 needs to forward a packet to R2, it encapsulates the Layer 3 packet into a Frame Relay header and trailer
and then sends the frame. R1 uses a Frame Relay address called a DLCI in the Frame Relay header, with the DLCI
identifying the correct VC to the provider. This allows the switches to deliver the frame to R2, ignoring the details
of the Layer 3 packet and looking at only the Frame Relay header and trailer. Recall that on a point-to-point serial
link, the service provider forwards the frame over a physical circuit between R1 and R2. This transaction is similar
in Frame Relay, where the provider forwards the frame over a logical VC from R1 to R2. VCs share the access link
and the Frame Relay network. For example, both VCs terminating at R1 use the same access link. So, with large
networks with many WAN sites that need to connect to a central location, only one physical access link is required
from the main site router to the Frame Relay network. By contrast, using point-to-point links would require a
physical circuit, a separate CSU/DSU, and a separate physical interface on the router for each point-to-point link.
So, Frame Relay enables you to expand the WAN but add less hardware to do so.
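The DLCI-based forwarding decision inside a single Frame Relay switch can be sketched like this (ports and DLCI numbers are invented; DLCIs are locally significant, which is why they can change across the switch):

```python
# Hypothetical switching table for one Frame Relay switch:
# (incoming port, incoming DLCI) -> (outgoing port, outgoing DLCI)
table = {
    (1, 102): (2, 201),  # PVC carrying frames from R1 toward R2
    (2, 201): (1, 102),  # same PVC, reverse direction
}

def forward(in_port, dlci):
    """The switch looks only at the Frame Relay header, never the Layer 3 packet."""
    out_port, out_dlci = table[(in_port, dlci)]
    return out_port, out_dlci

# A frame from R1 arriving on port 1 with DLCI 102 leaves port 2 as DLCI 201
```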

ATM (Asynchronous Transfer Mode) - A cell-based switching technique that uses asynchronous time-division
multiplexing. This differs from other technologies based on packet-switched networks (such as the Internet
Protocol or Ethernet), in which variable sized packets (known as frames when referencing Layer 2) are used.

SONET (Synchronous Optical Networking) - SONET and Synchronous Digital Hierarchy (SDH) are standardized
multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers or light-emitting
diodes (LEDs).

IPSec VPNs - VPNs that use the IPsec protocol suite to build encrypted, authenticated tunnels at the network
layer, typically between sites or between a remote client and a VPN gateway.

SSL Web VPNs – VPNs secured with SSL/TLS, the same protocol that protects HTTPS, allowing remote users to
connect through a web browser without a dedicated VPN client.

VTP - VLAN Trunking Protocol

Switches (whether trunked or not) are always connected with crossover cables, not straight-through cables. In
CCNA land, there is no such thing as a "smart port" that will auto-detect a crossed connection and fix it. The
Catalyst 2960 has such a feature, but the exam will test your knowledge of when to use a crossover cable. For the
purposes of your exams, if two switches are not connected with a crossover cable, there will be no connectivity
between them, period.

Understanding VTP
VTP is a Layer 2 messaging protocol that maintains VLAN configuration consistency by managing the addition,
deletion, and renaming of VLANs on a network-wide basis. VTP minimizes misconfigurations and configuration
inconsistencies that can cause several problems, such as duplicate VLAN names, incorrect VLAN-type
specifications, and security violations.
Before you create VLANs, you must decide whether to use VTP in your network. Using VTP, you can make
configuration changes centrally on one or more switches and have those changes automatically communicated to
all the other switches in the network. Without VTP, you cannot send information about VLANs to other switches.
VTP is designed to work in an environment where updates are made on a single switch and are sent through VTP
to other switches in the domain. It does not work well in a situation where multiple updates to the VLAN database
occur simultaneously on switches in the same domain, which would result in an inconsistency in the VLAN
database.

VTP Server Mode


In server mode, you can create, modify, and delete VLANs for the entire VTP domain. VTP server mode is the
default mode for a Cisco switch. The number of VTP servers should be chosen to provide the degree of
redundancy that is desired in the network.

VTP Client Mode


If a switch is in client mode, you cannot create, change, or delete VLANs. In addition, the VLAN configuration
information that a VTP client switch receives from a VTP server switch is stored in a VLAN database, not in
NVRAM.

VTP Transparent Mode


Switches configured in transparent mode forward VTP advertisements that they receive on trunk ports to other
switches in the network. VTP transparent mode switches do not advertise their VLAN configuration and do not
synchronize their VLAN configuration with any other switch. In transparent mode, VLAN configurations are
saved in NVRAM (but not advertised to other switches), so the configuration is available after a switch reload.

VTP Pruning
VTP pruning increases network available bandwidth by restricting flooded traffic to those trunk links that the
traffic must use to reach the destination devices. Without VTP pruning, a switch floods broadcast, multicast, and
unknown unicast traffic across all trunk links within a VTP domain even though receiving switches might discard
them. VTP pruning blocks unneeded flooded traffic to VLANs on trunk ports that are included in the
pruning-eligible list. Only VLANs included in the pruning-eligible list can be pruned. By default, VLANs 2
through 1001 are pruning-eligible on switch trunk ports. If the VLANs are configured as pruning-ineligible, the
flooding continues.
VTP pruning is supported with VTP Version 1 and Version 2. Pruning is disabled by default. You need to enable
pruning on only one VTP server switch in the domain.

DTP (Dynamic Trunking Protocol) - Cisco has implemented the Dynamic Trunking Protocol to make setting up
trunks easier. DTP can send and/or receive trunk negotiation frames to dynamically establish a trunk link with a
connected switch. DTP is not necessary to establish a trunk link, and like many other automatic functions, many
administrators would rather not use it and instead manually configure their trunk links. The CCNA exam is not
concerned with DTP, but does ask about the five port modes, so an explanation is warranted.

A switch port can be in one of five modes:

Off—In Off mode, the port is an Access port and will not trunk, even if the neighbor switch wants to. This mode is
intended for the connection of single hosts or hubs. DTP frames are not sent or acknowledged. The command to
enable this is switchport mode access.
On—In On mode, the port will trunk unconditionally, and trunk connectivity will happen if the neighbor switch
port is set to On, Auto, Desirable, or NoNegotiate. DTP frames are sent but not acted upon if received. The
command to enable this is switchport mode trunk.
NoNegotiate—Sets the port to trunk unconditionally even if the neighbor switch disagrees. A trunk will form only
if the neighbor switch port is set to On, Auto, or Desirable mode. DTP frames are not sent or acknowledged. The
command to enable this is switchport nonegotiate.
(Dynamic) Desirable—This mode actively solicits a trunk connection with the neighbor. DTP frames are sent and
responded to if received. A trunk forms if the neighbor is set to On, Desirable, or Auto. If the neighbor is set to
NoNegotiate, the trunk will not form because Desirable needs a response from the neighbor, which NoNegotiate
will not send. The command to enable this is switchport mode dynamic desirable.
(Dynamic) Auto—The port trunks only in response to a DTP request to do so. A trunk forms with a neighbor port
set to on or desirable. DTP frames are not sent but are acknowledged if received. The command to enable this is
switchport mode dynamic auto.
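The five modes' pairwise behavior can be captured in a small decision function (a simplified model of the rules described above, not a real DTP implementation; real DTP has additional edge cases):

```python
# A trunk forms only when BOTH sides end up trunking under their own rules.

def trunk_forms(a, b):
    def trunks(mode, peer):
        if mode == "off":
            return False
        if mode in ("on", "nonegotiate"):
            return True                           # trunks unconditionally
        if mode == "desirable":                   # actively solicits a trunk
            return peer in ("on", "desirable", "auto")
        if mode == "auto":                        # only answers DTP requests
            return peer in ("on", "desirable")
        return False
    return trunks(a, b) and trunks(b, a)

assert trunk_forms("desirable", "auto")    # one asks, the other answers
assert not trunk_forms("auto", "auto")     # both wait; neither sends a request
assert not trunk_forms("desirable", "off") # access side refuses to trunk
```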
Exam Alert
Know the five switch port modes: On, Off, Desirable, Auto, and NoNegotiate.
Know the command to set permanent trunking mode: switchport mode trunk
