NETWORKING CAMPUS
CONFIGURATION AND
ADMINISTRATION
PARTICIPANT GUIDE
Dell Confidential and Proprietary
Copyright © 2019 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.
Course Introduction
  Course Objectives
    Course objectives
    Prerequisite skills
    Course agenda
    Introductions
Products
  Overview
    Enterprise Campus Network Design Considerations
    Enterprise Campus Network Design Hierarchy
    Enterprise Campus Network Design Methods
  Firmware Upgrades
    Firmware Upgrades Overview
    Firmware Upgrades - File Structure
    System Defaults
    Configuration Files
    Firmware Upgrades - TFTP
    Firmware Upgrades - Boot Menu (XMODEM)
    Firmware Upgrades - HTTP
    Upgrade Process Documentation
    Software Upgrade CLI Process - Download Firmware Image
    Software Upgrade Process - Activate and Reload
    Software Upgrade Process - Verify the Upgrade
    Software Upgrade Process - Update Bootcode
Stacking
Products

Overview
Overall Design:
The campus network represents all the infrastructure between users and the applications they access.
Site:
A building or group of buildings that are connected into one enterprise
network that consists of one or more LANs
Users:
Campus network users are employees, guests, and devices that connect to
applications and information using wired and wireless devices
Interconnect:
Interconnect within Campus Networking means connecting the campus core
to the edge of the network and WAN portions of the network
Switch Features:
The design should adhere to three architectural principles: modularity, resiliency, and flexibility.
Modularity Types - The modules of the system are the building blocks that are assembled into the larger campus. The advantage of the modular approach is that failures occurring within one module can be isolated from the remainder of the network. The campus network architecture is based on two basic blocks, or modules, that are connected together through the core of the network: the access-distribution block and the services block.
Access-Distribution Blocks
Access-distribution blocks are probably the most familiar element of the campus architecture and the fundamental component of a campus design. Properly designing the distribution block goes a long way toward ensuring the success and stability of the overall architecture. An access-distribution block consists of two of the three hierarchical tiers within the campus architecture: the access and distribution layers.
Services Block
The services block is a newer element in the campus design. As campus network planners add services, several challenges must be solved. These services include dual-stack IPv4/IPv6 environments, moving to controller-based wireless networks, and migrating toward Unified Communications services. The services block is not necessarily a single entity. There might be multiple services blocks depending on the scale of the network.
Resiliency Types - Resiliency is a basic principle that is made real through many related features and design choices. For example, enabling port security on the access switch controls which frames are permitted inbound from the client. Resiliency principles can be extended to QoS and to routing protocols such as OSPF.
Resilient Power Supplies
Multiple traffic paths create resiliency
Routing protocols
Flexibility Methods - The control plane decides where the traffic goes. The data plane moves the traffic in one interface and out another. The constant evolution of campus network design requires an increasing degree of adaptability, or flexibility. The ability to modify portions of the network, its services, or its capacity without going through major upgrades is key to the effectiveness of campus designs. Key areas where networks are highly likely to evolve over the next few years are:
– Control Plane Flexibility—The ability to support and enable migration between multiple routing, spanning tree, and other control protocols.
– Data Plane Flexibility—The ability to support the introduction and use of IPv6 as a parallel requirement alongside IPv4.
– User Group Flexibility—The ability to virtualize the network forwarding capabilities and services within the campus fabric to support changes in the administrative structure of the enterprise. These changes could involve acquisition, partnering, or outsourcing of business functions.
– Flexible Security Architecture—Increased security threats and changing traffic patterns require a security architecture that can adapt to these changing conditions.
– Traffic Management Flexibility—Unified communications, collaborative business approaches, and software models continue to evolve, along with a trend toward increased growth in peer-to-peer traffic flows. These fundamental changes require campus designs that make security, monitoring, and troubleshooting tools available to support these new traffic patterns.
SFP+ Ports
N2000 Series
– 2x 10G SFP+ ports
Transceiver Detection/Support
– Dell-qualified SFP+ transceivers are sold separately
– Supports SFP+ transceivers
– Supports SFP+ copper Twinax operating at 10 Gb
Details
– Type-A, female USB port
– USB 2.0-compliant flash memory drive
– Formatted as FAT-32
– Copy configuration files and images between USB and switch
– Move files between switches
Module Summary
1. What N-series models are best for the aggregation and core layers of the
network?
N3000 and N4000 Series
2. Which two N1500 series switches offer Power over Ethernet plus capabilities?
N1524P and N1548P
3. What modules are available for the N30xx Series Switch?
Two modules are available: 2-port SFP+ Module, 2-port 10G Base-T
Module
Refer to the student lab guide for instructions to complete the lab.
Introduction
The N-series boot process acts as a boot loader and provides users and channel partners the ability to install the target network operating system (Dell Networking OS6).
Boot Options
– Start Operational Code is what you select when you are done using the Boot Menu.
– Select Baud Rate sets the serial port baud rate for any boot menu function.
– Retrieve Logs provides access to the logs, which is especially useful when the operational code does not boot.
– Load New Operational Code provides a way to load new code when the existing operational code is damaged.
– Reboot causes the switch to reboot immediately.
– Restore Configuration to Factory Defaults wipes out any existing configuration and starts the switch as if it had just been received from the factory.
– Activate Backup Image lets you switch to the second image if you suspect the first image is damaged.
– Start Password Recovery lets you into the switch to recover from a forgotten password.
Reset Password
Factory Reset
Use the following procedure to reset a Dell N-Series switch to factory defaults:
1. Manually reboot the switch.
2. While the switch is booting, wait for the “Dell Networking Boot Options” prompt and select option #2 (Display Boot Menu) within 3 seconds.
3. On the Boot Main Menu, enter choice number 10, Restore Configuration to Factory Defaults. The enable password can then be set as if the switch were new.
Recover Password
Stacking Review
Creating a Stack
– How you cable the stack can vary from switch to switch.
– N4000 switches require stacking ports to be defined; do this before cabling them together.
– From the factory, each switch is set up as unit 1.
– Once a switch is Master, it tries to stay Master.
– When you power on the second switch, its unit number conflicts with the Master. The Master changes the unit number of the second switch to unit 2. When this happens, any configuration on that switch is lost.
– Each additional switch added goes through the same process.
– It is possible to change the unit number prior to stacking, and the configuration is then not lost.
Once a switch has become Master, it always tries to stay Master. A unit that powers up first takes on the Master role. If two or more units power up for the first time within the first few minutes of one another, they elect a Master based on MAC address. If a Master unit fails, the standby unit becomes Master, and that unit then tries to stay Master. If two devices think they are Master, the one with the higher MAC address becomes Master.
If both switches were set to unit 1, the switch that is not Master is set up as unit 2, and any configuration on that unit is lost.
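To avoid losing a switch's configuration when it joins a stack, the unit number can be changed before cabling. A minimal sketch of the idea, assuming Dell Networking OS6 syntax (the exact keywords can vary by model and release, and the unit numbers here are examples only):

```
N1#show switch
N1#configure
N1(config)#switch 1 renumber 2
```

After renumbering, power the switch off, cable it into the stack, and power it back on so that it joins as unit 2 without conflicting with the Master.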
Switch Connections
Connection Methods
It is a good idea to be familiar with all the different methods that are used to
connect the switch. A connection to an N-Series switch can be established
through the serial console, Telnet, SSH, or web interface.
The switch is not configured with a default user name, password, or IP address.
The initial configuration must use the console port.
The serial cable is used to connect the switch to a terminal or serial port on a
personal computer.
Many other Dell switch models use this type of console port.
The command line of the N-Series switch can be accessed several ways.
For the N1100 switches, use the supplied Micro USB to USB serial cable to access the serial console. Connect the Micro USB end of the cable to the serial console port and the USB connector to the personal computer USB port. Download the adapter software and install it on your personal computer. Download and install terminal emulation software on your personal computer (for example, PuTTY). Access the serial console with the correct settings (the default is 9600 baud, 8 data bits, no parity bit, 1 stop bit, and no flow control).
For the N2000 switches, use a supplied RJ45 to DB9 serial cable to access
the serial console. Connect the RJ45 end of the cable to the serial console
port and the DB9 connector to your personal computer. Download and install
the terminal emulation software on your personal computer (for example,
PuTTY). Access the serial console with the correct settings (default setting
is 9600 baud, 8 data bits, no parity bit, 1 stop bit, and no flow control).
In addition to the RJ45 serial port, the N3000 and N4000 series switches have an out-of-band management port that is connected through an Ethernet connection. The serial and out-of-band ports on the N4000 are on the back of the switch. The serial cable cannot be connected to the Ethernet port, and the Ethernet port cannot be used for the initial configuration.
TeraTerm is one of the tools that you can use to connect to a switch through the
serial console. It is a free download from Ayera Technologies and is compatible
with most Microsoft operating systems.
PuTTY
PuTTY is a free open-source terminal emulator application that can act as a
client for SSH, Telnet, rlogin, and raw TCP protocols. It also provides serial port
connection capability. Downloadable versions are available for both Windows
and Linux/Unix operating systems.
For serial connections, you must turn off flow control to allow PuTTY to establish a serial connection to a Dell switch.
Remote Management
Telnet or SSH is used to provide remote access to the switch over an IP network.
Connection Methods
To perform any type of configuration on a switch, you must be familiar with the
different connection methods. A connection to a switch can be established
through the serial console, Telnet, SSH, or a web interface.
Initially, you are required to connect to a switch through a serial connection to
configure it for other connection methods.
Telnet Connection
Telnet is a network protocol that is used on the Internet or local area networks
to provide bi-directional, interactive communications between computer systems
or devices. Typically, Telnet provides access to a CLI on a remote host over a
virtual terminal connection.
SSH
The operating system supports SSH for secure, remote connections to the CLI.
The SSH server can be enabled or disabled.
SSH is used to create a secure remote connection, using commands such as the samples shown here.
It is a good idea to disable Telnet once SSH is turned on. That way, all users are funneled through encrypted remote access.
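As an illustration, enabling SSH on Dell Networking OS6 generally involves generating host keys and then starting the SSH server. This is a sketch, not a complete procedure; the commands for disabling Telnet vary by release, so verify them against the CLI reference for your firmware:

```
N1(config)#crypto key generate rsa
N1(config)#crypto key generate dsa
N1(config)#ip ssh server
```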
The N3000 and N4000 series switches have out-of-band interfaces, which allow the administrator to configure a management network that is not accessible through the switching fabric.
Configure a username and password.
Add the IP address and default gateway to the out of band interface.
You do not have to configure an enable password to use an in-band interface, but Dell EMC recommends it. The enable password is required to set up an out-of-band interface.
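Putting these steps together, a minimal out-of-band setup might look like the following sketch. The username, passwords, and addresses are examples only, and the address/gateway syntax on the out-of-band interface can vary by release:

```
console>enable
console#configure
console(config)#enable password MyEnableS3cret
console(config)#username admin password MyS3cret privilege 15
console(config)#interface out-of-band
console(config-if)#ip address 192.168.2.10 255.255.255.0 192.168.2.1
console(config-if)#exit
```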
Review Question
CLI: Overview
The CLI on N-Series switches is used to control and define the many device
parameters and features. The CLI is hierarchically and modularly structured.
This way the user has better control and insight into the various commands and
levels of configuration. If all the CLI commands were located in one general
interface, the user would find it difficult to control and handle. For example, the
help command would produce an endless command list.
A CLI command is a series of:
– Keywords: The mandatory words that compose the command up to the first parameter. Keywords state the command.
– Parameters: Specify configuration options; some are mandatory and some are optional. There are two types of parameters:
Positional: The position of the parameter matters; parameters must be in a specific order.
Key: The position does not matter; the order may be changed.
In the command snmp-server community dellpvt rw, snmp-server and community are keywords. The inputs dellpvt and rw are key parameters, where dellpvt specifies the community string and rw specifies the SNMP permissions.
CLI Modes
Modes:
– Exec
– Exec Privileged
– Configuration
The CLI is used to navigate between different privileges, protocols and
interfaces.
Each mode has a different prompt.
CLI - EXEC
Config is the shortcut for configure. (Cisco devices require configure terminal.) The prompt starts with (config) and ends with “#”.
Note: Nonexistent interfaces are excluded from the interface range prompt. When creating an interface range, interfaces appear in the order they were entered and are not sorted.
The show range command is available under interface range mode. This
command allows you to display all interfaces that have been validated under the
interface range context.
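For example, a block of ports can be selected and then validated with show range. This is a sketch assuming OS6 syntax; the interface names are examples, and the exact range-mode prompt varies by release:

```
N1(config)#interface range gi1/0/1-8
N1(config-if-range)#show range
```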
Abbreviated Commands
In this example, the IP address for VLAN 10 is removed, and then VLAN 10 itself is removed. The example starts with a show run of VLAN 10 to display its configuration. The IP address and the VLAN are then removed to show the usage of the no command. Finally, the show vlan command is used to confirm that VLAN 10 is removed from the configuration.
For terminal monitor, you must enter the command terminal no monitor instead of no terminal monitor.
Abbreviations must be long enough to uniquely identify the parameter from any of the other parameters. Tab or space initiates command completion.
The do command lets you run a command from a higher level without being at that level.
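As a quick illustration of abbreviations and the do command (the VLAN number and prompts are examples):

```
N1(config)#int vlan 10
N1(config-if-vlan10)#no ip address
N1(config-if-vlan10)#do show vlan
```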
Here is a list of steps that are required to set up the initial switch configuration.
Out-of-Band interface
VLAN Interface
N1(config)#interface out-of-band
N1(config-if)#ip address 192.168.1.1 /24
N1(config-if)#exit
This configuration sets the out-of-band IP address to 192.168.1.1 with 24 bits for the network portion and 8 bits for host addresses.
Review Question
The history buffer is enabled and stores the last 10 commands entered.
These commands can be recalled, reviewed, modified, and reissued.
The buffer is not preserved after the switch resets.
Interface Types
Physical Interfaces
The physical ports on the switch include the out-of-band (OOB) interface
(N3000 and N4000 only) and Ethernet switch ports.
Logical interfaces
Port-based VLANs
VLAN routing interfaces
Link Aggregation Groups (LAGs), also called port channels
Tunnels
Loopback interfaces
Interfaces
Ethernet Interfaces
Ethernet interfaces use a naming scheme that identifies the link speed and its
location within the switch. The naming scheme is:
<Interface Type> Unit#/Slot#/Port#—For example, gi2/0/10 identifies the
gigabit port 10 in slot 0 within the second unit on a nonmodular switch. The
table that follows lists the supported interface type tags.
Unit #—The unit number is greater than 1 only in a stacking solution where
switches are stacked to form a virtual switch. In this case, the Unit# indicates
the logical position of the switch in a stack. The range is 1 through 12. The
unit value is 1 for stand-alone switches.
Slot#—The slot number is an integer that is assigned to a particular slot.
Front panel ports have a slot number of 0. Rear panel ports are numbered
from 1 and can be identified by the Lexan on the rear panel. Use the show
slot command to retrieve information for a particular slot.
Port # — The port number is an integer that is assigned to the physical port
on the switch and corresponds to the Lexan printed next to the port on the
front or back panel. Ports are numbered from 1 to the maximum number of
ports available on the switch, typically 24 or 48.
Firmware Upgrades
Firmware updates can be performed by FTP, TFTP, XMODEM, or through the Web
Interface (GUI).
System Defaults
When the switch is first powered on, neither user nor enable passwords are configured. The hostname is console. No out-of-band or in-band management has been set up. No protocols are configured by default.
Configuration Files
There are three files that are used for storing the switch configuration
information. The first is the startup-config. When the switch is reloaded, it uses
the startup-config to configure itself. If no startup-config is present, the reload
resets the switch to its default configuration. Deleting the startup-config and
reloading the switch is the procedure that is used for resetting a switch.
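The reset procedure just described can be sketched as follows, assuming OS6 command names (verify against the CLI reference for your release before using this on a production switch):

```
N1#delete startup-config
N1#reload
```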
There is a backup-config file where you can keep a copy of the startup-config, in
case you lose or corrupt the startup-config. Also, it is a good idea to keep an
extra backup by copying it to an off-switch location.
The running-config file is used to keep the currently active switch configuration.
When the switch is reloaded, the running-config is built from the startup-config.
As the network administrator changes the configuration, the changes are incorporated into the running-config, but not the startup-config. If the switch were to reload before the changes in the running-config were copied to the startup-config, the changes made by the administrator would be lost. It is recommended that the running-config be copied to the startup-config often. The copy running-config startup-config command was discussed earlier as an example of command line shortcuts. There is an even shorter way to do this operation: the command write, which executes a copy running-config startup-config.
Use the show running-config command to display the content of the running
configuration. There may be a lot of content to display, so the output can be
piped into a script capable of filtering the output.
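In practice, saving and inspecting the configuration might look like this sketch (whether output filtering with | is supported, and which filter keywords exist, depends on the OS6 release):

```
N1#copy running-config startup-config
N1#write
N1#show running-config | include vlan
```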
Upgrading the firmware using the boot menu is a last resort option. This
method is used when the switch is unable to complete startup of the runtime
code. Using Xmodem is much slower than doing the upgrade via TFTP, which
is why TFTP is the preferred method when possible.
Before performing an upgrade via Xmodem, you’ll want to set the terminal baud
rate to the highest speed possible, which in most cases is 115200 bps. If that
choice is not available, then 57600 or 38400 might be the highest you can set
the baud rate.
Information on upgrading the firmware via Xmodem can be found on the
Upgrade via Boot Menu page in the Switch Administration and Management
module.
Another option for upgrading the firmware on Dell switches is via HTTP. This
method does not require additional software and works with Internet Explorer
and Firefox web browsers.
Using this method, you can download or upload configuration files and
download software images.
Only DNOS 6.0 uses a web interface.
Always download and follow the firmware upgrade documents for the new version of code, as version-specific restrictions, upgrade paths, or required commands may be included in the upgrade instructions.
If the boot code version of your system is equal to or higher than the version mentioned, DO NOT proceed with the upgrade process. A downgrade may be needed to include a switch in an existing stack that is already in production. If you have questions regarding the boot code version for your system, contact technical support.
The general procedure for upgrading the software is the same on the N2000 and
N3000 switches. The N4000 series is slightly different, but similar. The process is
documented in detail for each new release, and can be found on the Dell Network
website.
The process of upgrading the firmware begins by saving the current configuration and, as a best practice, copying it off the switch for safekeeping. Then the new version of the firmware is copied to the switch into a file called “backup.” Backup is a name used to refer to one of the “image” files in the switch’s file system we saw earlier.
The next step is to boot the system using the backup file that contains the new firmware release you just downloaded. The boot process makes the backup file the active file and makes the current active file the backup. So, if you want to boot from the new code on the next boot, the command is "boot system active". Always do a "show version" to verify the OS you will be booting from.
Once the boot completes, you have to reload. You receive a warning about unsaved changes, which you should respond to with “y”, and then “y” again. The reload continues using the startup-config.
To verify that the new firmware is installed, show the version again and make sure the active image is the latest code. The previous release is the backup.
The last steps in the process are to issue the update bootcode command and then reload.
The update bootcode command is not documented in the help files.
If you are upgrading a stacked switch, the process is the same; it just takes longer depending on the size of the stack.
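Putting the whole upgrade together, the command sequence might look like this sketch. The TFTP server address and image filename are examples only; check the release notes for your model for the exact image name and any release-specific steps:

```
N1#copy running-config startup-config
N1#copy tftp://192.168.1.10/N3000v6.x.x.x.stk backup
N1#boot system backup
N1#show version
N1#reload
N1#update bootcode
```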
Review Question
Module Summary
1. What is the number of interface types available with Dell Networking OS 6.X?
Refer to the student lab guide for instructions to complete the lab.
VLANs

VLAN Overview
A Virtual LAN (VLAN) is a group of PCs, servers, and other network resources that behave as if they were connected to a single network segment. Think of a VLAN as a subnet. A VLAN is essentially its own broadcast domain.
VLANs provide greater network efficiency by reducing broadcast traffic, and also enable you to make network changes without having to update IP addresses or IP subnets. VLANs inherently provide a level of network security, since traffic must pass through a Layer 3 switch or a router to reach a different VLAN.
VLANs:
– Divide a network into smaller broadcast domains, reducing unnecessary broadcasts and improving network performance
– Block traffic between VLANs, improving security
– Make network management easier
– Require a Layer 3 routing process (network routers) for inter-VLAN communications
VLAN Tagging
VLAN tagging creates a logical separation between devices based on VLAN tags. The IEEE defines the tags in the 802.1Q specification for Ethernet framing. The VLAN ID is stored inside the 802.1Q tag.
With frame tagging, a four-byte tag field is inserted into frames that cross the network. The tag identifies which VLAN the frame belongs to. The tag may be added to the frame by the end station itself or by a network device, such as a switch. The tag may also specify the relative priority of the frame in the network.
A VLAN is a broadcast domain and isolates a computer network at the Data Link
Layer. Traffic can only pass between VLANs at Layer 3.
Access
An access port connects to a single end station belonging to a single VLAN.
An access port is configured with ingress filtering enabled and accepts either
an untagged frame or a packet that is tagged with the access port VLAN.
Tagged packets received with a VLAN other than the access port VLAN are
discarded. An access port transmits only untagged packets.
Trunk
A trunk port connects two switches. A trunk port may belong to multiple
VLANs. A trunk port accepts only packets that are tagged with the VLAN IDs
of the VLANs to which the trunk is a member. If there is a native VLAN
configured on the port, it accepts untagged packets as well. A trunk port only
transmits tagged packets for member VLANs other than the native VLAN
and untagged packets for the native VLAN.
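A trunk port carrying two tagged VLANs with a native VLAN might be configured as in this sketch (the interface and VLAN numbers are examples, and the prompt shown is illustrative):

```
N1(config)#interface te1/0/24
N1(config-if)#switchport mode trunk
N1(config-if)#switchport trunk allowed vlan 10,20
N1(config-if)#switchport trunk native vlan 99
```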
General
A general port can be a tagged or untagged member of multiple VLANs, and ingress filtering can be enabled or disabled, providing full 802.1Q flexibility.
Switch Filtering
Switch Filtering
During the process of a frame entering, flowing through, and exiting the switch,
filters are applied to narrow down the number of unnecessary frames. Three filters
are applied when a frame enters a switch port. If any of the conditions are not met,
the frame is dropped.
Filtering Database
Either static or dynamic entries
Either unicast or multicast entries
Forwarding Decisions
Known MAC address frames – look up in Content Addressable Memory
(CAM) address table. Lookup key is based on both VLAN tag and
destination MAC address – leading to the required egress port
Broadcast frames – lookup is done directly at the VLAN Port Table (flooding
to all ports of the VLAN)
Unknown unicast frames – initial lookup in MAC forwarding table, when
entry is not found – flooding is performed based on the VLAN Port Table
One rule is applied when a frame exits a switch port.
Egress Rules Filter
VLAN Configuration
Creating VLANs
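As an illustration, creating a VLAN and assigning an access port might look like this sketch. The VLAN and interface numbers are examples; on older OS6 releases, VLANs were created in vlan database mode instead:

```
N1(config)#vlan 10
N1(config-vlan10)#name Sales
N1(config-vlan10)#exit
N1(config)#interface gi1/0/5
N1(config-if)#switchport mode access
N1(config-if)#switchport access vlan 10
```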
To view the VLAN membership of a specific port, use the show interfaces
switchport command.
Console#show interfaces switchport <switchport>
Troubleshooting VLANs
VLAN assignment
Use the show vlan command to determine the VLANs created on the
switch and which ports are assigned to the VLANs.
Switchport mode
Use the show interfaces switchport <switchport> command to
display the complete switchport VLAN configuration for all possible switch
mode configurations of an interface. To confirm that the ports are in the
correct mode, review the VLAN membership mode.
VLAN mismatch between switches
Native VLAN mismatches - Trunk ports are configured with different native
VLANs.
Trunk mode mismatches - One trunk port is configured with trunk mode off
and the other with trunk mode on.
Allowed VLANs on trunks - The list of enabled VLANs on a trunk has not
been updated with the current VLAN trunking requirements.
1. Use the show vlan command to confirm the native VLAN and the other
created VLANs.
Module Summary
2. What three filters are applied to a frame when it enters a switch port?
Refer to the student lab guide for instructions to complete the lab.
Introduction
This module covers the Spanning Tree Protocol in a Dell EMC networking
environment.
Overview
Per VLAN RSTP (RSTP-PV) is the IEEE 802.1w (RSTP) standard implemented per VLAN. This module covers RSTP-PV in more detail.
Switches in the network determine the root bridge and compute the port roles, which are called root, designated, or blocked. To ensure that each bridge has enough information, the bridges use special data frames called Bridge Protocol Data Units (BPDUs) to exchange STP information.
STP Convergence
Spanning tree in a network has converged when the following have been determined:
– Root bridge (switch)
– Root ports (forwarding)
– Designated ports (forwarding)
– Blocked ports
(Figure: a five-switch topology in which Switch D, with bridge priority 4096, is the root bridge; Switches A, B, C, and E each have a root port toward the root, designated ports are forwarding, and redundant links are blocked.)
All ports on a root switch are designated ports and are always forwarding. The
same parameters have been met that were identified in the previous example.
An STP-enabled switch sends a Bridge Protocol Data Unit (BPDU) frame using the unique MAC address of the port itself as the source address. The destination address is set to the STP multicast address 01:80:C2:00:00:00, which enables all STP-aware switches in the same LAN to receive the BPDU frame. BPDUs are exchanged every 2 seconds by default and enable switches to track network changes.
When a device is first connected to a switch port, it does not immediately forward data. Instead, it goes through several states while it processes BPDUs and determines the topology of the network. The process begins with the election of a root bridge and takes approximately 50 seconds.
Root Bridge
The root bridge of the spanning tree is the bridge with the lowest bridge ID and is
where all traffic aggregates. Each bridge has a unique identifier (ID) and a
configurable priority number. The bridge ID is a concatenation of these numbers.
The unique ID is the MAC address of the switch. Default priority is 32768. Best
practice suggests having the root bridge as close to the network gateway as
possible.
To assign a static root switch, you must change the default bridge priority of 32768.
This value must be lowered for the switch to be assigned the root bridge role,
and it is changed in increments of 4096. Setting the switch priority to 4096
while all the other switches remain at 32768 causes it to be elected the root
switch. A bridge priority of “0” prevents a switch from participating in the
root election; however, not all vendors observe this rule.
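The election described above can be sketched as a small comparison: the bridge ID concatenates the configurable priority with the switch MAC address, and the numerically lowest bridge ID wins. This is an illustrative model only; the MAC addresses below are invented examples.

```python
# Sketch of STP root bridge election: lowest (priority, MAC) pair wins,
# with the MAC address breaking a priority tie.

def elect_root(bridges):
    """bridges: list of (priority, mac) tuples; returns the winner."""
    return min(bridges,
               key=lambda b: (b[0], bytes.fromhex(b[1].replace(":", ""))))

switches = [
    (32768, "00:1e:c9:aa:00:05"),   # default priority
    (32768, "00:1e:c9:aa:00:01"),   # default priority, lower MAC
    (4096,  "00:1e:c9:aa:00:09"),   # priority lowered in a 4096 increment
]
print(elect_root(switches))   # (4096, '00:1e:c9:aa:00:09'): priority wins
```

With all priorities left at the default, the tie falls through to the MAC comparison, which is why lowering the priority is the only reliable way to pin the root.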
Port States

STP State      RSTP State
Forwarding     Forwarding
Learning       Learning
Listening      Discarding
Blocking       Discarding
Costs
Port cost is a value that is based on the interface type. The greater the port cost,
the less likely the port is selected to be a forwarding port. Port costs were modified
from the original bandwidth reference for 10 Mbps Ethernet from the 1970s. With
ever-increasing bandwidth, port costs had to be changed to remain relevant to
calculations in STP.
The forwarding port typically has the most bandwidth and is closest to the root
switch. The default port cost can be altered to enable the switch to select a specific
port to become a root port.
Link Speed   STP Cost (802.1D)   Long Path Cost (802.1t)
16 Mb/s      62                  1,250,000
1 Gb/s       4                   20,000
2 Gb/s       3                   10,000
10 Gb/s      2                   2,000
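Root port selection by cumulative cost can be sketched as follows. The topology and port names here are hypothetical; the costs are the long path cost values from the table above.

```python
# Illustrative sketch: a non-root switch picks as its root port the port with
# the lowest cumulative path cost to the root bridge.

PORT_COST = {"1G": 20_000, "2G": 10_000, "10G": 2_000}   # 802.1t values

def pick_root_port(candidates):
    """candidates: {port: (cost_advertised_by_neighbor, local_link_speed)}."""
    return min(candidates,
               key=lambda p: candidates[p][0] + PORT_COST[candidates[p][1]])

ports = {
    "Te1/0/1": (0, "1G"),        # root reachable directly over 1 Gb/s
    "Te1/0/2": (0, "10G"),       # root reachable directly over 10 Gb/s
    "Te1/0/3": (2_000, "10G"),   # root one extra 10 Gb/s hop away
}
print(pick_root_port(ports))     # Te1/0/2: total cost 2,000 is the lowest
```

This mirrors the text: the higher-bandwidth (lower-cost) path closest to the root wins, and altering a default port cost changes which port is selected.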
STP Enhancements
DirectLink Group
A DirectLink group consists of:
The root port.
All ports that provide an alternate connection to the root bridge.
Ports that are self-looped are excluded.
DirectLink Rapid Convergence (DRC)
DRC - Failover
“Immediate” transition to the forwarding state
Violates IEEE standard behavior: no listening/learning state transitions
Floods dummy multicast packets on the new uplink
Hysteresis prevents an immediate transition back; the delay is equal to
2 x the forwarding delay
IRC Flow
When a port receives a negative RLQ response, it has lost connection to the root
and the switch ages out its BPDU. If all other nondesignated ports received a
negative answer, the switch has lost the root and restarts the STP calculation.
If the response confirms that the switch can still access the root bridge, it
immediately ages out the port on which the inferior BPDU was received.
If the switch only received responses with a root different from the original root, it
has lost the root port and restarts the STP calculation immediately.
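The IRC decision flow above can be condensed into a small function. This is an illustrative model only, not DNOS code; the function and event names are invented for the sketch.

```python
# Illustrative model of the IRC flow: how a switch reacts to Root Link Query
# (RLQ) responses received on its nondesignated ports.

def handle_rlq_response(response_root, original_root, all_ports_negative=False):
    """response_root: root bridge ID in the RLQ answer, or None if negative."""
    if response_root is None:                      # negative answer
        if all_ports_negative:
            return "root lost: restart STP calculation"
        return "age out BPDU on this port"
    if response_root == original_root:             # root still reachable
        return "age out port that received the inferior BPDU"
    return "root port lost: restart STP calculation immediately"

print(handle_rlq_response(None, "root-A", all_ports_negative=True))
print(handle_rlq_response("root-A", "root-A"))
print(handle_rlq_response("root-B", "root-A"))
```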
The IEEE published the Rapid Spanning Tree Protocol (RSTP) standard as 802.1w
in 2001. RSTP is essentially the same as STP; however, it provides faster
convergence and interoperability with switches that are configured with STP.
RSTP achieves an approximately 90% faster reconfiguration time than STP by:
Reducing the number of state changes before active ports start learning.
Predefining an alternate route that can be used when a node or port fails.
RSTP Configuration
Command Description
Introduction
RSTP-PV Overview
RSTP-PV is the IEEE 802.1w (RSTP) standard that is implemented per VLAN. A
single instance of rapid spanning tree (RSTP) runs on each configured VLAN. Each
RSTP instance on a VLAN has a root switch. The RSTP-PV protocol state
machine, port roles, port states, and timers are similar to the ones defined for
RSTP. RSTP-PV embeds the DirectLink Rapid Convergence (DRC) and
IndirectLink Fast Rapid Convergence (IRC) features, which cannot be disabled.
RSTP-PV is not compatible with protocol-based VLANs. Ensure that ports that are
enabled for per-VLAN spanning tree are not configured for protocol-based VLAN
capability.
Dell EMC Networking N-Series switches support both Rapid Spanning Tree Per
VLAN (RSTP-PV) and Spanning Tree Per VLAN (STP-PV).
RSTP-PV Limitations
RSTP-PV Configuration
Command Description
Optional Features
Introduction
This lesson covers the following optional STP features that are supported on the
Dell EMC Networking N-Series switches:
PortFast
BPDU filtering
BPDU flooding
Root guard
Loop guard
BPDU protection
PortFast
The PortFast feature reduces the STP convergence time by enabling edge ports to
transition to the forwarding state without going through the listening and learning
states.
BPDU Filtering
Ports that have PortFast enabled continue to transmit BPDUs. The BPDU filtering
feature prevents PortFast-enabled ports from sending BPDUs.
Enabling BPDU filtering on a specific port prevents the port from sending BPDUs
and enables the port to drop any BPDUs it receives.
BPDU Flooding
The BPDU flooding feature determines the behavior of the switch when it receives
a BPDU on a port that is disabled for spanning tree. If BPDU flooding is configured,
the switch floods the received BPDU to all the ports on the switch which are
similarly disabled for spanning tree.
Root Guard
Root guard is a way of controlling the spanning-tree topology other than
setting the bridge priority or path costs. Root guard ensures that a port does
not become a root port or a blocked port. A switch that is elected as root
bridge has all ports set as designated ports. If the switch receives a superior
STP BPDU on a root-guard-enabled port, the root guard feature moves the port to
a root-inconsistent spanning-tree state. No traffic is forwarded across the
port: it continues to receive BPDUs, discards received traffic, and remains
part of the active topology. Essentially, this state is equivalent to the IEEE
802.1D listening state. By not transitioning the port on which the superior
BPDU was received to the forwarding state, root guard helps maintain the
existing spanning-tree topology.
Loop Guard
Loop guard protects a network from forwarding loops that are induced by BPDU
packet loss. The reasons for failing to receive packets are numerous, including
heavy traffic, software problems, incorrect configuration, and unidirectional link
failure. When a nondesignated port no longer receives BPDUs, the spanning tree
algorithm considers the link as loop free and transitions the link from blocking to
forwarding. Once in the forwarding state, the link may create a loop in the network.
Enabling loop guard prevents such accidental loops. When a port is no longer
receiving BPDUs and the max age timer expires, the port is moved to a loop-
inconsistent blocking state. In the loop-inconsistent blocking state, traffic is not
forwarded so the port behaves as if it is in the blocking state. It discards received
traffic, does not learn MAC addresses, and is not part of the active topology. The
port remains in this state until it receives a BPDU. It transitions through the normal
spanning tree states that are based on the information in the received BPDU.
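The two guard reactions described in this lesson can be sketched as simple state decisions. This is not DNOS code; the states and event names are simplified for the example.

```python
# Illustrative sketch of root guard and loop guard reactions.

def root_guard_state(received_superior_bpdu: bool) -> str:
    # A superior BPDU on a root-guard port forces the root-inconsistent state.
    return "root-inconsistent" if received_superior_bpdu else "normal"

def loop_guard_state(bpdu_seen: bool, max_age_expired: bool) -> str:
    # No BPDUs by max-age expiry: hold the port in loop-inconsistent blocking
    # instead of letting it transition toward forwarding and risk a loop.
    if not bpdu_seen and max_age_expired:
        return "loop-inconsistent (blocking)"
    return "normal"

print(root_guard_state(True))         # root-inconsistent
print(loop_guard_state(False, True))  # loop-inconsistent (blocking)
```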
BPDU Protection
When the switch is used as an access layer device, most ports function as edge
ports. An edge port has a single, direct connection and is configured for the
fast transition to the forwarding state. When such a port receives a BPDU
packet, the system sets it to a nonedge port and recalculates the spanning
tree, which causes network topology flapping. Normally, these ports do not
receive BPDU packets. However, an attacker may forge BPDUs to maliciously
attack the switch and cause network flapping.
BPDU protection can be enabled in RSTP to prevent such attacks. When BPDU
protection is enabled, the switch disables an edge port that has received a
BPDU and notifies the network manager.
Module Summary
1. How does RSTP determine the root bridge if all the switches have the same
priority value?
4. What optional feature ensures that a port does not become a root port or a
blocked port?
Refer to the student lab guide for instructions to complete the lab.
Introduction
This module introduces Link Aggregation Groups, or LAGs, on Dell EMC N-Series
switches. It also covers how to configure and monitor LAGs and various LAG
implementations.
LAG Overview
Introduction
This lesson explains what a Link Aggregation Group or LAG is, related terminology,
the two types of LAGs and their supported configurations.
An administrator can statically set the STP cost of the LAG by using the
spanning-tree cost command on the port channel.
Load sharing
Network traffic is balanced across a LAG. User configurable hashing algorithms
are used to optimize load balancing across the physical links in a LAG.
A static LAG is set up only once. An administrator can add or remove links
manually. It is the responsibility of the administrator to see that both ends of the
link are configured correctly. If the links are not configured correctly, there is no
underlying protocol to detect errors.
A dynamic LAG uses Link Aggregation Control Protocol—LACP to exchange
information between link endpoints.
Supported Configurations
Supported Configurations
Physical connections
A physical interface can belong to only one port channel.
All interfaces in the port channel must operate at the same speed.
Only those interfaces that match the speed of the first interface in the port
channel are enabled.
A port channel is "UP" when at least one member link is up.
Port configuration
All the physical ports in the link aggregation group must reside on the same
switch. If a virtual switch is created out of stacked switches, the port channel
interfaces may come from any switch in the stack. Stacked switches provide
high availability by spreading the port channel interfaces across multiple
switches in a single virtual switch.
The port channel must be configured the same on each switch. For
example, a static port channel is configured as static on both switches that
are connected to the link.
Dedicated ports are used between each switch, with a static LAG between
switches B and C and an LACP LAG between switches A and B.
Within the industry, both LACP and static LAGs are described as IEEE LAGs. IEEE
defines both types of LAG in its standards. However, only LACP includes the
standardized control protocol.
This example is not valid for the following reason: Switch A is configured for static,
and switch B is configured for LACP. In this case, the port channel does not come
up.
On switch A, the links are aggregated to form static logical port channel 1, or
po1. A show interface po1 command displays the interface as both
administratively up and operationally up. This condition is due to the individual
links being up.
Switch B po1 would not group and would remain in an administratively up,
operationally down state.
Since the port channel does not fully come online, there are implications for
performance as STP blocks the highest numbered interfaces that are
redundant.
The main thing to note here is that a single LAG CANNOT be split across three
switches. Depending on which links came up first, two members would be UP while
the other two links would be down. This condition is due to the different
SYSTEM MAC addresses received within the LACP PDUs.
The main thing to note for this example is that the LAG shows up active on each
device but WILL NOT work properly. A LAG between a dual-NIC server and a
switch is valid, and is discussed later. However, trying to aggregate links
between switches together with links between a switch and a server is not valid.
Introduction
This lesson displays and describes the commands that are used to create a static
port channel.
Notes on commands
The interface range command that is displayed on this slide groups switch
ports 1, 2, 6 and 7, and then modifies their configuration as one group. The
ports are each 10-Gbps Ethernet interfaces. Notice how the prompt changes
(config ==> config-if) after entering this command. This new prompt indicates
that the next command applies to the group of interfaces specified in the
previous command.
The channel-group command creates the port channel from the interfaces
that are specified in the previous command. The 1 in the command specifies the
creation of port channel 1, or Po1. The on parameter specifies that Po1 is a
static port channel.
Remember that these commands must be run on both switches that attach to the
port channel.
This screen displays information for a different port channel than created on the
previous slide.
What if a LAG contains links that are distributed across stacking units? The
default behavior is to distribute locally received ingress traffic across all
LAG links in the stack per the selected hashing algorithm. When the local
option is enabled, traffic is forwarded only on the LAG interfaces attached to
the ingress stacking unit; forwarding to LAG interfaces on other stacking units
is disabled. Forwarding paths are reduced by restricting LAG hashing to only
the egress links on the stack unit where the traffic ingresses.
CAUTION: If the capacity of the local egress LAG links is exceeded, traffic is
discarded. Use of the local option should be carefully considered before enabling.
The operator must ensure that sufficient egress bandwidth is available in the LAG
links on every stack member to avoid excessive discards.
Introduction
This lesson displays and describes the commands that are used to configure and
verify dynamic port channels.
The commands used to configure a dynamic port channel are similar to the
commands used to configure a static port channel.
Remember that these commands must be run on both switches that attach to the
port channel.
This screen displays information for a different port channel than created on the
previous slide.
LAG Hashing
Introduction
This lesson introduces LAG hashing and explains the ways that traffic is
distributed across the multiple links in a port channel.
This diagram shows flows in one direction only. Response to these I/O flows
traveling in the other direction from the switch on the right may use a different link
on the LAG. Each switch calculates hashing independently.
Enhanced hashing mode is the recommended and default hashing mode for Dell
EMC Networking N-Series switches.
The various hashing algorithms use some variation of the following information
from the MAC and IP header:
Source or destination MAC address
Source or destination IP address
Source or destination TCP or UDP port number
EtherType
Source switch module and port ID
It is possible that traffic may not be balanced across the links, depending on which
hashing mode is used. For example, if most traffic is directed at a single IP
address, all of that traffic would traverse a single link if the hashing mode is set to
destination IP address. It is important to understand traffic patterns when setting
the hashing mode.
There are seven LAG hashing modes. They are displayed using the hashing-
mode ? command. Mode 7, Enhanced hashing mode, is recommended and set by
default because it usually has the best load-balancing performance.
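The traffic-pattern caveat above can be demonstrated with a toy hash. This is a conceptual sketch only; the actual DNOS hashing algorithms differ, and the addresses below are invented.

```python
# Conceptual sketch of hash-based LAG load sharing: the selected header
# fields are hashed, and the result modulo the member count picks one
# physical link for the whole flow.
import zlib

def pick_link(flow_fields, n_links):
    key = "|".join(str(f) for f in flow_fields).encode()
    return zlib.crc32(key) % n_links     # same flow always maps to same link

sources = ["10.0.0.%d" % i for i in range(50)]

# Hashing source + destination + port spreads many flows across four links.
spread = {pick_link((src, "10.1.0.1", 443), 4) for src in sources}

# Hashing only the destination IP collapses everything onto one link when all
# traffic targets a single server, as the text warns.
collapsed = {pick_link(("10.1.0.1",), 4) for src in sources}

print(sorted(spread))      # multiple links in use
print(sorted(collapsed))   # a single link carries everything
```

The more fields the hashing mode includes, the more distinct flows it can separate, which is why understanding the traffic pattern matters when choosing a mode.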
Introduction
This lesson covers common deployment scenarios in which LAGs are used in a
campus environment.
Requirements:
The server must have a Network Interface Card—NIC teaming configuration.
NIC teaming enables multiple Ethernet network interface adapter ports on the
server to act as a single virtual network adapter port.
NIC teaming only provides load balancing and failover when multiple
network adapter cards are used.
The NIC team uses the MAC address of the primary NIC team member.
LACP is configured to provide dynamic link aggregation and to communicate
with LACP running on the switch.
Switches should have LACP enabled and use dynamic port channels.
When members are deleted from a LAG they become normal links, and
spanning tree maintains their individual link state information.
If there is more than one LAG between two switches, STP blocks one of them to
prevent network loops. This is the same behavior as for non-LAG interfaces.
STP causes the switch to select the path cost based on the link speed. The default
cost values are:
VLANs treat the port channel as a single interface, not as multiple individual
interfaces.
Features such as VLAN trunking apply to the port channel, not to the individual
paths that make up the port channel.
The LAG interface as a whole can be a member of a VLAN complying with
IEEE 802.1Q.
When members are added to a LAG, they are removed from all existing VLAN
membership. LAG members assume the VLAN membership of the LAG.
When members are removed from a LAG, they are added back to the VLANs
that they were previously members of as per the configuration file. The VLAN
membership for a port still can be configured when it is a member of a LAG.
However this configuration is only applied when the port leaves the LAG.
Module Summary
Refer to the student lab guide for instructions to complete the lab.
Introduction
Introduction
This lesson introduces the Multi-switch Link Aggregation Group or MLAG feature,
and compares it to LAG and stacking, explains basic operation, and discusses
limitations.
Introduction to MLAG
An MLAG enables a port channel from a single switch to connect with two MLAG
peer switches. The peer switches must have a peer link between them.
A LAG has multiple connections that act as one larger point-to-point connection.
An MLAG enables two switches to act like one switch from a point-to-point LAG
perspective. This feature enables a switch to create a LAG to two separate
switches for physical diversity, while still acting like a single bundled
interface to manage.
MLAG Advantage
STP Blocking
STP is deployed to avoid packet storms due to loops in the network. STP sets
redundant ports to the blocking state; these ports do not carry traffic. When a
topology change occurs, STP reconverges.
MLAG
MLAG acts as one switch, not two, so a loop is not created. None of the links
are blocked, and traffic can flow over both links.
MLAG Components
MLAG components
MLAG switches
– MLAG-aware switches run Dell Network operating system switch firmware.
No more than two MLAG-aware switches can pair to form one end of the
LAG.
– Stacked switches do not support MLAGs. SW1 and SW2 are MLAG peer
switches. The switches form a single logical end point for the MLAG from the
perspective of Switch A.
MLAG interfaces
– MLAG functionality is a property of port channels.
– Port-channels configured as MLAGs are called MLAG interfaces.
– Administrators can configure multiple instances of MLAG interfaces on the
peer MLAG switches.
– Port-channel limitations and capabilities like min-links and maximum number
of ports that are supported per LAG also apply to MLAG interfaces.
MLAG member ports
– Ports on the peer MLAG switches that are part of the MLAG interface (P1 on
SW1 and S1 on SW2).
Non-redundant ports
– Ports on either of the peer switches that are not part of the MLAG (ports P4
and S4). MLAG interfaces and non-redundant ports cannot be members of
the same VLAN. A VLAN may contain MLAG interfaces, or a VLAN may
contain non-redundant ports, but not both.
MLAG peer-link
– A link connects two MLAG peer switches (ports P2, P3, S2, S3). Only one
peer-link can be configured per device.
– The peer-link is crucial for the operation of the MLAG component.
– A port channel must be configured as the peer-link.
– All VLANs configured on MLAG interfaces must be configured on the peer-
link as well.
MLAG dual control plane detection link
– A virtual link that is used to advertise the Dual Control Plane Detection
Protocol (DCPDP) packets between the two MLAG switches. DCPDP is
optional and should be used cautiously.
– The protocol is used as a secondary means of detecting the presence of the
peer switch in the network.
– Do not configure the DCPDP protocol on MLAG interfaces.
The peer-link:
Must be a LAG
o Dynamic LAGs are recommended over static LAGs
Must have spanning tree disabled
Should be configured as a trunk port
o Can only support MLAG VLANs
o Must remove non-MLAG VLANs
Should have multiple links to carry the bandwidth of the LAG partner
MLAG Peers
MLAG Peers:
MLAG supports exactly two switches: not one, not three.
No stacking: switches that are part of a stack cannot also perform MLAG
functions.
The switches elect a primary and a secondary switch.
The primary switch handles the LACP and STP protocols for redundant interfaces.
Each switch handles its own non-redundant interfaces.
The Forwarding Database (FDB) is synchronized between the switches.
Stacking vs MLAG
Stacking and MLAG can provide similar functions. The difference is in how the
stack is managed:
Stacking has a consolidated management structure.
Master controls the configuration of the whole stack.
If the stack needs a firmware upgrade, the whole stack must be upgraded
simultaneously. Upgrades require scheduled down time to reboot each
switch after a firmware upgrade.
MLAG has an independent management structure.
Configuring the Dual Control Plane Detection Protocol is optional because the
keep-alive messages that are sent through the MLAG peer-link are sufficient for
setting up MLAG.
MLAG Caveats
MLAG Considerations
Peer switches must be the same model. For example, both switches are N3048.
Peer switches must run the same firmware version. For example, 6.1.
N2000 and N4000 series cannot be peers because of different table sizes.
No stacking: MLAG is formed with two stand-alone switches only.
Upgrade scenario: minimally disruptive (not hitless); reconvergence is
equivalent to spanning tree.
Link failover: momentary packet loss (~2 seconds); momentary LAG flap on MLAG
partners.
Primary switch failure: ~14 seconds; reconvergence is equivalent to spanning
tree.
MLAG Incompatibilities
To enable MLAG globally, go to configure mode and issue the feature vpc
command. Verify it is enabled with the show vpc brief command.
The peer-link is crucial for MLAG operation. The peer-link must be configured on a
port channel interface. Only one peer-link aggregation group is enabled per peer
switch. All instances of MLAG running on the two peer switches share the peer-link.
The peer-link must NOT have the spanning tree feature enabled.
View Members
View DCPDP
Debug VPC
Module Summary
4. What characteristics are required for links that connect MLAG peers?
Refer to the student lab guide for instructions to complete the lab.
Introduction
This module introduces discovery protocols and shows how to configure them on
Dell EMC N-Series switches.
Introduction
This lesson covers the proprietary Layer 2 discovery protocol of Cisco Systems,
the Cisco Discovery Protocol (CDP). It also covers the Industry Standard
Discovery Protocol (ISDP), a non-Cisco discovery protocol that is compatible
with CDP. Dell EMC switches use ISDP because it is compatible with CDP.
CDP Overview
The show cdp neighbors command is used on Cisco IOS to show the data that
CDP collects.
The screenshot shows a Dell device N3024 detected from local port Gi 1/0/7
and Gi 1/0/7.
From this output alone, you cannot be certain it is the same device (though
hostname is the same).
By default, CDP version 2 is enabled and all interfaces transmit and receive
CDP advertisements at 60-second intervals.
CDP Addressing
show isdp entry <Device ID> – Shows detailed information for the identified
CDP neighbor.
isdp holdtime <seconds> – Configures the interval for storing CDP data without
an update.
Introduction
This lesson covers how to configure, disable, and monitor Link Layer Discovery
Protocol (LLDP).
Link Layer Discovery Protocol (LLDP) is based on the IEEE 802.1AB standard. The
standard defines the protocol, managed objects, and their definitions. The objects
and definitions enable the discovery of the physical topology and the connection
end-point information from neighboring devices on Ethernet networks. It uses a
network management information architecture in the form of a Management
Information Base (MIB) for compiling and storing information about devices on the
LAN. The network administrators access this information using the Simple Network
Management Protocol (SNMP) to query the MIB data of each device.
As with Spanning-Tree and other Link Layer protocols, LLDP relies on special
protocol data units, or PDUs, to exchange operational information between
participants. Similarly, LLDP PDUs are encapsulated inside the Ethernet frames for
transport. LLDP PDU frames are sent at 30-second intervals from each
participating device port.
When used for an LLDP PDU, an Ethernet frame has its destination MAC address
set to one of three LLDP multicast addresses. These MAC addresses help switches
and routers process received frames locally and prevent them from forwarding
the frames. The MAC addresses are:
01:80:c2:00:00:00
01:80:c2:00:00:02
01:80:c2:00:00:0e
The EtherType field is set to 0x88cc. This value indicates that the Ethernet frame is
transporting an LLDP PDU.
Each device sends specific type, length, and value (TLV) information about itself to
directly connected neighboring devices. The information is organized into TLVs and
carried inside the special fields in the Ethernet frames.
The LLDP PDU portion of the frame starts with the following mandatory TLVs:
Chassis ID, Port ID, and Time-to-Live. The mandatory TLVs precede any optional
TLVs. The LLDP PDU ends with a TLV named "end of LLDPDU," whose type and length
fields are always zero.
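The TLV wire format can be sketched directly: a 16-bit header packs a 7-bit type and a 9-bit length, followed by the value bytes. This is a minimal illustration of the 802.1AB framing, not a complete LLDP encoder.

```python
# Sketch of the LLDP TLV wire format. The end-of-LLDPDU TLV is two zero
# bytes (type 0, length 0), as noted in the text.
import struct

def tlv(tlv_type: int, value: bytes) -> bytes:
    header = (tlv_type << 9) | len(value)    # 7-bit type, 9-bit length
    return struct.pack("!H", header) + value

END_LLDPDU = tlv(0, b"")                      # b'\x00\x00'
ttl = tlv(3, struct.pack("!H", 120))          # mandatory TTL TLV, 120 s

print(END_LLDPDU.hex())   # 0000
print(ttl.hex())          # 06020078 -> type 3, length 2, value 0x0078 (120)
```

The 120-second TTL value matches the Time-to-Live TLV shown in the Wireshark capture discussed in this lesson.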
The administrator configures the inclusion of the optional TLVs in the management
set. By default, they are not included.
Each TLV field carries specific device information. The table sorts the
information by TLV type and displays the information that is contained in the
TLV fields. Not all devices support all the available TLV values; device
vendors choose which optional TLVs to support.
TLV types 0–3 are mandatory - they must be included in each LLDP packet.
TLV types 4–8 are optional.
Type 127 can be used to transmit custom information.
0 End of LLDP PDU Marks the end of an LLDP data unit. Mandatory
The Wireshark capture shows the details of the LLDP frame. The Enabled
Capabilities TLV shows that the remote port supports bridging and routing.
3 Time-to-Live 120 s
By default, LLDP is enabled for Dell EMC switches running DNOS 6. All ports are
configured to transmit and receive LLDP.
In this example, a cable connects the Te1/0/7 ports of switches N1 and N2.
These ports are not yet configured as switchports.
Since the ports transmit and receive LLDP packets by default, N1 has received
data from N2 through its own Te1/0/7 port. The LLDP transmission from N2
included the values for the default TLVs.
Disabling LLDP
The LLDP service is enabled by default, and there is no command to disable it
globally. To prevent a switch from participating in LLDP, disable all ports
from transmitting and receiving the protocol.
Use these commands at the interface configuration level for each interface:
no lldp transmit – Disables the LLDP transmit capability on the interface. To
re-enable transmission, remove the no from the command.
no lldp receive – Disables the LLDP receive capability on the interface. To
re-enable reception, remove the no from the command.
Use the show lldp interface command to display the current LLDP interface
state. In this example, the command returns the configuration of all the interfaces. It
includes the transmit and receive states, and the TLVs that are being advertised.
Use the show lldp local-device command to display the LLDP data that may
be transmitted. This command can display summary information or detail for each
interface. The example shows the detailed local device data for an interface. It
shows values for TLVs 1–8, except TLV 3 (Time-to-Live). The TTL TLV is specific
to each LLDP PDU that is transmitted. Although the eight TLV values are listed
in the output, only the mandatory TLVs, 1–3, are transmitted by default.
Use the show lldp remote-device command to display the LLDP data that is
received on any of the interfaces of the system. This command can display
summary information or details for each interface. The example shows the detail
remote device data that is received on an interface. It shows values for TLVs 1–7.
Module Summary
Refer to the student lab guide for instructions to complete the lab.
Introduction
This module provides a review of routing concepts and shows how to configure
routing on Dell EMC N-Series switches.
Routing Overview
Introduction
This lesson provides a review of the routing table and illustrates how to
enable routing on an N-Series switch.
Overview
Historically, specialized devices that are called routers performed most IP routing,
although the routing logic is now possible using Layer 3 or multilayer switches.
The routing process uses a routing table to determine where to forward packets
on the next hop toward their final destination. The next routing process in the
path then checks its own routing table and forwards the packet to the next hop.
This process continues until the packet reaches its destination.
Route Types
In the example:
Router r1 is directly connected to two networks:
Interface Fe1 is connected to network 192.168.1.0 /24
Interface Fe2 is connected to network 192.168.2.0 /24
Router r2 is directly connected to two networks:
Dynamic—Dynamic routes are ones that are added automatically to the routing
table by a routing protocol. Routers use routing protocols to communicate with
each other, distributing route information with each other. It enables the routers to
automate the process of determining routes between any two nodes on the IP
network.
Default—The default route is a routing entry that specifies where to send a packet
that does not match any other routing entry. The default route is often used to
direct traffic bound for the Internet. The next hop is set to the IP address of the
router that connects the network to the Internet. The default route is either a static
route or a dynamic route. In the diagram, the static route is also a default route.
Use the following command to display the current state of the routing table:
The output of the command also displays the IPv4 address of the default gateway
and the default route that is associated with the gateway.
Inter-VLAN Routing
In DNOS 6, the routing process is disabled by default. To enable it, use the ip
routing command. Once enabled, the no ip routing command turns it off.
Inter-VLAN routing is performed on the N-Series switches. Each VLAN must have a
switched virtual interface that is created for it. To create them, enter Configuration
mode and use the interface vlan <vlan-id> command. Routing between
the SVIs occurs automatically—the switch is directly connected to each SVI, so no
additional routing table entries are needed.
Static Routes
Introduction
Static Routes
An administrator can create a static route for each destination network. Static
routes are stored in the switch configuration.
They are useful for routes that do not change or in switches that do not support
certain routing protocols. They are also useful for handling traffic to unknown
destinations, such as the Internet.
IP Route Command
The default route is a route statement that indicates no bits have to match (/0).
Packets always use a route with the most matching bits:
10.1.2.3 goes to 10.0.3.2 – the first 16 bits match the first statement.
10.2.4.7 goes to 10.0.3.2 – the first 16 bits match the second statement.
10.3.4.7 goes to 10.0.3.2 – the first 16 bits match the third routing statement.
11.3.4.5 goes to 10.0.3.14 – no bits match the first three statements. 0.0.0.0 /0
matches all packets – it uses this default route.
The default route is also called the route of last resort. Any route that
matches one or more bits of the destination is a better match. Only destination
addresses that have not matched any other route are compared to the default
route.
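Longest-prefix matching can be sketched with the standard library, using routes adapted from the example above:

```python
# Sketch of longest-prefix-match route selection: the most specific matching
# prefix wins, and 0.0.0.0/0 catches everything else.
import ipaddress

ROUTES = {                        # prefix -> next hop
    "10.1.0.0/16": "10.0.3.2",
    "10.2.0.0/16": "10.0.3.2",
    "10.3.0.0/16": "10.0.3.2",
    "0.0.0.0/0":   "10.0.3.14",   # default route: route of last resort
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [p for p in ROUTES if addr in ipaddress.ip_network(p)]
    # The longest (most specific) matching prefix is preferred.
    return ROUTES[max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)]

print(next_hop("10.1.2.3"))   # 10.0.3.2  - 16 leading bits match
print(next_hop("11.3.4.5"))   # 10.0.3.14 - only the default route matches
```

Because every prefix also matches the /0 entry, the default route is only chosen when nothing more specific does, exactly as described above.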
The icons that are used in this diagram are most readily identified as routers.
However, as discussed in previous modules, there is little reason to
differentiate between a router and a Layer 3 switch for Ethernet routing.
Within the industry, it is not uncommon for Layer 3 switches to be diagrammed
as routers.
The distribution routers are directly connected to each IP subnet within their
building. No static routes are needed to access the directly connected segments.
Static routes can be used to reach the networks in the other two buildings. The
diagram illustrates two static route approaches for Building A. The possible static
route entries shown are:
Use a static route entry for each building:
Building B – ip route 10.1.0.0 /16 10.0.3.2
Building C – ip route 10.2.0.0 /16 10.0.3.2
Use one static route that matches both buildings:
The core switch requires 3 static routes between core and distribution
switches/routers. There is one static route for each building.
When next hop IP addresses are used without an egress interface, the router must
use the routing table lookup process twice.
If using the longer static route entries with both egress interface and next hop IP:
A packet arrives at the core router with a destination IP address of 10.2.3.4.
From the single static route entry, the router learns the next hop IP and the
interface out of which to send the ARP for that IP.
It then ARPs for the 10.3.0.9 MAC address and routes the packet.
Routes that specify both the next hop egress interface and IP are the most efficient
static route entries.
Drawbacks include the fact that if the Layer 3 egress interface changes, they do not
recover automatically. They also introduce another layer of human error into the
process: the correct next hop IP may be chosen with an incorrect egress interface.
Important: DNOS 6 only uses egress VLAN interfaces (only SVI can
be L3 in DNOS 6).
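A route carrying both pieces of information might look like the following sketch, using the VLAN and next hop from the example. The exact DNOS 6 syntax for naming the egress SVI may differ on your firmware release:

```
console(config)# ip route 10.2.0.0 /16 vlan 200 10.3.0.9
```

With both the egress VLAN interface and the next hop IP in one entry, the router resolves the forwarding decision in a single lookup.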
When egress interfaces are used, they rely on the next hop router identifying itself
as able to route packets for that network.
If the route statement uses a next hop IP as shown earlier, the switch sends an ARP
request for the next hop IP address. The switch then uses that destination MAC
address on the frame for the 10.2.3.4 packet and sends it out the VLAN 200
interface to the router.
However, if using an egress interface as the next hop parameter – the core does
not know which router on VLAN 200 can route packets for 10.2.3.4.
Instead, the core sends an ARP for 10.2.3.4 out of the VLAN 200 link. Keep in mind
that 10.2.3.4 is NOT on this link. However, the distribution router is on this link and
has a route table entry for 10.2.0.0/16, so it knows where the destination is.
Because ip proxy arp is enabled by default, the distribution router in Building C
answers the ARP for 10.2.3.4 with its own MAC address. The core then uses that
destination MAC address on the frame for the 10.2.3.4 packet, just as it would with
a next hop IP address.
The only key difference in the process is that the router must have IP proxy ARP
enabled (it is by default). It must analyze its routing table and decide that it knows a
route to the destination before answering the ARP.
On DNOS 6, the destination interface is a VLAN interface, not a physical interface,
since DNOS 6 must route on SVIs.
In this example, each distribution layer router has two destination routes for each
remote route - a route to each of the core routers. Each core router has two
destination routes for each remote route - 2 routes to each campus building.
There is no detection of ‘best links’ based on link speed, as there would be in some
routing protocols.
As the number of links increases, the chance for human error in CLI entries
increases. There are 12 network static routes shared between the 2 core switches,
and 12 static routes split among the distribution routers.
Each core has two routes for each 10.x.0.0/16 network (2 routes * three networks
= 6 routes per core).
Each core also has one route for the 0.0.0.0/0 network (one default route per
core). Seven routes * two cores = 14 static routes in total.
Each distribution router has one 0.0.0.0/0 route for each of its links to the core.
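On a distribution router with two core uplinks, the redundant default routes described above might look like this. The next hop addresses are hypothetical:

```
console(config)# ip route 0.0.0.0 /0 10.0.1.1
console(config)# ip route 0.0.0.0 /0 10.0.2.1
```

With equal preference, either route can carry traffic; static routing makes no link-speed-based selection between them.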
Introduction
In this lesson you will learn how to configure, monitor, and troubleshoot the Open
Shortest Path First (OSPF) protocol.
OSPF Review
OSPF is a link-state protocol. It identifies all the network destinations and applies
the Shortest Path First algorithm to select the best routes. It uses three tables: the
Neighbor table (adjacency database), the Topology table (LSDB), and the Routing
table. The routers communicate with each other by exchanging packets, which are
used to discover neighboring routers and to exchange routing information. The
packet types are Hello, Database Description, Link-State Request, Link-State
Update, and Link-State Acknowledgment.
OSPF link costs are arbitrary and have only three restrictions:
1. Link cost value can be 1–65,535
2. More preferred links have lower cost
3. Costs are additive - it is the sum of the costs of all hops that make up the route
Routing table entries are constructed from the network destinations and their
associated link costs.
OSPF Topologies
Simple Topology—When there are only a few routers, the entire AS is managed
as a single entity. All the routers function as peers. Each router contains status
information for each of the other routers in its link-state database. The size of the
LSDB increases on each router for every router added to the topology. There is a
limit to the size of a network that should use a simple topology.
OSPF areas divide OSPF networks into smaller subnetworks. Because each
OSPF area contains fewer IP networks, each router has a more manageable
LSDB. A network that only needs one area is known as a simple topology.
Each OSPF area has a number assigned to it—the first OSPF area is area 0. The
configuration of each participating interface contains the area number in which it is
participating.
In a hierarchical topology, one area is known as the backbone area. The backbone
area must be area 0. All other OSPF areas must be connected to the backbone.
OSPF routers can be classified in relation to the OSPF areas in which they are
contained. They are one of four types:
Internal Router – Internal routers are routers whose interfaces all belong to the
same OSPF area. They have only one link-state database.
Area Border Router (ABR) – ABRs connect one or more areas to the
backbone area. ABRs are gateways for intra-area traffic. They must have at
least one interface in area 0. They require more RAM and computing resources
than internal routers, for they have an LSDB for each of their connected areas.
ABRs summarize the topological information for each area and forward it to
their neighbors into the other area.
Backbone Router – backbone routers are any router that has at least one
interface in area 0. A backbone router can be an internal router of area 0, or an
ABR.
Autonomous System Boundary Router (ASBR) – ASBRs are gateways to
other network domains using other routing protocols. A common example is an
OSPF router that connects the autonomous system to the Internet using the
Border Gateway Protocol (BGP). ASBRs may also redistribute static routes into
the OSPF domain, and routes from other IGPs such as Intermediate System to
Intermediate System (IS-IS) or Enhanced Interior Gateway Routing Protocol
(EIGRP).
To reduce this amount of traffic, every broadcast network has a Designated Router
(DR) and a Backup Designated Router (BDR). Each router on the network
exchanges link-state information (synchronizes databases and forms an adjacency)
only with the DR and BDR.
Each OSPF router is responsible for describing its local piece of the routing
topology through the transmission of link-state advertisements.
In case LSAs have been lost or corrupted in the tables of a neighboring router,
each router retransmits its LSA information at 30-minute intervals.
All LSAs begin with a common 20-byte header. This header contains enough
information to uniquely identify the LSA using link-state type, link-state ID, and
Advertising Router. Multiple instances of the LSA may exist in the routing domain
simultaneously. Then, it is necessary to determine which instance is more recent. It
is accomplished by examining the LS age, LS sequence number, and LS
checksum fields that are also contained in the LSA Header.
The LS Age field contains a value representing the number of seconds since the
LSA was originated. If the LSA reaches 1800 seconds (30 minutes), the originating
router refreshes the LSA by flooding a new instance. If the LSA reaches 1 hour, it is
deleted from the database.
OSPF requires the incrementation of the LS Age field at each hop during flooding.
The increment breaks any flooding loop by causing the Age field of the looping LSA
to reach the maximum value.
The LS Type identifies which type of LSA the entry is. Each LSA type has a
separate advertisement format. The LS Types that are defined in RFC 2178 are
Types 1, 2, 3, 4, 5, and 7, which are discussed in detail later.
Every OSPF router transmits a single router-LSA describing its active interfaces
and neighbors.
When OSPF receives an LSA of an unknown LS Type, Option bits may be set that
indicate acceptance of the protocol extension. Otherwise, an OSPF router does
not store or forward the unknown LSA. The Options field identifies which LSAs a
router forwards and which it keeps.
Advertising Router is the Router ID of the router that originated the LSA. For
example, in network-LSAs this field is equal to the Router ID of the DR.
By default, the router ID is the largest IP address assigned to any of the router's
interfaces when OSPF was enabled. It is manually configurable and follows the
four-octet format of an IP address.
When an LSA reaches the maximum sequence number, the originating router first
flushes the instance with the maximum sequence number from the routing domain.
Then, it floods the new LSA with the minimum sequence number.
ASBRs inside a Not-So-Stubby Area (NSSA) send external routing information for
redistribution. They use Type 7 LSAs to tell the ABRs about the external routes. An
ABR translates them to Type 5 external LSAs and floods them as normal to the rest
of the OSPF network. An ASBR inside an NSSA generates Type 7 LSAs to describe
routes redistributed into the NSSA. LSA 7 is translated into LSA 5 as it leaves the
NSSA. These routes are displayed as N1 or N2 in the IP routing table inside the
NSSA. Like LSA 5 routes, N2 is a static cost while N1 is a cumulative cost that
includes the cost up to the ASBR.
Stub Area
One or more area border routers of the stub area must advertise a default route
into the stub area using summary-LSAs. These summary default routes are used
for any destination that is not explicitly reachable by an intra-area or inter-area
path. An area can be configured as a stub when there is a single exit point from the
area. Also, use a stub area when the choice of exit point need not be made on a
per-external-destination basis.
The OSPF protocol ensures that all routers belonging to an area agree on whether
the area has been configured as a stub. It guarantees that no confusion arises in
the flooding of AS-external-LSAs. There are a couple of restrictions on the use of
stub areas. Virtual links cannot be configured through stub areas. Also, AS
boundary routers cannot be placed internal to stub areas.
Not-So-Stubby-Area (NSSA)
NSSAs are similar to the existing OSPF stub area configuration option, but have
the following two more capabilities:
External routes originating from an ASBR connected to the NSSA can be
advertised within the NSSA.
External routes originating from the NSSA can be propagated to other areas,
including the backbone area.
Stub, NSSA, totally stub, and totally NSSA all implement a default route towards
area 0.
OSPF routers learn about their neighboring routers or detect any failed links
through the periodic exchange of Hello Packets.
Each OSPF router within an AS learns about the active interfaces and their
associated costs of its neighbor routers through the exchange of LSAs.
LSAs are exchanged through the unique mechanism called Reliable Flooding. The
router compiles LSAs into the link-state database. In a stable OSPF network, every
router has a database identical with its neighbors.
An OSPF router derives its route table by applying an algorithm to the information
in the LSDB. Then, it calculates the lowest-cost path from the router to every
known destination. When represented graphically, it would look like a tree diagram
with the subject router at the root of the tree.
Routers that are responsible for the exchange of information between logical areas
pass summary information. The ABRs can aggregate the internal routes of a
member area into a single destination route.
Note: A larger version of the slide graphic appears at the bottom of this notes
section.
Once neighboring routers have exchanged Hello packets and have established bi-
directional communication, they form adjacencies with the DR and BDR. The DR
and BDR for a given subnet, in turn, form adjacencies with every router on that
segment. Routers become adjacent with their neighbors to facilitate the exchange
of LSAs and synchronize their link-state databases. Once the LSDBs have been
synchronized, the routers are said to have established FULL adjacency.
Neighbor States
Full Adjacency
Fully adjacent routers are listed in Type 1 and Type 2 LSAs. If this is the initial
adjacency, the SPF algorithm runs next.
Neighbor Events
SeqNumberMismatch
This event signals an error in the adjacency establishment process. The packet is
ignored, and the neighbor transitions to the ExStart state.
BadLSReq
An LSR has been received for an LSA not contained in the LSDB. It indicates an
error in the LSDB Exchange process.
Because adjacency formation is a continuation of the Hello protocol, events that
may cause a transition to other, lower states are also valid and may apply. These
include 1-Way, KillNbr, InactivityTimer, LLDown, and AdjOK.
The adjacency exchange between two neighbors proceeds as follows (the state of
each neighbor is shown, with the packet exchanged between them):
Init / Init: Hello, Seen [null], RID 192.168.1.1
2-Way: Hello, Seen [192.168.1.1], RID 192.168.2.1
2-Way: Hello, Seen [192.168.1.1, 192.168.2.1], RID 192.168.1.1
DR Election* / DR Election*: Hello, DR=z.z.z.z
ExStart / ExStart
Exchange / Exchange: DD (LSA Headers), in each direction
Full / Full
*If Required
OSPF Packets:
Communicate directly over IP, using IP protocol 89.
Should be given preference over regular IP data traffic
Sent over adjacencies
Sent to multicast address:
224.0.0.6 (DR/BDR)
224.0.0.5 (all other OSPF routers)
Utilize a common protocol header
A router discovers neighbors by sending OSPF Hello packets out to all its
interfaces.
By default, a router sends Hellos every 10 s. If subsequent Hello packets are not
received within 40 s, the neighbor relationship is terminated. Hellos are only
recognized by routers that are attached to the same subnet with the same subnet
mask.
The Hello packet carries the Hello Interval and Router Dead Interval parameters. A
router learns of the existence of a neighboring router when it receives the OSPF
Hello from that neighbor.
Failure is detected when a router does not receive a Hello from a neighbor within
40 s.
The Hello protocol ensures that neighboring routers agree on timing parameters
and aids in link failure detection: the absence of Hello packets reveals a fault well
before other mechanisms would detect it.
In a broadcast environment, the Hello packet contains the OSPF router IDs of all
routers the sender has heard from up to the point of transmission. This eliminates
the overhead of sending multiple individual Hellos.
The collection of all OSPF LSAs is called the link-state database. Each OSPF
router has an identical link-state database. It gives a complete description of the
network including the routers, the network segments, and how they are
interconnected.
Link-state databases are exchanged between neighboring routers soon after the
routers have discovered each other. The link-state databases are maintained
through a procedure called reliable flooding.
The routers use this procedure to synchronize their databases once the hello
protocol determines a bi-directional connection between router neighbors.
When a router detects that portions of its LSDB are out of date, it sends a link-
state request packet to a neighbor. It is a request for a precise instance of the
database entry.
It consists of the OSPF header plus fields that uniquely identify the database
information that the router is seeking.
Student Note:
Write down any key points that will support your understanding.
____________________________________________
____________________________________________
____________________________________________
OSPF Configuration
Enable the interface.
Enable at least one OSPF process using the router ospf command.
Configure OSPF:
Optional Configuration
Set the OSPF Router ID to a loopback address that is reachable from the routed
network. Enabled loopbacks are always reachable. If the Router ID is set to an
IP address that is down, OSPF operations are interrupted.
Redistribute routes from other processes to OSPF.
Configure passive interfaces. If there is no other OSPF router on a network
segment, it is good practice to make the interface passive, preventing attackers
from joining the OSPF network.
Configure stub areas and virtual links.
Configure virtual links to OSPF areas that cannot be physically connected to the
backbone (Area 0).
Propagate the default route to other devices.
Optionally, the hello and dead intervals can be changed on SVI interfaces to match
the intervals of connected neighbors. If the intervals do not match, adjacencies do
not form. The default hello interval is 10 s, and the default dead interval is 40 s.
Enter the VLAN interface configuration mode
Adjust the hello and dead intervals.
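The steps above can be sketched as follows. The router ID, network statement, VLAN IDs, and timer values are illustrative, and the exact DNOS 6 syntax may vary by firmware release:

```
console(config)# router ospf
console(config-router)# router-id 10.0.0.1
console(config-router)# network 10.1.0.0 0.0.255.255 area 0
console(config-router)# passive-interface vlan 30
console(config-router)# exit
console(config)# interface vlan 10
console(config-if-vlan10)# ip ospf hello-interval 5
console(config-if-vlan10)# ip ospf dead-interval 20
```

Remember that any changed hello and dead intervals must match those of the connected neighbor, or the adjacency does not form.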
Verify OSPF operation on the local device by OSPF process ID. Verify OSPF
neighbors and status.
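The verification described above might use show commands like the following (output varies with the configuration):

```
console# show ip ospf
console# show ip ospf neighbor
```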
Student Note:
Write down any key points that will support your understanding.
____________________________________________
____________________________________________
____________________________________________
Module Summary
1. Using more and more static routes becomes riskier because there is more
likelihood of human error. True or False?
2. What number MUST be assigned to the OSPF backbone area?
3. On an Ethernet segment which router synchronizes its Link State Database with
all other routers on the segment?
Lab: Routing
Refer to the student lab guide for instruction to complete the lab.
Introduction
This module covers the application of Policy-Based Routing in a Dell EMC N-Series
networking environment.
Introduction
Policy-based routing can be used to change the next hop IP address for traffic
matching certain criteria. This tool can be useful to override the standard routing
table for certain traffic types.
PBR is used in parallel with route determination through standard routing protocols.
Several departments in a company typically share large networks using VLANs,
which increases efficiency. With the use of Policy-Based Routing, another layer of
control is introduced. PBR enables administrators to evaluate incoming traffic on a
switch, and apply rules to each packet that override standard routing protocols.
With standard routing, when a router receives a packet, its route is determined
using the destination IP address. The router uses this information and determines
the next hop for the packet that is based on the routing or forwarding table. Also
known as the Routing Information Base, the routing table contains a list of the best
routes from each routing protocol. The router uses the routing table to modify the
source and destination MAC addresses of the packet, and then forwards it to the
next hop.
PBR is the process of altering the path of a packet, using criteria other than the
standard routing criteria. Besides the standard protocols, PBR can be used to
condition routers to consider different parameters for routing packets. PBR may
consider application, transport, network, and link layer data information contained
in the packet. PBR is often implemented using special rules which, when triggered,
assign or mark the packet to a specific routing table with unique route entries.
Consider an organization that has two network links between its two primary
locations. One link is a high bandwidth, low latency high-cost link, and the other a
low bandwidth, higher latency, lower-cost link. Using standard routing protocols
such as EIGRP or OSPF, the higher bandwidth link would get most of the network
traffic. Routing decisions are based solely on the metric calculations that are based
on bandwidth and/or latency characteristics. PBR gives the ability to intentionally
route higher priority traffic over the high bandwidth/low latency link. Also, lower
priority traffic may be sent over the low bandwidth/higher latency link.
PBR enables administrators to shape traffic to traverse the best route for the type
of data it carries. This option ensures that forwarding decisions are made that yield
optimized network traffic performance compared with link utilization costs. For
many power network users, PBR is the most cost-effective way of consistently
meeting performance expectations at the lowest cost possible. This method is far
better than enabling the standard routing protocols to send most or all traffic over
the highest-performing available paths.
In this use case example, the network administrator wants different applications to
use different network paths. A routing policy that supports this requirement could
be configured to inspect packet source and protocol information such as a
destination TCP port number. In this example, a routing policy has been created to
redirect HTTP traffic that connects to TCP port 80. The routing policy also redirects
FTP traffic that connects to TCP Ports 20 and 21 based on specific source
addresses.
PBR is set up and configured using a match/set process. PBR traffic is matched
against a special access control list - ACL - using the match command. ACL
statements are called clauses. The traffic path parameters are changed using a set
command. PBR uses the ACL with Route Map information to define the policy.
Route maps enable routing policy definition for the traffic, causing a packet to be
forwarded to a predetermined next-hop interface. Each entry in a route map
statement contains a combination of match and set statements. A route map
specifies the match criteria that correspond to ACLs, and then a set statement
specifying an action if a match clause is met. Multiple match and next-hop
specifications can be defined for the same interface. When a PBR policy has
multiple next hops to a destination, PBR selects the first operational next hop that
is specified in the policy. If none of the direct routes or next hops in a policy is
available, the packets are forwarded as per the standard routing table.
PBR policies are defined, and routing decisions made using the Access List and
Route Map:
PBR uses Access Lists and Route Maps to selectively route an IP packet
PBR uses a match/set process to find and make routing decisions
Traffic is matched against clauses in a Route Map using a match command
After a clause match, PBR changes traffic network path or parameters using a
set command
Routing must be enabled in the switch. The Time To Live - TTL - counter is
decremented for PBR routed packets. The destination MAC is rewritten in PBR
routed packets. ARP lookups are sent when required for unresolved next hop
addresses. Policy-routed packets are routed using routing table entries. Ensure
that routes exist in the routing table for PBR next-hop and default next-hop rules.
Configuring PBR consists of installing a route-map with match and set commands,
and then applying the corresponding route-map to the interface. IP routing must be
enabled both globally and on each routed interface.
PBR Actions
SET commands must be formed correctly to ensure proper and consistent policy-
based routing. Here is how the SET commands function in PBR.
This feature causes the router to compare all incoming packets on the VLAN
interface against the route-map, to match certain criteria in the route-map. An
interface can only have one route-map tag, but an administrator can have multiple
route-map entries with different sequence numbers. If the criteria for a single entry
matches the incoming packet, the entry is chosen and its SET statements are
performed. If two or more entries match the criteria, the one with the lowest
sequence number is chosen and its SET statements are performed. If there is no
match, packets are routed as usual. A route-map statement that is used for PBR is
configured as permit or deny. If the statement is marked as deny, traditional
destination-based routing is performed on the packet meeting the match criteria. If
the statement is marked as permit, and if the packet meets all the match criteria,
the set commands in the route-map statement are applied. If no match is found in
the route-map, the packet is not dropped. Sometimes, there can be a match in an
ACL permit clause with a deny in the route-map. There may also be a match in an
ACL deny clause with a permit in the route-map. Either of these scenarios results in
the packet being routed using the destination-based routing protocol. The
difference is that the former increments the route-map counter while the latter does
not.
Introduction
This lesson presents three major use cases that benefit from policy-based routing.
An organization has several work groups that include the Human Resources and
Accounting departments. Each group is assigned its own IP address range within
the same subnet. There is a requirement to route HR traffic through ISP A only,
while Accounting department traffic is routed through ISP B only. The switch that
routes the traffic for the work groups can use policy-based routing to configure and
enforce the required segregation. PBR can isolate HR traffic to ISP A and
Accounting traffic to ISP B. PBR uses a route-map, where a match statement is
configured based on the IP address range of each group. Equal access, and
Source IP address-sensitive routing is achieved using this technique. Two access
control lists, one each for accounting and HR, are created to associate each packet
to its corresponding work group. Packets coming from one range of IP addresses
are associated with the Accounting group. Packets from another range of IP
addresses are associated with the HR group. The route-map is used to determine
the group that each packet belongs to and directs it through the wanted interface
using a “default next-hop” statement.
Remote servers X, Y, and Z are cached hourly to local servers A, B, and C. Users
on VLAN 10 use the local cache servers most of the time. But periodically the users
must access the most current data directly from servers X, Y, and Z. These servers
are located at a remote office and accessed over a dedicated WAN. Traffic on the
path between the local and remote servers is oversubscribed, often using 90% of
the available bandwidth. A Policy-Based Route is used to minimize delays between
the user workstations on VLAN 10 and avoid the bottleneck that is depicted with
the red arrow.
Introduction
In this example, PBR is used to route packets from host 192.168.5.5 in VLAN 5 to
host 192.168.10.10 in VLAN 10. The router uses the next-hop IP address of
192.168.15.15 in VLAN 15. These commands configure PBR to bypass normal
routing toward VLAN 10, using the next-hop IP address of 192.168.15.15. The
configuration is validated by inspecting the Route Map for accuracy.
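A configuration matching this example might look like the following sketch. The ACL and route-map names are hypothetical, and the exact DNOS 6 syntax may vary by firmware release:

```
console(config)# ip access-list match-host5
console(config-ip-acl)# permit ip host 192.168.5.5 host 192.168.10.10
console(config-ip-acl)# exit
console(config)# route-map pbr-vlan5 permit 10
console(config-route-map)# match ip address match-host5
console(config-route-map)# set ip next-hop 192.168.15.15
console(config-route-map)# exit
console(config)# interface vlan 5
console(config-if-vlan5)# ip policy route-map pbr-vlan5
```

Traffic from 192.168.5.5 to 192.168.10.10 arriving on VLAN 5 is then forwarded to 192.168.15.15; all other traffic follows the standard routing table.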
Also use these and other show commands to help in validating PBR functionality
along with its coexistence alongside standard routing.
Module Summary
Refer to the student lab guide for instruction to complete the lab.
Introduction
VRRP Overview
Introduction
VRRP Terms
VRRP Router - A router running the Virtual Router Redundancy Protocol. It may
participate in one or more virtual routers.
Virtual Router - An abstract object that VRRP manages. The virtual router acts as
a default router for hosts on a shared LAN. It consists of a Virtual Router Identifier
and associated IP addresses across a common LAN. A VRRP Router may back up
one or more virtual routers.
IP Address Owner - The VRRP router that has the IP addresses of the virtual
router as real interface addresses. This router responds to packets addressed to
those IP addresses for ICMP pings, TCP connections, and so on.
Virtual Router Master - The VRRP router that is assuming the responsibility of
forwarding packets that are sent to the IP addresses associated with the virtual
router. The virtual router master also answers ARP requests for these IP
addresses. If the IP address owner is available, it always becomes the Master.
Virtual Router Backup - The set of VRRP routers available to assume the
forwarding responsibility for a virtual router, should the current Master fail.
VRRP Overview
Consider a typical network configuration using VRRP. Hosts on the network could
be configured with the IP address of Router 1, 2, 3 or 4 as the default gateway.
Instead, the virtual IP address that is configured for the VRRP Group is used.
When any host on the LAN segment wants to access the Internet, it sends packets
to the IP address of the virtual gateway.
To understand VRRP, first examine the issue that it resolves. When internal
networks require highly available access to external networks like the Internet, one
approach is to install duplicate sets of equipment that do not interact. That
separation provides connectivity, but at a higher than necessary cost. VRRP is an
alternative where existing network equipment for external access can be grouped.
The group of devices provide a single virtual address that internal users access for
external communications.
VRRP Groups are routers that are on a common subnet and share a group
number. There is a group master that owns the common (shared) virtual IP address
and virtual MAC address for the group. All group members have the same virtual IP
address, or have that address on one of their interfaces.
VRRP uses the Virtual Router Identifier-VRID to identify each virtual router
configured. VRRP packets are transmitted with the virtual router MAC address as
the source MAC address. The MAC address uses this format: 00-00-5E-00-01-
{VRID}. The first three octets are unchangeable. The next two octets (00 and 01)
indicate the address block that is assigned to the VRRP protocol, and are
unchangeable. The final octet changes depending on the VRRP Virtual Router
Identifier and enables up to 255 VRRP routers on a network.
VRRP specifies a MASTER router for end stations on a LAN. The MASTER router
is chosen from the virtual routers by an election process and forwards packets to
the next hop IP address. If the MASTER router fails, VRRP begins the election
process to choose a new MASTER router and that new MASTER continues routing
traffic. The other routers that are represented are BACKUP routers.
The IP address of the MASTER router is used as the next hop address for all end
stations on the LAN.
A default gateway is the router that provides you access to other networks, to the
rest of the world, to the Internet.
Redundancy means that there is another option when the acting default gateway
fails, or when the link connecting that router to the Internet fails.
If three keepalive messages from the Master are missed, the backup router
assumes the role of Master.
IETF RFC 5798 defines VRRP.
If the static default IP gateway fails, VRRP prevents loss of network connectivity to
end hosts. By implementing VRRP, you can designate routers as backup routers if
the default master router fails. VRRP fully supports Virtual Local Area Networks
(VLANs) and stacked VLANs (S-VLANs).
In this simple VRRP scenario, the end-hosts have a default gateway route to the IP
address 172.16.0.1 and both routers run VRRP. The router on the left becomes the
Master for the virtual router (VRID 1). The router on the right is the Backup for the
virtual router. If the router on the left should fail, the other router takes over the
virtual router and its IP address. Having a backup provides uninterrupted service
for the hosts. If a router is the owner of the virtual address, the priority must be set
to 255 with no preempt.
In this scenario, you have two virtual routers, VRID 1 and VRID 201. This
configuration enables not only redundancy, but also load balancing between the
routers.
Half of the hosts are configured with a default gateway of 172.16.0.1, and the other
half are set up with 172.16.0.201 as the default gateway.
In this scenario, half of the hosts install a default gateway route to virtual router
172.16.0.1. The other half of the hosts install a default gateway route to virtual
router 172.16.0.201. In this configuration, router 172.16.0.211 is the Backup router
for both Virtual Routers. No traffic is being sent through this middle router until one
of the Master routers of either Virtual Routers fails. This configuration provides full
redundancy for the Master routers, although the Backup router may become
overloaded if both Master Routers fail simultaneously.
VRRP Master is in charge of all routing functions. The backup does nothing for the
subnet it is backing up, other than check that the Master is alive. The Master only
advertises a single subnet. Protocols that are supported include Ethernet, Token
Ring, and MPLS using IPv4 or IPv6.
VRRP packets are transmitted with the virtual router MAC address as the source
MAC address.
The MAC address is in the following format: 00-00-5E-00-01-{VRID}
– The first three octets (00-00-5E) are unchangeable and are the
Organizationally Unique Identifier (OUI). The Internet Assigned Numbers
Authority (IANA) assigns this number.
– The next two octets (00 and 01) indicate the address block that is assigned
to the VRRP protocol and are unchangeable.
– The final octet changes depending on the VRRP Virtual Router Identifier and
enables up to 255 VRRP routers on a network.
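The mapping from VRID to virtual MAC address can be illustrated with a short Python sketch (not part of the course material; the helper name is ours):

```python
def vrrp_virtual_mac(vrid: int) -> str:
    """Return the IPv4 VRRP virtual MAC 00-00-5E-00-01-{VRID}.

    The OUI (00-00-5E) and the VRRP address block (00-01) are fixed;
    only the final octet carries the Virtual Router Identifier.
    """
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be between 1 and 255")
    return "00-00-5E-00-01-{:02X}".format(vrid)

print(vrrp_virtual_mac(1))    # 00-00-5E-00-01-01
print(vrrp_virtual_mac(201))  # 00-00-5E-00-01-C9
```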
VRRP Packet
– Version - the version field specifies the VRRP protocol version of this
packet. All N-series switches use version 2.
– Type - the type field specifies the type of this VRRP packet. The only packet
type that is defined in this version of the protocol is: 1 - ADVERTISEMENT.
A packet with unknown type is discarded.
– Priority - the priority field specifies the sending VRRP router's priority
for the virtual router. Higher values equal higher priority. The priority value for
the VRRP router that owns the IP addresses associated with the virtual
router is 255 (decimal). VRRP routers backing up a virtual router use priority
values from 1 to 254 (decimal). The default priority value for VRRP routers
backing up a virtual router is 100 (decimal). The priority value zero (0) has
special meaning indicating that the current Master has stopped participating
in VRRP. This number is used to trigger Backup routers to quickly transition
to Master without having to wait for the current Master to time out. This
method is a clean way to transition the Master responsibilities with minimal
delay.
– Count IP Addrs - The number of IP addresses contained in this VRRP
advertisement.
– Authentication Type - the authentication type field identifies the
authentication method being used. Authentication type is unique on a per
interface basis. A packet with unknown authentication type or that does not
match the locally configured authentication method is discarded.
The defined authentication methods are:
– No Authentication
– Simple Text Password (there is no default password)
– IP Authentication Header
– Adver Int - the Advertisement interval indicates the time interval (in seconds)
between ADVERTISEMENTS. The default is 1 second.
– Checksum - the checksum field is used to detect data corruption in the VRRP
message.
– IP Addresses - One or more IP addresses that are associated with the virtual
router. The number of addresses that are included is specified in the "Count IP
Addrs" field.
– Authentication Data - the authentication string is only used for simple text
authentication, similar to the simple text authentication found in OSPF.
If the primary router's tracked interface fails or is disconnected, the
router's VRRP priority is reduced. This event can trigger failover to the
backup router. After the primary router loses all uplink connectivity, it
triggers the backup router to transition to the master.
To monitor an interface and use VRRP tracking, use the following command:
In this example, if the upstream connection to the Internet from R1 fails, then the
priority for R1 becomes: 200–150 = 50. This new priority results in R2 being elected
as the new master (as its priority is 100).
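The arithmetic in this example can be modeled in a few lines of Python (a simplified sketch; real VRRP also breaks priority ties by comparing IP addresses, which is not modeled here):

```python
def effective_priority(configured: int, decrement: int, tracked_up: bool) -> int:
    """Priority after applying an interface-tracking decrement (floor of 0)."""
    return configured if tracked_up else max(configured - decrement, 0)

def elect_master(priorities: dict) -> str:
    """The router with the highest effective priority wins the election."""
    return max(priorities, key=priorities.get)

# R1: configured priority 200, tracked uplink down, decrement 150 -> 50
r1 = effective_priority(200, 150, tracked_up=False)
print(r1)                                   # 50
print(elect_master({"R1": r1, "R2": 100}))  # R2 becomes the new master
```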
The lowered priority of the VRRP group may trigger an election, because the
Master/Backup VRRP routers are selected based on the VRRP priority of the
group. Tracking features ensure that the best VRRP router is the Master for that
group. The sum of all the costs of all the tracked interfaces should not exceed the
configured priority on the VRRP group. If the VRRP group is configured as Owner
router (priority 255), tracking for that group is disabled.
VRRP Configuration
The N-series VLAN interfaces are assigned IP addresses. The VRRP configuration
focuses on assigning the VLAN to the VRRP group.
VRRP Verification
To verify the VRRP configuration, run the show vrrp command. The output
shows that the administrative state of the router is Master, the configured priority is
150, and VLAN group membership is VLAN 121.
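As a hedged illustration only, a configuration that would produce output like this might look as follows. The exact VRRP keywords vary across DNOS releases, and the interface IP address and tracked VLAN below are hypothetical; verify the syntax against the CLI reference for your firmware:

```text
console(config)# interface vlan 121
console(config-if-vlan121)# ip address 172.16.0.2 255.255.255.0
console(config-if-vlan121)# vrrp 1
console(config-if-vlan121)# vrrp 1 ip 172.16.0.1
console(config-if-vlan121)# vrrp 1 priority 150
console(config-if-vlan121)# vrrp 1 track interface vlan 100 decrement 110
console(config-if-vlan121)# vrrp 1 mode
```

The configuration is then verified with the show vrrp command described above.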
The tracked interface command is linked to the VLAN interface with the
decrement option. The priority is set to 150. If the tracked interface loses
connectivity, the priority is decremented by 110 (150 - 110 = 40). If the
backup router has a higher priority than this new value, the backup assumes the
Master role.
You can disable preemption for a VRRP group member. If you disable preemption,
a higher-priority backup router does not take over for a lower-priority master router.
Virtual router group number for which authentication is being configured. The group
number is configured with the vrrp ip command.
Module Summary
3. What are two failure scenarios that trigger an election to a new Master
Gateway?
Refer to the student lab guide for instructions to complete the lab.
Introduction
This module covers Voice over IP (VoIP) in a Dell EMC networking environment.
The technology and concepts that enable voice traffic on the campus network are
introduced. Switch requirements, Quality of Service (QoS), use cases,
configuration, and validation steps are also covered.
VoIP Overview
Introduction
This lesson introduces VoIP and compares it to a traditional campus phone system.
Common telephony components are introduced, and terminology is defined.
Explanation of terms:
PBX—Private Branch Exchange
This hardware is required at every site. PBX systems are at the core of circuit
switched telephone systems. In circuit switched systems, resources are
dedicated to individual phone calls. Dedicated resources result in good audio
quality, but circuit switching is less efficient and more expensive than
packet-switched networks, such as VoIP.
PSTN—Public Switched Telephone Network
Telephone service from a provider like AT&T or Qwest in the U.S., and other
telecommunication companies throughout the world.
ACD—Automatic Call Distribution system
Explanation of terms:
SIP—Session Initiation Protocol—used for voice and video in a unified
communications solution. A SIP trunk is provided over a public or private
Internet connection through a SIP provider.
MPLS—Multi Protocol Label Switching—forward packets based on MPLS “tags”
instead of by IP addresses. This switching method enables forwarding one type
of traffic, such as voice, differently than other types of traffic. MPLS makes
virtual circuits possible.
ITSP—Internet Telephony Service Provider—provides SIP trunk for external
VoIP traffic.
POE—Power over Ethernet—IP telephone handsets require power. This power
is provided over the Ethernet cable. So VoIP capable network switches must
deliver power to attached handsets.
UPS—Uninterruptible Power Supply—used to ensure continuous power to the
phone network when there is a building power outage.
IP Phone Technology
The IP phone includes an internal 3-port Layer 2 switch to go with the phone
hardware. The IP phone has two external connections. There is a network
connection that also provides power to the phone. There is also a place to plug in a
desktop or laptop.
The IP phone includes an internal L2 switch. The switch has three ports:
A trunk port connects the phone to the L2 LAN switch. A trunk port carries traffic
for both the voice VLAN and the data VLAN
Port for voice traffic to and from the internal phone hardware
Port for data traffic between the phone and an attached desktop or laptop
Introduction
Description: This lesson covers the network switch features that are necessary to
support VoIP in a campus environment.
Voice data is transported through a VLAN that is separate from VLANs that carry
normal traffic. Devices such as IP phones and voice servers send packets for voice
traffic over the voice VLAN.
The VoIP phone is configured to generate tagged packets for the voice VLAN. The
personal computer generates untagged packets. The untagged VLAN is the native
VLAN for the port.
Ports on N-Series switches can be set to operate in one of three modes: access,
trunk, or general. Switch ports connected to IP phones should operate in general
mode. The switchport mode general command enables a port to support
multiple VLANs but not have to be configured as a trunk port.
Switchport mode should be set to general mode to support both voice and data
VLANs on the same interface.
A switch port set to general mode accepts both VLAN tagged traffic, for voice,
and untagged traffic from a personal computer attached to an IP phone.
VoIP operates as one of many data streams on the network. To ensure that calls
have good quality, voice data packets must be prioritized and delivered in a timely
manner. Standard circuit-switched phone systems have an average latency of 45
ms. This latency is the delay between speaking into the phone, and hearing the
voice at the other end of the line. VoIP aims to have an average latency of 75 to
100 ms. Quality of Service—QoS settings ensure that voice data is prioritized in the
presence of other network traffic, to meet this latency target.
The egress port is the port that transmits frames out of the switch. Each switch port
interface has a transmit buffer that is divided into several queues. Each queue is
configured with a scheduling policy to determine the order in which frames are
transmitted onto the network. Higher-priority traffic is placed in high-priority queues.
Voice traffic on the network is marked at either the Data Link Layer—Layer 2, or at
the network layer—Layer 3. Ethernet operates at Layer 2. IEEE defines 802.1p,
sometimes called dot-one-p, which is a standard for marking CoS for Ethernet. It is
used at the Data Link Layer. IP, at the network layer, uses Differentiated Services
Code Point—DSCP to mark traffic for CoS.
DSCP marking uses 6 bits of the 8-bit Type of Service—ToS field in the IP
header. DSCP provides up to 64 classes, or code points, for traffic.
Voice uses the DSCP value of 101110 binary (46 decimal), which means High
Priority, Expedited Forwarding—EF.
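The bit layout can be checked with a tiny Python sketch (illustrative only): the 6-bit DSCP value occupies the upper bits of the 8-bit ToS/DS byte, so EF (46) appears on the wire as 0xB8.

```python
EF_DSCP = 0b101110  # Expedited Forwarding, 46 decimal

# DSCP sits in the top 6 bits of the ToS/Differentiated Services byte
tos_byte = EF_DSCP << 2
print(tos_byte, hex(tos_byte))  # 184 0xb8

# Recover the DSCP from a received ToS/DS byte
print(tos_byte >> 2)            # 46
```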
Dell Networking N-Series switches can be configured to trust the DSCP marking of
incoming packets. DSCP is used to apply a scheduling policy to ensure priority for
voice traffic.
The IPv4 header has a 1-byte Type of Service (ToS) field as shown. The first 3 bits
are called IP Precedence and can be mapped to CoS values.
To enable more granularity, the DiffServ model uses the first 6 bits. This byte is
now called the Differentiated Services field.
This table lists the drop precedence for the various DSCP values.
For each class, the Drop Precedence value gives further control on which packets
to drop. Higher Drop Precedence means more likely to be dropped.
IP Phone AutoConfiguration
IP phones throughout the network must be configured so they are using the same
VLAN and the same DSCP and 802.1p values. Manually configuring phones can
be labor-intensive. Switches can be configured to automatically send configuration
information to each attached phone.
CDP is a Cisco proprietary protocol. Cisco phones that request CDP can be
configured using ISDP, which is an open protocol that is compatible with
CDP.
N-Series switches use ISDP to communicate with Cisco phones.
Using LLDP-MED TLVs and a feature that is known as "Voice VLAN," switches can
pass the following configuration information to phones:
VLAN ID used for voice traffic
802.1p or DSCP marking values for voice traffic
Phones that are compatible with the LLDP-MED TLV reconfigure their settings to
match those settings received from the switch.
LLDP Example
This screen image shows a Wireshark capture of an LLDP TLV from the switch to
each handset. When the voice VLAN is enabled and added to an interface, the
switch port automatically begins transmitting LLDP-MED TLV “network policy.”
RTP is Real-Time Transport Protocol, a network protocol for delivering audio and
video over IP networks. RTP is used for streaming media such as telephony, video
teleconference applications, and so forth.
Some Cisco phones may only have the ability to learn configuration through CDP.
DNOS 6 implements ISDP, a CDP-compatible protocol. ISDP can transmit
configuration information to CDP phones.
CDP/ISDP Considerations
There are occasional support issues with using CDP. For this reason, if given a
choice between LLDP and ISDP/CDP, use LLDP. LLDP is an industry standard
and is more reliable than ISDP/CDP.
Cisco IP phones transmit CDP to discover the neighboring switch. They may
also support LLDP for discovery.
Typically if a phone supports both LLDP and ISDP it is MORE reliable to use
LLDP.
Occasionally once a Cisco phone receives CDP, it does not respond to or
attempt further LLDP discovery.
Consider turning off ISDP on switch interfaces that are connected to phones
that support both LLDP and ISDP.
Introduction
Description: This lesson covers the N-Series switch default configuration, and CLI
commands that are used to configure the voice VLAN.
VLAN—All switchport interfaces belong to the native VLAN, VLAN 1.
Voice VLAN—The Voice VLAN is not enabled by default. Once enabled, it provides
high priority for voice traffic using a DSCP value of 46.
Shown are example commands that are used to enable and configure the voice
VLAN using DNOS 6.5.2. Note that some of the commands are different in earlier
versions of DNOS.
Create VLAN 10 for data, and VLAN 20 for voice, using the vlan commands.
Use the switchport voice vlan command to globally enable the Voice
VLAN feature on the switch. Prior to DNOS 6.5.2, the voice vlan command
was used.
The older voice vlan command was deprecated with DNOS 6.5.2.
Enter the interface configuration mode with the interface command. The
interface range command may be used to configure a group of interfaces.
The switchport mode general command enables the interface to service
both tagged voice traffic and untagged data traffic.
The switchport general allowed VLAN command adds a VLAN to an
interface. The tagged parameter sets the interface to transmit tagged traffic for
a VLAN. The untagged parameter sets the interface to transmit untagged
traffic. Untagged is the default.
In this example, untagged data traffic defaults to VLAN 10, while voice traffic is
tagged with VLAN 20. This is configured with the commands shown:
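Based on the commands named above, the configuration might be sketched as follows. The interface range is illustrative, and the exact keywords should be verified against the DNOS 6.5.2 CLI reference:

```text
console(config)# vlan 10
console(config-vlan10)# exit
console(config)# vlan 20
console(config-vlan20)# exit
console(config)# switchport voice vlan
console(config)# interface range gi1/0/1-24
console(config-if)# switchport mode general
console(config-if)# switchport general allowed vlan add 10 untagged
console(config-if)# switchport general allowed vlan add 20 tagged
console(config-if)# switchport voice vlan 20
```

The tagged and untagged keywords follow the description of the switchport general allowed vlan command in this lesson.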
Enable IEEE 802.1p trust mode for the Voice VLAN-tagged packets. The
802.1p priority in the tagged voice packets will be honored.
N1(config-if)# switchport voice vlan priority extend 5 trust
The minimum bandwidth setting on the CoS queues comes into effect only
when there is congestion. Configure internal CoS queue 2 as strict priority to
ensure that egressing voice traffic is transmitted first on this interface. This
reduces latency for transmitted voice traffic.
The last two commands that are shown in the example manipulate the
processing of the switch hardware queues. These queues map to DSCP or
802.1p tags. CoS queue 2 is used for voice traffic. The min-bandwidth
parameter sets the minimum bandwidth for each of the CoS queues when there
is congestion. In this example, queue 2 is set to have a minimum of 50%
of switch port bandwidth. Queues are numbered 0-7.
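As an assumption-laden sketch of those last two queue-tuning commands (the command names below follow common DNOS conventions and are not confirmed by this guide; check the CLI reference for your firmware):

```text
console(config-if)# cos-queue strict 2
console(config-if)# cos-queue min-bandwidth 0 0 50 0 0 0 0 0
```

The first line would make queue 2 strict priority; the second would reserve a 50% minimum bandwidth for queue 2 of the eight queues numbered 0-7.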
Introduction
Description: This lesson introduces CLI commands that are used to confirm a voice
VLAN configuration.
Use the show voice vlan command to verify that the voice VLAN is enabled
and that the correct settings are configured for the voice VLAN on the interface.
Strict priority queues are serviced first before any weighted queues. The highest
numbered queue sends data first, and then the next highest strict queue, until
all queues have been serviced.
The weighted queue scheduler type selects packets for transmission, based on
weights that are assigned to each queue. The default weight for each queue is
equal to the Queue ID + 1. These weights are used to calculate the total
number of bytes, not packets that are transmitted. The transmit buffers of each
interface are composed of these queues.
CoS hardware queue settings can be set globally, or per interface. If the show
command for all interfaces does not provide correct values, try a specific
interface.
Module Summary
1. For good quality of service, what is the target latency for voice traffic?
2. What is the difference between switchport access mode and general mode?
4. What command is used to enable the Voice VLAN feature on the switch?
Introduction
This module reviews the functionality of DHCP and shows how to configure both
DHCP server and DHCP relay on Dell EMC N-Series networking switches.
DHCP Overview
Introduction
This lesson reviews basic DHCP concepts for administrators configuring DHCP
features on Dell EMC campus networking switches.
What Is DHCP?
A DHCP relay agent forwards DHCP messages between clients and servers on
different subnets, reducing the amount of broadcast traffic on the network.
Packets that are received from the DHCP server are relayed to the DHCP client.
The DHCP relay agent is configured using ip helper-address on L3 interfaces.
The steps for a DHCP client to obtain an IP address from a DHCP server are as
follows:
1. DHCP client software requests an IP address lease in a discover message. The
discover message is broadcast to all possible DHCP servers.
2. All available DHCP servers respond with a unicast offer message.
3. Client accepts the first offer message that it receives, then broadcasts a request
message in response. The request message verifies the offered address.
4. DHCP server sends a unicast ACK frame to acknowledge that the address is
leased to the client.
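The four-step exchange above can be sketched as a small Python model (purely illustrative; the delivery modes follow the steps listed):

```python
# (sender, message, delivery) for a successful DHCP lease negotiation
HANDSHAKE = [
    ("client", "DISCOVER", "broadcast"),  # step 1: find any DHCP server
    ("server", "OFFER",    "unicast"),    # step 2: each server offers a lease
    ("client", "REQUEST",  "broadcast"),  # step 3: accept the first offer
    ("server", "ACK",      "unicast"),    # step 4: confirm the lease
]

for sender, message, delivery in HANDSHAKE:
    print(f"{sender:>6} sends {message} ({delivery})")
```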
Administrators may also manually configure static IP address bindings for clients
using the host command in DHCP Pool Configuration mode. Static IP addresses
are most often used for DHCP clients for which the administrator wants to reserve
an IP address. For example, a computer server or a printer may need an address
that never changes. A DHCP pool can contain automatic or dynamic address
assignments or a single static address assignment.
DHCP Configuration
Introduction
This lesson shows how to configure and verify the DHCP server feature on Dell
EMC N-Series switches.
4. Configure the network address and subnet mask for the address pool. Use
the network command in IP DHCP Pool Configuration mode to define a pool of
IPv4 addresses for distributing to clients.
5. Use the domain-name command in IP DHCP Pool Configuration mode to set
the DNS domain name which is provided to a DHCP client by the DHCP server.
The DNS name is an alphanumeric string up to 255 characters in length. To
remove the domain name, use the no form of the command.
6. Use the dns-server command in IP DHCP Pool Configuration mode to set
the IP DNS server address which is provided to a DHCP client by the DHCP
server.
7. Configure optional settings:
This example displays the differences between configuring static address pools
and dynamic address pools.
The major differences from the example on the previous slide are:
Use the hardware-address command in DHCP Pool Configuration mode to
specify the MAC address to attach to a manually assigned IP address. To
remove the MAC address assignment, use the no form of the command.
Use the host command to specify a manual binding between an IP address
and the MAC address that is specified in the preceding hardware-address
command. To remove the manual binding, use the no form of the command.
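A hedged sketch of a static binding pool using the two commands above (the pool name, MAC address, and IP address are hypothetical, and the accepted MAC format may vary by DNOS version):

```text
console(config)# ip dhcp pool printer-1
console(config-dhcp-pool)# hardware-address 00aa.bb11.2233
console(config-dhcp-pool)# host 192.168.1.50 255.255.255.0
```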
The DHCP relay agent role is configured using an IP helper address. These
examples demonstrate how to configure the IP helper address globally on each
switch acting in the role of a DHCP relay agent.
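For example, relaying client requests received on VLAN 109 to a DHCP server at 192.168.1.41 (the addresses matching the verification output shown later in this lesson) might look like the following sketch:

```text
console(config)# interface vlan 109
console(config-if-vlan109)# ip helper-address 192.168.1.41
```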
Use the show ip dhcp pool all command to view the information for each of
the address pools on the switch with the DHCP server enabled.
Use the show ip interface vlan command to see if an IP helper address has
been configured for DHCP.
This example displays the information for switch N3. The IP interface for VLAN 109
has a helper address of 192.168.1.41 defined.
The commands that are shown are used to verify additional information for the
DHCP server configuration on a switch.
Securing DHCP
Introduction
This lesson covers the DHCP snooping feature and how it is used to enhance
network security.
For example, suppose that a malicious DHCP client is plugged into the network. It
could try to send a DHCP Release message for an authorized DHCP client in an
attempt to steal that client's identity. The DHCP snooping feature compares the
DHCP Release message to the DHCP snooping database and sees that the MAC
address and port do not match. The switch logs the event and drops the
malicious DHCP Release message.
The table shows commands to implement the DHCP snooping feature on a switch
with a DHCP server enabled.
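A hedged sketch of a typical DHCP snooping configuration (the command names are assumed from DNOS 6 conventions, and the VLAN and uplink interface are hypothetical; verify against your CLI reference). The uplink toward the DHCP server is marked trusted so its server messages are accepted:

```text
console(config)# ip dhcp snooping
console(config)# ip dhcp snooping vlan 109
console(config)# interface gi1/0/48
console(config-if)# ip dhcp snooping trust
```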
Use the show ip dhcp snooping command to display the DHCP snooping
global configuration.
Use the show ip dhcp snooping binding command to display the DHCP
snooping binding entries.
Module Summary
3. What security feature monitors DHCP messages between a DHCP client and
DHCP server?
Refer to the student lab guide for instructions to complete the lab.
Introduction
IPv6 Overview
Introduction
IPv6 addresses the main problem of IPv4: the exhaustion of addresses available
to connect hosts in a packet-switched network. IPv6 has a very large address
space, using 128-bit addresses as compared to the 32-bit addresses of IPv4.
IPv6 uses 128 binary bits to create a single unique address on the network. An
IPv6 address is expressed by eight groups of hexadecimal numbers separated by
colons. Therefore, it is now possible to support 2^128 unique IP addresses, a
substantial increase in number of computers that can be addressed with the help of
IPv6 addressing scheme. This theoretically allows for as many as
340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. In addition,
this addressing scheme also eliminates the need for network address translation
(NAT), which causes several networking problems (such as hiding multiple hosts
behind a pool of IP addresses) and breaks the end-to-end nature of the internet.
IPv6 addresses are represented in the form of eight hexadecimal numbers divided
by colons as in the following:
2001:cdba:0000:0000:0000:0000:3257:9652
To shorten the notation of addresses, leading zeroes in any of the groups can be
omitted, for example:
2001:cdba:0:0:0:0:3257:9652
2001:cdba::3257:9652
However, the double colon shortcut can be used only once in the notation of an
IPv6 address. If there are more groups of all zeroes that are not consecutive, only
one can be substituted by the double colon; the others have to be noted as 0.
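Python's standard ipaddress module applies exactly these compression rules and can be used to check the notation above:

```python
import ipaddress

full = ipaddress.IPv6Address("2001:cdba:0000:0000:0000:0000:3257:9652")
print(full.compressed)  # 2001:cdba::3257:9652 (double colon used only once)
print(full.exploded)    # 2001:cdba:0000:0000:0000:0000:3257:9652

# Link-local addresses always begin with fe80
print(ipaddress.IPv6Address("fe80::1").is_link_local)  # True
```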
The IPv6 address space is organized using format prefixes, similar to telephone
country and area codes that logically divide it in the form of a tree so that a route
from one network to another can easily be found.
An Internet Protocol version 6 (IPv6) data packet comprises two main parts: the
header and the payload. The first 40 bytes/octets (40×8 = 320 bits) of an IPv6
packet make up the header (see Figure 1), which contains the following fields:
Version/IP version – The 4-bit version field serves the same purpose as in
IPv4. It indicates the version of the IP protocol. For IPv6 packets, it is set to the
value of 6.
Packet priority/Traffic class (8 bits) – The 8-bit Priority field is used by
the originating node and by routers to identify packets that belong to the
same traffic class and to distinguish between packets with different
priorities.
Flow Label/QoS management – The 20-bit flow label field can be used by a
source to label a set of packets belonging to the same flow. A flow is uniquely
identified by the combination of the source address and of a nonzero Flow label.
Multiple flows may exist from a source to a destination, along with traffic
that is not associated with any flow (Flow label = 0). The IPv6 routers must handle the
packets belonging to the same flow in a similar fashion. One example of a flow
would be a Voice over IP, or VoIP, conversation.
Payload length – The 16-bit payload length field contains the length, in
octets, of the data following the IPv6 packet header. It puts an upper limit
on the standard payload size of 65,535 octets.
There are three categories of IPv6 addresses - unicast, multicast, and anycast.
IPv6 does not use broadcasts, as the multicast type can perform its task.
A unicast address acts as an identifier for a single interface. An IPv6 packet
sent to a unicast address is delivered to the interface identified by that address.
A multicast address acts as an identifier for a group of interfaces that may
belong to different nodes. An IPv6 packet sent to a multicast address is
delivered to all interfaces in the group. For example, a streaming video session
could be sent to a multicast address, and any interface with that address would
receive it.
An anycast address acts as an identifier for a set of interfaces that may
belong to different nodes. Unlike a multicast address, an IPv6 packet that is
destined for an anycast address is delivered only to the nearest interface that
is identified by the address, as determined by the routers' routing protocol.
Link-local addresses are identified by the Format Prefix of 1111 1110 10. The
address always begins with FE80. With the 64-bit interface identifier, the prefix for
link-local addresses is always FE80::/64.
Unique Local IPv6 Unicast Addresses or ULAs are also called Local IPv6
addresses. These addresses replaced the Site-Local IPv6 addresses that are being
deprecated. They are routable inside a site or between a limited number of sites,
but are not expected to be routable on the global internet. A ULA is globally unique,
thus avoiding intersite address collisions. The ULA is intended for local IPv6
communications, for instance for stable internal communication during
renumbering.
Aggregatable global unicast addresses are globally routable and reachable on the
IPv6 portion of the internet.
There are two classes of ICMPv6 messages. Error messages have a type from 0 to
127. Informational messages have a type from 128 to 255.
An ICMPv6 message "Packet Too Big" is sent when the packet cannot be
forwarded because the link MTU on the forwarding link is smaller than the size of
the IPv6 packet. In the "Packet Too Big" message, the type field is set to two and
the code field is set to zero. After the checksum field is the 32-bit MTU field that
stores the link MTU for the link on which the packet is being forwarded.
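The fixed part of this message can be laid out with Python's struct module (a sketch for illustration; the checksum is left at zero, and the invoking packet that follows the MTU field is omitted):

```python
import struct

ICMPV6_PACKET_TOO_BIG = 2  # type field
CODE = 0                   # code field

def packet_too_big(link_mtu: int) -> bytes:
    """Type (1 byte), code (1 byte), checksum (2 bytes), MTU (4 bytes)."""
    return struct.pack("!BBHI", ICMPV6_PACKET_TOO_BIG, CODE, 0, link_mtu)

msg = packet_too_big(1400)
print(len(msg), msg.hex())  # 8 bytes: type 02, code 00, checksum 0000, MTU 0x578
```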
The IPv6 global address example shown has 64 bits for the network portion and 64
bits for the Interface identifier or host address.
IPv6 Implementation
Introduction
On the N-series, similarly start by enabling IPv6 unicast routing with the "ipv6
unicast-routing" command, and then configure the VLAN interfaces with the
appropriate IPv6 addresses as indicated above. Ensure that the physical port is
assigned to the proper VLAN first.
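A minimal sketch of that sequence (the VLAN ID and prefix are illustrative; verify the exact syntax for your DNOS release):

```text
console(config)# ipv6 unicast-routing
console(config)# interface vlan 10
console(config-if-vlan10)# ipv6 enable
console(config-if-vlan10)# ipv6 address 2001:cdba::1/64
```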
Review Questions
Explanation:
1. 128 bits
2. 340 undecillion whereas IPv4 had 4.3 billion
3. Hexadecimal provides more flexibility by adding the letters A-F to the
numbers 0-9.
4. Unicast, Multicast, and Anycast
5. Link Local, Unique Local, and Global Address.
Module Summary
1. How many bits does an IPv6 address have compared with an IPv4 address?
2. How many possible addresses are there with IPv6 compared with IPv4?
3. Why is hexadecimal numbering used with IPv6 but not with IPv4?
Lab: IPv6
Refer to the student lab guide for instructions to complete the lab.
Introduction
This module covers Power over Ethernet in a Dell EMC networking environment.
The technology and concepts that enable Ethernet switches to supply electrical
operating power over standard Ethernet cabling to specific device types are
introduced. PoE standards, switch requirements, use cases, configuration,
validation, and troubleshooting steps are also covered.
Introduction
This lesson introduces power over Ethernet (PoE) and how it is used to provide
electrical power to network end devices.
Introduction
From enterprise storage arrays to the single IP telephone on a desk, all network
end devices need electrical power to operate. As each end device is installed, a
separate power outlet with enough electrical capacity for that device must be
provided. Further, it must be installed close enough to the device so the power
cable can be plugged in.
Running an electrical branch circuit to provide power close to each new end device
is an expensive proposition. Sometimes, because of location limitations, it is cost
prohibitive to install a branch circuit for each device requiring power. The problem
is exacerbated as more devices are added to a network, and further as those
devices become more geographically dispersed.
All IT devices need electrical power to operate. Electrical pressure is
measured in volts, and the amount of current flow is measured in amperes, or
amps. The combined voltage and current consumption of an electrical device is
measured in watts.
Traditional street-to-device power distribution was adequate for many years, and
usually it is still adequate today. However, changes in modern IT device use and
deployment have increased the number of end devices that are connected to a
network. Exacerbating the problem is that many of these devices are being placed
at many different sites and other locations. Devices such as IP telephones and
surveillance equipment are at the top of the list of new devices being added to
networks all the time. Each of these devices needs electrical power. Most are low-
power devices that require an extra device that is called a transformer. This method
for powering devices leads to complicated and costly power distribution, wiring, and
power outlet placement schemes.
Ethernet cables and RJ45 connectors are constructed to carry Ethernet signals.
The illustration shows each RJ45 connector oriented the same way for clarity.
Normally, when a cable is bent in a "U" shape, the connectors show wire
numbering opposite of each other.
PoE does not affect or change Ethernet maximum cable distance specifications.
Wattage is a measurement of electrical work. Volts measure electrical pressure,
and amps measure electrical flow rate: Wattage = Volts x Amps.
Standard eight-wire cables that are used for 10/100BASE-T Ethernet do not use
all of the wire pairs. Wires that are connected to pins 4,5 and 7,8 of a cable
are not used. In this case, PoE takes advantage of the unused wire pairs to
supply electrical power to PoE devices. Gigabit Ethernet uses all four wire
pairs in a cable. Since there are no unused pairs, PoE supplies power over two
of the data wire pairs.
PoE supplies voltage over the cable in the range of 44-57 V DC, at a maximum current draw of 350 mA. Two wires for each of the positive and negative poles of the DC circuit are used. This design is used because a single wire in the cable is too thin to carry the full electrical load.
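As a worked check of the [Watts = Volts × Amps] relationship, the maximum power a standard PoE source port can inject at the lowest allowed voltage is:

```latex
P = V \times I = 44\ \mathrm{V} \times 0.350\ \mathrm{A} = 15.4\ \mathrm{W}
```

This matches the 15.4 W per-port budget of standard (802.3af) PoE; after resistive losses along the cable, roughly 12.95 W remains available at the powered device.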
Gigabit Ethernet uses all four cable wire pairs to carry data signals. In such cases,
electrical power is transmitted over signal wires using the phantom power
transmission technique. Because electricity and data signals flow through wire at
opposite ends of the frequency spectrum, they can travel over the same cable
without interference. Alternating current electricity has a low frequency of 60 Hz or less, and PoE uses direct current, which technically has no alternations at all. Data transmission signals have frequencies that can range from 10 MHz to 100 MHz.
Which power transmission scheme PoE uses is transparent to network
administrators and users. PoE Powered Devices are designed to accept power
across the cable in either format.
As with most networking protocols, PoE also has IEEE standards that govern
engineering and use characteristics. IEEE 802.3af defines and governs standard
PoE characteristics.
PoE+ enhances the IEEE 802.3af specification. Its specific purpose is to provide more power capability to end devices. PoE+ adheres to all other functional specifications of the 802.3af standard.
Upon connection, the switch first transmits a lower voltage signal to detect a
special PoE capability signature in PoE-compatible devices. When the signature is
detected, the switch knows that standard PoE voltages can be safely applied to
power the end device. Power over Ethernet is injected onto the cable at a voltage
of 44 VDC to 57 VDC, and typically 48 V is used. Smaller devices could use 5 VDC
through 12 VDC to operate. However, the high voltage that is used in PoE enables
more efficient power transfer along the cable. Voltage at the PoE standard level is
also considered safe in cases where there is exposed wiring, or a short circuit
condition.
Although the voltage is safe for users, it can still damage equipment that has not
been designed to use PoE. Before a PoE switch can enable operating power to
LAN connected equipment, it first performs the signature detection process.
Power classification follows the signature detection stage. After the end device
returns a classification signature, it may send optional power classification
information. The power classification informs the switch about power requirements.
All switches have a limited total power budget. They can use power classification
information to allocate power across all connected PoE devices. In many PoE
source devices, the final power delivery is determined using the Link Layer
Discovery Protocol-Media Endpoint Discovery - LLDP-MED - negotiation. LLDP-
MED is a standard that facilitates function information sharing between end devices
and network infrastructure devices such as Ethernet switches. Using LLDP-MED enables refinement, or fine-tuning, of the power limit.
High-Power PoE
Introduction
This lesson introduces the wide range of device types available for use in a PoE-
enabled network infrastructure.
IP Telephones
In most use cases, IP phones are standard telephones. Each IP phone requires an
Ethernet connection and power. Options for power are a standard AC/DC adapter
or PoE. Most IP phones are voice-only units. These require little power and are
compatible with standard PoE. Because of the integrated LCD display and extra
circuitry, voice/video IP phones require more power. PoE+ may be required
because of the additional power demand.
Most wireless access points are low-power devices compatible with standard PoE.
Although they can be powered using external DC adapters, providing power
through PoE is most economical when deploying them in quantity. Large and highly
populated areas such as office buildings and stores require them to be deployed at
strategic locations to provide uninterrupted coverage. This type of deployment is
among the best use cases for PoE powered wireless access points.
Surveillance cameras vary widely in size, feature, and function. Simple devices are
static and transmit video using available light. Others have integrated motors that
enable them to tilt and pan. Some include infrared light sources. Cameras that are
intended for outdoor use may have these features and integrated heating elements
to keep them operating in cold environments. The number and type of features determine whether a camera can operate using standard PoE, requires PoE+, or cannot use PoE at all.
Electronic access control systems provide supervision over who or what is enabled
to gain access to a building, a room, or even a supply cabinet. Environmental
controls are used to monitor or control many different factors including temperature,
pressure, speed, humidity, and so on. These systems range from controls that
connect over proprietary wireless signals to a central LAN-connected PoE
controller. Some of these devices are individual units that are directly connected to
the LAN and use PoE.
In the past, building HVAC and industrial controls and sensors were connected to a
central management system through an RS-485 or RS-232 bus connection. Today
building and industrial-based control systems are rapidly adopting Ethernet as the
preferred communications infrastructure. This change in communication technology
adds the ability to use PoE to power these devices, whether they are individual LAN-connected units or central LAN-connected PoE controllers that aggregate sensors and controls connecting over proprietary wireless signals.
Power Provisioning
Introduction
This lesson covers Power over Ethernet standards and types, and their role as a technology enabler in modern networks.
Networks are evolving beyond supporting only business application systems and the users that access them. Networks are fast becoming the key enablers of intelligence gathering, analysis, and dissemination for real-time surveillance and monitoring, and the center of environmental and industrial systems control. Because device diversity and complexity are growing almost as fast as the number of PoE device deployments, PoE is considered an enabler in the modern network. PoE specifications and capacities are evolving to keep up as demands on the PoE infrastructure increase.
PoE specifications are arranged into four types. Each type summarizes information
about a version of PoE and its IEEE standard, and how that version is typically
used. Each type standardizes the maximum available power, and other key
information.
PoE Type 1 uses two wire pairs to connect many types of lower-powered devices
to the network. The IEEE 802.3af standard provides up to 15.4 W of DC power to
each PoE switch port. It provides up to 12.95 W of power for each device. PoE
Type 1 supports VoIP phones, sensors/meters, and wireless access points. It also
supports simple, static surveillance cameras that do not pan, tilt, or zoom or have
other high-power requirement features.
PoE Type 3 uses all four pairs in a copper cable. It is based on the IEEE 802.3bt
standard. The standard was ratified September 2018. It provides 60 W of DC
power to each PoE port and up to 51 W of power for each device. PoE Type 3 can
support even higher power demand devices such as video conferencing system
components and environmental, building, and industrial monitoring and
management devices. UPOE is a Cisco implementation of Type 3 PoE. The full
name is Cisco Universal Power Over Ethernet. Dell N-Series Ethernet switches are
fully UPOE compatible.
PoE Type 4 is based on the IEEE 802.3bt standard and, along with Type 3, was ratified in September 2018. It provides up to 90 W of power at each PoE PSE or switch port and up to 71.3 W of power for each device. PoE Type 4 can support high-power devices such as laptops and other devices with more features, motors, actuators, and larger LCD displays.
A PoE injector, also called a midspan, is used to add PoE capability incrementally to legacy, non-PoE networks. Midspans can be used to upgrade existing LAN installations to PoE, and provide a solution where fewer PoE ports are required. To upgrade a network segment to PoE, run the network cables through the midspan. As with native PoE switches, PoE configuration and management are automatic. Midspans are available as multiport rack-mounted units or single-port units. If a network is evolving toward hosting more PoE-enabled devices, it is best to upgrade the switching infrastructure to native PoE switches. Upgrades should be planned and accomplished as soon as possible, to take full advantage of the power distribution economy of PoE.
PoE-enabled switches have a total power budget that cannot be exceeded. If the
current draw exceeds the power budget limit, attached end devices could fail.
Power budget management at the switch is important. Switch power budget allocation can be managed in either static or dynamic mode. In static mode, a predetermined amount of power is deducted from the total power budget of the switch and reserved for a specific switch port. This mode ensures that the maximum power the administrator specifies for a selected interface is always available to that interface, is guaranteed for only that interface, and cannot be shared with other switch ports. In dynamic mode, the power that is allocated from the total switch power budget for each port is the power that is actually consumed at that port. The administrator can allocate any unused portion of switch PoE power to other end devices as needed.
Dell N1500 and N2000P models provide up to 48 ports of PoE+. Dell N3000P
models provide up to 48 ports of PoE+ and are UPoE ready.
Dell N1100-series switches each have a single internal power supply with no
options for more internal or external power supplies. The PoE power budget is 60
W for the N1108P-ON, 185 W for the N1124P-ON, and 370 W for the N1148P-ON
models.
Both the Dell N1524P and the N1548P switch have an internal 600-W power
supply that can power up to 24 PoE end devices. At full PoE+ power, this
configuration yields up to 500 W. An external modular power supply provides 1000
W and can power up to 48 PoE end devices. The combined internal and external
power supplies yield up to 1500 W.
The PoE power budget for each switch port is controlled through the switch
firmware. An administrator can limit the power that is supplied on a port or prioritize
power to some ports over others. The table shows N1524P and N1548P power
budget data in accordance with power supply configurations.
Both the Dell N2024P and the N2048P switch have an internal 1000-W power
supply that can power up to 24 PoE+ end devices. At full PoE+ power, this
configuration yields up to 850 W. An extra modular power supply provides 1000 W
and can power up to 48 PoE end devices. The combined internal and external
power supplies yield up to 1700 W.
The switch firmware controls the PoE power budget for each switch port. An
administrator can limit the power that is supplied on a port or prioritize power to
some ports over others. The table shows N2024P and N2048P power budget data
in accordance with power supply configurations.
Dell N3024P, N3048P, and N3048EP-ON switches each have an internal 1000-W
power supply that can power up to 24 PoE+ end devices. At full PoE+ power, this
configuration yields up to 850 W. An external modular power supply provides 1000
W, and can power up to 48 PoE end devices. The combined internal and external
power supplies yield up to 1800 W.
The switch firmware controls the PoE power budget for each switch port. An
administrator can limit the power that is supplied on a port or prioritize power to
some ports over others. The table shows N3024P and N3048P and N3048EP-ON
power budget data in accordance with power supply configurations. The N3024P
and N3048P and N3132PX switches implement four-pair Universal Power over
Ethernet (UPOE) on the first 12 ports. Four-pair UPOE enables power to be
supplied to Class 5 powered devices that may require up to 60 W. UPOE power
must be configured manually.
Dell N3024P: with a single power supply, the power budget is 550 W, and PoE supplied power must not exceed 550 W. With two power supplies, the total power budget is 1100 W, and all 24 PoE+ ports can supply maximum power.
Introduction
This lesson introduces Power over Ethernet and how it is used to provide electrical power to specific types of network end devices.
The Per-Port Power Limit enables setting the power limit for each PoE+ switch
port. Static and dynamic power mode settings can be used to determine the
amount of power to make available to switch ports.
The static setting reserves a guaranteed amount of power for a PoE port. The configured power is reserved for the port regardless of whether the port is powered or not. This setting is useful for powering up devices which draw a variable amount of power, providing them with an assured power range to operate within.
The dynamic setting does not reserve power for a given port at any time. The instantaneous power that each PoE port draws is subtracted from the available power budget; the result is the power the switch has available for additional devices. The dynamic setting enables the switch to power more PoE devices simultaneously, because no power is held in reserve. This feature is useful for efficiently powering up more devices when the available power of the PoE switch is limited.
Power Detection Mode - Sets the mode to PoE legacy 802.3af operation or 4-
point 802.3at plus legacy detection. 4-Point detection is a method of protecting the
switch and end device from a PoE mode power mismatch. It ensures the PD, or
end device, PoE mode is correctly detected.
LLDP-MED TLVs
There are three TLV types for LLDP-MED. The Power over Ethernet Management
TLV lets end devices advertise the power level and power priority that is required. It
also lets PoE switches advertise the amount of power that they can supply. The
Network Policy Discovery TLV simplifies deployment of large, multivendor networks
and aids in troubleshooting. This TLV lets end devices and switches advertise their
VLAN ID, IEEE Priority, and Differentiated Services Code Point - Layer 3 Priority -
assignments to each other. Inventory Management Discovery TLV lets an end
device transmit detailed inventory information to the switch. This self-inventory
information can include information such as vendor name, model number, firmware
revision, and device serial number.
Configuring LLDP-MED
LLDP-MED is disabled on all ports by default. Use the commands shown to enable
LLDP MED and verify status. Optional configuration commands are available
where required, but LLDP MED setting defaults are sufficient for most PoE
environments. TLV interface configuration code definitions: 0 - Capabilities, 1 - Network Policy, 2 - Location, 3 - Extended PSE, 4 - Extended PD, 5 - Inventory.
Execute the configuration command in the Interface Configuration (Ethernet) mode.
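As an illustrative sketch only (the interface name is an example, and exact keywords can vary by firmware release, so verify against the CLI reference for your switch), enabling and then verifying LLDP-MED on a port might look like:

```
console# configure
console(config)# interface gi1/0/4
console(config-if-Gi1/0/4)# lldp med
console(config-if-Gi1/0/4)# exit
console(config)# exit
console# show lldp med interface all
```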
The main management configuration task for PoE switches is power management.
The default switch and port configuration is automatic and sufficient for most
applications. However, PoE power requirements can vary widely. The user network
environment mostly dictates these requirements. When needed, CLI commands
are available to custom configure switch and port power budget allocation and
device type settings. Also, CLI commands can condition PoE feature and power
function settings at each port.
The power inline command enables or disables the ability of a port to deliver
power. Auto enables the switch to negotiate with the powered device to learn the
desired power draw of the device. The default value is auto, which means that
device discovery is enabled and the port can deliver power. The power inline
detection parameter should be set to class. Execute this command in the CLI
Interface Configuration mode for Ethernet.
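A hedged sketch of this configuration (the port number is illustrative):

```
console# configure
console(config)# interface gi1/0/1
console(config-if-Gi1/0/1)# power inline auto
console(config-if-Gi1/0/1)# power inline detection class
```

With auto set, the switch performs device discovery and negotiates the power draw; setting detection to class enables class-based detection as recommended above.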
Command Description
To set the power management type, use the power inline management command
in Global Configuration mode. This command is used along with the power inline
priority command. To set the management mode to the default value, use the 'no'
form of this command. Execute this command in the CLI Global Configuration
mode.
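For example, switching the power management mode to static, and then back to the default with the 'no' form, might look like this sketch (on stacked switches an additional unit parameter may apply; verify against the CLI reference):

```
console# configure
console(config)# power inline management static
console(config)# no power inline management
```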
Command Description
The calculation to find the available power under static power management is:
Available Power = Power Limit of the Sources – Total Configured Power.
Total Configured Power is calculated as the sum of the power limits configured on each port.
Under dynamic power management:
Available Power = Power Limit of the Sources – Total Allocated Power.
Total Allocated Power is calculated as the sum of the power consumed by each port.
Available Power = Power limit of the Sources – Total Class Configured power.
Total Class Configured Power is calculated as the sum of the class-based power
allocation for each port. Class-based power management allocates power based on the class that is selected by the device using LLDP. Power is supplied to the device in class mode per the following table:
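To illustrate the class-based calculation with hypothetical numbers: suppose a switch with a 550 W PoE source limit has granted Class 4 allocations (30 W each at the source port) to 10 ports. Then:

```latex
\text{Available Power} = 550\ \mathrm{W} - (10 \times 30\ \mathrm{W}) = 250\ \mathrm{W}
```

leaving 250 W of the budget for additional powered devices.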
The power inline priority command configures the port priority level, for the delivery
of power to an attached device. The switch may not be able to supply power to all
connected devices. If adequate power capacity is not available for all enabled
ports, then port priority is used to determine which ports supply power.
Command Description
no power inline limit Sets the power limit type to the default of
32,000 milliwatts.
Use this command to enable high-power mode. To disable high power mode, use
the 'no' form of this command. High power is enabled by default. In high-power
mode, the switch (PSE) negotiates the power budget with the powered device (PD)
through LLDP. The system does not apply high power to the interface until an
LLDP-MED packet is received from the link partner requesting the application of
high power. Execute this command in the CLI Interface Configuration mode.
Use the power inline limit command to configure the type of power limit. The default
power limit is 32,000 milliwatts. To set the power limit type to the default, use the
'no' form of this command. User-defined limits are only operational if the power
management mode is configured as static. By default, the power management
mode is dynamic. If the operator attempts to set the limit to user-defined and the
power management mode is not configured as static, a warning is issued and the
command has no effect. Execute this command in the CLI Interface Configuration
mode.
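Putting this together with the management mode, a hedged sketch (the port and the limit value are illustrative, and the unit of the limit argument should be verified against the CLI reference):

```
console(config)# power inline management static
console(config)# interface gi1/0/2
console(config-if-Gi1/0/2)# power inline limit user-defined 25000
```

Because user-defined limits are only operational under static management, setting the management mode first avoids the warning described above.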
The power inline priority command configures the port priority level, for the delivery
of power to an attached device. The switch may not be able to supply power to all
connected devices. If adequate power capacity is not available for all enabled
ports, the port priority is used to determine which ports supply power. For ports that
have the same priority level, the lower-numbered port has higher priority.
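A sketch of a priority assignment (the port and the priority keyword are illustrative; typical levels are critical, high, and low):

```
console(config)# interface gi1/0/5
console(config-if-Gi1/0/5)# power inline priority critical
```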
The power inline usage threshold command configures the system power usage
threshold level at which lower priority ports are disconnected. The threshold is
configured as a percentage of the total available power. The default threshold is
90%. To set the threshold to the default value, use the no form of the command.
Execute this command in the CLI Global Configuration mode.
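For example, lowering the threshold from the 90% default to 80% of total available power might look like this sketch (the exact percentage syntax should be verified against the CLI reference):

```
console# configure
console(config)# power inline usage-threshold 80
```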
Use the power inline reset command to reset the port. This command is useful if
the port has stopped responding and is in an error state. Power to the powered
devices may be interrupted as the port is reset. Execute this command in the CLI
Interface Configuration mode.
Use the show power inline command to report current PoE configuration and
status. If no port is specified, the command displays global configuration and status
of all ports. If a port is specified, then the command displays the details for the
single port. Use the detailed parameter to show power limits, detection type, and
high-power mode for the interface. Execute this command in the Privileged Exec
mode.
Troubleshooting
Introduction
This lesson covers troubleshooting Power over Ethernet and the delivery of electrical power to network end devices.
In most environments, testing PoE during deployment is typically little more than connecting a PoE-enabled device to the switch and observing whether it powers up. When a device does not power up, troubleshooting usually starts with moving the device to another switch port or replacing the LAN cable. If the problem is not found quickly, however, this "go/no-go" approach reaches its limits, as with most technology. For PoE, the troubleshooter must consider the entire end-to-end cable infrastructure. Consider details such as the powered device (PD) type, the type of PoE power it requires, and the standards it adheres to. Also, the switch must be set up correctly. Troubleshooting includes not only the port PoE configuration, but also how the switch is set up to distribute and use its power budget.
Shown here are the most common post-deployment causes of trouble in PoE network environments. When adding PoE to a LAN, it is best to also expand your knowledge base beyond Ethernet and OSI specifications and protocols, to include low-voltage DC electric power transmission and device characteristics. Understanding power helps immeasurably in dealing with deployment and troubleshooting issues. Although PoE operation is largely automated, LAN devices that transfer both data and power over Ethernet should be installed by people with DC electric power knowledge.
Dell N-Series switch models have different Power over Ethernet characteristics. Selecting the correct switch models and PSU configurations for a given PoE environment is key to ensuring correct and consistent power provisioning after deployment and over the life of the network.
Module Summary
2. Which PoE specification type has a maximum source port power of 60W?
Introduction
Introduction
This lesson introduces Access Control Lists (ACLs) on DNOS 6 including the
purpose of access control and the commands that are used to permit access.
Access control lists (ACLs) are a collection of rules that provide security by
blocking selected packets from entering the switch. ACLs are implemented in
hardware and processed at line rate for the front-panel ports. A reduced
functionality set of ACLs is implemented in firmware for the Out-of-band (OOB)
port.
The Dell EMC Networking N-Series switches support ACL configuration in both the
ingress and egress directions. Egress ACLs provide the capability to implement security rules on egress flows (traffic leaving a port) rather than ingress flows (traffic entering a port). Ingress and egress ACLs can be applied to any physical port, port channel (LAG), or VLAN routing port.
When an ingress (traffic entering) or egress (traffic leaving) ACL is applied to a port, the ACL compares the criteria in its rules, in list order, to the fields in a packet or frame to check for matching conditions. The ACL processes the traffic based on the actions that are contained in the matching rules.
ACLs are organized into access groups. Access groups are numbered by priority; the lowest number has the highest priority. Multiple access groups can be configured on an interface; the lowest-numbered access group is processed first, and then the next lowest-numbered access group. Within an access group, ACL rules are processed in sequence, from the first (lowest-numbered) rule to the last (highest-numbered) rule in the access group.
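As a hedged end-to-end sketch (the list name, MAC address, interface, and sequence number are all illustrative), a MAC ACL that drops traffic from one host and permits everything else, applied ingress, might look like:

```
console(config)# mac access-list extended BLOCK-HOST
console(config-mac-access-acl-list)# deny 00:11:22:33:44:55 00:00:00:00:00:00 any
console(config-mac-access-acl-list)# permit any any
console(config-mac-access-acl-list)# exit
console(config)# interface gi1/0/10
console(config-if-Gi1/0/10)# mac access-group BLOCK-HOST in 10
```

The 00:00:00:00:00:00 mask requires an exact match on the denied address, per the inverse-mask behavior described below.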
ACL Configuration
Command Description
console(config-mac-access-acl-
list)# permit any any
console(config-mac-access-acl-
list)# exit
Ingress ACLs can be applied to an interface such that only packets with specific VLAN tags are filtered.
sequence-number - Enter a number as the filter sequence number. Range: zero (0) to 65535.
deny - Enter the keyword deny to drop any traffic matching this filter.
permit - Enter the keyword permit to forward any traffic matching this filter.
any - Enter the keyword any to filter all packets.
host mac-address - Enter the keyword host and then a MAC address to filter packets with that host address.
The MAC ACL supports an inverse mask. A mask of ff:ff:ff:ff:ff:ff allows entries that
do not match and a mask of 00:00:00:00:00:00 only allows entries that match
exactly.
Command Description
console# show mac-access-lists Display all MAC access lists and all
rules that are defined for the MAC ACL.
IP ACL Configuration
Command Description
console(config-ip-acl)# exit
IP ACL Verification
Command Description
console# show ip access-lists Display all IPv4 access lists and all rules
that are defined for the IPv4 ACL.
Scenario
A server admin recently deployed a new server on the network. The
admin is trying to FTP several files to the server, however they are
unable to connect to the server using FTP. The server admin advised
that they are able to access the server using RDP. He also confirmed
that all of the settings on the server are correct to allow FTP. The new
server was connected to port 20 on the switch. What could be
preventing the server admin from using FTP to transfer files?
A server admin recently deployed a new server on the network. The admin is trying to FTP several files to the server; however, they are unable to connect to the server using FTP. The server admin advised that they can access the server using RDP. The admin also confirmed that all settings on the server are correct to allow FTP. The new server was connected to port 20 on the switch. What could be preventing the server admin from using FTP to transfer files?
Discussion Notes:
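One hedged possibility to discuss: an ingress ACL on the server's switch port that blocks the FTP control channel (TCP port 21) while permitting other IP traffic, such as RDP, would produce exactly these symptoms. A sketch of such a configuration (the list name is illustrative):

```
console(config)# ip access-list BLOCK-FTP
console(config-ip-acl)# deny tcp any any eq 21
console(config-ip-acl)# permit ip any any
console(config-ip-acl)# exit
console(config)# interface gi1/0/20
console(config-if-Gi1/0/20)# ip access-group BLOCK-FTP in
```

Checking which ACLs are applied to the port with show ip access-lists would be a reasonable first troubleshooting step.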
Port Security
Introduction
Port security is used to enable security on a per-port basis. When a port is enabled
for port security, only packets with allowable source MAC addresses are forwarded.
All other packets are discarded. Port security allows a configurable limit to the
number of source MAC addresses that can be learned on a port.
The port security feature allows the administrator to limit the number of source
MAC addresses that can be learned on a port. When a port reaches the configured
limit, any additional addresses are not learned, and the frames that are received
from unlearned stations are discarded. Frames with a source MAC address that
has already been learned are forwarded.
This feature, which is also known as MAC locking, helps secure the network by preventing unknown devices from forwarding packets into the network. For example, to ensure that only a single device can be active on a port, set the number of allowable dynamic addresses to one. After the MAC address of the first device is learned, no other devices are allowed to forward frames into the network.
The focus on security is mainly at Layer 3, not Layer 2, which creates a security gap. Layer 2 attacks against the entry points into the network, such as edge routing devices and wireless access points, are often left unconsidered in security discussions.
Layer 2 Attacks
Attacks that are launched against switches at Layer 2 can be grouped as follows:
MAC Layer Attacks - these attacks often focus on the MAC table.
VLAN Attacks
Spoof Attacks
Attacks on switch devices
MAC Poisoning
Two methods are used to implement port security: dynamic locking and static
locking. Dynamic locking implements a first arrival mechanism for MAC locking.
Static locking also has an optional sticky mode.
Dynamic Locking
Dynamic locking implements a ‘first arrival’ mechanism for MAC locking. The
administrator specifies how many dynamic addresses may be learned on the
locked port. The maximum dynamic MAC address limit is 600 MAC addresses. If
the limit has not been reached, then a packet with an unknown source MAC address is learned and forwarded normally. If the MAC address limit has been reached, the packet is discarded. The administrator can disable dynamic locking by setting the number of allowable dynamic entries to zero.
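A hedged sketch of dynamic locking limited to a single learned address (the interface is illustrative, and the keyword for the dynamic limit should be verified against the CLI reference):

```
console(config)# interface gi1/0/3
console(config-if-Gi1/0/3)# switchport port-security
console(config-if-Gi1/0/3)# switchport port-security dynamic 1
```

With the dynamic limit set to 1, the first source MAC address seen on the port is learned, and frames from any other station are discarded.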
When a port security-enabled link goes down, all dynamically locked addresses are
freed. When the link is restored, that port can once again learn MAC addresses up
to the administrator specified limit. A dynamically locked MAC address is eligible to
be aged out when another packet with that MAC address is not seen within the
age-out time. If station movement occurs, dynamically locked MAC addresses are
also eligible to be relearned on another port. Statically locked MAC addresses are
not eligible for aging. If a packet arrives on a port with a source MAC address that is statically locked on another port, then the packet is discarded.
Static Locking
Static locking allows the administrator to specify a list of host MAC addresses that
are allowed on a port. The maximum static MAC address limit is 100 MAC
addresses. The behavior of packets is the same as for dynamic locking: only
packets that are received with a known source MAC address can be forwarded.
Any packets with source MAC addresses that are not configured are discarded.
The switch treats this action as violation and supports the ability to send an SNMP
port security trap.
If one or more specific MAC addresses that are connected to a particular port are
known, the administrator can specify those addresses as static entries. If you set
the allowable dynamic entries to zero, only packets with a source MAC address
matching a MAC address in the static list are forwarded.
Statically locked MAC addresses are not eligible for aging. If a packet arrives on a port with a source MAC address that is statically locked on another port, then the packet is discarded.
Sticky Mode
Sticky mode configuration converts all the existing dynamically learned MAC addresses on an interface to sticky. Sticky means that the addresses are not aged out and are displayed in the running-config. New addresses that are learned on the interface also become sticky. Note that "sticky" is not the same as static. The difference is that all sticky addresses for an interface are removed from the running-config when the interface is taken out of sticky mode, whereas static addresses must be removed from the running-config individually.
Sticky MAC addresses appear in the running-config in the following form:
switchport port-security mac-address sticky
0011.2233.4455 vlan 33
Statically locked MAC addresses appear in the running-config in the same form, without the sticky keyword:
switchport port-security mac-address 0011.2233.4455 vlan 33
NOTE: To remove dynamic or static MAC locking, the max learn value
must be set to 0.
NOTE: Port security should only be enabled on access mode ports and
not on trunk mode ports. This recommendation is not enforced by the
switch.
Command Description
console(config-if-gi1/0/3)#
switchport port-security
Command Description
Command Description
In this example, the output shows 2 statically configured MAC addresses. The VLAN is identified for the MAC addresses, and the output indicates that one of the secure MAC addresses is sticky.
Introduction
AAA Overview
Each service is configured using method lists. Method lists define how each service
is performed by specifying the methods available to perform the service. The first
method in a list is tried first. If the first method returns an error, the next method in
the list is tried. This process continues until all methods in the list have been
attempted. If no method can perform the service, then the service fails. A
method may return an error due to lack of network access, misconfiguration of a
server, and other reasons. If there is no error, the method returns success if the
user is allowed access to the service and failure if the user is not. AAA gives the
user flexibility in configuration by allowing different method lists to be assigned to
different access lines. In this way, it is possible to configure different security
requirements for the serial console than for Telnet, for example.
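A hedged sketch of a method list assignment (the list name is illustrative, and exact quoting and line-mode syntax can vary by firmware release): defining a login list that tries RADIUS first, falling back to the local user database only if RADIUS returns an error, and applying it to Telnet access:

```
console(config)# aaa authentication login "NET-AUTH" radius local
console(config)# line telnet
console(config-telnet)# login authentication NET-AUTH
```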
AAA Methods
Methods that never return an error are not followed by any other methods in a
method list.
The enable method uses the enable password. If no enable password is defined, then the enable method returns an error.
The ias method is a special method that is only used for 802.1X. It uses an
internal database (separate from the local user database) that acts like an
802.1X authentication server. This method never returns an error. It passes or
denies a user.
The line method uses the password for the access line on which the user is accessing the switch. If no line password is defined for the access line, the line method returns an error.
The local method uses the local user database. If the user password does not match, access is denied. This method returns an error if the username is not present in the local user database.
The none method does not perform any service, but instead always returns a
result as if the service had succeeded. This method never returns an error. If
none is configured as a method, the user is authenticated and allowed to
access the switch.
The radius and tacacs methods communicate with servers running the
RADIUS and TACACS+ protocols, respectively. If the switch is unable to
contact the server, these methods can return an error.
Local Authentication
This configuration allows either user to log in to the switch. Both users have privilege level 1. Because no enable password is set by default, neither user can successfully issue the enable command to reach Privileged Exec mode until an enable password is configured. The default method list for Telnet enable authentication contains only the enable method.
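A minimal sketch of such a local-authentication setup follows. The usernames and passwords are hypothetical, and exact password rules vary by firmware release:

```
console(config)# username alice password Passw0rd1 privilege 1
console(config)# username bob password Passw0rd2 privilege 1
console(config)# enable password En4blePa55
console(config)# aaa authentication login default local
```

With the enable password configured, either user can then issue the enable command, since the default enable method list uses the enable method.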
Here is an example of a public key configuration for SSH login. Using a tool such as PuTTY and a private/public key infrastructure, you can enable secure login to the Dell EMC Networking N-Series switch without a password. Instead, a public key is used, with the private key kept locally on the administrator's computer. The public key can be placed on multiple devices, allowing the administrator secure access to each of them.
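The following is a sketch of the kind of configuration involved; exact public-key syntax varies across DNOS 6.x releases, and the username and key material here are hypothetical (the key string is truncated):

```
console(config)# crypto key generate rsa
console(config)# ip ssh server
console(config)# ip ssh pubkey-auth
console(config)# crypto key pubkey-chain ssh
console(config-pubkey-chain)# user-key admin rsa
console(config-pubkey-key)# key-string row AAAAB3NzaC1yc2EAAA...
```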
RADIUS Authentication
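As a sketch of RADIUS authentication, assuming a reachable RADIUS server at a hypothetical address (exact RADIUS configuration syntax varies across DNOS 6.x releases, so treat this as illustrative):

```
console(config)# radius-server host auth 10.1.1.50
console(Config-auth-radius)# key "sharedSecret"
console(Config-auth-radius)# exit
console(config)# aaa authentication login default radius local
```

Listing local after radius provides a fallback if the server cannot be contacted, since the radius method can return an error in that case.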
Module Summary
2. What two methods are used to implement port security on a Dell EMC N-Series
switch?
Lab: Security
Refer to the student lab guide for instructions to complete the lab.
Introduction
This module covers basic concepts of stacking, the topologies and cable connections for stacking Dell N-Series switches, and DNOS 6.x stacking features.
Stacking Overview
Introduction
Stacking:
Stacking is a well-known networking concept of cabling devices together into a
cohesive unit that behaves as a single, larger switch.
A stack elects one switch to act as the master, which maintains the running configuration and controls CLI operations. Any stack with two or more switches also has a Standby member. A single switch can operate as a standalone stack master (the switch operates as master of a stack of one); this is the default scenario for many stack-capable switches.
Ease of Management:
Stacking increases port count by creating a virtual chassis from multiple physical
devices. With multiple switches, stacking makes management easier because the stack
can be configured as a single virtual unit through the management device. A single
switch in the stack (known as the Master switch) manages all the units using a
single IP address. The master switch enables a user to access every port in the
stack from this IP address. The IP address of the stack does not change, even if
the master changes.
End devices can be cabled with redundant connections to different stack units in
the stack. If the acting management device fails, a standby device takes over as
the new management device, and an existing line/member device will take over as
the new standby device.
The three roles that a switch can take on are Stack Master, Standby, and regular
Members.
Stack Master: The Master device is the primary management unit that is used to
configure all other members of the stack using a single IP address. The Master
owns the control plane, and the other units maintain a local copy of the forwarding
databases. A user can connect a serial cable into the console port of the master
unit to access the CLI for the stack. Connecting a cable to a non-stack master unit
will result in a "CLI - Unavailable" message, as all management must be completed
from the master unit. Also, if a virtual IP address has been configured, it can be used for remote management of the stack configuration.
Standby: The standby switch is used to manage the stack and becomes the stack
master if the original stack master fails or is powered off. The Standby needs to be
ready to take over at any time and should have all the configuration information
from the master. A standby unit is preconfigured in the stack. If the current stack
master fails, the standby unit becomes the stack master. When the failed master
resumes normal operation, it joins the stack as a member (not a master) if the new
stack master has already been elected. The stack master copies its running
configuration to the standby unit whenever it changes (subject to some restrictions
to reduce overhead). This enables the standby unit to take over the stack operation
with minimal interruption if the stack master becomes unavailable. If there was a
two-member stack, when the original stack master comes back online, it will join
back as Standby.
Member: All switches in a stack that are not designated as the master or standby switch are called stack members. If the Master device fails, the Standby device assumes the Master role, and a Member device then becomes the new Standby device. The lack of a standby unit also triggers an election among the remaining units for the standby role.
LAG will aggregate multiple links into a single logical port channel between two
switches. LAG can be combined with stacking, where links from multiple switches
in a single stack can be combined into a port channel which connects the two stack
groups.
MLAG enables a port channel from a single switch to connect with two MLAG peer
switches. The peer switches must have a peer link between them.
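As an illustration of combining LAG with stacking, the sketch below aggregates one port from each of two stack units into a single port channel. The interface and channel-group numbers are hypothetical:

```
console(config)# interface range Gi1/0/48,Gi2/0/48
console(config-if)# channel-group 1 mode active
```

Because the two member ports live on different stack units, the port channel can survive the failure of either unit.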
Topology in Stacking
Introduction
Daisy Chain:
A daisy chain topology is a linear connection between all units through stacking links. The daisy chain topology is not recommended because it does not have full redundancy. If a link or switch fails, the result can be a split stack, whereby each surviving side of the stack is online but believes the other side is down.
Ring Topology:
In a ring topology, all units in the stack are connected in a loop. It is similar to the daisy chain except that the last unit is connected back to the first unit, which provides redundancy if any stack link fails. The failure of one link in a ring does not remove any switch from the stack, because redundant connections maintain stack functionality. The ring topology is therefore more reliable than a chain and provides more stable stack operation. This topology also provides more efficient pathing, as traffic follows the least number of stack hops, and the additional cable also adds more bandwidth.
N1500 Stacking
N1500 Series switches stack using the 10G SFP+ front-panel ports. Each stack can have a maximum of four units. Use at least two ports on each switch to enable a ring topology connection.
The example in the CLI shows how to use two 10-GigabitEthernet ports for
stacking.
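The CLI example referenced above is not reproduced in this text; a sketch of the equivalent configuration, converting two front-panel 10G ports to stacking mode, might look as follows. The port numbers are illustrative, and a reload is typically required for the port personality change to take effect:

```
console(config)# stack
console(config-stack)# stack-port tengigabitethernet 1/0/1 stack
console(config-stack)# stack-port tengigabitethernet 1/0/2 stack
```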
N2000 Stacking
Stacking Ports: Two mini-SAS HG stacking ports are used. Two LEDs (LNK, ACT) indicate link status and transmit/receive activity. The details of both LEDs are given below.
LNK-LED
– Off - no link
– Solid Green - link exists
ACT-LED
– LED Off - no transmit/receive activity
– Blinking Green - transmitting/receiving
M LED: The M LED indicates the Stack Master. If the green M LED is lit, the switch is the Stack Master; if it is off, the switch is not the Stack Master.
– The ACT and LNK LEDs are on the back side of the switch and the M LED
is located either on the front panel or port side of the switch.
N3000 Stacking
Stacking Ports: Two mini-SAS HG stacking ports are used. Two LEDs (LNK, ACT) indicate link status and transmit/receive activity. The details of both LEDs are given below.
LNK-LED
– Off - no link
– Solid Green - link exists
ACT-LED
– LED Off - no transmit/receive activity
– Blinking Green - transmitting/receiving
M LED: The M LED indicates the Stack Master. If the green M LED is lit, the switch is the Stack Master; if it is off, the switch is not the Stack Master.
– The ACT and LNK LEDs are on the back side of the switch and the M LED
is located either on the front panel or port side of the switch.
Dell EMC Networking N3000 series switches can stack up to eight units as of
firmware release 6.5.1.
N4000 Stacking
Dell EMC Networking N4000 Series switches stack with other N4000 Series switches over front-panel ports that are configured for stacking. All the port types on the N4000 Series switches can be used for stacking. Dell Networking N4000 Series switches do not stack with different Dell Networking series switches or Dell PowerConnect series switches.
Configure Stacking
Introduction
This lesson describes how to create a stack in DNOS 6 and its features. The lesson also describes how to add and remove a unit from a stack, and general stacking guidelines for N-Series switches.
Creating a stack
DNOS 6 stacking features
Adding and removing a member from the stack
Managing the standby unit
Mixed stacking
General stacking guidelines
Creating a Stack
All stack member units must run the same version of firmware. Make sure to either
upgrade firmware on the new units to be added to match the firmware on the
Master, or use the automatic firmware update method that is shown in the section
DNOS 6.x Stacking Features for new members joining the stack.
For switch models that do not have dedicated stacking ports, user ports are used. User ports that act as stacking ports must have their personality changed to support stack framing.
The example that is shown in the image explains how to add a stack member to an
existing stack. Before cabling a new switch into the stack, perform the commands
in the image one by one to set up the switch ports to be stacked. Once configured,
continue to perform cabling to complete the task. If multiple new members are to be
added, complete the installation of one switch before going to the next. Complete
these steps again for each switch to be added.
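A sketch of preconfiguring a new member from the Master and then verifying might look like the following. The unit number and switch-type index (SID) shown are hypothetical; look up the correct SID for the model being added in the CLI reference:

```
console(config)# stack
console(config-stack)# member 2 2
console(config-stack)# exit
console(config)# exit
console# show switch
```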
Run the show switch command to see the current stack configuration:
console#show switch
Make sure to verify exactly which ports are being used for stacking, so that they are uncabled and rerouted last.
To verify the ports, run the show switch stack-ports command:
console#show switch stack-ports | include Stack
Locate the switch to be removed using the locate switch command:
console#locate switch
Only after rerouting the traffic through the remaining stack units, remove the stacking cables from the switch to be removed.
Removing any member of a ring topology stack does not require a reload of any member unit in the stack. If a unit in the stack fails, the Master unit removes the failed unit from the stack, and no changes or configuration are applied to the other stack members; however, dynamic protocols try to reconverge, as the topology may have changed because of the failed unit. When there are no connected ports on the failed unit, the stack remains intact without changes.
A blinking LED light can be generated on the back of each physical unit. This
blinking LED is useful when identifying physical units and ports for running
diagnostics, sniffing, mirroring ports, and other basic troubleshooting. It is also helpful
when adding, removing, replacing, or tracing cables associated with these
interfaces. Use the locate switch command to blink the blue “Locator” LED on the
switch unit you are trying to locate.
Before removing a physical unit from a stack, prepare ports on the other stack
member units to receive the cables and traffic that is redirected to them from the
member unit being removed. Consider all LAGs, VLANs, STP, ACLs, security, and so on, that need to be configured on the new ports to accept cables, establish links, and begin to forward traffic.
Do not remove or reroute stacking cables until prompted. Disconnect all other links
on the member to be removed and reroute the traffic that was going through this
unit so it now goes through the ports that were prepared on the remaining stack
unit members. Only after rerouting the traffic through the remaining stack units,
remove the stacking cables from the switch to be removed.
The show switch command shows the configuration and status of the stacking units, including the active and standby stack management units, the pre-configured model identifier, the plugged-in model identifier, the switch status, and the current code version. Both the pre-configured switch types (as set by the member command in stack mode) and the currently connected switch types, if any, are shown.
Find out which unit is currently in Standby status, by running the show switch
command.
Oper Stby is selected automatically by the Master during stack creation.
If the administrator decides to select a different unit to be Standby, it is labeled Cfg Stby.
To change the standby to a different unit, use the standby x command.
Verify the change with the show switch command.
If the Master unit fails or is taken offline, a Standby unit automatically takes over as Master. During this time, there is no more than a 50 ms interruption in unicast connectivity. Run the show switch command to find which switch is the standby
switch. The Standby Status column shows which unit is in Standby mode. There
are two standby modes: Oper Stby and Cfg Stby. Oper Stby is selected
automatically by the Master during stack creation. If the administrator decides to
select a different unit to be Standby, it is labeled Cfg Stby. Both Standby modes
work identically.
In this example, unit 2 is the stack standby for the Master unit. The standby x
command, where x is set to 3, changes the standby switch from unit #2 to unit #3.
Verify the change with the show switch command.
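Putting the steps above together, a sketch of changing the standby from unit 2 to unit 3 and verifying the change (the unit numbers follow the example in the text; the standby command is shown here in stack configuration mode):

```
console(config)# stack
console(config-stack)# standby 3
console(config-stack)# exit
console(config)# exit
console# show switch
```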
Mixed Stacking
Dell EMC Networking N3132PX-ON switches can be combined in a mixed stack with N3000 Series switches of up to 8 units. A mixed stack of N3132PX-ON and N3000 supports only 1024 active VLANs, configurable in the range 1-4093, and does not support MMRP/MVRP. Dell EMC Networking N3132PX-ON switches have an expansion slot to install an optional stacking module with two mini-SAS stack ports. Dell EMC Networking N3000 Series switches are available with two fixed mini-SAS stack ports.
Dell EMC Networking N2128PX-ON switches can be combined in a mixed stack with N2000 Series switches of up to 12 units. Dell EMC Networking N2000 and N2128PX-ON switches are available with two fixed mini-SAS stack ports.
Stack using the same platform series. For example, Dell Networking N2000 Series switches only stack with other N2000 Series switches, and N3000 Series switches only stack with other Dell N3000 Series switches. All members of a stack must run the same OS version. For specifics on the number of switches that can be stacked, methods of stacking (dedicated optional modules and cables, integrated modules (mini-SAS), user/data ports, expansion modules), speeds associated with stacking ports, cabling distance limitations, and so on, see the User Guides for the individual switching platforms. For switch models that do not have dedicated stacking ports, user ports are used. User ports that act as stacking ports must have their personality changed to support stack framing.
Module Summary
2. What are the two types of stacking topologies that can be used?
3. What are the three roles a switch can take when in a stack?
4. What feature enables a stack to continue forwarding end-user traffic when the
management unit in a stack fails?
Lab: Stacking
Refer to the student lab guide for instructions to complete the lab.