
Network Requirements for Avid NEXIS, and

MediaCentral
Eur-Ing David Shephard CEng MIET CCDP® CCNP® CCIP®
Consulting Cloud Network Engineer
MON 24 MAY 2021 – V2.8

This document is available from:


Network Requirements for NEXIS and MediaCentral.
http://avid.force.com/pkb/articles/en_US/Compatibility/en244197
Intended audience: General distribution

Abstract
This NETREQS V2 document outlines the fundamental requirements for Avid NEXIS solutions with MediaCentral. It began as a slimmed down version of NETREQS V1 for Avid ISIS and Interplay, but has grown with subsequent revisions. It is intended to provide a summary of many documents and site experience. The document sets out minimum requirements where such direction is not explicitly documented but experience from existing installations is applicable. The document content may be updated in line with product S/W and H/W releases, or when other content not directly related to a recent software release is added. Externally available URLs will be provided where possible. Some forward-looking content exists which may not occur as expected.

This NETREQS document can be shared with customers and used for SoW content.



© Avid Technology (Europe) Ltd. This document is the property of Avid. The information contained in this document has been provided to
the intended recipients for evaluation purposes only. The information contained in this document should not be discussed with any other party
or persons without the express prior written permission of Avid. If the intended recipient does not accept these terms, this document and any
copies should be returned to the nearest Avid office. If you are not the intended recipient, employee or agent you are hereby notified that any
dissemination or copying of this document is strictly prohibited. If you have received this document in error, please return it to the nearest
Avid Technology office ( www.avid.com ).

Version 2 is primarily a reformatting exercise in a later version of Word (from the Word 2003
original) and also features some significant reorganization of content, with some legacy
content moving to appendices.

Content removal
This new Version 2.x document dispenses with the majority of the V1.x document content
related to ISIS and Interplay; that content remains available, but NETREQS V1.x will not be
updated beyond V1.23. Some content will be migrated from the old V1.x document, but the
document size should reduce by at least 50%.

Over time some REALLY useful V1.x content may be re-incorporated.

Branding Changes.
This document was first issued as V1.0 on 04 July 2007; since that time the products have
evolved significantly in nature and name, and the document has grown with them.
While many of the product names have changed, in some cases the product has not, or there
has been a logical evolution, and in some cases a product revolution. While the headline
document name may change, the old content will remain where possible and applicable, or be
marked as Legacy content.

Current Document HEADLINE Name:


Network Requirements for Avid NEXIS, and MediaCentral 2.x (MON YYYY)

Previous document HEADLINE names have been:


Network Requirements for Avid ISIS, Avid NEXIS, and Interplay PAM and
MAM V1.18
(JUN2016)
Network Requirements for ISIS and Interplay PAM and MAM V1.11 (MAR 2013)
Network Requirements for ISIS and Interplay Production V1.7 (JUL 2010)
Network Requirements for Unity ISIS and Interplay V1.0 (JUL 2007)



Table of Contents

Table of Contents
Abstract ..................................................................................................................................... 1
Content removal ....................................................................................................................... 2
Branding Changes.................................................................................................................... 2
Additional Contributors .................................................................................................................. 7
Recent Revision history .................................................................................................................. 7
RECENT UPDATE 2.8 .................................................................................................................. 8
1.0 AVID NEXIS REQUIREMENTS .................................................................................... 9
1.0.1 NEXIS and Latency ............................................................................................................... 9
1.0.2 Switch Suitability status ....................................................................................................... 12
1.0.3 Zones - an evolved definition............................................................................................... 12
1.1 Qualified Switches for Avid NEXIS ........................................................................................ 13
1.1.1 Issues to be aware of with Dell S4100 Series Switches ......................................... 15
1.1.2 Dell S4100 Series Switches Model Variants ....................................................................... 16
1.2 Approved Switches for Avid NEXIS ....................................................................................... 16
1.3 Architecturally Capable Network switches ............................................................................ 17
1.3.0 Known Cisco Issues impacting Avid Storage solutions....................................................... 20
1.3.1 Partially Tested Switches ..................................................................................................... 22
1.4 LEGACY Switches for Avid NEXIS ....................................................................................... 25
1.4.4 Using Force10 S60 with NEXIS .......................................................................................... 25
1.4.3 Using Force10 S25N with NEXIS ....................................................................................... 25
1.5 Transceivers and cables ............................................................................................................ 26
1.5.1 40G QSA or Breakout cables NEXIS .................................................................................. 26
1.5.2 Breakout cables NEXIS and NEXUS 93180 ....................................................................... 26
1.5.3 Breakout cables NEXIS and DELL S4048 .......................................................................... 27
1.5.4 Avid supplied 40G Transceivers and NEXIS ...................................................................... 27
1.5.5 3rd Party Transceivers and Switch Vendors ......................................................................... 28
1.5.6 Gigabit Media Converters .................................................................................................... 29
1.6 Switch combinations for E5 NEXIS and Big 40G deployments............................................ 29
1.6.1 1U Switch Families .............................................................................................................. 30
1.6.2 Chassis Based Switch Families ............................................................................................ 33
1.7 Cisco Catalyst Switches ............................................................................................................ 35
1.7.1 Catalyst 9000 - A series ...................................................................................................... 35
1.7.2 Catalyst 9000 - B series ....................................................................................................... 36
1.7.3 Catalyst 9000 - B series – USING SOFTMAX ONLY ....................................................... 38
1.8 Network Interface Cards .......................................................................................................... 39
1.8.1 Single-mode vs. Multi-mode................................................................................................ 39
1.8.2 USB-C NIC Information ...................................................................................................... 39
1.8.3 NICs in a VM environment – applies to bare metal too....................................................... 40
1.8.4 10GBASET Transceivers..................................................................................................... 42
1.8.5 I350-T2 NIC Eol – Replaced by I350T2V2......................................................................... 44
1.9 Airspeed 5500 (3rd GENERATION) network connection ..................................................... 44
1.10 NAT/PAT Incompatibility...................................................................................................... 45
1.10.1 Kubernetes – the “core” of the “network” problem with MCCUX ................................... 46
1.10.2 MCCUX “Kubernetes” – NO LONGER USING MULTICAST ...................................... 47



2.0 NETWORK DESIGNS FOR NEXIS ............................................................................. 48
2.1 Reference designs ...................................................................................................................... 48
2.2 Block based designs................................................................................................................... 49
2.2.1 The Traditional/Legacy way ................................................................................................ 50
2.2.2 MLAG/vPC/VSS Single Controller ..................................................................................... 51
2.2.3 MLAG/vPC/VSS DUAL CONTROLLER .......................................................................... 52
2.2.4 STACKED SWITCH with DUAL CONTROLLER ........................................................... 53
2.3 CISCO - Custom/Deployed designs ......................................................................................... 53
2.3.1 Cisco Nexus 5600 Based...................................................................................................... 54
2.3.2 Cisco Nexus 9500 Cases ...................................................................................................... 56
2.3.3 Cisco Nexus 93000 Cases .................................................................................................... 58
2.4 JUNIPER - Custom/Deployed designs .................................................................................... 59
2.4.1 QFX 10008 and QFX5100 ................................................................................................... 59
2.4.2 Juniper Buffer limitation with Trident 2 Merchant Silicon.................................................. 60
2.4.3 QFX 10008 and QFX5110 ................................................................................................... 62
2.5 ARISTA - Custom/Deployed designs ...................................................................................... 62
2.5.1 ARISTA 7500R and 7280R ................................................................................................. 62
2.5.2 ARISTA - PROVEN DEPLOYMENTS WITH NEXIS...................................................... 63
2.6 SPINE/LEAF - Custom/Deployed designs .............................................................................. 64
3.0 VIRTUAL MACHINES AND BLADE SERVERS ...................................................... 65
3.1 Cisco UCS .................................................................................................................................. 65
3.1.1 UCS with FI 6248 ................................................................................................................ 65
3.1.2 UCS with FI 6332 ................................................................................................................ 67
4.0 WIDE AREA AND DATA CENTER NETWORKING .............................................. 69
4.1 MPLS ......................................................................................................................................... 69
4.2 Wavelength-division multiplexing ........................................................................................... 69
4.3 Software Defined Networking –SDN ....................................................................................... 70
4.4 VXLAN ...................................................................................................................................... 70
4.4.1 VXLAN Overheads.............................................................................................................. 72
4.4.2 VXLAN questions................................................................................................................ 72
4.4.4 VXLAN and Multicast ......................................................................................................... 73
4.5 Spine/Leaf Networks ................................................................................................................. 73
4.6 Adapter Teaming ...................................................................................................................... 73
4.6.1 Teaming with Intel NICs...................................................................................................... 74
4.6.2 NIC Teaming Windows 2012 .............................................................................................. 75
4.6.3 NIC Teaming Windows 2016 .............................................................................................. 75
4.6.4 Linux Bonding and Teaming ............................................................................................... 75
4.6.5 TEAMING WITH FASTSERVE INGEST ......................................................................... 77
4.7 Cisco Transceiver Enterprise-Class versions ......................................................................... 78
4.8 Jumbo Frames and Avid applications..................................................................................... 79
4.9 NO specific VPN requirements for use with Thin Client applications................................. 79
5.0 MEDIA CENTRAL ......................................................................................................... 81
5.1 Media Central UX Bandwidth requirements ......................................................................... 81
5.2 Media Central Cloud UX Server Connectivity requirements............................................... 82
7.0 CUSTOM TESTING ....................................................................................................... 83
7.1 Testing Teamed Adapter with Media Central Cloud UX Server ......................................... 83
7.1.1 Primary Objective ................................................................................................................ 83
7.1.2 Conclusion Extract ............................................................................................................... 83
7.2 Testing Broadcom 57412 Adapter Media Central Cloud UX Server ................................... 84
7.2.1 Secondary Objective ............................................................................................................ 84
7.2.2 NIC INFORMATION .......................................................................................................... 84
99.0 LEGACY ISIS and Interplay Information ................................................................. 86



Appendix A - NEXIS Documentation - Landing pages ..................................................... 87
Appendix B - Switch configuration tips, good practices and Lessons from the field. ..... 88
B.1 Document your configs with DESCRIPTIONS ..................................................................... 88
B.1.2 Good documentation and config practices .......................................................................... 88
B.2 Setting Spanning tree to Rapid Spanning Tree ..................................................................... 90
B2.1 Spanning tree cost ................................................................................................................ 90
B.2.2 Spanning Cost type.............................................................................................................. 90
B.3 SET primary switch as STP master root primary ................................................................ 91
B.4. SET secondary switch as STP root secondary ...................................................................... 92
B.5 Deploy BPDU guard on all ports that use PORTFAST ........................................................ 92
B5.1 Use ROOT GUARD on any interfaces that cascade to other switches ................................ 93
B5.2 Using spanning-tree port type edge with Cisco Nexus and AVID NEXIS .......................... 94
B.6 Use the no shutdown command on all VLANs ...................................................................... 95
B.7 Use the shutdown command on all unused interfaces ........................................................... 95
B.8 Enable secret ............................................................................................................................. 95
B.9 Password encryption ................................................................................................................ 95
B.10 Enable telnet ........................................................................................................................... 96
B.11 Enable synchronous logging .................................................................................................. 96
B.12 Get Putty 0.06 ......................................................................................................................... 96
B.13 Logging .................................................................................................................................... 96
B.14 Using a Syslog Server ............................................................................................................. 96
B14.1 Freeware logging servers.................................................................................................... 97
B.15 Timestamps ............................................................................................................................. 97
B.16 Setting the Time ...................................................................................................................... 98
B.16.1 Command for Cisco NEXUS ............................................................................................ 99
B.17 Show tech support for CATALYST ..................................................................................... 99
B17.1 What is listed? .................................................................................................................. 100
B17.2 Show tech-support - CAVEATS ...................................................................................... 100
B17.3 How long does it take? ..................................................................................................... 100
B17.4 useful show commands .................................................................................................... 101
B17.5 TFTP tools ........................................................................................................................ 101
B.18 Handover Practices .............................................................................................................. 102
B.19 Cisco Catalyst 49XX setting of the CONFIG register ....................................................... 102
B.20 Multicast Propagation Does Not Work in the same VLAN in Catalyst and NEXUS
Switches.......................................................................................................................................... 103
B.20.1 Multicast Propagation - an Avid perspective .................................................................. 106
B.20.2 – Nexus switches Multicast Propagation ......................................................................... 107
Licensing Requirements for PIM: ............................................................................................... 112
B.20.3 – UCS Blade Servers Multicast Propagation................................................................... 116
B.20.4 Some other useful Multicast URL & Information ........................................................... 116
B.21 LoopGuard and FHRP ........................................................................................................ 118
B.22 Speed setting on Switch ports .............................................................................................. 118
B.23 Duplicate IP Address Error Message Troubleshoot – Later Cisco IOS .......................... 121
B.24 Using WINMERGE to compare config files (FIELD-TIP) .............................................. 122
B.25 Serial connection alternatives to DB9 using USB (FIELD-TIP) ...................................... 123
B25.1. Use the management network on NEXUS switches & remote connection ..................... 124
B.26 Ethernet connection alternatives using USB (FIELD-TIP) .............................................. 125
B.27 Ping testing Cisco links (FIELD-TIP) ................................................................................ 125
B.28 Service Timestamps and Time setting in F10 S4810 & S60.............................................. 125
B.29 Service Timestamps and Time setting in Dell N3024/48 ................................................... 126
B.30 Service Timestamps and Time setting in DELL S4048 ..................................................... 127
B.31 How to find IP address of a MAC address ......................................................................... 128
B.32 Minimum Length Of A Network Cable? ........................................................................... 129
B.33 Cisco Nexus vPC Best Practices .......................................................................................... 129
B.33.1 PATH diversity NEXUS 93000 series switches ............................................................. 131



B.34 Cisco Nexus 93000 series Port Breakout & QSA ............................................................... 133
B.34.1 For Nexus 93180LC-EX ................................................................................................. 133
B.34.2 For Nexus 93108TC-EX, 93180YC and 9336C-FX2 ..................................................... 135
B.34.3 Optical breakout useful information ................................................................................ 137
B.34.4 TWINAX breakout useful information ........................................................................... 142
B.34.5 QSA adapter .................................................................................................................... 143
B.35 What is "=" in Cisco part number?.................................................................................... 143
B.36 LINUX TEAM CONFIGURATION FOR TEAMED ADAPTERS ................................ 144
B.36.1 TEXT of COMMANDS for SFT .................................................................................... 146
B.36.2 TEXT of COMMANDS for LACP ................................................................................. 147
B.36.3 SFT TEAMING CONCLUSIONS .................................................................................. 148
B.36.4 LACP TEAMING CONCLUSIONS .............................................................................. 148
B.36.5 DISCONNECT TESTS CONCLUSIONS ...................................................................... 149
B.36.6 DEPLOYMENT CONSIDERATIONS with MEDIA CENTRAL CLOUD UX ......... 149
B.36.7 CHECKING/DEBUGGING LACP BOND CONNECTION STATUS in LINUX........ 149
B.36.8 TEAMD or BONDING? ................................................................................................. 157
B.37 Nexus Watch Command (Field-Tip)................................................................................... 158
B.38 Automating Backing Up Cisco Config Files....................................................................... 158
B.38.1CATALYST ..................................................................................................................... 158
B.38.2 NEXUS............................................................................................................................ 159
B38.3 TFTP from NEXUS to tftp server .................................................................................... 159
B.39 NEXUS 93xxx USEFUL COMMANDS ............................................................................. 160
B.39.1 SHOW COMMANDS NEXUS 93xxx USEFUL COMMANDS FROM SHOW TECH
.................................................................................................................................................... 160
B.39.2 Show interface counters - Errors only ............................................................................. 162
B.39.3 NEXUS copy run start SHORTCUT............................................................................... 163
B.39.4 NEXUS other useful alias SHORTCUTS ....................................................................... 163
B.39.5 USEFUL NEXUS COMMANDS FOR MULTICAST DEBUGGING .......................... 165
B.40 NEXUS FX2 Models and ISIS STORAGE LIMITATIONS ..................................... 168
B.41 Using DHCP with Avid Applications.................................................................................. 169
B.42 IP address allocation with NEXIS and MediaCentral ...................................................... 170
B.43 Navigating Cisco NXOS versions ........................................................................................ 171
B.44 FIELD TIP: large file sending............................................................................................. 172
B.45 FIELD TIP: Upgrading Nexus 9000 switch firmware ...................................................... 172
B.46 Multicast propagation on DELL switches.......................................................................... 175
B.46.1 DELL 0S10 IGMP SNOOPING ..................................................................................... 177
B.47 Multicast propagation on Arista switches .......................................................................... 177
B.48 Useful cabling information .................................................................................................. 178
B.49 LACP for NEXIS clients – is it supported? ....................................................................... 178
B.50 Flow control with AVID Nexis storage – is it needed? ...................................................... 180
B.50.1 WHAT TRIGGERS FLOW CONTROL ........................................................................ 182
B.51 Flow control in Cisco NEXUS switches with AVID Nexis storage................................... 183
B.52 Flow control in Dell S4048 switches with AVID Nexis storage ........................................ 187
B.53 Mellanox ConnectX-3 adapter firmware version in NEXIS ............................................ 188
B.54 DELL S4100 OS10 vs. OS9 LACP and VLT commands .................................................. 188
B.55 CISCO SWITCH HARDWARE BEACONS .................................................................... 190
B.55.1 NEXUS 9000................................................................................................................... 190
B.55.2 CATALYST 9000 ........................................................................................................... 191
B.56 DELL N3224 FIRST SETUP............................................................................................... 191
B.57 DELL N3024 N2024 DEFAULT INTERFACE COMMANDS ....................................... 193
B.58 DELL N3024 N2024 PASSWORD RECOVERY .............................................................. 194
Appendix C - Power Connections....................................................................................... 196
Appendix D – NEXIS with MLAG Connections sequence .............................................. 197
D.1 C4500X VSS and dual controllers in NEXIS ....................................................................... 197



D.1.1 SDA................................................................................................................................... 197
D.1.2 ENGINE 1 ......................................................................................................................... 200
D.1.3 ENGINE 2 ......................................................................................................................... 201
D.1.4 ENGINE 3 ......................................................................................................................... 203
D.1.5 ENGINE 4 ......................................................................................................................... 205
D.1.6 ENGINE 5 ......................................................................................................................... 207
Appendix E – Useful articles ............................................................................................... 210
E.1 Cabling, Optics and Transceivers ........................................................................................ 210
Appendix F – Which Cisco Operating system to use? ...................................................... 211
F.1.2 Which version of (Cisco) IOS is supported? .................................................................... 211
Appendix Z Full Revision history....................................................................................... 218
Revision history .......................................................................................................................... 218

Table of Figures

Figure 1 – Reference design for NEXIS with DELL ............................................................... 49


Figure 2 – Traditional Block design for NEXIS ...................................................................... 50
Figure 3 – MLAG/VSS Block design for NEXIS ................................................................... 51
Figure 4 – MLAG/VSS Block design for NEXIS with dual controllers ................................. 52
Figure 5 – Stacked Block design for NEXIS with dual controllers ......................................... 53
Figure 6 – Custom design for NEXIS with Cisco Nexus 5600 ............................................... 54
Figure 7 – Custom design for NEXIS and ISIS 7500 with Cisco Nexus 5600 ....................... 55
Figure 8 – Custom design for NEXIS with Cisco Nexus 9500/93000 .................................... 56
Figure 9 – Custom design for NEXIS with Cisco Nexus 9336C-FX2 .................................... 58
Figure 10 – Custom design for NEXIS E5 with Juniper ......................................................... 59

Additional Contributors

Anthony Tanner

Recent Revision history


Note: the version number of this document DOES NOT directly correlate to any ISIS or
Interplay Production version.

For Full Revision History see Appendix Z at end of this document

Version, Name & Comment, and Date:

V1.0 - 04 July 2007: Initial Issue - David Shephard



V2.7 - 27 JAN 2021 (191 pages)
ADD 1.1.1 Issues to be aware of with Dell S4100 Series Switches
UPDATE 1.10.1 Kubernetes – the “core” of the “network” problem with MCCUX
ADD 1.7.3 Catalyst 9000 - B series – USING SOFTMAX ONLY
UPDATE 1.8.3 NICs in a VM environment – applies to bare metal too
ADD 4.6.4.1 LINUX TEAMING
UPDATE B.20.2.2 – Field Knowledge NEXUS & Multicast PART 2
UPDATE B.25 Serial connection alternatives to DB9 using USB (FIELD-TIP)
ADD B25.1. Use the management network on NEXUS switches & remote connection
ADD B.36.6 DEPLOYMENT CONSIDERATIONS with MEDIA CENTRAL CLOUD UX
ADD B.46.1 DELL OS10 IGMP SNOOPING
ADD B.50 Flow control with AVID Nexis storage – is it needed?
ADD B.51 Flow control in Cisco NEXUS switches with AVID Nexis storage
ADD B.52 Flow control in Dell S4048 switches with AVID Nexis storage
ADD B.53 Mellanox ConnectX-3 adapter firmware version in NEXIS
ADD B.54 DELL S4100 OS10 vs. OS9 LACP and VLT commands

V2.8 - 24 MAY 2021 (222 pages)
ADD 1.1.2 Dell S4100 Series Switches Model Variants
UPDATE 1.3.0 Known Cisco Issues impacting Avid Storage solutions
ADD 1.3.1.5 Cisco Nexus 93180YC-FX3
ADD 1.5.6 Gigabit Media Converters
UPDATE 1.8.3 NICs in a VM environment – applies to bare metal too
UPDATE 1.6.1 1U Switch Families
UPDATE 1.10.1 Kubernetes – the “core” of the “network” problem with MCCUX (added paragraph at end)
ADD 1.10.2 MCCUX “Kubernetes” – NO LONGER USING MULTICAST
MINOR UPDATE B.20 Multicast Propagation Does Not Work in the same VLAN in Catalyst and NEXUS Switches
ADD 2.5.2 ARISTA - PROVEN DEPLOYMENTS WITH NEXIS
UPDATE B.20.2.1 – Field Knowledge NEXUS & Multicast PART 1
UPDATE B.36.2 TEXT of COMMANDS for LACP
SIMILAR UPDATE B.36.4 LACP TEAMING CONCLUSIONS
SIMILAR UPDATE B.49 LACP for NEXIS clients – is it supported?
UPDATE B.36 (.0) LINUX TEAM CONFIGURATION FOR TEAMED ADAPTERS
ADD B.36.7 CHECKING/DEBUGGING LACP BOND CONNECTION STATUS in LINUX
ADD B.55 CISCO SWITCH HARDWARE BEACONS
ADD B.56 DELL N3224 FIRST SETUP
ADD B.57 DELL N3024 N2024 DEFAULT INTERFACE COMMANDS
ADD B.58 DELL N3024 N2024 PASSWORD RECOVERY

RECENT UPDATE 2.8



1.0 AVID NEXIS REQUIREMENTS
Much of the ISIS information has a direct relevance to NEXIS; however, there will be
differences as the product evolves.

Avid Knowledge Base Avid NEXIS v7 Documentation


http://avid.force.com/pkb/articles/en_US/user_guide/Avid-NEXIS-v7-Documentation

Avid Knowledge Base - Avid NEXIS v6 Documentation


Documentation and Help files for the Avid NEXIS v6.x releases
http://avid.force.com/pkb/articles/en_US/User_Guide/Avid-NEXIS-v6-Documentation

Avid® NEXIS™ Network and Switch Guide Version 6.0


http://resources.avid.com/SupportFiles/attach/AvidNEXIS/AvidNEXIS_Network_Switch_v6.pdf

1.0.1 NEXIS and Latency


While NEXIS is much more tolerant of dropped packets, using TCP with FRR, and with
PATHDIAG can reach 100+ MB/s while dropping packets, testing in SEP 2017 with a data
center approx. 30 miles/50 km from the corporate offices showed that the corporate-office
NEXIS client performed significantly worse with default settings than a similarly configured
client connecting in the data center. The RTT latency, as measured by FPING, from client to
NEXIS SDA increased from 0.2 ms locally to 1.0 ms remotely (average over 60 seconds).
The additional 0.8 ms (average) had a big impact, and the link was not busy: it was a recently
installed 100 Gbps link operating over a pure dark fibre wavelength, with the broadcaster
operating its own DWDM over a rented fibre path.

*Given that the speed of light in a vacuum, 'c', is exactly 299,792,458 metres per second, a figure of
1 millisecond per 300 km might appear an accurate estimate for the purpose of latency calculation
over distance. However, propagation speed in media is significantly lower than c (for glass, roughly
1/2 to 2/3 of light speed in a vacuum, depending on the refractive index of the media), so a figure of
1 millisecond per 200 km is more appropriate.

Hence a round trip time (RTT) of 1 ms per 100 km is the working figure applied to longer distances,
but this does not account for delays introduced by network equipment such as optical/electrical
translation and network switches.
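
As a worked sanity check against the SEP 2017 measurements above (the split between fibre delay and equipment delay is an inference, not a measured figure):

    one-way distance:       50 km
    propagation in glass:   ~200,000 km/s, i.e. 1 ms per 200 km one way
    one-way fibre delay:    50 / 200,000 s = 0.25 ms
    fibre-only RTT:         2 x 0.25 ms = 0.5 ms

The remaining ~0.3 ms of the measured 0.8 ms RTT increase is plausibly accounted for by switch hops and optical/electrical translation along the path.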

Jitter, or the variation in latency, is also a factor, but tends to have less of an impact than
latency: 1 ms of jitter added to 1 ms of latency = 2 ms of latency, and the performance of the
client will suffer. However, the usability of the application is dependent upon the nature of
the application; for example, an Interplay Production Browse client being used to review
material will be affected much less by latency than a Media Composer client actively editing.
For a 1G client it was necessary to change the autotuninglevel from the setting of
DISABLED (the default after the NEXIS client install):
C:\Windows\system32> netsh int tcp set global autotuninglevel=
disabled | highlyrestricted | restricted | normal

For Windows 7/10/2008/2012 the current setting can be established in a Command window
using:



C:\Windows\system32>netsh int tcp sh global

Using a default PATHDIAG UNLIMITED W/R TEST:

With RWIN (autotuninglevel) set to disabled:  88/56 MB/s W/R
Changed to highlyrestricted, BW improved to:  92/88 MB/s W/R
Changed to restricted, BW improved to:        92/110 MB/s W/R
Changed to normal, BW improved to:            94/110 MB/s W/R
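
A minimal check-and-set sequence is sketched below; the output line shown is typical of Windows 10, but the exact wording varies by Windows version:

C:\Windows\system32> netsh int tcp show global
    ...
    Receive Window Auto-Tuning Level    : disabled
    ...
C:\Windows\system32> netsh int tcp set global autotuninglevel=normal
Ok.
C:\Windows\system32> netsh int tcp show global
    ...
    Receive Window Auto-Tuning Level    : normal

Re-run the PATHDIAG test after each change to confirm the effect on W/R bandwidth.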

For a Mac client the setting is likely to be handled more intelligently by the operating
system.

For a 10G Windows client the impact was more significant, and this is currently under
investigation.

FPING was used to measure latency as it is accurate to 0.1 ms; the extra
granularity is required because the default Windows ping is only accurate
to 1 ms.
http://www.kwakkelflap.com/fping.html (URL MAY NO LONGER WORK IN 2018/9)

ALTERNATIVE URLs

https://github.com/dexit/fping-windows
(works MARCH 2020)
http://blog.perceptus.ca/2017/11/10/fping-windows-download-the-last-version/
(works MARCH 2020)

and (*IX ONLY):


https://fping.org/

I have also added a ZIP file with FPING-300 to my Vanilla Configs V0.4
(APR 2019).

A new PING utility found in 2020 is hrPing. I think this is a very
comprehensive FREEWARE utility, available from
http://www.cfos.de/en/ping/ping.htm
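
For illustration, a 60-sample average RTT measurement with the *IX fping (flags per the fping man page; the target address and output values below are examples only):

fping -q -c 60 -p 1000 192.168.10.20
192.168.10.20 : xmt/rcv/%loss = 60/60/0%, min/avg/max = 0.18/0.21/0.42

The avg field gives the average RTT in milliseconds to 0.01 ms resolution, which is ample granularity for the measurements discussed in this section.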

1.0.1.1 Latency measurements SEP 2017


Based on the SEP 2017 testing: for 1G clients, distances up to 10 km/6 miles are generally
irrelevant, because that adds (in theory) just 0.1 ms of incremental RTT. Even up to
100 km/60 miles should be fine, because that adds (in theory) just 1.0 ms of incremental
RTT. But NEXIS with TCP is more sensitive than ISIS with UDP, so I would look at an
upper limit of 2 ms before some tuning may be required. For 10G clients I would say, at the
time of writing (JUL 2018), that latency above 1 ms begins to make its presence felt.



Insufficient testing has been done with NLEs at various RTT latencies. However, based on
testing for a non-real-time workflow with an RTT of 12 ms, a 10G client was able to achieve
a maximum of 28 MB/s WRITE and 78 MB/s READ, but this test only had a single NEXIS
engine so might be misleading. The end solution used Nx1G in the client device and multiple
NEXIS Engines to achieve a higher data rate.

Generally, dark fibre vs. DWDM is largely irrelevant; much depends on whether the ISP uses
true optical multiplexers or electro/optical transponders, and both types may be used at
different parts of the network. Optical multiplexers should operate at the speed of light, but
electro/optical transponders will add latency, typically 15-100 us according to the article
below. Also consider that some ISP services promoted as dark fibre are not, so check the fine
print of any SLA and product offering. Hence, we should budget for 0.1 ms (combining both
directions) at each termination point.

https://www.telcocloudbridge.com/blog/what-your-dwdm-vendor-did-not-tell-you-about-latency-for-data-center-interconnect/

1.0.1.2 Latency measurements NOV 2018


Based on the NOV 2018 testing, using a multisite and multi-hop MPLS MAN with 10G
presentation and NEXIS 2018.9, the results are similar to those identified with ISIS 7500
testing in 2012.

The table below offers some guidance but is not an “irrevocable source”; it has been derived
from different testing engagements with customer-specific workflows.

The real-time nature of editing high bandwidth video in a collaborative environment means
that tolerance for delay and jitter is small. The table below shows that 5 ms RTT is the
maximum latency which should be considered acceptable.

Value: 0-1ms
  Behaviour: System performs on the test network as if locally attached, for both 1G and 10G clients.

Value: 2-3ms
  Behaviour: Minimally noticeable degradation in scrubbing performance; slight delay in play function (minimal) for 1G clients, but more noticeable for 10G clients.
  Comments: RECOMMENDED Maximum Jitter and Latency, combined (UNLOADED). Suitable for codecs up to 120 Mbit/s. Unsuitable for non-real-time high bandwidth workflows.

Value: 4-5ms
  Behaviour: Minimally noticeable degradation in scrubbing performance; slight delay in play function (minimal).
  Comments: RECOMMENDED Maximum Jitter and Latency, combined (UNLOADED). Suitable for codecs up to 50 Mbit/s. Unsuitable for non-real-time high bandwidth workflows.

Value: 10ms
  Behaviour: Particularly noticeable delay in scrubbing; 1 s delay from pressing play to material playing; may not be suitable for editors.
  Comments: Maybe useable for low bandwidth workflows. Unsuitable for non-real-time high bandwidth workflows.

Value: 20ms
  Behaviour: NOT TESTED. Comments: UNSUITABLE.

Value: 50ms
  Behaviour: NOT TESTED. Comments: NOT USEABLE.



NOTE: subject to change depending on NEXIS client enhancements in
subsequent versions.

Based on the tests performed to determine maximum fibre optic distances, up to 5 ms is an
acceptable latency, depending on the workflow and codec; this translates to a connection
distance of approx. 1000-1500 km* where it would be acceptable to the operator.

*Given that the speed of light in a vacuum, 'c', is exactly 299,792,458 metres per second, a figure of
1 millisecond per 300 km might appear an accurate estimate for the purpose of latency calculation
over distance. However, propagation speed in media is significantly lower than c (for glass, roughly
1/2 to 2/3 of light speed in a vacuum, depending on the refractive index of the media), so a figure of
1 millisecond per 200 km is more appropriate.

Hence a round trip time (RTT) of 1 ms per 100 km is the working figure applied to longer distances,
but this does not account for delays introduced by network equipment such as optical/electrical
translation and network switches.

Jitter, or the variation in latency, is also a factor, but tends to have less of an impact than
latency: 5 ms of jitter added to 5 ms of latency = 10 ms of latency, and the performance of the
client will suffer. However, the usability of the application is dependent upon the nature of
the application; for example, an Interplay Production Browse client being used to review
material will be affected much less by latency than a Media Composer client actively editing.

1.0.2 Switch Suitability status


Historically Avid has had two levels of switch suitability for project deployment: Qualified
and Approved. As described below, in Q4 2017 a new category was added.

Qualified
We sell it or will sell it soon, and it should be tested by Avid with every Major S/W version
Dell S4048, Dell N30xx, Cisco C4500X

Approved
It was tested at a point in time, probably as customer funded testing
It was subjected to and passed vendor performed simulation testing
Configuration files available on request from “similar projects”

Architecturally capable
It was subjected to and passed vendor performed simulation testing
http://avid.force.com/pkb/articles/en_US/Compatibility/Avid-NEXIS-Network-Switch-
Infrastructure

1.0.3 Zones - an evolved definition


Historically Avid has used the concept of Zones to describe the connectivity model, and this
is discussed in more detail in NETREQS V1.x for ISIS. New concepts and methods such as
vPC and MLAG, and advances in technology, have blurred the lines versus the original
definitions. With NEXIS, the concept of Zones is less apparent, but the terminology is
equally applicable, and suitable edge buffering is still major factor in achieving and



maintaining successful workflows. In the definitions below, a switch means one that is
explicitly Qualified, Approved, or considered Architecturally Capable.

Zone       Description

Zone 1     Does not apply to NEXIS.
Zone 2     A layer 2 connection on the same switch that connects with NEXIS, with a known QoS.
Zone 2.1   An indirect layer 2 connection, on a switch directly subordinate to the switch that connects with NEXIS, with a known QoS.
Zone 3     A layer 3 connection, with one routed hop, on the same switch that connects with NEXIS, with a known QoS.
Zone 3.1   An indirect layer 3 connection, with one routed hop, on a switch directly subordinate to the switch that connects with NEXIS, with a known QoS.
Zone 3.5   An indirect connection via multiple layer 3 hops in a tightly controlled network diameter, on a switch indirectly linked to the switch that connects with NEXIS, with a known QoS.
Zone 4     An indirect connection via multiple layer 3 hops with an uncontrolled network diameter, with an unknown QoS (which does not mean insufficient QoS).

1.1 Qualified Switches for Avid NEXIS


Avid has tested or reviewed the following switches for use in an Avid NEXIS environment.
1GbE and 10GbE Switches

The following switches work with the Avid NEXIS | PRO, Avid NEXIS | E2, Avid NEXIS |
E4, and the System Director Appliance. The switches are listed in no particular order.

Cisco Nexus 93180YC-FX, 93108TC-FX, 9348GC-FXP (EX versions also acceptable) - added MAY 2018

Note: The Cisco N93180YC-EX has a distance limitation for 25G interfaces
(compared to published standards), which is addressed by the newer 93180YC-FX,
because the EX uses a pre-standard Forward Error Correction implementation (FC-FEC).
See:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_Cisco_ACI_and_Forward_Error_Correction.html#id_50152
And
https://supportforums.cisco.com/t5/application-centric/25g-ethernet-consortium-and-or-ieee-standard/td-p/3182146



And
https://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibility/matrix/25GE_Tx_Matrix.html
**** LINK EXPIRED AUG 2018 in favour of TMGMATRIX.CISCO.COM

For 25 Gbps over 5 m copper (corrected from “2M” in an earlier revision) and for
25 Gbps multimode or single mode fibre, a forward error correction mechanism is
required because of the high-speed transport of the packets over the cables.

The published IEEE standard chose RS-FEC (“Reed Solomon” or “Clause 91”) as
the Forward Error Correction mechanism, and this is what is used in the FX series
93000 Nexus switches. The ASIC design in the EX series uses FC-FEC
(“FireCode” or “BASE-R” or “Clause 74”) because it was designed before the
final IEEE standards were defined.

The ASIC of the 93180YC-EX was designed before the 25G standard was
completed; the ASIC of the 93180YC-FX was designed after the 25G standard.

When the ASICs of the EX Nexus were produced, the choice of RS-FEC was not
yet known, so there is no RS-FEC in the ASIC, only the less powerful FC-FEC.
This allows the EX Nexus up to 3 metres over copper and up to 10 metres with
active optical cable; SFP28-25G-SR (though not LR) transceivers are not
supported on Nexus 9300-EX switches. When the ASICs of the FX Nexus were
produced, RS-FEC was a standard and is therefore included in the ASIC, so the
FX Nexus can handle all distances.

So, if you need more than 3 metres on the 93180YC-EX, use AOC cables: as
shown in the matrix, AOC supports up to 10 metres on the EX.

This 25G distance limitation does not apply to the N9K-X97160-EX-LC line card
used in the NEXUS 9500 chassis.
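
Where a 25G link on an EX switch must be brought up in a specific FEC mode, it can be pinned per interface in NX-OS. A minimal sketch is below; the interface is an example, the exact FEC keyword differs by NX-OS release (FC-FEC is Clause 74, RS-FEC is Clause 91), and both ends of the link must be set to the same mode:

switch# configure terminal
switch(config)# interface Ethernet1/10
switch(config-if)# speed 25000
switch(config-if)# fec cl74
! cl74 = FC-FEC for EX hardware; cl91/RS-FEC for FX hardware
switch(config-if)# no shutdown

Use show interface ethernet 1/10 capabilities to confirm which FEC modes the port hardware supports on your release.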

Cisco Nexus 93108TC (see above for N93180YC)

Note: NXOS LICENSING WITHOUT PIM NEXUS 9000


When using the simple no ip igmp snooping command described in
section B.20.2.2. NX-OS Essential is sufficient….. NX-OS Advantage
(formerly called Enterprise) is not required.
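
For reference, disabling IGMP snooping in this way is a single global configuration line; a minimal sketch (save the config afterwards so the setting survives a reload):

switch# configure terminal
switch(config)# no ip igmp snooping
switch(config)# end
switch# copy running-config startup-config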



Dell Networking N2024
Dell Networking N3024
Dell Networking S4048-ON

Note: as the NETGEAR XS712T is end of sale, the XS716T is acceptable;
it is the same architecture and has been tested by AVID.

However, in 2020 some customers experienced issues with this very
small buffer switch (especially the XS728T) under a heavy 1G workflow.
The nature of very small buffers means it is suitable for 10G clients and 10G
storage servers only, and less suited to 1G clients unless the workflow is
very lightweight.

Cisco Catalyst 4500-X (END OF SALE 31OCT20 announced 31OCT19)


https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-x-series-
switches/eos-eol-notice-c51-743098.html

Cisco Catalyst 4948E (END OF LIFE OCT 2017) effective replacement for AVID is Nexus
9348GC-FXP and not Catalyst 3850 as in URL below:

https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4900-series-
switches/eos-eol-notice-c51-738116.html

Cisco Nexus 9372 PX/PXE/TX/TXE (announced end of sale 01 MAY 2018; last day to order
is 30 OCT 2018; the replacement switch is the Nexus 93180, but this is not listed in the EOL
notice) at URL https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-
switches/eos-eol-notice-c51-740713.html

https://www.cisco.com/c/en/us/support/switches/nexus-9372px-switch/model.html

1.1.1 Issues to be aware of with Dell S4100 Series Switches

This sub-section describes some known issues with Dell switches that impact Avid
deployments and may also provide remedial information. Other vendors are addressed in
other sections of this document.

Note: Tips from the field. Try this at home ☺

1.1.1.1 DELL OS10 does not support stacking.


An S4048 stack running OS9 should not be upgraded to OS10.
Dell S4100 devices cannot be stacked and the minimum OS for S4100 is OS10.



1.1.1.2 S4100 VLT issue with NEXIS
In JAN 2021 a project using S4148 with NEXIS and VLT encountered problems: a VLT
LACP to a NEXIS controller would fail, while a local LACP on a single switch would
succeed; however, downstream Dell N3000 switches connecting via VLT/LACP would
succeed. The DELL S4148 switch in question was running 10.5.2.2.258. Local Dell support
recommended a downgrade to version 10.5.1.4.249, which resolved the issue toward NEXIS
and allowed VLT/LACP to operate correctly. Apparently other customers have also
experienced VLT/LACP-to-HOST issues (e.g., with Windows Server, Linux Server,
VMware ESXi and possibly other hypervisors and operating systems) when running DELL
OS 10.5.2.2 on S4100 series switches.

This known VLT-LACP-to-HOST issue in OS10.5.2.2 will affect NEXIS and possibly
ESXi, and is likely to affect other platform operating systems/devices too.

Current advice [JAN2021] is to downgrade to 10.5.2.0 or 10.5.1.4 to mitigate this issue.

As this is a dynamic situation, Dell release notes should be consulted for status.
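
Before planning a downgrade, confirm the running release; a standard OS10 check is sketched below (the version lines shown are illustrative of an affected switch):

OS10# show version
Dell EMC Networking OS10 Enterprise
...
OS Version: 10.5.2.2
Build Version: 10.5.2.2.258

If the output reports 10.5.2.2, apply the downgrade advice above before attempting VLT/LACP toward NEXIS.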

1.1.2 Dell S4100 Series Switches Model Variants


As at MAR2021 the current model list is:

S4112F-ON 12x 10G and 3x 100G


S4112T-ON 12x 10GbT and 3x 100G
S4128F-ON 28x 10G and 2x 100G
S4128T-ON 28x 10GbT and 2x 100G
S4148F-ON 48x 10G and 2x 40G and 4x 100G
S4148T-ON 48x 10GbT and 2x 40G and 4x 100G

They all use the same chipset so there is no appreciable difference in the capability of each
model in regard to operation with NEXIS and/or MediaCentral. It is possible that other model
variants may be added in time.

1.2 Approved Switches for Avid NEXIS


Avid has tested or reviewed the following switches for use in an Avid NEXIS environment.
Arista Networks 7048
Arista Networks 7150S
Arista Networks 7280SE
Cisco Catalyst 4900M
Cisco Catalyst 4948-10GE
Cisco Nexus 7000 series (specific I/O Cards)
Cisco NEXUS 93180 LC - Deployed Q4 2017 [NEXUS 93180 FAMILY BECAME
QUALIFIED IN MAY 2018]
Cisco NEXUS 9336C-FX2, 93240YC-FX2, Nexus 93360YC-FX2 and Nexus 93216TC-FX2
(these are evolutions of the NEXUS 93180 FAMILY, but may have issues with ISIS storage;
see section B.40).

Cisco Nexus 5672 and Nexus 2248TPE FEX – Deployed Q1 2017 Europe
Any Cisco NEXUS 56xx parent switch with:
Nexus 2248TPE FEX, Nexus 2348 UPQ FEX, 2348 TQ/TQE FEX, 2232 TQ
FEX
NETGEAR XS712T - for small NEXIS PRO solutions

FEX COMPATIBILITY MATRIX for NEXUS 9000 NXOS

https://www.cisco.com/c/dam/en/us/td/docs/Website/datacenter/fexmatrix/fexmatrix.html

1.3 Architecturally Capable Network switches

In NOV 2017 Avid decided to create a new “standard level” of Architecturally Capable,
in addition to Qualified and Approved; the URL below provides further information.

http://avid.force.com/pkb/articles/en_US/Compatibility/Avid-NEXIS-Network-Switch-
Infrastructure

• Qualified:
o Fully qualified for a broad range of applications. Qualified switches are
typically part of the Avid engineering and test labs and part of ongoing testing.
(Qualified switches are listed in the Avid NEXIS Network and Switch Guide
for your release version.)
• Approved:
o Approved for deployment as detailed in the Avid ISIS / NEXIS & Media
Central Network Requirements Document. (Approved switches are typically
tested at a customer site as part of a specific commissioning engagement.
Approved switches are listed in the Avid NEXIS Network and Switch Guide
for your release version.)
• Architecturally Capable:
o Architecturally Capable switches have been stress tested by the switch vendor
in coordination with Avid and subject to an Avid specific test plan (see below
for details). This Knowledge Base article is the only source of information for
architecturally capable switches with Avid NEXIS.

Architecturally Capable Switches (as at MAY 2021) Check URL Above for updates

Architecturally Capable Switches (alphabetically by switch vendor)

Arista Networks
  7020TR-48 (48x 100/1000Mb and 6x SFP+). Grade 2
  7020SR-24C2 (24x 10G and 2x QSFP100). Grade 3
  7020SR-32C2 (32x 10G and 2x QSFP100). Grade 3
  7050SX2 / 7050TX2. Grade 2
  7280 Family / 7500 Family (E-Series (10/40GbE) or R-Series (10/25/100GbE)). Grade 3

Aruba Networks
  8320 (1/10GbE (SFP/SFP+ and 10GBASE-T) and 40GbE connectivity). Grade 2
  8325 (1/10GbE (SFP/SFP+ and 10GBASE-T) and 40GbE connectivity; excessive capability for use as a 1GbE edge switch). Grade 3
  3810 (1/10GbE (SFP/SFP+ and 10GBASE-T)). Grade 1

Cisco Nexus
  Nexus 93180YC-EX, 93180YC-FX, 93180LC-EX, 93180TC-EX, 93180TC-FX: tested 1G,
  10G, and 40G Ethernet; see the latest Avid Network and Switch Guide. Grade 3 (Qualified)
  Nexus 9336C-FX2: 36x 100G ports (100/50/40/25/10/4x10/4x25), QSFP28 presentation;
  not explicitly tested, but a variation of the above package. Grade 3 (Approved)
  Nexus 93240YC-FX2, 93360YC-FX2: variations of the Nexus 9336C-FX2 with different
  physical presentation, offering more 10/25GbE ports (and fewer 100GbE ports). Grade 3 (Approved)

Cisco Catalyst
  C9300-24UB, C9300-48UB, C9300-24UXB: second-generation Catalyst 9300 series with
  deeper buffers. Grade 1
  Suitable with 10G Avid NEXIS Engines as:
  - Edge switches to a larger system
  - Self-contained single-switch core/access for small, stand-alone systems
  - Smaller systems using a stacked configuration to provide a resilient solution
  In general: smaller, entry-level systems with moderate workflows. You must modify the
  default buffer configuration, which is not deep enough for correct operation with Avid
  NEXIS. See Network Requirements for Avid NEXIS, ISIS and Interplay / Media Central
  for Softmax Configuration Details for the Cisco Catalyst C9300B series.

Dell
  N2200 Series: comparable replacements for the N3000 series. Grade 1
  N3200 Series: good replacements for the N3024/3048. Grade 1 or 3, depending on exact
  model (packet buffer memory)

Juniper Networks
  QFX5100: testing completed at Layer 2. Grade 1

Mellanox Spectrum
  SN2700, SN2410, SN2100, SN2010

Architecturally Capable Switches: Merchant Silicon -- Small Buffer

Switch Vendor  Product     Buffer Size  Notes                                        Grade
Broadcom       Trident 2/2+ 12.2/16MB   10/40G; also supports 1G edge switch         2
Broadcom       Maverick     12.2MB      10/40G; also supports 1G edge switch         2
Broadcom       Helix        4MB         1G edge switch and/or two Avid NEXIS         1
                                        Engines at 10G
Broadcom       Tomahawk     16MB        10/25/40/50/100G devices*; not well suited   2
                                        to 1G edge use but amply capable
Broadcom       Tomahawk+    22MB        10/25/40/50/100G devices*; not well suited   2
                                        to 1G edge use but amply capable
Broadcom       Tomahawk2    42MB        Primarily a 100G core switch; not aimed at   3
                                        1G/10G edge use but capable
Broadcom       Trident 3    32MB        1/2.5/5/10/25/40/50/100G devices*            3
               X4, X5, X7               depending on variant
* NOTE: Avid NEXIS systems and clients do not currently support 2.5G, 5G, 25G, 50G, or 100G speeds.

Architecturally Capable Switches: Merchant Silicon -- Deep Buffer

Switch Vendor  Product  Buffer Size  Notes                                   Grade
Broadcom       Dune     N x 32GB     10/40G; also supports 1G edge switch    3
Broadcom       Qumran   N x 4GB      10/25/40/100G; also supports 1G edge    3
                                     switch
Broadcom       Jericho  N x 4GB      10/25/40/100G; also supports 1G edge    3
                                     switch

1.3.0 Known Cisco Issues impacting Avid Storage solutions

Considering a defect where UDP packets are misclassified, as described
below:
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCva22756
it is recommended that all Nexus 93000 EX series (2nd and 3rd generation)
products used with any Avid storage (ISIS or NEXIS) are deployed with, or
upgraded to, a minimum NXOS software version of 7.0(3)I7(3) (FEB 2018).
RELEASE NOTES:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/release/notes/70373_nxos_rn.html?referring_site=RE&pos=2&page=https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/release/notes/70377_nxos_rn.html

NOTE: this bug has since been found in later versions; NXOS version
7.0(3)I7(6) should be used with the Avid ISIS application. Avid NEXIS
should not be impacted, but this cannot be ruled out.

This applies to all Nexus 93108 and 93180 switches, and to 9500
chassis-based switches using 2nd generation or 3rd generation hardware
(i.e. using the N9K-C95xx-FM-E fabric card); these should use this
software.
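Where these recommendations apply, the running software can be confirmed directly on the switch. A minimal check using standard NX-OS commands (shown as a hedged example; the exact output format varies by release) is:

switch# show version | include NXOS
  NXOS: version 7.0(3)I7(6)

On Nexus 9000 7.x the "NXOS: version" line of show version reports the image actually loaded, which can be compared against the minimum versions above.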



The Cisco NEXUS 9336C-FX2 should be added to the above list. I have
recommended it to MANY customers, usually in combination with an
N93180YC/TC or N9348GC.

It is basically two N93180YC-FX switches "glued" together, with all-100G
(40/25/10/4x10/4x25) presentation. Its edge buffering capabilities are fairly
irrelevant when used as a core switch.

Nexus 93000 FX series products should use a minimum NXOS software version
of 7.0(3)I7(6) (MAR 2019).

RELEASE NOTES:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-
x/release/notes/70376_nxos_rn.html?referring_site=RE&pos=2&page=https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-
x/release/notes/70377_nxos_rn.html

Ideally use 7.0(3)I7(8) or later in the NEXUS 7.x family.

The Cisco NEXUS 9336C-FX2 QSA BUG

Link flap might cause the port to go down. CSCvq65989

Symptom: A Nexus 9000 series switch might experience ports stuck in


down/down state after a link flap event.

Known Affected Releases: 7.0(3)I7(7), 9.2(3), 9.2(4), 9.3(1)

Known Fixed Releases: 9.3(2) ,7.0(3)I7(8)

Affected ports: 1/1-6 and 1/33-36

May also impact NEXUS 93240-FX2 and 93360-FX2

This has affected one deployment (Q2/20) that was using an unusually high
number of QSA adapters instead of optical breakout.

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-
x/release/notes/70378_nxos_rn.html

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/93x/release/
notes/cisco-nexus-9000-nxos-release-notes-933.html

Not forgetting the "original" killer bugs CSCue96534 and CSCuj73571 for Cisco Catalyst
4xxx in 2013.



1.3.1 Partially Tested Switches
Devices listed here have undergone some basic testing, but may not have been tested with
simulated traffic by the manufacturer, or had a full load test with 44 (or equivalent) clients.

1.3.1.1 C2960X Testing with NEXIS (JUL 2017)


The C2960X has just 4MB of packet buffer, whereas the C4948E has 17.5MB. The C2960X
would not be able to support ISIS 7500 clients, but the NEXIS data flow is different, and the
"standard Dell switch" model N3024/3048 sold by Avid also has only 4MB of buffer.
However, the buffer organization of some lower cost Cisco switches has been found to be
incompatible with the Avid data flow, so this opportunity was taken to do rather basic testing
with a small number of workstations.

The C2960X drops packets, and this was somewhat expected, but it has shown itself to be
capable of handling relatively high loads of NEXIS data. It must be stated, however, that this
test used only 10 workstations, but those workstations were drawing a much higher load than
in normal workflows.

The READ bandwidth flowing through the C2960 was approx. 640MB/s:

CODEC        RATE (MB/s)  STREAMS
XDCAM HD 50  8            80
AVCI-100     14           45
DNxHD120     16           40
DNxHD120     18.5         34
DNxHD145     23.5         27
DNxHD220     28           22

(In each row, RATE x STREAMS is approximately 640MB/s.)

Dependent on the working video resolution, these values (which may well exceed normal
operational requirements) suggest the C2960X, with two 10G uplinks, is capable of
supporting Avid NEXIS video traffic for 1G clients connecting to 10G E4 engines (the
result for a 40G-connected Engine could be quite different, even though there would be
buffering stages in between at the 40G to 10G transition).

The interface discard counters were monitored along with the interface statistics, which are
reported over the default 300 second/5 minute period. As can be seen below, the discard
percentage is approx. 0.02%.



Port          Discards  Discards  Discards  Discards   Discards   Packets        Discard
              @10 MIN   @20 MIN   @30 MIN   per 1 MIN  per 5 MIN  per 5 minutes  %age
port G1/0/1   0         0         0
port G1/0/2   29313     56172     81069     2702       13511      135,725,668    0.01%
port G1/0/3   27535     53800     80164     2672       13360      134,490,242    0.01%
port G1/0/4   41828     43419     121109    4036       20184      136,190,776    0.01%
port G1/0/5   19755     40022     58011     1933       9668       66,658,029     0.01%
port G1/0/6   19554     40366     58345     1944       9724       66,235,063     0.01%
port G1/0/7   7546      13988     22431     747        3738       33,240,438     0.01%
port G1/0/8   3478      5469      9004      300        1500       5,298,000      0.03%
port G1/0/9   28934     55309     81232     2707       13538      138,796,895    0.01%
port G1/0/10  44382     90238     133101    4436       22183      137,115,475    0.02%
port G1/0/11  31429     61274     90372     3012       15062      89,052,152     0.02%
port G1/0/12  0         0         0

For example, on port G1/0/2: 13,511 discards per 5 minutes against 135,725,668 packets per
5 minutes gives the 0.01% figure above.

This switch should not be considered an approved switch for use with NEXIS based on this
mini-test. However, based on this mini-test it is considered an "acceptable deployment",
but without a commitment to explicit support by Avid.

1.3.1.2 Cisco Nexus 9348-GC-FXP Field testing (FEB 2018)


The Cisco Nexus 9348GC-FXP Switch is a 1RU switch that supports 696 Gbps of bandwidth
and over 250 Mpps. The 48 1GBASE-T downlink ports on the 9348GC-FXP can be
configured to work as 100-Mbps or 1-Gbps ports. The 4 SFP28 ports can be configured as
1/10/25-Gbps and the 2 QSFP28 ports can be configured as 40- and 100-Gbps ports. The
Cisco Nexus 9348GC-FXP is ideal for big data customers that require a Gigabit Ethernet
ToR switch with local switching. This switch has not been tested by Avid with ISIS 7500 but
is being considered for use with NEXIS (as at MAR 2018 - it WAS QUALIFIED FOR NEXIS
IN MAY 2018); generally, any switch capable of working with ISIS 7500 is suitable for
NEXIS.

Note: This switch uses the 3rd generation Nexus 9000 chipset and has been
successfully field tested with ISIS 7500 (in March 2018) by a forward-
thinking customer using a small number of clients running AVCI-100
video resolution; it was necessary to upgrade the switch to NXOS S/W
version 7.0(3)I7(3) (FEB 2018). The earlier version supplied on the switch
did not operate correctly with ISIS 7500.



See section 1.3.0 regarding the defect where UDP packets are misclassified
(CSCva22756): all Nexus 93000 series (2nd and 3rd generation) products
used with any Avid storage (ISIS or NEXIS) should be deployed with, or
upgraded to, a minimum S/W version of 7.0(3)I7(3) (FEB 2018).

1.3.1.3 Cisco Nexus 93180YC-FX (QUALIFIED MAY 2018)


The Cisco Nexus 93180YC-FX is a later generation of the N93180YC-EX and has a very
similar buffer architecture. It also addresses a 25G/100G interface distance limitation by
using RS-FEC instead of the FC-FEC used in the N93180YC-EX.

1.3.1.4 Cisco Nexus 9336C-FX2 and 93240YC-FX2


These switches are an evolution of the Cisco Nexus 93180YC-FX (QUALIFIED MAY
2018); they have additional buffering, using two FX-series ASICs, and provide a higher
port count of 10G interfaces. However, buffering in the core switch is of less relevance than
in an edge switch for Avid "FAT" clients.

Nexus 93000 FX series products should use a minimum NXOS software version
of 7.0(3)I7(6) (MAR 2019).

*** This addresses a UDP fragmentation Bug. - CSCvm70117 ***


*** Fragmented UDP packets goes to CPU - BFDC v4 PACKET IETF ***

First found as CSCue96534 on Catalyst 45XX (2013, Cisco IOS-XE software
versions 3.3.x and 3.4.0). The only fragmented packets that are "dropped"
[i.e. STOLEN AND NEVER RETURNED] contain the value 3784 or 3785 where
the UDP L4 portion would be if the packet were a non-fragment.

RELEASE NOTES:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-
x/release/notes/70376_nxos_rn.html?referring_site=RE&pos=2&page=https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-
x/release/notes/70377_nxos_rn.html

nxos.7.0.3.I7.6.bin

even better nxos.7.0.3.I7.8.bin. (MAR 2020)


https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/release/notes/70378_nxos_rn.html



1.3.1.5 Cisco Nexus 93180YC-FX3

The Cisco Nexus 93180YC-FX3, launched Q1 2021, is an evolution of the Cisco Nexus
93180YC-FX (QUALIFIED MAY 2018) and has very similar buffering. At the time of
writing this section (MAY 2021) there is no intention by Avid to test this device, as it is not
deemed necessary.

The FX3 variant is an evolution of the existing FX switch platform, with additional features
like FEX mode, along with a few more of minimal relevance to Avid dataflows, like Telecom
PTP and some telemetry features. The buffer is exactly the same as for the FX: a single
slice with 40MB of shared buffer.

However, this platform requires NXOS 10.x, and at the time of writing (MAY 2021) Avid
has zero experience with this version.

From a FEX operation perspective this platform should exceed the capability of previously
tested FEX platforms; note again that NXOS 10 is required.

1.4 LEGACY Switches for Avid NEXIS


Many questions are asked about using old network switches from previous ISIS 5x00 and
7x00 deployments when upgrading to NEXIS.
1.4.4 Using Force10 S60 with NEXIS

In late 2015 the S60 network switch became End of Sale. There appears to be no direct
replacement for the S60N, which had very large buffers of 1.25 GB. As this is a legacy switch
there are no plans for Avid to test it, and Avid does not explicitly support this switch with
NEXIS. However, it is considered architecturally suitable for any workflow. Generally, any
switch that was capable of supporting ISIS 5x00 data flows should be capable of supporting
NEXIS data flows. The limited quantity of 10G ports on this switch suggests it will be used
on smaller NEXIS systems only, or as a cascaded access switch from a higher capability
core switch.

1.4.3 Using Force10 S25N with NEXIS

In 2015 the S25N network switch became End of Sale, its direct replacement being the
Dell N3024. The S25N has small buffers (unlike the S60), just 2MB, arranged as 1MB per
12-port group. Avid does not explicitly support this switch with NEXIS. Generally, any
switch that was capable of supporting ISIS 5x00 data flows should be capable of
supporting NEXIS data flows; however, this switch is unlikely to be able to support
aggressive workflows, but is expected to be able to support limited workflows (i.e. 50Mbit/s
data rate) and/or client quantity.

Note that the S25N has TX and RX crossed on the RJ45 connector. For the Dell F10-S25 I
have used an RJ45 Cat 5e coupler and a standard "twisted" Ethernet cable as my "get out of
jail" solution to fit on the end of my Cisco rolled cable. Imperfect, but it seems to work.



1.5 Transceivers and cables
Questions are often asked about this area. Some explanations are given below
1.5.1 40G QSA or Breakout cables NEXIS
Using a Cisco 40G-10G QSA or 4x10G breakout cables is no issue, as they are physical
layer devices. But QSA adapters are very inefficient, because you get only 1 x 10G port from
a 40G port.

I have successfully used a Cisco QSA adapter with a NEXUS 5648 while doing some ISIS
7500 testing.

The QSFP breakout cables are not something I have used myself, and the Cisco data sheet is
POOR:
http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-660083.html

Apparently there is not a 40G to 4x10G "empty" SFP+ option that can be used in
conjunction with SFP+ transceivers of your choice (i.e. SR or LR etc.). Hence you would have
to use the

QSFP-4SFP10G-CU (0.5M, 1M, 2M, 3M, 4M, 5M) - QSFP to 4 SFP+ copper breakout cables

This gives the equivalent of four TWINAX cables that could be connected to NEXIS, so you
still get 40G worth of throughput. Avid has not tested this cable, but it should present exactly
the same as a single 10G TWINAX that Avid has tested.

Some more details can be found in Appendix B.34

1.5.2 Breakout cables NEXIS and NEXUS 93180


From a DEMONS mail thread in MAY 2019:
A question about connecting an E4 10G port to a Cisco Nexus 93180LC-EX 40G port, or a
93108TC-EX. The devil is in the detail here.

Short answer: NOT TESTED, but should be fine. These are all standards-based connections.

Long answer:
There are so many cable/transceiver/breakout variations, Avid will NEVER test them all,
and has probably tested only one or two!! Even Cisco has challenges with its new
Transceiver Module Group (TMG) Compatibility Matrix:

https://tmgmatrix.cisco.com/home

The Nexus 93180 EX family all use the same chip inside (the principles described below
apply equally to FX models). The model variants have different external presentation. It is
all based around a chip that has 100G ports that can present as 100G, or 2x50G, or 4x25G,
or 1x40G, or 4x10G, or, with a QSA adapter, 1x10G or 1x1G (yes, a 100G port working as
a 1G port… but sometimes it has to be done).

So, the 10/25G SFP ports are just a (backend) 100G port externally (physically) presented
as four independent ports, to which one can connect an SFP+ or SFP28 or 10G TWINAX or
25G TWINAX (or AOC cables… but that is another can of worms that Avid does not test,
because the Mellanox CX3 NIC in NEXIS does not officially support AOC). Hence, a single-
path TWINAX presents an SFP "style" connection the same as a 4x10G TWINAX from a
"single" 100G port. It is just that with a breakout cable there is additional configuration
necessary versus a "hard port".
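To illustrate that extra configuration, a minimal NX-OS sketch for splitting a Nexus 9000 port into 4x10G is shown below. This is an untested example (module and port numbers are illustrative only); the resulting sub-interfaces appear as Ethernet1/33/1-4.

! Split physical port 1/33 into four 10G lanes
switch(config)# interface breakout module 1 port 33 map 10g-4x
! Each lane then configures like a normal port
switch(config)# interface Ethernet1/33/1
switch(config-if)# description NEXIS-E4-10G-link
switch(config-if)# no shutdown

A "hard" SFP+ port needs none of this; the breakout command is the only real difference.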

To go to my favourite car metaphors: same car… different style doors, some with tinted
windows, some without.

Also, never forget the 1/10G ports on a 93108TC-EX are just as capable as a 10G SFP+
port on the N93180YC-EX; it is just the distance options that differ based on transceiver
selection.

1.5.3 Breakout cables NEXIS and DELL S4048


This should not really be necessary on a Dell S4048-ON, but is likely if using an
S4048T-ON, which has RJ45 1/10GBaseT ports instead of SFP+ ports. Bear in mind the
S4048T-ON is not sold by Avid.

There are three choices (a configuration sketch follows this list):

1. TWINAX breakout, 40G to 4x10G, 5m max distance
2. OPTICAL breakout, from QSFP+ via an external breakout panel as described in my
NETREQS section B.34.3 Optical breakout useful information

Each of these will give you 4 x 10G ports, and will need additional commands in the
config.

Lastly…
3. Use a QSA adapter (and then fit a 10G SFP+), which turns a 40G port into a single
10G port, which is inefficient, especially if you need multiple 10G ports.

None of these are sold by Avid, so they must have been provided externally, in which case
the reseller should ask their supplier of the DELL switch.
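For options 1 and 2, a minimal sketch of those additional commands on Dell Networking OS9 is below. This is an untested illustration (stack-unit and port numbers are examples only); verify the exact syntax against the OS9 configuration guide for your release.

! Put QSFP+ port 49 on stack-unit 1 into 4x10G (quad) mode
Dell#configure
Dell(conf)#stack-unit 1 port 49 portmode quad
Dell(conf)#exit
! A save and reload is required for portmode changes to take effect
Dell#write memory
Dell#reload

After the reload, the four resulting 10G interfaces can be configured individually like any other port.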

1.5.4 Avid supplied 40G Transceivers and NEXIS


This should be rather simple. Historically the E5 & E5SSD could be ordered with:
7070-35070-00 Avid NEXIS 40GbE QSFP MPO connector Optic SR (100m, Short Range) for E5
& E2 SSD controller (Mellanox MC2210411-SR4) (40GbE, InfiniBand FDR10 SR4 MPO 150m 850nm)

Or you could use TWINAX. Originally there was no Long Range option, but that changed in SEP
2017 (from a Mellanox perspective) with the MC2210511-LR4 being supported in the NIC
when firmware Rev 2.42.5000 (SEP 2017) or later is deployed in the controller.

But the MC2210411-SR4 became EOL at Mellanox
(see: https://www.mellanox.com/related-docs/eol/LCR-000407.pdf), so Avid had to change.



But what to connect it with… There are many twists and turns here; to describe just one:

The first is that there is NO consistent naming of this particular 40G connectivity option
between vendors, unlike other well-defined 40GBASE standards like SR4 and LR4.

It appears that it is called:


Arista call it "QSFP-40G-XSR4"
Cisco call it "QSFP-40G-CSR4"
Dell (and apparently Huawei) call it "QSFP-40G-ESR4"
Extreme call it "40G-QSFP-eSR4"
Juniper call it "QFX-QSFP-40G-ESR4"
Mellanox call it "MC2210411-SR4E" (40Gb/s, QSFP, MPO, 850nm, up to 300m)

All apparently the same thing. Standards are a wonderful thing, but sometimes the vendors
just add to the confusopoly for their own purposes. The other thing about the Cisco part
(and again there appears to be a lack of commonality; I expect this applies to the others) is
that the Cisco "extended" range devices also support 4x10G breakout while the standard
versions do not, but for Mellanox it seems to be the opposite.

Apparently "standard" SR4 can interconnect with "extended" SR4 within the limits of
"standard" SR4, but I can only find ONE article to support that:

https://community.cisco.com/t5/other-data-center-subjects/interconnect-qsfp-40g-sr4-with-
qsfp-40g-csr4/td-p/3219528.

Another vendor has confirmed (APR 2020) that SR4E has a more powerful laser and a higher
power class, and that it can communicate with the lower power SR4 (100m, Short Range)
device within the capabilities of the lower power device.

1.5.5 3rd Party Transceivers and Switch Vendors


Fundamentally this is a "beauty contest" between the switch vendors, all wanting to sell their
own branded products at good margin. In reality correct operation is ultimately a function of
vendor codes programmed into the local EEPROM on the transceiver/TWINAX, and there
are many vendors (a web search will provide a long list) that sell "guaranteed compatible"
devices with the correct EEPROM codes already programmed appropriately for the "intended
switch vendor". In reality all the optical transceivers and TWINAX cables are made by a
restricted number of suppliers, just as most NICs are based on a small collection of
merchant silicon.

Nexus 9K and Nexus 3K (and later versions of Nexus 7K) don't have an enforced ban on 3rd
party transceivers; when you plug in a 3rd party transceiver (e.g. a different vendor's
TWINAX cable) the system will display a syslog message that the transceiver is not
supported, but will allow the port to function correctly providing the cable/transceiver is in
line with the IEEE standard. Some other Nexus switches need additional commands to
service the unsupported transceiver (a web search will provide these, as it may not be
"polite" to share them here).

Hence using a Mellanox cable from a Mellanox NIC to a Cisco Nexus 9000 switch will work,
but you may well get some benign "advisory/complaining" log entries in the switch.



1.5.6 Gigabit Media Converters
In NETREQS Version 1 (final version V1.24) there is a whole appendix (F) on media
converter testing, originally from 2006. The Allied Telesis models (MC1004 and MC1008)
seem to have been superseded by now, but the TP-Link device is still available.

Media converters are pretty basic devices, usually layer 1, so not much "invention" going
on… a bit like wheels and tyres: they are round and will (probably) always be round. Some
folk used to think there was a performance impact with these devices, but generally they just
work… there is no buffering to consider; light pulses go in and electrical pulses come out, and
vice versa of course (no rocket science here).

So, in 2021 what might I suggest:


Premium product: Allied Telesis
https://www.alliedtelesis.com/en/products/media-converters/mmc2000lc
• These products tend to be layer 2 device due to additional management functions
• This vendor provides some impressive large scale media conversion solutions too
• Street price approx. EUR/USD 150

Basic no-nonsense product: TP-Link MC220L or MC200CM/MC210CS

https://www.tp-link.com/uk/business-networking/accessory/mc200cm/
(BTW it won't be calling home!!)
• One point which may be seen as advantageous (or not, depending on perspective) is
the FIXED transceiver, which means that no SFP needs to be purchased separately.
• Street price EUR/USD 40

https://www.tp-link.com/uk/business-networking/accessory/mc220l/
• Street price EUR/USD 30, SFP transceiver extra

There are many other suitable vendors, such as Black Box or StarTech; they all do the same
simple job. As with many "things Ethernet", they are all made in the same group of factories
by OEM suppliers and given different paintjobs and branding.

1.6 Switch combinations for E5 NEXIS and Big 40G deployments

The explicitly approved switches generally have limitations on the 40G ports, which limits the
size of E5-based solutions with full resilience.

Many switch vendors use the same merchant silicon, and product families of a single vendor
may share common silicon and hence buffering capability amongst 1U/2U products and
chassis-based products.

If 40G and 10G variants of 1U products can be stacked it might be possible to stack different
models from the same family subject to vendor OS capability/limitations.

The list below is not exhaustively tested or possibly explicitly supported and is provided for
information purposes only:

For Avid NEXIS storage it is the edge toward the NEXIS clients that needs the most
buffering.



1.6.1 1U Switch Families
One Avid qualified switch (2016-2020, then End of Sale) for NEXIS is the Broadcom
Trident 2 based S4048; the Dell S6000 is the same device but with 40G presentation, so it
can be used alongside the S4048 for 10/1G, with Dell N3024/48 for 1G clients (NLE or
Server).

Cisco Nexus 9332PQ for 40G with N9372 (also Avid approved) for 10/1G (all the same
Broadcom Trident 2 chipset, 12MB + 25MB buffer in the ALE chip).

Note: End of Sale announced 05MAR19, end of support 24MAR24:

https://www.cisco.com/c/en/us/support/switches/nexus-9332pq-switch/model.html

Arista 7050QX for 40G and 7050SX/TX for 1/10G (all the same Broadcom Trident 2
chipset; we have done "some tests").

Cisco Nexus 93180YC-EX (1/10/25 & 40/100), 93108TC-EX (1/10 & 40/100) and 93180LC
(25/40/50/100) all have the same (home-grown Cisco) chip; the LSE "40MB" is 18.7MB per
slice and uses 2 slices.

Arista 7280QR for 40G and 7280SR/TR for 1/10G (all the same Broadcom Jericho chipset
with 4GB buffers; not tested - yet - but VERY capable).

As this is NEXIS, for the 1G edge also consider the Arista 7010T, which is the same chipset
(Broadcom Helix) as the Dell N3048.

The HPE FlexFabric 5930 uses the same Broadcom Trident 2 chipset (and the 5900 was
Trident+ [like the F10-S4810], which would also be acceptable, but this is now considered a
legacy switch).

The HPE Altoline 6900 Switch uses Broadcom Helix and provides 48 x 1G, 4 x 10G and
2 QSFP+ ports.

All info above is in the public domain.

These alternative switches may not have been tested by Avid and may not be explicitly
supported by Avid, hence no guarantee of operation can be assumed. However, one
might consider the likelihood of successful operation with NEXIS clients, using one of these
products, to be high. Many of the switches below could be used as Core or Edge in large
systems. Some switches categorized as "edge" could be used as the Core in small non-40G
systems.

The table below both summarizes and extends the information given above. This is not an
exhaustive list, as the product families are continually changing and evolving with newer
generations of both merchant silicon and custom silicon. Some of them may already exist in
proven deployments.



1.6.1.1 SWITCH TABLE

BRAND           MODEL                TYPE                   CHIPSET                  BUFFER
Dell            S6000                40G switch             Broadcom Trident 2       12.2MB
Dell            S4048                10G switch             Broadcom Trident 2       12.2MB
Dell            N3024/48             1G RJ45 switch         Broadcom Helix           4MB
Cisco NEXUS     N9332PQ              40G switch             Broadcom Trident 2       40MB
                "EOL ANNOUNCED"                             & Cisco ALE
Cisco NEXUS     N9372PX-E            1/10G switch           Broadcom Trident 2       40MB
                "EOL ANNOUNCED"                             & Cisco ALE
Cisco NEXUS     N9372TX-E            1G RJ45 edge switch    Broadcom Trident 2       40MB
                "EOL ANNOUNCED"                             & Cisco ALE
Cisco NEXUS     N93180LC-EX          25/40/50G switch       Cisco LSE                37.4MB
Cisco NEXUS     N93180YC-EX          1/10/25G switch        Cisco LSE                37.4MB
Cisco NEXUS     N93108TC-EX          1/10G RJ45 switch      Cisco LSE                37.4MB
Cisco NEXUS     Nexus 9300FX:        1/10/25G switch        LS1800FX                 40MB
                93180YC-FX,
                N93108TC-FX
Cisco NEXUS     9336C-FX2,           10/25/100G,            Cisco LS3600FX2          40MB
                93240YC-FX2,         various models
                93360YC-FX2
Cisco NEXUS     93216TC-FX2          1/10G copper &         Cisco LS3600FX2          40MB
                                     100G x 12
Cisco NEXUS     Nexus 9300           40/100G switch         S6400                    40MB
                (9332C, 9364C)
Cisco NEXUS     Nexus 9300FX3:       1/10/25G switch        LS1800FX3                40MB
                N93180YC-FX3
Cisco NEXUS     Nexus 9300GX         40/100G switch         LS6400GX                 80MB
Cisco NEXUS     5672UP               1/10/40G switch        Cisco UPC                150MB
Cisco NEXUS     5624/5648PQ          10/40G switch          Cisco UPC                360MB
                "EOL ANNOUNCED"
Cisco NEXUS     5696Q                10/40G switch          Cisco UPC                modular
Cisco Catalyst  9300B                1/10G switch           UADP 2.x XL              32MB
Arista          7050QX               40G switch             Broadcom Trident 2       12.2MB
Arista          7050SX               1/10G switch           Broadcom Trident 2       12.2MB
Arista          7050TX               1/10G RJ45 switch      Broadcom Trident 2       12.2MB
Arista          7020TR               1G RJ45 edge switch    Broadcom QUMRAN          3GB
Arista          7010T                1G RJ45 edge switch    Broadcom Helix           4MB
Arista          7280QR               40G switch             Broadcom JERICHO         4GB
Arista          7280SR               1/10G switch           Broadcom JERICHO         4GB
Arista          7280TR               1/10G RJ45 switch      Broadcom JERICHO         4GB
Arista          7050CX3              32 x 100G QSFP,        Broadcom Trident 3       32MB
                                     2 x 10G SFP
Arista          7050SX3              48 x 25G SFP,          Broadcom Trident 3       32MB
                                     8 x 100G QSFP
Arista          7050TX3              48 x 10G RJ45,         Broadcom Trident 3       32MB
                                     8 x 100G QSFP
Arista          7500R2               Chassis based          Broadcom JERICHO+        4-32GB
Arista          7280R2               Various models         Broadcom JERICHO+        4-32GB
Arista          7280R3               Various models         Broadcom JERICHO2        4-32GB
Arista          7060X2               Various models         Broadcom Tomahawk+       22MB
Arista          7260X3               Various models         Broadcom Tomahawk2       42MB
Arista          720XP                Various models         Broadcom Trident 3 (X3)  8MB
HPE FlexFabric  5930-32QSFP+         40G switch             Broadcom Trident 2       12.2MB
HPE FlexFabric  5930 2QSFP+          1/10G switch,          Broadcom Trident 2       12.2MB
                                     1/10G RJ45 switch
HPE Altoline    6900                 1G RJ45 edge switch    Broadcom Helix           4MB
Juniper         QFX5100-24Q-AFO      40G switch             Broadcom Trident 2       12.2MB
Juniper         QFX5100-48S-AFO      1/10G switch           Broadcom Trident 2       12.2MB
Juniper         QFX5100-48T-AFO      1/10G RJ45 edge switch Broadcom Trident 2       12.2MB
Juniper         QFX5120-48Y(M)       1/10/25/40/100G        Broadcom Trident 3 X5    32MB
Juniper         QFX5120-48T          1/10G RJ-45, 100G      Broadcom Trident 3 X5    32MB
Juniper         QFX5120-32C          100G switch            Broadcom Trident 3 X7    32MB
Juniper         QFX5220-128C         100G switch            Broadcom Tomahawk 3      64MB
Juniper         EX4400               1/10G RJ-45, 25G       Broadcom Trident 3 X3    8MB

NOT ALL OF THE ABOVE PRODUCTS HAVE BEEN TESTED AND
APPROVED BY AVID, BUT ALL ARE CONSIDERED
ARCHITECTURALLY CAPABLE.
SEE section 1.6.1.2

1.6.1.2 IS QUALIFIED STILL REQUIRED?

MAY 2021: The need for formal QUALIFICATION of switches (and NICs) with NEXIS is
reduced significantly compared to (legacy) ISIS. The quantity of devices reaching
APPROVED status is also reducing, as fewer companies are willing to pay for those stronger
assurances; this is offset by the quantity of network vendors using proven merchant silicon
and the evolution of those "silicon families", and the same principle applies to well-known
custom silicon families. There is little to be gained from testing newer members of a proven
"family" that has evolved with greater capability than its successfully deployed predecessor.
While there will always be a low risk with newer product families and software versions, this
has to be balanced against the resources consumed/purchased for testing.

Added to this must be the complexity of the networks that are being deployed: it is common
for the companion applications such as Asset Management to be virtualised in a VM farm
on one switch pair (edge/leaf), for the storage to be on a different switch pair (edge/leaf), and
for the Ingest & Playout or NLEs to be on yet another switch pair (edge/leaf). The concept of
qualification does not work in such complex distributed environments. It is the lower speed
(in comparison to NEXIS storage engine speed) edge devices that have the most "demand"
for sufficient edge buffering. Therefore, in those complex network deployments it is
important to understand the path capabilities in combination with the desired workflows,
and to apply the knowledge of previous successful (or problematic) implementations to
make the appropriate judgement call for new projects. Of course, this applies to "fat"
NEXIS clients; for "thin" MCCUX clients the path requirements are minimal, and the edge
device is of little concern.

As can be seen, many of the newer devices now listed in section 1.6.1.1 exceed the
capabilities of the switches mentioned at the beginning of section 1.6.1 (from the original
V2.0 DEC 2017 release of this document). To use a car analogy (something I am well known
for), don't get corralled into thinking that a newer 3.0L engine cannot do what an older 2.0L
engine could do… of course, a modern 2.0L fuel-injected engine can do twice what an older
carburettor-based 3.0L could, but that is just another example of how technology marches on
and moves the goal posts too, and that is before we talk about electric cars (oops, I had better
stop now!). You always have to understand the machine, and not be dazzled by the brochure.

1.6.2 Chassis Based Switch Families


Many of the products above are also available as blades in chassis-based switches.
Nexus 9500 have "equivalents" in the Nexus 9300 family, but not direct equivalents, as the
quantity of chips usually differs.

There are three families of Nexus 9500:
First Generation are based around Trident 2 merchant silicon from Broadcom.
Second Generation EX/FX series are based around LSE custom silicon from Cisco.
Both the first and second generation have fabric cards.
I/O cards and fabrics CANNOT be mixed between generations.
Third Generation FX series are based around LS1800FX custom silicon from Cisco.

There is also a deep buffer variant of the NEXUS 9500 family, designated 9500R.
This has not been tested with NEXIS but should be amply capable. However,
this deep buffer variant unexpectedly failed a test with ISIS 7x00 in 2018, probably
due to a firmware bug in conjunction with the highly fragmented UDP payload used by
ISIS 7x00; this should not impact NEXIS, which uses a TCP payload.

9300/93000 Model   CHIPSET / BUFFER           NEXUS 95XX I/O CARD   ARCHITECTURE
N9K-C9372PX-E      Trident2 + ALE                                   48-port 1/10G SFP+ & 6 x 40G
                   Trident2 + ALE             N9K-X9564PX           48p 1G/10G SFP+ & 4p 40G QSFP
N9K-C9372TX-E      Trident2 + ALE                                   48-port 1/10G-T & 6 x 40G
                   Trident2 + ALE             N9K-X9564TX           48p 1G/10G Base-T & 4p 40G QSFP
Nexus 9332PQ       Trident2 + ALE                                   32 x 40G
                   Trident2 + ALE (x2)        N9K-X9536PQ           36-port 40G QSFP+
                   Trident 2 only (x3)        N9K-X9636PQ           36-port 40G QSFP+ Aggregation Module
                   Trident2 + ASE             N9K-X9736PQ           ACI SPINE MODE ONLY
Nexus 93120TX      Trident2 + ALE                                   96-port 1/10G-T & 6 x 40G (opt)
N9K-C9396PX/TX     Trident2 + ALE + GEM                             48-port 1/10G & 12 x 40G or 4 x 100G
N9K-C93128TX       Trident2 + ALE + GEM                             96-port 1/10G & 8 x 40G or 2 x 100G
                                              N9K-X9432C            Flexible speed 10/25/40/50/100G
N93180YC-EX        LSE, 37.4MB (2x18.7)                             48-port 1/10/25G SFP+ with 6-port 100G QSFP+
                   LSE x 2, 75MB              N9K-X97160YC-EX       48-port 1/10/25G SFP+ with 4-port 100G QSFP+
N93108TC-EX        LSE, 37.4MB (2x18.7)
N93180C-EX         LSE, 37.4MB (2x18.7)
                   LSE x 4, 150MB (37.4x4)    N9K-X9732C-EX         32 x 100G 1:1
                   LSE x 4, 150MB (37.4x4)    N9K-X97326-EX         36 x 100G 1.1:1
N93180LC-EX        LSE, 37.4MB (2x18.7)                             24 x 40/50-Gbps QSFP+ ports and 6 x 40/100-Gbps QSFP28 ports
N93180YC-FX        LS1800FX, 40.8MB
N93108TC-FX        LS1800FX, 40.8MB
                   LS1800FX x 2, 81.6MB       N9K-X9788TC-FX        48-port 1/10GBASE-T plus 4-port 40/100G QSFP28
N9348GC-FXP        LS1800FX, 40.8MB
N9336FX2           LS3600FX2, 40MB (2x20)                           32 x 100G (2x50, 1x40, 4x25, 4x10)
                   LS1800FX x 4, 163.2MB      N9K-X9732C-FX         Every QSFP28 supports 1x100, 2x50, 1x40, 4x25, 4x10, and 1x1/10G
                   LS1800FX x 4, 163.2MB      N9K-X9736C-FX         Every QSFP28 supports 1x100, 2x50, 1x40, 4x25, 4x10, and 1x1/10G
N9364C             S6400, 40.8MB (4x10.2)                           64 x 100G 2U, 100G/50G/40G/10G (single port mode, no breakout)

Arista 7500E have direct equivalents in the 7280SE family.


Arista 7500R have direct equivalents in the 7280R family.



NOT ALL OF THE ABOVE PRODUCTS HAVE BEEN TESTED AND
APPROVED BY AVID
SEE section 1.6.1.2

1.7 Cisco Catalyst Switches


This section discusses the suitability, or not, of Catalyst switches that have not been tested
directly by Avid.

1.7.1 Catalyst 9000 - A series


Applies to Catalyst 9300 (C9300), Catalyst 9400 (C9400), Catalyst 9500 (C9500).
These devices are not supported or approved switches. In January 2018 Cisco shared with
Avid some results of simulated data packet testing (Avid-defined data flows) on the C9300-
24UX and the C9500-40X; no results are available on the C9300 24T/48T (or P/U variants),
but they are likely to perform worse.
• SEE SECTION 1.7.2 for Catalyst 9300 B series

With the default settings, both of these models dropped packets with a dual 10G source (both
staggered and non-staggered), and with a single 10G source. The default setting does not
allow sufficient buffer depth for the packet stream from storage engine to NLE client.

Applying a new single class of QoS profile with a deeper buffer configuration allowed the
simulated packet stream testing to be successful for a limited number of clients, in specific
port groups, during the simulated test, but this has not been tested with NEXIS clients.

PID – C9300 – 24UX (Added QoS Config)

Switch(config)#qos queue-softmax-multiplier 1200


Switch(config)#policy-map All
Switch(config-pmap)#class class-default
Switch(config-pmap-c)#bandwidth percent 100
Switch(config-pmap-c)#

Switch(config)#int ra te1/0/1-24
Switch(config-if-range)#service-policy output All
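To sanity-check that the policy and deeper soft buffers have actually been applied, something along these lines can be used (a hedged example; the interface name is illustrative and the exact output format varies by IOS-XE release):

Switch# show policy-map interface te1/0/1
Switch# show platform hardware fed switch active qos queue config interface tenGigabitEthernet 1/0/1

The second command reports the per-queue HardMax/SoftMax values on the UADP ASIC, which should reflect the softmax multiplier configured above.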

The C9300-24UX and the C9500-40X with 1G client ports and two 10G source ports could
support up to 8 clients in the simulated test emulating two 512KB chunks, and not using the
ports sequentially (this maximizes the capability of the architecture and is more akin to the
statistical spread of port usage during normal operations). This test used 24 Active receiver
ports. This is probably a good indication of the ability to support 16 x dual-stream NLE with
a 50Mbit/s codec. This would likely double with a 48 port device which has additional ASICs
and hence additional buffering.

With a lower active port count of 16, the C9500-40X with 1G client ports and two 10G
source ports could support up to 16 clients in the simulated test emulating two 512KB
chunks, and not using the ports sequentially (this maximizes the capability of the architecture
and is more akin to the statistical spread of port usage during normal operations). This is
probably a good indication of the ability to support 32 x dual-stream NLE with a 100Mbit/s
codec.

However, the C9500-40X is really not targeted as a device for 1G client connection; that is
really the target market for the C9300.

It should be noted that at the time of writing (JAN 2018, updated MAR 2019) this device has
not been tested directly with NEXIS storage and NLE clients. It is recommended not to
deploy it for use with NEXIS storage and NEXIS clients.

1.7.2 Catalyst 9000 - B series


Applies to Catalyst 9300B models listed below.

C9300-24UB
C9300-48UB
C9300-24UXB

Following vendor testing in 2019, this product was marked as Architecturally Capable in
JAN 2020.

As with all deployments of Architecturally Capable products, this is not carte blanche for
Vendor or Customers; there is still a lot of work to do, and lessons to learn. The risks remain
with the customer and Vendor for solutions using this product until a funded approval is
done with Avid Professional Services; also, a funded approval may only grant limited
approval depending on the workflows and deployment scenario.

1.7.2.2 SOFTMAX CONFIGURATION DETAILS for Cisco Catalyst C9300B series

It is necessary to modify the default buffer configuration, which is not deep enough for correct
operation.

This product has double the buffer of the first series of Catalyst 9000, which was found to be
insufficiently capable during simulated testing.

First Generation Cisco Catalyst C9300 series are not considered Architecturally Capable.

The commands below provide the basic configuration and port numbers that should be used.
If using a stacked configuration, the port numbers will have to be modified accordingly.

C9300-24UB

Switch(config)# qos queue-softmax-multiplier 1200


Switch(config)# policy-map ALLPORTS
Switch(config-pmap)# class class-default
Switch(config-pmap-c)# bandwidth percent 100
Switch(config-pmap-c)# queue-buffers ratio 100



Switch(config)#int ra G1/0/1-24
Switch(config-if-range)#service-policy output ALLPORTS !!>> Release Hard MAX
buffers
Switch(config)#int ra T1/1/1-8
Switch(config-if-range)#service-policy output ALLPORTS !!>> Release Hard MAX
buffers
Switch(config)#qos stack-buffer disable

qos queue-softmax-multiplier 1200


policy-map ALLPORTS
class class-default
bandwidth percent 100
queue-buffers ratio 100
!
int range G1/0/1-24
service-policy output ALLPORTS
!
int range T1/1/1-8
service-policy output ALLPORTS
qos stack-buffer disable
!

C9300-48UB

Switch(config)# qos queue-softmax-multiplier 1200


Switch(config)# policy-map ALLPORTS
Switch(config-pmap)# class class-default
Switch(config-pmap-c)# bandwidth percent 100
Switch(config-pmap-c)# queue-buffers ratio 100

Switch(config)#int ra G1/0/1-48
Switch(config-if-range)#service-policy output ALLPORTS !!>> Release Hard MAX
buffers
Switch(config)#int ra T1/1/1-8
Switch(config-if-range)#service-policy output ALLPORTS !!>> Release Hard MAX
buffers
Switch(config)#qos stack-buffer disable

C9300-24UXB
Switch(config)# qos queue-softmax-multiplier 1200
Switch(config)# policy-map ALLPORTS
Switch(config-pmap)# class class-default
Switch(config-pmap-c)# bandwidth percent 100
Switch(config-pmap-c)# queue-buffers ratio 100

Switch(config)#int ra Ten1/0/1-24
Switch(config-if-range)#service-policy output ALLPORTS   !!>> Release Hard MAX buffers
Switch(config)#int ra T1/1/1-8
Switch(config-if-range)#service-policy output ALLPORTS   !!>> Release Hard MAX buffers
qos queue-softmax-multiplier 1200
policy-map ALLPORTS
class class-default
bandwidth percent 100
queue-buffers ratio 100
!
int range Ten1/0/1-24
service-policy output ALLPORTS
!
int range T1/1/1-8
service-policy output ALLPORTS
qos stack-buffer disable
!

1.7.3 Catalyst 9000 - B series – USING SOFTMAX ONLY


Funded customer testing in SEP 2020 revealed many nuances of the C9300UB series.

The point to make clear is that SoftMax is the key command for successful buffer depth for
use with NEXIS clients. However, it must also be understood that the impact of this
command in a C9300 stackable device stays within the stackable device, but in a C9400
chassis-based switch the buffers are more thinly spread, as there are a great many more ports,
but there are also 3 ASICs (and hence 6 cores).

Hence it may be feasible to adopt a 3-stage buffer optimization to prevent packet discard
(a monitoring sketch for checking discards between stages follows this list):

1. qos queue-softmax-multiplier 1200
   a. if not discarding, stop here
2. apply QoS tuning
   a. if not discarding, stop here
3. qos stack-buffer disable
   a. to gain the extra buffer pool; this does not affect the buffer depth capability of
      an individual port
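The decision to move from one stage to the next depends on whether the egress queues are still dropping packets. A hedged example of how this could be checked on a Catalyst 9300 (the interface name is illustrative; output format varies by IOS-XE release):

Switch# show platform hardware fed switch active qos queue stats interface gigabitEthernet 1/0/2
Switch# show interfaces gigabitEthernet 1/0/2 counters errors

The first command reports per-queue enqueue and drop counters on the UADP ASIC; the second gives the aggregate output-discard view. If drops stop accumulating after a stage, there is no need to proceed to the next one.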

Assuming the number of bytes per buffer cell is 256, a UADP 2.0 XL device has the buffer
depths below (bytes = cells x 256; where two MB figures are shown, the second adds the
HardMax to the SoftMax):

Cells (256 bytes each)                        KBYTES   UADP 2.0 XL (MB)  UADP 2.0 (MB)
400    HARDMAX  = 102,400 bytes               100      0.10              0.05
1600   SOFTMAX  = 409,600 bytes               400      0.39              0.20
19200  SOFTMAX (multiplier 1200 = 12x)
       = 4,915,200 bytes                      4800     4.69 / 4.79       2.34 / 2.39
28800  SOFTMAX  = 7,372,800 bytes             7200     7.03              3.52
40000  SOFTMAX (100% Q0) = 10,240,000 bytes   10000    9.77 / 9.86       4.88 / 4.93

Per core:            UADP 2.0 XL   UADP 2.0
EGRESS               10            5
STACK                3.25          1.25
Subtotal             13.25 (MB)    6.25 (MB)
INGRESS              1.25          1
HOLDING              1.5           0.75
Subtotal             2.75 (MB)     1.75 (MB)
TOTAL                16 (MB)       8 (MB)



Testing with the 9300B platform confirms that ONLY the SoftMax multiplier setting (step 1
above) should be required in 99% of expected workflows; the additional commands listed
above in steps 2 and 3 are unlikely to be required.

1.8 Network Interface Cards


This will not be a large subsection, as the needs of NEXIS differ significantly from those of
ISIS 7500. The NIC implementation for NEXIS does not have to be as capable, because
NEXIS is not sending large, fragmented datagrams as we did with ISIS 7500, which
mandated the requirement for "server class" NICs. But there are always special cases that
might need something a little different. Additionally, the needs of virtual machines, blade
servers and fabrics bring new challenges to the "old order".

1.8.1 Single-mode vs. Multi-mode


There are lots of articles on the web about the difference between Single-mode and Multi-
mode fibre, so this large topic will not be covered in any detail. Suffice to say that Single-
mode is more expensive and more capable of greater distance and speed, but Multi-mode is
sufficient for most tasks of less than 100 metres.

However, many large data centres are now deploying single-mode only, to have better
longevity of investment and "protection" for future technologies. But most ACR in the
media world still use Multi-mode within a building and Single-mode between buildings.

I have not yet seen a native Mono-mode/Single-mode 1G optical NIC, and I didn't
understand why one would be needed, but there are always opportunities to learn. Based on
traditional deployments of Avid storage, the Intel I350F would be the natural choice but is
MMF, and as the URL below indicates it is "Intel inside", so I see no issue:
http://www.silicom-usa.com/pr/server-adapters/networking-adapters/gigabit-ethernet-networking-server-adapters/pe2g2sfpi35-server-adapter/

I thank a customer for bringing this NIC to our attention; an SFP-based NIC of this calibre is
a good find. Looking elsewhere on their site, they have some really interesting products.

1.8.2 USB-C NIC Information


Here are the USB-C to 1 Gb Ethernet devices tested on the Windows based laptops most
recently [MAR 2018]:

Dell DA-200
Dell DBQBCBC064
Belkin F2CU040

These were tested using read, write, and mixed workflows with ABU, PATHDIAG, and
Media Composer. As mentioned, there was an intermittent issue, with ABU only, that turned
out to have nothing to do with the adapters. It had to do with the retry logic in the reporting
back from the benchmark agent to the system running ABU.

There is a work-around for this issue until we get a fix, but it requires the presence of a 2nd
NIC to dedicate as the ABU reporting NIC. This is more of a corner case for test purposes.

1.8.3 NICs in a VM environment – applies to bare metal too


Hardware for virtual machines causes an interesting challenge for Avid applications, on many
levels. When Avid sold the server and client hardware alongside the application, we could
control the choice of CPU and NIC etc., and that was reasonably important in ISIS 7500/5500
deployments, which had "special needs": the requirement for specific NICs was driven by
the use of fragmented UDP data transport to 1G clients, which negated the need to use TCP
offload cards, which when ISIS 7500 was designed were approx. $1000 each (a good
idea in 2005 when UNITY ISIS was launched). With ISIS 10G clients (from 2010, using the
Myricom NIC) TCP was used as the primary transport method at 10G, and then from ISIS
V4.7 it was used at 1G on ISIS 5500 too.

With the advent of NEXIS, the "special needs" are largely eliminated for both 10G and 1G
clients. The NEXIS platform uses TCP for ALL primary data transport and only a little bit of
fragmented UDP (small fragments) for our "OTT" signalling protocols. So, in a NEXIS
world, most NICs will work fine, but some will be better than others in the performance
stakes. Also, NEXIS clients can survive with less edge buffering, and even a low
percentage of packet discard will not cause frame drop, because we use TCP with FRR
instead of fragmented UDP as with ISIS 7x00.

Note: This principle can also be applied to NLEs; most Enterprise-class
NICs will operate successfully and satisfy most workflows, but some will
perform better than others in high demand workflows.

Note: The NIC is of much less importance with NEXIS than with ISIS. Some
NICs will always perform better than others depending on platform
specifics, but maximum performance is not the only criterion; good
performance with stability is key. Also consider that virtualization abstracts
the REAL NIC too.

Avid chose the Myricom 10G NIC for use with ISIS in 2010 (to replace the L2-
only Chelsio card) for 2 reasons:
1. It worked with ISIS 7x00 (there was no 5x00/2x00 at the time).
2. It had drivers for Windows and MAC (Linux at the time was not a
consideration)… something which Intel 10G NICs did not have, even
though Intel was favoured for 1G operation with ISIS.

Hence it was a "good" choice for high bandwidth applications such as
COPY/MOVE server, and UHRC NLE (Windows and MAC).

In the early 2010s Broadcom improved its NIC offerings for servers and
workstations, and Avid started to use the BCM57800 10G NIC in some
servers, as the Interplay engineering and the Storage engineering teams
started to use different criteria, partly due to the move away from AS3000
servers to Dell R630 in 2015; also, for some time many customers had been
purchasing HP DL360.

When virtualisation is added to the mix the brand becomes even less
relevant, as the virtual NIC is abstracted from the REAL NIC; then on top of
virtualisation we have varying levels of convergence.

While the Myricom 10G NIC was the "default" choice due to the legacy
elements explained above, it was never selected because of special
characteristics for Interplay/MediaCentral applications.

That "historical ability" to control 3rd party hardware is largely removed as we move into a
virtual machine world (and this could mean Cloud too, where there is ZERO ability to specify
specific NIC vendors): the ability to control hardware is massively reduced, plus the adapter
presented is a virtual NIC anyway, regardless of the real hardware, which is being shared
with other VMs and possibly non-Avid applications. Also consider that some of the NICs
recommended by Avid in the past when working on "known tin" (e.g. HP DL360 or Dell
R630) do not work well in a virtual environment.

Avid can only test a limited number of variations, mainly of key products it might sell or
those encountered on extremely large projects, but the silicon at the heart of most NICs is
made by a small number of providers. Yes, there will always be specialists and innovators at
the leading edge of technology, but the bulk of NICs come from the few "foundries". Also,
many of the server manufacturers re-brand (badge-engineer) well-known NIC vendors'
products, although they use different model numbers for the same basic chipset, and they are
not completely open about such facts unless one goes digging for the specifics.

The primary vendors of server-class NICs should all do a good job for Avid applications that
interface with Avid NEXIS storage. The extract from Interplay documentation below
suggests that QLogic (Broadcom) and Mellanox are acceptable, but some older adapters
should be avoided. This does not mean ALL Intel NICs should be avoided; the X520 is a
particularly old one, and Avid would probably not attempt the later X710, as this does not
come as standard in the server hardware for the VM-based solutions that we have worked
with.

From Avid Interplay | Production V2017.2 Documentation

Interplay® | Production Virtual Environment with VMware®


Best Practices Guide Version 3.3 and later December, 2017
http://resources.avid.com/SupportFiles/attach/Interplay_Virtualization_Best_Practices.pdf

Network Adapters
The following network adapters are qualified and recommended:
• QLogic 57800, 57810, 578x0 (includes the 57840), QLE3442 (not for
iSCSI), QLE8442
• Mellanox ConnectX 3



• Cisco UCS VIC 1340 (or 13x0) plus UCS 2208XP Fabric Extenders

The following network adapters are not supported:


• Intel X520
• Myricom Myri-10G

The Avid NEXIS 2018.3 README states the client is supported with VMware ESXi v6.0.0
(Update 1) using a VMXNET3 adapter with the Mellanox ConnectX-3 adapter and the
Mellanox ESX OFED driver version 1.9.10.2 or later.
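As a hedged aside (generic esxcli commands, not from the Avid documentation): the physical NIC model and driver version on an ESXi host can be confirmed with:

esxcli network nic list
esxcli network nic get -n vmnic0

The second command reports the driver name and version for the named vmnic, which can be compared against the OFED driver version called out above.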

Also consider that the NEXIS controller uses a Mellanox ConnectX-3 adapter, so I would not
expect any conflict with the Mellanox ConnectX-4, ConnectX-5 (or later series) that you
might find in a server used for virtual machine hosting.

The goal posts are forever moving: QLogic 57800 was a 10G NIC architecture, but I cannot
find details of a 40G product in this series. There is, however, the QLogic FastLinQ
QL4541xHLCU 40GbE Intelligent Ethernet Adapter. But 40G is an aging technology fast
being replaced by 25/50/100G products, and there are new adapters from QLogic:
QLogic QL45000 25Gb and 50Gb Flex Ethernet Adapters (QL45214, QL45212 and
QL45262)
and also the QLogic FastLinQ QL45611HLCU Single-Port 100G Intelligent Ethernet Adapter.

Plus new "names" enter the fray: on August 16, 2016, Cavium™ completed its acquisition of
QLogic.

In April 2021 I was advised of a successful deployment of a Mellanox ConnectX-5 100G NIC
in a Dell VM server hosting virtual Media Composers.

While this section (and Avid) cannot give "carte blanche" for any NIC vendor to be used,
we can look at product families that have worked well before and consider their evolutions in
a favourable light, until there is hard evidence of something that is broken and/or will not
work that can be documented. If in doubt, contact Avid and we will do our best to at least
provide a judgement call on the suggested devices.

1.8.4 10GBASET Transceivers


While 10GBASE-T (or 10GBASET) has existed as a standard since 2006, there are certain
challenges with deployment when using plug-in transceivers; this is due to the limited power
budget in the SFP enclosure, which does not allow a 100m connection. This means that plug-
in 10GBASE-T transceivers are limited to 30m, not the full 100m of the published standards.
Many switch vendors do not supply a standard 10GBASE-T part, but there are 3rd party
suppliers that do supply parts and also offer "guaranteed compatibility" with several vendors.

Some possible suppliers are given below (price & quality will vary):

https://www.flexoptix.net/en/

https://portal.prooptix.se/en/sfp/3805-sfp-10g-t-p-cisco-sfp

www.prolabs.com



https://www.fs.com/uk/products/66612.html

www.10GTek.com (also available on Amazon)

www.startech.com

www.switchsfp.com

1.8.4.1 Nexus 93180 YC-EX SWITCH using SFP 10GBaseT


In a project engagement in NOV 2018, 10G Copper RJ45 transceivers from 10Gtek were
used to connect with a Broadcom BCM57412 LoM implementation in Dell servers. These
were later removed and replaced by Cisco 1G SFP transceivers as add-in SFP28
BCM57412-based cards were deployed. Broadcom 57412 25G/10G NICs are not an officially
approved part but exceed the capabilities of the approved BCM57800-based 10G NIC.

They show up as type is 10Gbase-SR, name is OEM, as shown below, even though they
are not 10GBase-SR but 10GBaseT.
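(The output below was presumably captured with a command along the lines of show interface ethernet 1/13 transceiver details on the Nexus switch.)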
Ethernet1/13
transceiver is present
type is 10Gbase-SR
name is OEM
part number is SFP-10G-SR
revision is 02
serial number is CSSSRI90012
nominal bitrate is 10300 MBit/sec
Link length supported for 50/125um OM2 fiber is 82 m
Link length supported for 62.5/125um fiber is 26 m
Link length supported for 50/125um OM3 fiber is 300 m
cisco id is 3
cisco extended id number is 4



There was no time to really test these devices, as SFP+-based 10G optical NICs were
provided on 21 NOV 18; however, there appeared to be no performance issues.
Reliable information from the switch with regard to interface use/errors was not available,
because there were frequent changes and no real data tests were done with these transceivers
and the copper integrated NIC ports.

1.8.5 I350-T2 NIC Eol – Replaced by I350T2V2


ORIGINALLY section 1.6.7.1 in NETREQS V1.x

The Intel I350T2 NIC (launch date Q2-2011) has been revised; the replacement, the
I350-T2V2, was launched Q3-2014, but the core controller is still the I350.

Reason for Revision: Incorrect Replacement MM#


The Intel® Ethernet Server Adapter I350-T2 and Intel® Ethernet Server Adapter
I350-T4 will undergo the following changes:
1. Existing Product is being EOLed to be replaced with new versions which have the
AUX Power component changes that will result in a decrease of in-rush current
during power supply start-up. Product functionality does not change.

https://qdms.intel.com/dm/i.aspx/B3EAA66A-ED91-4822-AE37-
29781EC0930D/PCN113232-01.pdf

Date of Publication: September 24, 2014


Key Characteristics of the Change: Product Discontinuance
Forecasted Key Milestones:
Last Product Discontinuance Order Date: Feb 14, 2015
Last Product Discontinuance Shipment Date: Aug 14, 2015
http://ark.intel.com/products/59062/Intel-Ethernet-Server-Adapter-I350-T2
http://ark.intel.com/products/84804/Intel-Ethernet-Server-Adapter-I350-T2V2
[ LINKS ACTIVE MAY 2019]

1.9 Airspeed 5500 (3rd GENERATION) network connection


With the changes in the connectivity standards of the newest generation of Airspeed 5500,
now with Windows 10, we are encouraging installations to simplify by using only the 10G
interfaces. The reason is that we are unable to support the i217 built-in Ethernet port, as it
causes system stability issues, while the i210 built-in Ethernet port works flawlessly. This
presents a problem for clients interested in dual connections, as they are required to use at
least one of the 10G ports on the system. Given this, we suggest the low-cost SFP+ 1G
media adaptor from Finisar be used for both 10G ports, abandoning the single i210 network
adaptor in favour of higher performance network interfaces.

Avid tested the native 10G interface link (with NEXIS storage, but not the EoL ISIS 7500 or ISIS 5500) with no issues, in both dual-link AvidFOS configuration and with teaming enabled for link fault tolerance. Below you will find the cable selection that we are able to support. Additional cables and adaptors are also possible; however, Avid would be unable to provide troubleshooting on additional interface modules.

Twinax for 10GbE



Dell Direct Attach 10G Cable, Copper 10GbE SFP+ twinax cable, 1 meter (AVID PART NUMBER 7070-30615-01)
Dell Direct Attach 10G Cable, Copper 10GbE SFP+ twinax cable, 3 meter (AVID PART NUMBER 7070-30615-03)
Cisco 10G SFP+ direct attach cable (twinax), 1 meter (AVID PART NUMBER 7070-30358-01)
Cisco 10G SFP+ direct attach cable (twinax), 3 meter (AVID PART NUMBER 7070-30358-03)
Cisco 10G SFP+ direct attach cable (twinax), 5 meter (AVID PART NUMBER 7070-30358-05)

SFP+ 10 GbE optical transceivers.


Mellanox SFP+ 10 GbE SR optical transceiver. 10GbE Ethernet SFP+ LC, 300m range, SR 850nm wavelength (AVID PART NUMBER 9900-65632-00)
Mellanox SFP+ 10 GbE LR optical 10Gbps transceiver. 10GbE Ethernet SFP+ LC, 10km range, LR 1310nm wavelength (AVID PART NUMBER 9900-65652-00)
JDSU plrxpl-sc-s43-22-n – 10GBase SR LC, 300 meters, 850 nm SFP+ transceiver < Tested

SFP+ 1GbE media adaptors


Finisar's FCLF8520P2BTL, FCLF8521P2BTL and FCLF8522P2BTL 1000BASE-T Copper < Tested

FINISAR DATA SHEET


https://www.finisar.com/sites/default/files/downloads/finisar_fclf852xp2btl_1000base-t_rohs_compliant_copper_sfp_transceiver_product_specification_0.pdf

Finisar's FCLF8520P2BTL, FCLF8521P2BTL and FCLF8522P2BTL 1000BASE-T Copper Small Form Pluggable (SFP) transceivers are based on the SFP Multi Source Agreement (MSA). They are compatible with the Gigabit Ethernet and 1000BASE-T standards as specified in IEEE Std 802.3. The transceiver is RoHS compliant per Directive 2011/65/EU and Finisar Application Note AN-2038. The 1000BASE-T physical layer IC (PHY) can be accessed via I2C, allowing access to all PHY settings and features.

The FCLF8520P2BTL uses the SFP’s RX_LOS pin for link indication, and
1000BASE-X auto-negotiation should be disabled on the host system. The
FCLF8521P2BTL is compatible with 1000BASE-X auto-negotiation, but does not
have a link indication feature (RX_LOS is internally grounded). See AN-2036,
“Frequently Asked Questions Regarding Finisar’s 1000BASE-T SFPs”, for a more
complete explanation on the differences between the two models and details on
applications issues for the products. The FCLF8522 shall support both RX_LOS pin
for link indication and 1000BASE-X auto-negotiation.

A typical price for this device is approximately USD $100, depending on purchased quantity (at the time of writing this section in MAY 2018).

1.10 NAT/PAT Incompatibility


Not many folks have attempted this; those that have tried failed to get it working successfully.



The NEXIS client is not designed to be used with Network Address Translation or Port
Address Translation (as at OCT 2019), and as currently deployed the NEXIS solution is
fundamentally incompatible with such techniques.

Recent encounters (Q3/2019) with a Linux VM based solution that tried to do this have brought this issue to the fore.

The use of NAT/PAT/NPAT was never supported in any ISIS 7x00/5x00/2x00 solution that preceded NEXIS.

1.10.1 Kubernetes – the “core” of the “network” problem with MCCUX


The particular MediaCentral Cloud UX issue with NEXIS (the site was using FS 2019.6, but that is incidental to the issue) that caused the hardship on this VM based deployment was down to a bit of bad luck in the choice of IP subnets.

The problem was that the NEXIS addresses ("172.19.126.10;172.19.126.12") are in the default Kubernetes virtual network range (172.19.0.1/16).

During installation, it is necessary to define a Kubernetes network range which does not conflict with any other network which provides servers/services needed by Cloud UX. The Kubernetes network range can be defined during the initial/first installation as part of the "avidctl platform host-setup" step. Changing it at a later point is in principle possible but difficult.

For example, it was not possible to fix this particular test system; a new installation was required.

Here is internal documentation of how to change the Kubernetes network after installation:
https://avid-ondemand.atlassian.net/wiki/spaces/PFENG/pages/1679425701/Platform+Service+Host+Troubleshooting+-+Change+Kubernetes+Internal+Network

Hence, had the customer chosen 10.x.y.z, 192.168.y.z, or any 172.16.0.0/12 range (except within 172.19.0.0/16), this problem would not have occurred.

This problem manifested itself in such a way as to "blame" the network path, because packets were getting "dropped", but it was not the "fault" of the network switches. Yes, it is a network path issue, but in the "last metre"; other Windows VMs worked fine. Below are some finer details taken from Avid internal systems that add the necessary "colour" to the challenge encountered by CS & Engineering in diagnosing the customer issue.

NEXIS Linux client connects to System Director and has functioning file metadata
services, but TCP data transfer to and from the Storage Managers fails. Small reads
and writes (which are piggybacked as “immediates” in the UDP control packets)
succeed correctly.



The problem here is address translation (after all), albeit in a strict and subtle sense:
Addresses are not getting translated, but TCP ports are. NEXIS does not function with
address translation (even if only for ports).

A log snippet exemplifies the issue. The client commands, and successfully makes, a connection on (it thinks) its TCP port 50093, but port translation encapsulated beneath the NEXIS components causes the actual connection to be made on TCP port 12476. The client then sends a read request to the server, specifying target port 50093, the one upon which it had just commanded the connection, but the server cannot find that (address, port) tuple in its destination table, so it rejects the request with an FSTATUS_NO_CONNECTION error.
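As a diagnostic aside, a packet capture on the client can make this kind of port rewriting visible. A minimal sketch (the interface name is illustrative; the ports are taken from the example above):

# Watch for a mismatch between the ports the client believes it is using
# and the ports actually seen on the wire
tcpdump -nn -i eth0 'tcp port 50093 or tcp port 12476'

Any mismatch between the ports reported by the NEXIS client and those seen in the capture indicates translation somewhere in the path.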

Another network range to avoid, according to MCCUX documentation, is below.

From the MCCUX Install Guide (page number depends on doc version):


Kubernetes Networking Options. During the deployment process, the script defines
two internal networks that are used by Kubernetes for internal communication:
- Kubernetes Service Network: The default range of this network is 10.254.1.0/24.
- Kubernetes Pod Network: The default range of this network is 172.19.0.1/16.

RECENT FIELD INFORMATION (DEC 2020) also suggests:


- Docker0 172.17.0.0/16 *(not documented)

Thankfully 10.254.x.x is a rarely used range, which when used is likely to be for point-to-
point /32 links. Still one to be wary of when “unexplained” problems occur.

FIELD KNOWLEDGE: In JAN 2021 there was another "incident" resulting in IP conflicts with an MCCUX server. A customer was using 172.17.248.0/22 as the range for their incoming VPN devices into the network range 10.61.191.0/24 used for MediaCentral. The MediaCentral Production elements could communicate successfully with the VPN clients, but the VPN clients could not communicate with the MCCUX server. The resolution was to change the internal IP ranges used within the MCCUX Docker/Kubernetes environment.

This URL may help find out what is in use.


https://www.lullabot.com/articles/fixing-docker-and-vpn-ip-address-conflicts
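As a quick first check on a suspect host, commands such as the following (a minimal sketch; network names and ranges will vary per site) show which ranges Docker and Kubernetes have claimed locally:

# List local routes, which will include docker0 and any Kubernetes pod/service ranges
ip route show
# Show the subnet assigned to the default Docker bridge network
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'

Any overlap between these ranges and the NEXIS, VPN or MediaCentral subnets is a warning sign.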

1.10.2 MCCUX “Kubernetes” – NO LONGER USING MULTICAST

Media Central Cloud UX Server Kubernetes does not use multicast. Keepalived is configured
to use unicast by default (multicast configurations are not supported).
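For illustration only (MCCUX generates its own configuration), a unicast keepalived VRRP instance looks something like the sketch below; all addresses and names are hypothetical:

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    unicast_src_ip 10.0.0.1      # this node
    unicast_peer {
        10.0.0.2                 # peer node(s), addressed directly instead of via multicast
    }
    virtual_ipaddress {
        10.0.0.100               # the cluster virtual IP
    }
}

The unicast_peer block is what removes the dependency on multicast in the attached network.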

Media Central Cloud UX V2018.1 initial release AUG 2018.



2.0 NETWORK DESIGNS FOR NEXIS
There are no qualified network designs in the “Avid® NEXIS® Network and Switch Guide
July 2017”.

Most designs that work for ISIS 7x00 will work for NEXIS. Some designs for ISIS 5x00 will work for NEXIS. Other network designs need to be approved. Avid network consultants have experience of many successful network deployments, plus they have learned some valuable lessons from deployments which encountered a few challenges.

This section provides basic detail on known successful deployments. Some of these products may not yet have achieved formal approval but might have been tested as part of customer-funded testing.

The designs may not show the precise type or quantity of NEXIS engines; this is deliberate, to decrease the complexity of the drawings.

There are some vanilla configs available at the same URL as this document. They feature some Visio diagrams and vPC-enabled config files for Nexus 5600 and Nexus 9300, and a VSS config for C4500X.

2.1 Reference designs


At the same URL as this document there are some block design concepts; these are deliberately not product specific. It should be possible to use the formally approved products in these roles as shown below. Single switch designs will not be shown. Below is an example complex design with qualified switches.



Figure 1 – Reference design for NEXIS with DELL

2.2 Block based designs


Most network solutions with resilience for NEXIS follow a common design concept, but they
will vary in scale. The diagrams below in this section give some suggestions. Sometimes it is
best to think of these necessary elements as Lego; some blocks are bigger than others, and
some smaller blocks may be used repeatedly, for example there might be 8 edge switches and
two core switches.

These block designs can be used with any of the qualified, approved or architecturally capable switches in the core and edge combinations. There may be 6 engines, not 2 as shown; there may be 4 edge switches, not 2 as shown. Think of the diagrams below as foundational building blocks, not limited designs.

There is no consideration for routed uplinks in the block-based designs below.



2.2.1 The Traditional/Legacy way
Two core switches, FHRP, Spanning tree

[Figure: storage engines connect via LACP to a pair of core switches running an FHRP, with RSTP between the core and edge switches]

Figure 2 – Traditional Block design for NEXIS

Note: HSRP style C4500X implementations deployed for use with ISIS
7x00 offer only partial resilience for use with NEXIS; for full resilience
with NEXIS, a VSS must be used, and this will require downtime to
convert from HSRP to VSS. It is probably better to deploy a new MLAG
capable pair of switches. Avid PS NETWORK CONSULTANCY
RECOMMENDED.



2.2.2 MLAG/vPC/VSS Single Controller
Two core switches… So what's changed? MLAG/vPC/VSS/VLT with LACP instead of STP. The basic connectivity concept is little changed.
When the aggregate connection is distributed across two switches, they must be a coordinated MLAG pair; hence any of the techniques found in the URL below should be viable. Explicit Avid internal testing has only been done with Cisco vPC, Cisco VSS and Dell VLT. Other vendor deployments have been successful on Juniper and Arista.
https://en.wikipedia.org/wiki/MC-LAG

MLAG designs should use a minimum of 2 x 40G (ideally TWINAX) PEER CONNECTION (for C4500X-VSS, 4 x 10G) and a resilient PEER KEEPALIVE (dual 1G, or if dual 1G is not available, dual 10G TWINAX).

[Figure: storage engines connect via LACP across an MLAG/vPC/VSS core switch pair; edge switches uplink via LACP]

Figure 3 – MLAG/VSS Block design for NEXIS
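As a minimal sketch of the Cisco vPC variant of this block (NX-OS; the domain ID, addresses and port-channel numbers are illustrative, not a recommended deployment config):

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
!
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
interface port-channel20
  switchport mode trunk
  vpc 20
  ! member ports on each peer switch join with: channel-group 20 mode active

The same structure is repeated on the second peer with the keepalive addresses reversed; the storage engine then sees one LACP bundle spanning both switches.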



2.2.3 MLAG/vPC/VSS DUAL CONTROLLER
More resilience: dual controllers with LACP. The basic connectivity concept is little changed.

MLAG designs should use a minimum of 2 x 40G (ideally TWINAX) PEER CONNECTION (for C4500X-VSS, 4 x 10G) and a resilient PEER KEEPALIVE (dual 1G, or if dual 1G is not available, dual 10G TWINAX).

[Figure: dual-controller storage engines connect via LACP across an MLAG/vPC/VSS core switch pair; edge switches uplink via LACP]

Figure 4 – MLAG/VSS Block design for NEXIS with dual controllers



2.2.4 STACKED SWITCH with DUAL CONTROLLER
Stacked switch - VSS is similar
Switches appear as “modules”
SW1 T1/1 >> T1/1/1
SW2 T1/1 >> T2/1/1
Basic connectivity concept is little changed

[Figure: dual-controller storage engines connect via MLAG LACP to a stacked core switch pair; edge switches uplink via LACP]

Figure 5 – Stacked Block design for NEXIS with dual controllers

2.3 CISCO - Custom/Deployed designs


This section provides detail of successful deployments that have been implemented with
some element of funded testing attached, using Cisco products.



2.3.1 Cisco Nexus 5600 Based

2.3.1.1 Cisco Nexus 5672 with Nexus 2248TPE FEX


This is an approved/deployed design

Figure 6 – Custom design for NEXIS with Cisco Nexus 5600

2.3.1.2 Cisco Nexus 5672 with Nexus 2348UPQ FEX and N2248TPE
This is an approved/deployed design. This solution is similar to the above and hosts both NEXIS and ISIS 7500. The Nexus 5672 connects to the ISIS 7500. The subordinate Nexus 2348UPQ FEX connects to the 10G NEXIS E4 Engines and also hosts a VM farm for Interplay. The subordinate Nexus 2248TPE is used to connect NLE clients. The FEXs uplink to the parent switch, i.e. the N5672.

Figure 7 – Custom design for NEXIS and ISIS 7500 with Cisco Nexus 5600



2.3.2 Cisco Nexus 9500 Cases

2.3.2.1 Cisco Nexus 9500 with Nexus N93108TC

[Figure: Cisco Nexus 9500 fabric (N9K-C9504-FM-E) with N9K-X97160YC-EX and N9K-X9732C-EX I/O modules]

Figure 8 – Custom design for NEXIS with Cisco Nexus 9500/93000



2.3.2.2 Cisco Nexus 9500 EX with Nexus N93180 YC EX

APRIL 2018
Server edge (Layer 2): N93180-YC-EX
Core (Layer 3): N9500 EX (N9K-C9508-FM-E fabric module, N9K-X9736C-EX line card)
Client edge (Layer 2): N93180-YC-EX

The solution performed as expected in terms of supporting the video load with no errors.
Correct operation was observed with ISIS 7500 storage and clients for video and PATHDIAG
Correct operation was observed with ISIS 5500 storage and clients for video and PATHDIAG
Correct operation was observed with NEXIS storage and clients for video and PATHDIAG

This testing was done with a small number of clients and storage engines so does not
represent full load testing, but does reflect operational capability.

2.3.2.3 Cisco Nexus 9500 R with Nexus N93180 YC EX


APRIL 2018
Server edge (Layer 2): N93180-YC-EX
Core (Layer 3): N9500 R (N9K-C9508-FM-R fabric module, N9K-X9636C-R line card)
Client edge (Layer 2): N93180-YC-EX
The solution performed as expected in terms of supporting the video load with no errors.
Correct operation was NOT OBSERVED with ISIS 7500 storage and clients for video but was
observed for PATHDIAG
Correct operation was observed with ISIS 5500 storage and clients for video and PATHDIAG
Correct operation was observed with NEXIS storage and clients for video and PATHDIAG

This testing was done with a small number of clients and storage engines so does not
represent full load testing, but does reflect operational capability.



2.3.3 Cisco Nexus 93000 Cases

2.3.3.1 Cisco Nexus 9336C with Nexus N93108TC and/or 9348GC

Figure 9 – Custom design for NEXIS with Cisco Nexus 9336C-FX2



2.4 JUNIPER - Custom/Deployed designs
This section provides detail of successful deployments that have been implemented with
some element of funded testing attached, using Juniper products.
2.4.1 QFX 10008 and QFX5100
This testing was conducted in SEPTEMBER 2017, with the QFX10008 as the core switch, the QFX5100-24Q as the server edge, and the QFX5100-48T as the client edge switch.

Note the QFX5100 is a Trident 2 based device and was only tested as a Layer 2 switch.

Figure 10 – Custom design for NEXIS E5 with Juniper



2.4.2 Juniper Buffer limitation with Trident 2 Merchant Silicon
This family underwent customer testing in SEP 2017 as a layer 2 switch; all Layer 3 functions were performed by an upstream QFX10000 switch. However, it is expected that it would perform in a similar manner to other approved Trident 2 based switches.

The Juniper QFX5100-48T (this also applies to the QFX5100-48S) as a layer 2 switch is suitable for use with Avid NEXIS edge clients. By default it will drop packets, but the NEXIS clients survive this. The buffers can be configured so as not to drop packets, and this is recommended as the preferred configuration style. This switch was not tested as a layer 3 device with NEXIS storage directly connected, but it is expected that it would perform in a similar manner to other approved Trident 2 based switches.

The Juniper QFX5100-48T (this also applies to the QFX5100-48S) as a layer 2 switch is suitable for use with Avid ISIS 7500 edge clients only when the enhanced buffer configuration (described elsewhere in this document) is used to prevent packet dropping. This device was not tested as a layer 3 device with ISIS 7500 storage directly connected, but it is expected that it would perform in a similar manner to other approved Trident 2 based switches.



The Juniper QFX5100-24Q (as a stacked switch) was used to connect the NEXIS engines at 40G, as a layer 2 switch, and is considered suitable for this task. The edge buffering was not changed from default, as the path toward the 40G engines would normally be oversubscribed, but deployment of the enhanced buffer command should not have a detrimental effect. This switch was not tested as a layer 3 device with NEXIS storage directly connected, but it is expected that it would perform in a similar manner to other approved Trident 2 based switches.

The Juniper QFX5100-24Q (as a stacked switch) was used to connect the ISIS 7500 engines at 10G, via 40G-10G breakout, as a layer 2 switch, and is considered suitable for this task. The edge buffering was not changed from default, as the path toward the 40G engines would normally be oversubscribed, but deployment of the enhanced buffer command should not have a detrimental effect. This device was not tested as a layer 3 device with ISIS 7500 storage directly connected, but it is expected that it would perform in a similar manner to other approved Trident 2 based switches.

2.4.2.1 Juniper QFX-5100 Configuration statements used for setting the egress shared buffer:

Setting the percentage of available (non-reserved) buffers used for the egress global shared
buffer pool -
#set class-of-service shared-buffer egress percent 100

Configuring the shared buffer for the egress based on the recommended values for unicast
traffic–
#set class-of-service shared-buffer egress buffer-partition lossless percent 5
#set class-of-service shared-buffer egress buffer-partition lossy percent 75
#set class-of-service shared-buffer egress buffer-partition multicast percent 20

root@str-er1-ss12> show configuration class-of-service


shared-buffer {
egress {
percent 100;
buffer-partition lossless {
percent 5;
}
buffer-partition lossy {
percent 75;
}
buffer-partition multicast {
percent 20;
}
}
}

Commands to deactivate the shared buffer configuration:


#deactivate class-of-service shared-buffer egress percent 100
#deactivate class-of-service shared-buffer egress buffer-partition lossless percent 5
#deactivate class-of-service shared-buffer egress buffer-partition lossy percent 75
#deactivate class-of-service shared-buffer egress buffer-partition multicast percent 20

Commands to activate the shared buffer configuration:


#activate class-of-service shared-buffer egress percent 100
#activate class-of-service shared-buffer egress buffer-partition lossless percent 5
#activate class-of-service shared-buffer egress buffer-partition lossy percent 75
#activate class-of-service shared-buffer egress buffer-partition multicast percent 20
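To verify the applied allocation after a commit, an operational command such as the following should show the configured partitions (hedged: the exact output format varies by Junos version):

root@str-er1-ss12> show class-of-service shared-buffer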

2.4.3 QFX 10008 and QFX5110


APRIL 2018
Server edge (Layer 2): QFX5110-48S-FSI (Trident 2+)
Core (Layer 3): QFX10008 with QFX10000-30C 10/40GE deep-buffer line card
Client edge (Layer 2): QFX5110-48S-FSI (Trident 2+)

The solution performed as expected in terms of supporting the video load with no errors.
Correct operation was observed with ISIS 7500 storage and clients for video and PATHDIAG
Correct operation was observed with ISIS 5500 storage and clients for video and PATHDIAG
Correct operation was observed with NEXIS storage and clients for video and PATHDIAG

This testing was done with a small number of clients and storage engines so does not
represent full load testing, but does reflect operational capability.

It was possible to create packet drop with default settings for buffering with BOTH ISIS 7500 and NEXIS.
It was possible to prevent packet drop with enhanced settings for buffering, within the limitations of the testing resources. An enhanced buffer setting, as described in section 2.4.2, should be used for a deployment, with a percentage split between LOSSY/LOSSLESS suitable for all applications used in the desired deployment.

2.5 ARISTA - Custom/Deployed designs


This section provides detail of successful deployments that have been implemented with
some element of funded testing attached, using ARISTA products.
2.5.1 ARISTA 7500R and 7280R

TESTING MARCH 2018
Server edge (Layer 2): 7280R – leaves = 2x DCS-7280SR2A-48YC6, version 4.19.3F
Core (Layer 3): 7500R – spines = 2x DCS-7504N, version 4.19.3F; line cards = 2x 7500R-36CQ-LC per spine
Client edge (Layer 2): 7280R – leaves = 2x DCS-7280SR2A-48YC6, version 4.19.3F



The solution performed as expected in terms of supporting the video load with no errors.
Correct operation was observed with ISIS 7500 storage and clients for video and PATHDIAG
Correct operation was observed with ISIS 5500 storage and clients for video and PATHDIAG
Correct operation was observed with NEXIS storage and clients for video and PATHDIAG

This testing was done with a small number of clients and storage engines so does not
represent full load testing but does reflect operational capability.

2.5.2 ARISTA - PROVEN DEPLOYMENTS WITH NEXIS

2.5.2.1 SPINE/LEAF

USA, 2019:
  Spine: 7500R (Jericho)
  Leaves: 7280QR-C36 (Jericho) for STORAGE and VM; 7280SR-48C6 (QumranMX) for BARE METAL SERVERS and NLE

AFRICA, 2019:
  Spine: 7280CR2A-30 (Jericho+)
  Leaves: 7280SR-48C6 (QumranMX) for VM & BARE METAL SERVERS; 7280TR-48C6 (QumranMX) for NLE

APAC, 2020:
  Spine: 7500R2 (Jericho+)
  Leaves: 7280QR-C36 (Jericho) for STORAGE and VM; 7280SR-48C6 (QumranMX) for BARE METAL SERVERS; 7280TR-48C6 (QumranMX) for NLE

2.5.2.2 MULTITIER

EUROPE, 2020:
  Core: DCS-7280QR-C36-F (Jericho)
  Edge: EXISTING NETWORK

EUROPE, 2020:
  Core: Arista 7050CX3 (Trident3)
  Edge: EXISTING NETWORK

LATAM, 2021:
  Core: DCS-7280QR-C36-F (Jericho)
  Edge: DCS-7280SR-48C6-F (QumranMX) for SERVERS; DCS-7020TR-48-F (QumranAX) for NLE
  (All devices using software image version 4.23.5M)



2.6 SPINE/LEAF - Custom/Deployed designs
This section provides detail of successful deployments that have been implemented; these are not vendor specific.



3.0 VIRTUAL MACHINES AND BLADE SERVERS
Beginning with Interplay V3.3 in July 2015, Avid began to support Virtual Machine based solutions on Dell R630 servers and HP DL360 Gen8 hardware platforms. Avid publishes a document, Interplay Virtualization Best Practices, which is updated in line with different Interplay versions as appropriate (see the Interplay documentation). The purpose of this section is to provide some additional information on differing hardware platforms that have undergone informal testing or have been deployed at larger customers.

One principle of virtual machines is to allow deployment on homogenous hardware platforms, with the necessary resources such as CPU, RAM, NIC, HDD, etc. However, this can "conflict" with some vendor-specific supported solutions, and those vendors, such as Avid, now have to adjust.

Sometimes the full-on performance of a tightly specified bare metal hardware platform cannot be matched by a blade server. In this situation, providing the solution still works sufficiently, the drawbacks of a lower overall performance may be outweighed by the benefits of a homogenous solution.

Some virtual machine hardware platforms can also support high-performance video adapters that used to be the domain of workstations; this allows centralized deployment alongside suitably advanced KVM platforms.

3.1 Cisco UCS


Cisco offers two UCS platforms: the C class is a bare metal format, while the B class is a blade format. With the blade format, it is also necessary to deploy the Virtual Interface Card (VIC), the Fabric Interconnect, and the FEX/IOM (Fabric Extender / Input Output Module); this effectively collapses much of the Access Layer into a single box, greatly reducing the number of external cables to the outside world by using converged backplane 10GBASE-KR and/or 40GBASE-KR connections. The C class bare metal format can use the VIC along with an external FEX, or use traditional NIC solutions.

The UCS products enable a converged environment where Ethernet, Fibre Channel and ILO are combined into a single adapter called a VIC, which can provide up to 256 virtual adapters.

The Fabric Interconnect solutions are evolutions of Nexus 1U switches, running different s/w to add new features. Some architectures that were not well suited to deployment with ISIS 7500 storage servers and 1G edge clients do not encounter the same challenges when deployed with 10G VMs and/or NEXIS storage servers.

The integrated FEX/IOM (Fabric Extender / Input Output Module) units are evolutions of the 1U Nexus 2000 solutions.

3.1.1 UCS with FI 6248


As part of a USA based customer proof of concept system in 2016, on a Cisco UCS loaner
platform, some comparison tests were done between a four-engine ISIS5x00 stack and the
following:

• AS3000/Myricom
• R630/QLOGIC 57840



• R730/QLOGIC 57810
• Cisco 5108 chassis/VIC1340/2208FEX/6248 Fabric Interconnect

There were also some limited ISIS 7x00 tests alongside the ISIS 5x00 tests. The short answer is that the Cisco configuration performed as well as the other configurations for a single VM on the UCS. When additional VMs were added to see if the result would scale, 2 VMs worked OK, but the write latencies increased when a 3rd VM was added. That said, the test bed was also hosting VMs for other Interplay development work, so those could have impacted the linearity of these scalability tests. The customer deployment with ISIS 7500 is in production without issue.

The FI 6248 platform is based on an enhanced version of the Nexus 5548 platform. The FEX 2208 is based on a re-packaged version of the Nexus 2248 platform.

This platform combination has not been exhaustively tested by Avid or received any official
sanction. However empirical evidence suggests that it is broadly suitable/field proven when
configured appropriately.

3.1.1.1 Media Central VM best practices


A Cisco UCS hardware platform has been tested with Media Central in a VM environment as
described in the extract below from the MediaCentral document:
MediaCentral Platform Services Virtual Environment with VMware® Best Practices Guide

The URL will vary with version, which is currently V2.9 at time of writing this section
(MAR 2017).
http://avid.force.com/pkb/articles/en_US/readme/Avid-MediaCentral-Version-2-9-x-Documentation

Avid used a Cisco UCS Blade Server as a validation system for the host server and
the vSphere cluster. Avid followed the VMware best practices for setting up the
validation environment.
• ESXi 6 installed on hard drives provided on the UCS blades
• Assigned Enterprise Plus license on all three blades.
The following table lists the technical details of the Cisco UCS 5108 Blade Server
used for VMware validation:

• Processor Intel Xeon E5-2697 v3


• Form factor Cisco UCS blade (B200 M4)
• Number of Processors 2
• Processor Base Frequency 2.6 GHz
• RAM 128 GB DDR4 RDIMM - ECC

Networking
Cisco UCS VIC 1340 with UCS 2208XP Fabric Extenders and dual UCS 6248UP
Fabric Interconnects (FI)
Connections:
• Eight connections between the Cisco 5108 and the FI’s.
• Four 10GbE connections between the FI’s (as a port channel group) and the
iSCSI VLAN of the parent switch (Dell S4810).



MediaCentral Server is a demanding application on CPU and memory.

3.1.2 UCS with FI 6332


The Cisco UCS B200-M4 with IOM 2304 and FI 6332 has been deployed by a major USA broadcaster for almost a year (written MAR 2017) and has been working fine. This configuration has the FI 6332s uplinked to a Nexus 7710 via a 4 x 40G trunk, and ISIS 7x00 also connected to the same N7710 core.

Another major USA broadcaster also has a PoC system that we installed a few months ago (written MAR 2017) running Interplay PAM and MAM on UCS B200-M4.

The FI 6332 platform is based on a version of the Nexus 9300 Trident 2 based switch platform (but without the ALE). The FEX 2304 is based on a re-packaged version of the Nexus 2348 platform.

This platform combination has not been exhaustively tested by Avid or received any official sanction. However, empirical evidence suggests that it is broadly suitable/field proven when configured appropriately.

The picture below shows an example deployed design.

4.0 WIDE AREA AND DATA CENTER NETWORKING
This section discusses some of the newer techniques used in modern data centres and how they might (or might not) affect NEXIS operation.

This document is not intended to educate the reader in these areas. There are many tutorials, blogs and whitepapers which are better placed to achieve such an objective.

4.1 MPLS
Multi Protocol Label Switching is essentially a high-speed WAN technology provided by Internet Service Providers, which has been adopted by many companies that present IT services internally on a Service Provider model.

From a simplistic point of view, MPLS is an encapsulation technology which adds 4 bytes to the packet for each encapsulation stage; generally there are 2 or 3 stages, so a maximum of 12 bytes is added. This does not decrease the original packet MTU, as the network path must be able to cope with this. At the final stage of de-encapsulation, the original packet is sent to the terminating device.
terminating device.

The new outer bytes allow a predetermined path to be taken with fast routing or switching of
the packet based on its label, rather than traditional hop-by-hop routing which is slower and
more CPU intensive.

For many years (since 2007) several broadcasters have used MPLS on their campus deployments, and on WAN circuits between regions, to transport ISIS 7x00 and 5x00 traffic and latterly NEXIS traffic.

This document is not intended to educate the reader in this area. There are many tutorials, blogs and whitepapers which are better placed to achieve such an objective.

4.2 Wavelength-division multiplexing


The principle of WDM is simple: use differing wavelengths along the same cable to get more transmission capacity. There is no problem using it for NEXIS or ISIS clients, if it is implemented properly; after all, one wavelength is as good as another when it comes to capacity. This is a layer one function that has no impact on buffering or throughput.

So, to the principle: instead of sending white light between devices, send red, green and blue separately and triple the capacity. In fact, it is all variations of red (dark, almost invisible-to-humans red). CWDM is generally 4-16 colours, and DWDM is generally 32-96 colours and a lot more expensive.

The normal MMF (850nm) or SMF (1310nm) transceivers are referred to as "grey" transceivers (but they are actually red too, as visible red is 620-750 nm), and extra-long-range devices use 1550nm.

https://en.wikipedia.org/wiki/Visible_spectrum
https://en.wikipedia.org/wiki/Small_form-factor_pluggable_transceiver
https://en.wikipedia.org/wiki/Wavelength-division_multiplexing



WDM devices can be electrical multiplexers or optical multiplexers. An electrical multiplexer takes a "grey" signal (which could be 850nm or 1310nm, for 10G or RJ45 Ethernet), converts it to an electrical signal and then to a specific colour, which is then transmitted onto the fibre and de-multiplexed at the receiver. An optical multiplexer requires that the light is already using one of the "special colours" before it enters the multiplexer, hence it is normal to use a transceiver that uses the particular wavelength from the outset. Usually this transceiver is fitted into the network switch or router; some transceivers are programmable for different wavelengths. Sometimes a transceiver will be used in a NIC.

For use with Avid storage solutions, the best method is to use an electrical multiplexer; or, if an optical multiplexer is deployed, then a transceiver in the switch is the better solution, not a transceiver in a NIC, because a transceiver in a NIC serves one device only, and also the Avid supported NICs may not work with "coloured" transceivers.

4.3 Software Defined Networking –SDN


Essentially SDN is a method of automation and programmatic control. The control plane is managed by an external device which tells the local device data plane what to do. As ever, it is the capability of the network edge devices that matters the most for Avid storage traffic (storage servers and storage clients). If the hardware is capable, and correctly configured by upstream devices, then the NEXIS (or ISIS) traffic passing through the data plane should not be impacted by the control plane separation.

This document is not intended to educate the reader in this area. There are many tutorials, blogs and whitepapers which are better placed to achieve such an objective.

4.4 VXLAN
VXLAN is an L2 overlay over an L3 network. Virtual Extensible LAN (VXLAN) uses a network virtualization overlay to mitigate the scalability problems in large data centers, which often feature Spine & Leaf designs and quickly reach the limitations of conventional VLANs. It is essentially an encapsulation, like MPLS, and borrows many techniques from MPLS (such as Q-in-Q) but applies them differently.

Each overlay network is known as a VXLAN Segment and is identified by a unique 24-bit segment ID called a VXLAN Network Identifier (VNI). Only virtual machines on the same VNI are allowed to communicate with each other, but virtual machines will/may be mobile around the network estate.

VXLAN encapsulation may begin within the VM, at the virtual switch, or, for real tin devices, at the physical switch. Essentially, just like MPLS, the encapsulation is removed before it gets to the end device, so packets should be exactly the same as when they began their journey.
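As an illustrative sketch of the VTEP concept using the Linux kernel VXLAN implementation (interface names, VNI and addresses are hypothetical):

# Create a VXLAN interface with VNI 100 over eth0, using the standard UDP port 4789
ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789 local 10.0.0.1 remote 10.0.0.2
ip link set vxlan100 up
# Frames entering vxlan100 gain the VXLAN/UDP header; the far-end VTEP strips it,
# so the inner frame arrives unchanged, as described above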

Some good tutorials are:


VxLAN | Part 1 - How VxLAN Works (part 1 of 6)

https://www.youtube.com/watch?v=YNqKDI_bnPM&list=PLDQaRcbiSnqFe6pyaSy-Hwj8XRFPgZ5h8

https://www.youtube.com/watch?v=YNqKDI_bnPM

4.4.1 VXLAN Overheads
A large network vendor assisted with most of this sub-section:

Best practice for VXLAN fabrics is adjusting the MTU on the network equipment to accommodate the 50-byte overhead. Here is a snippet from a config guide:

• MTU Size in the Transport Network: Due to the MAC-to-UDP encapsulation, VXLAN introduces 50-byte overhead to the original frames. Therefore, the maximum transmission unit (MTU) in the transport network needs to be increased by 50 bytes. If the overlays use a 1500-byte MTU, the transport network needs to be configured to accommodate 1550-byte packets at a minimum. Jumbo-frame support in the transport network is required if the overlay applications tend to use larger frame sizes than 1500 bytes.

With the advent of the widespread use of Merchant Silicon since 2012, I would guess that all the major vendors' enterprise-class and above equipment is jumbo capable, so there should be no need for an MTU squeeze and hence likely fragmentation of packets. I expect that applies equally to vendor custom silicon and hence cloud providers.
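As a sketch of accommodating that overhead (NX-OS style; values illustrative), transport-facing underlay links are typically set to a jumbo MTU while host-facing ports stay at 1500:

interface Ethernet1/1
  description VXLAN underlay / transport link
  mtu 9216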

4.4.2 VXLAN questions

The extracts below are from an email exchange in FEB 2018.

1.VXLAN introduces 50-byte overhead to the original frames. Can this cause a problem with
AVID?

[DS] I am not aware that NEXIS has been explicitly tested in a VXLAN scenario; however, this would not be submitting a VXLAN frame directly to NEXIS, as the VXLAN header would be stripped beforehand at the edge switch, just like MPLS. We have sites successfully using ISIS 7500 with MPLS for many years. If the network infrastructure has the ability to carry the VXLAN overhead in a mini-jumbo frame, and the NEXIS frame is not reduced, the existence of VXLAN should be transparent, as with MPLS.

2. Are there restrictions or limitations with MTU using Jumbo Frames?

[DS] The NEXIS solution has not been tested in a jumbo frame environment, and a max MTU of 1500 is what we expect. However, some of the restrictions of the ISIS infrastructure which absolutely made jumbo frames impossible are not part of NEXIS. NEXIS still uses small UDP fragments, and this part has not been tested. For TCP, there would be a negotiation of MSS between the endpoints, which would restrict the MTU anyway.

3. What MTU AVID is using and is it under 9000 bytes?

[DS] The NEXIS solution has not been tested in a jumbo frame environment, and a max
MTU of 1500 is what we expect.



4.Do you have other clients using SDN data centre and do they have any issues with AVID?

[DS] The NEXIS solution has not been tested in an SDN environment. However, our products are being developed for use in the Azure cloud, which means we lose control of the network environment, and VXLAN encapsulation is a "given" based on the automated/portable deployment nature of the virtual machines that are used. SDN is just a control protocol; what is important for successful operation of Avid applications is low latency and edge buffering. NEXIS clients can survive with less edge buffering, and even a low percentage of packet discard will not cause frame drop, because NEXIS uses TCP with FRR instead of fragmented UDP as with ISIS 7x00.

4.4.4 VXLAN and Multicast

See section below in Appendix B - B.20.4 Some other useful Multicast URL & Information

4.5 Spine/Leaf Networks


Spine and Leaf networks are a common deployment model for data centre networks, and there are a multitude of resources on the Internet that negate the need to explain them here in great detail.

Unfortunately, what some folk refer to as a Spine/Leaf deployment could almost be applied to simple (classical/legacy) C4500X deployments with multiple C4948E edge switches, but true Spine/Leaf brings a host of new challenges and protocols to address.

Spine/Leaf is likely to have BGP EVPN and VXLAN encapsulation and off-device portability of virtual machines. How the underlay network deals with the overheads of the overlay network needs to be well understood; possibly even throw in SDN as part of the recipe too. However, if the underlying MTU of 1500 bytes common in the "traditional" network environment is maintained/preserved, and the switches have sufficient edge buffering to address the speed transitions experienced along the path from source to destination, then such a transport path should be transparent to the Avid applications. I have customers that have been using MPLS with ISIS 7x00 for 10 years without issue. For me, Spine/Leaf means 2-6 spine switches, tens or hundreds of leaves, and a data centre environment with homogenous server hardware. Is the user edge, e.g. for the NLE, within a campus element of a network deployment another leaf, or is it a separate network with the appropriate segregation, perhaps via a firewall? Avid would need to know a lot more about any intended deployment to give anything more than a general answer.

4.6 Adapter Teaming


The teaming of adapters is a very nebulous subject. The support for this technique, commonly used in data centres, is not consistent across Avid applications. Also, adapter teaming capabilities and characteristics differ across operating systems and/or NIC vendors. This subsection provides pointers to useful resources rather than setting out explicit support.

Historically, many systems in the field have deployed Intel NIC teaming with clients on Windows 7 and servers on Windows 2008, but with Windows 10 and Server 2012/2016 the methodologies have changed. Add to this that the "relevance" of NIC teaming changes considerably when using Virtual Machines, Blade Servers and fabric based solutions such as Cisco UCS. There is no "one size fits all" solution; the best solution for any one situation depends on many different criteria, and should be tested prior to deployment for both normal operation and disrupted operation.

Section 5.8 of NETREQS V1.2x contains legacy information on Adapter Teaming which may be helpful to the reader.
4.6.1 Teaming with Intel NICs
Teaming with Intel® Advanced Network Services
Last Reviewed: 13-Dec-2017
Article ID: 000005667

https://www.intel.co.uk/content/www/uk/en/support/articles/000005667/network-and-i-o/ethernet-products.html
Adapter teaming with Intel® Advanced Network Services (Intel® ANS) uses an intermediate
driver to group multiple physical ports. You can use teaming to add fault tolerance, load
balancing, and link aggregation features to a group of ports.

The seven teaming modes:


1. Adapter Fault Tolerance (AFT)
2. Switch Fault Tolerance (SFT)
3. Adaptive Load Balancing (ALB)
4. Receive Load Balancing (RLB)
5. Virtual Machine Load Balancing (VMLB)
6. Link Aggregation (LA), Cisco* Fast EtherChannel (FEC), and Gigabit EtherChannel
(GEC)
7. IEEE 802.3ad Link Aggregation

4.6.1.1 INTEL NIC Teaming in Windows 10


Teaming was not available in early versions of Windows 10. There are many articles on the internet about this, and some of them refer to the use of the LBFO functionality from Windows 2012. It appears that Microsoft, in its infinite wisdom, considers teaming unnecessary on client devices. The Intel article below has some pointers that will explain more about the possibilities available.

https://www.intel.co.uk/content/www/uk/en/support/articles/000021723/network-and-i-o.html
Enable NIC Teaming in Windows® 10 Using PowerShell*
Last Reviewed: 08-Nov-2017
Article ID: 000021723

Teaming for Avid applications is still possible by using multiple NICs and
the "integrated AALB" functionality, and this can be very useful when
additional bandwidth is necessary but the "next NIC size" is not viable.



4.6.2 NIC Teaming Windows 2012
NIC Teaming, also known as load balancing and failover (LBFO), allows multiple network
adapters on a computer to be placed into a team for the following purposes:

• Bandwidth aggregation
• Traffic failover to prevent connectivity loss in the event of a network component failure

This feature has been a requirement for independent hardware vendors (IHVs) to enter the
server network adapter market, but until now NIC Teaming has not been included in
Windows Server operating systems.

NIC Teaming Overview 08/31/2016 Updated: June 7, 2016


https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831648(v=ws.11)

For more information about NIC Teaming in Windows Server® 2012, see Windows Server
2012 NIC Teaming User Guide.

https://gallery.technet.microsoft.com/windows-server-2012-nic-bae6d72e

For more information about NIC Teaming in Windows Server® 2012 R2, see Windows
Server 2012 R2 NIC Teaming User Guide.
https://gallery.technet.microsoft.com/windows-server-2012-r2-nic-85aa1318

Note: Intel web pages on teaming recommend using Microsoft teaming on Server 2012/2016.

4.6.3 NIC Teaming Windows 2016


NIC Teaming allows you to group between one and thirty-two physical Ethernet network
adapters into one or more software-based virtual network adapters. These virtual network
adapters provide fast performance and fault tolerance in the event of a network adapter
failure.

NIC Team member network adapters must all be installed in the same physical host computer
to be placed in a team.

4.6.4 Linux Bonding and Teaming


Linux network bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. This helps with high availability of your network interface and offers performance improvements for your data traffic flow. Bonding is also referred to as NIC trunking or teaming.

Linux Network bonding – setup guide (posted July 17, 2015, in Linux)
https://www.cloudibee.com/network-bonding-modes/



Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Network bonding also allows you to create multi-gigabit pipes to transport traffic through the highest traffic areas of your network. For example, you can aggregate three 1Gbps ports into a 3Gbps trunk port. That is equivalent to having one interface with 3Gbps speed.

There are 7 modes of bonding, and these are explained in more detail in the reference article; an example of creating a mode 4 bond follows the list. The modes are:

mode=0 (Balance Round Robin)
mode=1 (Active backup)
mode=2 (Balance XOR)
mode=3 (Broadcast)
mode=4 (802.3ad)
mode=5 (Balance TLB)
mode=6 (Balance ALB)
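A minimal iproute2 sketch of creating a mode 4 (802.3ad) bond; the interface names are illustrative, and the attached switch ports must be configured for LACP:

# Create the bond with LACP and a layer3+4 transmit hash
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4 miimon 100
# Member interfaces must be down before they can be enslaved
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up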

DESCRIPTIONS OF BALANCING ALGORITHM MODES


The balancing algorithm is set with the xmit_hash_policy option.

Possible values are:

layer2 Uses XOR of hardware MAC addresses to generate the hash. This algorithm will place
all traffic to a particular network peer on the same slave.

layer2+3 Uses XOR of hardware MAC addresses and IP addresses to generate the hash. This
algorithm will place all traffic to a particular network peer on the same slave.

layer3+4 This policy uses upper layer protocol information, when available, to generate the
hash. This allows for traffic to a particular network peer to span multiple slaves, although a
single connection will not span multiple slaves.

encap2+3 This policy uses the same formula as layer2+3 but it relies on skb_flow_dissect to
obtain the header fields which might result in the use of inner headers if an encapsulation
protocol is used.

encap3+4 This policy uses the same formula as layer3+4 but it relies on skb_flow_dissect to
obtain the header fields which might result in the use of inner headers if an encapsulation
protocol is used.

The default value is layer2. This option was added in bonding version 2.6.3. In earlier
versions of bonding, this parameter does not exist, and the layer2 policy is the only policy.
The layer2+3 value was added for bonding version 3.2.2.

Source:
https://help.ubuntu.com/community/UbuntuBonding#Descriptions_of_balancing_algorithm_modes



Also see:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding

4.6.4.1 LINUX TEAMING


NEXIS uses TEAMING in preference to BONDING for resilient outbound connections, as TEAMING is newer and better than BONDING. The articles below provide some background.

https://tobyheywood.com/network-bonding-vs-teaming-in-linux/

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-comparison_of_network_teaming_to_bonding

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-understanding_the_network_teaming_daemon_and_the_runners
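For background only (NEXIS generates its own configuration), a minimal hand-built teamd sketch for an LACP team; the file path and device names are illustrative:

# Write a minimal LACP teamd configuration and start the daemon
cat > /etc/teamd/team0.conf <<'EOF'
{
  "device": "team0",
  "runner": { "name": "lacp", "active": true },
  "link_watch": { "name": "ethtool" },
  "ports": { "eth0": {}, "eth1": {} }
}
EOF
teamd -d -f /etc/teamd/team0.conf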

4.6.5 TEAMING WITH FASTSERVE INGEST


FastServe Ingest is a Linux device. Page 44 of the 2019 FastServe Ingest Administrator guide
mentions TEAMING
http://resources.avid.com/SupportFiles/attach/FastServe/FastServe_Ingest_AG_v2019.6.pdf

The teaming mode is "bonding with active-backup redundancy", which can best be compared with the AFT and SFT modes listed for Windows.

Specifically, use these parameters when the bonding adapter is created:



BONDING_OPTS="resend_igmp=1 use_carrier=0 miimon=100 fail_over_mac=1
primary_reselect=2 downdelay=0 mode=1"

For more information:

https://www.kernel.org/doc/Documentation/networking/bonding.txt

4.7 Cisco Transceiver Enterprise-Class versions


Cisco, and possibly other vendors, have optics with different capabilities. Some cheaper versions are now listed as Enterprise-Class; what this really means is ETHERNET only. There are versions for many different Ethernet speeds. For example, an LR 10G SFP comes as:

SFP-10G-LR-S= 10GBASE-LR SFP Module, Enterprise-Class $2,000
SFP-10G-LR= 10GBASE-LR SFP Module $3,995

(US$ list prices shown)

SFP-10G-LR-S= 10GBASE-LR SFP Module, Enterprise-Class = S class

See article at
https://community.cisco.com/t5/optical-networking/sfp-10g-lr-s-vs-sfp-10g-lr/td-p/2630525

The use of iSCSI is not impacted because it is contained within Ethernet frames.

This makes Cisco parts more price competitive, because often the lower-cost third-party products are also Ethernet only and not "carrier class" (i.e. able to support different layer 2 protocols).



4.8 Jumbo Frames and Avid applications
Jumbo frames are a very misunderstood technology, and there are many caveats and many
twists and turns, and this is not just for Avid.

Avid applications do not use (or need) jumbo frames.

Avid does not test its applications in a jumbo frame enabled environment (as at SEP 2019). A non-jumbo-frame TCP application can work successfully over a jumbo frame enabled environment, because the MSS (and hence MTU) is negotiated before transmission. Hence, if the local MTU of the device is set within the OS to 1500 (as normal), all should be fine. HOWEVER, UDP applications, especially those that use fragments such as ISIS/NEXIS, can have issues in a jumbo frame environment, especially when "helpful" intermediate devices might adjust the size of fragments (or fully reassemble them) based on larger MTU sizes. A prime example of this is EVS, where a separate adapter must be used with different MTU settings, although using jumbo frames fixes a problem which no longer exists at 1/10G but will become apparent again as we move to 100G+ applications.

VXLAN overlay networks are a prime example of a jumbo frame technology which is used to carry non-jumbo frame messages (to ensure VM portability), and this is common in a spine/leaf architecture; but with VXLAN, the VTEP (VXLAN Tunnel Endpoint) is responsible for the local connection at the appropriate MTU (which may differ by application).

There is no reason for a MediaComposer or any Avid MediaCentral (or Interplay) to be configured to use jumbo frames.

Any VM deployment (as a VM hypervisor can be a VTEP) should be configured to ensure that no jumbo frames are sent to any Avid application.

Regarding NEXIS and jumbo frames: in theory it is capable of accepting jumbo frames because the NIC is capable, but it is not configured to do so, as shown below.
gx0
Link encap:Ethernet HWaddr e4:1d:2d:4f:4f:80
inet addr:10.42.25.146 Bcast:10.42.25.255 Mask:255.255.255.128
inet6 addr: fe80::e61d:2dff:fe4f:4f80/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:91132536 errors:0 dropped:0 overruns:0 frame:0
TX packets:89538034 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:28596021094 (26.6 GiB) TX bytes:79158572019 (73.7 GiB)

Hence the NIC should reject a jumbo frame if one is received.
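On a Linux client, the interface MTU can be confirmed, and set back to the standard 1500 bytes if something has changed it, with iproute2 (the interface name is illustrative):

# Confirm the current MTU
ip link show dev eth0
# Pin it back to the standard 1500 bytes if required
ip link set dev eth0 mtu 1500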

Also see section 4.4 VXLAN to understand how jumbo frames are used in an overlay
network.

4.9 NO specific VPN requirements for use with Thin Client applications
While Avid has not done any testing of vendor-specific VPN solutions, it probably does not need to. The requirement to do testing on network switches has always been about the need for edge buffering because of speed changes between storage server and FAT clients; this should not be an issue for VPN, as the speed change has probably already occurred at the edge switch in front of the VPN server (unless it is 10G connected), or it happens somewhere in the path from the VPN server egress toward the receiving client, where no control is possible.

Also, when using solutions based around the MediaCentral THIN client applications, the hard
work has already been done by the MCS Server and it delivers a low bandwidth TCP stream
toward the receiving client.

There are many different types of VPN, and many different names that get associated with them, but all do the same basic things: tunnel, encrypt, authenticate and restrict (firewall). So, you have IPSEC VPN, SSL VPN, L2TP VPN, INTRANET VPN, EXTRANET VPN, Dynamic Multipoint VPN, SITE-TO-SITE VPN, GRE VPN, MPLS VPN. Many of these features overlap or are a mixture of the others.

So, the key points of any VPN solution are


Does it have the link capacity for intended workload?
Does it have the client quantity capacity for intended workload?
Does it have the desired encryption capability for intended workload?
Does it have the desired authentication capability for intended workload?
Does it have the desired integrity verification capability for intended workload?

Does it have the required firewalling capability for intended workload?

The URL below provides a good explanation of the various concepts that can be applied in Virtual Private Networks.
https://en.wikipedia.org/wiki/Virtual_private_network



5.0 MEDIA CENTRAL
This section will be used for the newly branded MediaCentral family released in 2018.

5.1 Media Central UX Bandwidth requirements


Data extracted from

http://resources.avid.com/SupportFiles/attach/MediaCentral_Cloud_UX/MCCUX_2017_2_0_ReadMe.pdf

The following table presents single-user bandwidth guidelines for MediaCentral Cloud UX
playback. The table is provided for guidance, and is not an indication of performance
guarantees.



The bandwidth table itself is in the ReadMe at the URL above; the following explains its contents in detail.

Item: Description
Width: Aspect ratio. The table assumes an aspect ratio of 16:9.
Quality: Refers to the quality setting in the Player, set via the UI.
Value: Each quality setting has a numeric value. In JPEG compression literature, these are often thought of as percentages of the original image (e.g. 100% is equivalent to uncompressed; 1% represents a severely degraded image).
Peak: Video with high color variation (e.g. wide shot of a crowd).
Valley: Video with low color variation (e.g. half of frame consists of a clear blue sky).
Typical: A wide range of footage (e.g. interiors, exteriors, interviews). The typical shot tends closer to valley values than peak values.
Audio: All bandwidth figures include audio consisting of 44.1 kHz sample rate x 16-bit/sample x 2 tracks = 1.4 Mbps.

5.2 Media Central Cloud UX Server Connectivity requirements


It is documented that for Media Central Cloud UX Server, the NEXIS client must bind to a
single adapter. This has precluded the use of teamed adapters to provide network resilience.
See custom testing section 7.1 for results of successful testing with teamed network adapters.



7.0 CUSTOM TESTING
This section will focus on custom testing that has been conducted on customer site and that is
not mentioned elsewhere in this document. See section B.36 for implementation notes.

7.1 Testing Teamed Adapter with Media Central Cloud UX Server

This on-site testing was conducted in NOV 2018 for an APAC customer. This section
contains the Executive Summary and some of the Conclusions appropriate for this section.

This was done with Media Central Cloud UX Server V2018.9, and the deployment will be
upgraded to 2018.11 before going live. It is highly likely that this is fully backward compatible
with earlier versions as far back as Media Central Server 2.x. The NIC used for this testing was
the Broadcom 57412 2 x 10Gb SFP+ (see section 7.2 for more details).
7.1.1 Primary Objective
The primary objective was to test resilient connections with Media Central Cloud UX Server,
which has never been explicitly supported (or even tested) by Avid.

The testing attempted Switch Fault Tolerant connection mode=1 (active-backup) and
mode=4 (802.3ad) LACP. This required some config changes on the switch, as sketched below.
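
To illustrate the switch-side changes, below is a minimal NX-OS sketch for an MLAG/vPC-attached
LACP bond. The port, VLAN and vPC/port-channel numbers are assumptions for illustration,
not the tested values; the matching port-channel must also be configured on the second vPC peer.

interface port-channel20
  description MCS-UX-SERVER-BOND0
  switchport mode access
  switchport access vlan 30
  spanning-tree port type edge
  vpc 20
!
interface Ethernet1/20
  description MCS-UX-SERVER-NIC-PORT1
  switchport mode access
  switchport access vlan 30
  channel-group 20 mode active

Note that channel-group ... mode active selects LACP (the 802.3ad/mode=4 behaviour tested
here); mode on would create a static channel, which is not what was tested.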

A more detailed conclusion can be found in section 7.1.2.

The testing has proven that using two 10G connections as a teamed connection configured
with 802.3ad LACP, in conjunction with a switch that supports MLAG (Multi-Chassis Link
Aggregation), provides significant reliability improvements for connection disruption vs a
single connected device. A single “adapter” is presented to the NEXIS client by the
operating system. The enhanced robustness is a major improvement in reliability.

This adjustment should be considered an acceptable custom configuration.
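
For the server side, a minimal RHEL/CentOS-style sketch of the mode=4 bond is shown below;
the interface names, IP addressing and the legacy network-scripts file layout are assumptions,
not a record of the tested configuration:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"
BOOTPROTO=none
IPADDR=192.168.30.21
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ens1f0 (repeat for ens1f1)
DEVICE=ens1f0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes

The state of the bond, including the LACP partner details and which slave is active, can then
be inspected with cat /proc/net/bonding/bond0.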

7.1.2 Conclusion Extract


The testing has proved that the characteristics of teaming with Linux are very different from
those seen with Windows and Intel NIC drivers.

Active/Standby failover (Switch Fault Tolerant) connections will fail over as expected, but
not fail back as expected, hence their suitability for a resilient connection in this workflow is
limited.

This testing with example workflows has proved that teamed connections offer an increased
level of resilience; however, fail-over and fail-back conditions may not be seamless.

First testing with REAL data using a simulated workflow on Media Central Cloud UX clients
showed that a resilient connection using LACP is quite robust to link disconnection.

SFT TEAMING CONCLUSIONS


SFT Teaming appears unsuited to operation with Media Central Cloud UX Server. See section 5
for more detail.

LACP TEAMING CONCLUSIONS



LACP Teaming is well suited to operation with Media Central Cloud UX Server. While not
seamless, its operation is robust and predictable. This method should be used in
combination with a switch that supports Multi-Chassis Link Aggregation (MLAG).

DISCONNECT TESTS CONCLUSIONS


It is far better to have resilient links than single links; they give choices of how and when to
deal with a failure that are not available to a single-connected machine. The benefits
associated with this "unsupported method" greatly exceed the risks.

Remedying a failed link or a failed switch is probably best delayed until a suitable
maintenance window at the end of the day, to minimise disruption.

7.2 Testing Broadcom 57412 Adapter with Media Central Cloud UX Server

This on-site testing was conducted in NOV 2018 for an APAC customer. This section
contains the Executive Summary and some of the Conclusions appropriate for this section.

This was done with Media Central Cloud UX Server V2018.9 and DELL R640 servers.
7.2.1 Secondary Objective
A secondary objective was to use/test the Broadcom 57400 series adapter (approx. 2017
launch date), which offers 10/25G connectivity. Its specification exceeds that of the qualified
(but older, approx. 2010 launch date) Broadcom 57810 series I/O card in several respects.

This work was not a formal test of the NIC capabilities, just a quick test to ascertain basic
capability. Insufficient time and resources were available for a full test, which would
have been a minimum of one full day in itself.

This I/O card performed well in the Dell R640 server, but extensive performance testing was
not possible due to time and resource limitations.

This card was used for the workflow tests with adapter teaming as described above in section
7.1.

7.2.2 NIC INFORMATION


The DELL R640 server comes supplied, in default configuration, with a Dell quad-port
network daughter card: Broadcom 57412 2 x 10Gb SFP+ plus Broadcom 5720 2 x 1Gb
Base-T network connections.

This testing used the plug-in option card with SFP+ connection at 10G.

New PCI-E Broadcom 57412 dual-port optical cards were purchased and used for
testing; there were 3 cards for 3 servers, giving 6 connections (using teaming).

While the Broadcom 57412 interface has not been officially tested by Avid, this is not
considered a problem.

The Broadcom 57400 series is a newer family than the 57800 series that is recommended for
use in
http://resources.avid.com/SupportFiles/attach/Interplay_Virtualization_Best_Practices.pdf
as the QLogic 57810 (there is a "relationship" between QLogic and Broadcom).
The QLogic 57810 uses PCIe 2.x and is a 10/40G family (40G as 4 x 10G quad port).

https://www.broadcom.com/products/ethernet-connectivity/controllers/bcm57416/
https://lenovopress.com/lp0705.pdf
The 57400 series uses PCIe 3.x and is a 10/25/40/100G product family.

The Broadcom/QLogic 57810 NIC was tested by Avid in a Dell R630 platform as part of the
Avid Interplay® | Production Virtual Environment with VMware® Best Practices Guide (DEC
2017). That card uses the PCIe 2.x specification to communicate with the host platform; it
was launched in 2010 and supports 10G operation only.

Therefore, a Broadcom 57400 series device should exceed the capability of the older Qlogic
57800 series.

Extract from Interplay® | Production Virtual Environment with VMware® Best Practices
Guide, Version 3.3 and later (December 2017):

Networking: QLogic 57800 2x10Gb SR/SFP+ + 2x1Gb BT Network Daughter Card,
with SR Optics

The testing for this project used Broadcom 57412 series I/O cards in the servers; the
results will be equally valid for copper or optical variants.

The I/O card was connected to Riser 1. I/O card p/n: BCM957412A4120DLPC_08



The Broadcom 57400 series adapter (approx. 2017 launch date) offers 10/25G
connectivity. Its specification exceeds that of the qualified (but older, approx. 2010 launch
date) Broadcom 57810 series I/O card in several respects, such as link speed and PCI
backplane capacity.

No performance difference would be expected (at 10G) between the copper RJ45
presentation cards and the SFP+ connected card.

99.0 LEGACY ISIS and Interplay Information


This information will remain available in NETREQS V1.23



Appendix A - NEXIS Documentation - Landing pages

Avid NEXIS v2018/19/20/21 Documentation


http://avid.force.com/pkb/articles/en_US/user_guide/Avid-NEXIS-Documentation

Avid NEXIS v7 Documentation


http://avid.force.com/pkb/articles/en_US/user_guide/Avid-NEXIS-v7-Documentation

Avid NEXIS v6 Documentation


http://avid.force.com/pkb/articles/en_US/User_Guide/Avid-NEXIS-v6-Documentation



Appendix B - Switch configuration tips, good practices and Lessons from the
field.

This is not an exhaustive list of best practices, as these may vary from site to site based on
customer policies and procedures and security requirements.
B.1 Document your configs with DESCRIPTIONS
Use the description command to apply a simple description to interfaces and VLANs,
such as AIRSPEED, MEDIA INDEXER, INTERPLAY ENGINE.

In Foundry the name or port-name command is used.

A simple description
conf t
interface g1/1
description AIRSPEED

CISCO
PWDEMO-Cisco4948#conf t
Enter configuration commands, one per line. End with CNTL/Z.
PWDEMO-Cisco4948(config)#int t1/49
PWDEMO-Cisco4948(config-if)#desc CONNECTION TO ISIS VLAN LEFT
PWDEMO-Cisco4948(config-if)#int t1/50
PWDEMO-Cisco4948(config-if)#desc CONNECTION TO ISIS VLAN RIGHT
PWDEMO-Cisco4948(config-if)#exit
PWDEMO-Cisco4948(config)#exit
PWDEMO-Cisco4948#

B.1.2 Good documentation and config practices


I reviewed some config recently (FEB2017) on a Cisco Catalyst C4500X VSS setup with 3
C4948E edge switches, and thought it worth adding the content as a section in this appendix.
There is a bit of overlap with other sections in this appendix, and while this is based on Cisco
Catalyst, it applies similarly to Cisco Nexus, albeit with some different terminology. The
fundamental principles also apply (with appropriate changes) to Arista, Brocade, Dell,
Juniper etc.

###### ###### ###### ###### ###### ###### ###### ###### ######

Looking at these configs I have some concerns, but I am not trying to shoot the messenger, and
they might also be a work in progress; in fact, I do hope this is not the finished article.

I am a stickler for in-config documentation, and I do not see much here, and what there is, is
sparse. For me in-config documentation is a massive helper when it comes to fault diagnosis.
A well-documented file is like having a torch in a dark tunnel, and logged events ideally to an
external syslog server is a gold-plated bread-crumb trail.

The level of in-config documentation I strive for is shown in my vanilla configs.

I want to see named VLANs, described Switched Virtual interfaces, accurately described
10G interfaces for ALL devices with port information.



“Issues” in the Core C4500X
• VLANs are not named
• There is no differentiation on whether the NEXIS connections go to the upper or lower
controller, or which port
• Core switch port channels are not documented
• There is no RAPID PVST configured
• There is no long costing configured
• There is no root primary configured in the core switch for STP; yes, STP is still running and
needs to be configured, even in a VSS, for when someone does something unexpected when
plugging in another switch
• There is no logging of link event status configured on key links
• There is no logging server configured
• There is no NTP server configured
• There is no timezone or DST configured
• VLAN 1 should be shutdown

“Issues” in the C4948E Edge switch

• VLANs are not named
• There is no RAPID PVST configured
• There is no long costing configured
• Core switch port channels are not documented
• There is no logging of link event status configured on key links; this is especially important
for server-class devices and uplinks
• Even a plain NLE should be described, maybe just as NLE
• There is no logging server configured
• There is no NTP server configured
• There is no timezone or DST configured
• There should be a command to disable ip routing in all C4948E edge switches
• OPTIONAL: There is no need to have an SVI configured in each VLAN
• The ip route command should be removed (this would happen via step 9)
• OPTIONAL: With the above two items there should be a corresponding ip default-gateway
command
• VLAN 1 should be shutdown

Interface descriptions for servers, clients and uplinks will help with identifying what
SHOULD be connected, versus what MIGHT be connected, versus what is UNLIKELY or
SHOULD NOT be connected.

Now, none of these suggestions will make the network run any faster, but they will definitely
reduce your diagnostic time by 90% or more in 90% or more of diagnostic situations. I would
say that is a pretty good investment, and it is even more important when you ask an external
resource who is unfamiliar with the network and the solution as a whole.

###### ###### ###### ###### ###### ###### ###### ###### ######
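
To make the remediation concrete, below is a minimal Catalyst-style sketch covering several
of the items above (all names, VLAN numbers and server addresses are illustrative
assumptions, not site values):

conf t
! name the VLAN so faults can be traced quickly
vlan 30
 name NEXIS-MEDIA
! spanning tree: rapid PVST, long costing, deterministic root
spanning-tree mode rapid-pvst
spanning-tree pathcost method long
spanning-tree vlan 30 root primary
! document and log the key links
interface TenGigabitEthernet1/1
 description NEXIS-SE01-UPPER-PORT1
 logging event link-status
! external logging and time
logging host 10.0.0.50
logging trap debugging
ntp server 10.0.0.60
clock timezone GMT 0
service timestamps log datetime msec localtime show-timezone
! shut down VLAN 1
interface vlan 1
 shutdown
end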



B.2 Setting Spanning tree to Rapid Spanning Tree
CISCO
PWDEMO-Cisco4948#conf t
Enter configuration commands, one per line. End with CNTL/Z.
PWDEMO-Cisco4948(config)#span
PWDEMO-Cisco4948(config)#spanning-tree mode rap
PWDEMO-Cisco4948(config)#spanning-tree mode rapid-pvst
PWDEMO-Cisco4948(config)#exit
PWDEMO-Cisco4948#
PWDEMO-Cisco4948#

FOUNDRY
BigIron(config)# spanning-tree 802-1w

or

BigIron(config)# vlan 10
BigIron(config-vlan-10)# spanning-tree rstp

B2.1 Spanning tree cost

When costing up links, this can be done on a per-link basis or on a per-VLAN basis across a
given link.

PER LINK/INTERFACE
PWDEMO-Cisco4948(config-if)#spanning-tree cost 10

PER VLAN, PER LINK/INTERFACE

PWDEMO-Cisco4948(config-if)#spanning-tree vlan 10,20 cost 10

The value of 10 is appropriate when the short method is used for spanning-tree path cost. When
using the long method for spanning-tree path cost, a value of 5,000 is appropriate. This value
is chosen so as not to reflect a pre-defined value.

ISIS can be described as a Layer 1.5 device: it is half switch, half hub! Hence, when ISIS is
connected to two switches and uses an FHRP setup, spanning tree will always need to be
configured to achieve the required operation. Each customer will have slightly different
preferences on which path to block: some will choose to block one of the ports facing ISIS,
others will choose to block some (the ISIS) VLANs via the SW-SW link. The option most
suited will depend on which FHRP is deployed, how it is configured and the policies of
the site, so no specific recommendation can be given in this document.
B.2.2 Spanning Cost type

Spanning tree cost can be Long or Short; different values apply, as per the links below.

http://www.cisco.com/en/US/docs/switches/lan/catalyst4500/12.1/12.1e/command/reference/
S1.html#wp1029022



Usage Guidelines

This command applies to all the spanning tree instances on the switch.

The long path cost calculation method uses all the 32 bits for path cost calculation and yields
values in the range of 1 through 200,000,000.

The short path cost calculation method (16 bits) yields values in the range of 1 through
65,535.

Examples

This example shows how to set the path cost calculation method to long:

Switch(config#) spanning-tree pathcost method long


Switch(config#)

This example shows how to set the path cost calculation method to short:

Switch(config#) spanning-tree pathcost method short


Switch(config#)

http://en.wikipedia.org/wiki/Spanning_Tree_Protocol#Data_rate_and_STP_path_cost

Data rate and STP path cost

The table below shows the default cost of an interface for a given data rate.

Data rate STP Cost (802.1D-1998) STP Cost (802.1t-2001)


4 Mbit/s 250 5,000,000
10 Mbit/s 100 2,000,000
16 Mbit/s 62 1,250,000
100 Mbit/s 19 200,000
1 Gbit/s 4 20,000
2 Gbit/s 3 10,000
10 Gbit/s 2 2,000

B.3 SET primary switch as STP master root primary


Spanning tree settings must be done for each VLAN

PWDEMO-Cisco4948#conf t
PWDEMO-Cisco4948(config)#spanning-tree vlan [NUM] root primary
PWDEMO-Cisco4948(config)#exit
PWDEMO-Cisco4948#

The command that actually gets entered into the Cisco running config will look like this:



spanning-tree priority 24576 (the number will change depending on the VLAN
numbers).

FOUNDRY
BigIron(config)#spanning-tree priority 24576

Syntax: [no] spanning-tree [forward-delay <value>] | [hello-time <value>] | [maximum-age


<value>] | [priority <value>]

Here is the syntax for port STP parameters.


Syntax: [no] spanning-tree ethernet | pos <portnum> path-cost <value> | priority <value>
priority: Possible values: 1 – 65535. Default is 32768. A higher numerical value means a
lower priority;
thus, the highest priority is 0.

B.4. SET secondary switch as STP root secondary


Spanning tree settings must be done for each VLAN

PWDEMO-Cisco4948#conf t
PWDEMO-Cisco4948(config)#spanning-tree vlan [NUM] root secondary
PWDEMO-Cisco4948(config)#exit
PWDEMO-Cisco4948#

The command that actually gets entered into the Cisco running config will look like this:
spanning-tree priority 28672 (the number will change depending on the VLAN
numbers).

FOUNDRY
BigIron(config)#spanning-tree priority 28672

B.5 Deploy BPDU guard on all ports that use PORTFAST


Ensure that all appropriate ports that use the PORTFAST setting are protected against
BPDUs.

Note: These recommendations in this subsection originated with ISIS 7x00


and Cisco Catalyst Switches and may not apply equally to use with AVID
NEXIS or Cisco NEXUS, or may have alternative commands listed later
in this subsection, that will supersede earlier information.

The PORTFAST setting should only be used on 1G or 10G ports that connect clients and
servers.
• 10G ports which face ISIS switches should NOT use the portfast setting
• 10G ports which connect to other switches (e.g. 4900M to 4900M inter-switch link,
or 4900M to cascaded 4948) should NOT use the portfast setting

BPDU Guard will administratively shutdown any port which receives BPDUs.



LAB-4948-10GE-S(config)#spanning-tree portfast bpduguard default

This is a global command.

Also consider that when STP BPDU guard disables the port, the port remains in the disabled
state unless the port is enabled manually. You can configure a port to re-enable itself
automatically from the errdisable state. Issue these commands, which set the errdisable-
timeout interval and enable the timeout feature:

Cisco IOS Software Commands


CatSwitch-IOS(config)# errdisable recovery cause bpduguard

CatSwitch-IOS(config)# errdisable recovery interval 400

Note: The default timeout interval is 300 seconds and, by default, the timeout feature is
disabled.

For more information see:


http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a008009482f.shtml
[url active MAY 2011]

Note spanning-tree bpduguard can be enabled individually on any interface that is not
protected by PORTFAST

BPDUGUARD should be disabled on all switch interfaces that face ISIS
7000 ISS ports. As the ISIS 7000 ISS will transparently pass BPDUs, there
can be some undesirable effects, such as port shutdown, when more than one
network switch is connected, for example in an FHRP (HSRP/GLBP/VRRP
etc.) deployment.

B5.1 Use ROOT GUARD on any interfaces that cascade to other switches
Even though the root primary and root secondary switch may be set, there are still situations
in which they can be superseded. ROOT GUARD can prevent this.

For example, in a typical 4900M HSRP configuration with cascaded 4948 switches, this can
be enabled on all 4900M ports which downlink to a 4948 or that are unused.

spanning-tree guard root

LAB-4948-10GE-S(config)#interface TenGigabitEthernet1/4
LAB-4948-10GE-S(config-if)# spanning-tree guard root
LAB-4948-10GE-S(config-if)#

LAB-4948-10GE-S(config-if)#do sh run int t1/4

interface TenGigabitEthernet1/4
switchport access vlan 10
switchport mode access
spanning-tree guard root



For more information see:
http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a00800ae96b.shtml
[url active MAY 2011]

Using both on an interface is not required, as ROOT GUARD is subordinate to BPDU
GUARD:

spanning-tree bpduguard enable
spanning-tree guard root

What Is the Difference Between STP BPDU Guard and STP Root Guard?
BPDU guard and root guard are similar, but their impact is different. BPDU guard disables
the port upon BPDU reception if PortFast is enabled on the port. The disablement effectively
denies devices behind such ports from participation in STP. You must manually re-enable the
port that is put into errdisable state or configure errdisable-timeout.

Root guard allows the device to participate in STP as long as the device does not try to
become the root. If root guard blocks the port, subsequent recovery is automatic. Recovery
occurs as soon as the offending device ceases to send superior BPDUs.

ROOT GUARD should be disabled on all switch interfaces that face ISIS
7000 ISS ports. As ISIS 7x00 ISS will transparently pass BPDUs, there can
be some undesirable effects when more than one network switch is
connected, for example in an FHRP (HSRP/GLBP/VRRP etc.) deployment.

B5.2 Using spanning-tree port type edge with Cisco Nexus and AVID NEXIS
Most NEXIS solutions that are deployed on Cisco NEXUS switches will benefit from using
this command toward all storage engines and all NLE clients; these ports should come up
(connect) as fast as possible and not wait for the normal spanning-tree processes.

To configure an interface connected to a host as an edge port, which automatically transitions


the port to the spanning tree forwarding state without passing through the blocking or
learning states, use the spanning-tree port type edge command. To return the port to a normal
spanning tree port, use the no spanning-tree port type command.

Sometimes, if bpduguard is deployed as a global command, you might also see:

spanning-tree bpduguard disable

But this does not enable the fast transition of a port coming up.

The basic command is shown below


switch(config-if)# spanning-tree port type edge

Additionally, bpduguard default can also be added to ensure the port shuts down as
“errdisabled” if a switch is connected:

switch(config-if)# spanning-tree port type edge bpduguard default



B.6 Use the no shutdown command on all VLANs
Because new VLAN (SVI) interfaces are shutdown [off] by default, make sure you use the
no shutdown command.

Layer 2 interfaces are on by default

Layer 3 interfaces (routed or switched) are off by default.

PLEASE DO MAKE SURE THAT VLAN1 IS SHUTDOWN
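
An illustrative example (the VLAN number is assumed):

PWDEMO-Cisco4948(config)#interface vlan 30
PWDEMO-Cisco4948(config-if)#no shutdown
PWDEMO-Cisco4948(config-if)#interface vlan 1
PWDEMO-Cisco4948(config-if)#shutdown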

B.7 Use the shutdown command on all unused interfaces


Because all Layer 2 interfaces are on by default, it is a good security practice to use the
shutdown command, and add a description as SHUTDOWN

CISCO
PWDEMO-Cisco4948#conf t
Enter configuration commands, one per line. End with CNTL/Z.
PWDEMO-Cisco4948(config)#int g1/40
PWDEMO-Cisco4948(config-if)#desc SHUTDOWN – INTERFACE NOT USED
PWDEMO-Cisco4948(config-if)#shutdown
PWDEMO-Cisco4948(config-if)#exit
PWDEMO-Cisco4948(config)#exit
PWDEMO-Cisco4948#

B.8 Enable secret


It is important to apply an enable secret to all configs. Usually during the install phase this
will be a simple password, such as avid, which will be changed by the customer after system
handover. Don't waste time with an enable password: it only offers backward compatibility
with pre-12.x IOS images (last used in the 1990s and never deployed on any Cisco Catalyst
4948 or 4900M), it just adds confusion, and it is superseded by an enable secret
anyway.
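
For example, during the install phase (the password shown is the usual install-phase
placeholder):

PWDEMO-Cisco4948(config)#enable secret avid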

B.9 Password encryption


Encrypting passwords is a good thing to do, but possibly this should only be done after
system handover.
PWDEMO-Cisco4948(config)#service password-encryption

This will apply Cisco level 7 encryption to all clear-text passwords, such as the line console
and VTY passwords.

The enable secret is stored as an MD5 hash (Cisco type 5) by default, which is stronger
than the level 7 encryption.



B.10 Enable telnet
Telnet is not enabled on a Cisco 4948 by default, but is enabled on a Foundry by default.
A Cisco switch will need the following commands to enable telnet:
line vty 0 4
password avid
login

A Cisco switch will also need an enable secret before allowing telnet access; this is in addition
to the telnet password, but they can be the same, e.g. avid.

B.11 Enable synchronous logging


When synchronous logging is enabled, informational items sent to the console will not
interrupt the command you are typing; the command will be moved to a new line.

line con 0
logging synchronous
stopbits 1
line vty 0 4
password avid
login
!
B.12 Get PuTTY 0.60
HyperTerminal, which comes with Windows, is a liability. It may corrupt data outside the main
display window.

PuTTY is freeware, and v0.60 supports serial ports too. It can be configured with a
large scroll-back buffer, can be re-sized, and multiple instances can be run to allow
communication with multiple devices concurrently.

B.13 Logging

• Logging is a great diagnostic tool

– Logging is normally to a console session only
– Logs do not persist across a power cycle or reload
– Logging can be sent to an external server
• A syslog server
• Logging can be sent to a telnet session
– Issue the command terminal monitor
– REMEMBER to turn it off!!
• no terminal monitor

B.14 Using a Syslog Server


• Many syslog implementations
– Usually part of a commercial SNMP package
– Freeware applications are numerous
• A personal favourite is KIWI Syslog Daemon
– Which will also perform as a BASIC SNMP trap manager
» Just to prove they are being sent!....and received!!
– Cisco commands
logging trap debugging THIS IS WHAT TO SEND



logging 10.134.87.87 THIS IS WHERE TO SEND

• Logging is only useful with time stamps


– That means the clock must be configured
– And the correct commands exist in the config file
– Which is the most useful output below?
7w4d: %SYS-5-CONFIG_I: Configured from console by vty0 (10.134.132.86)
7w4d: %SYS-5-CONFIG_I: Configured from console by vty0 (10.134.132.86)
*Sep 14 15:39:24: %SYS-5-CONFIG_I: Configured from console by vty0 (10.134.132.86)
*Sep 14 15:39:46: %SYS-5-CONFIG_I: Configured from console by vty0 (10.134.132.86)

Aug 5 15:15:56.124 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/2, changed


state to down
Aug 5 15:15:58.472 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/2, changed
state to up

– Sep 14 at 15:39 is a whole lot more useful than 7 weeks and 4 days!!

Also see: How to configure logging in Cisco IOS


https://supportforums.cisco.com/document/24661/how-configure-logging-cisco-ios

B14.1 Freeware logging servers


Linux offers this by default, though it may have to be enabled, and there are many logging
servers for Windows, but many of the allegedly free systems are not free for commercial use.
The one below looks like it is TOTALLY free:

Visual Syslog Server for Windows: a syslog server for Windows with a graphical user
interface

http://maxbelkov.github.io/visualsyslog/

Also, this TFTP application provides a syslog server too (a 64-bit version is available):
http://tftpd32.jounin.net/tftpd32.html

KIWI Syslog used to be FREE too, but since it was purchased by SolarWinds that has changed
to a 14-day trial.

Also consider KLEVER SOFTWARE's free syslog server. I am a great fan of KLEVER
software for TFTP too, with PumpKIN.

http://kin.klever.net/klog#.WXcRQdPytmM

B.15 Timestamps
• service timestamps log uptime
7w4d: %SYS-5-CONFIG_I: Configured from console by vty0 (10.134.132.86)

• service timestamps log datetime


*Sep 14 15:39:24: %SYS-5-CONFIG_I: Configured from console by vty0 (10.134.132.86)
• There is also a command for debug statements
– service timestamps debug uptime [datetime]
– This parameter needs to be set if you want any sensible output when using
debug commands

– The default is uptime, as no time may be set!!

service timestamps debug datetime


service timestamps log datetime

or even

service timestamps debug datetime msec localtime show-timezone


service timestamps log datetime msec localtime show-timezone

B.16 Setting the Time


The time can be controlled manually or by an NTP server. To manually set the clock, use the
command example below. The manually set time is usually lost during a restart or power cycle.

Switch#clock set 17:30:00 14 SEP 2012


– Note this is a privileged exec command, NOT a config command!

The command below can be used for NTP

Switch(config)#ntp server 10.184.106.99

Or depending on NTP setup

Switch(config)#ntp peer 10.184.106.99

Peers are a class that allows both responses to NTP requests and acceptance of NTP updates, while an NTP server
will only respond to requests and will not accept updates.
There is more information available from INE's CCIE Blog (http://blog.internetworkexpert.com/tag/ntp/), which may
clear up some of the intricate detail.

You should also set the timezone and daylight savings parameters (or just use UTC). Below are
commands that work on CISCO CATALYST; Cisco Nexus has different syntax, see section
B.16.1.

clock timezone GMT 0


clock summer-time BST recurring last Sun Mar 3:00 last Sun Oct 3:00

For Europe

clock timezone CET 1


clock summer-time CEST recurring last Sun Mar 2:00 last Sun Oct 2:00

For USA –
clock timezone EST -5
clock summer-time EDT recurring 2 Sun Mar 2:00 1 Sun Nov 2:00 60



*NOTE: this example specifies an offset of 60 minutes, which is the default when not
specified.

For Sydney Australia –

clock timezone AEST 10 0


clock summer-time AEDT recurring 1 Sun Oct 2:00 1 Sun Apr 3:00

For Qatar (Arabian Standard Time) no daylight saving time change.


clock timezone AST 3

See articles at:

http://www.networkworld.com/community/node/11875
And
http://www.timeanddate.com/library/abbreviations/timezones/
And
http://www.cisco.com/en/US/docs/ios/12_2/configfun/command/reference/frf012.html
http://www.cisco.com/en/US/docs/voice_ip_comm/bts/4.1/command/reference/91TMZ.pdf

Also consider using the update-calendar command for devices like the C49XX series, which
have an internal H/W clock:
SWITCH# clock update-calendar

B.16.1 Command for Cisco NEXUS


The timezone offset must specify both hours and minutes, even if minutes is zero, otherwise
the command is rejected.

For Europe

clock timezone CET 1 0

And then for automatic summertime clock adjustment

clock summer-time CEST 5 sun mar 02:00 5 sun oct 02:00


Week-of-month numbers must be used: 1 = first, 5 = last.

B.17 Show tech support for CATALYST


**** SEE Section B.39 for Cisco NEXUS
One way to extract a lot of information from a switch is to use show tech-support

• A tool on Cisco and Foundry to do a dump of the system and all the status
information
– Shows state of all interfaces in brief and detail
• Another reason why interface descriptions are important!
– More information from Cisco than Foundry
• Approx 2MB from a 4948!
– Can be piped to a file on a tftp server
– 2611XM-BLUE#show tech-support | redirect
tftp://10.124.87.87/redirect-sh-tech.txt



– Can be captured into a putty log file via telnet ( or other suitably capable telnet
application).

B17.1 What is listed?

What is listed varies between switch models and s/w versions, but below is an indication

show clock
show version
show running-config
show stacks
show interfaces
show controllers
show user
show data-corruption
show file systems
show bootflash: all
show cat4000_flash: all
show memory statistics
show process memory
show process cpu
show process cpu history
show cdp neighbors detail
show diagnostic result module all detail
show environment
show interfaces counters errors
show interfaces status
show region
show interfaces trunk
show logging
show module
show mac-address-table count
show platform chassis
show platform cpu packet statistics
show platform cpu packet driver
show platform crashdump
show platform hardware interface all (VERY LONG SECTION!!!)
show platform health
show platform environment variables
show platform portmap
show platform software interface all (VERY LONG SECTION!!!)
show power detail
show spanning-tree summary
show vlan
show buffers
show inventory

B17.2 Show tech-support - CAVEATS


• Use with caution from the Console port
– Output of a 2MB (4948) file will be very slow and cannot be interrupted
– It will take SEVERAL cups of coffee to finish at 9600bps!!
– Much faster via telnet at Fast Ethernet speed
– OK to PIPE from console port to TFTP server.
– 2611XM-BLUE#show tech-support | redirect
tftp://10.134.97.97/redirect-sh-tech.txt
– If capturing by telnet and copy/pasting into a text file, make sure the application
has a sufficient scroll buffer; PuTTY 0.60 can be configured for large amounts, or
set up the PuTTY log file to capture it for you.
– HyperTerminal or Telnet from the Windows CLI ….NOT ADVISED

B17.3 How long does it take?

• Telnet Access to switch and PIPE via Gigabit Ethernet to TFTP server
– about 10 seconds
• CLI access to a switch and PIPE via Fast Ethernet (management port) to TFTP server
– About 60 seconds



• TELNET to CLI ( and capture in putty log file)
– About 3 seconds at Gigabit Ethernet or 15 seconds at Fast Ethernet
• Console to CLI
– About 2100 seconds, yes that is 35 Minutes! Time for several coffees or even
lunch!!
B17.4 Useful show commands
When show tech-support is just too much information, as on NEXUS switches, the commands
below can be very useful.

SECTION MOSTLY DELETED: SEE SECTION B.39.1 NEXUS 93xxx USEFUL
COMMANDS FROM SHOW TECH for a more complete list.

show version
show module
show interface status
show interface description
show running-config
show cdp neighbor
show cdp neighbor detail
show interface counter errors < do twice 10 minutes apart>
show etherchannel summary | show port-channel summary
show hsrp | glbp | vrrp brief
show spanning-tree


B17.5 TFTP tools


There are many free tools. One of my favourites for Windows is PumpKIN from KLEVER
SOFTWARE.

Apparently this is now available for Mac OSX too:

http://kin.klever.net/pumpkin#.WXcSDNPytmN

Mac OSX has a built-in tftp server, but it needs CLI access; I found this TFTP
"application" which does the hard work:
https://www.macupdate.com/app/mac/11116/tftpserver#
http://ww2.unime.it/flr/

Also consider Tftpd32, which provides a tftp server (and syslog, DHCP etc.) too (a 64-bit
version is available):
http://tftpd32.jounin.net/tftpd32.html

Tftpd32 is a free, open-source, IPv6-ready application which includes DHCP, TFTP, DNS,
SNTP and Syslog servers, as well as a TFTP client.
The TFTP client and server are fully compatible with TFTP option support (tsize, blocksize
and timeout), which allows maximum performance when transferring data.

Some extended features, such as a directory facility, security tuning, interface filtering,
progress bars and early acknowledgments, enhance the usefulness and throughput of the TFTP
protocol for both client and server.
The included DHCP server provides unlimited automatic or static IP address assignment.

Tftpd32 is also provided as a Windows service.

Tftpd64 is the same application compiled as a 64-bit application.

B.18 Handover Practices


One way to ensure an easy review of Cisco config file documentation is to collect the
information below, using three commands, as part of the handover pack.

-- show version
-- show running-config
-- show interfaces status

The latter command shows which interfaces are connected and which have descriptions,
so connected interfaces without descriptions can be easily identified.
Below is a sample
Port Name Status Vlan Duplex Speed Type
Gi1/1 WEBSERVER connected 51 a-full a-1000 10/100/1000-TX
Gi1/2 ** MC Nitris 1 notconnect 51 auto auto 10/100/1000-TX
Gi1/3 ** Protools connected 51 a-full a-1000 10/100/1000-TX
Gi1/4 ** MC Nitris 3 connected 51 a-full a-1000 10/100/1000-TX
Gi1/5 ** DS connected 51 a-full a-1000 10/100/1000-TX
Gi1/6 notconnect 51 auto auto 10/100/1000-TX
Gi1/7 >> Interlink VLAN connected 410 a-full a-1000 10/100/1000-TX
Gi1/8 notconnect 51 auto auto 10/100/1000-TX
Gi1/19 notconnect 51 auto auto 10/100/1000-TX
Gi1/20 DOWNLINK TO 3750 V connected 51 a-full a-1000 10/100/1000-TX
Gi1/21 notconnect 52 auto auto 10/100/1000-TX
Gi1/22 connected 52 a-full a-1000 10/100/1000-TX
Gi1/23 connected 52 a-full a-1000 10/100/1000-TX
Gi1/24 notconnect 52 auto auto 10/100/1000-TX
Gi1/25 connected 52 a-full a-1000 10/100/1000-TX

The additional commands are optional


-- show cdp neighbors detail
-- show spanning-tree summary
-- show vlan
-- show logging
B.19 Cisco Catalyst 49XX setting of the CONFIG register

Cisco 4948 switches supplied by Avid are configured with a Configuration Register value of
0x2101, which means the switch will boot from the first IOS that appears in bootflash. Cisco
instructs you to set the Configuration Register to 0x2102, which means the switch will look
for a boot string that points to the IOS from which to boot.
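
A sketch of how this is checked and set (the boot image filename is a placeholder
assumption):

PWDEMO-Cisco4948#show version | include register
Configuration register is 0x2101
PWDEMO-Cisco4948#conf t
PWDEMO-Cisco4948(config)#config-register 0x2102
PWDEMO-Cisco4948(config)#boot system flash bootflash:IOS-IMAGE-FILENAME.bin
PWDEMO-Cisco4948(config)#exit
PWDEMO-Cisco4948#write memory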



B.20 Multicast Propagation Does Not Work in the same VLAN in Catalyst and NEXUS
Switches

Multicast is back! It is used for internode communication in Interplay Common Services
V1.3, which is a key component for Interplay Central, Interplay Sphere and Interplay MAM
solutions. The default address used is 226.94.1.1, unless the administrator configures a
different address. This multicast propagation requirement also applies to Interplay Media
Indexer NOMI configurations beginning with V3.5. Only layer 2 communication is required,
but if nodes are on different switches connected by an inter-switch link it may not work as
expected; this is due to IGMP snooping features which are enabled by default on some
switches. As a two-switch configuration supported by a First Hop Resiliency Protocol
(FHRP) such as HSRP, GLBP or VRRP is common, this could become a common challenge.

Beginning with Interplay 3.0, the mechanisms for Media Indexers to communicate with each
other started to change; beginning with Interplay V3.5, JINI is no longer used and ASF
(Avid Service Framework) is not used either. Communication uses the amq default, which is
multicast://239.255.2.3:6155.

Beginning with Interplay V3.5, everything that was using ASF to keep two or more server
Media Indexers (in a High Availability Group, now replaced by a Network of Media
Indexers (NOMI)) synchronized and ready for failover is now done using the local multicast
introduced with Interplay V3.0. These multicast messages are only needed in the Interplay
VLAN, so no multicast routing is required.
This is one reason why Interplay 2.7.x can no longer be used with Interplay 3.5.

Also consider this article for Media Indexer NOMI

http://avid.force.com/pkb/articles/en_US/How_To/Multicast-Time-to-Live-adjustment

Note that the local Media Indexers on the editors do not require Multicast
communication with the servers. Multicast communication is only required
between the Media Indexer servers. The multicast address used is
239.255.2.3 on UDP port 6155.

Note: for Interplay Central Services ICS 1.4 and 1.5 (H1/2013) the default
multicast address is 239.192.1.1:
corosync_mcast_addr=${corosync_mcast_addr:-"239.192.1.1"}

Beginning with V2.0 (approx. 2014/2015), ICS was renamed MediaCentral
Server; the last version of this product was V2.10 (initial release MAR 2017,
final 2.10.x release OCT 2020).



Media Central Cloud UX Server Kubernetes does not use multicast.
Keepalived is configured to use unicast by default (multicast configurations
are not supported).

Media Central Cloud UX V2018.1 initial release AUG 2018

For Cisco Catalyst the quick fix is:

SWITCH(config)#int vlan 30

SWITCH(config-if)#ip pim sparse-dense-mode

This must be applied in the appropriate VLAN; it does not have to be applied to all VLANs.
The above example assumes that VLAN 30 is the location for the Interplay Common Services
"cluster" of nodes.

The URL below contains more detailed explanation


http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a008059
a9df.shtml
or

http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/68131-
cat-multicast-prob.html#igmp

However, it might be preferable to use Solution 2 or Solution 3 from this article

Document ID: 68131


This document discusses a common problem that occurs when you deploy the multicast
application for the first time on a Cisco Catalyst switch network and the multicast fails to
work. In addition, some servers/applications that use multicast packets for the cluster/high-
availability operation can fail to work if you do not configure the switches appropriately. The
document covers this issue as well.

Below is an extract from that document ID: 68131 manually edited/“Tailored” toward Avid
configuration for C45xx/C49XX switches.

Solution 1: Enable PIM on the Layer 3 Router/VLAN Interface


All Catalyst platforms have the ability to dynamically learn about the mrouter port. The
switches passively listen to either the Protocol Independent Multicast (PIM) hellos or the
IGMP query messages that a multicast router sends out periodically.
This example configures the VLAN 30 switched virtual interface (SVI) on the Catalyst 4500
with ip pim sparse-mode.

Switch1#show run interface vlan 30

!
interface Vlan30
ip address 192.168.30.1 255.255.255.0
ip pim sparse-mode
end

Switch 1 now reflects itself (actually the internal router port) as an mrouter port.



Switch1#show ip igmp snooping mrouter
vlan ports
-----+----------------------------------------
30 Router

Switch 2 receives the same PIM hellos on its Te1/1 interface, so it assigns
that port as its mrouter port.

Switch2#show ip igmp snooping mrouter

Vlan ports
---- -----
30 Te1/1(dynamic)

Solution 2: Enable IGMP Querier Feature on a Layer 2 Catalyst Switch


The IGMP querier is a relatively new feature on Layer 2 switches. When a network/VLAN
does not have a router that can take on the multicast router role and provide the mrouter
discovery on the switches, you can turn on the IGMP querier feature. The feature allows the
Layer 2 switch to proxy for a multicast router and send out periodic IGMP queries in that
network. This action causes the switch to consider itself an mrouter port. The remaining
switches in the network simply define their respective mrouter ports as the interface on which
they received this IGMP query.

Switch2(config)#ip igmp snooping querier

Switch2#show ip igmp snooping querier


Vlan IP Address IGMP Version Port
-------------------------------------------------------------
30 192.168.30.2 v2 Switch

Switch 1 now sees port Te1/1, linking to Switch 2, as an mrouter port.

Switch1#show ip igmp snooping mrouter
vlan ports
-----+----------------------------------------
30 Te1/1

When the source on Switch 1 starts to stream multicast traffic, Switch 1 forwards the
multicast traffic to the Receiver 1 found via IGMP snooping (i.e., out port Te 1/8) and to the
mrouter port (i.e., out port Te 1/1).

Solution 3: Configure Static Mrouter Port on the Switch


The multicast traffic fails within the same Layer 2 VLAN because of the lack of an mrouter
port on the switches, as the Understand the Problem and Its Solutions section discusses. If
you statically configure an mrouter port on all the switches, IGMP reports can be relayed in
that VLAN to all switches. As a result, multicasting is possible. So, in the example, you must
statically configure the Catalyst 4500 Switch to have Tengigabitethernet 1/1 as an mrouter
port. In this example, you need a static mrouter port on Switch 2 only:
Switch2(config)#ip igmp snooping vlan 30 mrouter interface
Tengigabitethernet 1/1

Switch2#show ip igmp snooping mrouter

Vlan ports
---- -----
30 Te1/1(static)



B.20.1 Multicast Propagation - an Avid perspective

The issue is explained from an Avid perspective here; it applies equally to MCS servers
connected across two switches and to Media Indexer servers, and can be explained with the
help of the diagram below. In the early days of Ethernet, multicast packets (to be correct they
should be termed FRAMES, but the two terms will be used interchangeably) were treated a
little like broadcast packets, but as multicast became more popular this posed challenges for
bandwidth use and CPU cycles on clients that did not need to receive multicast frames.
One of the tools developed to address this was IGMP snooping.

In the early days of Ethernet, a multicast frame from SOURCE-1 would be received by
RECEIVERS 1-4, and a multicast frame from SOURCE-2 would be received by
RECEIVERS 1-4. This was not scalable.

Also consider that a multicast-capable device can be both a source and a receiver.

With the advent of IGMP snooping, the default state became NOT to forward
multicast frames from another switch without some sort of pre-configuration to do so.
Hence the current state in most switches arranged as above is:

Receiver 3 will be forwarded multicast frames from Source 1, as it is on the same switch, but
not from Source 2, because that is on a different switch.

Receiver 4 will be forwarded multicast frames from Source 2, as it is on the same switch, but
not from Source 1, because that is on a different switch.

To enable the Media Indexers (or Media Central Servers) to communicate with multicast
across a switch boundary then one of the methods described in the articles referenced in this
section B.20 must be deployed.



B.20.2 – Nexus switches Multicast Propagation
A similar problem should not exist with Nexus switches which form a vPC pair. It is quite
possible that similar fixes will work, but there is no "identical" article for Nexus as for
Catalyst. This issue was encountered at a major broadcaster during SEP2016 consultancy,
where Interplay 3.5 was being deployed on a system using a Nexus 5672 vPC pair along
with Nexus 2248TPE fabric extenders. It also affected EVS products, which were unable to
use multicast correctly.

The articles below reference similar issues, but the Nexus 5000 documentation from Cisco is
not sufficiently clear with regard to configuring IGMP snooping to overcome the issue.

Here are the articles found:


http://blog.alainmoretti.com/pim-ssm-through-nexus-vpc/
http://blog.lah.io/2014/01/troubleshooting-cisco-nexus-5500-igmp.html

and these are the VLANs where multicast needs to be propagated within VLANs (not
routed):
1241
1251

These commands were entered on both N5672 switches; multicast routing was not enabled, as
it is not required. Using PIM in this way allows the L2 features and/or "side effects" of PIM
to be used to address the problem of insufficient "local" multicast propagation.

conf t
feature pim
int vl 1241
ip pim sparse-mode
int vl 1251
ip pim sparse-mode

The issue was resolved after these commands were entered into BOTH Nexus 5672
switches. Interplay NOMI and the EVS devices were then able to communicate within their
own VLAN.

Also see
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide
/cli/CLIConfigurationGuide/IGMPSnooping.html

It is possible that the command below will have the same net effect as ip pim sparse-mode in
a VLAN.

switch(config-vlan)# ip igmp snooping querier IP-address

This configures a snooping querier when you do not enable PIM, because multicast traffic
does not need to be routed. The IP address is used as the source in messages. The default is
disabled.



At the time of writing (MAR2017) this command has not been used/tested by Avid.

Looking at this document about NEXUS 9000


https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-
x/interfaces/configuration/guide/b_Cisco_Nexus_9000_Series_NX-
OS_Interfaces_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-
OS_Interfaces_Configuration_Guide_7x_chapter_01000.html

vPC Multicast—PIM, IGMP, and IGMP Snooping

The software keeps the multicast forwarding state synchronized on both of the vPC peer
devices. The IGMP snooping process on a vPC peer device shares the learned group
information with the other vPC peer device through the vPC peer link; the multicast states are
always synchronized on both vPC peer devices. The PIM process in vPC mode ensures that
only one of the vPC peer devices forwards the multicast traffic to the receivers.

Each vPC peer is a Layer 2 or Layer 3 device. Multicast traffic flows from only one of the
vPC peer devices. You might see duplicate packets in the following scenarios:

Also, for NXOS 9.x, see the similar article:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/92x/multicast/b-
cisco-nexus-9000-series-nx-os-multicast-routing-configuration-guide-92x/b-cisco-nexus-
9000-series-nx-os-multicast-routing-configuration-guide-92x_chapter_0110.html

B.20.2.1 – Field Knowledge NEXUS & Multicast PART 1


Depending on the setup (vPC/HSRP or HSRP only), different commands may be necessary for
NEXUS 9000 deployments.

config t
vlan configuration <vlan id>
ip igmp snooping querier <ip of the gateway in the vlan>
ip igmp snooping fast-leave

A field engagement with FastServe and Pivot apparently needed these commands, but it was
never conclusively proved that this was the case. This should be done on all switches. The
example below uses the VIP of the HSRP-supported SVI:

config t
vlan configuration 103
ip igmp snooping querier 10.1.3.1

It is unclear whether an SVI must be configured on the L2 edge switch, but as a precautionary
measure I would advise this is also done.

USEFUL NEXUS COMMANDS FOR MULTICAST DEBUGGING



The commands below may be useful to see what is happening in respect of L2 multicast. In
some cases, use them with "pipes" to hone in on specific IP addresses. The event-history
variants cover, among other things, the mtrace, policy and CLI events for the IGMP process.

show ip igmp internal event-history igmp-internal | no-more
show ip igmp internal event-history debugs | no-more
show ip igmp route vrf all | no-more
show ip igmp interface vrf all | no-more
show ip igmp snooping vlan <x> | no-more
show ip igmp internal | no-more
show ip igmp internal errors | no-more
show ip igmp groups vrf all summary | no-more
show ip igmp internal event-history errors | no-more
show ip igmp internal event-history msgs | no-more
show ip igmp internal event-history vrf | no-more
show ip igmp internal event-history events | no-more
show ip igmp internal event-history policy | no-more
show ip igmp internal event-history cli | no-more
show ip igmp snooping groups detail | no-more
show ip igmp snooping explicit-tracking | no-more
show ip igmp snooping internal event-history vlan | no-more

PING MULTICAST 224.0.0.1 (ALL HOSTS)


PING MULTICAST 224.0.0.2 (ALL ROUTERS)

ping multicast 224.0.0.1 int vl [number] count 1

Some of these commands will produce 100,000-plus lines, so it is best to have PuTTY logging
(printable output) set up to harvest to a text file.

NOTE: The two commands that seem to give the best “short” output are:

show ip igmp snooping groups detail | no-more


show ip igmp snooping explicit-tracking | no-more

EXAMPLE CONFIG



THIS SYSTEM has 4 SWITCHES
2 x NEXUS 93180YC as a VPC PAIR LAYER 3 CORE
2 x NEXUS 9348 as STANDALONE L2 EDGE

MEDIA INDEXER is connected on the NEXUS 9348 as STANDALONE L2 EDGE


PRODUCTION ENGINE is on the NEXUS 93180YC as a VPC PAIR LAYER 3 CORE

SW1 CORE
conf
int lo9
desc LOOPBACK FOR MCAST ip igmp snooping querier
ip address 1.1.1.101/32
exit
vlan configuration 10,20,30
ip igmp snooping querier 1.1.1.101
end

SW2. CORE
conf
int lo9
desc LOOPBACK FOR MCAST ip igmp snooping querier
ip address 1.1.1.102/32
exit
vlan configuration 10,20,30
ip igmp snooping querier 1.1.1.102
end

SW3. EDGE



conf
int lo9
desc LOOPBACK FOR MCAST ip igmp snooping querier
ip address 1.1.1.103/32
exit
vlan configuration 10,20,30
ip igmp snooping querier 1.1.1.103
end

SW4. EDGE
conf
vlan configuration 10,20,30
ip igmp snooping querier 1.1.1.104
exit
int lo9
desc LOOPBACK FOR MCAST ip igmp snooping querier
ip address 1.1.1.104/32
end

TO REMOVE

no vlan configuration 10,20,30

Some tools that might help in debugging multicast L2 operation are listed below.

This Windows program might help one understand multicast better, but it will need some
“playtime” on a suitable network; it lists support for Win 10, but not explicitly any of the
server OSes.

MPING
https://www.microsoft.com/en-us/download/details.aspx?id=52307

OMPING
https://www.claudiokuenzler.com/blog/656/test-multicast-connectivity-working-omping-
linux

I have not had the opportunity to trial them at the time of writing (DEC 2019).
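
For reference, a typical omping invocation (addresses assumed) is run concurrently on each
node under test, with each node listing all participating nodes:

omping 192.168.30.11 192.168.30.12 192.168.30.13

If IGMP snooping is blocking propagation, the unicast responses will typically succeed while
the multicast responses are lost.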

We found that Pivot/FastServe communication would fail if ip pim sparse-mode was not
configured on the (VLAN 103) NEXUS CORE switch(es) along with the IGMP querier.

Note: While using feature pim technically requires the enhanced licensing for
NEXUS switches, only the most basic features of PIM necessary to “apparently”
correctly deal with L2 multicast are used, and no multicast routing is involved, so
morally this is not exploiting the “honor based” system. If only good, reliable, tried
and tested application notes were provided by Cisco (and other vendors) this would
not be necessary (maybe one does exist but is well hidden amongst the plethora of
other similar articles, and internet searches cannot dive deep enough).

Licensing Requirements for PIM:


PIM requires an Enterprise Services license. For a complete explanation of
the Cisco NX-OS licensing scheme and how to obtain and apply licenses, see
the Cisco NX-OS Licensing Guide.

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-
x/multicast/configuration/guide/b_Cisco_Nexus_9000_Series_NX-
OS_Multicast_Routing_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_N
X-
OS_Multicast_Routing_Configuration_Guide_7x_chapter_0100.html#reference_15
77C753E1C24CCC8AE12104FC84AC78

Or a later version (maybe mainly based around NX-OS v9.x):


https://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-
os/licensing/guide/b_Cisco_NX-OS_Licensing_Guide/b_Cisco_NX-
OS_Licensing_Guide_chapter_01.html



As can be seen from the table extract above, the ESSENTIALS “standard” license is
sufficient for operation with NEXIS and MediaCentral.

Note: NXOS LICENSING WITHOUT PIM NEXUS 9000


When using the simple no ip igmp snooping command described in
section B.20.2.2, NX-OS Essentials is sufficient; NX-OS Advantage
(formerly called Enterprise) is not required.

Using the two commands below, which seem to give the best “short” output, the following
information was tabulated:

show ip igmp snooping groups detail | no-more

show ip igmp snooping explicit-tracking | no-more

MULTICAST ADDRESS: KNOWN/ASSUMED FUNCTION
224.0.1.84: JINI
224.0.1.85: JINI
225.1.1.1: FAST SERVE / PIVOT
239.0.0.0/8: ADMINISTRATIVELY SCOPED range
239.255.102.18: UNKNOWN
239.255.255.250: Simple Service Discovery Protocol (SSDP) and/or multicast DNS (mDNS)
239.255.255.253: SLP, Service Location Protocol
239.255.2.3: USED BY AVID MEDIA INDEXER NOMI

The IP addresses associated with FastServe and Pivot were the only ones using 225.1.1.1.

ping multicast 224.0.0.1 interface vl 164 count 1


ping multicast 239.255.2.3 interface vl 164 count 1

ping multicast 239.255.255.253 interface vl 164 count 1 - may not get any replies to this

ping multicast 224.0.1.84 interface vl 164 count 1


ping multicast 224.0.1.85 interface vl 164 count 1 - may not get any replies to this

ping multicast 239.255.102.18 interface vl 164 count 1 - may not get any replies to this

B.20.2.2 – Field Knowledge NEXUS & Multicast PART 2


The ip igmp snooping querier method described above is, IMHO, rather cumbersome. A
simpler method is described below (thanks to my friendly technical marketing engineer
contact). This has yet to be field tested in an Avid solution at the time of writing (JUL2020).

By doing this:
switch(config)# vlan configuration 5
switch(config-vlan-config)# no ip igmp snooping

This disables IGMP snooping on VLAN 5. Also, if you choose a range for the VLAN
configuration, this can be executed for multiple VLANs at the same time.
This has to be done for all necessary VLANs in all NEXUS switches that use those VLANs.

Note: This is the “simplest method” but may not be the “best method”
where there is lots of multicast routing in progress …... in which case it is
necessary to get “busy” with igmp snooping queriers as described in
NETREQS appendix B20.2.1. Multicast is a “dark art” and I prefer simple
solutions.

You can do this with NXOS across the board, from N3K to N9K and anything in between, by
configuring IGMP snooping per VLAN. There is more info for the N9K, but the others should
have the same section: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/93x/multicast/b-cisco-nexus-9000-series-nx-os-multicast-routing-configuration-guide-93x/b-cisco-nexus-9000-series-nx-os-multicast-routing-configuration-guide-93x_chapter_0110.html#task_767AD7DE9F554649B7C7D5071521752B



This has been proven on a NEXUS 93000 (some older NXOS might support it, but it
might be deprecated on newer versions, or vice versa).

Some might consider this method to be a bit of a blunt instrument. I have published other
methods in this section should more granularity/complexity be desired. But I do not consider
this a dangerous option, as it is used only on specific VLANs, especially as other vendors'
products have IGMP snooping disabled globally as default. If there is no multicast routing
into this VLAN, and why on earth would there be in the Avid SERVER VLAN, then this is
not an issue.

The IGMP snooping RFC and subsequent implementations have a major flaw IMHO: they
should have excluded the 239.x.x.x locally scoped addresses and allowed them to propagate as
normal, as such flows are generally small and insignificant. After all, IGMP was brought in to dampen
the burden of multicast ROUTED packets, not L2 multicast.

Maybe it would be nice if vendors enabled the exclusion of IP ranges from IGMP operations
based on an access list, but I do not think they do. I am not a multicast guru and prefer
simple solutions where possible.

In this Section B.20 I have re-published some Cisco articles from Catalyst days, but I have
yet to find a similar article for NEXUS.

Using ip pim sparse-mode has licensing implications on NEXUS (which was not the case on
Catalyst, and may not be an issue for CBS), needing the enhanced Cisco licence even though
the IGMP forwarding functionality we actually use is the tip of the iceberg. What Avid
needs, with link-local multicast use, is a side effect of pim sparse-mode, not any multicast
routing functionality.

Of course TV stations make a lot of use of multicast in corporate LANs, to watch user-selected
channels via their workstations, so they have lots of experience to bring to the table when
it is used for MROUTING. But, as I stated above, I would question why any such MROUTING
would need to exist in the Avid VLANs; and if these production network switches are a
shared environment with other equipment vendors where MROUTING does exist for other
VLANs, then this simple approach should be joined by a suitable ACL to prevent such
packets entering the AVID VLANs.

B.20.3 – UCS Blade Servers Multicast Propagation


A similar challenge might exist with Cisco UCS (and maybe other vendors too). The Cisco
Community article below has some helpful information, including a short PPTX on one of
the posts.
https://community.cisco.com/t5/unified-computing-system/multicast-on-ucs/td-p/1377577

B.20.4 Some other useful Multicast URLs & Information

https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipmulti_pim/configuration/xe-3s/asr903/imc-pim-xe-3s-asr903-book/ip_multicast_technology_overview.pdf

Limited Scope Addresses


The range 239.0.0.0 to 239.255.255.255 is reserved as administratively or limited scoped
addresses for use in private multicast domains. These addresses are constrained to a local
group or organization. Companies, universities, and other organizations can use limited scope
addresses to have local multicast applications that will not be forwarded outside their domain.
Routers typically are configured with filters to prevent multicast
traffic in this address range from flowing outside an autonomous system (AS) or any user-
defined domain. Within an AS or domain, the limited scope address range can be further
subdivided so that local multicast boundaries can be defined.

RFC 2365—Administratively Scoped Addresses


RFC 2365 provides limited guidelines on how the multicast address space can be divided and
used privately by enterprises.
The terminology “Administratively Scoped IPv4 multicast space” relates to the group address
range of 239.0.0.0 to 239.255.255.255. The key properties of Administratively Scoped IP
Multicast are that:
• Packets addressed to Administratively Scoped multicast addresses do not cross configured
administrative boundaries. The limits of these scope boundaries often are called “Zones” or
“Scoped Zones.”
• Administratively Scoped multicast addresses are locally assigned and are not required to be
unique across administrative boundaries. Table 9 and the bullet items that follow it
summarize the recommendations of RFC 2365.
• Organization-Local Scope addresses are recommended for private use within an
organization for intersite applications that will be run regionally or globally.
• The address range numerically below the Organization-Local Scope is intended as the
expansion space for the Organization-Local Scope. Organizations can allocate or subdivide
this range as needed either to extend the Organization-Local Scope or to create other
geographically smaller subscopes within the Enterprise.
• Site-Local Scope addresses represent the smallest possible scope in the network. More
applications are being developed that default to using this scope (unless configured
otherwise) to ensure that the scope of their application is limited to the smallest scope size.
This is why it is important to adhere to RFC 2365 guidelines for the Site-Local Scope.



Note: It is unfortunate that many applications do not behave in this manner and instead often
default to using addresses in a global scope instead. This results in their application traffic
being multicast far beyond where it is desired by the network administrator.
• The address range numerically below the Site-Local Scope is intended as expansion space
for the Site-Local Scope. Organizations can allocate these ranges for private use if they
exceed the 239.255.0.0/16 Organization-Local range.

Table 9: Administratively Scoped Addresses 239.0.0.0/8

Range                              Description                                Reference
239.000.000.000-239.191.255.255    Organization-Local Scope Expansion Space   [Meyer, RFC 2365]
239.192.000.000-239.195.255.255    Organization-Local Scope                   [Meyer, RFC 2365]
239.195.000.000-239.254.255.255    Site-Local Scope Expansion Space           [Meyer, RFC 2365]
239.255.000.000-239.255.255.255    Site-Local Scope                           [Meyer, RFC 2365]

https://www.cisco.com/c/dam/en/us/support/docs/ip/ip-multicast/ipmlt_wp.pdf

https://www.tldp.org/HOWTO/Multicast-HOWTO-2.html

TTL.
The TTL (Time To Live) field in the IP header has a double significance in multicast. As
always, it controls the lifetime of the datagram, to avoid it being looped forever due to
routing errors. Routers decrement the TTL of every datagram as it traverses from one
network to another, and when its value reaches 0 the packet is dropped.

The TTL in IPv4 multicasting also has the meaning of "threshold". Its use becomes evident
with an example: suppose you set up a long, bandwidth-consuming video conference between
all the hosts belonging to your department. You want that huge amount of traffic to remain in
your LAN. Perhaps your department is big enough to have various LANs; in that case you
want the hosts belonging to each of your LANs to attend the conference, but in no case do
you want to collapse the entire Internet with your multicast traffic. There is a need to limit
how "far" multicast traffic will expand across routers. That's what the TTL is used for.
Routers have a TTL threshold assigned to each of their interfaces, and only datagrams with a
TTL greater than the interface's threshold are forwarded. Note that when a datagram traverses
a router with a certain threshold assigned, the datagram's TTL is not decremented by the
value of the threshold; only a comparison is made. (As before, the TTL is decremented by 1
each time a datagram passes across a router.)

A list of TTL thresholds and their associated scope follows:

TTL Scope
----------------------------------------------------------------------
0 Restricted to the same host. Won't be output by any interface.
1 Restricted to the same subnet. Won't be forwarded by a router.
<32 Restricted to the same site, organization or department.
<64 Restricted to the same region.
<128 Restricted to the same continent.
<255 Unrestricted in scope. Global.

NETREQS-NEW_for_NEXIS_and_MediaCentral_V2.8.docx Page 117 of 222


Nobody knows what "site" or "region" mean exactly. It is up to the administrators to decide
what these limits apply to.
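
For illustration, on Cisco IOS the TTL threshold is applied per interface with the ip multicast ttl-threshold command; a minimal sketch (the interface and value are examples only):

interface GigabitEthernet0/1
 ip multicast ttl-threshold 32   ! forward only multicast datagrams with TTL greater than 32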

The TTL trick is not always flexible enough for all needs, especially when dealing with
overlapping regions or trying to establish geographic, topological, and bandwidth limits
simultaneously. To solve these problems, administratively scoped IPv4 multicast regions were
established in 1994 (see D. Meyer's "Administratively Scoped IP Multicast" Internet draft).
This does scoping based on multicast addresses rather than on TTLs. The range 239.0.0.0 to
239.255.255.255 is reserved for this administrative scoping.

Also consider that some of the reading on VXLAN suggests that 239.x.y.z addresses
are also used for propagation of BUM traffic between VTEPs, and NOMI uses 239.255.2.3
(see the table above).

Configure VXLAN Flood and Learn with Multicast Core


https://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/nx-os-software/200262-
Configure-VxLAN-Flood-And-Learn-Using-Mu.html

VXLAN Series – Multicast usage in VXLAN – Part 3


https://blogs.vmware.com/vsphere/2013/05/vxlan-series-multicast-usage-in-vxlan-part-3.html

• Section 21.2.3: Multicast and Broadcast over VXLAN


https://www.arista.com/assets/data/pdf/user-manual/um-eos/Chapters/VXLAN.pdf

B.21 LoopGuard and FHRP


The use of Loop Guard together with a First Hop Redundancy Protocol has proven to be unsuitable for
use with ISIS 7000 deployments. The URL below provides a detailed review of this feature,
and the reader should quickly establish that it will cause err-disable port shutdowns on the
Cisco switches.

http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a0080094640.shtml

The STP loop guard feature provides additional protection against Layer 2 forwarding loops
(STP loops). An STP loop is created when an STP blocking port in a redundant topology
erroneously transitions to the forwarding state. This usually happens because one of the ports
of a physically redundant topology (not necessarily the STP blocking port) no longer receives
STP BPDUs. In its operation, STP relies on continuous reception or transmission of BPDUs
based on the port role. The designated port transmits BPDUs, and the non-designated port
receives BPDUs.
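
If loop guard is suspected, a hedged sketch of the IOS commands to check for affected ports and to disable the feature (the interface is an example only):

show spanning-tree inconsistentports
show interfaces status err-disabled
! disable loop guard globally, or per interface:
no spanning-tree loopguard default
interface GigabitEthernet1/0/1
 spanning-tree guard none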

B.22 Speed setting on Switch ports


Avid documentation does not mandate the setting of fixed speeds on NICs and Ethernet
switch ports. There are many differing opinions within Avid. Generally, for all modern
environments the settings on both the NIC and the Switch should be 'auto'. Only in the rare
instances where 'auto' doesn't produce the desired result should it be changed, and then the
change needs to be made on both the NIC and the Switch.

Speed and Duplex are negotiated during the first few milliseconds of communication
between NIC and Switch. Once negotiated, it won't change. So, if 'auto' negotiates to 1000
Mbps Full Duplex, as it almost always does, then manually setting the NIC and Switch to
1000 Full will not get you better performance - and you will have to remember to re-
configure the NIC and Switch Port if you want to use it for something else.

All situations have different variables. Also consider that devices that are turned off will
report a lower speed while connected but "dormant" and still physically powered.

This is one reason why interface descriptions are a MASSIVE help when diagnosing
problems, combined with commands like show interfaces status
(Cisco). Yes, it takes time, but interface descriptions and good documentation are like a
pension payment: an investment in the future that WILL pay back.

If SNMP and/or simple syslog logging is correctly configured, it will show such things as speed
changes, but you do not want to configure logging on devices that are regularly disconnected
or powered off, because that just generates "noise", i.e. useless event data.

Should an operator go to the effort of manually setting the speed/duplex on a troublesome
connection, this generally does not fix the cause, but overcomes the symptom. Such
incorrect speed issues are frequently a sign of a cabling defect.

One European customer (Q4 2017) reported SEMAPHORE errors with ISIS 7x00 clients
connecting via a FEX 2248TPE. There were multiple FEXs, all with the same configuration, but
one FEX had clients that showed intermittent SEMAPHORE errors, with no pattern to
the occurrence of the errors. As a diagnostic step the customer set some clients to a fixed
speed of 1000 full duplex and left others to use auto-negotiation. The clients that
were fixed to 1000 FDX no longer exhibited the error. The theory here is that while all the
FEX ports showed operation at Gigabit speed, a possible cabling or NIC issue was
causing intermittent renegotiation to 100 Mbps operation, and then flipping back to 1000 Mbps
operation; this would not cause link status events to be reported, but it would upset the NLE
operation.

The interface commands below will assist. If you have intermittent disconnect problems,
perhaps select some ports to monitor link status in the log, to see if NEXIS disconnect events
coincide with port status changes (of course, you need to record the timing and look for a match).

>>> logging event port link-status

Also, change some (different) ports to a fixed speed and full duplex to see if that alters the behaviour:
>>> speed {10 | 100 | 1000 | auto}
>>> duplex {auto | full | half}

Of course, this needs reliable interface port descriptions to be added too.



Here are three "port flap" sequences from one customer's log extracts, on three different ports,
that caused clients to disconnect at layer 2/1, hence causing the NEXIS client software on
the workstation to disconnect from the storage, interrupting the workflow on the NLE.
2019 Mar 27 11:50:47.632 TVS-EDITOR-N9K %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface
Ethernet1/32 is down (Link failure)
2019 Mar 27 11:51:02.880 TVS-EDITOR-N9K %ETHPORT-5-SPEED: Interface Ethernet1/32,
operational speed changed to 1 Gbps
2019 Mar 27 11:51:02.880 TVS-EDITOR-N9K %ETHPORT-5-IF_DUPLEX: Interface
Ethernet1/32, operational duplex mode changed to Full
2019 Mar 27 11:51:02.880 TVS-EDITOR-N9K %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface
Ethernet1/32, operational Receive Flow Control state changed to on
2019 Mar 27 11:51:02.880 TVS-EDITOR-N9K %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface
Ethernet1/32, operational Transmit Flow Control state changed to off
2019 Mar 27 11:51:03.006 TVS-EDITOR-N9K %ETHPORT-5-IF_UP: Interface Ethernet1/32 is
up in mode access
2019 Mar 27 11:51:11.602 TVS-EDITOR-N9K %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface
Ethernet1/32 is down (Link failure)
2019 Mar 27 11:51:15.782 TVS-EDITOR-N9K %ETHPORT-5-SPEED: Interface Ethernet1/32,
operational speed changed to 1 Gbps
2019 Mar 27 11:51:15.782 TVS-EDITOR-N9K %ETHPORT-5-IF_DUPLEX: Interface
Ethernet1/32, operational duplex mode changed to Full
2019 Mar 27 11:51:15.782 TVS-EDITOR-N9K %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface
Ethernet1/32, operational Receive Flow Control state changed to on
2019 Mar 27 11:51:15.782 TVS-EDITOR-N9K %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface
Ethernet1/32, operational Transmit Flow Control state changed to off
2019 Mar 27 11:51:15.793 TVS-EDITOR-N9K %ETHPORT-5-IF_UP: Interface Ethernet1/32 is
up in mode access

And
2019 Mar 27 12:23:27.063 TVS-EDITOR-N9K %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface
Ethernet1/36 is down (Link failure)
2019 Mar 27 12:23:31.325 TVS-EDITOR-N9K %ETHPORT-5-SPEED: Interface Ethernet1/36,
operational speed changed to 1 Gbps
2019 Mar 27 12:23:31.325 TVS-EDITOR-N9K %ETHPORT-5-IF_DUPLEX: Interface
Ethernet1/36, operational duplex mode changed to Full
2019 Mar 27 12:23:31.325 TVS-EDITOR-N9K %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface
Ethernet1/36, operational Receive Flow Control state changed to on
2019 Mar 27 12:23:31.325 TVS-EDITOR-N9K %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface
Ethernet1/36, operational Transmit Flow Control state changed to off
2019 Mar 27 12:23:31.336 TVS-EDITOR-N9K %ETHPORT-5-IF_UP: Interface Ethernet1/36 is
up in mode access
2019 Mar 27 12:23:49.321 TVS-EDITOR-N9K %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface
Ethernet1/36 is down (Link failure)
2019 Mar 27 12:23:53.832 TVS-EDITOR-N9K %ETHPORT-5-SPEED: Interface Ethernet1/36,
operational speed changed to 1 Gbps
2019 Mar 27 12:23:53.832 TVS-EDITOR-N9K %ETHPORT-5-IF_DUPLEX: Interface
Ethernet1/36, operational duplex mode changed to Full
2019 Mar 27 12:23:53.832 TVS-EDITOR-N9K %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface
Ethernet1/36, operational Receive Flow Control state changed to on
2019 Mar 27 12:23:53.832 TVS-EDITOR-N9K %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface
Ethernet1/36, operational Transmit Flow Control state changed to off
2019 Mar 27 12:23:53.842 TVS-EDITOR-N9K %ETHPORT-5-IF_UP: Interface Ethernet1/36 is
up in mode access

And

2019 Mar 27 13:33:36.510 TVS-EDITOR-N9K %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface
Ethernet1/37 is down (Link failure)
2019 Mar 27 13:33:40.197 TVS-EDITOR-N9K %ETHPORT-5-SPEED: Interface Ethernet1/37,
operational speed changed to 1 Gbps
2019 Mar 27 13:33:40.197 TVS-EDITOR-N9K %ETHPORT-5-IF_DUPLEX: Interface
Ethernet1/37, operational duplex mode changed to Full

2019 Mar 27 13:33:40.197 TVS-EDITOR-N9K %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface
Ethernet1/37, operational Receive Flow Control state changed to on
2019 Mar 27 13:33:40.197 TVS-EDITOR-N9K %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface
Ethernet1/37, operational Transmit Flow Control state changed to off
2019 Mar 27 13:33:40.211 TVS-EDITOR-N9K %ETHPORT-5-IF_UP: Interface Ethernet1/37 is
up in mode access
2019 Mar 27 13:34:37.471 TVS-EDITOR-N9K %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface
Ethernet1/37 is down (Link failure)
2019 Mar 27 13:34:41.754 TVS-EDITOR-N9K %ETHPORT-5-SPEED: Interface Ethernet1/37,
operational speed changed to 1 Gbps
2019 Mar 27 13:34:41.754 TVS-EDITOR-N9K %ETHPORT-5-IF_DUPLEX: Interface
Ethernet1/37, operational duplex mode changed to Full
2019 Mar 27 13:34:41.754 TVS-EDITOR-N9K %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface
Ethernet1/37, operational Receive Flow Control state changed to on
2019 Mar 27 13:34:41.754 TVS-EDITOR-N9K %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface
Ethernet1/37, operational Transmit Flow Control state changed to off
2019 Mar 27 13:34:42.160 TVS-EDITOR-N9K %ETHPORT-5-IF_UP: Interface Ethernet1/37 is
up in mode access

My Vanilla Configs always use the minimum required settings to create expected normal
operation, hence additional policies may be applied based on each site's methods and
specific requirements.

However, one might conclude that 1G clients must always work at 1G, so why not disable
auto-negotiation, as it offers no benefit? Under normal circumstances setting a fixed speed will
not be harmful, and setting the duplex is "belt and braces" because 1G and above should only
function in Full Duplex mode anyway. The same applies to 10G and 40G devices. But really
the end device should negotiate the highest speed available; the fact that this is showing
disconnects/reconnects suggests other issues might be the root cause, and setting
fixed speed/duplex might overcome the issue while masking the root cause.
interface Ethernet1/32
description MEDIA COMPOSER NLE HP workstation Z840
switchport
switchport access vlan 20
speed 1000
duplex full
spanning-tree port type edge
no shutdown

B.23 Duplicate IP Address Error Message Troubleshoot – Later Cisco IOS


The URL referenced below explains a bug between the way Windows versions later than Vista
execute their duplicate IP check and how the Cisco switches react to that ARP. Later s/w
versions of C4500-X and C4948E/C4900M can exhibit the bug, and ISIS 5x00 systems have
been known to fall victim.

The command below, added to Cisco ports connecting to ISIS engines to control the "ip device
tracking" feature of later Cisco IOS, can be used to address this problem.

ip device tracking maximum 0

http://www.cisco.com/image/gif/paws/116529/116529-problemsolution-product-00.pdf



https://supportforums.cisco.com/discussion/12172096/duplicate-ip-0000-conflict-8021x-
windows-7-clients

http://communities.labminutes.com/routing-and-switching/device-tracking-issue-on-catalyst-
switches-15-2(1)e/

Alternative IOS commands discussed in the article are:

ip device tracking probe delay 10


or
ip device tracking probe use-svi

Also from VMWARE KB:

False duplicate IP address detected on Microsoft Windows Vista and later virtual machines
on ESX/ESXi when using Cisco devices on the environment (1028373)

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC
&externalId=1028373

Symptoms
• When you assign an IP address on Windows Vista and later versions, you see a
duplicate IP address conflict.
• When you restart Windows Vista and later versions, you receive a 169.254.x.x IP.
• When you set up the same virtual machine on a vSwitch with no uplink port on
the vSwitch, the IP address is assigned successfully.
• When you assign the same IP address to a Windows 2003 virtual machine on the
same vSwitch, the IP address is assigned successfully.
Cause
This issue occurs when the Cisco switch has gratuitous ARPs enabled or the ArpProxySvc
replied to all ARP requests incorrectly.

B.24 Using WINMERGE to compare config files (FIELD-TIP)


WINMERGE is an open source solution for Windows (NOT FOR MAC, unless in a VM!)
that will compare two (text) files and highlight all the changes on common lines, which need
not be at the SAME line number in each file.

WinMerge (http://winmerge.org) is an open source and free differencing and merging tool for
Windows. It allows you to open two text files (in our case switch config files) in a split
screen, the two files are automatically aligned and all the different lines are highlighted.
Furthermore, the specific differences are additionally highlighted within that line. It is a very
effective tool to compare the configuration files of redundant switches to make sure there are
no unintentional differences and it helps you spot configuration mistakes. A second usage
case is to compare the before- and after- configurations to verify what was really changed.

Example: a comparison of two core switches where the automatic highlighting helped us
identify a misconfigured HSRP interface - a crucial line had been omitted on Core 1!



B.25 Serial connection alternatives to DB9 using USB (FIELD-TIP)
Many PCs no longer have a 9-pin D-type connector, and if you have a MAC you are also out of luck
for a "REAL" serial connection.

For serial connectivity I use a StarTech adapter. This works on Windows 7 x64 (in use since
2010) and Windows XP32 (in use since 2008). I have also made it work on a MAC in
conjunction with iTERM2, and in a Windows VM so I can use Putty. I have been doing this
on a MAC too since 2014.

USB to RS-232 Serial DB9 Adapter

http://us.startech.com/product/ICUSB232-USB-to-RS232-DB9-Serial-Adapter-Cable-Male-
to-Male-Serial-Adapter-USB-to-Serial

This I use in combination with a Cisco rolled cable for my connectivity, or via various other
adapters and flat cable for some of the NON-CISCO vendors who have different
presentation.

In 2016 I was advised about a combined cable which integrates the USB-serial adapter and
the rollover cable. When we techy-types congregate, we always have to swap tips!

On Amazon.com search for


Ftdi USB to Serial / Rs232 Console Rollover Cable for Cisco Routers - RJ45
Price - USD12.90

On Amazon.co.uk search for


Asunflower® Cisco USB Console Cable FTDI USB to RJ45 for Windows Vista MAC Linux
RS-232
Price GBP 11.99

NOTE: USB-C versions are also available.



Works great on MAC and PC.

B.25.1 Use the management network on NEXUS switches & remote connection
While on Catalyst switches using the management interface ports was an extremely painful
process, on NEXUS switches it is a breeze. When setting up a common network
deployment style, such as two Core and two Edge devices, having two of the USB console cables
described above, a cheap 8-port unmanaged network switch, and 5 Ethernet patch cords,
together costing less than 50 EUR/GBP/USD, will save a massive amount of time and repay its
outlay several times over. After doing basic setup via slow console cables, you can telnet/ssh
to all switches at the same time. Such improved speed and flexibility could save a day (or more)
of config and diagnostics. It is worth buying for EVERY project even if it is never used again,
though it is more environmentally sustainable if moved from site to site. Amazon next-day
delivery (or the local equivalent) is your friend. Have this in your toolbox for site. This is even
MORE important given the restrictions placed upon us during the Covid-19 pandemic,
where a lot more has to be done remotely.

Although this diagram shows VNC, RDP is just fine too, probably better in an all-Windows
environment. With a 1U server and eight LAN ports, the config JUMP workstation and all
the dumb hosts can exist in one physical device. The serial ports in the Windows JUMP
device appear as COM6, COM7, COM8, COM9.

At the site in mainland Europe from which the above diagram is taken, we are using the open
source OpenVPN (https://openvpn.net/) solution, which can be deployed as a virtual
appliance in the SAME host server, and the first two connections carry no recurring cost. I
will be doing something similar for a project in the USA next month, all from the comfort of my
desk in the UK.



This probably works fine for other switch vendors too.

Considering most recent 1U servers come with 4 x 1G, adding another 4-port card is not
expensive, and any cheap NIC will do for the task of a DUMB PING host. Again, on Amazon
I have seen them for less than 50 EUR/GBP/USD, and certainly for less than 100
EUR/GBP/USD.

B.26 Ethernet connection alternatives using USB (FIELD-TIP)


Messing about with your primary Ethernet connection while on site always has "hidden
teeth" when returning to the office and you cannot connect, because you changed the
parameters.

If all you need on site is a basic connection for telnet and TFTP into switches, then a simple
USB 2.0 LAN Adapter to Ethernet RJ45 Network Cable 10/100Mbps Mac Windows 7/8
will cost less than $10. Mine came from eBay and works just fine. Normally I use it in the
Windows VM, but often the Windows VM must have a ROUTE ADD statement at the
CLI/COMMAND window (must be RUN AS ADMIN) to direct packets to the test networks
(or VLANs being configured) instead of the primary bound path of the VM to the "outside
world".
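
A hedged example of such a ROUTE ADD, run from an elevated command prompt (all addresses are illustrative; add -p only if you want the route to persist across reboots):

route ADD 192.168.20.0 MASK 255.255.255.0 192.168.1.254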

B.27 Ping testing Cisco links (FIELD-TIP)


When ping testing to validate a path, always use something like

ping x.x.x.x repeat 10000

You can have 5 out of 5 good replies multiple times in a row, but when you send 10,000 pings
and only get 9,946 replies, it means something fishy is going on; that 0.5% failure rate
should not be there. It takes only a couple of minutes to finish if all goes well.

Note that this is about ping testing at the CLI between directly connected Cisco devices, and
not at the Windows CLI, which would take an epoch!
B.28 Service Timestamps and Time setting in F10 S4810 & S60
To correctly view logs it is important that the correct time exists and that the switch is
configured to timestamp the logs, otherwise you get useless log entries such as the ones below.

25w3d4h: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Gi 1/6


25w3d4h: %STKUNIT0-M:CP %IFMGR-5-OSTATE_UP: Changed interface state to up: Gi 1/6

This config example was extracted from a running S4810, and the S60 was identical.

service timestamps log datetime localtime show-timezone


service timestamps debug datetime localtime show-timezone
ntp server 10.10.10.41 <<<< CHANGE IP ADDRESS AS APPROPRIATE
clock timezone EST -5
clock summer-time EDT recurring 2 Sun Mar 02:00 1 sun Nov 02:00

Note: this example is for the USA, but it gives the flavor of how to do this for any time zone.



B.29 Service Timestamps and Time setting in Dell N3024/48
To correctly view logs it is important that the correct time exists. In the Dell N3024/N3048,
logs are automatically timestamped; there is no service timestamps command to be
manipulated, but the syntax of the summer-time command is different, as is the NTP setting.

sntp server 10.10.10.41


sntp unicast client enable
clock timezone -5 minutes 0 zone TEST
clock summer-time recurring EU|USA offset 60 ZONE EDT
or
clock summer-time recurring first sun mar 02:00 last sun oct 03:00 offset 60 ZONE
ABC
Note that the week parameter can be first, last, or 1-5.

Note that SNTP needs the second command sntp unicast client enable, otherwise it
will not pick up the time.

This is a config example from a running switch.

sntp unicast client enable


sntp server 10.229.252.150
clock summer-time recurring EU zone "CEST"
clock timezone 1 minutes 0 zone "CET"

THIS URL is a good primer:


https://timetoolsltd.com/network-time-servers/the-difference-between-ntp-and-sntp/

The CLI output below shows the NTP server and status, and that the time source is SNTP.

KLAB_ACCESS03#show clock

11:33:46 CET(UTC+1:00) Jan 21 2019


Time source is SNTP

KLAB_ACCESS03#show sntp status

Client Mode: Unicast


Last Update Time: Jan 21 09:11:09 2019

Unicast servers:
Server Status Last response
--------------- ---------------------- --------------------------
10.229.252.150 Success 09:11:09 Jan 21 2019

KLAB_ACCESS03#show sntp server

Server Host Address: 10.229.252.150


Server Type: IPv4



Server Stratum: 2
Server Reference Id: NTP Bits: 0x7fc04431
Server Mode: Server
Server Maximum Entries: 8
Server Current Entries: 1

SNTP Servers
------------

Host Address: 10.229.252.150


Address Type: IPv4
Priority: 1
Version: 4
Port: 123
Last Update Time: Jan 21 09:11:09 2019
Last Attempt Time: Jan 21 10:33:21 2019
Last Update Status: Success
Total Unicast Requests: 5041
Failed Unicast Requests: 0

KLAB_ACCESS03#

B.30 Service Timestamps and Time setting in DELL S4048


To correctly view logs, it is important that the correct time exists and that the switch is
configured to timestamp the logs, otherwise you get useless log entries such as the "BEFORE" example below.

AFTER
Jul 11 09:50:19 UTC: %STKUNIT1-M:CP %SEC-3-AUTHENTICATION_ENABLE_SUCCESS: Enable authentication success on vty0 (
10.134.133.221 ) for user avid
Jul 11 09:50:14 UTC: %STKUNIT1-M:CP %SEC-5-LOGIN_SUCCESS: Login successful for user avid on line vty0 ( 10.134.133.221 )
Jul 11 09:50:07 UTC: %STKUNIT1-M:CP %SEC-5-LOGOUT: Exec session is terminated for user avid on line vty0 ( 10.134.133.221 )
Jul 11 09:49:27 UTC: %STKUNIT1-M:CP %SYS-5-CONFIG_I: Configured from vty0 ( 10.134.133.221 )by avid

BEFORE
2w5d13h: %STKUNIT1-M:CP %SEC-3-AUTHENTICATION_ENABLE_SUCCESS: Enable authentication success on vty0 (
10.134.133.221 ) for user avid
2w5d13h: %STKUNIT1-M:CP %SEC-5-LOGIN_SUCCESS: Login successful for user avid on line vty0 ( 10.134.133.221 )
2w5d13h: %STKUNIT1-M:CP %SEC-5-LOGOUT: Exec session is terminated for user avid on line vty0 ( 10.134.133.221 )
2w5d13h: %STKUNIT1-M:CP %SEC-3-AUTHENTICATION_ENABLE_SUCCESS: Enable authentication success on vty0 (
10.134.133.221 ) for user avid
2w5d13h: %STKUNIT1-M:CP %SEC-5-LOGIN_SUCCESS: Login successful for user avid on line vty0 ( 10.134.133.221 )

This config example was extracted from a running S4810; the S4048 syntax is identical.

service timestamps log datetime localtime show-timezone


service timestamps debug datetime localtime show-timezone
ntp server 10.10.10.41 <<<< CHANGE IP ADDRESS AS APPROPRIATE
clock timezone EST -5 0
clock summer-time EDT recurring 2 Sun Mar 02:00 1 sun Nov 02:00

Note: this example is for the USA, but it gives the flavor of how to do this for any time zone.

FROM A S4048 IN JAN 2019


clock timezone CET 1 0
clock summer-time CEST recurring last Sun Mar 2:00 last Sun Oct 2:00



service timestamps log datetime localtime show-timezone
service timestamps debug datetime localtime show-timezone

Note: the Dell S4048 logs are listed newest first (unlike Cisco);
the command below will list them in natural order and will also not ask
you to press the space bar after each page.

S4048#show logging reverse | no-more

The CLI output below shows the NTP server status and associations.
KLAB_VLTCORE1#sh ntp status
Clock is synchronized, stratum 3, reference is 10.229.252.150, vrf-id is 0
frequency is -20.796 ppm, stability is 0.066 ppm, precision is -19
reference time dff01867.51c59290 Mon, Jan 21 2019 10:07:35.319 UTC
clock offset is -1.265075 msec, root delay is 10.630 msec
root dispersion is 54.397 msec, peer dispersion is 15.435 sec
peer mode is client
KLAB_VLTCORE1#
KLAB_VLTCORE1#sh ntp associations
remote           vrf-Id  ref clock      st  when  poll  reach  delay    offset    disp
====================================================================================
169.254.1.13     0       0.0.0.0        16  -     1024  0      0.00000  0.00000   0.00000
*10.229.252.150  0       216.239.35.0   2   940   1024  377    0.34600  -1.2650   0.61500
* master (synced), # backup, + selected, - outlier, x falseticker
KLAB_VLTCORE1#

B.31 How to find IP address of a MAC address


First this command
TVS_SWC002_5672-2_CORE(config-if)# sh mac add int e1/5
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen,+ - primary entry using vPC Peer-Link
VLAN MAC Address Type age Secure NTFY Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 1241 0060.dd43.2bd8 dynamic 0 F F Eth1/5

Then this command


MIBCTSWC002_5672-2_CORE(config-if)# sh ip arp | inc 0060.dd43.2bd8
10.64.124.35 00:01:35 0060.dd43.2bd8 Vlan1241
TVS_SWC002_5672-2_CORE(config-if)#

And voila! The device on port e1/5 has a MAC address of 0060.dd43.2bd8 and an IP
address of 10.64.124.35.

Note: This only works on the same switch. MAC addresses on a
subordinate L2 switch cannot be found this way.
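
When the host hangs off a subordinate switch, the usual approach is to walk the L2 path by hand: if the MAC table points at an uplink/trunk port, repeat the lookup on the next switch down until a host-facing port is reached. A sketch reusing the MAC address from the example above:

SW1# show mac address-table address 0060.dd43.2bd8
SW2# show mac address-table address 0060.dd43.2bd8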



The Managed Switch Port Mapping Tool is available from
https://www.switchportmapper.com/

An open source variant for Windows, called Switch Miner, which appears
to do much the same job, is available from
https://sourceforge.net/projects/switchminer/

It is a little buggy but does the job, and is fine for occasional use. It works
well on the Cisco Catalyst devices it is designed for (I have not been able to try NEXUS
at the time of writing, FEB 2019); it is less good on other vendors
(Dell/F10), as is to be expected.

B.32 Minimum Length Of A Network Cable?


From a networking perspective I would always suggest a minimum of 2m. While I have
heard of this many times as best practice, I have looked before and found no confirmed
reference. I recall it from optical connections, but this could be networking folklore, as one
can buy shorter Cat 5/6 patch cables from reputable suppliers.

A Google search for "minimum length network cable" found these two articles:
https://networkengineering.stackexchange.com/questions/7483/minimum-ethernet-cable-length

There is no minimum cable length when talking about standard copper-cables. When
it comes to fiber, there is a minimum length depending on technology, diodes and so
on.

note, fibre minimums are a function of power. longer reach (ie. higher power) expects
higher attenuation; at short lengths the signal can blind (and even damage) the
receiver. –

http://boards.straightdope.com/sdmb/showthread.php?t=599432

I did check 802.3, to find that the 2.5m minimum length applies to CSMA/CD
networks, per IEEE 802.3, where overly short cables can cause the collision detection
to malfunction. This might apply to half-duplex gigabit lines as well. It should not
apply to switch based networks which won't have physical layer collisions.

So, there is both confirmation and a little contradiction.

B.33 Cisco Nexus vPC Best Practices


When setting up vPC pairs with Cisco Nexus it is important to have a vPC keepalive link in
addition to a vPC peer link. This is explained in the URLs below.

This can be done on the management ports or via front panel ports, where an aggregate link
can be used for "double protection".


Cisco Nexus 9000 Series NX-OS Interfaces Configuration Guide, Release 9.3(x)
Chapter: Configuring vPCs

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/93x/interfaces/con
figuration/guide/b-cisco-nexus-9000-nx-os-interfaces-configuration-guide-93x/b-cisco-
nexus-9000-nx-os-interfaces-configuration-guide-93x_chapter_01000.html

Configuring vPCs NEXUS 9000

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-
x/interfaces/configuration/guide/b_Cisco_Nexus_9000_Series_NX-
OS_Interfaces_Configuration_Guide/configuring_vpcs.pdf

Design and Configuration Guide: Best Practices for Virtual Port Channels (vPC) on Cisco
Nexus 7000 Series Switches Revised: June 2016

https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_b
est_practices_design_guide.pdf



Virtual PortChannel Quick Configuration Guide

https://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-
switches/configuration_guide_c07-543563.html

An example is given below from a Nexus 7000.


Main switch N7k

interface port-channel77
description VPC-PEER-LINK
switchport
switchport mode trunk
spanning-tree port type network
no lacp suspend-individual
vpc peer-link

feature vpc

vpc domain 77
peer-switch
role priority 4096
system-priority 4096
peer-keepalive destination 10.10.10.2 source 10.10.10.1
delay restore 150
peer-gateway

interface mgmt0
vrf member management
ip address 10.10.10.1/30



Secondary switch N7k

interface port-channel77
description VPC PEER-LINK
switchport
switchport mode trunk
spanning-tree port type network
no lacp suspend-individual
vpc peer-link

feature vpc

vpc domain 77
peer-switch
role priority 8192
system-priority 4096
peer-keepalive destination 10.10.10.1 source 10.10.10.2
delay restore 150
peer-gateway

interface mgmt0
vrf member management
ip address 10.10.10.2/30

The vPC peer keepalive must be present for successful vPC pairing. The
loss of a vPC peer keepalive path on a running switch while the vPC peer
link is active will not break the pairing, but if the vPC peer link fails while
there is no active vPC peer keepalive path, unexpected operation can occur.
Hence additional resilience in the vPC peer keepalive, using a port-channel
with two members and a dedicated VRF context, is desirable. On a
Nexus 93180YC the most cost-effective option is to use 10G TWINAX,
while on a Nexus 9336C either two 100G ports have to be "sacrificed" with 40G
TWINAX, or optical breakout is used, possibly on two different ports so that
six more 10G ports remain available. Do not use QSA and optics as this is
more expensive and inefficient than TWINAX.
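
A hedged sketch of that approach for one peer (the VRF name, port-channel number, and addresses are examples only; the other peer mirrors it):

vrf context VPC-KEEPALIVE
!
interface port-channel10
  description vPC peer keepalive
  no switchport
  vrf member VPC-KEEPALIVE
  ip address 10.99.99.1/30
!
vpc domain 77
  peer-keepalive destination 10.99.99.2 source 10.99.99.1 vrf VPC-KEEPALIVE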

B.33.1 PATH diversity NEXUS 93000 series switches


In a 1U device there is limited HW diversity that can be achieved, as there are no I/O cards,
but often there are different slices or PHY segments, hence it is often advisable to deploy
vPC peer links and vPC peer keepalive paths at opposite ends of the switch. Some devices
have no slice diversity.

B.34 Cisco Nexus 93000 series Port Breakout & QSA
This section primarily describes differing options for port breakout on the Nexus 93000/9300
series of switches.

These principles apply equally to other vendors' network switch products, but the syntax
necessary to achieve the same result may differ.

B.34.1 For Nexus 93180LC-EX


The use of ports on the Cisco Nexus 93180LC at port speeds less than 40/25G has
limitations based on the s/w version, and the articles below explain some of the intricacies.

Cisco Nexus 93180LC-EX NX-OS Mode Hardware Installation Guide | Managing the Switch
| Configuring Ports

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/hw/n93180lcex_hig/g
uide/b_c93180lcex_nxos_mode_hardware_install_guide/b_c93180lcex_nxos_mode_hardwar
e_install_guide_chapter_01011.html

Configuring Ports

This switch has 32 ports of which 28 are configured as 40/50-Gigabit ports and 4 are
configured as 40/100-Gigabit ports. You can change the ways that these ports are used
by using templates to configure all the ports in commonly used ways or by configuring
ports individually. Three of the templates that you can use are the following:

• 28 40/50-Gigabit ports and 4 40/100-Gigabit ports (default configuration)



• 24 40/50-Gigabit ports and 6 40/100-Gigabit ports

• 18 40/100-Gigabit ports

You can individually configure the ports 1 to 28 as indicated in the following table:

Odd Numbered Port (1 to 27)                           Even Numbered Port (2 to 28) below the Odd Numbered Port
40-Gigabit QSFP+ port (default)                       40-Gigabit QSFP+ port (default)
40-Gigabit port with 4x10-Gigabit breakout feature    Hardware disabled
100-Gigabit QSFP28 port                               Hardware disabled
100-Gigabit port with 4x25-Gigabit breakout feature   Hardware disabled
1/10-Gigabit port using a QSFP-to-SFP adapter         1/10-Gigabit port using a QSFP-to-SFP adapter in the port

Note: The even numbered port must use the same speed as the odd numbered
port in the same vertical pair of ports. Connect the odd numbered port first
to set the speed for the vertical pair of ports.

You can individually configure ports 29 to 32 as follows:

• 40/100-Gigabit QSFP+/QSFP28 uplink port (default)

• 40/100-Gigabit QSFP+/QSFP28 uplink port can be individually broken out with the
4x10-Gigabit or 4x25-Gigabit breakout feature.

For information on configuring ports for this switch, see the Cisco Nexus 9000 Series
NX-OS Interfaces Configuration Guide.

Also see

Cisco 40-Gigabit Ethernet Transceiver Modules Compatibility Matrix | Cisco Nexus 9000
Series (Fixed 9300)

https://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibilit
y/matrix/40GE_Tx_Matrix.html#_Toc501460917

B.34.2 For Nexus 93108TC-EX, 93180YC and 9336C-FX2

This also applies to the Nexus 93240YC-FX2 and 93360YC, and also to other family models
with 100G QSFP28 ports.

Cisco Nexus 93108TC-EX NX-OS Mode Switch Hardware Installation Guide


Updated: November 13, 2017

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/hw/n93108tcex_hig/g
uide/b_c93108tcex_nxos_mode_hardware_install_guide/b_c93108tcex_nxos_mode_hardwar
e_install_guide_chapter_01011.html#concept_efn_m5q_4x
Chapter: Connecting the Switch to the Network

Uplink Connections
The six 40- and 100-Gigabit Ethernet QSFP28 uplink ports support 4 x 10-Gigabit and 4 x
25-Gigabit breakout cables and 1- and 10-Gigabit Ethernet with QSFP-to-SFP adapters.

The six 40- and 100-Gigabit Ethernet QSFP28 uplink ports support 10-, 25-, 40-, 50-, and
100-Gigabit connectivity. You can use 4x10-Gigabit and 4x25-Gigabit breakout cables with
these ports.
For a list of transceivers and cables used by this switch for uplink connections, see http://
www.cisco.com/c/en/us/support/interfaces-modules/transceiver-modules/products-device-
support-tables-list.html.

interface breakout module 1 port 49 map 10g-4x

This particular command breaks the 40GE port out as four 10GE ports.

Other options of the command are:

LAB_93180YC-EX(config)# interface breakout module 1 port 50 map ?


10g-4x Breaks out a 40G high BW front panel port into four 10G ports
25g-4x Breaks out a 100G high BW front panel port into four 25G ports
50g-2x Breaks out a 100G high BW front panel port into two 50G ports
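
Reverting a broken-out port uses the no form of the same command (a sketch, using the same port as above):

LAB_93180YC-EX(config)# no interface breakout module 1 port 49 map 10g-4x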

Software

NXOS: version 7.0(3)I4(2)


BIOS compile time: 08/26/2016
NXOS image file is: bootflash:///nxos.7.0.3.I4.2.bin
NXOS compile time: 7/21/2016 8:00:00 [07/21/2016 16:09:32]

Hardware
cisco Nexus9000 93180YC-EX chassis

The ports would show as below; in this example ports 1/1 and 1/2 on a NEXUS
9336C-FX2 are broken out to 10G using the command

interface breakout module 1 port 1-2 map 10g-4x

N9336C-asw02# sh int status


--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1/1 10GE link to avid notconnec 14 auto auto QSFP-4X10G-AC10M
Eth1/1/2 link to Avid Nexis connected 18 full 10G QSFP-4X10G-AC10M
Eth1/1/3 ***parked port*** disabled 998 auto auto QSFP-4X10G-AC10M
Eth1/1/4 ***parked port*** disabled 998 auto auto QSFP-4X10G-AC10M
Eth1/2/1 10GE link to avid xcvrAbsen 14 auto auto --
Eth1/2/2 ***parked port*** xcvrAbsen 998 auto auto --
Eth1/2/3 temp isis tests ri xcvrAbsen 171 auto auto --
Eth1/2/4 temp isis tests ri xcvrAbsen 171 auto auto --
Eth1/3 link to Avid Nexis connected 18 full 40G QSFP-H40G-AOC10M
Eth1/4 link to Avid Nexis connected 18 full 40G QSFP-H40G-AOC10M



Depending on the breakout methodology used, it may be necessary to apply port speed and
duplex commands to the new broken-out interfaces.

B.34.3 Optical breakout useful information


A QSA adapter is a simple but inefficient way to make a 40G port become a 10G port. In
fact a QSFP+ is already four SFP+ combined into a single package, and the switch
software/configuration defines how the fibers will be "physically" presented: as 40G (four
lanes of 10G "transmission") or as four independent ports.

So a 40G interface might be e1/49, but once broken out as four 10G ports, it would appear as:
e1/49/1
e1/49/2
e1/49/3
e1/49/4

While 40G was a "development" of 10G, not all QSFP+ modules support breakout, so the
equipment vendor should be consulted for the correct devices. For Cisco see the 40GBASE QSFP
Modules Data Sheet:

https://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-
modules/data_sheet_c78-660083.html

So just as a 40G port can be broken out to 4 x 10G TWINAX, an optical QSFP+ can do the
same thing; it just needs some extra parts for the physical connection.

Have a look at these URLs


https://www.fs.com/uk/products/68402.html?gclid=EAIaIQobChMI4eTqq5vN4gIVrL_tCh3x
lwMIEAYYAyABEgKV3PD_BwE

and
https://www.youtube.com/watch?v=HER-Cu83AbI

and
http://www.fiberopticshare.com/modular-patch-panel-breakout-cabling-choose-future-
proofing-network.html

Of course the same principles apply to 100G >> 4 x 25G (or 4x10G)

Here are some pictures of products that I found with a Google search of "QSFP breakout
panel". No special cables have to be made: a COTS MTP cable connects from the QSFP to
the back of the breakout box, and the front provides the LC connections for onward patching
to the end 10/25G device.

Yellow outer-sheath fibre cables indicate a Long Range (LR) optical solution, and cyan
cables indicate a Short Range (SR) optical solution (OM3 or OM4).

GREAT VIDEO https://www.youtube.com/watch?v=HER-Cu83AbI (BROKEN
DEC 2020)
MTP Breakout Cables Video. https://www.youtube.com/watch?v=kodwHjVDiEc
BLACK BOX MTP Connector Rackmount Fiber Solutions. https://www.youtube.com/watch?v=ns-CNrus9dM
MPO connectors and cables | Fiber optic tutorial
https://www.youtube.com/watch?v=aDBII83W82Q
Explanation of MTP/MPO fibers https://www.youtube.com/watch?v=a1kMpvdc86U

Or, in some cases, use a simple patch cable like the one below, typically available in 1-100M lengths:

B.34.4 TWINAX breakout useful information
Breakout using TWINAX is a little different. Below are some images from a Google search
of "TWINAX breakout cable". These cables are short range, typically 3-5M (passive); no
extra patching is required as they are a direct point-to-point connection.

Another great video: Direct Attach Cable (DAC) vs Active Optical Cable (AOC) - Which Do
I Need To Buy For My Rack? https://www.youtube.com/watch?v=ACTMTHg-FVk



B.34.5 QSA adapter
When breakout is not possible/viable, an alternative approach is to use a QSA adapter. This is
a physical layer device, so it will not need additional testing and should not have an
adverse impact on buffering characteristics, but it will change the way a port (based on
speed) interacts with the buffering.

These devices are usually multivendor capable if purchased from a reputable source.

A port using a QSA adapter can use SFP+ transceivers and SFP transceivers.

When using a QSA adapter, it may be necessary to apply port speed and duplex commands to
the port, but in most cases this is unlikely to be needed, as the devices used are likely
to have fixed parameters.

The use of QSA is a very blunt tool for reducing port speed, and a very inefficient use of a
highly capable port. It is much more efficient to use a QSFP+ device which is capable of
breakout, as described above.

B.35 What is "=" in Cisco part number?


https://supportforums.cisco.com/t5/application-networking/what-is-quot-quot-in-cisco-part-
number/td-p/700991

For example:

Q.
What does the "=" mean in Cisco part number? It seems there is no difference between the
parts with a "=" and the parts without a "=", such as "GLC-SX-MM" and "GLC-SX-MM="

A.
• A part with the = sign at the end is what is called a "FRU" part; where FRU
stands for "Field Replaceable Units".
• Those are the parts that can be used as "spare" or be shipped individually to
replace damaged units.
• The new parts that are ordered directly from a reseller or from Cisco usually
don't come with the = sign.



B.36 LINUX TEAM CONFIGURATION FOR TEAMED ADAPTERS

This example provides the command based on the physical interfaces being called p1p1 and
p1p2 and the teamed device being called team0. Of course, the “binding” to the NEXIS
clients must still be performed using the documented procedure such as shown below.

30 APR 2019
This has not been tested by Engineering, hence it is not fully ratified
and not fully supported; it exists in a grey area.

IMHO it is better to have it than not to have it, like a space-saver
spare tyre: not perfect, but better than no spare tyre.

It is easily tested, to see if it is the "cause" of any problem, by
removing one of the links! It will prevent more problems than it
might create, but it also needs to be deployed by sufficiently skilled
technicians, as there is also correct config that must exist in the
network switch; so I would advise any customer that deploys it to
include network consultant involvement, and to get a test document to
support the deployment.

It is an OS function, not a driver function.

I would argue vigorously with CS, on behalf of any site that deploys
it, should it encounter CS inertia.

avidctl platform config nexis --system-name=wavd-nexis --systemdirectors=wavd-nexis1.wavd.com --user=wavdadmin --password=Avid123 --net-use=team0 --mode=1

Depending on the version of Linux, a GUI option such as NMUTILS might also be available.

For CENTOS 7 used by MediaCentral Cloud UX server this resource is very helpful

https://www.snel.com/support/setup-lacp-bonding-interface-centos-7/

One of the key files that might need editing, to change the load balancing HASH, is:

nano /etc/sysconfig/network-scripts/ifcfg-bond0

These two articles/URLs below give great information on the xmit_hash_policy value:



https://access.redhat.com/documentation/en-
us/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding

https://www.unixmen.com/linux-basics-create-network-bonding-on-centos-76-5/

The default load balancing hash is layer2 (parameter=0), but layer2+3 (parameter=2) is much
better. Layer3+4 should be avoided, as it does not work well with the UDP-based
NEXIS signalling protocol.

# cat /etc/sysconfig/network-scripts/ifcfg-Bond0
BONDING_OPTS="downdelay=0 miimon=1 mode=802.3ad xmit_hash_policy=2 updelay=0"

TEAMD is a newer teaming/bonding method and is used by the NEXIS Engine to control its
bonding/teaming; it offers more granular options for transmit hash load balancing (the
equivalent of xmit_hash_policy), and the naming of the function is different.

runner.tx_hash (array)
List of fragment types (strings) which should be used for packet Tx hash
computation. The following are available:

eth — Uses source and destination MAC addresses.

vlan — Uses VLAN id.

ipv4 — Uses source and destination IPv4 addresses.

ipv6 — Uses source and destination IPv6 addresses.

ip — Uses source and destination IPv4 and IPv6 addresses.

l3 — Uses source and destination IPv4 and IPv6 addresses.

tcp — Uses source and destination TCP ports.

udp — Uses source and destination UDP ports.

sctp — Uses source and destination SCTP ports.

l4 — Uses source and destination TCP and UDP and SCTP ports.

Beginning with the March 2021 NEXIS release, the TEAMD configuration will use "l3", because
it works better with the UDP-based NEXIS signalling protocol.
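
For illustration, the equivalent runner setting in a teamd JSON configuration would look like the sketch below; this file content is an assumption for illustration, not the NEXIS agent's actual generated config:

{
  "device": "team0",
  "runner": {
    "name": "lacp",
    "tx_hash": ["l3"]
  }
}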



B.36.1 TEXT of COMMANDS for SFT

ifcfg_p1p1:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=p1p1
UUID=82f52b69-92e9-3d0b-bf74-f62d5de380e1
DEVICE=p1p1
ONBOOT=yes
AUTOCONNECT_PRIORITY=-999

ifcfg_p1p2:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=p1p2
UUID=34668a02-c7c4-3df4-b23a-50837d949d9b
DEVICE=p1p2
ONBOOT=yes
AUTOCONNECT_PRIORITY=-999

ifcfg_team0:
DEVICE=team0
BONDING_OPTS="updelay=0 resend_igmp=1 use_carrier=1 miimon=100 arp_all_targets=any min_links=0 downdelay=0 xmit_hash_policy=layer2 primary_reselect=always fail_over_mac=none arp_validate=none mode=active-backup lp_interval=1 primary=p1p1 all_slaves_active=0 arp_interval=0 ad_select=stable num_unsol_na=1 num_grat_arp=1"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=172.18.10.34
PREFIX=24
GATEWAY=172.18.10.1
DNS1=172.18.10.28
DNS2=172.18.10.29
DOMAIN=thk-avid.local
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=team0
UUID=c5efe9fc-7c5a-4622-885c-8bceaedc5591
ONBOOT=yes

Avid registry:
# Written by nexis agent - modifications will be overwritten
AvidFos\Parameters\UsrvTransport\Hires 1
AvidFos\Parameters\UseIfnames team0
AvidFos\Parameters\RemoteSystemDirectors sda.thk-avid.local

ifcfg_team0_slave1:
TYPE=Ethernet
NAME=team0_slave1
UUID=fcac276b-94b3-430b-83c3-24120fbbd924
DEVICE=p1p1
ONBOOT=yes
MASTER=team0
SLAVE=yes
MASTER_UUID=c5efe9fc-7c5a-4622-885c-8bceaedc5591

ifcfg_team0_slave2:
TYPE=Ethernet
NAME=team0_slave2
UUID=8254f867-968c-42d8-8805-f9033855d401
DEVICE=p1p2
ONBOOT=yes
MASTER=team0
SLAVE=yes
MASTER_UUID=c5efe9fc-7c5a-4622-885c-8bceaedc5591

B.36.2 TEXT of COMMANDS for LACP


NOTE: Summarised, as full details are given above for SFT and this is only a small variation.

DEVICE=team0
BONDING_OPTS="resend_igmp=1 updelay=0 use_carrier=1 miimon=100
arp_all_targets=any min_links=0 downdelay=0 xmit_hash_policy=layer2
primary_reselect=always fail_over_mac=none lp_interval=1 mode=802.3ad
all_slaves_active=0 ad_select=stable num_unsol_na=1 num_grat_arp=1"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=172.18.10.34
PREFIX=24
GATEWAY=172.18.10.1
DNS1=172.18.10.28
DNS2=172.18.10.29
DOMAIN=thk-avid.local
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=team0
UUID=c5efe9fc-7c5a-4622-885c-8bceaedc5591
ONBOOT=yes



Note: this example uses the default xmit_hash_policy=0 (layer2). It would be better to use xmit_hash_policy=2 (layer2+3), because this would use IP addresses and not MAC addresses; MAC addresses would not have sufficient variation if the other end of the connection is in a different subnet, as the default gateway MAC would always be used. However, some FHRP solutions may allow two MAC addresses to be used, giving some outbound load balancing even at Layer 2. Using xmit_hash_policy=1 (layer3+4) should be avoided.
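To confirm which policy a running bond is actually using, the kernel exposes it both in sysfs and in /proc (a quick check, assuming the bond is named bond0):

cat /sys/class/net/bond0/bonding/xmit_hash_policy
grep "Transmit Hash Policy" /proc/net/bonding/bond0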

Here is another CentOS example (FEB 2021 deployment); as can be seen, the defaults are omitted.

[root@TVSppspcux00200 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0


BONDING_OPTS="downdelay=0 miimon=1 mode=802.3ad updelay=0"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=172.19.42.62
PREFIX=24
GATEWAY=172.19.42.254
DNS1=172.31.1.52
DNS2=172.31.1.53
DOMAIN=paris.tv5monde.org
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond0
UUID=86e7d2f5-15a0-4b07-9c4a-acee05fa85b9
DEVICE=bond0
ONBOOT=yes
[root@TVSppspcux00200 ~]#

B.36.3 SFT TEAMING CONCLUSIONS


During customer testing in NOV 2018: SFT Teaming (do not confuse the use of “teaming” here with TEAMD; the term is used in the sense of Intel NIC teaming) appears unsuited to operation with MediaCentral Cloud UX Server.

B.36.4 LACP TEAMING CONCLUSIONS


During customer testing in NOV 2018: LACP Teaming is well suited to operation with MediaCentral Cloud UX Server. While not seamless, its operation is robust and predictable. This method should be used in combination with a switch that supports Multi-Chassis Link Aggregation (MLAG).



Note: Linux uses a default of “xmit_hash_policy=layer2”. It would be better to use xmit_hash_policy=layer2+3, because this would use IP addresses and not MAC addresses; MAC addresses would not have sufficient variation if the other end of the connection is in a different subnet, as the default gateway MAC would always be used. However, some FHRP solutions may allow two MAC addresses to be used, giving some outbound load balancing even at Layer 2. Using xmit_hash_policy=layer3+4 should be avoided.

Similar principles would apply to Windows Server.
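As a sketch of the Windows Server equivalent (the team and NIC names here are invented for the example), the in-box LBFO teaming can be created in LACP mode with an IP-address-based load-balancing algorithm, which is analogous to layer2+3:

# Create an LACP team using IP-address hashing
New-NetLbfoTeam -Name "Team0" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm IPAddresses

# Verify the team state
Get-NetLbfoTeam -Name "Team0"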

B.36.5 DISCONNECT TESTS CONCLUSIONS


During customer testing in NOV 2018: It is far better to have resilient links than single links; it gives choices of how and when to deal with a failure that are not available to a single-connected machine. The benefits associated with this "unsupported method" greatly exceed the risks.

B.36.6 DEPLOYMENT CONSIDERATIONS with MEDIA CENTRAL CLOUD UX


The Bonded Interface or Teamed Interface (differing techniques for a similar outcome) should be created before the installation of the MediaCentral Cloud UX server; this is because the keep-alive daemon needs to “bind” to the “appropriate” named interface.

Deployment of a Bonded Interface or Teamed Interface after the installation of MediaCentral Cloud UX server will require several configuration files to be modified to re-focus keep-alive communication to the “correct” named interface, as it does not bind to an IP address but to a NAMED interface.

At the time of writing (DEC 2020) using a teamed/bonded connection is not explicitly supported on bare-metal server deployments (Linux or Windows), but with Kubernetes containers the network connection of the Pod/Application is abstracted from the external physical connection, hence there should be no conflict or issues, and many data centres successfully deploy in this manner.

B.36.7 CHECKING/DEBUGGING LACP BOND CONNECTION STATUS in LINUX


The URLs below are excellent resources for understanding how to configure and check bonding in many different Linux distributions, including the CentOS used by Avid MCCUX.

https://backdrift.org/lacp-configure-network-bonding-linux

and
https://serverfault.com/questions/810649/how-does-one-diagnose-linux-lacp-issues-at-the-kernel-level

The text extracts below compare a good LACP bond and a failed LACP bond; they were originally captured side by side and are best viewed on a bigger screen.



MCCUX_LINUX LACP BOND GOOD (MediaCentral Cloud UX SERVER)

This port is connected to a switch configured for LACP (in fact a Cisco vPC). Previously the switch was not configured for LACP; the LACP config was then applied and the switch ports were toggled to bring up the port channel, hence there are some historical errors that would disappear if the Linux networking was restarted or the device rebooted.

# cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:60:dd:42:24:fa
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 15
Partner Key: 32831
Partner Mac Address: 00:23:04:ee:be:05

Slave Interface: enp20s0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 00:60:dd:42:24:fa
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: 00:60:dd:42:24:fa
port key: 15
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 4096
system mac address: 00:23:04:ee:be:05
oper key: 32831
port priority: 32768
port number: 265
port state: 61

Slave Interface: ens5
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: 00:60:dd:42:24:fb
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: 00:60:dd:42:24:fa
port key: 15
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 4096
system mac address: 00:23:04:ee:be:05
oper key: 32831
port priority: 32768
port number: 16649
port state: 61

Note the “oper key” is 32831; with 32768 subtracted = 63, and the vPC and po ID used on this port happened to be 63 too, no coincidence. We will see whether the same applies for another po below.

MCCUX_LINUX LACP BOND DOWN (MediaCentral Cloud UX SERVER)

This port is connected to a switch NOT YET configured for LACP (in fact a Cisco vPC). Note that in this example the load-balancing hash policy is using the default layer 2. This is not optimal (but it is acceptable) for Avid data flows, as if devices are in different VLANs, MAC addresses will not change much because of the default gateway. However, if there is an FHRP in place there should be at least two destination MAC addresses at play, so there is at least half a chance of load balancing. But load-balancing decisions are not based on bandwidth balance; they use a mathematical XOR-based algorithm.

$ cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min Links: 0
Aggregator selection policy (ad_select): stable
System priority : 65535
System MAC address : 00:06:dd:42:24:fa
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 15
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00

Slave Interface: enp20s0
MII Status: up
Speed: 10000Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:60:dd:42:24:fa
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State : none
Partner Churn State: churned
Actor Churn Count : 0
Partner Churn Count: 1
details actor lacp pdu:
System priority : 65535
System MAC address : 00:06:dd:42:24:fa
port key: 15
port priority: 255
port number: 1
port state: 77
details partner lacp pdu:
System priority : 65535
System MAC address : 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1

Slave Interface: ens5
MII Status: up
Speed: 10000Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 00:06:dd:42:24:fb
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State : churned
Partner Churn State: churned
Actor Churn Count : 1
Partner Churn Count: 1
details actor lacp pdu:
System priority : 65535
System MAC address : 00:06:dd:42:24:fa
port key: 15
port priority: 255
port number: 2
port state: 69
details partner lacp pdu:
System priority : 65535
System MAC address : 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1
[root@w0ppspcux00200 ~]#

(Local MAC address 00:06:dd:42:24:fb, a Myricom NIC.)



Note that PORT STATE = 61 is GOOD. The OPER KEY is related to the local po (port-channel) reference of the switch.

MCCUX_LINUX LACP BOND GOOD (hash policy still at the default layer2):

$ cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority : 65535
System MAC address : 00:06:dd:42:25:38
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 15
Partner Key: 32829
Partner Mac Address: 00:23:04:ee:be:05

Slave Interface: enp20s0
MII Status: up
Speed: 10000Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: 00:60:dd:42:25:38
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State : none
Partner Churn State: none
Actor Churn Count : 0
Partner Churn Count: 1
details actor lacp pdu:
System priority : 65535
System MAC address : 00:06:dd:42:25:38
port key: 15
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
System priority : 4096
System MAC address : 00:23:04:ee:be:05
oper key: 32829
port priority: 32768
port number: 16641
port state: 61

Slave Interface: ens5
MII Status: up
Speed: 10000Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 00:60:dd:42:25:39
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State : none
Partner Churn State: none
Actor Churn Count : 1
Partner Churn Count: 1
details actor lacp pdu:
System priority : 65535
System MAC address : 00:06:dd:42:25:38
port key: 15
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
System priority : 4096
System MAC address : 00:23:04:ee:be:05
oper key: 32829
port priority: 32768
port number: 257
port state: 61

IT DOES: 32829 - 32768 = 61, and this port is using po/vpc 61.

Again, PORT STATE = 61 is GOOD, and the OPER KEY is related to the local po reference of the switch.

MCCUX_LINUX LACP BOND with hash policy adjusted, but LACP not yet operational on the switch. The output below shows that the Transmit Hash Policy has been successfully adjusted:

[root@w0ppspcux00300 ~]# cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min Links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:60:dd:42:25:08
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 15
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00

The all-ZEROS MAC address shows that LACP is not operational yet.

BONDING_OPTS="downdelay=0 miimon=1 mode=802.3ad xmit_hash_policy=2 updelay=0"

This can be added to /etc/sysconfig/network-scripts/ifcfg-bond0 using nano (or even vi for the bravehearted).

Note: The offered MAC address is the same for each link; it is controlled by the sending device, in this case the vPC, and is the same for every link. But as this is a dedicated segment (not a shared hub), that is not an issue for correct Ethernet operation.
CORE2# sh vpc role

vPC Role status
----------------------------------------------------
vPC role                        : primary
Dual Active Detection Status    : 0
vPC system-mac                  : 00:23:04:ee:be:05
vPC system-priority             : 4096
vPC local system-mac            : 40:b5:c1:00:a0:27
vPC local role-priority         : 120
vPC local config role-priority  : 120
vPC peer system-mac             : 54:9f:c6:48:82:27
vPC peer role-priority          : 120
vPC peer config role-priority   : 120
CORE2#

Other useful resources


Port state definition for LACP
/* Port state definitions (43.4.2.2 in the 802.3ad standard) */
#define AD_STATE_LACP_ACTIVITY 0x1
#define AD_STATE_LACP_TIMEOUT 0x2
#define AD_STATE_AGGREGATION 0x4
#define AD_STATE_SYNCHRONIZATION 0x8
#define AD_STATE_COLLECTING 0x10
#define AD_STATE_DISTRIBUTING 0x20
#define AD_STATE_DEFAULTED 0x40
#define AD_STATE_EXPIRED 0x80

See this URL for more information and onward references


https://access.redhat.com/discussions/3357541
https://www.ieee802.org/3/ad/public/mar99/seaman_1_0399.pdf

Below is an example of a broken connection due to an issue in the server. The root cause of this was a FLAPPING interface which then went into ERRDISABLE status. A shutdown, a 10 second wait, and a no shutdown fixed the issue and the interface came up correctly.

$ cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: down <<<<<<<<
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min Links: 0
Aggregator selection policy (ad_select): stable
System priority : 65535
System MAC address : 00:06:dd:42:25:08
bond bond0 has no active aggregator <<<<

!! TEXT BELOW MISSING <<<<<<<<<<<<
!! Active Aggregator Info:
!! Aggregator ID: 1
!! Number of ports: 1
!! Actor Key: 15
!! Partner Key: 1
!! Partner Mac Address: 00:00:00:00:00:00

Slave Interface: enp20s0
MII Status: down <<<<<<<<<<<<
Speed: 10000Mbps
Duplex: full
Link Failure Count: 31
Permanent HW addr: 00:60:dd:42:25:08
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State : churned
Partner Churn State: churned
Actor Churn Count : 1
Partner Churn Count: 2
details actor lacp pdu:
System priority : 65535
System MAC address : 00:06:dd:42:25:08
port key: 0
port priority: 255
port number: 1
port state: 5
details partner lacp pdu:
system mac address: 00:23:04:ee:be:05
oper key: 32833 <<<< PO65 OFFERED <<<<
port priority: 32768
port number: 273
port state: 7 <<<<<<<<<<

Slave Interface: ens5
MII Status: down <<<<<<<<<<
Speed: 10000Mbps
Duplex: full
Link Failure Count: 31
Permanent HW addr: 00:06:dd:42:25:09
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State : none
Partner Churn State: churned
Actor Churn Count : 1
Partner Churn Count: 2
details actor lacp pdu:
System priority : 65535
System MAC address : 00:06:dd:42:25:08 <<<
port key: 0
port priority: 255
port number: 2
port state: 13
details partner lacp pdu:
system mac address: 00:23:04:ee:be:05
oper key: 32833 <<< PO65 OFFERED <<<
port priority: 32768
port number: 16657
port state: 7 <<<<<<<<<<

Below is an example of a CLEAN BOOT, where LACP is already configured on the switch. Also shown in the bond config file is xmit_hash_policy=2, which appears as "Transmit Hash Policy: layer2+3 (2)" in the bonding status.

[root@TVSppspcux00100 ~]# cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:60:dd:42:25:38
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 15
Partner Key: 32829
Partner Mac Address: 00:23:04:ee:be:05

Slave Interface: enp20s0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:60:dd:42:25:38
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 00:60:dd:42:25:38
port key: 15
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 4096
system mac address: 00:23:04:ee:be:05
oper key: 32829
port priority: 32768
port number: 16641
port state: 61

Slave Interface: ens5
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:60:dd:42:25:39
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 00:60:dd:42:25:38
port key: 15
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 4096
system mac address: 00:23:04:ee:be:05
oper key: 32829
port priority: 32768
port number: 257
port state: 61

[root@TVSppspcux00100 network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-Bond0
BONDING_OPTS="downdelay=0 miimon=1 mode=802.3ad xmit_hash_policy=2 updelay=0"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=172.19.42.61
PREFIX=24
GATEWAY=172.19.42.254
DNS1=172.31.1.52
DNS2=172.31.1.53
DOMAIN=paris.tv5monde.org
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=Bond0
UUID=de3f2498-d152-4be8-a9d5-0e8a2408b5bf
DEVICE=bond0
ONBOOT=yes
[root@TVSppspcux00100 network-scripts]#

To summarise the basics of configure and test:

1. Configure the bond BEFORE the NEXIS client is loaded.

2. Configure the bond with xmit_hash_policy=2, to ensure this results in layer2+3 load balancing:

[root@TVSppspcux00100 network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-Bond0
BONDING_OPTS="downdelay=0 miimon=1 mode=802.3ad xmit_hash_policy=2 updelay=0"

[root@TVSppspcux00100 ~]# cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation

Transmit Hash Policy: layer2+3 (2)

3. Configure the NEXIS client to bind with the bonded interface.

4. Test the client with ABU to ensure bandwidth flows from the client across both paths and into the client across both paths; a quick verification sketch follows this list.

5. Understand that load balancing will only occur between multiple end points; a single client and a single engine is a single flow and will therefore use a single path only, and even with two engines the hash may choose a single member.
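As a quick check of steps 2 and 4 (a minimal sketch; assumes the bond is named bond0), the key fields can be pulled from the kernel's bonding status in one command:

grep -E "Transmit Hash Policy|MII Status|Aggregator ID|port state" /proc/net/bonding/bond0

Both slaves should report the same Aggregator ID and "port state: 61" when the LACP bundle is healthy.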

B.36.8 TEAMD or BONDING?


Do you prefer an automatic gearbox or a manual gearbox? A rhetorical question, but not a silly one… read on…

From what I have seen of TEAMD, it does not really offer many advantages for “basic teaming”, and the way it is used in NEXIS it becomes "bound" to the application/FW version, which could be considered a disadvantage. It is not documented (and that frustrates me too) but NEXIS is ALWAYS using “teaming” of sorts; TEAMD controls what type of teaming from within the GUI, either SFT (or other variations of the same meaning, such as MS LBFO, or active-backup in *IX speak) or LACP. I think TEAMD is "necessary" to get the GUI functionality, while bonding cannot do that (now you understand the reference above to auto vs. manual gearbox!!)

Three useful articles below to help the reader understand some more of the fine detail without
too much glossy marketecture.

https://tobyheywood.com/network-bonding-vs-teaming-in-linux/
https://www.redhat.com/en/blog/if-you-bonding-you-will-love-teaming
https://www.admin-magazine.com/Articles/Link-aggregation-with-kernel-bonding-and-the-
Team-daemon/(offset)/6

So, after reading those, I think one could also use another metaphor… do you want “DIY” or “Silver Service”? Both have their places and appropriate optimal deployment scenarios.

Also look at section 4.6 for details on naming conventions that have been around for a LONG time; unfortunately every OS seems to interpret the details a little differently and add a fresh dialect.



B.37 Nexus Watch Command (Field-Tip)
On a project in NOV 2018, the watch command was used on a Nexus 93180 switch to auto-refresh the statistics from the interfaces; multiple SSH sessions were used to allow several interfaces to be monitored concurrently.

watch sh int e1/17 | section rate

Using the pipe modifier allows a reduced set of pertinent information from the show interface output to be viewed, with a regular update interval of 2 seconds.

Some example output is shown below:
Every 2.0s: vsh -c "sh int e1/17 | section rate" Wed Nov 21 14:47:17 2018

5 seconds input rate 391224 bits/sec, 164 packets/sec


5 seconds output rate 213456 bits/sec, 148 packets/sec
input rate 391.22 Kbps, 164 pps; output rate 213.46 Kbps, 148 pps

B.38 Automating Backing Up Cisco Config Files


Below are some articles on automating the backup of config files from Cisco devices
B.38.1 CATALYST

https://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-122-
mainline/46741-backup-config.html#ab

Backup Configuration to a TFTP Server


This example is to save the running config to a TFTP server (10.1.1.1) every Sunday at
23:00:



Router(config)# kron policy-list Backup
Router(config-kron-policy)# cli show run | redirect tftp://10.1.1.1/test.cfg
Router(config-kron-policy)# exit
!
Router(config)# kron occurrence Backup at 23:00 Sun recurring
Router(config-kron-occurrence)# policy-list Backup
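Once the occurrence is configured, it can be verified with a standard IOS show command (the exact output wording varies by release):

Router# show kron schedule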

B.38.2 NEXUS
http://crazyitpro.blogspot.com/2013/07/schedule-automatic-backup-config-in.html

Nexus-Sw1(config)# feature scheduler                  // Enable the scheduler service in Nexus

Nexus-Sw1(config)# scheduler job name backup-daily    // Job name

Nexus-Sw1(config)# scheduler aaa-authentication username abcd password abcd@123
                                                      // AAA authentication for the job created above (if AAA is configured)

Nexus-Sw1(config)# scheduler job name backup-daily
copy running-config tftp://192.168.1.23/$(SWITCHNAME)-cfg.$(TIMESTAMP)
                                                      // IP of the TFTP server; the file will be saved with switch name and timestamp
exit

Nexus-Sw1(config)# scheduler schedule name backup-daily   // Set up the schedule to run the job

Nexus-Sw1(config-schedule)# time ?
  daily    Specify a daily schedule
  monthly  Specify a monthly schedule
  start    Specify a future time schedule
  weekly   Specify a weekly schedule

Example:
Nexus-Sw1(config-schedule)# time start now repeat 00:00:05
Schedule starts from Mon Mar 4 10:44:41 2013
Nexus-Sw1(config-schedule)# job name backup-daily     // Job name to be scheduled in the scheduler
Nexus-Sw1(config-schedule)# exit
Nexus-Sw1(config)# exit
Nexus-Sw1# show scheduler config
Nexus-Sw1# show scheduler logfile

B.38.3 TFTP from NEXUS to a TFTP server

It is a bit different on NEXUS than on Catalyst:

copy running-config tftp://10.42.10.8/sw1-run-config1.bak vrf default

Or it can be done manually.



BEWARE: copy run tftp: must use the colon, otherwise it copies the config to a file in NVRAM called “tftp”.

B.39 NEXUS 93xxx USEFUL COMMANDS


Some tips and tricks….
B.39.1 SHOW COMMANDS NEXUS 93xxx USEFUL COMMANDS FROM SHOW TECH
**** See Section B.17 for Cisco Catalyst

Using SHOW TECH-SUPPORT on a NEXUS switch is not as useful as it was on Catalyst, as it now creates a 200-800MB file with up to 8 million lines (depending on model) rather than a 2MB file, and 99% of it is useless unless “distilled” by a computer. Hence some of the most useful commands for diagnostic purposes are listed below.

These should not be pasted in together as one command set, but used individually and saved to a logging file for further analysis.

!! There might be some syntax variation if used on CATALYST switches.

Some commands will not apply to L2-only switches or standalone L3 switches.

These show commands were taken from a NEXUS 7.x show tech-support. Some commands may differ or may not exist between NEXUS versions, or may depend on configured features and elements. Some commands may not apply depending on which NEXUS “features” exist in the config file.

Commands with longer output can use the syntax show command | n to avoid having to keep
pressing the space bar to continue.

COMPACT SET

NATIVE COMMAND                          WITH NO-MORE COMMAND
show version                            show version | no-more
show module                             show module | n
show clock                              show clock | n
show running-config                     show running-config | n
show interface brief                    show interface brief | n
show interface status                   show interface status | n
show interface description              show interface description | n
show interface counters errors          show interface counters errors | n
show interface                          show interface | n
show cdp neighbors                      show cdp neighbors | n
show cdp neighbors detail               show cdp neighbors detail | no-more

With NEXUS the commands can be pasted as a single set via telnet and captured into the logging facility of the terminal emulator.



FULL SET

NATIVE COMMAND WITH NO MORE COMMAND


show switchname show switchname | no-more
show interface mgmt0 show interface mgmt0 | n
show version show version | n
show module show module | n
show clock show clock | n

show running-config show running-config | n


show startup-config show startup-config | n
!
show interface brief show interface brief | n
show interface status show interface status | n
show interface description show interface description | n
show interface counters errors show interface counters errors | n
show interface trunk show interface trunk | n
show interface transceiver show interface transceiver | n
show interface show interface | n
!
show port-channel summary show port-channel summary | n
show system reset-reason show system reset-reason | n
show inventory show inventory | n
show environment show environment | n
!
show hsrp brief show hsrp brief | n
show hsrp show hsrp | n
show vpc brief show vpc brief | n
show vpc show vpc | n
! !
show ip static-route show ip static-route | n
show ip route show ip route | n
! !
show cdp all show cdp all | n
show cdp global show cdp global | n
show cdp neighbors show cdp neighbors | n
show cdp neighbors detail show cdp neighbors detail | n
show port-channel summary show port-channel summary | n
show port-channel usage show port-channel usage | n
show port-channel load-balance show port-channel load-balance | n
show vlan show vlan | n
show vlan all-ports show vlan all-ports | n
show lldp neighbors show lldp neighbors | n
show lldp neighbors detail show lldp neighbors detail | n
show spanning-tree active show spanning-tree active | n
show spanning-tree summary show spanning-tree summary | n
show spanning-tree detail show spanning-tree detail | n
show ip igmp snooping show ip igmp snooping | n
show processes cpu history show processes cpu history | n
show interface priority-flow-control show interface priority-flow-control | n
show interface flowcontrol show interface flowcontrol | n



SOME COMMANDS may not work, depending on enabled features. COMMANDS IN BOLD ARE CONFIG DEPENDANT; alternatives: show hsrp | glbp | vrrp brief

With NEXUS the commands can be pasted as a single set via telnet and captured into the logging facility of the terminal emulator.

show logging info                       show logging info | n
show logging logfile                    show logging logfile | no-more

show logging log ….. LAST BECAUSE IT IS LOOONNNNGG

B.39.2 Show interface counters - Errors only

LeafSW-MAS# show interface counters errors

This is a great command to distil just the counter information, but it shows ALL the interfaces, and depending on the switch that can be a LOOONNNNGG list giving a lot of information about counters that are (hopefully) all zero.

This worked on a Nexus 93180YC running NXOS 7.x


LeafSW-MAS# show interface counters errors | exc " 0 -- 0
0 0 0" | exc " 0 0 0 0 0
0" | no

Depending on the NXOS version it will need a bit of tweaking, because different versions report counter errors in differing ways, i.e. more/less columns:

show interface counters errors | exc “ 0 0 0 0 0 0”

But a little persistence can save a lot of time.

Often the management interfaces always show, because their column presentation is different.

Also, the shorthand command sh int co er can be used.

BEWARE: when this document is saved as a PDF the spacing will mess up, so the ZEROS section may need to be copied from a “REAL” show interface counters errors command output to re-create the text correctly (thanks Adobe!!!)

Of course, with the Arista CLI the -nz modifier can be used to do this.

And later versions of NXOS have this command:

show interface counters errors non-zero



B.39.3 NEXUS copy run start SHORTCUT
Those nice people at Cisco decided to stop the use of the wr mem command (or wr for short) in NXOS.

The command below creates an alias of your choice, mine is wrc (write config), to do this task; but it could be anything which does not match an existing command (I successfully tried “qaz” too, for the lazy typist!!).

cli alias name wrc copy run start

The full command

N9000_EDGE4# copy running-config startup-config


[########################################] 100%
Copy complete, now saving to disk (please wait)...
Copy complete.
N9000_EDGE4#

Cisco standard shortcut

N9000_EDGE4# copy run start


[########################################] 100%
Copy complete, now saving to disk (please wait)...
Copy complete.
N9000_EDGE4#

With the above shortcut

N9000_EDGE4# wrc
[########################################] 100%
Copy complete, now saving to disk (please wait)...
Copy complete.
N9000_EDGE4#

I know which I prefer!!

B.39.4 NEXUS other useful alias SHORTCUTS

cli alias name wrc copy run start


cli alias name shc show cdp neighbors
cli alias name shp show port-channel summary
cli alias name shpn show port-channel summary | no-more
cli alias name shv show version
cli alias name shh show hsrp brief
cli alias name shr show running-config
cli alias name shrn show running-config | no-more
cli alias name shri show running-config interface
cli alias name shro show ip route



cli alias name shi show interface
cli alias name shie show interface counters errors
cli alias name shin show interface counters errors non-zero
cli alias name shid show interface description
cli alias name shis show interface status
cli alias name shidn show interface description | no-more
cli alias name shisn show interface status | no-more
cli alias name shma show mac address-table
cli alias name shmai show mac address-table interface
cli alias name shmaa show mac address-table address

cli alias name shvp show vpc
cli alias name ct configure terminal

USEFUL commands for debugging/finding devices:

sh ip arp vlan 465 | in 00:26:55:d9:74:59

sh ip arp vlan 164 | in 00:26:55:D9:76:09

sh ip arp vlan 465 | in 00:26:55:D9:76:09

sh ip arp vlan 164

sh ip arp vlan 465

Example entry: 10.140.165.157  0026.55d9.7459 (i.e. MAC 00:26:55:d9:74:59)

ping multicast 224.0.0.1 interface vl 164 count 1

ping multicast 239.255.2.3 interface vl 164 count 1

show mac address-table address f018.98f0.afc2

show mac address-table address

Being a slow and poor typist, I might use my alias command from above:

shmaa 9440.c930.7b0d

AVID-9348-EDGE3# show mac address-table address 9440.c930.7b0d

Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen, + - primary entry using vPC Peer-Link,
(T) - True, (F) - False, C - ControlPlane MAC, ~ - vsan
VLAN     MAC Address       Type      age    Secure  NTFY  Ports
---------+-----------------+--------+---------+------+----+------
* 164    9440.c930.7b0d    dynamic   0      F       F     Eth1/7

To make copying to TFTP via the management interfaces easier when building configs:

cli alias name tvs1 copy run tftp://10.255.255.109/sw1 vrf management
cli alias name tvs2 copy run tftp://10.255.255.109/sw2 vrf management
cli alias name tvs3 copy run tftp://10.255.255.109/sw3 vrf management
cli alias name tvs4 copy run tftp://10.255.255.109/sw4 vrf management

This assumes there are 4 switches, and each switch will have its OWN version of the alias to
avoid confusion.

B.39.5 USEFUL NEXUS COMMANDS FOR MULTICAST DEBUGGING


Multicast is the cursed gift that keeps on giving… well, actually it consumes huge amounts of debugging time, because people (well, programmers) actually think it is “better” than using broadcasts for layer 2 discovery. HOW WRONG THEY ARE. Multicast L2 issues due to IGMP are a massive time burner.

The commands below will show some of the multicast addresses in use and who is using them. I would not expect most readers of this document to go anywhere near this level of debugging, but this document is a great repository for the author to keep stuff for future use.

mtrace events for IGMP process


show ip igmp internal event-history igmp-internal

show ip igmp internal event-history debugs

show ip igmp route vrf all

show ip igmp interface vrf all

show ip igmp snooping vlan <x>

show ip igmp internal

show ip igmp internal errors

show ip igmp groups vrf all summary


show ip igmp internal event-history errors

show ip igmp internal event-history msgs

show ip igmp internal event-history vrf

show ip igmp internal event-history events

show ip igmp internal event-history policy

show ip igmp internal event-history cli

show ip igmp snooping groups detail

show ip igmp snooping explicit-tracking

show ip igmp snooping internal event-history vlan

PING MULTICAST 224.0.0.1


PING MULTICAST 224.0.0.2

ping multicast 224.0.0.1 int vl [number]

The CLI output below shows some content extracted from a show tech-support saved from a single NEXUS 9000 switch that was in a live broadcaster environment.

MI 1 and 2 are connected to eth1/4 and eth1/5 on VLAN 20, with IP addresses 172.18.10.30 and .31.

IGMP
L3VM Lookup Errors: 0
`show ip igmp route vrf all`
IGMP Connected Group Membership for VRF "default" - 4 total entries
Type: S - Static, D - Dynamic, L - Local, T - SSM Translated, H - Host Proxy
Group Address Type Interface Uptime Expires Last Reporter
224.0.1.84 D Vlan20 5w4d 00:02:27 172.18.10.31
224.0.1.85 D Vlan20 5w4d 00:02:29 172.18.10.28
225.0.0.64 D Vlan20 5w4d 00:02:33 172.18.10.180
229.111.112.12 D Vlan20 03:56:43 00:02:33 172.18.10.108

`show ip igmp interface vrf all`


IGMP Interfaces for VRF "default", count: 4
Vlan20, Interface status: protocol-up/link-up/admin-up
IP address: 172.18.10.4, IP subnet: 172.18.10.0/24
Active querier: 172.18.10.2, expires: 00:02:22, querier version: 2
Membership count: 4
Old Membership count 0
IGMP version: 2, host version: 2
IGMP query interval: 125 secs, configured value: 125 secs
IGMP max response time: 10 secs, configured value: 10 secs
IGMP startup query interval: 31 secs, configured value: 31 secs
IGMP startup query count: 2



IGMP last member mrt: 1 secs
IGMP last member query count: 2
IGMP group timeout: 260 secs, configured value: 260 secs
IGMP querier timeout: 255 secs, configured value: 255 secs
IGMP unsolicited report interval: 10 secs
IGMP robustness variable: 2, configured value: 2
IGMP reporting for link-local groups: disabled
IGMP interface enable refcount: 1
IGMP interface immediate leave: disabled
IGMP interface suppress v3-gsq: disabled
IGMP VRF name default (id 1)
IGMP Report Policy: None
IGMP Host Proxy: Disabled
IGMP State Limit: None
IGMP interface statistics: (only non-zero values displayed)
General (sent/received):
v2-queries: 2/30760, v2-reports: 0/3359881, v2-leaves: 0/1729
Errors:
Report version mismatch: 7222, Query version mismatch: 0
Unknown IGMP message type: 0
Interface PIM DR: No
Interface vPC SVI: No
Interface vPC CFS statistics:

2019 Nov 8 21:58:04.326740 igmp [25143]: [25573]: Processing clear route


for igmp mpib, for VRF default (172.18.10.30/32, 224.0.1.85/32),
inform_mrib due to MRIB delete-route request

2019 Nov 8 21:57:04.282899 igmp [25143]: [25573]: Processing clear route


for igmp mpib, for VRF default (172.18.10.31/32, 224.0.1.85/32),
inform_mrib due to MRIB delete-route request

2019 Nov 11 13:32:17.548648 igmp [25143]: : Received v2 Report for


224.0.0.251 from 172.18.10.30 (Vlan20)

2019 Nov 11 13:32:17.548648 igmp [25143]: : Received v2 Report for


224.0.0.251 from 172.18.10.30 (Vlan20)

2019 Nov 11 13:32:16.832498 igmp [25143]: : Received v2 Report for


224.0.0.251 from 172.18.10.31 (Vlan20)

2019 Nov 11 13:32:19.768262 igmp [25143]: [25573]: Updating oif entry from
M2RIB PSS forvlan_id = 20, src = 0.0.0.0, grp = 224.0.1.84
2019 Nov 11 13:32:19.768211 igmp [25143]: [25573]: Updating oif entry from
M2RIB PSS forvlan_id = 20, src = 0.0.0.0, grp = 224.0.1.84

2019 Nov 11 13:23:59.285751 igmp [25143]: [25250]: SN: <20> Suppressing


report for (*,224.0.1.84) came on Eth1/5
2019 Nov 11 13:23:59.285744 igmp [25143]: [25250]: SN: <20> Updated oif
Eth1/5 for (*, 224.0.1.84) entry
2019 Nov 11 13:23:59.285729 igmp [25143]: [25250]: SN: <20> Received v2
report: group 224.0.1.84 from 172.18.10.31 on Eth1/5
2019 Nov 11 13:23:59.285720 igmp [25143]: [25250]: SN: <20> Process a valid
IGMP packet, pkttype:v2report(22), iif:Eth1/5
2019 Nov 11 13:23:59.166873 igmp [25143]: [25250]: SN: <20> Forwarding to
internal SVI Vlan20 (<vlan 20>)



2019 Nov 11 13:23:59.166863 igmp [25143]: [25250]: SN: <20> Don't forward
back on router-port Po10 .
2019 Nov 11 13:23:59.166859 igmp [25143]: [25250]: SN: <20> Forwarding the
packet to router-ports . (iif Po10)
2019 Nov 11 13:23:59.166849 igmp [25143]: [25250]: SN: <20> Report for
link-local group 224.0.0.252 is ignored
2019 Nov 11 13:23:59.166845 igmp [25143]: [25250]: SN: <20> Received v2
report: group 224.0.0.252 from 172.18.10.171 on Po10
2019 Nov 11 13:23:59.166837 igmp [25143]: [25250]: SN: <20> Process a valid
IGMP packet, pkttype:v2report(22), iif:Po10
2019 Nov 11 13:23:58.998808 igmp [25143]: [25250]: SN: <20> Forwarding to
internal SVI Vlan20 (<vlan 20>)
2019 Nov 11 13:23:58.998743 igmp [25143]: [25250]: SN: <20> Forwarding
packet to router-port Po10 (iod 10) .
2019 Nov 11 13:23:58.998726 igmp [25143]: [25250]: SN: <20> Forwarding the
packet to router-ports . (iif Eth1/4)
2019 Nov 11 13:23:58.998717 igmp [25143]: [25250]: SN: <20> Report for
link-local group 224.0.0.251 is ignored
2019 Nov 11 13:23:58.998712 igmp [25143]: [25250]: SN: <20> Received v2
report: group 224.0.0.251 from 172.18.10.30 on Eth1/4
2019 Nov 11 13:23:58.998705 igmp [25143]: [25250]: SN: <20> Process a valid
IGMP packet, pkttype:v2report(22), iif:Eth1/4

B.40 NEXUS FX2 Models and ISIS STORAGE LIMITATIONS


This document is for NEXIS but many systems transition from ISIS to NEXIS and deploy a
new switch infrastructure.

In Early 2019, as part of a network refresh and NEXIS deployment, one European customer
deployed ISIS 7500 on a Nexus 9336C-FX2 (used as an access layer device with a Nexus
9500-EX core) via breakout connection and encountered the same problems as had been seen
in 2017 with ISIS 7500 1G windows clients (UDP) but where 10G windows clients (TCP)
operated successfully. NEXIS 1G clients operated successfully.

This customer site has previously tested with Cisco Nexus 9500 EX with Nexus N93180 YC
EX, as described in section 2.3.2.2 Cisco Nexus 9500 EX with Nexus N93180 YC EX of this
document.

The “solution” was to connect ISIS 7500 (using a breakout connection from a 40/100G port)
to a Nexus N93180YC-FX (an alternative/nearby access layer device).

This is related to information discussed in NETREQS V1.x section 1.5.7 and summarised in
section 1.3.1.2 Cisco Nexus 9348-GC-FXP Field testing (FEB 2018) of this document:

Considering a defect where UDP packets are misclassified, as described below:


https://bst.cloudapps.cisco.com/bugsearch/bug/CSCva22756

The challenge with 9336C-FX2 is considered to be “structural” and not a physical layer
dependency.

The customer connected ISIS 7x00 with a 4x10G copper breakout (not supported, but it seems to work); 10G Windows clients worked OK but 1G Windows clients did not, so it feels like the same bug as they had previously “fixed” on the 9500-EX core.



There was not an opportunity to do more extensive testing and pass traffic via the N93180YC-EX, then through the 9336C-FX2, and on to the 9500 chassis. The quick workaround used will suffice for the remaining life of the ISIS 7x00 at this customer, so there is no incentive to spend time, effort and money on debugging this issue. Hence, we may never know the root cause, only that such a combination needs to be avoided.

Any 1G NEXIS or ISIS client connection that uses UDP for primary data transport (see NETREQS V1.x section 1.0.4 for more details) is likely to fail (as at MAY 2019) if it transits an FX2 series product. During data migration from ISIS to NEXIS the “active client” should be placed as close to the ISIS storage as possible to mitigate encountering such issues; however, such issues would not be expected for 10G connected “transfer/migration” clients.

B.41 Using DHCP with Avid Applications


The V1.x NETREQS document for ISIS and MediaCentral provided some advice on using DHCP. While some of that advice still has general merit, it also needs to change with the times, because many of the goalposts have moved. With NEXIS (and MediaCentral) the concept of Zones is less applicable, but the principles still apply.

Of course, NEXIS storage itself must use STATIC addresses; there is no provision (at the time of writing) for DHCP, and this is not expected to change.

Generally, server-class devices are fixed, so they will most likely use a STATIC IP address; but with the correct configuration a DYNAMIC address should be viable, and as devices move into VMs, containers or cloud-based deployments this might become mandatory. There is not a one-shot silver-bullet answer that can be offered, because as always "it depends" on various factors. Potentially "statically assigned dynamic addresses" might be a solution, where the IP address is tied to the MAC address; this makes IP renumbering exercises much easier, and while it adds a different OAM task if the hardware changes, it can be very helpful in a VMware type of deployment where MAC addresses can be "sticky-coded".
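As an illustration of a "statically assigned dynamic address", a DHCP reservation ties the IP to the MAC; a minimal ISC dhcpd sketch is below (the host name, MAC and address are invented for the example; Windows DHCP achieves the same thing with a reservation):

host mcux-server1 {
  hardware ethernet 00:50:56:aa:bb:cc;   # the "sticky-coded" VM MAC
  fixed-address 172.18.10.50;            # always leased to this MAC
}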

As for workstations/clients, it depends on portable vs. fixed, and where they might connect, as to the best choice; plus DHCP for Avid client devices in corporate networks will be dependent on the customer's policies. A mobile laptop is likely to get its IP address via DHCP, while a dedicated workstation might use either STATIC or DHCP depending on whether it connects via an "Avid" administered network or a corporate IT administered network.

When using STATIC IP addresses, the DNS systems must be correctly configured for FORWARD and REVERSE lookup, otherwise some Avid applications will not work correctly, especially where FQDNs are used.



It is important to ensure that "Update associated pointer (PTR) record" is checked (or the similar function in other DNS servers) for REVERSE lookup to operate correctly; this should be the default setting.

When using DHCP this should also occur by default.
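A quick way to confirm both lookups are working (host name and address taken from the earlier examples in this appendix; substitute your own):

C:\> nslookup sda.thk-avid.local     (FORWARD: name to IP)
C:\> nslookup 172.18.10.34           (REVERSE: IP to name, requires the PTR record)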

[Screenshot: A FORWARD lookup entry in Windows Server DNS, using a STATIC IP address.]
[Screenshot: A REVERSE lookup entry in Windows Server DNS, using a STATIC IP address.]
[Screenshot: A FORWARD lookup entry in Windows Server DNS, using a DHCP IP address.]
[Screenshot: A REVERSE lookup entry in Windows Server DNS, using a DHCP IP address.]

B.42 IP address allocation with NEXIS and MediaCentral


NEXIS storage is much more frugal in its use of IP addresses than ISIS 7x00. Externally, each controller has one IP address: an Engine with a single controller uses one address, and with dual controllers there will be just two IP addresses per Engine, all of which can exist in a single VLAN. Hence 60 Engines with dual controllers (120 addresses) could be squeezed into the 126 usable host addresses of a /25 subnet; but such an approach leaves no room for expansion, so it is a bad idea, and the point is purely illustrative. Also consider that such a large system would be better off using E5 engines anyway.

While with NEXIS everything can be in the same VLAN, that is not always best, depending on many criteria. I like to reduce the "blast radius" and keep my subnets small where viable. Also, for better security implementation I like to have NLE workstations in a different subnet to NEXIS and MediaCentral application servers, so different rule sets can be applied easily. Sometimes MediaCentral Cloud UX servers might be in yet another subnet, and MediaCentral Cloud UX web clients are likely to be on the corporate IT network.

When sharing a subnet for servers and users, I tend to prefer putting user devices in the upper half of the subnet and server devices in the lower half, to allow for access lists to be applied if required, and not using "100s" as a border point; such "borders" should be on "subnet boundaries" such as 64 or 128, as the sketch below illustrates.
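To show why the split should land on a subnet boundary: the upper half of a /24 can then be matched with a single wildcard entry (a Cisco-style sketch; the subnet and rule are invented for the example):

! User devices live in 172.18.10.128-254 (upper half of the /24)
ip access-list extended USERS-TO-SERVERS
 permit tcp 172.18.10.128 0.0.0.127 172.18.10.0 0.0.0.127 eq 443

A "100s" border (e.g. .1-.99 vs .100-.254) cannot be expressed this cleanly with wildcard masks.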

B.43 Navigating Cisco NXOS versions


This is not a section about which version is recommended for use with NEXIS storage, or a recommendation on switches. It is a little help on finding version information on Cisco Nexus switches, which are a popular choice for use with Avid systems; of course there are other suitable vendors too. But as Cisco is such a large vendor, searching and finding what you want to know can be a mammoth task.

There are three Nexus families that Avid has deployed at many sites: NEXUS 5600, Nexus 7000/7700 and NEXUS 9000 (9300, 9500).

As at JUN 2019 this URL is a great starting point, but some of the URLs on this page do not point where might be expected:
https://www.cisco.com/c/en/us/products/ios-nx-os-software/nx-os/index.html

Obviously not wanting to read lots of verbose release notes and develop an encyclopaedic knowledge (which of course would be a life's work), this section helps in obtaining a simplified view. For instance, on Nexus 9000 there is no version 8.x, but on Nexus 7000 there is a V8.x.

For the Nexus 5600 look here:


https://www.cisco.com/c/en/us/support/switches/nexus-5000-series-switches/products-
release-notes-list.html

For the Nexus 7000 family look here:


https://www.cisco.com/c/en/us/support/switches/nexus-7000-series-switches/products-
release-notes-list.html

For the Nexus 9000 family look here:


https://www.cisco.com/c/en/us/support/switches/nexus-9000-series-switches/products-
release-notes-list.html



Beware: on this URL, as well as NXOS versions, you will find details on ACI versions (which have higher numbers) and "R" versions (which use different chips and fabrics).

Hopefully this will help navigate the nebula.

B.44 FIELD TIP: large file sending


While Avid has its own solution for doing this, sometimes you just want a quick and easy way to send large files and get out of a hole, without having to sign up.

Here are two solutions I have used, because they are simple and work well.

Domestically I have used WETRANSFER https://wetransfer.com. I like it because it is simple file-sharing, there is no registration, and it's free. It has a 2GB "basic limit", and you also get "download receipts".

Also, I found FIREFOX SEND https://send.firefox.com which I like too. Again: simple file-sharing, no registration, it's free; plus you can password-protect and also set expiry parameters. It has a 1GB "basic" limit.

For a more extensive list checkout this URL:


https://www.creativebloq.com/design-tools/send-large-files-clients-free-tools-3132117

B.45 FIELD TIP: Upgrading Nexus 9000 switch firmware


The information found on the web generally discusses using SCP with a Cisco account. In many cases this is not possible, and the upgrade must be performed locally.

One of the first things that MUST be done is to add these two commands, otherwise it is a non-starter:

feature scp
feature bash

Use WINSCP

The example here shows a machine running

Software
BIOS: version 07.56
NXOS: version 7.0(3)I4(2)
BIOS compile time: 06/08/2016
NXOS image file is: bootflash:///nxos.7.0.3.I4.2.bin
NXOS compile time: 7/21/2016 8:00:00 [07/21/2016 11:09:32]

With a bootflash variable of:



boot nxos bootflash:/nxos.7.0.3.I4.2.bin

it will be upgraded to:

NXOS image file is: bootflash:///nxos.7.0.3.I7.4.bin


NXOS compile time: 6/14/2018 2:00:00 [06/14/2018 09:49:04]

So the bootflash variable will need to be changed to:

boot nxos bootflash:/nxos.7.0.3.I7.4.bin

Once this is done the config must be saved, and then the switch must be reloaded to boot into the new code.
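Pulling the steps above together, the switch-side sequence is roughly this sketch (the image file name is taken from the example above; always verify the supported upgrade path for your platform first):

switch# configure terminal
switch(config)# feature scp          ! required for WinSCP access
switch(config)# feature bash         ! required for the WinSCP shell command
! ... copy the new image into bootflash: using WinSCP ...
switch(config)# boot nxos bootflash:/nxos.7.0.3.I7.4.bin
switch(config)# end
switch# copy running-config startup-config
switch# reload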

I left the old code there just in case, and there was plenty of space available in bootflash:

techctr-edge3# dir bootflash:

      4096  Jan 23 21:08:09 2020  .rpmstore/
      4096  Aug 05 04:53:54 2019  .swtam/
   2097193  Jan 23 11:36:07 2020  20200121_163501_poap_26002_1.log
   1128719  Jan 23 17:15:46 2020  20200121_163501_poap_26002_2.log
   1048635  Jan 21 19:09:11 2020  20200121_163501_poap_26002_init.log
1777998029  Aug 05 04:50:10 2019  aci-n9000-dk9.14.1.2g.bin
         0  Jan 23 21:08:21 2020  bootflash_sync_list
      4096  Aug 05 04:46:53 2019  home/
     16384  Aug 05 04:45:08 2019  lost+found/
 882475008  Aug 05 04:46:50 2019  nxos.7.0.3.I7.1.bin
 964875776  Nov 06 09:52:02 2018  nxos.7.0.3.I7.4.bin
         0  Jan 23 17:16:35 2020  platform-sdk.cmd
      4096  Aug 05 04:54:15 2019  scripts/
      4096  Aug 05 04:53:57 2019  virtual-instance/

Usage for bootflash://sup-local
4035223552 bytes used
112688205824 bytes free
116723429376 bytes total

This article is helpful:

https://community.cisco.com/t5/data-center-documents/getting-winscp-on-n9k-to-work-settings-config-required/ta-p/3818490

Two features must be enabled on the NEXUS switch (ideally, for safety, they should be removed after the upgrade) and some specific settings need to be made in the WinSCP site details.



If you get this error you have not competed all the steps in the URL above

These settings have to be applied to each NEXUS switch that is a destination.

Remote directory must be /bootflash

Shell command is run bash sudo su

The transfer is rather slow expect approx. 20-30 minutes for a 9.3 Code set which it approx.
1.6GB, it does not occur anywhere need line speed

[Screenshots: WinSCP view of bootflash: BEFORE and AFTER the image transfer]

B.46 Multicast propagation on DELL switches


This section is closely related to section B.20 which is about Dell switches

According to this article on 9.14 (the same was seen for 9.9 too):

https://www.dell.com/support/manuals/us/en/04/force10-s4048-on/s4048-on-9.14.0.0-
cli/igmp-snooping-commands?guid=guid-d4f08dbc-eeca-4616-9d00-
aa954a573972&lang=en-us

https://www.dell.com/support/manuals/us/en/04/force10-s4048-
on/s4048_on_9.9.0.0_cli_pub/igmp-snooping-commands?guid=guid-d4f08dbc-eeca-4616-
9d00-aa954a573972&lang=en-us

One might assume that IGMP snooping is not enabled by default on an S4048, and that it has to be enabled on a VLAN-by-VLAN basis; in which case link-level multicast in 239.x.y.z would be propagated in a similar manner to broadcasts.



Looking at a S4048.

KLAB_VLTCORE1_RACK4#sh ip igmp snooping ?


groups IGMP snooping group membership
information
interface IGMP snooping interface information
mrouter Multicast router ports information
KLAB _VLTCORE1_RACK4#sh ip igmp snooping mr
KLAB _VLTCORE1_RACK4#sh ip igmp snooping gr
KLAB _VLTCORE1_RACK4#sh ip igmp snooping interface
KLAB _VLTCORE1_RACK4#

Again, this suggests IGMP snooping is not active by default in this switch.

I connected into a Dell N3024

KLAB_ACCESS05_RACK7#show ip igmp

IGMP admin mode................................ Disabled


IGMP router-alert check........................ Disabled

IGMP INTERFACE STATUS


Interface Interface-Mode Operational-Status
--------- -------------- ----------------
Vl1 Disabled Non-Operational
Vl30 Disabled Non-Operational

A similar article seems to suggest the same as above, that by default multicast packets are flooded:

https://www.dell.com/community/Networking-General/N2000-IGMP-snooping-filtering/td-
p/4688311

and although this is an old article
https://www.dell.com/downloads/global/products/pwcnt/en/app_note_6.pdf
it again seems to suggest the same as above, that by default multicast packets are flooded on Dell switches.

This is not a problem, as for MediaCentral applications the need only exists in the 239.x.x.x link-local range within a small network diameter of /23 max, but more likely /24.

DELL have since confirmed:

On OS9 and OS6, Dell switches have IGMP snooping disabled, and normally flooding is done for multicast.

On OS10 it can be different, as there it would be enabled by default (at least in later versions).

B.46.1 DELL 0S10 IGMP SNOOPING


IGMP snooping uses the information in IGMP packets to generate a forwarding table that
associates ports with multicast groups. When switches receive multicast frames, they forward
them to their intended receivers. OS10 supports IGMP snooping
on virtual local area network (VLAN) interfaces.

Effective with OS10 release 10.4.3.0, IGMP snooping is enabled by default.

Extract from pages 830-834 of the Dell EMC SmartFabric OS10 User Guide, Release 10.5.1:

ip igmp snooping
Enables IGMP snooping on the specified VLAN interface.

Syntax:             ip igmp snooping
Parameters:         None
Default:            Depends on the global configuration.
Command Mode:       VLAN INTERFACE
Usage Information:  When you enable IGMP snooping globally, the configuration applies to all VLAN interfaces. You can disable IGMP snooping on specified VLAN interfaces. The no version of this command disables IGMP snooping on the specified VLAN interface.
Example:            OS10(config)# interface vlan 100
                    OS10(conf-if-vl-100)# no ip igmp snooping
Supported Releases: 10.4.0E(R1) or later

IGMP snooping should be disabled on MediaCentral VLANs that need to use link-level Layer 2 multicast (239.x.x.x) for local lookup, such as Media Indexer NOMI and MCCUX clustering. THIS IS NOT ABOUT MULTICAST ROUTING.

B.47 Multicast propagation on Arista switches


According to the excerpt below from the Arista documentation, IGMP snooping can be disabled on a per-VLAN basis, allowing the flooding of link-level multicast (used by MediaCentral) and well-known multicast addresses within a VLAN.

https://www.arista.com/assets/data/pdf/user-manual/um-
eos/Chapters/IGMP%20and%20IGMP%20Snooping.pdf

39.4.1 Enabling Snooping


The switch provides two control settings for snooping IGMP packets:
• Global settings control the availability of IGMP snooping on the switch. Snooping is
globally enabled by default.
• Per-VLAN settings control IGMP on individual VLANs. If snooping is enabled on
the VLAN, it follows the global snooping state.



The ip igmp snooping command controls the global snooping setting. The ip igmp snooping
vlan command configures snooping on individual VLANs.
Examples
• This command globally enables snooping on the switch.
switch(config)#ip igmp snooping
switch(config)#
• This command disables snooping on VLANs 2 through 4.
switch(config)#no ip igmp snooping vlan 2-4
switch(config)#

see https://www.arista.com/en/um-eos/eos-igmp-and-igmp-snooping

B.48 Useful cabling information


Some informative articles on cabling standard differences to clear the mist.

The Difference Between Cat6 vs Cat6A Ethernet Cable (APR 2019)


https://www.truecable.com/blogs/cable-academy/cat6-vs-cat6a

Demystifying Ethernet Types— Difference between Cat5e, Cat 6, and Cat7 (FEB 2016)
https://planetechusa.com/demystifying-ethernet-types-difference-between-cat5e-cat-6-and-
cat7/

Great article: How to Decipher the Data Center Fiber Alphabet Soup

https://www.commscope.com/Blog/How-to-Decipher-the-Data-Center-Fiber-Alphabet-Soup/

Know your GBASE-SR from your GBASE-DR, and your LR4 from your SR16

and
https://en.wikipedia.org/wiki/Terabit_Ethernet

B.49 LACP for NEXIS clients – is it supported?


This section results from a customer enquiry in JUN2020 and IS SUBJECT TO CHANGE!!!

One of our customers in NALA wants to double-check the attached network configuration
with us. I do not see a problem, but we would like your blessing since they use Arista switches
and my experience with them is limited.

As far as I understand, they need path redundancy, but with only one controller inside their
NEXIS E2s. I would rather have two, but budget limits won't allow them to do so.

The main questions are:



• Is LACP Supported on Nexis Clients running Windows 10 Pro Workstations with
ATTO NICs?
• Is LACP Supported on Nexis Clients running on Mac Pro Workstations with ATTO
NICs?

I am unaware of ANY testing done at the client level with LACP by Avid Engineering. More
is the pity; I have been saying we should do this for years, but it never makes the cut. IMHO
this is low-hanging fruit, and I have recently been talking with Dana and Tim about things we
need to do better that would be low cost but great benefit. Product Management and
Engineering have always gone down the AALB route, which worked well for ISIS/NEXIS
but had some unfortunate shortcomings for Interplay (and these probably still lurk in
MediaCentral, because the Asset Management group don't test NEXIS/resilience features).

The last testing I did was with ISIS 4.x in 2014 and Windows 7 with the Intel Pro 1000
series of NICs; while on a PS job in South Korea I had the opportunity, so I seized it with both
hands. It worked really well with a NEXUS 7000 and the M148 1G card. The one small
bugbear was that the ISIS client only "saw" 1G of BW, but it was the most robust connection I
have ever had the pleasure of causing distress in the name of testing and learning.

As far as LACP goes, a lot of things have altered between Windows 10 and 7; the way NIC
drivers operate has undergone BIG changes. Interface teaming has always been more of a
server-class function than a client-class one; after all, the Intel Pro 1000 was a server-class NIC.

I have never done it on a Mac, but this article suggests it will work for OSX 10.14/15:

https://support.apple.com/en-gb/guide/mac-help/mchlp2798/mac

As per the articles below, later versions of Windows 10 Pro can do it; perhaps the limited
support from Microsoft and Apple is why NEXIS engineering stayed well clear….

NIC Teaming Windows 10 (Works)

https://linustechtips.com/main/topic/1003888-nic-teaming-windows-10-works/

How to Set up Teaming with an Intel® Ethernet Adapter in Windows® 10 1809?

https://www.intel.co.uk/content/www/uk/en/support/articles/000032008/network-and-i-o/ethernet-products.html

Windows 10 NIC Teaming, it CAN be done!


https://www.reddit.com/r/homelab/comments/a7uszq/windows_10_nic_teaming_it_can_be_done/

LACP does not need MLAG to test; it can be tested on a single switch, and that is where I
would start. Of course, for extra BW there is always AALB…. which is just 2 NICs with
discrete IP addresses in the same VLAN.

It would be a bit of a science project…. but a good one.



Note: Linux bonding uses a default of "xmit_hash_policy=layer2"; it would be better
to use "xmit_hash_policy=layer2+3", because this hashes on IP addresses as well as
MAC addresses. MAC addresses alone would not have sufficient variation if the other end
of the connection is in a different subnet, as the default gateway MAC would always be
used (however, some FHRP solutions may allow two MAC addresses to be used, giving
some outbound load balancing even at Layer 2). Using "xmit_hash_policy=layer3+4"
should be avoided.

Similar principles would apply to Windows Server.
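
As a minimal Linux sketch of the above (the NIC names ens1f0/ens1f1 and the IP address are
illustrative assumptions), an LACP bond hashing on MAC+IP can be built with iproute2:

# create an 802.3ad (LACP) bond that hashes on MAC+IP rather than MAC only
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer2+3 miimon 100
ip link set ens1f0 down && ip link set ens1f0 master bond0
ip link set ens1f1 down && ip link set ens1f1 master bond0
ip addr add 192.168.30.50/24 dev bond0
ip link set bond0 up
# verify LACP negotiation, the partner's state and the active hash policy
cat /proc/net/bonding/bond0

The switch-side LACP port-channel must of course exist before the bond will pass traffic.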

B.50 Flow control with AVID Nexis storage – is it needed?


There are many debates about flow control, and my published vanilla configs do not use it by
default. Link-level Layer 2 flow control (802.3x) is a good tool for controlling the flow of
data between devices on a single switch, but for devices operating between switches (or when
MLAG solutions are deployed) across a SW-SW interlink, the outcome may not be as
successful, may not operate as expected, or may operate with some unforeseen outcomes, and
it needs a detailed "matching" configuration in all members of the chain. This is why Priority
Flow Control (PFC; IEEE 802.1Qbb) is better, but it still needs "careful handling" on networks
with multiple switches and SW-SW links, as it could suppress the wrong traffic because there
is no way to precisely identify the "offending" sender.

https://blogs.cisco.com/perspectives/to-flow-or-not-to-flow

Two good explanations of Priority Flow Control, one from Ivan Pepelnjak and another from
Juniper:

https://blog.ipspace.net/2010/09/introduction-to-8021qbb-priority-flow.html

https://www.juniper.net/documentation/en_US/junos/topics/concept/cos-priority-flow-
control.html

and
https://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-
switches/white_paper_c11-542809.pdf

and some information on deployment commands, simple and complex.

Also see (ignore the vendor or product; concentrate on the message):


Understanding and Implementing Flow Control on Dell Force10 Switches
http://humairahmed.com/blog/?p=5316

Purpose of Ethernet Flow control


https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/W
hat_is_the_potential_impact_of_PAUSE_frames_on_a_network_connection%3F
and
https://en.wikipedia.org/wiki/Ethernet_flow_control



There are differences in operation for LINK LEVEL flow control versus PRIORITY flow
control
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/qos/configuration/guide/b_Cisco_Nexus_9000_Series_NX-
OS_Quality_of_Service_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-
OS_Quality_of_Service_Configuration_Guide_7x_chapter_01010.html

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/qos/configuration/guide/b_Cisco_Nexus_9000_Series_NX-
OS_Quality_of_Service_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-
OS_Quality_of_Service_Configuration_Guide_7x_chapter_01011.html

In SEP/OCT 2020 there was a customer with multiple 40G engines which were mirrored, and a
HEAVY write workflow was a factor. The system could provide the desired READ
bandwidth but could not achieve the desired WRITE bandwidth, only hitting about 60% of
target. This customer had 3 x E5 engines (with a fourth due to be added) connecting via two
NEXUS 9364C switches (access L2) in an MLAG pair with a 400G vPC peer link and 400G
uplinks to a core N9364C MLAG pair; the edge switches were N93180YC with 200G
uplinks. So, a very capable network, but the "system" appeared to be struggling. Was it the
NEXIS or the network? Testing in Burlington indicated a similarly configured NEXIS system
with multiple 2x10G clients could achieve full WRITE bandwidth for the desired Media
Pack quantity.

The WRITE performance jumped from 1800MB/S to 3200MB/S (reaching the
target figure of 3000MB/S, and eventually got to 4000MB/S) when flow control was applied
to the NEXUS interface ports facing NEXIS. This size of WRITE bandwidth is an extreme
figure and unlikely to feature in the operational workflows for the site, as it was based on all
WRITE-capable devices operating concurrently at full projected load; even 50% of that value
is unlikely in normal operation, regardless of whether or not the system was sized to achieve
it.

The key point here is that when there is a mirrored NEXIS system, there is a lot of cross-
engine bandwidth to fulfil the mirrored writes. This is different to client bandwidth, but it
means the 40G ports are having to work a lot harder. Even though the N9364C is a BIG 100G
switch, it has moderate buffers, and the incoming data from the uplinks was considerable.
Along with the data between the engines, there would have been congestion, which caused
write requests to go unfulfilled between engines, which in turn caused a push-back on the write
requests. This is not the type of congestion that could have been addressed with a QoS profile,
because all the data was of the same class/value. Also, it might not be the Engine/Media-
Packs that were struggling, but the NIC.

This is a great example of how the targeted use of link-level flow control (where not using
a class-based indicator) within a single switch is the correct approach to maximise
performance of similarly capable servers with heavy bidirectional communication.

I did not have access to the systems to look at what was happening on the NEXUS switch
ports, which apparently did not show interface discards or buffer utilisation, before or after
flow control was applied. I could not see the full extent of any QoS policies that were
associated with the flow control, and I am reliably informed that without them a pause frame
(802.3x or 802.1Qbb) will have little effect on what the switch is doing.



B.50.1 WHAT TRIGGERS FLOW CONTROL
Interesting question!! And again, open to much debate. Is the application pushing back? Is it
the NIC pushing back? Is the OS pushing back? Maybe it is a single element or a
combination, depending on how the application has been designed.

Basically, RECEIVE 802.3x flow control is a way for a NIC to exert back pressure for a very
short period of time, asking the device that receives the pause frame to stop sending for up to
3.35ms at 10G or 0.83ms at 40G. So, if a receiving NIC in an NLE is having difficulty
processing incoming traffic and sending data up the stack, the NIC will issue a pause frame;
if connected to a switch, it is "asking" the switch to buffer the traffic, but the switch will not
necessarily send an onward pause frame to the sending device.

TRANSMIT 802.3x flow control from a switch would send a pause frame to a sending device
(or to all ports configured as TX=ON) when its buffer reaches a certain FILL point.

As Avid NEXIS does not implement any flow control it is up to the NIC driver and/or OS to
react (or not).

Consider a NEXIS client connecting at 40G to a single NEXIS Storage Engine also
connected at 40G; there should be no congestion here. In this example, a Mac system,
specifically a MacPro 7.1 (2019 BIG cheese grater) running OSX 10.15.7 with the Atto N352
40Gb NIC using Atto's latest driver (Q3 2020). Not picking on Atto here; it just happened to
be used in the Mac workstation that was being "tested".

Tested on a Dell S4048 network switch, this single 40G client to a single 40G storage server
will perform below expectations with flow control RX=off on the client port (e.g.
<2000MB/S), and then with flow control RX=on we get approx. 3200MB/S, with no changes
made to the storage server port. There should be no oversubscription between the two devices,
so no need to send pause frames to/from this switch.

Using 40G without Jumbo frames (NEXIS MTU = 1500 bytes) will cause a lot of TCP
processing to be required. Surely a modern 40G NIC has sufficient TCP offload
capabilities, but it is probable in this case that the NIC needs a little breathing space: each
PAUSE using the maximum quanta value at 40G will cause a 0.8ms hiatus, equivalent to
approx. 4MB of data.



Pause duration for the maximum quanta value (65535):

SPEED (Gbps)   milliseconds   microseconds
10             3.355          3355.392
25             1.342          1342.157
40             0.839          838.848
100            0.336          335.539

BITS           BYTES          KB         MB
33,553,920     4,194,240      4095.938   4.000
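
The arithmetic behind the table: one 802.3x pause quantum is 512 bit-times, so the
maximum value of 65,535 quanta equates to 65,535 x 512 = 33,553,920 bit-times. At 40G
that is 33,553,920 / 40,000,000,000 = 0.839ms, during which approx. 4MB
(33,553,920 / 8 bytes) would otherwise have been transmitted.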

Below is another situation, one that WILL cause congestion, which can be helped by flow
control regardless of whether the devices are 10G or 40G, and it may need RX=ON and
TX=ON, because there is a potential 3:1 oversubscription on the Storage Engines: not only
must the WRITE from the NEXIS client be fulfilled, but also the MIRROR write to a
different engine. Of course, such congestion will only occur under a HIGH WRITE LOAD
scenario.

B.51 Flow control in Cisco NEXUS switches with AVID Nexis storage

NEXUS QoS is quite different to Catalyst and deploys many new and innovative tools, but
some of the tools' supporting elements that are used by default are well hidden. Help
gratefully received from my friendly Cisco NEXUS guru, who is well known to many.

NOTE: AVID NEXIS DOES NOT SUPPORT IEEE 802.1Qbb Priority Flow Control (PFC)
at the time of writing, but the commands are included for completeness.



Scenario – Multi-tier or Spine leaf network (no spine Po)

[Topology diagram: spine switches S1 and S2 above leaf pairs S3/S4 and S5/S6. Each leaf
has uplinks Po100 and Po101 to the spines (interfaces 1/10-1/11 on the leaves, 1/1-1/4 on
the spines), peer-links Po2000 (S1-S2), Po1000 (S3-S4) and Po3000 (S5-S6) on interfaces
1/20-21, and host-facing port-channels Po10 (S3/S4) and Po11 (S5/S6) on interfaces 1/1.]

Configuration - Common config for all switches

The network-qos policy applies to all devices, as it determines which class is non-drop; as
such, the configuration must be uniform across all switches. All switches will have the same
configuration:
Switch(config-pmap-c-que)# policy-map type network-qos N_AVID_POLICY1
Switch(config-pmap-nqos)# class type network-qos c-8q-nq3
Switch(config-pmap-nqos-c)# pause pfc-cos 3
Switch(config-pmap-nqos-c)# exit
Switch(config-pmap-nqos)#
Switch(config-pmap-nqos)# system qos
Switch(config-sys-qos)# service-policy type network-qos N_AVID_POLICY1

Configuration – Classification on host ports on all switches


As traffic coming from the host is marked with neither CoS nor DSCP, and as the Nexus 9000
switch does not support ACL-based classification for a non-drop class of traffic,
classification is done based on DSCP 0 attached to a specific interface:
Switch(config)# class-map type qos AVID_TRAFFIC1
Switch(config-cmap-qos)# match dscp 0
Switch(config-cmap-qos)# policy-map type qos AVID_POLICY1
Switch(config-pmap-qos)# class type qos AVID_TRAFFIC1
Switch(config-pmap-c-qos)# set qos-group 3
Switch(config-pmap-c-qos)# set dscp CS3

Traffic is remarked to DSCP CS3, which will later be used for classification into qos-group 3
on the spine layer and on the uplinks from the spine on the leaf switches.

Switch(config-sys-qos)# interface port-channel 10 - 11


Switch(config-if)# service-policy type qos input AVID_POLICY1

***** FOR LLFC, configuration needed on interface *****


Switch(config-sys-qos)# interface port-channel 10 - 11
Switch(config-if)# flowcontrol send on
Switch(config-if)# flowcontrol receive on



**** FOR PFC configuration needed on interface****
Switch(config-sys-qos)# interface port-channel 10 - 11
Switch(config-if)# priority-flow-control mode on

Configuration – Classification on uplink ports on leaf switches, on all ports on the spine,
and on the peer-link

All traffic that we want to match should be marked with DSCP CS3, and we match it on the
uplinks on each leaf switch, on the peer-link, and on all links on the spine:
Switch(config)# class-map type qos AVID_TRAFFIC1
Switch(config-cmap-qos)# match dscp CS3
Switch(config-cmap-qos)# policy-map type qos AVID_POLICY1
Switch(config-pmap-qos)# class type qos AVID_TRAFFIC1
Switch(config-pmap-c-qos)# set qos-group 3

Traffic marked as DSCP CS3 is mapped into qos-group 3 on the spine layer and on the
uplinks from the spine on the leaf switches:
Switch(config-sys-qos)# interface port-channel 100-101, port-channel 1000
Switch(config-if)# service-policy type qos input AVID_POLICY1

***** FOR LLFC, configuration needed on interface *****


Switch(config-sys-qos)# interface port-channel 100-101, port-channel 1000, port-
channel 2000, port-channel 3000
Switch(config-if)# flowcontrol send on
Switch(config-if)# flowcontrol receive on

**** FOR PFC configuration needed on interface****


Switch(config-sys-qos)# interface port-channel 100-101, port-channel 1000, port-
channel 2000, port-channel 3000
Switch(config-if)# priority-flow-control mode on

Configuration – Queuing and scheduling


To simplify configuration, all switches will have the same queuing/scheduling configuration.
If needed, the queuing/scheduling configuration can be adjusted per interface:
Switch(config-pmap-c-qos)# policy-map type queuing AVID_EGRESS_QUEUEING
Switch(config-pmap-que)# class type queuing c-out-8q-q7
Switch(config-pmap-c-que)# priority level 1
Switch(config-pmap-c-que)# class type queuing c-out-8q-q6
Switch(config-pmap-c-que)# bandwidth remaining percent 0
Switch(config-pmap-c-que)# class type queuing c-out-8q-q5
Switch(config-pmap-c-que)# bandwidth remaining percent 0
Switch(config-pmap-c-que)# class type queuing c-out-8q-q4
Switch(config-pmap-c-que)# bandwidth remaining percent 0
Switch(config-pmap-c-que)# class type queuing c-out-8q-q3
Switch(config-pmap-c-que)# bandwidth remaining percent 60
Switch(config-pmap-c-que)# class type queuing c-out-8q-q2
Switch(config-pmap-c-que)# bandwidth remaining percent 0
Switch(config-pmap-c-que)# class type queuing c-out-8q-q1
Switch(config-pmap-c-que)# bandwidth remaining percent 0
Switch(config-pmap-c-que)# class type queuing c-out-8q-q-default
Switch(config-pmap-c-que)# bandwidth remaining percent 40
Switch(config-pmap-nqos)# system qos
Switch(config-sys-qos)# service-policy type queuing output AVID_EGRESS_QUEUEING



This uses "built-in" queuing functions that are modified, but ONLY WITHIN this queuing
policy. Other policies can use these SAME queuing functions differently within another
queuing policy.

What it looks like when configured (excluding interface settings):


class-map type qos AVID_TRAFFIC1
match dscp 0
policy-map type qos AVID_POLICY1
class type qos AVID_TRAFFIC1
set qos-group 3
set dscp CS3

policy-map type network-qos N_AVID_POLICY1


class type network-qos c-8q-nq3
pause pfc-cos 3
exit

system qos
service-policy type network-qos N_AVID_POLICY1

policy-map type queuing AVID_EGRESS_QUEUEING


class type queuing c-out-8q-q7
priority level 1
class type queuing c-out-8q-q6
bandwidth remaining percent 0
class type queuing c-out-8q-q5
bandwidth remaining percent 0
class type queuing c-out-8q-q4
bandwidth remaining percent 0
class type queuing c-out-8q-q3
bandwidth remaining percent 60
class type queuing c-out-8q-q2
bandwidth remaining percent 0
class type queuing c-out-8q-q1
bandwidth remaining percent 0
class type queuing c-out-8q-q-default
bandwidth remaining percent 40
system qos
service-policy type queuing output AVID_EGRESS_QUEUEING

Flow control activity can be monitored using

show interface flowcontrol              (for 802.3x)

or

show interface priority-flow-control    (for 802.1Qbb)

This will show information for multiple interfaces

e.g.
NEW_TEST_3# sh interface flowcontrol | n

--------------------------------------------------------------------------------
Port Send FlowControl Receive FlowControl RxPause TxPause
admin oper admin oper
--------------------------------------------------------------------------------
Eth1/1 off off off off 0 0



Eth1/2 off off off off 0 0
Eth1/3 off off off off 0 0
Eth1/4 off off off off 0 0
Eth1/5 off off off off 0 0
Eth1/6 off off off off 0 0

Or

NEW_TEST_3# sh interface priority-flow-control | n

slot 1
=======

============================================================
Port Mode Oper(VL bmap) RxPPP TxPPP
============================================================

Review PRIMARY QOS commands


show run ipqos | n

Or to see the default/hidden commands too (it is a long list, so use | no-more):

sh run ipqos all | n
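
Per-port queue occupancy and drops can also be checked with the standard NX-OS queuing
command (ethernet 1/1 is just an example port):

show queuing interface ethernet 1/1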

B.52 Flow control in Dell S4048 switches with AVID Nexis storage
In OS9 on the S4048, Dell has a very different way of handling 802.3x flow control in
comparison to Cisco: there are no supporting policies needed. Apparently, there is little way
to influence the buffer management; the outcome is in the hands of the OS.
flowcontrol rx {off | on} tx {off | on} [pause-threshold <1-12480>] [resume-offset <1-12480>] [negotiate]

rx on            Process the received flow control frames on this port.
rx off           Ignore the received flow control frames on this port.
tx on            Send control frames from this port to the connected device when a higher
                 rate of traffic is received.
tx off           Do not send flow control frames from this port to the connected device
                 when a higher rate of traffic is received.
pause-threshold  Enter the buffer threshold limit for generating PAUSE frames.
resume-offset    Enter the offset value for generating PAUSE frames to resume traffic.
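
A minimal OS9 sketch, assuming TenGigabitEthernet 1/11 faces a NEXIS controller and we
only want to honour pause frames from the storage (matching the RX=on test described in
B.50.1):

DELL(conf)# interface TenGigabitEthernet 1/11
DELL(conf-if-te-1/11)# flowcontrol rx on tx off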



Monitoring is not easy either: EACH interface must be viewed separately (THANKS
DELL!!); there is no way to monitor all interfaces in one go. Look for THROTTLES in the
Input Statistics and Output Statistics of the CLI output from the show interface command.

B.53 Mellanox ConnectX-3 adapter firmware version in NEXIS


The ability of how to obtain this information has only been recently provide to me (NOV
2020) and will not be provide in this document.

The firmware version of the Mellanox NIC in the NEXIS controller is not directly related to
the NEXIS version. It cannot be field-upgraded, hence replacement controllers may have a
different version to the existing controller. Since mid-2018, new systems should have
firmware-version: 2.42.5000.

Firmware version 2.42.5000 added support for a SINGLE long-range optic: the 40GbE
MC2210511-LR4 Mellanox® optical module (40Gb/s, QSFP, LC-LC, 1310nm, LR4, up to
10km), which of course will be compatible with a Cisco QSFP-40G-LR4-S LR
transceiver at the other end.

The maximum TWINAX distance supported by the NEXIS controller NIC remains 5m.

Before that, systems used firmware-version: 2.40.5030; as a rough guide, systems delivered
in 2017 will probably be fitted with 2.40.5030, which does support the 40GbE QSFP-40G-SR-
BD Cisco 40G BiDi module, but no long-range optics.

Earlier systems, 2016 and before, will probably have firmware-version: 2.34.5000.

To confirm the driver version, the user can go to the agent page of the engine
(https://<engine ip or hostname>:5015), go to the Advanced tab, and under System Tools use
the option "Issue Shell Command". In that window type "ethtool -i gt0". If the user prefers
to use PuTTY and ssh to the engine, the same information is available using the same
command. ALL engines should be checked.
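
Illustrative output (a sketch: the driver name mlx4_en is an assumption based on the
ConnectX-3 family, and the bus address shown is invented; the firmware-version line is the
one to compare against the versions above):

$ ethtool -i gt0
driver: mlx4_en
firmware-version: 2.42.5000
bus-info: 0000:04:00.0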

B.54 DELL S4100 OS10 vs. OS9 LACP and VLT commands
OS 10 and OS 9 have different syntax.

See section 1.1.1 regarding a known VLT-LACP-to-host issue in OS10.5.2.2 that will affect
NEXIS and possibly ESXi, and is likely to affect other platform operating systems/devices too.

OS10:

interface port-channel31
 description NEXIS ENGINE AGGREGATE
 no shutdown
 switchport access vlan 30
!
interface ethernet1/1/1
 description NEXIS CONTROLLER P1
 no shutdown
 channel-group 31 mode active
 no switchport
 ![flowcontrol receive on]
!
interface ethernet1/1/2
 description NEXIS CONTROLLER P2
 no shutdown
 channel-group 31 mode active
 no switchport
 ![flowcontrol receive on]

NOTE: the physical interface shows a no switchport command, but this has no effect
because the port's primary parameters are controlled by the port-channel; this is a
syntactical peculiarity of the DELL OS10 CLI.

OS9:

!
interface Port-channel 70
 description PORT CHANNEL TO NEXIS
 no ip address
 switchport
 no shutdown
!
interface TenGigabitEthernet 1/11
 description NEXIS CTLR P1
 no ip address
!
 port-channel-protocol LACP
  port-channel 70 mode active
 no shutdown
!
interface TenGigabitEthernet 1/12
 description NEXIS CTLR P2
 no ip address
!
 port-channel-protocol LACP
  port-channel 70 mode active
 no shutdown
!

VLT configuration OS10 example:

interface port-channel1
 description NEXIS ENGINE AGGREGATE
 no shutdown
 switchport access vlan 30
 vlt-port-channel 1
!
interface ethernet1/1/1
 description NEXIS CONTROLLER P1
 no shutdown
 channel-group 1 mode active
 no switchport
 flowcontrol receive on
!
interface vlan30
 no shutdown
 ip address 172.16.30.251/24
 !
 vrrp-group 30
  priority 40
  virtual-address 172.16.30.253
!
vlt-domain 1
 backup destination 172.16.30.252
 discovery-interface ethernet1/1/25-1/1/28
 primary-priority 8192
 vlt-mac 00:11:22:33:44:55
!
interface ethernet1/1/25
 description VLT SW1-SW2 PEER LINK
 no shutdown
 no switchport
 flowcontrol receive on
!
interface ethernet1/1/26
 description VLT SW1-SW2 PEER LINK
 no shutdown
 no switchport
 flowcontrol receive on
!
interface ethernet1/1/27
 description VLT SW1-SW2 PEER LINK
 no shutdown
 no switchport
 flowcontrol receive on
!
interface ethernet1/1/28
 description VLT SW1-SW2 PEER LINK
 no shutdown
 no switchport
 flowcontrol receive on
!

A corresponding "almost mirror" configuration would be needed in the peer-partner switch.

VLT configuration OS9 example:

!
interface Port-channel 70
 description VLT PORT CHANNEL TO NEXIS SDA-TOP
 no ip address
 switchport
 vlt-peer-lag port-channel 70
 no shutdown
!
interface TenGigabitEthernet 1/11
 description NEXIS SDA TOP CT P1
 no ip address
!
 port-channel-protocol LACP
  port-channel 70 mode active
 no shutdown
!
vlt domain 1
 peer-link port-channel 100
 back-up destination 169.254.1.2
 primary-priority 4096
 unit-id 0
!
interface TenGigabitEthernet 1/24
 description To VLT peer - backup link to VLTCORE2
 ip address 169.254.1.1/28
 no shutdown
!
interface Vlan 40
 description AVID
 ip address 10.21.40.253/24
 tagged TenGigabitEthernet 1/4,1/6,1/8
 tagged Port-channel 30,40,100
 untagged TenGigabitEthernet 1/17,1/19-1/21
 untagged Port-channel 70-74,80-84
 !
 vrrp-group 40
  authentication-type simple 7 <<REMOVED>>
  priority 90
  virtual-address 10.21.40.1
 no shutdown
!
interface fortyGigE 1/53
 description To VLT peer link to VLTCORE2 Fo53
 no ip address
 no shutdown
!
interface fortyGigE 1/54
 description To VLT peer link to VLTCORE2 Fo54
 no ip address
 no shutdown
!
interface Port-channel 100
 description To VLT peer PO link to VLTCORE2
 no ip address
 channel-member fortyGigE 1/53,1/54
 no shutdown
!

A corresponding "almost mirror" configuration would be
needed in the peer-partner switch.
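
As a quick sanity check after bringing either variant up (command names as I understand
them for each OS; verify against the release documentation for your firmware):

OS10:
show vlt 1
show port-channel summary

OS9:
show vlt brief
show interfaces port-channel brief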

B.55 CISCO SWITCH HARDWARE BEACONS


The NEXUS 9000 and Catalyst 9000 have the ability to remotely activate a blue BEACON
LED. Occasionally this can be useful, especially if the project is in one country and the
operator in another and you need to identify a specific device to a remote colleague!!

B.55.1 NEXUS 9000

Syntax Description

[no] locator-led {chassis | module module | fan fan_num}

no           Negate a command or set its defaults
locator-led  Blink the locator LED on the device
chassis      Blink the chassis LED
module       Blink a module LED; module is an integer (enter the module number)
fan          Blink a fan LED; fan_num is an unsigned integer, min 1, max 12 (the fan number)

Command Modes

/exec

locator-led fex
[no] locator-led fex chas_no

Syntax Description

no           Negate the command
locator-led  Turn on the locator beacon
fex          Blink a FEX LED
chas_no      Unsigned integer, min 100, max 199 (the FEX number)

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-
x/command_reference/config_612I22/b_n9k_command_ref/b_n9k_command_ref_chapter_0
1110.html
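
A minimal usage sketch based on the syntax above (module 1 is an assumed example):

switch# locator-led chassis
switch# locator-led module 1
switch# no locator-led chassis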

B.55.2 CATALYST 9000

See these URLs


https://www.youtube.com/watch?v=PWFiyiC_OBQ

https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9500/software/release/16-
10/command_reference/b_1610_9500_cr/stackwise_virtual_commands.html#wp3778239026

hw-module beacon switch


To control the blue beacon LED in a field-replaceable unit (FRU), use the hw-module beacon
switch command in privileged EXEC mode.
hw-module beacon switch {switch-number | active | standby} {RP {active | standby} | fan-
tray | power-supply power-supply-slot-number | slot slot-number} {off | on | status}
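
For example (a sketch assembled from the syntax above; switch 1 and slot 1 are assumed
values):

Switch# hw-module beacon switch 1 slot 1 on
Switch# hw-module beacon switch 1 slot 1 status
Switch# hw-module beacon switch 1 slot 1 off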

B.56 DELL N3224 FIRST SETUP


A kind ACSR has provided the information below, MAR 2021. While not my words, they are
worthy of inclusion here.

The DELL N3224 switches are not as straightforward out of the box as the N30XX switches
that they effectively replace.

Here’s a quick guide that an ACSR has written. The PDF version is here:
https://www.dropbox.com/s/3yl7fzr13dvwxo5/Quick%20Setup%20of%20DELL%20
N3224%20Switches%20for%20NEXIS%20use.pdf?dl=0

QUICK SETUP OF DELL N3224 SWITCHES FOR NEXIS USE


Unlike the earlier 3024 switch, the 4 x SFP ports on the front of the 3224 are capable
of running at 25GbE OR 10GbE. However, out of the box these ports are NOT set to
auto-negotiate to the incoming connection's speed. If you turn on the unit straight out
of the box, plug in an SFP and connect it to a 10GbE source, the activity lights will
show no activity.

To address this, you will have to set up the switch for remote access and change the
port speed. This process can also be achieved using the switch's CLI, but for people
who are not familiar with the commands, here's how to change the ports to 10GbE
through the web interface.

SETTING UP THE SWITCH FOR WEB INTERFACE ACCESS


You will need to be able to connect to the switch using the serial cable that is supplied
(with a 9-pin D-type!?) so will more than likely need a USB-to-serial adaptor cable.
You can use the DELL Quick Start Guide for a more detailed breakdown of setting the
system up, but here's a quick rundown of how I did this…
Power off your switch by pulling the power.
Connect the serial cable from your laptop to the correct port (labelled 10101) on the
DELL switch.
Start a terminal session (e.g. using PuTTY) and set it up as follows:
Select the appropriate serial port (for me that was COM 4) that will connect to the console.
Set the data rate to 115,200.
Set the data format to 8 data bits, 1 stop bit, and no parity.
Set the flow control to none.
Start the serial session.
Power on the switch.
After a few seconds you should be able to see the switch's boot code start to appear,
populating the serial connection you have established.

If this is the first time you have booted the switch you will be asked:
Would you like to run the setup wizard (you must answer this question within 60
seconds)? (y/n) At this point enter Y.

Follow the onscreen requests to set up initial access to the switch. Here you will
need to supply IP addresses to be used to access it, as well as an admin
username and password. Once this is complete you should be able to access
the switch using a browser.
Login with your username and password.

At the web interface's left-hand menu, navigate to Switching > Ports > Port
Configuration.
In the Port selection window at the top of this page, choose TW1/0/1 from the drop-
down menu (this is marked as port 25 on the front of the unit and is the left-most SFP
slot).
Now go to the Admin Port Speed drop-down menu and change this from 25000 to
10000.
Note that Operational Auto Negotiation is marked as Disable and that the Auto
Negotiate Speed radio buttons below cannot be chosen, unlike on the other 24 copper
ports. This is why the port will not switch down to 10GbE automatically.
Choose Apply at the bottom of the page to make this port 10GbE.
Repeat for other ports as needed.

IMHO, it is probably easier to do this at the CLI, but as ever it depends on one's comfort level
at the coal face.
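
A hedged CLI equivalent (assuming the OS6 speed interface command accepts the same
values the web UI exposes; verify against the CLI reference for your firmware version):

console# configure
console(config)# interface Tw1/0/1
console(config-if-Tw1/0/1)# speed 10000
console(config-if-Tw1/0/1)# exit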

B.57 DELL N3024 N2024 DEFAULT INTERFACE COMMANDS


The DELL OS6 CLI is a fearsome opponent for those well versed in Cisco CLI; some
elements and omissions just defy logic, unless you become deeply immersed in the CLI and
find the "New Logic". Every time I use DELL at the coal-face I curse a lot.

The default interface command is a joy in Cisco but was missing in DELL OS6 until….

https://www.dell.com/community/Networking-General/Delete-all-configuration-from-a-port-
N4000-N2000/td-p/6069920

They have this command on version 6.5.3.4:

default gi1/0/29

This operation may take a few minutes.


The console prompt will return when the operation is complete.
Are you sure you want to factory default the interface configuration? (y/n) y

For those who are on a lower version, this same article contains other useful nuggets that are
not TOTALLY accurate, depending on your OS version.

Here is what worked for me on a Dell N2024 running version 6.3.2.3

no description
no spanning-tree portfast
no switchport mode
no switchport access vlan
no switchport trunk allowed vlan
no switchport trunk native vlan

to clean out

!
interface Gi1/0/2
switchport mode trunk



switchport trunk allowed vlan 153-159
exit

to
!
interface Gi1/0/2

So I could get to

!
interface Gi1/0/2
description "TVS-ESXnn_VM MANAGEMENT CONNECTION_ NON CRITICAL"
switchport access vlan 152
exit

or for all the gigabit ports in one go

interface range gigabitethernet 1/0/1-24


no description
no spanning-tree portfast
no switchport mode
no switchport access vlan
no switchport trunk allowed vlan
no switchport trunk native vlan
!
switchport mode access
switchport access vlan 152
description "TVS-ESXnn_VM MANAGEMENT CONNECTION_ NON CRITICAL"

Also the interface range command takes some wrangling and “helpfully” DELL OS 6 wants
more than just a G or a T for interfaces…. It wants Gi or Te

interface range gigabitethernet 1/0/19-20


it also auto-fills…. you can do
int rang gi 1/0/19-20

B.58 DELL N3024 N2024 PASSWORD RECOVERY


The default password for an Avid-supplied switch:
Dell Networking N3024 (maybe N2024 too)
N3048
User: avid
Password: avid1234



http://resources.avid.com/SupportFiles/attach/AvidISIS/AvidNetworkSwitchGuide_v4.7.3_R
ev_F.pdf
URL operational - MAY 2021

Also try this…..


https://www.dell.com/support/kbdoc/en-uk/000108365/how-to-recover-from-forgotten-
password-on-dell-networking-n-series-switch



Appendix C - Power Connections
While not strictly networking, this is a good place to locate the information.

Historically, Avid products have used the standard "kettle"-type lead for a power connection
and have had 100-240V PSUs. The IEC 60320 C13 connection (or the C15, if it is keyed like
a REAL kettle lead) is the standard type of connection prevalent on PCs/workstations, and
has a current rating of 10 Amps.

https://en.wikipedia.org/wiki/IEC_60320#C13.2FC14_coupler

Be prepared regarding the power installation of the Cisco Catalyst 4500X: the power supply
does not use the common IEC C13/C14 power cable which can normally be found in a data
centre rack environment, but the less common C15/C16 power cable. The power cable that is
provided with the switch may be terminated with a country-specific wall socket plug, which
typically cannot be used in the rack PDU system, or may even have the wrong country spec if
it was a GREY IMPORT. A possible solution, performed successfully many times, is to
modify the provided C15/C16 power cable and terminate it with the IEC C13/C14 to match
the datacentre PDU. This should be considered, planned and executed well in advance to
avoid unpleasant surprises at the time of switch installation.

The ISIS 2000/2500, with its more demanding power supplies, needs a higher-current
connection: the C19, which has a current rating of 16 Amps.

https://en.wikipedia.org/wiki/IEC_60320#C19.2FC20_coupler



Appendix D – NEXIS with MLAG Connections sequence
This appendix shows the connection sequence and the CLI output from a switch when
enabling a system with aggregated connections

D.1 C4500X VSS and dual controllers in NEXIS


The CLI output below is from a VSS pair of C4500X with five E4 engines and an SDA, each
fitted with dual controllers.

The process for doing this is well documented in the Avid documentation, but only from a
NEXIS perspective, not what happens at the switch.

This system has a single SDA with redundant controllers, and five E4 NEXIS engines also
with redundant controllers, so that is a total of 24 physical connections all at 10G.

The required method is to do basic configuration on the NEXIS engines, preparing them
with IP addresses, and then to connect them one by one, with the SDA being first. Applying
link aggregation on a controller requires a restart.

The CLI output below shows what happens on a C4500X-VSS (it will be slightly different on
a NEXUS switch, as there is better support for the INDIVIDUAL mode of LACP and some
commands are different).

Three useful articles for configuring a VSS from scratch are:

Virtual switching system (VSS) Configuration For Cisco 4500 series switches
https://supportforums.cisco.com/t5/network-infrastructure-documents/virtual-switching-
system-vss-configuration-for-cisco-4500-series/ta-p/3147865

and

https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/15-1-
2/XE_340/configuration/guide/config/vss.html

and

Catalyst 4500 Series Switch VSS Member - Replacement Configuration Example


https://www.cisco.com/c/en/us/support/docs/switches/catalyst-4500-series-switches/117640-
configure-vss-00.html

D.1.1 SDA

!! BELOW we have NO SHUT the ports for the SDA, all 4 of them across both controllers
!! We get a message that the ports are not able to join the aggregate
!! - note on NEXUS they will come up in individual mode (I) and not (s)

Enter configuration commands, one per line. End with CNTL/Z.



AVID-CORE1-VSS(config)#int ran t1/1/1-6
AVID-CORE1-VSS(config-if-range)#int range t1/1/1, t1/1/17,t2/1/1, t2/1/17
AVID-CORE1-VSS(config-if-range)#no shut
AVID-CORE1-VSS(config-if-range)#
*Jul 19 07:54:42.223: %EC-5-L3DONTBNDL2: Te1/1/1 suspended: LACP currently not enabled on the
remote port.
*Jul 19 07:54:43.939: %EC-5-L3DONTBNDL2: Te1/1/17 suspended: LACP currently not enabled on
the remote port.
*Jul 19 07:54:47.124: %EC-5-L3DONTBNDL2: Te2/1/17 suspended: LACP currently not enabled on
the remote port.
*Jul 19 07:54:58.376: %EC-5-L3DONTBNDL2: Te2/1/1 suspended: LACP currently not enabled on the
remote port.
AVID-CORE1-VSS(config-if-range)#do sh etherch
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met


u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SD) LACP Te1/1/1(s) Te2/1/1(s)
11 Po11(SD) LACP Te1/1/2(D) Te2/1/2(D)
12 Po12(SD) LACP Te1/1/3(D) Te2/1/3(D)
13 Po13(SD) LACP Te1/1/4(D) Te2/1/4(D)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SD) LACP Te1/1/17(s) Te2/1/17(s)
41 Po41(SD) LACP Te1/1/18(D) Te2/1/18(D)
42 Po42(SD) LACP Te1/1/19(D) Te2/1/19(D)
43 Po43(SD) LACP Te1/1/20(D) Te2/1/20(D)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

!! NOW enable the SDA for LACP and restart the controllers
!! then the interfaces come online, and then the SECOND controller goes off again
AVID-CORE1-VSS(config-if-range)#
*Jul 19 07:57:53.040: %EC-5-L3DONTBNDL2: Te2/1/1 suspended: LACP currently not enabled on the
remote port.
*Jul 19 07:58:25.896: %EC-5-L3DONTBNDL2: Te2/1/17 suspended: LACP currently not enabled on
the remote port.
*Jul 19 07:58:57.056: %EC-5-BUNDLE: Interface Te1/1/1 joined port-channel Po10
*Jul 19 07:58:57.152: %EC-5-BUNDLE: Interface Te2/1/1 joined port-channel Po10
*Jul 19 07:58:57.153: %EC-5-BUNDLE: STANDBY:Interface Te1/1/1 joined port-channel Po10
*Jul 19 07:58:57.252: %EC-5-BUNDLE: STANDBY:Interface Te2/1/1 joined port-channel Po10
*Jul 19 07:59:30.336: %EC-5-BUNDLE: Interface Te1/1/17 joined port-channel Po40
*Jul 19 07:59:30.787: %EC-5-BUNDLE: Interface Te2/1/17 joined port-channel Po40
*Jul 19 07:59:30.454: %EC-5-BUNDLE: STANDBY:Interface Te1/1/17 joined port-channel Po40
*Jul 19 07:59:30.887: %EC-5-BUNDLE: STANDBY:Interface Te2/1/17 joined port-channel Po40
*Jul 19 07:59:34.078: %EC-5-UNBUNDLE: Interface Te1/1/17 left the port-channel Po40
*Jul 19 07:59:34.135: %EC-5-UNBUNDLE: Interface Te2/1/17 left the port-channel Po40
*Jul 19 07:59:34.079: %EC-5-UNBUNDLE: STANDBY:Interface Te1/1/17 left the port-channel Po40
*Jul 19 07:59:34.154: %EC-5-UNBUNDLE: STANDBY:Interface Te2/1/17 left the port-channel Po40

AVID-CORE1-VSS(config-if-range)#



!! then see Po10 (SDA 1 TOP) is UP but Po40 (SDA 1 BOTTOM) is NOT
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met


u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SD) LACP Te1/1/2(D) Te2/1/2(D)
12 Po12(SD) LACP Te1/1/3(D) Te2/1/3(D)
13 Po13(SD) LACP Te1/1/4(D) Te2/1/4(D)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SD) LACP Te1/1/17(D) Te2/1/17(D)
41 Po41(SD) LACP Te1/1/18(D) Te2/1/18(D)
42 Po42(SD) LACP Te1/1/19(D) Te2/1/19(D)
43 Po43(SD) LACP Te1/1/20(D) Te2/1/20(D)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#

!! then Po40 (SDA 1 BOTTOM) is coming online

*Jul 19 08:01:16.903: %EC-5-BUNDLE: Interface Te1/1/17 joined port-channel Po40


*Jul 19 08:01:16.987: %EC-5-BUNDLE: Interface Te2/1/17 joined port-channel Po40
*Jul 19 08:01:16.989: %EC-5-BUNDLE: STANDBY:Interface Te1/1/17 joined port-channel Po40
*Jul 19 08:01:17.091: %EC-5-BUNDLE: STANDBY:Interface Te2/1/17 joined port-channel Po40
AVID-CORE1-VSS(config-if-range)#

AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum


Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met


u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)



2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SD) LACP Te1/1/2(D) Te2/1/2(D)
12 Po12(SD) LACP Te1/1/3(D) Te2/1/3(D)
13 Po13(SD) LACP Te1/1/4(D) Te2/1/4(D)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SD) LACP Te1/1/18(D) Te2/1/18(D)
42 Po42(SD) LACP Te1/1/19(D) Te2/1/19(D)
43 Po43(SD) LACP Te1/1/20(D) Te2/1/20(D)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#

D.1.2 ENGINE 1
!! NOW LET'S DO ENGINE 1; the procedure will be the same but with different PO
!! NUMBERS, and the intervening states are not shown,
!! nor are the steps to enable ENGINE 1 for LACP and restart the controllers
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#int ran t1/1/2,t1/1/18,t2/1/2,t2/1/18
AVID-CORE1-VSS(config-if-range)#no shut
AVID-CORE1-VSS(config-if-range)#
*Jul 19 08:07:51.399: %EC-5-L3DONTBNDL2: Te2/1/2 suspended: LACP currently not enabled on the
remote port.
*Jul 19 08:07:52.175: %EC-5-L3DONTBNDL2: Te2/1/18 suspended: LACP currently not enabled on
the remote port.
*Jul 19 08:07:56.444: %EC-5-L3DONTBNDL2: Te1/1/18 suspended: LACP currently not enabled on
the remote port.
*Jul 19 08:08:08.676: %EC-5-L3DONTBNDL2: Te1/1/2 suspended: LACP currently not enabled on the
remote port.
*Jul 19 08:09:55.827: %EC-5-L3DONTBNDL2: Te2/1/2 suspended: LACP currently not enabled on the
remote port.
*Jul 19 08:10:05.907: %EC-5-L3DONTBNDL2: Te2/1/18 suspended: LACP currently not enabled on
the remote port.
*Jul 19 08:10:58.564: %EC-5-BUNDLE: Interface Te1/1/18 joined port-channel Po41
*Jul 19 08:10:58.682: %EC-5-BUNDLE: STANDBY:Interface Te1/1/18 joined port-channel Po41
*Jul 19 08:11:00.455: %EC-5-BUNDLE: Interface Te2/1/18 joined port-channel Po41
*Jul 19 08:11:00.764: %EC-5-BUNDLE: Interface Te1/1/2 joined port-channel Po11
*Jul 19 08:11:00.556: %EC-5-BUNDLE: STANDBY:Interface Te2/1/18 joined port-channel Po41
*Jul 19 08:11:00.885: %EC-5-BUNDLE: STANDBY:Interface Te1/1/2 joined port-channel Po11
*Jul 19 08:11:02.536: %EC-5-BUNDLE: Interface Te2/1/2 joined port-channel Po11
*Jul 19 08:11:02.635: %EC-5-BUNDLE: STANDBY:Interface Te2/1/2 joined port-channel Po11
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met


u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)



Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SD) LACP Te1/1/3(D) Te2/1/3(D)
13 Po13(SD) LACP Te1/1/4(D) Te2/1/4(D)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SD) LACP Te1/1/19(D) Te2/1/19(D)
43 Po43(SD) LACP Te1/1/20(D) Te2/1/20(D)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#

D.1.3 ENGINE 2
!! ENGINE 2
!! and here is what happens when a mistake is made: one port was not turned on correctly
!! due to a typo. T2/1/13 is enabled instead of T2/1/3, so only three ports come up, and SH
!! ETHERCHANNEL SUM shows that port T2/1/3 is DOWN (D) while the other expected
!! ports are just suspended (s)
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#int ran t1/1/3,t1/1/19,t2/1/13,t2/1/19
AVID-CORE1-VSS(config-if-range)#no shut
AVID-CORE1-VSS(config-if-range)#
*Jul 19 08:22:57.280: %EC-5-L3DONTBNDL2: Te1/1/3 suspended: LACP currently not enabled on the
remote port.
*Jul 19 08:22:58.524: %EC-5-L3DONTBNDL2: Te1/1/19 suspended: LACP currently not enabled on
the remote port.
*Jul 19 08:23:02.080: %EC-5-L3DONTBNDL2: Te2/1/19 suspended: LACP currently not enabled on
the remote port.
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met


u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SD) LACP Te1/1/3(s) Te2/1/3(D)
13 Po13(SD) LACP Te1/1/4(D) Te2/1/4(D)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SD) LACP Te1/1/19(s) Te2/1/19(s)
43 Po43(SD) LACP Te1/1/20(D) Te2/1/20(D)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)



45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#int t2/1/3
AVID-CORE1-VSS(config-if)#no shut
AVID-CORE1-VSS(config-if)#int t2/1/13
AVID-CORE1-VSS(config-if)#shut
AVID-CORE1-VSS(config-if)#
*Jul 19 08:25:04.112: %EC-5-L3DONTBNDL2: Te2/1/3 suspended: LACP currently not enabled on the
remote port.
AVID-CORE1-VSS(config-if)#
AVID-CORE1-VSS(config-if)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met


u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SD) LACP Te1/1/3(s) Te2/1/3(s)
13 Po13(SD) LACP Te1/1/4(D) Te2/1/4(D)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SD) LACP Te1/1/19(s) Te2/1/19(s)
43 Po43(SD) LACP Te1/1/20(D) Te2/1/20(D)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if)#

!! NOW enable the ENGINE 2 for LACP and restart the controllers
!! first activity after 1.5 minutes
AVID-CORE1-VSS(config-if)#do sh clock
*11:30:41.139 AST Wed Jul 19 2017
AVID-CORE1-VSS(config-if)#
*Jul 19 08:32:03.396: %EC-5-L3DONTBNDL2: Te2/1/19 suspended: LACP currently not enabled on
the remote port.
*Jul 19 08:32:07.984: %EC-5-L3DONTBNDL2: Te2/1/3 suspended: LACP currently not enabled on the
remote port.
*Jul 19 08:33:07.351: %EC-5-BUNDLE: Interface Te1/1/19 joined port-channel Po42
*Jul 19 08:33:07.471: %EC-5-BUNDLE: STANDBY:Interface Te1/1/19 joined port-channel Po42
*Jul 19 08:33:09.147: %EC-5-BUNDLE: Interface Te2/1/19 joined port-channel Po42
*Jul 19 08:33:09.248: %EC-5-BUNDLE: STANDBY:Interface Te2/1/19 joined port-channel Po42
*Jul 19 08:33:12.008: %EC-5-BUNDLE: Interface Te1/1/3 joined port-channel Po12
*Jul 19 08:33:12.127: %EC-5-BUNDLE: STANDBY:Interface Te1/1/3 joined port-channel Po12
*Jul 19 08:33:13.720: %EC-5-BUNDLE: Interface Te2/1/3 joined port-channel Po12
*Jul 19 08:33:13.831: %EC-5-BUNDLE: STANDBY:Interface Te2/1/3 joined port-channel Po12
AVID-CORE1-VSS(config-if)#
AVID-CORE1-VSS(config-if)#do sh etherchannel sum



Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met


u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SD) LACP Te1/1/4(D) Te2/1/4(D)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SD) LACP Te1/1/20(D) Te2/1/20(D)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if)#do sh clock
*11:33:45.238 AST Wed Jul 19 2017
AVID-CORE1-VSS(config-if)#

!! the Portchannels are online after approx. 3 minutes but the engine is still in the restart
cycle.

D.1.4 ENGINE 3

!! and now for engine 3


!! the timestamps have been updated to local time
AVID-CORE1-VSS(config)#int ran t1/1/4, t1/1/20, t2/1/4, t2/1/20
AVID-CORE1-VSS(config-if-range)#do sh clock
*11:38:26.304 AST Wed Jul 19 2017
AVID-CORE1-VSS(config-if-range)#no shut
AVID-CORE1-VSS(config-if-range)#
*Jul 19 11:38:46.612: %EC-5-L3DONTBNDL2: Te1/1/20 suspended: LACP currently not enabled on
the remote port.
*Jul 19 11:38:47.239: %EC-5-L3DONTBNDL2: Te1/1/4 suspended: LACP currently not enabled on the
remote port.
*Jul 19 11:38:51.432: %EC-5-L3DONTBNDL2: Te2/1/20 suspended: LACP currently not enabled on
the remote port.
*Jul 19 11:39:00.400: %EC-5-L3DONTBNDL2: Te2/1/4 suspended: LACP currently not enabled on the
remote port.
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met



u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18


Number of aggregators: 18

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SD) LACP Te1/1/4(s) Te2/1/4(s)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SD) LACP Te1/1/20(s) Te2/1/20(s)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#

!! NOW enable ENGINE 3 for LACP and restart the controllers
!! first activity after 1.5 minutes
AVID-CORE1-VSS(config-if-range)#
*Jul 19 11:40:55.175: %EC-5-L3DONTBNDL2: Te2/1/20 suspended: LACP currently not enabled on
the remote port.
*Jul 19 11:40:55.819: %EC-5-L3DONTBNDL2: Te2/1/4 suspended: LACP currently not enabled on the
remote port.
*Jul 19 11:42:01.020: %EC-5-BUNDLE: Interface Te1/1/4 joined port-channel Po13
*Jul 19 11:42:01.141: %EC-5-BUNDLE: Interface Te1/1/20 joined port-channel Po43
*Jul 19 11:42:01.720: %EC-5-BUNDLE: Interface Te2/1/20 joined port-channel Po43
*Jul 19 11:42:01.895: %EC-5-BUNDLE: Interface Te2/1/4 joined port-channel Po13
*Jul 19 11:42:01.139: %EC-5-BUNDLE: STANDBY:Interface Te1/1/4 joined port-channel Po13
*Jul 19 11:42:01.260: %EC-5-BUNDLE: STANDBY:Interface Te1/1/20 joined port-channel Po43
*Jul 19 11:42:01.830: %EC-5-BUNDLE: STANDBY:Interface Te2/1/20 joined port-channel Po43
*Jul 19 11:42:02.001: %EC-5-BUNDLE: STANDBY:Interface Te2/1/4 joined port-channel Po13
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18
Number of aggregators: 18

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SU) LACP Te1/1/4(P) Te2/1/4(P)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SU) LACP Te1/1/20(P) Te2/1/20(P)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#

D.1.5 ENGINE 4

!! and NOW ENGINE 4


!! Here at 11:52:33 we can see that the Po is DOWN (D), as the interfaces are DOWN during the controller restart.

AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#int ran t1/1/5,t1/1/21, t2/1/5, t2/1/21
AVID-CORE1-VSS(config-if-range)#no shut
AVID-CORE1-VSS(config-if-range)#do sh clock
*11:50:32.242 AST Wed Jul 19 2017
AVID-CORE1-VSS(config-if-range)#
*Jul 19 11:50:40.744: %EC-5-L3DONTBNDL2: Te1/1/5 suspended: LACP currently not enabled on the
remote port.
*Jul 19 11:50:44.327: %EC-5-L3DONTBNDL2: Te1/1/21 suspended: LACP currently not enabled on
the remote port.
*Jul 19 11:50:46.832: %EC-5-L3DONTBNDL2: Te2/1/21 suspended: LACP currently not enabled on
the remote port.
*Jul 19 11:50:53.432: %EC-5-L3DONTBNDL2: Te2/1/5 suspended: LACP currently not enabled on the
remote port.
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18
Number of aggregators: 18

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SU) LACP Te1/1/4(P) Te2/1/4(P)
14 Po14(SD) LACP Te1/1/5(s) Te2/1/5(s)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SU) LACP Te1/1/20(P) Te2/1/20(P)
44 Po44(SD) LACP Te1/1/21(s) Te2/1/21(s)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#do sh clock
*11:51:16.817 AST Wed Jul 19 2017
AVID-CORE1-VSS(config-if-range)#
*Jul 19 11:52:33.504: %EC-5-L3DONTBNDL2: Te2/1/21 suspended: LACP currently not enabled on
the remote port.
*Jul 19 11:52:38.067: %EC-5-L3DONTBNDL2: Te2/1/5 suspended: LACP currently not enabled on the
remote port.
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18
Number of aggregators: 18

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SU) LACP Te1/1/4(P) Te2/1/4(P)
14 Po14(SD) LACP Te1/1/5(D) Te2/1/5(D)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SU) LACP Te1/1/20(P) Te2/1/20(P)
44 Po44(SD) LACP Te1/1/21(D) Te2/1/21(D)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#
*Jul 19 11:53:38.892: %EC-5-BUNDLE: Interface Te1/1/21 joined port-channel Po44
*Jul 19 11:53:38.939: %EC-5-BUNDLE: Interface Te2/1/21 joined port-channel Po44
*Jul 19 11:53:38.940: %EC-5-BUNDLE: STANDBY:Interface Te1/1/21 joined port-channel Po44
*Jul 19 11:53:39.064: %EC-5-BUNDLE: STANDBY:Interface Te2/1/21 joined port-channel Po44
*Jul 19 11:53:44.047: %EC-5-BUNDLE: Interface Te1/1/5 joined port-channel Po14
*Jul 19 11:53:44.055: %EC-5-BUNDLE: Interface Te2/1/5 joined port-channel Po14
*Jul 19 11:53:44.058: %EC-5-BUNDLE: STANDBY:Interface Te1/1/5 joined port-channel Po14
*Jul 19 11:53:44.194: %EC-5-BUNDLE: STANDBY:Interface Te2/1/5 joined port-channel Po14
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18
Number of aggregators: 18

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SU) LACP Te1/1/4(P) Te2/1/4(P)
14 Po14(SU) LACP Te1/1/5(P) Te2/1/5(P)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SU) LACP Te1/1/20(P) Te2/1/20(P)
44 Po44(SU) LACP Te1/1/21(P) Te2/1/21(P)
45 Po45(SD) LACP Te1/1/22(D) Te2/1/22(D)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#

D.1.6 ENGINE 5
!! AND now the final engine, Engine 5
AVID-CORE1-VSS(config-if-range)#int ran t1/1/6,t1/1/22,t2/1/6, t2/1/22
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh clock
*11:59:30.056 AST Wed Jul 19 2017
AVID-CORE1-VSS(config-if-range)#no shut
AVID-CORE1-VSS(config-if-range)#
*Jul 19 11:59:47.442: %EC-5-L3DONTBNDL2: Te1/1/6 suspended: LACP currently not enabled on the
remote port.
*Jul 19 11:59:55.577: %EC-5-L3DONTBNDL2: Te2/1/22 suspended: LACP currently not enabled on
the remote port.
*Jul 19 12:00:04.007: %EC-5-L3DONTBNDL2: Te2/1/6 suspended: LACP currently not enabled on the
remote port.
*Jul 19 12:00:04.280: %EC-5-L3DONTBNDL2: Te1/1/22 suspended: LACP currently not enabled on
the remote port.
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18
Number of aggregators: 18

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SU) LACP Te1/1/4(P) Te2/1/4(P)
14 Po14(SU) LACP Te1/1/5(P) Te2/1/5(P)
15 Po15(SD) LACP Te1/1/6(s) Te2/1/6(s)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SU) LACP Te1/1/20(P) Te2/1/20(P)
44 Po44(SU) LACP Te1/1/21(P) Te2/1/21(P)
45 Po45(SD) LACP Te1/1/22(s) Te2/1/22(s)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#do sh clock
*12:00:23.799 AST Wed Jul 19 2017
AVID-CORE1-VSS(config-if-range)#
*Jul 19 12:01:46.936: %EC-5-L3DONTBNDL2: Te2/1/22 suspended: LACP currently not enabled on
the remote port.
*Jul 19 12:01:52.179: %EC-5-L3DONTBNDL2: Te2/1/6 suspended: LACP currently not enabled on the
remote port.
*Jul 19 12:02:52.188: %EC-5-BUNDLE: Interface Te1/1/22 joined port-channel Po45
*Jul 19 12:02:52.340: %EC-5-BUNDLE: Interface Te2/1/22 joined port-channel Po45
*Jul 19 12:02:52.306: %EC-5-BUNDLE: STANDBY:Interface Te1/1/22 joined port-channel Po45
*Jul 19 12:02:52.455: %EC-5-BUNDLE: STANDBY:Interface Te2/1/22 joined port-channel Po45
AVID-CORE1-VSS(config-if-range)#
AVID-CORE1-VSS(config-if-range)#do sh etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 18
Number of aggregators: 18

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) - Te1/1/15(P) Te1/1/16(P) Te1/1/31(P)
Te1/1/32(P)
2 Po2(SU) - Te2/1/15(P) Te2/1/16(P) Te2/1/31(P)
Te2/1/32(P)
10 Po10(SU) LACP Te1/1/1(P) Te2/1/1(P)
11 Po11(SU) LACP Te1/1/2(P) Te2/1/2(P)
12 Po12(SU) LACP Te1/1/3(P) Te2/1/3(P)
13 Po13(SU) LACP Te1/1/4(P) Te2/1/4(P)
14 Po14(SU) LACP Te1/1/5(P) Te2/1/5(P)
15 Po15(SD) LACP Te1/1/6(D) Te2/1/6(D)
40 Po40(SU) LACP Te1/1/17(P) Te2/1/17(P)
41 Po41(SU) LACP Te1/1/18(P) Te2/1/18(P)
42 Po42(SU) LACP Te1/1/19(P) Te2/1/19(P)
43 Po43(SU) LACP Te1/1/20(P) Te2/1/20(P)
44 Po44(SU) LACP Te1/1/21(P) Te2/1/21(P)
45 Po45(SU) LACP Te1/1/22(P) Te2/1/22(P)
91 Po91(SU) LACP Te1/1/24(P) Te2/1/24(P)
92 Po92(SU) LACP Te1/1/25(P) Te2/1/25(P)
93 Po93(SD) LACP Te1/1/26(D) Te2/1/26(D)
94 Po94(SD) LACP Te1/1/27(D) Te2/1/27(D)

AVID-CORE1-VSS(config-if-range)#

!! THEN SAVE THE CONFIG, which is copied to the other VSS member.

AVID-CORE1-VSS(config-if-range)#do wr
Building configuration...
Compressed configuration from 20872 bytes to 7338 bytes[OK]
AVID-CORE1-VSS(config-if-range)#
*Jul 19 12:03:41.450: %C4K_REDUNDANCY-5-CONFIGSYNC: The private-config has been successfully
synchronized to the standby supervisor
*Jul 19 12:03:41.736: %C4K_REDUNDANCY-5-CONFIGSYNC: The startup-config has been successfully
synchronized to the standby supervisor
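!! (added sketch) To confirm the standby supervisor really is synchronized after the
!! save, "show redundancy" is a standard IOS check on a VSS pair:
AVID-CORE1-VSS(config-if-range)#do show redundancy
!! expect the peer unit in STANDBY HOT and the operating redundancy mode SSO
!! before treating the change as complete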

~SECTION END~



Appendix E – Useful articles
In no particular order or vendor preference, these are things I found while looking up other
stuff or related information.

E.1 Cabling, Optics and Transceivers


MELLANOX BLOG

http://www.mellanox.com/blog/author/bradsmith/

Part I of a three-part blog series on cables & transceivers:
http://www.mellanox.com/blog/2016/10/16-ways-linkx-dac-cables-are-breaking-out-all-over-servers-and-storage/

Part 2 in the 3-part series on Mellanox's high-speed interconnect products:
AOCs – Active Optical Cables: Why are 2 transceivers and fibers priced less than one connectorized transceiver?
http://www.mellanox.com/blog/2016/12/aocs-active-optical-cables-why-are-2-transceivers-and-fibers-priced-less-than-one-connectorized-transceiver/

NEVER FOUND PART 3 of 3

Do You Know about Active Optical Cable (AOC Cable)?
http://www.cables-solutions.com/do-you-know-about-active-optical-cable-aoc.html

Active Optical Cable (AOC) – Rising Star of Telecommunications & Datacom Transceiver Markets
https://community.fs.com/blog/active-optical-cable-aoc-rising-star-of-telecommunications-datacom-transceiver-markets.html

ACTIVE OPTICAL CABLE (AOC) EXPLAINED IN DETAILS!
https://www.fiberoptics4sale.com/blogs/archive-posts/95047430-active-optical-cable-aoc-explained-in-details

Avid does NOT test any AOC products (Active Optical Cables). While an AOC
might work between a switch and a relatively recent NIC (happily passing
NEXIS traffic), you are likely to encounter problems if it is connected
directly to a NEXIS engine; the Mellanox CX3 adapter (an old legacy
10/40G device in Mellanox terms, because there are so many newer 10/25G
products) does not support any AOC vendor that I am aware of. AOC is a
great, reduced-cost short-haul fibre method with "sealed" transceivers at
each end and hence fixed lengths up to 150m (for MMF variants).
However, just like TWINAX (which was not supported with ISIS 7x00), the
firmware on the terminating devices needs to support AOC too.
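
If you want to see what a Linux host actually makes of an AOC (or any optic), the
module EEPROM can be read with ethtool. A minimal sketch; the interface name
ens1f0 is an assumption, and the NIC driver must support module EEPROM access:

ethtool -m ens1f0
# decoded identifier, vendor, part number and cable length
ethtool -m ens1f0 raw on | hexdump -C | head
# raw EEPROM dump, useful when the decode is not supported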



QSFP-DD vs OSFP: The Wave of the Future 400G Transceiver
http://www.fiber-optic-transceiver-module.com/qsfp-dd-vs-osfp-the-wave-of-the-future-400g-transceiver.html

OSFP interconnect: what is it?
https://www.connectortips.com/osfp-interconnect-consortium/

~SECTION END~

Appendix F – Which Cisco Operating system to use?


The information here for CATALYST was imported from NETREQS V1.x, so it might seem
quite old, but the rationale still applies, as later versions have been released since the time of
writing (approx. spring 2016).

Many of the principles apply equally well to NXOS, and the minimum recommended version
is listed elsewhere in this document. There is no sub-section for NXOS in this Appendix F at
this time.

Avid does not routinely test any new IOS or NXOS version, but I list ones which I know have
given Avid problems, and hence these should be avoided.

F.1.2 Which version of (Cisco) IOS is supported?


With regard to the switches that Avid supplies, there are no special commands that require
specific IOS versions. Avid does not routinely test newer versions of IOS. When Avid tests a
switch, the version supplied will be the minimum software version supported. This
“minimum version” principle applies to both QUALIFIED switches and APPROVED
switches. Specific hardware options in a vendor switch family may well dictate software
versions. In rare situations when a network vendor rectifies a specific DEFECT, then an
upgrade may be required or recommended.

Note: this principle applies to Foundry/Brocade & Dell/Force10 products too,
but their software names are different to IOS.

There are rare situations in extended deployments (not direct connect
to ISIS with a vanilla configuration) where special features require
certain commands. One example of this is with Foundry and IEEE 802.1Q
trunks on 10G links, as described elsewhere in this document.

Minimum s/w versions are described in The Avid ISIS 7000 Ethernet Switch Reference
Guide available at:
http://resources.avid.com/SupportFiles/attach/AvidISIS/AvidNetworkSwitchGuide_v4.7.3_Rev_F.pdf



NOTE: C4900M, C4948E and C4500E/SUP6 should avoid the s/w versions
below: 15.1(1)SG (APR 2012) and 15.1(2)SG (NOV 2012).
These versions have a bug in a new feature called Bidirectional Forwarding
Detection (BFD), which can drop ISIS packets.

The aforementioned bug is corrected in:

ENTERPRISE SERVICES W/O CRYPTO
cat4500e-entservices-mz.151-2.SG1.bin   24-JUL-2013

IP BASE W/O CRYPTO
cat4500e-ipbase-mz.151-2.SG1.bin   24-JUL-2013

http://software.cisco.com/download/release.html?mdfid=283027810&flowid=3592&softwareid=280805680&release=15.1.2-SG1&relind=AVAILABLE&rellifecycle=ED&reltype=latest

C4900M, C4948E, C4948-10GE and C4500E devices using 12.2.x IOS will not
exhibit the BFD problems and do not need to be upgraded.
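
A quick way to check which software a C4900M/C4948E is actually running, and
whether BFD appears anywhere in its configuration, is sketched below (standard
IOS show commands; the hostname is an example):

AVID-C4948E#show version | include IOS
AVID-C4948E#show running-config | include bfd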

F.1.2.1 What is the minimum software version recommended?


As mentioned in Section F.1.2 above, Avid does not test all the IOS versions; this would be a
massive and untenable undertaking.

When projects purchase their own product, I am often asked what I recommend as a suitable
software version. Well, the answer is "IT DEPENDS….", which at first sight is unhelpful.
First, understand that for Avid solutions the additional features in a new release are
extremely unlikely to be needed by the Avid applications, so the major version and point
version may be of little relevance, but the maintenance version might be critical in terms of
avoiding bugs. Secondly, consider that when you are reading this content, versus when it was
written, will have a significant bearing on any suggestion of version number, as versions keep
advancing with all vendors. Also, there may be corporate policies at the destination
deployment that will restrict the choices that can be made.

Using Cisco as an example, see the release information from their website:
http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-xr-software/product_bulletin_c25-478699.html

So the key thing when looking at release notes is to pick a recent version: one that is above
the minimum maintenance version, not too new and not too old, and that has a long
maintenance window.

So I WOULD ALWAYS AVOID a version that was X.0.0, and one that was X.Y.0.
Hypothetical versions 8 and 9 are used below for explanation, and we will assume that the
minimum version Avid supported was 7.8.9, and that this was 5 years before the reading date.
Hence let us consider:

Hypothetical Version 8, introduced 3 years before the reading date


8.0.x = AVOID
8.1.0 = AVOID
8.1.2 = GOOD
8.1.3 = GOOD
8.2.0 = AVOID
8.2.1 = GOOD
8.2.2 = GOOD
8.3.0 = AVOID
8.3.1 = GOOD
8.3.3 = GOOD
8.3.4 = GOOD – LAST VERSION BEFORE V9

Hypothetical Version 9, introduced 1 year before the reading date

9.0.x = AVOID
9.1.0 = AVOID
9.1.1 = GOOD
9.1.2 = GOOD
9.2.0 = AVOID – LATEST VERSION



Generally I will also avoid anything that is too new. I like it to have at least 3 months in the
field, and I would avoid any new major release for 6 months.

So again, to use a hypothetical situation: if the 9.2.x train was released on 1/JAN/2001, I
would not even consider it until 1/JUL/2001, and would want 9.1.1 as a minimum.

F.1.2.2 Which IOS should I use on C4948E/C4900M?


First read all of section F.1.2 above because it has a bearing on the suggestions below. Also
consider the version mapping with the C45xx products below; the versions should ideally be
equivalent when the products are mixed (which will frequently be the reality).

C4500-X / C4500E SUP7/8               C4900M / C4948E
3.3.0    first published 16-APR-2012  15.1.1    first published 16-APR-2012
3.4.1    first published 24-JUL-2013  15.1.2    first published 24-JUL-2013
3.5.1    first published 26-NOV-2013  15.2.1E1  first published 26-NOV-2013
3.6.0    first published 27-JUN-2014  15.2.2E   first published 27-JUN-2014
3.7.0    first published 10-DEC-2014  15.2.3ED  first published 10-DEC-2014
3.8.xE   first published 02-OCT-2015  15.2.4ED  first published 02-OCT-2015

Avid does not MANDATE any version beyond the MINIMUM tested, other than specific
versions to avoid as listed by Avid, or versions deprecated by Cisco.

At the time of writing this section [APRIL 2016], below is the information from Cisco:
http://www.cisco.com/c/en/us/support/switches/catalyst-4900-series-switches/products-release-notes-list.html

I would be considering IOS 15.2(3)E2 or 15.2(2)E4; the most recent updates for them are 2
months old (or more) and the base software has a maturity level that is comforting. IOS
15.2(4)E was first published October 1, 2015 and last updated January 29, 2016, so this new
kid on the block is probably too young for my preferences. The minimum version Avid
supports is 12.2(46), and the latest 12.2(54) version dates from AUG 2014, so this would be
suitable too.
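
Before committing to an image, it is also worth checking what the switch currently
boots and whether there is room for the new file. A minimal sketch (standard IOS
commands; the hostname is an example):

AVID-C4900M#show bootvar
AVID-C4900M#dir bootflash: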

BUT, by the time you read this, in 4 weeks, 3 months or 2 years on (you get the gist I hope),
the paragraphs/screenshots immediately above will almost certainly be superseded by newer
information. The point is to apply the principles.



Also be aware that some upgrades from older versions might need a
ROMMON upgrade too; if in doubt, set up a TFTP boot to see if the desired
code will RUN before doing the REAL upgrade into NVRAM via TFTP.
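
As a sketch only: a typical TFTP test boot from ROMMON, followed by the permanent
upgrade, might look like the lines below. The IP addresses, image name and ROMMON
variable names are illustrative assumptions and vary by platform; check the platform
ROMMON documentation before relying on them.

rommon 1> set IP_ADDRESS=192.168.1.2
rommon 2> set IP_SUBNET_MASK=255.255.255.0
rommon 3> set DEFAULT_GATEWAY=192.168.1.1
rommon 4> set TFTP_SERVER=192.168.1.10
rommon 5> boot tftp://192.168.1.10/cat4500e-ipbase-mz.151-2.SG1.bin
!! if the image boots and runs cleanly, copy it into bootflash and point the
!! boot variable at it for the REAL upgrade:
Switch#copy tftp://192.168.1.10/cat4500e-ipbase-mz.151-2.SG1.bin bootflash:
Switch(config)#boot system flash bootflash:cat4500e-ipbase-mz.151-2.SG1.bin
Switch#write memory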

Cisco Catalyst C4500E EoS announcement, 31 OCT 2019:
https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-743088.html

F.1.2.3 Which IOS-XE should I use on C4500-X?


First read all of section F.1.2 above because it has a bearing on the suggestions below. Also
consider the version mapping with the C49xx products in F.1.2.2 above; the versions should
ideally be equivalent when the products are mixed (which will frequently be the reality).

Avid does not MANDATE any version beyond the MINIMUM tested, other than specific
versions to avoid as listed by Avid, or versions deprecated by Cisco.

At the time of writing this section [APRIL 2016], below is the information from Cisco:
http://www.cisco.com/c/en/us/support/switches/catalyst-4500-x-series-switches/products-release-notes-list.html [STILL VALID FEB 2019]

I would be considering IOS-XE 3.7.x or 3.6.x; the most recent updates for them are 2 months
old and the software has a maturity level that is comforting. IOS-XE 3.8.x was first published
October 1, 2015 and last updated April 25, 2016, so this new kid on the block is probably too
young for my preferences. The minimum version Avid supports is 3.4.1, and the latest 3.4.x
version is IOS-XE 3.4.7SG (Dec 07, 2015), so this would be suitable.

As for IOS-XE 3.5, it has not been updated for almost 2 years.

BUT, by the time you read this, in 4 weeks, 3 months or 2 years on (you get the gist I hope),
the paragraphs/screenshots immediately above will almost certainly be superseded by newer
information. The point is to apply the principles.

Also be aware that some upgrades from older versions might need a
ROMMON upgrade too; if in doubt, set up a TFTP boot to see if the desired
code will RUN before doing the REAL upgrade into NVRAM via TFTP.

F.1.2.4 Which IOS should I use on C4948-10GE?


First read all of section F.1.2 above because it has a bearing on the suggestions below.

As the C4948-10GE has been end of sale since AUG 2013, is approaching the end of
vulnerability S/W support, and has less than 2 years until end of HW support, this is a
difficult question. The best advice is to replace it with a C4948E or C4500-X (or both), but in
many cases that cannot be done in the short term.

As Avid no longer sells this device, and it is only really likely to be used as a "dumb" Layer 2
switch with good buffers for ISIS clients, cascaded from a newer, higher-function switch, it is
not possible to give advice as comprehensive as for other currently sold products. Hence
keeping it running at 12.2(25)EWA8 or 12.2(46) will be satisfactory for these duties. There
are a small number of legacy sites, with little need for 10G connections, still using these old
switches as an L3 primary switch; they should really be considering a MAJOR upgrade as
soon as possible rather than worrying about how to upgrade a C4948-10GE IOS. The
running IOS version is sufficient to provide all the necessary software features needed by a
currently running legacy ISIS implementation.



Assuming that the C4948-10GE was supplied by Avid, it will probably be running
12.2(25)EWA8, and the highest it can go WITHOUT a ROMMON upgrade is 12.2(46). The
technical documentation and release notes for the C4948-10GE and C4500-Classic (SUP5
and SUP2) are very complex.

Also be aware that some upgrades from older versions might need a
ROMMON upgrade too; if in doubt, set up a TFTP boot to see if the desired
code will RUN before doing the REAL upgrade into NVRAM via TFTP.

~SECTION END~

Appendix Z Full Revision history


Note: Holder for search engines: The Flockenator, Stingerator/Stingrator and Nexirator are internal system modelling tools created by Avid
for use by Internal Sales.

NETREQS this word is here for search ENGINES

Revision history
Note: the version number of this document DOES NOT directly correlate to any ISIS or
Interplay Production version.

Version, Name & Comment / DATE

Initial Issue V1.0 – 04 July 2007 – David Shephard
24 version issues: see Appendix Z in NETREQS V1.23

Initial Issue V2.0 – 15 DEC 2018 – David Shephard
Major reformatting, re-write and removal of ISIS and Interplay content
(which remains available in V1.23)
V1.23 – 15 DEC 2017 – David Shephard
340 pages (from 324) – FINAL VERSION of V1.x
UPDATE & Restructure SECTION 11 for move to NETREQS V2
ADD 12.0 Network Designs for NEXIS (OLD SECTION 12 becomes SECTION 13)
UPDATE (and correct typo) 1.4.19.2 New product naming for Nexus 6000 family
UPDATE 1.5.7.3 Interim testing Nexus 93000
UPDATE 1.5.4 Cisco Nexus 5500/5000

V2.x for NEXIS and MEDIACENTRAL – COMING VERY SOON
ADD 11.2 Architecturally Capable Network switches (similar content at the
beginning of section 1.4 and 1.5)

V2.0 – 15 DEC 2017 – David Shephard – 79 pages (from 340 in V1.23)
Restructure documents and remove legacy ISIS and Interplay information,
which remains available in V1.23
Section 11 in V1.23 is Section 1 in V2.0 Avid NEXIS specific information
Section 12 in V1.23 is Section 2 in V2.0 Network Designs for NEXIS
Section 13 in V1.23 is Section 3 in V2.0 Virtual Machines and Blade servers
V2.1 – 03 APR 2018 – up to 91 pages
V2.1 NOTES
Change section 1 name to Avid Nexis Requirements
ADD 1.0.3 Zones - an evolved definition
ADD 1.3.1 Partially Tested Switches
ADD 1.7.1 Catalyst 9000
ADD 4.0 Wide Area and Data Center Networking
ADD 4.1 MPLS
ADD 4.2 Wavelength-division multiplexing
ADD 4.3 Software Defined Networking –SDN
ADD 4.4 VXLAN
ADD 4.5 Spine/Leaf Networks
ADD 4.6 Adapter Teaming

ADD B.33 Cisco Nexus vPC best practices


ADD B.34 Cisco Nexus 93180LC port breakout
ADD 1.3.1.2 Cisco Nexus 9348-GC-FXP Field testing (FEB 2018)
UPDATE 1.3 Architecturally Capable Network switches

ADD 1.8 Network Interface Cards

V2.2 – 05 OCT 2018 – 103 pages
UPDATE 1.0.1 NEXIS and Latency
UPDATE Section 1.1 Qualified Switches for Avid NEXIS
ADD Section 1.9 Airspeed 5500 (3rd GENERATION) network connection
UPDATE 2.3 Custom/Deployed designs - CISCO
UPDATE 2.4 Custom/Deployed designs - JUNIPER
ADD 2.5 Custom/Deployed designs – ARISTA
ADD Appendix E – Useful articles
UPDATE 1.8 Network Interface Cards
UPDATE 1.6.2 Chassis Based Switch Families
ADD 5.1 Media Central UX Bandwidth requirements
UPDATE B.22 Speed setting on Switch ports

V2.3 – 26 APR 2019 – 132 pages
UPDATE: 1.0.1 NEXIS and Latency
UPDATE: 1.1 Qualified Switches for Avid NEXIS
UPDATE: 1.3 Architecturally Capable Network switches
UPDATE: 1.7.1 Catalyst 9000
ADD 1.8.4 10GBASET Transceivers
UPDATE 4.6.4 Linux Bonding

ADD 5.2 Media Central Cloud UX Server Connectivity requirements
ADD Section 7.0 Custom Testing
UPDATE: B.5 Deploy BPDU guard on all ports that use PORTFAST
ADD B5.2 Using spanning-tree port type edge with Cisco Nexus and AVID NEXIS
UPDATE: B.20 Multicast Propagation Does Not Work in the same VLAN in Catalyst and NEXUS Switches
ADD B.20.3 – UCS Blade Servers Multicast Propagation
UPDATE B.22 Speed setting on Switch ports
UPDATE B.29 Service Timestamps and Time setting in Dell N3024/48
UPDATE B.30 Service Timestamps and Time setting in DELL S4048
UPDATE B.31 HOW TO FIND IP ADDRESS OF A MAC ADDRESS
ADD B.36 LINUX CONFIGURATION FOR TEAMED ADAPTERS
ADD B.37 NEXUS WATCH COMMAND (FIELD-TIP)
ADD B.38 AUTOMATING BACKING UP CISCO CONFIG FILES
ADD Appendix F – Which Cisco Operating system to use? (IMPORTED
SECTION from NETREQS V1.x)
ADD B.39 NEXUS 93xxx USEFUL COMMANDS FROM SHOW TECH
ADD INFO on SPICEWORKS & LORIOT PRO 8 for SNMP and LOG MANAGEMENT

ADD INFO on Catalyst auto backup for config file (SEE END OF DOC, AND SAME FOR NEXUS)

V2.4 – 11 OCT 2019 – 153 pages
UPDATE 1.2 Approved Switches for Avid NEXIS
ADD 1.5.2 Breakout cables NEXIS and NEXUS 93180
ADD 1.8.5 I350-T2 NIC EoL – Replaced by I350T2V2
UPDATE 1.8.4 10GBASET Transceivers
UPDATE 2.2.2 MLAG/VSS

ADD 2.3.3.1 Cisco Nexus 9336C with Nexus N93108TC/9348GC


ADD 2.6 SPINE/LEAF - Custom/Deployed designs
ADD 4.7 Cisco Transceiver Enterprise-Class versions
ADD 4.8 Jumbo Frames and Avid applications
ADD B.20.4 Some other useful Multicast URL & Information
MINOR UPDATE: B.36 LINUX TEAM CONFIGURATION FOR TEAMED
ADAPTERS
ADD B.34.3 Optical breakout useful information
ADD B.34.4 TWINAX breakout useful information
ADD B.39.2 Show interface counters - Errors only
ADD B.39.3 NEXUS copy run start SHORTCUT
ADD B.40 NEXUS FX2 Models and ISIS STORAGE LIMITATIONS
ADD B.41 Using DHCP with Avid Applications
ADD B.42 IP address allocation with NEXIS and MediaCentral
ADD B.43 Navigating Cisco NXOS versions

V2.5 – 13 FEB 2020 – 175 pages
UPDATE 1.1 Qualified Switches for Avid NEXIS
>>> C4500X end of life announcement
UPDATE 1.2 Approved Switches for Avid NEXIS
ADD 1.10 NAT/PAT Incompatibility
ADD 1.7.2 Catalyst 9000 - B series


ADD 1.5.3 Breakout cables NEXIS and DELL S4048
ADD 4.6.5 Teaming with FastServe Ingest
UPDATE B.20.2.1 – Field Knowledge NEXUS & Multicast
UPDATE B.34 Cisco Nexus 93000 series Port Breakout & QSA
UPDATE B.34.3 Optical breakout useful information

ADD B.34.5 QSA adapter
UPDATE B.39.2 Show interface counters - Errors only
ADD B.39.4 NEXUS other useful alias SHORTCUTS
ADD B.44 FIELD TIP: large file sending
ADD B.45 FIELD TIP: Upgrading Nexus 9000 switch firmware
ADD B.46 Multicast propagation on DELL switches
ADD B.47 Multicast propagation on Arista switches
ADD B.48 Useful cabling information

V2.6 – 20 JUL 2020 – 173 pages
UPDATE section 1.3 in regard of minimum S/W version for NXOS on 93000 FX series, to use 7.0(3)I7(6)
ADD 1.5.4 Avid supplied 40G Transceivers and NEXIS
ADD 1.5.5 3rd Party Transceivers and Switch Vendors
UPDATE 1.10.1 Kubernetes – the “core” of the “network” problem
UPDATE 2.2.1 The Traditional/Legacy way
UPDATE 2.2.2 MLAG/vPC/VSS Single Controller
UPDATE 2.2.3 MLAG/vPC/VSS Dual Controller
ADD 4.9 NO specific VPN requirements for use with Thin Client applications
UPDATE B.20.2 – Nexus switches Multicast Propagation
UPDATE B.33 Cisco Nexus vPC Best Practices
UPDATE B.39.1 SHOW COMMANDS NEXUS 93xxx USEFUL
COMMANDS FROM SHOW TECH

ADD B.49 LACP for NEXIS clients – is it supported?

V2.7 – 27 JAN 2021 – 191 pages
ADD 1.1.1 Issues to be aware of with Dell S4100 Series Switches
UPDATE 1.10.1 Kubernetes – the "core" of the "network" problem with MCCUX
ADD 1.7.3 Catalyst 9000 - B series – USING SOFTMAX ONLY
UPDATE 1.8.3 NICs in a VM environment – applies to bare metal too
ADD 4.6.4.1 LINUX TEAMING
UPDATE B.20.2.2 – Field Knowledge NEXUS & Multicast PART 2
UPDATE B.25 Serial connection alternatives to DB9 using USB (FIELD-TIP)
ADD B25.1. Use the management network on NEXUS switches & remote
connection
ADD B.36.6 DEPLOYMENT CONSIDERTATIONS with MEDIA CENTRAL
CLOUD UX
ADD B.46.1 DELL 0S10 IGMP SNOOPING
ADD B.50 Flow control with AVID Nexis storage – is it needed?
ADD B.51 Flow control in Cisco NEXUS switches with AVID Nexis storage
ADD B.52 Flow control in Dell S4048 switches with AVID Nexis storage
ADD B.53 Mellanox ConnectX-3 adapter firmware version in NEXIS
ADD B.54 DELL S4100 OS10 vs. OS9 LACP and VLT commands

V2.8 – 24 MAY 2021 – 222 pages
ADD 1.1.2 Dell S4100 Series Switches Model Variants
UPDATE 1.3.0 Known Cisco Issues impacting Avid Storage solutions
ADD 1.3.1.5 Cisco Nexus 93180YC-FX3
ADD 1.5.6 Gigabit Media Converters
UPDATE 1.8.3 NICs in a VM environment – applies to bare metal too
UPDATE 1.6.1 1U Switch Families
UPDATE 1.10.1 Kubernetes – the “core” of the “network” problem with
MCCUX (added paragraph at end.)

ADD 1.10.2 MCCUX “Kubernetes” – NO LONGER USING MULTICAST
MINOR UPDATE B.20 Multicast Propagation Does Not Work in the same
VLAN in Catalyst and NEXUS Switches
ADD 2.5.2 ARISTA - PROVEN DEPLOYMENTS WITH NEXIS
UPDATE B.20.2.1 – Field Knowledge NEXUS & Multicast PART 1
UPDATE B.36.2 TEXT of COMMANDS for LACP
SIMILAR UPDATE B.36.4 LACP TEAMING CONCLUSIONS
SIMILAR UPDATE B.49 LACP for NEXIS clients – is it supported?
UPDATE B.36 (.0) LINUX TEAM CONFIGURATION FOR TEAMED
ADAPTERS
ADD B.36.7 CHECKING/DEBUGGING LACP BOND CONNECTION
STATUS in LINUX
ADD B.55 CISCO SWITCH HARDWARE BEACONS
ADD B.56 DELL N32224 FIRST SETUP
ADD B.57 DELL N3024 N2024 DEFAULT INTERFACE COMMANDS
ADD B.58 DELL N3024 N2024 PASSWORD RECOVERY

ADD INFO on Catalyst auto backup for config file (SEE END OF DOC, AND SAME FOR NEXUS)

Do not try this at home 

Note: Tips from the field. Try this at home ☺

~END~
