DCNI-1
Implementing Cisco Data Center Network Infrastructure 1
Volume 1, Version 2.0
Student Guide
Text Part Number: 97-2673-01

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices.

DISCLAIMER WARRANTY: THIS CONTENT IS BEING PROVIDED "AS IS." CISCO MAKES AND YOU RECEIVE NO WARRANTIES IN CONNECTION WITH THE CONTENT PROVIDED HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER PROVISION OF THIS CONTENT OR COMMUNICATION BETWEEN CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL IMPLIED WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE. This learning product may contain early release content, and while Cisco believes it to be accurate, it falls subject to the disclaimer above.

Students, this letter describes important course evaluation access information!

Welcome to Cisco Systems Learning. Through the Cisco Learning Partner Program, Cisco Systems is committed to bringing you the highest-quality training in the industry. Cisco learning products are designed to advance your professional goals and give you the expertise you need to build and maintain strategic networks.

Cisco relies on customer feedback to guide business decisions; therefore, your valuable input will help shape future Cisco course curricula, products, and training offerings. We would appreciate a few minutes of your time to complete a brief Cisco online course evaluation of your instructor and the course materials in this student kit. On the final day of class, your instructor will provide you with a URL directing you to a short post-course evaluation. If there is no Internet access in the classroom, please complete the evaluation within the next 48 hours or as soon as you can access the web.

On behalf of Cisco, thank you for choosing Cisco Learning Partners for your Internet technology training.

Sincerely,
Cisco Systems Learning

Table of Contents

Volume 1

Course Introduction
  Overview
  Learner Skills and Knowledge
  Course Goal and Objectives
  Course Flow
  Additional References
  Cisco Glossary of Terms
  Your Training Curriculum

Module 1: Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches  1-1
  Overview  1-1
  Module Objectives  1-1

Lesson 1: Describing Data Center Architecture for Cisco Catalyst 6500 and 4900 Series Switches  1-3
  Overview  1-3
  Objectives  1-3
  Data Center Evolution  1-4
  Network  1-6
  Power and Cooling Issues  1-8
  Agility  1-9
  Resilience  1-9
  Cost-Effectiveness  1-9
  Centralized Design  1-12
  Decentralized Design  1-12
  Recentralized Design  1-12
  The Enterprise Composite Network Model  1-13
  Enterprise Campus  1-13
  Enterprise Edge  1-14
  Service Provider Edge  1-14
  Access Layer  1-15
  Aggregation Layer  1-15
  Core Layer  1-15
  SOA Overview  1-17
  SOA Impacts  1-17
  SONA Overview  1-17
  Applications  1-18
  Abstraction  1-18
  Infrastructure  1-18
  The Cisco Data Center Network Architecture  1-19
  Increase Productivity and Efficiency While Reducing Costs  1-19
  Increase Resiliency and Business Agility  1-20
  Improve Customer Relationships  1-20
  Increase Revenue and Maximize Business Opportunities  1-20
  Summary  1-25

Lesson 2: Describing and Positioning the Cisco Catalyst 6500 and 4900 Series Switches  1-27
  Overview  1-27
  Objectives  1-27
  Introduction to Cisco Catalyst 4900 Series Switches  1-28
  Exceptional Reliability  1-29
  Wire-Speed Performance  1-29
  Layer 2 and Layer 3 Services  1-29
  Comprehensive Management  1-29
  FHRP Operation  1-32
  Hot Standby Router Protocol  1-32
  Virtual Router Redundancy Protocol  1-33
  Gateway Load Balancing Protocol  1-33
  Half Card Options  1-34
  Mixing TwinGig and X2 Modules  1-36
  Cisco Catalyst 4900 Series Switch Architecture  1-42
  Packet Flow Through Switch  1-43
  Intelligent Packet Processor  1-44
  Very Fast Forwarding Engine  1-44
  Packet Flow Through Switch  1-44
  Cisco Catalyst 4900 Series Switch QoS Overview  1-45
  QoS Processing  1-46
  Introduction to Cisco Catalyst 6500 Series Switches  1-48
  Modular Architecture  1-48
  Features and Benefits  1-49
  Chassis Overview  1-50
  Cisco Catalyst 6503-E Switch  1-51
  Cisco Catalyst 6504-E Switch  1-51
  Cisco Catalyst 6506-E Switch  1-51
  Cisco Catalyst 6509-E Switch  1-52
  Cisco Catalyst 6509-V-E Switch  1-52
  Supervisor Engines  1-55
  Ethernet Line Cards  1-56
  Services Modules  1-56
  WAN Line Cards  1-56
  Positioning Cisco Catalyst 4900 and 6500 Series Switches in the Cisco Data Center 3.0 Network  1-58
  Summary  1-61

Lesson 3: Describing the Cisco Catalyst 6500 Series Switch Supervisors  1-63
  Overview  1-63
  Objectives  1-63
  Cisco Catalyst 6500 Series Switch Supervisor Architecture Overview  1-64
  Switch Performance Metrics  1-64
  Catalyst 6500 Series Switch Backplane Architecture  1-65
  Comparing Flow-Based and Cisco Express Forwarding Architectures  1-71
  Catalyst 6500 Series Switch Supervisor Engine 720 with PFC3A/B/BXL  1-77
  Switching Architecture  1-78
  Forwarding Architecture  1-78
  Supervisor 720  1-79
  Supervisor 720-3B  1-79
  Supervisor 720-3BXL  1-80
  Catalyst 6500 Series Supervisor Engine 720 Switching Architecture
  Bandwidth of Duplex Communications
  Route Processor
  Switch Processor  1-87
  Cisco Catalyst 6500 Series Switch Supervisor Engine 720-10G-3C/CXL
  Operating System  1-95
  EtherChannel Load Balancing Modes  1-97
  EtherChannel VLAN ID Hash  1-97
  VSS Benefits
  VSS Restrictions  1-102
  Catalyst 6500 Series Switch Supervisor Engine 32  1-104
  Catalyst 6500 Series Supervisor Engine 32 Options  1-105
  Integrated PFC3B  1-106
  Integrated MSFC2a  1-106
  PISA Overview  1-108
  Stateful Packet Inspection with NBAR  1-110
  Flexible Packet Matching  1-110
  Protocol Discovery  1-111
  Protocol Definition Language Module  1-111
  Implementation Consideration  1-111
  Catalyst 6500 Series Switch Supervisor Engine Operating System
  Native Mode
  Hybrid Mode
  Catalyst Operating System Software Mode
  Catalyst 6500 Series Supervisor Engine 720 and 32 Operating System Mode
  Catalyst 6500 Series Supervisor Engine 720-10G and 32-PISA Cisco IOS Image Synchronization
  Summary

Lesson 4: Identifying the Cisco Catalyst 6500 Series Switch Module and Power Supply Options
  Overview
  Objectives
  Cisco Catalyst 6500 Series Switch Line Cards Overview
  Ethernet Line Cards
  WAN Line Cards
  Catalyst 6500 Series Switch Line Card Architecture
  Cisco Catalyst 6500 Series Switch Line Card Design Considerations
  Deploying Catalyst 6500 Series Switch Line Cards
  Example 1
  Example 2
  Bus or Flow-Through Mode
  Compact Mode
  Truncated Mode
  Core Layer
  Distribution Layer
  Access Layer
  Catalyst 6500 Series Switch Service Module Overview
  Security Services Modules
  Application Networking Services Modules
  Wireless Services Modules
  IP Telephony Services Modules
  Network Monitoring Services Modules
  Cisco Catalyst 6500 Series Switch Power Supplies
  Cisco Catalyst 6500-E Series Switch Chassis Power Supply
  Software Redundancy Configuration
  Line Cards In-Line PoE
  Summary

Lesson 5: Implementing Cisco Catalyst 6500 VSS 1440
  Overview
  Objectives
  VSS 1440 Overview
  Challenges VSS 1440 Addresses
  VSS 1440 Architecture
  Virtual Switch Domain ID
  Virtual Switch Roles
  Control and Data Plane
  Router MAC Address
  VSL Traffic
  Virtual Switch Link Protocol
  VSL Initialization
  VSLP Ping
  Hash Distribution Algorithm
  Hardware Requirements
  PFC and DFC Requirements
  VSS 1440 Operation
  VSL Initialization
  System Initialization
  RPR and SSO Redundancy
  Dual-Active Detection Using Enhanced PAgP
  Dual-Active Detection Using IP-BFD  1-207
  Deploying VSS 1440  1-209
  Configuring VSS 1440  1-212
  Saving Standalone Configurations  1-212
  Validating the PFC Operational Mode  1-215
  Converting Switches  1-215
  Merging Configurations  1-216
  Summary  1-226

Lesson 6: Implementing Cisco IOS Software Modularity  1-227
  Overview  1-227
  Objectives  1-227
  Cisco Catalyst 6500 Series Switch Modular Cisco IOS Overview  1-228
  Implementing Cisco IOS Software Modularity  1-234
  Using Cisco IOS Software Modularity  1-243
  Rollback  1-252
  Repackaging  1-252
  Summary  1-255

Lesson 7: Implementing NetFlow  1-257
  Overview  1-257
  Objectives  1-257
  NetFlow and NDE Overview  1-258
  Sampled NetFlow  1-260
  NetFlow Aggregation  1-260
  Per-Interface NetFlow Support  1-268
  NetFlow for IPv6  1-268
  NetFlow Top Talkers  1-268
  NetFlow IPv4 Multicast Support  1-269
  Configuring NetFlow and NDE  1-273
  Summary  1-281

Lesson 8: Implementing QoS  1-283
  Overview  1-283
  Objectives  1-283
  Cisco Catalyst 6500 Series Switch QoS Overview  1-284
  Ingress QoS Processing  1-290
  Untrusted Ports (Default)  1-290
  Trusted Ports  1-290
  Ingress QoS Policing  1-296
  Egress QoS Policing  1-301
  Port- vs. VLAN-Based QoS  1-302
  Modular QoS CLI  1-303
  VSS 1440 and QoS  1-304
  Configuring QoS  1-306
  Policy Map Command Restrictions  1-317
  Policy Map Class Command Restrictions  1-317
  Configuring Policy Map Class Actions  1-318
  Control Plane Policing and CPU Rate Limiting  1-322
  Summary  1-329

Lesson 9: Implementing EEM  1-331
  Overview  1-331
  Objectives  1-331
  EEM Overview  1-332
  Hardware and Software Requirements  1-333
  Application Event Detector  1-335
  CLI Event Detector  1-335
  Counter Event Detector  1-335
  GOLD Event Detector  1-336
  Interface Counter Event Detector  1-336
  SYS Manager Event Detector  1-336
  SYS Monitor Event Detector  1-336
  Syslog Event Detector  1-337
  Timer Event Detector  1-337
  Cisco IOS Software Watchdog Event Detector  1-337
  None Event Detector  1-338
  OIR Event Detector  1-338
  Redundancy Framework Event Detector  1-338
  SNMP Event Detector  1-338
  EEM Policy Actions  1-339
  Applet  1-340
  Tcl Script  1-340
  Policy Director  1-340
  Cisco IOS Software Patching  1-346
  Faulty Process  1-346
  Configuring EEM  1-350
  Configuring EEM Applet  1-350
  Environment Variables  1-351
  Event Register Keyword  1-355
  Importing Namespaces  1-356
  Tcl Script  1-356
  Tcl Script Elements  1-356
  Summary  1-360

Lesson 10: Utilizing Automated Diagnostics  1-361
  Overview  1-361
  Objectives  1-361
  Automated Diagnostics Overview  1-362
  Bootup Diagnostics  1-366
  Health Monitoring Diagnostics  1-366
  On-Demand Diagnostics  1-367
  Scheduled Diagnostics  1-367
  GOLD Test Suite  1-367
  Smart Call Home and Cisco TAC  1-380
  Using Diagnostics for Troubleshooting  1-382
  Summary  1-401

Lesson 11: Implementing SPAN, RSPAN, and ERSPAN  1-403
  Overview  1-403
  Objectives  1-403
  SPAN Overview  1-404
  Configuring SPAN  1-409
  RSPAN Overview  1-414
  Configuring RSPAN  1-417
  ERSPAN Overview  1-422
  Configuring ERSPAN  1-426
  Summary  1-431

Course Introduction

Overview
Implementing Cisco Data Center Network Infrastructure 1 (DCNI-1) v2.0 is a five-day course that offers data center-oriented content primarily focused on the Cisco Catalyst 6500 Series Switches, the Cisco Catalyst 4900 Series top-of-rack switches, and, to a lesser degree, blade switches. This is one of six courses and exams that support the Advanced Data Center Networking Infrastructure partner specialization.

Learner Skills and Knowledge
This subtopic lists the skills and knowledge that learners must possess to benefit fully from the course. The subtopic also includes recommended Cisco learning offerings that learners should first complete to benefit fully from this course.

Learner Skills and Knowledge
- Cisco CCNP, Cisco CCIE Routing & Switching, or CCIE Service Provider certification, or equivalent experience
The following Cisco learning offerings are recommended so that the learner can benefit fully from this course:
- Building Scalable Cisco Internetworks
- Building Cisco Multilayer Switched Networks
- Optimizing Converged Cisco Networks
- Implementing Secure Converged Wide Area Networks

Course Goal and Objectives
This topic describes the course goal and objectives.
Course Goal
"To enable customers to build scalable, reliable, and intelligent data center networks using Cisco data center switching products" (Implementing Cisco Data Center Network Infrastructure 1)

Upon completing this course, you will be able to meet these objectives:
- Describe the Cisco Catalyst 6500 and 4900 Series Switches, as well as blade switch family hardware options, data center architecture, and Cisco Catalyst 6500 Series Switch advanced configuration, management, and maintenance features
- Identify and implement traffic flows, including FWSM routed and transparent modes, supervisor-based packet acceleration, and FWSM-PISA integration
- Describe NAM hardware, implement initial configuration, and use the NAM to monitor network traffic
- Describe and deploy high-availability features in the data center

Course Flow
This topic presents the suggested flow of the course materials.

Figure: course flow table distributing the module lessons and labs (for example, Module 1 Lessons 1-6 and 1-7, Lessons 1-10 and 1-11 with their labs, and Lesson 3-3 with Labs 3-1 and 3-2) across the five course days.

The schedule reflects the recommended structure for this course. This structure allows enough time for the instructor to present the course information and for you to work through the lab activities. The exact timing of the subject materials and labs depends on the pace of your specific class.

Additional References
This topic presents the Cisco icons and symbols that are used in this course, as well as information on where to find additional technical references.

Cisco Icons and Symbols
Figure: the Cisco icons and symbols used in this course.

Cisco Glossary of Terms
For additional information on Cisco terminology, refer to the Cisco Internetworking Terms and Acronyms glossary of terms at http://www.cisco.com/en/US/docs/internetworking/terms_acronyms/ita.html.

Your Training Curriculum
This topic presents the training curriculum for this course.

Cisco Advanced Data Center Network Infrastructure Specialization Certification Path
Enhance Your Cisco Certifications and Validate Your Areas of Expertise
Figure: certification path through recommended training for Cisco AM and Cisco SE roles.
http://www.cisco.com/go/certifications

Cisco Advanced Data Center Network Infrastructure Specialization Certification Path (Cont.)
Enhance Your Cisco Certifications and Validate Your Areas of Expertise
Figure: recommended training through Cisco Learning Partners for the Cisco FE (Infrastructure), Cisco Data Center Applications Networking Design Specialist, and Cisco Data Center Applications Networking Support Specialist certifications.
http://www.cisco.com/go/certifications

Cisco Qualified Specialist certifications demonstrate significant competency in specific technology areas, solutions, or job roles. Individuals who have earned an associate-level career certification or higher are eligible to become qualified in these focused areas. With one or more specialist certifications, network professionals can better align their core expertise with current industry needs. For more information on the Cisco Qualified Specialist certifications, visit http://www.cisco.com/go/certifications.
Module 1
Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches

Overview
This module identifies the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and blade switch families and their hardware options, describes data center architecture, and covers Cisco Catalyst 6500 Series Switch advanced configuration, management, and maintenance features.

Module Objectives
Upon completing this module, you will be able to describe the Cisco Catalyst 6500 and 4900 Series Switches, as well as blade switch family hardware options, data center architecture, and Cisco Catalyst 6500 Series Switch advanced configuration, management, and maintenance features. This ability includes being able to meet these objectives:
- Describe data center evolution, understand the ECNM, SONA, and DCNA, and identify the data center switching platforms
- Describe the Cisco Catalyst 6500 and 4900 Series Switches, their architecture, and their position in the data center network
- Describe the Cisco Catalyst 6500 Series Switch supervisor modules, architecture, and operating system
- Identify and describe the Cisco Catalyst 6500 Series Switch line cards, their architecture and deployment considerations, service modules, and power supply options
- Describe and deploy the Cisco Catalyst 6500 VSS 1440
- Describe the Cisco Catalyst 6500 and 4900 Series Switch file system
- Describe the use of software modularity on the Cisco Catalyst 6500 Series Switch
- Describe how NetFlow and NDE work on the PFC3 and MSFC3
- Describe packet processing in hardware on the Cisco Catalyst 6500 Series Switch, and explain how it can perform QoS functions on packets in hardware and software
- Describe and configure EEM
- Describe the fault management tools that are available for the Cisco Catalyst 6500 Series Switch
- Describe how to use and configure SPAN, RSPAN, and ERSPAN sessions
- Identify the Cisco blade switch platforms, their key features, and their benefits

Lesson 1
Describing Data Center Architecture for Cisco Catalyst 6500 and 4900 Series Switches

Overview
Designing a highly available, high-performance, and scalable data center network must begin with a basic understanding of the business objectives that are driving the evolution of the data center. This lesson identifies data center trends and challenges and describes the evolution of data centers.

To deploy scalable, manageable, and services-oriented data centers, the Enterprise Composite Network Model (ECNM), with hierarchical design applied, should be followed. This lesson therefore describes how to deploy an efficient and expandable enterprise network using Cisco Catalyst 6500 Series Switches, Cisco Catalyst 4900 Series Switches, and the data center infrastructure module of the ECNM.

Objectives
Upon completing this lesson, you will be able to describe data center evolution, understand the ECNM, Cisco Service-Oriented Network Architecture (SONA), and Cisco Data Center Network Architecture (Cisco DCNA), and identify data center switching platforms.
This includes being able to meet these objectives:
- Describe data center evolution and current trends
- Describe the ECNM and Cisco SONA
- Describe and understand the Cisco DCNA
- Identify data center switching platforms

Data Center Evolution
This topic identifies a typical data center, current data center trends, how data centers have transformed, and the data center evolution phases.

Figure: a legacy data center compared with a current data center.

Data centers are about servers and applications. The first data centers were mostly mainframe, glass house, raised-floor structures that housed the computer resources as well as the intellectual capital (programmers and support staff) of the enterprise.

Over the past decade, most data centers have evolved on an ad hoc basis. The goal was to provide the most appropriate server, storage, and networking infrastructure to support specific applications. This led to data centers with stovepipe architectures or technology islands that were difficult to manage or adapt to changing environments.

There are many server platforms in modern data centers, all designed to deploy a series of applications. For example:
- IBM mainframe applications
- E-mail applications on Microsoft Windows servers
- Business applications on IBM AS/400 servers
- Enterprise resource planning (ERP) applications on UNIX servers
- R&D applications on Linux servers

In addition, there is a broad collection of storage silos to support these different server environments. These storage silos can be in the form of integrated, direct attached storage (DAS), network-attached storage (NAS), or small storage area network (SAN) islands.

This siloed approach has led to underutilized resources, difficulty in managing these disparate, complex environments, and difficulty in applying uniform services such as security and application optimization. It is also difficult to implement strong, consistent disaster recovery procedures and business continuance functions.

Figure: customers want to deploy data center-wide architectures.

Customers want to deploy data center-wide architectures. Depending on the IT team you are speaking with, you will gather different requirements. You have an opportunity to talk on all different levels because of our strategic position and the fact that you touch all of the different components in the data center. Likewise, you have the challenge of being able to speak the same language as all the different groups and stakeholders and to address their challenges.

Selling into the data center involves multiple stakeholders, who all have different agendas and priorities. The traditional network contacts might get you in, but they might not be able to impact the decisions that ultimately determine how the network evolves. The organization might be run in silos, where each has its own budget and power base. On the other hand, many next-generation solutions involve multiple groups. For example, a Cisco Application Control Engine (ACE) deployment might need to involve people from the network, application, server, and security teams. Your biggest challenge could be getting these teams to work together.
If you can accomplish that, you have added significant value to the business, and you are viewed as a trusted partner; but getting to that point requires careful planning and execution.

Network
Customers are realizing that they have trouble incorporating new, advanced technologies into their existing data center architectures that are somehow glued together. Customers are asking their services vendors to help them deploy best-practice architectures. Enterprises also realize that an agile infrastructure that easily incorporates new technologies can often serve as a competitive edge.

Data center requirements guide IT managers through complex considerations while expanding or planning the network:
- Application intelligence
- Integrated security
- Virtualization
- Nonstop systems
- Operational manageability
- Performance and density
- Infrastructure reduction
- Different fabrics (Ethernet, Fibre Channel, InfiniBand)

The data center should help reduce operational costs and improve IT productivity, thereby delivering operational excellence. It should also improve productivity and accelerate innovation.

Key Data Center Challenges
Figure: key data center challenges, including the green data center and the service-oriented data center.

The challenge for enterprise IT organizations today is to redesign their data centers to enhance agility, increase resilience, and manage costs.

Power and Cooling Issues
High-density rack power requirements have far exceeded existing power capacity. Data centers designed only a few years ago need many times the power they currently have to implement blade servers and other high-density technologies.

When determining power requirements for equipment in a data center, one variable that is often hard to predict is the level of usage. In a server environment, the harder the server must work, the greater the power draw from the AC supply and the heat output that needs to be dissipated. Power and cooling issues include:
- Circuit breaker overload: Typically, staff does not account for fluctuations in power and might inadvertently create this issue.
- Heat variations: Power is dissipated as heat; more power equals more heat. Local hot spots can occur.
- Loss of redundancy: Dual-homed server power supplies might cause a possible overload condition if more than half of the load is carried by the main power supply when a power failure occurs.
- Overheating is an equipment issue with high-density computing (blade servers):
  - More heat overall
  - Hot spots
  - High heat and humidity threaten equipment life spans.
  - Computing power and memory requirements demand more power and generate more heat.
This requires a new services-oriented approach to applications and infrastructure, where application and infrastructure resources are logically partitioned into services that can be easily allocated. One example of this is the use of virtual machines (VMs) that can be loaded onto any server on demand, as opposed to applications that permanently reside on a specific server. Ultimately, the goal is to logically partition computing, network, and storage resource: services that can be dynamically provisioned on an on-demand basis. Resilience The two key aspects to achieving resiliency are security and disaster recovery. A strong business continuity strategy needs to account for both aspects. Like application and infrastructure resources, security services must be dynamically provisionable. Storage resources must be tiered according to the level of service they provide, and applications must be provisioned with the appropriate storage and intersite transports according to the service level agreement (SLA) for each application. Cost-Effectiveness A dynamically provisionable applications infrastructure must also be designed to reduce operational costs. Pooling resources helps to increase overall resource utilization and leads to more standardized operating environments, But the evolution to this next-generation data center architecture also requires the introduction of new technologies, which requires new management practices and skill sets. ‘© 2008 Cisco Systems, Inc. Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-9 Green Data Center Initiatives Fifty percent of modem data centers will have insufficient power and cooling capacity to meet the demands of high-density equipment in the near future. Through 2009, energy costs will ‘emerge as the second-highest operating cost (behind labor) in 70 percent of data center facilities. Power demand for high-density equipment will level off or decline. In-rack and in- row cooling will be the predominant cooling strategies for high-density equipment. In-chassis cooling technologies will be adopted for 15 percent of the servers. 1-10 Implementing Cisco Data Center Network infrastructure 1 (OCNI-) v2.0, (© 2008 Cisco Systems, Inc. tee en cat To following drivers are addressing the challenges and considerations of data center design: = Consolidating the computing resources = Virtualizing services, network resources, and computing resources = Automating provisioning process Green architecture with power efficiency, management, facility needs, and cooling Today's data centers are growing and thus consume vast amounts of energy and need a lot of cooling, since numerous servers and other devices are present. More energy, more devices, and more complexity produces a larger carbon footprint which contradicts with the green architecture guidelines and also government directives. Thus, the challenge to employ green data center architecture is one of the key considerations. Virtualization solutions slow the growth of power demand through increased utilization and reduce component count. (© 2008 Cisco Systems, Inc. Implementing the Cisco Catalyst 8500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-11 Sonics web 207 rae) Bete Data centers have historically evolved from centralized to decentralized and back to centralized design. Centralized Design Early data centers encompassed monolithic infrastructure and proprietary platforms. 
The applications were tightly coupled with hardware, with direct attached storage typically being used. Decentralized Design Expenses and computing needs drove data center design to distributed infrastructure with server proliferation and web-facing applications. Recentralized Design With the advent of consolidation the infrastructure, along with computing resources, is virtualized, Resources are put into pools and are being standardized to help deploy services oriented applications, thus releasing coupling with the hardware 4-12 Implementing Cisco Data Center Network Infrastructure 1 (OCNI-1) v2.0 ‘© 2008 Cisco Systems, Inc. The Enterprise Composite Network Model This topic explains the multilayer network design, the ECNM, and how it is used to divide enterprise network into physical, logical, and functional areas of service. ‘The ECNM divides an enterprise-wide network design into functional areas. Each functional area consists of several modules, each of which is addressed separately from a design perspective. The figure shows an overview of the modules contained in a corporate network. Enterprise Campus The enterprise campus functional area contains all the network resources within a single campus that might span multiple buildings. Multiple-building access and building distribution modules are designed much as they were using the Hierarchical Design Model (HDM), one for cach building, ‘The campus backbone performs the same functions as the core layer in the HDM. The server farm module contains all the campus servers and the network equipment used to attach the servers to the campus backbone. An additional network management module provides both in- band and out-of-band management of all network devices within the campus or enterprise. ‘The edge distribution module connects the campus network to larger networks that connect to the rest of the enterprise network at other campuses, or to external users and networks. ‘The Data Center module encompasses a high-density server environment with services oriented intelligent network. (©2008 Cisco Systems, Inc. Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-13 Enterprise Edge The enterprise edge functional area contains the following modules, which connect the campus to resources off campus: = E-commerce: The e-commerce module contains the systems that provide e-commerce access to enterprise clients via the Internet. = Internet connectivity: The Intemet connectivity module provides Internet access for systems on the campus and also handles incoming connections from the Internet that are not e-commerce related. = Remote access/virtual private network (VPN): The remote access/VPN module provides attachment to the external phone network and the termination point for VPN connections coming in from dial-up or Internet users. The remote access/VPN module provides access to the internal enterprise network for authorized users. = WAN: The WAN module connects the campus network to private WAN resources, typically used to implement a private enterprise long-haul network between campuses. The WAN module is designed with the assumption that the enterprise owns or controls the WAN to which it attaches, Service Provider Edge The service provider edge functional area consists of modules that are not owned or managed by the enterprise. Instead, the modules of the service provider edge represent attachment to communications services purchased from outside telecommunications vendors. 
The modules in this area are: Internet service provider (ISP): ISP A and ISP B are redundant connections to the public Internet that are purchased through different organizations. = Public switched telephone network (PSTN): The PTSN module provides transit for dialup connections, = Frame Relay/ATM/PPP: This service provider module provides communications ci that are dedicated to the enterprise network and are used to connect campuses together. cuits 114 Implementing Cisco Data Center Network Infrastructure 1 (OCNI-1) v2.0 (© 2008 Cisco Systems, Inc. Hierarchical Design Model = Published by Cisco in the 1990s * Layered architectural model * Goals: ~ Scalable ~ Consistent design ~ Predictable performance ‘The Cisco HDM was first published in the 1990s, The HDM was created to facilitate network designs that were scalable and provided consistent design and consistent performance. ‘The HDM divides the network design into three layers. The functionality of each layer is defined to give the network designer guidance for the features to be supported at each layer, which in tum guides the device-selection process. Access Layer The first layer of the HDM is the access layer, also often referred to as the network edge. The access layer is the point at which devices that use the network for data transfer connect to the network. Different types of ports are required at the access layer, depending on the type of device that is to be attached to the network. WANs have ports for communication circuits, such as TI and T3 lines as the access ports, while LANs have copper or fiber Ethernet ports. Aggregation Layer The second layer of the HDM is the aggregation layer, also called the distribution layer. Aggregation layer devices collect the traffic from the access layer devices. The aggregation layer is usually the layer in which network usage and performance management policies are controlled through the use of policy-driven features such as access lists (ACLs) and quality of service (QoS) controls. Core Layer ‘The third layer of the HDM is the core layer. The core is a high-speed switching backbone designed with reliability, redundancy, and the ability to switch packets as fast as possible. Early design guidelines for the core layer emphasized the ability of the core to switch packets fast and cautioned against doing any packet manipulation or filtering. Modern switch devices, such as the Cisco Catalyst 6500 Series Switch, are capable of making routing and switching decisions at wire speed, relaxing some of the limitations on functions applied at the core layer while maintaining network performance, © 2008 Cisco Systems, Inc. Implementing he Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-15, ECNM with Hierarchical Design The figure shows a view that maps some of the ECNM modules onto the core-aggregation- access model. 1-16 Implementing Cisco Data Center Network Infrastructure 1 (OCNI-1) v2.0 (© 2008 Cisco Systems, nc. as SOA Overview Ina service-oriented architecture (SOA), the application is divided into discrete components, or services. These services interrelate and communicate through well-defined interfaces and contracts. The interfaces are defined in a neutral manner that is independent of the hardware platform, the operating system, and the programming language that implement the service. This allows services, built on a variety of such systems, to interact with one another in a uniform and universal manner. 
SOA Impacts ‘SOA has the potential to dramatically change the way that data center networks are designed: = SOAs can result in higher server-to-server traffic flows. This has implications for both ‘access switch performance requirements and access-to-aggregation oversubscription design. © The application no longer resides on one server. Applications can be relocated between servers by migrating virtual machines. Applications can be split across many different servers or server farms, or even across multiple sites. Service-oriented applications require a SONA. SONA Overview Cisco SONA jis an architectural framework that specifies the set of common services that are being deployed in the network to facilitate deployment of SOAs. SONA complements SOA by virtualizing, or abstracting, infrastructure components, enabling a services-oriented infrastructure, {© 2008 Cisco Systems, Inc, Implementing he Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-17 Applications " The Application layer can be divided into two categories. The first, collaborative applications, are applications that enable communication and collaboration, and include applications such as bi Unified messaging, rich media, and telepresence, and contact center applications. ‘The other components of the Application layer are business applications such as product ' lifecycle management (PLM), customer relationship management (CRM), enterprise resource planning (ERP), human capital management (HCM), procurement applications, and supply chain management (SCM). Through all of the services in the interactive services layer, the ' network is now playing a direct and critical role in enabling these applications (and their resources) and thus business processes overall. ‘ Many of these applications interact through middleware and application platforms such as message queuing and load balancing services, “ Abstraction In the SONA framework, the interactive services layer performs the function of abstracting - logical services from physical infrastructure. There are three categories of services: | Infrastructure services include security, mobility, storage, voice and collaboration, ~ compute, identity and network infrastructure virtualization services. These services enable ‘you to optimize the effectiveness of your infrastructure and facilitate the allocation of the right resources to the right business processes and applications. A common technology - employed in many of these services is virtualization. Virtualization has two axes: the ability to make many resources look like one (or one to look like many), and the ability to deal with resources on a logical, as opposed to physical, basis. Historically, the network has ‘ been a challenge for virtualization, and is now being extended from network resources to other IT resources such as servers and storage. |= Application services are the upward facing services that enable application integration, delivery, scaling, and optimization through network-based services, This category has two ‘major components: Cisco Application-Oriented Networking (AON) and application S delivery. Cisco AON enables the network to speak the language of applications; that is, messages such as a purchase order. This enables the network to intelligently act to route, transform, log, notify, or validate business-level objects. 
  Because most applications were not designed with network optimization in mind, adding application delivery services in the horizontal network framework enables end-to-end delivery, scale, and optimization of application data and control information across the enterprise and between users, suppliers, and partners.
- Adaptive management services are composed of three components: infrastructure management (automated management of collections of devices), services management (management of the interactive services), and advanced analytics and decision support. These management services are implemented through application programming interfaces (APIs) to other parts of the infrastructure to enable the network to share policy and control information across all of the layers of the IT infrastructure.

Infrastructure
The networked infrastructure layer represents the capital infrastructure of the IT environment, including the routing and switching infrastructure, storage, servers, and devices. Many of the services in the interactive services layer are actually hosted on these devices, either in software or on blades.

The Cisco Data Center Network Architecture
This topic identifies and explains the Cisco DCNA, its layers and functions, as well as the data center requirements.

Figure: the Cisco Data Center Network Architecture framework.

Within Cisco SONA, distributed applications and services are centrally managed over a common, unified platform. This integrated environment increases both efficiency and use of network assets, lowering capital and management costs. Creating a feature-rich, integrated foundation increases the availability of applications and services that benefit the network itself (such as integrated security or identity services). In addition, a unified system enables networked applications and services to be readily available to all corporate locations with greater speed and service quality than if delivered over a non-integrated infrastructure.

Improved performance, efficiency, and availability of networked applications and services are achieved through the use of intelligent networking. Intelligence results from the integration of devices, applications, and services that have been designed to work together as a cohesive system, resulting in an infrastructure that is dynamic, application- and service-aware, and capable of taking a more active role in optimizing communications.

The business benefits of Cisco SONA can provide value across the entire enterprise organization.

Increase Productivity and Efficiency While Reducing Costs
Virtualization technologies such as virtual firewalls, Cisco InfiniBand switching, and VLAN segmentation improve resource use and free other resources, protecting network investment.

Increase Resiliency and Business Agility
Integrated voice, video, and data services across a converged platform that can scale throughout the enterprise environment provide benefits such as higher network availability and improved employee productivity, as the network is able to respond to and recover more rapidly from any service disruptions or outages. By enabling tighter application integration, shared visibility and communication between business applications and network services allow more rapid response to ever-changing market demands.
Improve Customer Relationships
With speedier, more accurate, and more available access to corporate data, the ability to serve customers, partners, and suppliers improves. For example, with Cisco SONA, enterprises could integrate customer relationship and supply chain management applications with their IP-based call centers, giving all attendants concurrent access to real-time information. This would improve the calling experience for customers.

Increase Revenue and Maximize Business Opportunities
A centrally managed and unified architecture across a standardized platform allows for more informed business decisions and the ability to bring products to market faster.

Data Center Module Logical View
Figure: the Data Center module logical view, with web, application, and database servers behind the data center access, aggregation, and core layers, connecting to the enterprise core.

The Data Center module design comprises core, aggregation, and access layers:
- Core: Generally well suited for the Cisco Catalyst 6500 Series Switch, with scalable speed, interfaces, and control plane
- Aggregation: Has the same aspects as the core, with the differentiation that the Catalyst 6500 Series Switch offers a wide array of service modules
- Access: Extends from blade switches for blade server systems, through top-of-rack Catalyst 4900 Series Switches, to the highest-density and feature-rich Catalyst 6500 Series Switches

The Data Center module design also adds elements to the standard design to increase functionality. Added elements of the design include server load balancing (SLB) and Secure Sockets Layer (SSL) offload, which are used to enhance the performance, reliability, and availability of the web, application, and database servers.

Data Center Catalyst Switching Portfolio
The Cisco Catalyst switching portfolio for the data center includes the following models:
- Cisco Catalyst 6500 Series Switch: This switch is the industry-leading switch, designed for all layers of the data center architecture.
- Cisco Nexus 7000 Series Switch: This switch is designed for large data centers, and is particularly suitable for the core layer.
- Cisco Catalyst 4900 Series Switch: This switch provides wire-speed switching and services within the data center access layer.
- Blade server switches: These switches provide integrated switching services within the data center access layer.

Note: This course covers Cisco Catalyst 6500 Series Switches, Cisco Catalyst 4900 Series Switches, and Cisco blade server family switches. For more information on Cisco Nexus 7000 Series Switches, refer to the Implementing Cisco Data Center Network Infrastructure 2 (DCNI-2) course.

Cisco Catalyst 6500 Series and Cisco Nexus 7000 Series Switches
The Catalyst 6500 Series Switch and the Nexus 7000 Series Switch complement each other in the data center architecture. Both encompass numerous mechanisms and functionalities that support the Cisco Data Center 3.0 architectural design. The key functionalities are:
- Layer 2 scalability with switch virtualization and multichassis channeling capability
- Services integration with deep packet inspection
- Unified fabric with intermix of LANs and storage area networks (SANs)
- System scalability
- Operational continuity with a modular operating system
Figure: the Cisco Catalyst and Cisco Nexus switching families positioned from the enterprise access layer to the enterprise core.

Cisco Data Center 3.0 architecture transforms the data center into a virtualized environment that enables organizations to adopt new IT strategies and respond quickly to changing business needs. The Cisco Data Center 3.0 architecture adds capabilities to help customers architect next-generation data centers. The Cisco Catalyst Series and Cisco Nexus Series Switches offer more than 15 years of switch innovation and an architectural approach specifically designed to unify all components of the data center. At the centerpiece of the Cisco Data Center 3.0 architecture are:
- The Cisco Catalyst 6500 Series Switch family
- The Cisco Nexus 7000 Series Switch family

Summary
This topic summarizes the key points that were discussed in this lesson.
- Data centers have evolved from centralized to decentralized and are now transforming to centralized again.
- Current data center drivers are virtualization and green architecture.
- The ECNM aids in building scalable data centers.
- Challenges that data centers have to address are scalability, resilience, and manageability.

Lesson 2
Describing and Positioning the Cisco Catalyst 6500 and 4900 Series Switches

Overview
This lesson describes the key features of the one rack unit (1RU) fixed-configuration Cisco Catalyst 4900 Series Switches, of the 2RU half-card slot-based semimodular Cisco Catalyst 4900M Switch, and of the chassis, modules, and fan trays used to provision a Cisco Catalyst 6500 Series Switch.

Objectives
Upon completing this lesson, you will be able to describe the Catalyst 6500 Series and Catalyst 4900 Series Switches, as well as their architecture and position in a data center network. This includes being able to meet these objectives:
- Describe the Catalyst 4900 Series Switches
- Describe the hardware and software options for Catalyst 4900 Series Switches
- Describe the system architecture of Catalyst 4900 Series Switches
- Identify the characteristics of Catalyst 6500 Series Switches, the system architecture, and features
- Identify the position of Catalyst 4900 Series and Catalyst 6500 Series Switches in a data center network

Introduction to Cisco Catalyst 4900 Series Switches
This topic lists and explains the features and the hardware and software options for the Catalyst 4900 Series Switches. Catalyst 4900 Series Switches are designed and optimized for data centers and top-of-rack aggregation, offering industry-leading wire-speed performance, low-latency switching for Layers 2 through 4, innovative security features, and Gigabit Ethernet or 10-Gb Ethernet uplinks. The Catalyst 4900 Series Switch is designed to deliver the highest reliability and serviceability in a 1RU or 2RU configuration.
The Catalyst 4900 Series Switch is designed to deliver the highest reliability and serviceability in a 1RU or 2RU configuration Cisco Catalyst 4900 Series Switches * High performance: Nonbiocking, wire-speed performance Low-atency Layer 2-4 switching = Reliable and secure: Based on proven Catalyst 4500 Series architecture Innovative security features Broadcast and multicast suppressionn hardware for al pots (Layer Dual, hot-swappable AC or DC power supplies Hot-swappable fan tray with redundant fans Catalyst 49000 111001000 Accass 101100008» 10 GE Acoss (SOE Upinne 10 OMe Ups ocx Ups The Catalyst 4900 Series Switches come in three types: = Fixed-size Cisco Catalyst 4948 switch with 44-port 10/100/1000, four 1-Gb Ethernet ports, and 96-Gb/s backplane = Fixed-size Cisco Catalyst 4948 10 Gigabit Ethernet switch with 48-port 10/10/1000, two 10-Gb Ethernet ports and 136-Gb/s backplane, = Semimodular Cisco Catalyst 4900M switch with up to 40-port 10/100/1000 and up to 24 10-Gb Ethernet ports and 320-Gb/s backplane Catalyst 4900 Series Switches are optimized for server rack deployment in data center networks and help case cabling overhead, thus providing better manageability. The key features of Catalyst 4900 Series Switches are: = Offering high performance with low-latency and wire-speed switching = Scale from regular 1-Gb/s server connecti 'y to 10-Gb/s server connectivity ‘= Redundant hot-swappable power supplies (AC or DC) = Hot-swappable fan trays Note None of the Cisco Catalyst 4900 Series Switches is stackable. 1-28 Implementing Cisco Data Center Network Infrastructure 1 (OCNI-1) v2.0 (© 2008 Cisco Systems, Inc. Exceptional Reliability With redundant hot-swappable internal AC or DC power supplies, nonstop operations are possible, with power supply design connected to different circuits. ‘The Catalyst 4900 Series Switch contains a console port and an Ethernet management port to improve disaster recovery. Even if all system images are corrupted, administrators can retrieve the image via the management port in seconds. Wire-Speed Performance Ihe Catalyst 4900 Series Switch offers wire-speed performance on all ports from 96 Gb/s to 320 Gb/s plus low latency for data intensive applications. ‘Switching performance is maintained regardless of the number of route entries or Layer 3 and 4 services enabled, with Cisco Express Forwarding allowing for increased scalability and performance. Layer 2 and Layer 3 Services Layer 2 and Layer 3 services include advanced routing protocols, equal cost routing, and multicast routing to maximize network resources, with Multi-VRF Customer Edge (VRF Lite) securing traffic. Comprehensive Management Comprchcnsive management is provided through Cisco Network Assistant , CiseoWorks, and embedded CiscoView. In addition, a dedicated console port and single 10/100 management port for offline disaster recovering are provided, with remote in-band management available through Simple Network Management Protocol (SNMP), Telnet client, Bootstrap Protocol (BOOTP), and TFTP. ‘©2008 Cisco Systems, Inc. Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-29 Cisco Catalyst 4900 Series Switch Security Features Comprehensive set of security features: = NAC prevents propagation of costly worms and viruses by isolating hosts that do not comply with security policies. = 802.1X and identify-based network services allow only authorized persons on the network. * Dynamic ARP inspection and IP Source Guard prevent against man-in-the-middle attacks. 
* DHCP snooping eliminates rogue DHCP servers. * Port security prevents MAC address flooding attacks. * CoPP mitigates DoS attacks. The Catalyst 4900 Series Switches offer a rich set of integrated security features to proactively lock down your critical network infrastructure. They reduce network security risks with a rich set of Network Admission Control (NAC) capabilities and IEEE 802.1X-based user authentication, authorization, and accounting (AAA). The security policy enforcement is uncompromised, with wire-rate, dedicated access control lists (ACLs) to fend off ever increasing virus and security attacks. The Catalyst 4900 Series Switches offer powerful, easy-to-use tools to effectively prevent untraceable man-in-the-middle attacks and control plane resource exhaustion, IP spoofing, and flooding attacks, without any change to the end user or host configurations. Secure remote access is accomplished with the Secure Shell version 1 (SSHv1) and Secure Shell version 2 (SSHv2) protocols. Secure file transfers are accomplished with the Secure Copy Protocol (SCP), and secure network management is accomplished with SNMPv3. DHCP snooping allows only trusted ports to relay DHCP messages, eliminating rogue DHCP servers. NAC prevents propagation of costly worms and viruses by isolating hosts that do not comply with security policies, while Dynamic ARP Inspection (DAl) and IP Source Guard prevent against man-in-the-middle attacks. The 802.1X and identity-based network services allow only authorized persons on the network, with port security preventing MAC address flooding attacks and Control Plane Policing (CoPP) ‘mitigating denial of service (DoS) attacks. 4-30 Implementing Cisco Data Center Network Infrastructure 1 (DCNI-1) v2.0 (© 2008 Cisco Systems, Inc. Cisco Catalyst 4900 Series Switch Hardware HA Options + Hot-swappable interface modules * Power supplies: 11 redundancy ~ Hot-swappable and field-replaceable ~ AC or DG ~ Ato-DC fallover—unique for fixed switches + Fan tray: — Redundant fans with variable speed — Hot-swappable and field replaceable a ‘The Cisco Catalyst 4900M Series Switch is powered by two 1000 W power supplies which are located at the rear of the chassis. These power supplies can operate in a redundant mode where each supply provides half the energy consumed by the switch. Should one power supply fail, the remaining one can support the entire system. Power supplies may be removed during switch ‘operation for replacement. The switch is cooled by side-to-side airflow. The fan tray provides five redundantly-operated, variable-speed fans. If one fan fails, the remaining fans increase speed to provide uninterrupted cooling. The fan tray may be removed from the rear of the switch during operation. This is normally done when a replacement fan tray must be installed to remedy a failed fan. ‘© 2008 Cisco Systems, ine. Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-31 Cisco Catalyst 4900 Series Switch Software HA Options = Failover is transparent to the end device: ~ Group of physical routers act as one router ~ Active and standby roles ~ Active is responsible for packet forwarding — Exchange of keepalive messages. * Layer 3 first-hop redundancy protocols: —HSRP. ~ VRRP. ~GLBP To provide transparent and fast first-hop failure recovery, Cisco Catalyst 4900 Series Switches offer the same set of first-hop resolution protocols as Cisco Catalyst 6500 Series Switches. 
FHRP Operation With first-hop redundancy protocol (FHRP), a set of routers works together to present the illusion of a single, virtual router to the hosts on the LAN. By sharing an IP address and a MAC (Layer 2) address, two or more routers can act as a single virtual router. The IP address of the virtual router will be configured as the default gateway for the workstations on a specific IP segment. When frames are to be sent from the workstation to the default gateway, the workstation will use Address Resolution Protocol (ARP) to resolve the MAC address associated with the IP address of the default gateway. The ARP resolution will return the MAC address of the virtual router. Frames sent to the MAC address of the virtual router can then be physically processed by any active or standby router that is part of that virtual router group. A protocol is used to identify two or more routers as the devices responsible for processing frames sent to the MAC or IP address of a single virtual router. Host devices send traffic to the address of the virtual router. The physical router that forwards this traffic is transparent to the end stations. The redundancy protocol provides the mechanism for determining which router should take the active role in forwarding traffic, and determining when that role must be taken over by a standby router. The transition from one forwarding router to another is transparent to the end devices. Hot Standby Router Protocol Hot Standby Router Protocol (HSRP) defines a standby group of routers, with one router as the active one. HSRP provides gateway redundancy by sharing IP and MAC addresses between redundant gateways. The protocol consists of virtual MAC and IP addresses that are shared between two routers that belong to the same HSRP group. 1-32 Implementing Cisco Data Center Network infrastructure 1 (OCNM4) v2.0 © 2008 Cisco Systems, Inc Virtual Router Redundancy Protocol Virtual Router Redundancy Protocol (VRRP) is a nonproprietary redundancy protocol described in RFC 3768 designed to increase the availability of the default gateway servicing hosts on the same subnet. This increased reliability is achieved by advertising a virtual router (an abstract representation of master and backup routers acting as a group) as a default gateway to the host(s) instead of one physical router. Two or more physical routers are then configured to stand for the virtual router, with only one doing the actual routing at any given time. If the current physical router that is routing the data on behalf of the virtual router fails, an arrangement is made for another physical router to automatically replace it. Gateway Load Balancing Protocol Gateway Load Balancing Protocol (GLBP) is a protocol developed by Cisco used to overcome the limitations of HSRP and VRRP by adding sharing functionality. In addition to being able to set priorities on different gateway routers, GLBP also allows a ing parameter to be set. Based on this weighting (compared to others in the same virtual router group), ARP requests will be answered with MAC addresses pointing to different routers. Thus, load balancing is not based on traffic Load, but rather on the number of hosts that will use each gateway router. GLBP assigns the primary router as the active virtual gateway (AVG) and the secondary router as the active virtual forwarder (AVF). There can be multiple AVFs. ‘© 2008 Cisco Systems, Inc. 
The Catalyst 4900M Series top-of-rack Ethernet switch is optimized for ultimate deployment flexibility. It can be deployed for 10/100/1000 server access with 1:1 uplink-to-downlink oversubscription, for a mix of 10/100/1000 and 10-Gb Ethernet servers, or for all-10-Gb Ethernet server environments. The Cisco Catalyst 4900M switch is a 320-Gb/s, 250-mpps, 2RU fixed-configuration switch with eight fixed wire-speed X2 ports on the base unit and two optional half-card slots for deployment flexibility and investment protection.

Half-Card Options

The Cisco Catalyst 4900M switch half-cards provide a wide variety of combinations of Gigabit Ethernet and 10-Gb Ethernet media types. The base unit can accommodate up to two of the following half-cards in any combination:

* 20-port wire-speed 10/100/1000 (RJ-45) half-card
* Four-port wire-speed 10-Gb Ethernet (X2) half-card
* Eight-port (2:1) 10-Gb Ethernet (X2) half-card (Cisco TwinGig Converter Module compatible)

Cisco Catalyst 4900M Switch Typical Configuration
* 12 10GE ports (X2): eight plus four 10GE wire-speed ports
* 16 10GE ports (X2): eight plus eight 10GE wire-speed ports
* 24 10GE ports (X2): eight 10GE wire-speed ports, plus 16 10GE 2:1 oversubscribed ports or 32 wire-speed GbE SFP ports
* 8 10GE ports (X2) + 40 10/100/1000 (RJ-45): eight 10GE wire-speed ports plus 40 10/100/1000 wire-speed ports
(GE = Gigabit Ethernet)

Typical Cisco Catalyst 4900M switch hardware configurations are:

* 12 x 10 Gigabit Ethernet ports (X2): One 4-port 10 Gigabit Ethernet half-slot card is used in addition to the eight onboard 10 Gigabit Ethernet ports. Thus, twelve 10 Gigabit Ethernet wire-speed ports are available.
* 16 x 10 Gigabit Ethernet ports (X2): Two 4-port 10 Gigabit Ethernet half-slot cards are used in addition to the eight onboard 10 Gigabit Ethernet ports. Thus, sixteen 10 Gigabit Ethernet wire-speed ports are available.
* 24 x 10 Gigabit Ethernet ports (X2): Two 8-port 10 Gigabit Ethernet half-slot cards are used in addition to the eight onboard 10 Gigabit Ethernet ports. With this setup, the Catalyst 4900M Series Switch could have:
  - Eight 10 Gigabit Ethernet wire-speed ports
  - Sixteen 10 Gigabit Ethernet 2:1 oversubscribed ports, or 32 Gigabit Ethernet SFP-based ports if Cisco TwinGig Converter Modules are used
* 8 x 10 Gigabit Ethernet ports (X2) + 40 10/100/1000 ports (RJ-45): Two 20-port 10/100/1000 half-slot cards are used in addition to the eight onboard 10 Gigabit Ethernet ports. With this setup, the Catalyst 4900M Series Switch has eight 10 Gigabit Ethernet wire-speed ports and 40 10/100/1000 copper-based ports.

Cisco TwinGig Converter Module
* Provides seats for two 1GE SFP slots in a single X2 10GE port
* Allows certain mixing of 1GE and 10GE fiber interfaces
* Eight-port half-card compatible only:
  - Four port groups
  - X2 and TwinGig mix between groups
The Cisco TwinGig Converter Module converts a single 10 Gigabit Ethernet X2 interface into two Gigabit Ethernet port slots that can be populated with small form-factor pluggable (SFP) optics, enabling customers to use Gigabit Ethernet SFPs on the same card in combination with 10 Gigabit Ethernet X2 optics.

Mixing TwinGig and X2 Modules

The eight-port half-card has four port groups. The following applies to the mixing of X2 and TwinGig modules:

* X2 pluggables and Cisco TwinGig Converter Modules may not be mixed within a group
* X2 pluggables and Cisco TwinGig Converter Modules may be mixed between the four groups

Examining Cisco Catalyst 4900 Series Switch Hardware
* Examine the hardware with the show module command

To verify the Catalyst 4900 Series Switch hardware, use the show module command. The output shows the Catalyst 4900 Series Switch chassis model and type, serial numbers, software version, and the Catalyst 4900M Series Switch half-cards used.

Verifying Cisco Catalyst 4900 Series Switch System Power
* Examine the system power status with the show power command

Power supplies are vital to the Catalyst 4900 Series Switch for providing high availability for server connectivity. The show power command gives details regarding the status of the power supplies, along with their type, the power used, and the available power.

Note: The "Power supplies needed by system" value reveals the power redundancy mode. A value of 1 indicates that redundant mode is used, and a value of 2 indicates that combined mode is configured.

Verifying Cisco Catalyst 4900 Series Switch Operation
* Examine the system operation with the show environment command

To check the temperature, LED status, environment-related alarms, or fan tray status, the administrator can use the show environment command.

Cisco IOS Software Options
* Single Cisco IOS image across all Catalyst 4900 (and 4500) Series Switches
* Cisco IOS image feature sets:
  - IP Base: RIP v1/v2, static routes, AppleTalk, IPX
  - Enterprise Services: OSPF, EIGRP, BGP, IS-IS
  - Crypto images providing SSHv1 and SSHv2
* Security features are included in IP Base and Enterprise Services images:
  - DHCP snooping
  - Dynamic ARP Inspection
  - IP source guard
* Hardware-based multicast

Cisco Catalyst 4900 Series Switches use the same Cisco IOS image (the same image is even used for Cisco Catalyst 4500 Series Switches). Two Cisco IOS Software configuration options are available:

* IP Base image: Standard Layer 3 image, including Routing Information Protocol version 1 (RIPv1), RIPv2, static routes, and Enhanced Interior Gateway Routing Protocol (EIGRP) stub
* Enterprise Services image: Enhanced Layer 3 image, including Open Shortest Path First (OSPF), EIGRP, and Border Gateway Protocol (BGP); also includes all features of the IP Base image

Other features available:

* Security features: DHCP snooping, DAI, IP source guard, and port security are included in both Cisco IOS Software options.
* Multicast: Protocol Independent Multicast (PIM) sparse mode, PIM dense mode, Multicast Source Discovery Protocol (MSDP), Multicast Border Gateway Protocol (MBGP), Internet Group Management Protocol version 3 (IGMPv3), Source Specific Multicast (SSM), and Distance Vector Multicast Routing Protocol (DVMRP); Pragmatic General Multicast (PGM) is hardware-based.
* Full bridging features: Spanning Tree Protocol (STP), Remote Switched Port Analyzer (RSPAN), Port Aggregation Protocol (PAgP), Link Aggregation Control Protocol (LACP), private VLANs, and so on, are supported.
* Full quality of service (QoS) support with four queues per port.

Comparing Cisco Catalyst 4900 Series Switches

(Table: comparison of the Catalyst 4948, Catalyst 4948-10GE, and Catalyst 4900M switches, listing maximum 10/100/1000 ports, maximum 10GE ports, and power draw. GE = Gigabit Ethernet.)

The table summarizes the information about the Cisco Catalyst 4948, Cisco Catalyst 4948-10GE, and Cisco Catalyst 4900M switches. The Catalyst 4900 Series Switches are built with server connectivity in mind:

* The switching capacity (from 96 to 320 Gb/s) and throughput (from 74 to 250 mpps) offer enough bandwidth for high-speed server connectivity.
* 10 Gigabit Ethernet port density reduces the port-to-uplink oversubscription bottlenecks on the Catalyst 4900M switch.
* The low latency of less than five microseconds adds to the overall performance of applications.

Cisco Catalyst 4900 Series Switch Architecture

This topic explains the system architecture of Cisco Catalyst 4900 Series Switches.

Cisco Catalyst 4948 and 4948-10GE Switch Architecture

Catalyst 4948 and 4948-10GE switches employ a centralized architecture. The processing is done on the packet header only to achieve wire rate. The key components that process the packets are the packet processing engine and the fast forwarding engine.

The packet processing engine performs the following functions:

* Layer 2, IPv4, IPv6, and Layer 4 packet header parsing
* Packet rewrite at Layer 2 and Layer 3 for IPv4 and IPv6, or type of service (ToS) rewrite for IP
* Broadcast and multicast suppression per port
* Generates the system clock
* Uses a shared packet buffer memory architecture with 16 MB of packet memory and jumbo frame support

The fast forwarding engine performs the following functions:

* Lookup, with performance of 250 mpps for Layer 2 and IPv4 and 125 mpps for IPv6 on the Cisco Catalyst 4900M switch
* Handles VLAN memory space: 4 KB external (available for customer use) and 4 KB internal (for internal mapping)
* 55 K MAC address table
* Hardware forwarding of IPv4, IPv6, and multicast
* Handles queue memory

Packet Flow Through Switch

The Cisco Catalyst 4948 and 4948-10GE switches perform the following actions on packets:

Step 1: Packets arrive from any Gigabit Ethernet or 10 Gigabit Ethernet port.
Step 2: Packets are buffered in the shared packet buffer memory.
Step 3: The packet header and flow label are sent to the fast forwarding engine through the Packet Lookup Descriptor (PLD).
Step 4: The fast forwarding engine performs Layer 2, 3, and 4 forwarding lookups with ternary content addressable memory (TCAM).
Step 5: The fast forwarding engine performs per-port, per-queue congestion control with dynamic buffer limiting by monitoring the amount of buffering per flow.
Step 6: The fast forwarding engine sends the Packet Transmit Descriptor (PTD) to the packet processing engine.
Step 7: The packet processing engine performs QoS scheduling by consulting transmit queue memory, rewrites the MAC headers, and transmits the packet to the egress port.

(Figure: Cisco Catalyst 4900M switch architecture, with management connections via console and Ethernet.)

The Cisco Catalyst 4900M switch also employs a centralized architecture. The processing is done on the packet header only to achieve wire rate. The key components that process the packets are the Intelligent Packet Processor (IPP) and the Very Fast Forwarding Engine (VFE).

Intelligent Packet Processor

The Intelligent Packet Processor is responsible for packet reception, storage, and transmission. The IPP connects the ports and half-slot cards to the ASIC set.

Very Fast Forwarding Engine

The VFE provides the switching logic and uses several TCAM memories to classify traffic, make switching decisions, and provide Layer 2 through Layer 4 services.

Packet Flow Through Switch

The Cisco Catalyst 4900M switch performs the following actions on packets:

Step 1: Packets arrive from any Gigabit Ethernet or 10 Gigabit Ethernet port.
Step 2: Packets are buffered in the shared packet buffer memory by the IPP, which generates a PLD.
Step 3: The PLD is handed to the VFE by the IPP, which then uses TCAMs (for QoS, security, and forwarding information lookup).
Step 4: The VFE performs Layer 2, 3, and 4 forwarding lookups with TCAMs and makes a forwarding decision.
Step 5: The VFE then sends the PTD to the IPP.
Step 6: The IPP performs QoS scheduling by consulting transmit queue memory, rewrites the MAC headers, and transmits the packet to the egress port.

Catalyst 4900 Series Switch QoS Processing

The figure shows the QoS ingress and egress processing on a Cisco Catalyst 4900 Series Switch.

Cisco Catalyst 4900 Series Switch QoS Overview

Sophisticated QoS and traffic management includes:

* Modular QoS Command-Line Interface (MQC)
* Per-port, per-VLAN QoS
* Dynamic transmission queue sizing
* Strict priority queuing
* IP differentiated services code point (DSCP)
* IEEE 802.1p class of service
* Flexible classification and marking
* Classification and marking based on full Layer 3 and 4 headers
* Input and output policing based on Layer 3 and 4 headers
* Support for 16,000 policers with flexible assignment for input and output
* Two-rate, three-color policing
* Output queue management with shaping and sharing
* Dynamic buffer limiting: a congestion-avoidance feature

Note: Marking is done sequentially, meaning that egress policing is based on ingress marking.

QoS Processing

When comparing MQC on routers and on Cisco Catalyst 4900 Series Switches, a few differences exist. More noticeable are the differences between the Cisco Catalyst 4948, 4948-10G, and 4900M switches, which involve:

* Trust
* Internal DSCP
* Table maps
* Sequential versus parallel classification
* Priority queue placement

The Cisco Catalyst 4948 switch relies on trust for traffic classification, which does not align with the MQC CLI construct, since MQC provides table maps instead. The Cisco Catalyst 4900M switch (in contrast to the Cisco Catalyst 4948 and 4948-10G switches) does not rely on the internal DSCP value but rather on explicit matching via class maps for proper packet-to-queue placement.
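As a minimal sketch of this explicit class-map matching (class, policy, and interface names are hypothetical, and exact MQC keywords vary by platform and software release), DSCP EF traffic could be steered into a strict-priority queue at egress:

    class-map match-all VOICE
     match dscp ef
    !
    policy-map SERVER-EGRESS
     class VOICE
      priority
     class class-default
      bandwidth remaining percent 80
    !
    interface TenGigabitEthernet1/1
     service-policy output SERVER-EGRESS

Note that, as described next, the strict-priority queue on the Catalyst 4900M exists only when it is configured this way.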
There is no special priority queue on a Cisco Catalyst 4900M switch; using MQC, one is configured on an as-needed basis. The Cisco Catalyst 4900M switch provides sequential rather than parallel classification. This allows the network administrator to classify traffic at egress based on the ingress markings.

Catalyst 4900 Series Switch Multicast Support
* Wire-rate multicast Layer 2 and Layer 3 switching in hardware
* Supported functionality:
  - IGMP snooping; IGMP v1, v2, and v3 compliant
  - PIM, SSM, and DVMRP
  - PGM

The Cisco Catalyst 4900 Series Switch supports a wide variety of multicast functionality:

* Layer 2 multicast mechanisms:
  - Internet Group Management Protocol (IGMP) snooping
  - IGMP v1, v2, and v3
  - Cisco Group Management Protocol
* Layer 3 multicast protocols:
  - Protocol Independent Multicast (PIM)
  - Source Specific Multicast (SSM)
  - Distance Vector Multicast Routing Protocol (DVMRP)
  - Pragmatic General Multicast (PGM)

Multicast is processed in hardware and thus offers wire-speed performance. As with unicast traffic, the packet processing engine and fast forwarding engine process the traffic on Catalyst 4948 and 4948-10G switches, and the IPP and VFE process multicast traffic on Catalyst 4900M switches. A key to wire-speed multicast performance is handling only the IP header, rather than replicating the entire packet between internal components of the Catalyst 4900 Series Switch.

Introduction to Cisco Catalyst 6500 Series Switches

This topic identifies the characteristics, features, and system architecture of Cisco Catalyst 6500 Series Switches.

Cisco Catalyst 6500 Series Switch System Architecture

Modular architecture:
* Separately configurable
* Interoperable
* Interchangeable
* Hot-swappable
* Upgradable
* Backward-compatible

Modular Architecture

The Catalyst 6500 Series Switch is a modular system that can grow according to customer requirements by expanding with technological evolution. The ability to grow allows Cisco customers to upgrade and reconfigure systems by adding new modules, replacing existing modules, and adding and redeploying systems. Throughout the Catalyst 6500 Series Switch product line, modules are:

* Separately configurable: Simplifies the addition of new services
* Interoperable in the same chassis: Provides flexible design options
* Interchangeable among Cisco Catalyst 6500 Series Switches: Simplifies sparing and network expansion
* Hot-swappable without requiring a chassis to be powered off: Provides fast upgrade and repair
* Upgradable as newer modules are released: Provides investment protection

Note: The expected lifecycle of Catalyst 6500 Series Switches is an additional 10 years or more (as of 2008).

Features and Benefits

Features and benefits of Catalyst 6500 Series Switches include:

* Maximum network uptime: Cisco IOS Software Modularity, together with platform, power supply, supervisor engine, switch fabric, and integrated network services redundancy, provides one- to three-second stateful failover and delivers application and services continuity in a converged network, minimizing disruption of mission-critical data and services.
* Comprehensive network security: Integrates proven, multi-gigabit Cisco security solutions, including intrusion detection, firewall, VPN, and Secure Sockets Layer (SSL), into existing networks.
* Scalable performance: Provides up to 450 mpps of performance with a distributed forwarding architecture.
* Forward-thinking architecture with investment protection: Supports three generations of interchangeable, hot-swappable modules in the same chassis, optimizing IT infrastructure usage, maximizing return on investment (ROI), and reducing total cost of ownership (TCO).
* Operational consistency: Features 3-, 4-, 6-, 9-, and 13-slot chassis configurations sharing a common set of modules, Cisco IOS software, and network management tools that can be deployed anywhere in the network.

Note: Catalyst 6500 Series Switches also support the Cisco Catalyst Operating System, which has an end-of-life (EOL) status.

Catalyst 6500 Series Switch E Chassis
* Cisco Catalyst 6503-E, 6504-E, 6506-E, 6509-E, and 6513 switches:
  - Horizontally aligned slots
  - Side-to-side airflow
* Cisco Catalyst 6509-NEB-A and 6509-V-E switches:
  - Vertically aligned slots
  - Front-to-back airflow
  - Redundant fan trays

Chassis Overview

This switch is an industry-leading switch, designed for all layers of the data center architecture. All chassis except the Cisco Catalyst 6509-V-E switch support all existing supervisors, line cards, switch fabrics, and software releases, and are backward-compatible with all existing software versions.

Note: The Catalyst 6509-V-E switch chassis does not support legacy supervisor modules like the Cisco Catalyst 6500 Series Supervisor Engine 1A and the Cisco Catalyst 6500 Series Supervisor Engine 2. It requires Cisco IOS Software version 12.2(18)SXF or newer and is not supported in the Cisco Catalyst Operating System.

Catalyst 6500 Series Switches are offered with either horizontally or vertically aligned slots. The following types of enhanced chassis are offered for Catalyst 6500 Series Switches:

* Cisco Catalyst 6500 Series Switches with horizontally aligned slots:
  - Cisco Catalyst 6503-E switch with a three-slot chassis
  - Cisco Catalyst 6504-E switch with a four-slot chassis
  - Cisco Catalyst 6506-E switch with a six-slot chassis
  - Cisco Catalyst 6509-E switch with a nine-slot chassis
  - Cisco Catalyst 6513 switch with a 13-slot chassis
* Cisco Catalyst 6500 Series Switches with vertically aligned slots:
  - Cisco Catalyst 6509-NEB-A switch with a nine-slot chassis
  - Cisco Catalyst 6509-V-E switch with a nine-slot chassis, which aids hot-aisle/cold-aisle designs in modern data centers with front-to-back airflow

Note: The Cisco Catalyst 6513 and 6509-NEB-A switches are not enhanced series chassis, but are still available.

Note: Network Equipment Building Systems (NEBS) criteria are a set of requirements and objectives for personnel safety, property protection, and operational continuity. NEBS documents describe both physical and electrical requirements. NEBS compliance is a critical issue to telephone companies when these companies evaluate the suitability of products for use in their networks. All of the Catalyst 6500 Series Switch chassis are NEBS-compliant. The Catalyst 6500 Series Switch chassis that are identified as NEBS have airflow specifically designed for service provider installations.

Note: E Series fans cannot be used in non-E Series chassis, and non-E Series fans cannot be used in E Series chassis.

The six- and nine-slot chassis (part numbers 6506-E, 6509-E, and 6509-V-E) are designed to scale power supply configurations beyond 4000 W and thus offer enhancements that increase the overall system power capacity for industry-leading Power over Ethernet (PoE) port-density scalability.

Cisco Catalyst 6503-E Switch

The three-slot Catalyst 6503-E switch chassis offers a compact 4RU height that is ideally suited for multi-gigabit-per-second, secure data centers, remote access, e-commerce, and converged network solutions. The Catalyst 6503-E switch chassis supports up to 96 10/100/1000 Ethernet ports and is well-suited for small wiring closets. The Catalyst 6503-E switch chassis is also compatible with all Catalyst 6500 Series Switch supervisor engines and can be configured with redundant power supplies as well as redundant supervisor engines.

Cisco Catalyst 6504-E Switch

The four-slot Catalyst 6504-E switch chassis delivers performance in a compact 5RU form factor. This chassis can be configured in two ways: with a single supervisor engine and up to three line cards, or with dual supervisor engines and up to two line cards. The Catalyst 6504-E switch also supports redundant AC or DC power supplies. The interface density and breadth of the Catalyst 6504-E switch make it ideal for deployment in high-performance applications such as the following:

* Enterprise access layer
* Small or medium enterprise core and distribution layers
* Metro Ethernet edge aggregation
* Enterprise WAN edge

Cisco Catalyst 6506-E Switch

This chassis is ideal for many wiring closet and core network deployments. The six-slot Catalyst 6506-E switch chassis provides intermediate port densities and supports a range of power supply options, including redundant power supplies ranging from 1000 W (AC or DC) to 8700 W AC. When equipped with a single Catalyst 6500 Series Switch supervisor engine, five payload slots are available to support a wide range of interface or service modules. The Cisco Catalyst 6506-E switch also features field-upgradable fan trays for easy service, as well as redundant supervisors for high availability.

Cisco Catalyst 6509-E Switch

The nine-slot Catalyst 6509-E switch chassis offers scalable port densities, which is ideal for many wiring closet, core, and data center deployments. When equipped with a single Catalyst 6500 Series Switch supervisor engine, eight payload slots are available to support a wide range of interface or service modules. The Catalyst 6509-E switch offers a range of power supply options, including redundant power supplies ranging from 1000 W (AC or DC) to 8700 W AC. The Catalyst 6509-E switch also features field-upgradable fan trays for easy service, as well as redundant supervisors for high availability.

Cisco Catalyst 6509-V-E Switch

The nine-slot Catalyst 6509-V-E switch chassis is similar in functionality to the Cisco Catalyst 6509-E switch chassis.
The key differences are:

* Vertical slot positioning
* Increased system capacity with investment protection
* Data center-optimized airflow
* Integrated cable management option
* Front-to-back airflow, which supports data centers with hot-aisle/cold-aisle designs

Catalyst 6500 Series Switch EOS Chassis
* Supports all existing supervisors, line cards, switch fabrics, and software releases
* Backward-compatible with all existing software versions
* Three-slot chassis does not support CEF720 line cards
* Six- and nine-slot chassis support 8700 W power supplies

The legacy Cisco Catalyst 6500 Series Switch chassis have reached end-of-sale (EOS) and were offered with either horizontally or vertically aligned slots. The following types of Cisco Catalyst 6500 Series Switch chassis were offered:

* Cisco Catalyst 6500 Series Switches with horizontally aligned slots:
  - Cisco Catalyst 6503 switch with a three-slot chassis
  - Cisco Catalyst 6506 switch with a six-slot chassis
  - Cisco Catalyst 6509 switch with a nine-slot chassis
* Cisco Catalyst 6500 Series Switches with vertically aligned slots:
  - Cisco Catalyst 6509-NEB switch with a nine-slot chassis

Catalyst 6500 Series Switch Fan Trays

(Figure: standard and high-speed fan tray options per chassis, including the FAN-MOD-3HS, FAN-MOD-4HS, WS-C6503-E-FAN, WS-C6506-E-FAN, WS-C6509-E-FAN, WS-C6K-6SLOT-FAN2, WS-C6K-9SLOT-FAN2, and WS-C6500-NEB-FAN2 fan trays.)

The Cisco Catalyst 6500 Series Switch supports two generations of fans. A new set of high-speed fans was introduced with the announcement of the Cisco Catalyst 6500 Series Supervisor Engine 720. New high-speed fans have been designed for each Catalyst 6500 Series Switch chassis. The primary purpose of these fans is to provide additional cooling for a new generation of line cards that draw more power and generate more heat.

If a Catalyst 6500 Series Supervisor Engine 720, 720-3B, 720-3BXL, 720-10G-3C, 720-10G-3CXL, 32, or 32-PISA is installed in a non-E Series chassis, the new high-speed FAN2 assemblies must be used.

The FAN2 assemblies can be used with previous generations of the Catalyst 6500 Series Supervisor Engines (1, 1A, and 2). However, the original fan assemblies cannot be used with any version of the Catalyst 6500 Series Supervisor Engine 720 or 32.

The Catalyst 6500-E Series Switch chassis require their own high-speed fan tray, which is different from the high-speed fan trays for the non-E Series chassis. The E Series fan trays cannot be used in non-E Series chassis, and the non-E Series fan trays cannot be used in the E Series chassis.

Catalyst 6500 Series Switch Module Types

Catalyst 6500 Series Switch module types include:
* Supervisor engines
* Ethernet line cards
* Services modules
* WAN line cards

In the Cisco Catalyst 6500 Series Switch architecture, special-purpose modules perform separate tasks. Separating tasks into discrete modules allows the feature set to evolve quickly, and customers can add features and enhance performance by adding new modules.
The Cisco Catalyst 6500 Series Switch features the following types of special-purpose modules:

Supervisor Engines

Four supervisor engine types are currently available for the Cisco Catalyst 6500 Series Switch:

* Catalyst 6500 Series Supervisor Engine 32
* Catalyst 6500 Series Supervisor Engine 32-PISA
* Catalyst 6500 Series Supervisor Engine 720
* Catalyst 6500 Series Supervisor Engine 720-10G

A supervisor engine performs central control operations on processors that run Cisco IOS Software or Cisco Catalyst Operating System software. The following key components comprise a supervisor engine:

* The Policy Feature Card 3 (PFC3) is a supervisor engine daughter card that contains ASICs. The ASICs perform bridging and routing based on Cisco Express Forwarding, QoS marking and policing, ACL enforcement, hardware Generic Routing Encapsulation (GRE), Network Address Translation (NAT), Multiprotocol Label Switching (MPLS), and IP version 4 (IPv4) and IPv6 forwarding and policing. Various PFC3 cards are used on the supervisor engines for the Cisco Catalyst 6500 Series Switch:
  - Catalyst 6500 Series Supervisor Engine 32 and Catalyst 6500 Series Supervisor Engine 32-PISA use only the PFC3B
  - Catalyst 6500 Series Supervisor Engine 720 uses the PFC3A, PFC3B, or PFC3BXL
  - Catalyst 6500 Series Supervisor Engine 720-10G uses the PFC3C or PFC3CXL
* The Multilayer Switch Feature Card (MSFC) provides Layer 3 capabilities. It comes by default on the Catalyst 6500 Series Supervisor Engine 720 (MSFC3), Catalyst 6500 Series Supervisor Engine 720-10G (MSFC3), and Catalyst 6500 Series Supervisor Engine 32 (MSFC2A).
* The Programmable Intelligent Services Accelerator (PISA) provides Layer 3 and deep packet inspection capabilities for the Sup32-PISA. It supports all standard routing protocols, in addition to supporting Network-Based Application Recognition (NBAR) and Flexible Packet Matching (FPM) services for application prioritization and packet filtering.

Note: Catalyst 6500 Series Switches also supported older supervisor engines, which are now EOS and EOL: the Catalyst 6500 Series Supervisor Engines 1, 1A, and 2. These supervisor engines offered the MSFC as an option but were not configured with it by default.

Ethernet Line Cards

Ethernet line cards provide IEEE-standard receive and forwarding interfaces and forward packets within the defined network.

Services Modules

Services modules support multi-gigabit security, application-aware Layer 4 through 7 content switching, wireless LAN services, network management, and voice gateway services to traditional phones, fax machines, PBXs, and the PSTN.

WAN Line Cards

WAN line cards provide receive and forwarding interfaces at the WAN edge.
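On a running chassis, the installed supervisor engine, its daughter cards, and the line cards can be inventoried from the CLI. A brief sketch using standard Cisco IOS commands (output is omitted here and varies by configuration):

    show module              ! slot-by-slot listing of modules and sub-modules (PFC, MSFC)
    show version             ! software image, supervisor type, and memory
    show environment status  ! power supply and fan tray health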
Catalyst 6500 Series Switch Investment Protection

(Figure: timeline of Catalyst 6500 supervisor generations, from Supervisor Engine 1 and Supervisor Engine 2 with Switch Fabric Module, through Supervisor Engine 720 with IPv6, GRE, NAT, and bidirectional PIM in hardware and PFC3B/3BXL options, Supervisor Engine 32, and Supervisor Engine 720-10G with 2x10GE uplink options and Virtual Switching support, illustrating continued innovation on the platform.)

The Catalyst 6500 Series Switch also provides investment protection, since it has been available for almost 10 years and Cisco continues to support and add functionality to the platform.

Positioning Cisco Catalyst 4900 and 6500 Series Switches in the Cisco Data Center 3.0 Network

This topic identifies the position of Catalyst 6500 and 4900 Series Switches in a Cisco Data Center 3.0 network.

Data Center Module Logical View

(Figure: logical view of the data center module, showing web and application servers connected through the data center layers to the enterprise core.)

The data center module design consists of core, aggregation, and access layers:

* Core: Generally well-suited for the Catalyst 6500 Series Switch, with scalable speed, interfaces, and control plane
* Aggregation: Has the same aspects as the core, with the differentiation that the Catalyst 6500 Series Switch offers a wide array of service modules
* Access: Extends from Cisco blade switches for blade server systems, through top-of-rack Catalyst 4900 Series Switches, to the highest-density and feature-rich Catalyst 6500 Series Switch

The Cisco Data Center 3.0 module design also adds elements to the standard design to increase functionality. Added elements of the design include server load balancing (SLB) and SSL offload, which are used to enhance the performance, reliability, and availability of web, application, and database servers.

Positioning the Catalyst 4900 Series Switch in the Data Center

(Figure: access options ranging from end-of-row access with the Cisco Catalyst 6500, through top-of-rack access with the Cisco Catalyst 4900, to distributed integrated access with Cisco blade switches.)

The Catalyst 4900 Series Switches are designed for the access layer of the Cisco Data Center 3.0 network, since they are optimized for server rack deployment. Catalyst 4900 Series Switches provide better manageability in the data center access layer, since they help ease the cabling overhead. They are used for the top-of-rack design, where the network interface card (NIC) of each server is connected to an in-rack switch (typically two are provided for redundancy purposes), which then provides an uplink to the aggregation switches. If a Catalyst 4900M switch is used with the proper half-card, 10 Gigabit Ethernet can be provided to the servers.

Positioning the Catalyst 6500 Series Switch in the Data Center

The Cisco Catalyst 6500 Series Switch is designed for all layers of the Cisco Data Center 3.0 architecture:

* Core layer: The Cisco Catalyst 6500 Series Switch provides high-speed connectivity with 10 Gigabit Ethernet and 1 Gigabit Ethernet PortChannel connections.
* Aggregation layer: Aggregates the traffic from the access layer devices and deploys network usage and performance management policies, which are controlled through the use of policy-driven features such as ACLs and QoS controls.
Service modules such as firewalls and load balancers are also deployed at this layer.
* Access layer: The network edge is the point at which devices that use the network for data transfer connect to the network. Depending on the device needs, devices might connect with 100-Mb/s, 1-Gb/s, or even 10-Gb/s Ethernet (for example, high-speed servers or server blades). The Cisco Catalyst 6500 Series Switch is used for the end-of-row design of the access layer, where each device in a rack has its own connection to the Cisco Catalyst 6500 Series Switch.

Summary

This topic summarizes the key points that were discussed in this lesson.

* Catalyst 4900 Series Switches are fixed-size, rack-optimized server switching platforms designed for top-of-rack deployments.
* Catalyst 4900 Series Switches offer wire-rate performance.
* Catalyst 6500 Series Switches offer flexibility with different module and chassis options.
* Catalyst 6500 Series Switches are suitable for the core, aggregation, and access layers of the Cisco Data Center 3.0 network.
* Distributed cabling designs with top-of-rack and end-of-row access improve the scalability and manageability of data center networks.
* The expected lifecycle of the Cisco Catalyst 6500 Series Switches is an additional 10 years or more (as of 2008).

Lesson 3

Describing the Cisco Catalyst 6500 Series Switch Supervisors

Overview

This lesson describes the capabilities and performance considerations of the Cisco Catalyst 6500 Series Switch supervisor modules. While multiple generations of supervisors exist for the Cisco Catalyst 6500 Series Switch, this lesson focuses on the Cisco Catalyst 6500 Series Supervisor Engine 720 with PFC3A/B/BXL, the Cisco Catalyst 6500 Series Supervisor Engine 32 modules, and the Cisco Catalyst 6500 Series Supervisor Engine 720-10G with PFC3C/CXL.

Objectives

Upon completing this lesson, you will be able to describe the Catalyst 6500 Series Switch supervisor modules, architecture, and operating system. This includes being able to meet these objectives:

* Describe the Catalyst 6500 Series Switch supervisor module architecture
* Describe the Catalyst 6500 Series Supervisor Engine 720 with PFC3A/B/BXL modules
* Describe the Catalyst 6500 Series Supervisor Engine 720-10G with PFC3C/CXL modules
* Describe the Catalyst 6500 Series Supervisor Engine 32 modules
* Describe the Catalyst 6500 Series Switch supervisor operating system

Cisco Catalyst 6500 Series Switch Supervisor Architecture Overview

This topic describes the Catalyst 6500 Series Switch supervisor module architecture.
Cisco Catalyst 6500 Series Switch Switching Architectures
* Switching architectures:
  - 32-Gb/s bus
  - 720-Gb/s switch fabric
* Performance metrics:
  - Throughput in mpps
  - Bandwidth in Gb/s

Two different hardware switching architectures that allow scaling in any deployment are supported in the Catalyst 6500 Series Switches:

* 32-Gb/s switching bus: Located in the backplane of every Catalyst 6500 Series Switch chassis
* 720-Gb/s crossbar switch fabric: Located on the Catalyst 6500 Series Supervisor Engine 720

Note: More information on the Catalyst 6500 Series Switch can be found at http://www.cisco.com/go/catalyst6500.

Switch Performance Metrics

Two metrics are important in understanding the performance of the Catalyst 6500 Series Switches:

* The first metric is the number of forwarding decisions that can be made per second, usually measured in millions of packets per second (mpps). This metric reflects the fact that forwarding decisions take the same amount of time regardless of the size of the packet being switched.
* The second metric is the bandwidth of various components of the switch. Bandwidth is usually measured in Mb/s or billions of bits per second (gigabits per second, or Gb/s). This metric measures the amount of traffic that can be forwarded over a given resource, such as a switch fabric, the internal bus, or a port.

Catalyst 6500 Series Switch Backplane Architecture

The figure shows the Cisco Catalyst 6509 Switch chassis and the backplane architecture that is typical for the Catalyst 6500 Series Switches. Redundant clocking mechanisms for the bus and the MAC address electrically erasable programmable read-only memory (EEPROM) are found at the top of the backplane. Two columns of three to 13 slots populate the main body of the backplane; this example shows nine slots. The left column shows the switch fabric connectors (utilized when a Catalyst 6500 Series Supervisor Engine 720 is present), and the column on the right shows the bus connections and components of the bus. The connections between slots 1 and 2 and between slots 5 and 6 support inter-supervisor communications. Supervisors are typically positioned in these slots. The bus connections support the following communication buses, as shown in this figure:

* D = Data bus
* R = Results bus
* C = Ethernet out-of-band channel (EOBC)

Note: The EOBC is always present even though it may not be shown in all the slides in the course material. It is not part of the forwarding architecture of the system.

Each Catalyst 6500 Series Switch chassis is equipped with a passive 32-Gb/s shared-bus backplane. The Cisco Catalyst 6500 Series Switch shared-bus backplane is a centralized packet-forwarding architecture that is accessible through a 32-Gb/s shared switching bus composed of a results bus (RBus) and a control bus (also called the data bus, or DBus), which provide the mechanisms by which control and data information is forwarded throughout the system. However, the 32-Gb/s bus is not used by all line cards or in all situations.

The switching bus is shared by all resources in the Cisco Catalyst 6500 Series Switch chassis. Only one packet can be transmitted at a time on the switching bus. This packet is received by all line cards.
As a result of switching decisions made by the Policy Feature Card (PFC), the line cards either drop or process the packet as required. The PFC switching system executes decisions according to the Layer 3 switching and routing rules provided by the Multilayer Switch Feature Card (MSFC).

Crossbar Switch Fabric

The crossbar switch fabrics available in the Cisco Catalyst 6500 Series Switches provide any-to-any, non-blocking connections between line cards via 20-Gb/s or 8-Gb/s fabric channels. All versions of the Catalyst 6500 Series Supervisor Engine 720 support a 720-Gb/s crossbar switching fabric comprising 18 fabric channels. No version of the Catalyst 6500 Series Supervisor Engine 32 supports a switch fabric of any kind.

Note: The system will automatically allocate the speed of the channel based on the line card (no user configuration is needed).

Crossbar Switch Fabric - Nine-Slot Chassis

The figure shows a schematic diagram of the crossbar switch fabric layout for a nine-slot Catalyst 6500 Series Switch chassis. There are 16 fabric channels attaching each fabric ASIC to the crossbar switch fabric, plus two fabric channels that support slot 5, bringing the total number of fabric channels in this nine-slot chassis to 18.

Each fabric channel in this switch fabric is dual-speed, supporting the channel at either 20 Gb/s or 8 Gb/s, depending on the line card that is used in the slot. The Catalyst 6500 Series Supervisor Engine 720 incorporates an integrated switch fabric that supports all 18 fabric channels. The 720-Gb/s auto-negotiating switch fabric enables multiple generations of Catalyst 6500 Series Switch modules to be quickly categorized and recognized based on the forwarding and switch fabric characteristics of a given module.

Dual Catalyst 6500 Series Supervisor Engine 720 devices can provide redundant fabrics as well as redundant Layer 2 through Layer 4 processing. If a problem causes one supervisor engine to fail over, the crossbar switch fabric and Layer 2 through Layer 4 processing all fail over together. This avoids having the crossbar switch fabric running in one supervisor engine and Layer 2 through Layer 4 processing running in the other supervisor engine. In a redundant scenario, the switch fabrics are in active/standby mode, such that a failure of the active fabric does not reduce the forwarding capacity of the system when the standby takes over.

For the other three-, four-, six-, and nine-slot Catalyst 6500 Series Switch chassis, the Catalyst 6500 Series Supervisor Engine 720 switch fabric also provides two fabric channels per slot.

Note: The 720-Gb/s switch fabric is calculated as follows: 20 Gb/s per interface * 18 channels in the switch fabric * 2 (for input and output) = 720 Gb/s.

Crossbar Switch Fabric - Thirteen-Slot Chassis

The figure shows a schematic diagram of the crossbar switch fabric layout for a 13-slot Catalyst 6500 Series Switch chassis.
In the 13-slot Catalyst 6500 Series Switch chassis, there are not enough ports on the 18-channel switch fabric to support dual fabric channels to every slot. Slots 1 to 8 are supported by a single fabric channel, and slots 9 to 13 are supported by dual fabric channels. Therefore, any cards requiring dual fabric channels (6748, 6704, 6708, 6716, and WiSM) must be installed in slots 9 to 13. If these cards are installed in slots 1 through 8, they will be powered down.

Cisco Catalyst 6500 Series Switch Forwarding Architectures

Forwarding architectures:
* Classic
* Cisco Express Forwarding
* Distributed Cisco Express Forwarding

(Figure: table summarizing the three forwarding architectures, the location of the forwarding engine for each, and their rated throughput.)

Catalyst 6500 Series Switch modules use one of three forwarding technologies: classic, Cisco Express Forwarding, and distributed Cisco Express Forwarding. Each technology has a unique architecture with the following characteristics and capabilities:

* Classic architecture connects modules to the 32-Gb/s shared bus. These cards use the centralized Cisco Express Forwarding engine on the PFC for forwarding at up to 15 mpps.
* Cisco Express Forwarding is a switched-fabric architecture, scalable to 30 mpps. This technology uses a central Cisco Express Forwarding engine located on the supervisor engine PFC, which makes forwarding decisions for all line cards.
* Distributed Cisco Express Forwarding is suited for the most demanding environments. This technology uses the distributed Cisco Express Forwarding engine located on the interface module's Cisco Catalyst 6500 Series Distributed Forwarding Card (DFC) daughter card, together with the distributed Cisco Express Forwarding table. The distributed Cisco Express Forwarding table is located on the interface module's Catalyst 6500 Series DFC and is a local copy of the supervisor engine's central Cisco Express Forwarding table. The Catalyst 6500 Series DFC on the interface module makes all forwarding decisions locally. Interface modules equipped with a Catalyst 6500 Series DFC provide the maximum performance and scalability of the architectures offered.

Note: A Multilayer Switch Feature Card (MSFC) manages the Cisco Express Forwarding tables.

Comparing Flow-Based and Cisco Express Forwarding Architectures

Flow-Based Switching

Flow-based switching is found in many switching architectures available today. The inherent problem with flow-based switching is that it relies on the control plane to forward the first packet of each new flow that enters the switch. In current applications, many flows are short-lived, and when combined with an increased data load, these flows place a large burden on the switch control plane.

Flow-based architectures are also susceptible to denial-of-service (DoS) attacks, and most of these attacks are based on a high number of single-packet flows. These attacks cause constant CPU lookups and cache table population and aging.
These activities affect the normal operation of other CPU functions such as Layer 3 routing protocols and Layer 2 Spanning Tree Protocol (STP).

The Cisco Catalyst 6500 Series Switch provides a high-performance control plane. However, while control plane performance is on the order of hundreds of thousands of packets per second, this level of performance does not approach the performance provided by hardware-based switching, which is normally measured in millions of packets per second. In many customer environments, flow-based switching can impose a bottleneck on overall throughput.

Cisco Express Forwarding Architectures

To minimize throughput constraints, Cisco devised a new forwarding architecture to greatly enhance the forwarding capabilities of the Cisco Catalyst 6500 Series Switch architecture and to eliminate the control plane from the forwarding path. The Cisco Express Forwarding architecture allows the control plane to do what it does best: interact with its routing peers to understand the topology of the deployment. From this topology, the MSFC builds a Forwarding Information Base (FIB) that is pushed down to the PFC and programmed into hardware in a specialized high-performance lookup memory called ternary content addressable memory (TCAM). At all times, the PFC has full knowledge of the topology and can make informed decisions on where to forward data. If the topology of the network changes, the FIB is modified and passed to the PFC and the Catalyst 6500 Series Distributed Forwarding Cards (DFCs), ensuring that all forwarding tables are current at all times.

The Catalyst 6500 Series Supervisor Engine 2 introduced the Cisco Express Forwarding architecture, which replaced flow-based switching as the forwarding architecture used in the Cisco Catalyst 6500 Series Switches. Cisco Express Forwarding is now the default forwarding architecture on Cisco Catalyst 6500 Series Switch supervisor engines.

There are two variants of the Cisco Express Forwarding architecture: Cisco Express Forwarding and distributed Cisco Express Forwarding. The figure summarizes the key characteristics of each variant.

Cisco Catalyst 6503-E Switch Slot Requirements

Slot 1: Sup 720, 32; line card or service module
Slot 2: Sup 720, 32; line card or service module
Slot 3: Line card or service module

The Cisco Catalyst 6503-E switch supports all supervisors, line cards, and modules without exception, but these components must be positioned in specific slots. Supervisor engines must be placed in slot 1 or 2 (or both). Line cards and modules can be inserted into any open slot in the chassis. Redundant supervisors are supported in this chassis.

Note: Catalyst 6500 Series Supervisor Engines 1, 1A, and 2 are also supported in slots 1 and 2. The Cisco Catalyst 6503 switch supports all supervisors, line cards, and non-6700 modules. The power supplies are at the back of the chassis.

Cisco Catalyst 6504-E Switch Slot Requirements

Slot 1: Sup 720, 32; line card or service module
Slot 2: Sup 720, 32; line card or service module
Slot 3: Line card or service module
Slot 4: Line card or service module

The Cisco Catalyst 6504-E switch supports all supervisors, line cards, and modules without exception, but these components must be positioned in specific slots. Supervisor engines must be placed in slot 1 or 2 (or both).
Line cards and modules can be inserted into any open slot in the chassis, including slots 1 and 2. Redundant supervisors are supported in this chassis.

Note: Catalyst 6500 Series Supervisor Engines 1, 1A, and 2 are also supported in slots 1 and 2.

Cisco Catalyst 6506-E Switch Slot Requirements

Slots 1-4: Line card or service module
Slots 5-6: Sup 720, 32; line card or service module

The figure illustrates the locations of supervisors, line cards, and modules in a Cisco Catalyst 6506-E switch. Consider the following guidelines when installing these components:

* There are no slot dependencies for fabric-connected or bus-connected line cards in this chassis.
* Catalyst 6500 Series Supervisor Engine 720 or 32 must be located in slot 5 or 6.
* Redundant supervisors are supported in the Catalyst 6506-E switch chassis.

Note: Catalyst 6500 Series Supervisor Engine 1, 1A, or 2 can reside in slot 1 or 2 in the Catalyst 6506 Series Switch. If a Switch Fabric Module or Switch Fabric Module 2 is used with the Catalyst 6500 Series Supervisor Engine 2, it must be located in either slot 5 or slot 6.

Cisco Catalyst 6509-E, 6509-V-E, or 6509-NEB-A Switch Slot Requirements

Slots 1-4: Line card or service module
Slots 5-6: Sup 720, 32; line card or service module
Slots 7-9: Line card or service module

The figure shows the location of supervisors, line cards, and modules in the Cisco Catalyst 6509-E, 6509-V-E, and 6509-NEB-A switches. The Catalyst 6509-V-E and 6509-NEB-A switch backplanes align cards in a vertical orientation, but the architecture is the same as depicted in the figure. Consider the following guidelines when installing these components:

* Catalyst 6500 Series Supervisor Engine 720 or 32 must be located in slot 5 or slot 6.
* Redundant supervisors are supported in the Cisco Catalyst 6509 Series Switch chassis.
* There are no slot dependencies for fabric-connected or bus-connected line cards in the Cisco Catalyst 6509 Series Switch chassis.

Note: Catalyst 6500 Series Supervisor Engine 1, 1A, or 2 can reside in slot 1 or 2 in the Catalyst 6509 Series Switch. If a Switch Fabric Module or Switch Fabric Module 2 is used with the Catalyst 6500 Series Supervisor Engine 2, it must be located in either slot 5 or slot 6.

Cisco Catalyst 6513 Switch Slot Requirements

Slots 1-6: Line card or service module
Slots 7-8: Sup 720, 32; line card or service module
Slots 9-13: Line card or service module

The figure shows the location of supervisors, line cards, and modules in a Cisco Catalyst 6513 switch. Consider the following guidelines when installing these components:

* Catalyst 6500 Series Supervisor Engine 720 or 32 must be located in slot 7 or slot 8.
* Redundant supervisors are supported in the Catalyst 6513 switch chassis.
* After the primary supervisor or redundant supervisors are installed, the other slots are available for line cards and modules.
* Slots 1 through 8 of a Catalyst 6513 switch chassis support a single channel into any switch fabric present in the system.
* Slots 9 through 13 of a Catalyst 6513 switch chassis support dual channels into any switch fabric present in the system. Any line card that depends on dual fabric channels to function must reside in the bottom five slots. Installing a dual fabric channel line card into slots 1 through 8 results in that line card not receiving power.
* Line cards requiring only a single fabric channel, or no fabric channel at all, can be placed in any slot in the Catalyst 6513 switch chassis.

Note: Catalyst 6500 Series Supervisor Engine 2 can reside in slot 1 or 2 of the Catalyst 6513 switch.

Catalyst 6500 Series Switch Supervisor Engine 720 with PFC3A/B/BXL

The Catalyst 6500 Series Supervisor Engine 720 provides higher-performance management and forwarding functions to the Catalyst 6500 Series Switches than any other supervisor engine available. This topic describes the key features of the Catalyst 6500 Series Supervisor Engine 720 with PFC3A/B/BXL.

Catalyst 6500 Series Supervisor Engine 720 Overview

(Figure: Supervisor Engine 720 faceplate, showing the uplink ports, console port, and removable storage.)

The Catalyst 6500 Series Supervisor Engine 720 is designed for the core, aggregation, and even access layer of data center networks. The supervisor engine was introduced in 2003 and integrates the crossbar switch fabric, Policy Feature Card 3 (PFC3), and Multilayer Switch Feature Card 3 (MSFC3) into one supervisor module.

The Catalyst 6500 Series Supervisor Engine 720 is based on a 600-MHz CPU for the Switch Processor (SP) and a 600-MHz CPU for the Route Processor (RP). It supports up to 1 GB of DRAM for the SP and up to 1 GB of DRAM for the RP. The RP bootflash is a 64-MB linear flash and is not upgradable. The SP bootflash is either a 64-MB linear flash or a 512-MB CompactFlash. The 512-MB CompactFlash option is available by default on Catalyst 6500 Series Supervisor Engine 720s ordered with Cisco Catalyst 6500 Series Switch Cisco IOS Software Release 12.2(18)SXE5 or newer. Existing Catalyst 6500 Series Supervisor Engine 720s with the 64-MB linear flash can be upgraded to the 512-MB CompactFlash via the WS-CF-UPG= upgrade kit. The NVRAM size is 2 MB and is not upgradable.

The key features of the Catalyst 6500 Series Supervisor Engine 720 include the following:

* Integrated switch fabric: supports up to 720 Gb/s
* PFC3 supporting IPv4, IPv6, Network Address Translation (NAT), Generic Routing Encapsulation (GRE), and MPLS (3B/3BXL only) forwarding in hardware
This compares to the previous maximum capacity of 256 Gb/s available with a Catalyst 6500 Series Supervisor Engine 2 with Switch Fabric Module or Switch Fabric Module 2. The integrated switch fabric on the baseboard supports 18 fabric channels. Each fabric channel in this switch fabric is dual-speed, supporting the channel at either 20 Gb/s or 8 Gb/s, depending on the line card that is used in each slot. Thus the crossbar switching fabric supports connections both to the CEF256 line cards, at 8 Gb/s per fabric channel, and to the newer CEF720 line cards, at 20 Gb/s per fabric channel. The supervisor engine also supports a connection to the 32-Gb/s Catalyst 6500 Series Switch shared bus, apart from the connection to the onboard 720-Gb/s switching fabric.

Forwarding Architecture

The supervisor engine uses the Cisco Express Forwarding architecture to forward packets. Up to 30 mpps of Layer 2 and Layer 3 centralized Cisco Express Forwarding switching of IP traffic is supported. Using the new, higher-performance line cards with distributed forwarding allows a supervisor engine to scale switch performance to over 400 mpps.

Note: Unlike the Policy Feature Card 1 (PFC1) and Policy Feature Card 2 (PFC2), Internetwork Packet Exchange (IPX) switching in hardware is not supported on the Catalyst 6500 Series Supervisor Engine 720 integrated PFC3. The Catalyst 6500 Series Supervisor Engine 720, however, still supports IPX forwarding in software.

Catalyst 6500 Series Supervisor Engine 720-3A and 720-3B/BXL Overview

- Supervisor Engine 720-3A: incorporates the PFC3A, which does not have all features realized in hardware.
- Supervisor Engine 720-3B: incorporates the newer PFC3B to provide the same features as the XL version, but not as high a capacity for routes and flow information.
- Supervisor Engine 720-3BXL: incorporates the PFC3BXL, extending hardware features and system capacity for routes and flow information.

Supervisor 720-3A

The Catalyst 6500 Series Supervisor Engine 720 is the first released type of this supervisor engine. All versions of the Supervisor 720 share a common architecture, with the exception of the PFC, which in this case is the PFC3A. The main differences when compared to later versions are the unavailability of some features in hardware, such as full MPLS (Multiprotocol Label Switching) support and uRPF (Unicast Reverse Path Forwarding), and the default memory size (512 MB).

Supervisor 720-3B

The Catalyst 6500 Series Supervisor Engine 720-3B is the enhancement of the Catalyst 6500 Series Supervisor Engine 720. Architecturally, it is the same as the original Catalyst 6500 Series Supervisor Engine 720 in terms of the switch fabric used and the backplane connections offered. The Catalyst 6500 Series Supervisor Engine 720-3B incorporates the PFC3B, which increases the functionality of the supervisor over its predecessor.

Some of the features that differentiate the Catalyst 6500 Series Supervisor Engine 720-3B from the earlier Catalyst 6500 Series Supervisor Engine 720 include these:
- MPLS support in hardware
- Support for Ethernet over MPLS (EoMPLS)
- Support for access control entry (ACE) hit counters
- Increased support for access control list (ACL) labels, from 512 to 4096 labels
- ACL-based uRPF check performed in hardware
- Increased efficiency, from 50 percent to 90 percent, for storing NetFlow entries in the NetFlow table
- Ability to apply quality of service (QoS) policies on tunnel interfaces
- Ability to apply MAC ACLs to IPv4 traffic
- Support for matching on class of service (CoS) and VLAN in ACLs
- Support for up to 256,000 multicast routes in sparse mode
Supervisor 720-3BXL

The Catalyst 6500 Series Supervisor Engine 720-3BXL was introduced in early 2004. The Catalyst 6500 Series Supervisor Engine 720-3BXL is functionally identical to the Catalyst 6500 Series Supervisor Engine 720-3B, but differs in its capacity for supporting routes and NetFlow entries. Up to 1 million routes can be stored in its forwarding tables, and up to 256,000 NetFlow entries can be stored in the NetFlow tables. The Catalyst 6500 Series Supervisor Engine 720-3BXL also supports classic line cards, thus providing total backward compatibility for all line card generations.

Catalyst 6500 Series Supervisor Engine 720 Switching Architecture

(The figure shows the Supervisor Engine 720 crossbar switch fabric with optional DFC3s on the line cards and centralized-to-distributed forwarding performance of 30 to 400 mpps.)

The Catalyst 6500 Series Supervisor Engine 720 crossbar switch fabric offers the fabric-enabled line cards or service modules a total of eighteen 20-Gb/s or 8-Gb/s fabric channels in a Cisco Catalyst 6500 Series Switch system, in addition to the 32-Gb/s shared-bus connection.

Note: The legacy Switch Fabric Module or Switch Fabric Module 2 provided a 256-Gb/s crossbar switch fabric to a Cisco Catalyst 6500 Series Switch system with the Catalyst 6500 Series Supervisor Engine 2. The Switch Fabric Module or Switch Fabric Module 2 provides a total of eighteen 8-Gb/s fabric channels to CEF256 modules.

Note: A Catalyst 6500 Series Supervisor Engine 720 and either form of Switch Fabric Module are mutually exclusive and cannot coexist in the same Cisco Catalyst 6500 Series Switch chassis.

Bandwidth of Duplex Communications

Comparing the bandwidth of various components and technologies is often complicated by situations in which half- and full-duplex bandwidth metrics are mixed. For example, Fast Ethernet is often documented as being a 100-Mb/s medium with 200 Mb/s of full-duplex bandwidth. Documentation of the Catalyst 6500 Series Switch architectural features also contains this mix of half- and full-duplex numbers. The 32-Gb/s shared bus actually transmits data at 16 Gb/s. However, because the packet is both transmitted and received during this operation, the full-duplex number of 32 Gb/s is used in most Cisco literature. Full-duplex numbers are also used to document the total bandwidth of the crossbar switch fabrics. Conversely, Cisco documents the speed of the individual fabric channels with their half-duplex numbers. For example, there are 18 switch fabric channels on the Catalyst 6500 Series Supervisor Engine 720 switch fabric, and each channel is capable of transmitting 20 Gb/s. These 18 channels provide 360 Gb/s of half-duplex bandwidth or 720 Gb/s of full-duplex bandwidth. To be consistent with Cisco documentation, the figures in this course use full-duplex numbers for switching bus performance and half-duplex numbers for fabric channel speed.
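As a quick sanity check on those conventions, the advertised figures can be derived directly from the channel count and channel speed; the same arithmetic produces the 32-Gb/s shared-bus number:

    18 channels x 20 Gb/s = 360 Gb/s (half duplex)
    360 Gb/s x 2          = 720 Gb/s (full duplex)

    16 Gb/s raw bus rate x 2 (simultaneous transmit and receive) = 32 Gb/s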
Catalyst 6500 Series Supervisor Engine 720 MSFC3

- MSFC3 is a standard daughter card on the Supervisor Engine 720.
- Supervisor Engine 720-3B/BXL has a maximum DRAM size of 1 GB on the SP and RP.

(The figure lists the MSFC3 default and maximum DRAM options by Cisco IOS Software release.)

The control plane functions in Cisco Catalyst 6500 Series Switches are processed by the MSFC. On the Catalyst 6500 Series Supervisor Engine 720, both the RP and the switch processor (SP) are on the MSFC3 daughter card.

The MSFC1 supported forwarding rates up to 170,000 p/s (the MSFC1 is not supported by the Catalyst 6500 Series Supervisor Engine 720); the MSFC2 and MSFC3 can support forwarding rates up to 500,000 p/s. These forwarding rates apply only to the special cases of traffic that require processing by the MSFC3. Regular IPv4 and IPv6 data traffic is processed by the Cisco Express Forwarding architecture at greater speeds.

Route Processor

The Catalyst 6500 Series Supervisor Engine 720 MSFC3 supports the RP, which provides Layer 3 functionality such as routing and Cisco Express Forwarding table creation. The RP is responsible for a number of processes, including running the Layer 3 routing protocols, performing address resolution, running Internet Control Message Protocol (ICMP), managing the virtual interfaces (such as switch virtual interfaces [SVIs]), and maintaining the Cisco IOS Software configuration.

In all MSFC daughter cards (MSFC1, MSFC2, and MSFC3), the RP is located on the MSFC. The memory defaults for the MSFC3 are shown in the figure.

Note: The MSFC daughter card is required and is not a field-replaceable unit (FRU).

Switch Processor

The MSFC3 also supports the SP, which controls all chassis operations. The MSFC3 differs from the MSFC1 and MSFC2 in that the SP, which was originally located on the baseboard of the Catalyst 6500 Series Supervisor Engine 1 and the Catalyst 6500 Series Supervisor Engine 2, has now been moved onto the MSFC3 in the Catalyst 6500 Series Supervisor Engine 720. The SP is primarily responsible for running the Layer 2 protocols such as Spanning Tree Protocol (STP), VLAN Trunking Protocol (VTP), and Cisco Discovery Protocol, as well as pushing the FIB tables to the PFC and Catalyst 6500 Series DFCs.
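Because both CPUs run their own copy of the software in native mode, it can be useful to query the SP directly when troubleshooting control plane behavior. The following is a minimal sketch, assuming native mode; the remote command switch prefix executes the named command on the SP rather than the RP:

    Router# show version
    ! Runs on the RP and reports the RP's memory and uptime.
    Router# remote command switch show version
    ! The same command, executed on the SP; the SP's own DRAM and
    ! bootflash figures appear in the response.

Since the SP boots first and then hands the console to the RP, messages logged before that handoff are visible only from the SP side.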
Catalyst 6500 Series Supervisor Engine 720 PFC3

- PFC3 is a standard daughter card on the Supervisor Engine 720.
- It is field-upgradable to allow for changing network environments.

(The figure compares the PFC3A, PFC3B, and PFC3BXL, including IPv4 route capacity, 256K on the PFC3A and PFC3B versus 1 million on the PFC3BXL, the number of ACL labels, 512 versus 4096, uRPF support, and NetFlow entry capacity.)

The PFC is a daughter card that sits on the supervisor baseboard and contains the ASICs (the Layer 2 and Layer 3 engines) that are used to accelerate Layer 2 and Layer 3 switching, store and process QoS and security ACLs, and maintain NetFlow statistics. There are three generations of PFCs: the PFC1, PFC2, and PFC3. The PFC1 and PFC2 are for Catalyst 6500 Series Supervisor Engines 1 and 2, not for the Catalyst 6500 Series Supervisor Engine 720 or Supervisor Engine 32.

The PFC3 is a standard inclusion with the Catalyst 6500 Series Supervisor Engine 720 and provides centralized forwarding performance up to 30 mpps. Like the PFC2, the PFC3 supports hardware-based Layer 2 and Layer 3 switching, security ACL processing, QoS application, and NetFlow statistics collection.

The PFC3 has also been enhanced to include support for a set of new features that are now processed in hardware. These new features include:
- NAT and Port Address Translation (PAT)
- Multipath uRPF check
- GRE
- IPv6 switching
- IPv6 ACLs
- IPv6 tunneling
- MPLS (PFC3B and PFC3BXL only)
- MPLS virtual private network (VPN) (PFC3B and PFC3BXL only)
- MPLS provider and provider edge (PE) support (PFC3B and PFC3BXL only)
- MPLS Traffic Engineering (TE) support (PFC3B and PFC3BXL only)
- Bidirectional Protocol Independent Multicast (bidir-PIM)
- User-Based Rate Limiting (UBRL)
- Egress policing

The PFC3B is an enhanced version of the PFC3 that adds a number of new functions in hardware and improves the efficiency levels for storing flow entries in the NetFlow table. The most significant enhancement provided by the PFC3B is hardware switching support for MPLS-tagged packets. This support enables any Ethernet line card to receive and send MPLS-tagged packets. This PFC also supports EoMPLS natively. The PFC3B adds support for a number of other new enhancements, including support for ACL hit counters, applying QoS policies on tunnel interfaces, increasing the support for ACL labels up to 4096, and allowing ACLs to match on CoS and VLAN values on incoming packets.

Note: The PFC3BXL requires more memory because of its support of enhanced features over the PFC3B. However, simply increasing the DRAM on the SP or RP of a PFC3B-equipped system will not increase the forwarding-entry capacities in the FIB of that system. Those maximums are dictated by the size of the FIB TCAM, which is not controlled by the size of the SP or RP DRAM.

Catalyst 6500 Series Supervisor Engine 720 Switch Fabric

- Integrated 720-Gb/s switch fabric
- 18 fabric channels:
  - CEF256 and dCEF256 line cards connect at 8 Gb/s per fabric channel
  - CEF720 and dCEF720 line cards connect at 20 Gb/s per fabric channel

The Catalyst 6500 Series Supervisor Engine 720 switch fabric supports the Cisco crossbar switch fabric architecture. The switch fabric provides 18 fabric channels that are apportioned across each of the slots in the chassis. Each fabric channel can run at 8 Gb/s or 20 Gb/s depending on the attached line card (the full-duplex numbers are 16 Gb/s and 40 Gb/s per channel). The clocking speed is determined by the line card that is installed in the switch. The CEF256 and dCEF256 line cards connect to the fabric at 8 Gb/s per fabric channel, whereas the CEF720 and dCEF720 line cards connect to the fabric at 20 Gb/s per fabric channel.

The Catalyst 6500 Series Supervisor 720 can support single-fabric and dual-fabric channel line cards, and can deliver a bandwidth of up to 40 Gb/s to each line card in the slots.

Note: The three-, four-, six-, and nine-slot chassis support two fabric channels per slot into the crossbar switch fabric. The 13-slot chassis supports a single fabric channel for slots 1 through 8, and dual fabric channels for slots 9 through 13.

Note: In the Cisco Catalyst 6513 switch chassis, it is important to ensure that dual-fabric line cards are inserted only in slots 9 through 13.
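The channel speed each slot has actually negotiated can be checked from the CLI. The following is a sketch of the kind of output to expect from the show fabric status command; the exact columns vary by software release, and the slot numbers and speeds shown here are illustrative only:

    Router# show fabric status
     slot  channel  speed  module  fabric
                           status  status
        1        0    20G      OK      OK
        3        0     8G      OK      OK
        5        0    20G      OK      OK

A CEF720 card reports 20G on its channel while a CEF256 card in the same chassis reports 8G, matching the dual-speed behavior described above.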
Architecture

This architecture uses a combination of buffering and overspeed to overcome potential congestion and head-of-line blocking conditions. Overspeed is used to clock the paths internal to the switch fabric at a speed higher than that of the fabric channel into the switch fabric. For example, for an 8-Gb/s fabric channel into the Switch Fabric Module, the internal path within the Switch Fabric Module is clocked at 24 Gb/s. Similarly, the internal path is clocked at 60 Gb/s for fabric channels that clock at 20 Gb/s. Overspeed is a technique used to accelerate packet switching through the switch fabric to minimize the impact of congestion. Line-rate buffering is also present internally within the switch fabric to overcome any temporary periods of congestion. Buffering is implemented on egress in the switch fabric to assist in eliminating head-of-line blocking conditions.

Catalyst 6500 Series Supervisor Engine 720 IPv6 Features

- IPv6 addressing
- ICMP for IPv6
- DNS for IPv6
- IPv6 MTU path discovery
- SSH for IPv6
- IPv6 Telnet
- IPv6 traceroute
- IPv6 load sharing up to 16 paths
- EtherChannel hash across 48 bits
- IPv6 policing, NetFlow, and classification
- Standard and extended IPv6 ACLs
- IPv6 QoS lookups
- IPv6 multicast
- IPv6-to-IPv4 tunneling
- IPv6 edge over MPLS (6PE)

IPv6 functionality is located on the PFC3. The Forwarding Information Base (FIB) for IPv6 is half the size of that for IPv4:
- PFC3A/PFC3B: 128,000 entries
- PFC3BXL: 512,000 entries

Route Processor Rate Limiters

- Switching in hardware operates at millions of p/s.
- The RP supports processing rates in the thousands of p/s.
- RP rate limiters are used to limit the impact of traffic flooding to the RP and swamping the CPU. Rate-limited traffic classes include:
  - Input and output ACL traffic
  - CEF receive traffic
  - CEF glean traffic
  - MTU failures
  - ICMP redirects
  - VACL logging
  - Layer 3 security feature traffic
  - TTL failures
  - RPF failures

The Cisco Catalyst 6500 Supervisor Engine 720 supports platform-specific, hardware-based rate limiters for special networking scenarios resembling DoS attacks. These hardware CPU rate limiters are called "special-case" rate limiters because they cover a specific predefined set of IPv4, IPv6, unicast, and multicast DoS scenarios. These DoS scenarios identify special cases where traffic needs to be processed by the switch processor (SP) or route processor (RP) CPU. Examples include multicast traffic for which a destination prefix cannot be found in the routing table, dropped traffic that needs to be processed by the CPU to send an ICMP unreachable message back to the source, and special packet types that cannot be identified with an ACL.

Catalyst 6500 Series Supervisor Engine 720 GRE Support

- GRE and IP-in-IP tunneling are supported in the PFC3 at hardware-accelerated speeds.
- GRE performance is up to 10 mpps centralized and up to 25 mpps decentralized.

The number of available internal VLANs limits the number of configurable GRE tunnels. An internal VLAN is assigned for each routed or switched virtual interface that is created on the switch and for each WAN port that exists on an installed FlexWAN, OSM, or SPA module. As mentioned earlier, an internal VLAN is also assigned to each GRE tunnel that is created on the system. Internal VLAN usage ranges from VLAN 1006 to VLAN 4094.
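Because each tunnel consumes one internal VLAN, a minimal GRE tunnel definition is all the PFC3 needs to switch the tunnel in hardware. The sketch below uses illustrative addresses and interface names only:

    interface Tunnel0
     ip address 10.1.1.1 255.255.255.252
     tunnel source Loopback0
     tunnel destination 192.168.20.1
    ! The tunnel interface claims one internal VLAN from the 1006-4094 range.

The show vlan internal usage command can be used to confirm which internal VLANs have been allocated and by what.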
Egress Policing

- An egress policer can be applied on a routed (Layer 3) port or a VLAN SVI.
- It cannot be applied to a Layer 2 port.

All policing, including egress policing, is done on the ingress PFC or Catalyst 6500 Series DFC; it is not done at the port level.

Multipath Unicast RPF

- The uRPF check mitigates spoofed or malformed IP source addresses.
- uRPF drops packets whose source address is not in the local forwarding tables.

(The figure shows a FIB excerpt, with prefix, next hop, and interface columns, used to validate packet source addresses.)

The Unicast Reverse Path Forwarding (Unicast RPF) feature reduces problems caused by the introduction of malformed or forged (spoofed) IP source addresses into a network by discarding IP packets that lack a verifiable IP source address.

UBRL Features

- Three types of global flow masks can be stored on the supervisor in the NetFlow table: destination-only IP (the default), source-destination IP, and full flow (source IP, destination IP, protocol, source port, destination port).
- The Catalyst 6500 Series Supervisor Engine 720 supports up to two flow masks in the system, and source-only and destination-only flow masks in the PFC3.
- The Catalyst 6500 Series Supervisor Engine 720 can store more entries in its NetFlow table, which allows different features that use the NetFlow table to use different masks (Cisco IOS SLB, NDE, TCP intercept, reflexive ACLs, WCCP, and CBAC).

User-Based Rate Limiting (UBRL) is a form of microflow policing that supports the policing of individual flows. The primary difference between UBRL and regular microflow policing is that you can specify a source-only flow or destination-only flow rather than the full source or destination address of the packet.
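As a concrete illustration, the following sketch rate-limits each distinct source address to 1 Mb/s using a source-only flow mask. The ACL number, class and policy names, rate, and burst values are arbitrary examples, and the police flow syntax should be checked against the software release in use:

    mls qos
    access-list 101 permit ip any any
    class-map match-all USER-FLOWS
     match access-group 101
    policy-map UBRL
     class USER-FLOWS
      ! One policer instance is created per source IP (src-only flow mask)
      police flow mask src-only 1000000 32000 conform-action transmit exceed-action drop
    interface GigabitEthernet1/1
     service-policy input UBRL

Because the mask is src-only, every source address seen on the interface is measured against its own 1-Mb/s contract rather than sharing a single aggregate policer.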
Cisco Catalyst 6500 Series Switch Supervisor Engine 720-10G-3C/CXL

This topic describes the Cisco Catalyst 6500 Series Supervisor Engine 720-10G.

Catalyst 6500 Series Supervisor Engine 720-10G Overview

- MSFC3 is a standard daughter card.
- The PFC3C/CXL is the switching engine and is field-upgradable to allow for changing network environments.

(The figure compares PFC3C and PFC3CXL route, forwarding-rate, and NetFlow-entry capacities, and shows the faceplate with removable storage, three Gigabit Ethernet uplink ports, and two 10 Gigabit Ethernet X2 uplink ports.)

The Cisco Catalyst 6500 Series Supervisor Engine 720-10G is designed for the core, aggregation, and even access layers of data center networks. The Catalyst 6500 Series Supervisor Engine 720-10G has the following features:
- 2 x 10 Gigabit Ethernet ports (X2 optics)
- 2 x Gigabit Ethernet SFP ports and 1 x 10/100/1000 port (two small form-factor pluggable [SFP] ports and one RJ-45 port)
- All uplinks are active, even on a redundant supervisor module
- Support for multiple generations of line cards
- Rich services support with service modules such as the Network Analysis Module (NAM), firewall, wireless controller, and ACE
- 2 x USB ports (currently disabled)
- 1 x CompactFlash slot
- Integrated MSFC3 and PFC3C/CXL
- Up to 96K MAC addresses
- Up to 48-mpps centralized forwarding
- Enables system performance of 450 mpps with the 720-Gb/s switching fabric
- Compatible with E-Series and non-E chassis

Operating System

The Catalyst 6500 Series Supervisor Engine 720-10G is supported in Cisco IOS Software from version 12.2(33)SXH onwards. The only option for this supervisor is native mode; there is no Cisco Catalyst operating system support for this supervisor module.

Catalyst 6500 Series Supervisor Engine 720-10G Architecture

The Catalyst 6500 Series Supervisor Engine 720-10G architecture is similar to that of the Catalyst 6500 Series Supervisor Engine 720-3B/BXL. The following differences pertain:
- 10 Gigabit Ethernet uplinks on the supervisor
- The Layer 2, 3, and 4 engine is now a single ASIC instead of two separate ASICs
- The switch fabric has 20 channels instead of 18; the two additional channels support the 10 Gigabit Ethernet interfaces on the supervisor and a potential redundant supervisor
- The Layer 2 CAM is still on-chip, and its capacity has been increased to 96K
- The uplink ports are no longer tied to the switching bus

The architecture is more like that of a DFC module. The two 10 Gigabit Ethernet uplinks and the 1-Gigabit Ethernet uplinks go through the port ASICs to connect to the fabric ASIC. Because the 10 Gigabit Ethernet and 1-Gigabit Ethernet ports share the port ASIC, QoS is impacted if 10 Gigabit Ethernet dedicated mode is not used. Using the 10 Gigabit Ethernet dedicated mode makes all the queues available to the 10 Gigabit Ethernet uplinks. The supervisor-based uplinks on the Catalyst 6500 Series Supervisor Engine 720-10G can be used in dedicated Catalyst 6500 Series DFC mode, whereas on the Catalyst 6500 Series Supervisor Engine 720-3B/BXL they are unavailable.

PFC3C/CXL EtherChannel Enhancements

- New mixed-mode load balancing: MAC address, IP address, TCP/UDP port, and IP address plus TCP/UDP port
- EtherChannel trunk: Layer 3 plus VLAN ID hash load balancing

EtherChannel Load Balancing Modes

EtherChannel load balancing modes have been extended by allowing more information to be included in the hash computation. Besides the existing nine load balancing modes (source, destination, and source XOR destination, each per MAC address, IP address, or TCP/UDP port), support for three new mixed modes has been added in the PFC3C/CXL:
- Source IP address and TCP/UDP port
- Destination IP address and TCP/UDP port
- Source XOR destination IP address and TCP/UDP port

Note: These features are also available on the older platforms (PFC3A/B/BXL).

This means that a mix of Layer 3 and Layer 4 information can now be used to calculate the EtherChannel hash, which better distributes traffic flows.

EtherChannel VLAN ID Hash

Using an EtherChannel trunk to carry multicast traffic for multiple destination VLAN receivers with a Layer 3 hash (based on IP address) would typically bind a multicast flow to one link only and potentially oversubscribe a single link. To overcome such non-optimal situations, the PFC3C/CXL now supports a hash based on Layer 3 and VLAN ID information, thus providing more granular distribution across the links in an EtherChannel trunk bundle.

Note: The VLAN ID hash is not configurable; that is, it cannot be turned on or off. Instead, when a trunk is used on an EtherChannel, the VLAN ID is counted in the hash algorithm.
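A sketch of enabling one of the mixed modes globally, along with the command that verifies the active algorithm (the mode choice here is one example of the three listed above):

    ! Hash on source XOR destination IP address plus Layer 4 port
    Router(config)# port-channel load-balance src-dst-mixed-ip-port
    Router# show etherchannel load-balance

The show etherchannel load-balance command reports the configured algorithm separately for non-IP, IPv4, and IPv6 traffic, which makes it easy to confirm that the mixed mode took effect.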
Examining Cisco Catalyst 6500 Series Switch Configuration

To verify the Cisco Catalyst 6500 Series Switch hardware, use the show module command. The output shows the Cisco Catalyst 6500 Series Switch modules installed, their serial numbers, software versions, and diagnostics results.

Verifying Cisco Catalyst 6500 Series Switch Operation

Examine the system operation with the show environment command. Environmental monitoring of chassis components provides early-warning indications of possible component failures, which ensures safe and reliable system operation and avoids network interruptions. Monitoring these critical system components allows the administrator to identify and rapidly correct hardware-related problems in the system. To check the temperature, environment-related alarms, and fan tray status, the administrator can use the show environment commands.

Determining System Hardware Capacity

To determine the system hardware capacity, the show platform hardware capacity command is used. This command can display the current system utilization of the hardware resources and a list of the currently available hardware capacities, including the following:
- Hardware forwarding table utilization
- Switch fabric utilization
- CPU utilization
- Memory device (flash, DRAM, NVRAM) utilization

The example displays CPU capacity and utilization information for the route processor, the switch processor, and a switching module.
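The figure's output can be summarized with the compressed sketch below; the module type, status, and utilization values are illustrative, and the real output contains many more fields:

    Router# show module
    Mod Ports Card Type                        Model            Status
      5     3 Supervisor Engine 720 (Active)   WS-SUP720-3BXL   Ok

    Router# show platform hardware capacity cpu
    CPU Resources
      CPU utilization: Module   5 seconds   1 minute   5 minutes
             5   RP                    4%         3%          2%
             5   SP                    6%         5%          5%

The cpu keyword restricts the report to processor utilization; omitting it walks through every resource category listed above.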
Virtual Switching System 1440

(The figure contrasts a traditional deployment of two separate Catalyst 6500 chassis with a VSS deployment that presents them as one logical switch.)

The Catalyst 6500 Series Supervisor Engine 720-10G can be used to combine two Catalyst 6500 Series Switches into a single unit using the Virtual Switching System (VSS) functionality. Such a virtual switch provides a unique high-availability solution.

A maximum of two Cisco Catalyst 6500 Series Switches, each with a single supervisor, can be combined into a VSS. Two Catalyst 6500 Series Supervisor Engine 720-10Gs in separate chassis are connected via a Virtual Switch Link (VSL) to form the VSS. The interfaces used for the VSL must be either the 10 Gigabit Ethernet interfaces on the Catalyst 6500 Series Supervisor Engine 720-10G or interfaces on the WS-X6708-10G 8-port 10 Gigabit Ethernet line card.

Note: Since the WS-X6708 has 2:1 oversubscription, only its four 10 Gigabit Ethernet ports in dedicated mode should be used.

A VSS 1440 provides a 1.44-Tb/s system-wide backplane and can host up to 820 1-Gigabit Ethernet and 256 10 Gigabit Ethernet interfaces, and 128 port channels (to be scaled to 576 port channels in newer code).

VSS Benefits

The benefits provided by the VSS system are:
- Extension of the control and management planes across chassis
- Active-active data plane
- Stateful switchovers across chassis
- Single point of management and simplified distribution layer services
- Multichassis EtherChannel (MEC) between the Virtual Switch and all neighbors, which removes the dependency on STP for link recovery
- Eliminates spanning tree
- Eliminates the need for Hot Standby Router Protocol (HSRP), Virtual Router Redundancy Protocol (VRRP), and Gateway Load Balancing Protocol (GLBP)
- Doubles effective bandwidth by utilizing all links
- Reduces the number of Layer 3 routing neighbors

VSS Restrictions

The VSS functionality currently has the following restrictions:
- Two Cisco Catalyst 6500 Series Switches per VSS
- A single supervisor per individual chassis
- Only NAM-1 and NAM-2 service module support
- No MPLS and IPv6

Note: Additional service module, MPLS, and IPv6 support will be added in newer code versions. Consult the Release Notes for Cisco IOS Release 12.2(33)SXH and Later Releases on Cisco.com to see the latest supported hardware in a VSS.

Catalyst 6500 Series Supervisor Engine 720-10G-3C/CXL Line Card Compatibility

The Catalyst 6500 Series Supervisor Engine 720-10G-3C/CXL supports all existing line cards and service modules except:
- WS-X6248-RJ-45
- WS-X6248-TEL
- WS-X6248A-TEL
- WS-X6501-10GEX4
- WS-X6416-GE-MT
- WS-X6316-GE-TX
- WS-X6024-10FL-MT
- WS-X6224-100FX-MT
- WS-X6324-100FX-SM

These line cards are not supported from Cisco IOS Software Release 12.2(33)SXH onwards. Furthermore, if VSS functionality is deployed, currently only the following modules are supported:
- 67xx series line cards
- NAM-1 and NAM-2 service modules
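A minimal sketch of how the VSL is defined on one of the two chassis before conversion; the domain number, port-channel number, and interface are examples only, and the mirror-image configuration (switch 2 with its own port channel) is applied on the peer:

    switch virtual domain 100
     switch 1
    !
    interface Port-channel1
     switch virtual link 1
     no shutdown
    !
    interface TenGigabitEthernet5/4
     channel-group 1 mode on
    !
    ! After both chassis are prepared, convert each one:
    Router# switch convert mode virtual

The conversion renumbers interfaces into chassis/slot/port format and reloads the chassis, after which the two supervisors negotiate active and standby roles across the VSL.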
Catalyst 6500 Series Switch Supervisor Engine 32

This topic describes the Catalyst 6500 Series Switch Supervisor Engine 32 modules.

Catalyst 6500 Series Supervisor Engine 32 Overview

(The figure shows the Supervisor Engine 32 faceplate variants. The Supervisor 32 PISA integrates deep packet inspection, application awareness, security, availability, and manageability services.)

The Catalyst 6500 Series Supervisor Engine 32 is designed primarily for the access layer. The following characteristics apply to the Catalyst 6500 Series Supervisor Engine 32:
- Supports a connection to the 32-Gb/s Catalyst 6500 Series Switch shared bus only, with a maximum performance of 15 mpps
- Does not support DFCs or the Switch Fabric
- Has an integrated MSFC2A or Programmable Intelligent Services Accelerator (PISA), and an integrated PFC3B
- Provides Layer 2 bridging and Layer 2 through 4 services, with Layer 3 routing and optional hardware-accelerated services
- Supported in legacy and E-Series chassis
- Must be placed in a specific slot, which depends on the chassis model; these slots are the same as for the Catalyst 6500 Series Supervisor Engine 720
- Has a 10/100/1000BASE-TX port for management (keep in mind that this port is in-band; it can also be used as a regular switch port)
- External CompactFlash slot
- 2 x Universal Serial Bus (USB) ports (currently disabled)
- RS-232 console port
- Internal 256-MB CompactFlash bootdisk

Note: The Catalyst 6500 Series Supervisor Engine 32 does not provide hardware-based forwarding for Internetwork Packet Exchange (IPX).

Catalyst 6500 Series Supervisor Engine 32 Options

The Catalyst 6500 Series Supervisor Engine 32 is available in four models:
- With eight 1-Gigabit Ethernet ports + one 10/100/1000 port
- With two 10 Gigabit Ethernet ports + one 10/100/1000 port
- PISA with eight 1-Gigabit Ethernet ports + one 10/100/1000 port
- PISA with two 10 Gigabit Ethernet ports + one 10/100/1000 port

Note: An onboard port ASIC is used to drive the front eight 1-Gigabit Ethernet ports or the two 10 Gigabit Ethernet ports.

All of the Gigabit Ethernet fiber ports use SFP optics, and different SFP options are available depending on distance requirements. In a redundant configuration, all ports on both the active and standby supervisors are active. In a fully redundant chassis with two Catalyst 6500 Series Supervisor Engine 32 modules, a total of 18 active Gigabit Ethernet ports are available for use.

The 10 Gigabit Ethernet ports use XENPAK optics, and different XENPAK options are available depending on distance requirements. In a redundant configuration, all ports on both the active and standby supervisors are active. In a fully redundant chassis with two Catalyst 6500 Series Supervisor Engine 32 modules, a total of four active 10 Gigabit Ethernet and two 10/100/1000 ports are available for use.

Integrated MSFC2A and PFC3B

- MSFC2A and PFC3B are standard daughter cards on the Catalyst 6500 Series Supervisor Engine 32.
- The daughter cards are not field-upgradable.

Integrated PFC3B

The integrated PFC3B is a daughter card included on the Catalyst 6500 Series Supervisor Engine 32 that provides Cisco Express Forwarding-based forwarding for IPv4 and IPv6 as well as hardware-based QoS and security capabilities. This PFC3B is the same PFC3B that is found on the Catalyst 6500 Series Supervisor Engine 720-3B.
With the PFC3B, the Catalyst 6500 Series Supervisor Engine 32 can support hardware-based QoS and security ACLs using Layer 2, Layer 3, and Layer 4 classification criteria to secure and prioritize targeted data. Standard PFC3B enhancements allow the Catalyst 6500 Series Supervisor Engine 32 to take advantage of new hardware-accelerated features such as CPU rate limiters, ACL hit counters, port ACLs, and improvements in route and NetFlow capacities.

There are a number of Layer 2 features in the PFC3B that differentiate the Catalyst 6500 Series Supervisor Engine 32 from earlier supervisor models. The Catalyst 6500 Series Supervisor Engine 32 has capacity similar to the Catalyst 6500 Series Supervisor Engine 2 in terms of support for ACLs and MAC addresses. The differences are some of the new hardware features previously found only in the Catalyst 6500 Series Supervisor Engine 720; for example: IPv6, MPLS, GRE, tunneling, Encapsulated Remote Switched Port Analyzer (ERSPAN), and UBRL.

Integrated MSFC2a

The integrated MSFC2a is another daughter card on the Catalyst 6500 Series Supervisor Engine 32. This MSFC option is functionally equivalent to the MSFC2 found on the Catalyst 6500 Series Supervisor Engine 2. The only difference is that the MSFC2a supports up to 1 GB of DRAM, whereas the MSFC2 supports up to 512 MB of DRAM. The RP CPU has 32 MB of bootflash and 1 MB of NVRAM available. A full-duplex, 1-Gb/s in-band connection allows the MSFC2a to communicate with other components on the Supervisor Engine 32 baseboard. The MSFC2a is integrated into the Supervisor Engine 32 to make it a full-fledged Layer 3 switch. The forwarding architecture used by the MSFC2a and PFC3B is Cisco Express Forwarding.

The MSFC2a provides the following features:
- Default card on the Catalyst 6500 Series Supervisor Engine 32
- Similar in function to the MSFC2
- 256-MB DRAM (default)
- 32-MB bootflash
- 1-MB NVRAM

Catalyst 6500 Series Supervisor Engine 32 PISA Overview

(The figure highlights the PISA value propositions: multigigabit performance, flexible packet matching, rapid security protection, a programmable architecture, and full integration.)

The Catalyst 6500 Series Supervisor Engine 32 PISA is available in two uplink options:
- 8 x 1-Gigabit Ethernet SFP + 1 x 10/100/1000
- 2 x 10 Gigabit Ethernet XENPAK + 1 x 10/100/1000

Note: The Catalyst 6500 Series Supervisor Engine 32 PISA would typically be deployed at the WAN edge.

PISA Overview

Apart from the PFC3B, the Catalyst 6500 Series Supervisor Engine 32 PISA also has a PISA engine, which incorporates the generic MSFC control and management plane functionality as well as hardware acceleration of Network-Based Application Recognition (NBAR) and Flexible Packet Matching (FPM) services. PISA is a daughter card that replaces the MSFC2a to accelerate IP services such as NBAR and FPM. PISA is a superset of the MSFC2a. It contains an RP as well as a network processor (NP).
The key characteristics are:
- Route processor: responsible for control plane processing, software punt processing, and NP programming:
  - 1-GB DRAM
  - 2-MB NVRAM
  - 256-MB bootflash
- Network processor: accelerates NBAR and FPM at up to 2 Gb/s:
  - 3 x 256-MB DRAM
  - 4 x 8-MB SRAM
  - Crypto processor with 128-MB DRAM (not currently used)
- Classification field-programmable gate array (FPGA): redirects packets to the RP or NP

Key features include:
- Stateful packet inspection for more than 90 protocols and applications via protocol definition language modules
- New protocol definition language modules can be uploaded as they are created (Cisco- or user-created)
- NBAR is stateful packet inspection and thus needs to see both directions of a flow on the interface where it is configured
- FPM is not stateful and needs to see traffic only in the direction configured on the interface
- NBAR/FPM configured on a Cisco Catalyst 6500 Series Switch WAN port is done on the FlexWAN or SIP-200 adapter, not by PISA
- Not available for MPLS or VPN tunnel interfaces

Catalyst 6500 Series Supervisor Engine 32 PISA Packet Manipulation

(The figure shows example NBAR actions, such as rate-limiting Skype, blocking AOL IM, prioritizing H.323 and SIP, and logging, alongside an FPM match-pattern builder using and/or/not logic.)

Stateful Packet Inspection with NBAR

Stateful packet inspection with NBAR identifies over 100 applications and protocols, such as peer-to-peer applications (eMule, BitTorrent, and Skype), protocol traffic (RTSP, SIP, L2TP, MPLS to IP), and corporate applications (Citrix ICA, SAP). NBAR supports dynamic protocol definition language modules to upload identifiers for new protocols, user-defined applications, and sub-port classification.

Flexible Packet Matching

FPM provides a means for the advanced user to implement network-based blocking of known attack vectors. FPM is the next-generation ACL pattern-matching tool for more thorough and customized packet filters. The technology provides the ability to match on arbitrary bits of a packet at arbitrary depth in the packet header and payload. It removes the constraints to specific fields that previously limited packet inspection. FPM provides the means to configure match criteria for any or all fields in the header of a packet, as well as bit patterns within the payload of the packet. This allows the characteristics of an attack (source port, packet size, byte string) to be uniquely matched, and a designated action to be taken.

FPM provides a flexible Layer 2 through 7 stateless classification mechanism. The user can specify classification criteria based on any protocol and any field of the protocol stack of the traffic. Based on the classification result, actions such as "drop" or "log" can be taken. FPM matches Layer 2 through 7 patterns in packets. Inspection is performed 8 KB deep into the packet, which covers every bit of all but the largest jumbo frames (9 KB), and even those can almost always be identified within the first 8 KB.
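A brief sketch of how NBAR classification might be applied on a PISA system; the interface, class and policy names, and protocol choices are illustrative, and the available match protocol keywords depend on which protocol definition language modules are loaded:

    ! Discover what is running on the link
    interface GigabitEthernet2/1
     ip nbar protocol-discovery
    !
    ! Classify and drop peer-to-peer traffic
    class-map match-any P2P
     match protocol bittorrent
     match protocol edonkey
    policy-map EDGE-IN
     class P2P
      drop
    interface GigabitEthernet2/1
     service-policy input EDGE-IN

The show ip nbar protocol-discovery command then reports per-protocol packet, byte, and bit-rate statistics in both directions for the interface.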
FPM policy can be defined in two ways: via the command-line interface (CLI), or using a Traffic Classification Definition File (TCDF), which is an XML file. An FPM TCDF describes the filters for your router or switch and is used to create the necessary FPM policy CLI configuration for blocking the attack or application.

Note: Like ACLs, FPM is also a stateless solution.

Protocol Discovery

Protocol discovery determines what applications are running on the network and provides real-time statistics per interface, per protocol, and in both directions, expressed as bit rate (b/s), packet count, or byte count.

Protocol Definition Language Module

A protocol definition language module is an identifier or file that contains the signature of an application. Protocol definition language module identifiers can be created by Cisco or by the customer. New protocol definition language module files are loaded to flash and added without restarting the switch or reloading Cisco IOS Software.

Implementation Considerations

Supported features:
- NBAR and FPM are accelerated in hardware
- PISA accelerates Layer 3 IPv4 unicast packets only
- PISA does not accelerate Layer 2 packets or multicast packets (although these are planned for support via future software updates)
- Microflow policing does not work with PISA
- Layer 2 NetFlow Data Export (NDE) is not supported
- Supported interfaces:
  - Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet interfaces, port channels, VLANs, trunks, and subinterfaces (routed ports and SVIs only)
  - NBAR/FPM are not accelerated by PISA when configured on WAN interfaces; NBAR can, however, be accelerated on WAN interfaces by the Enhanced FlexWAN and the SIP-200
  - Accelerated features cannot be applied on MPLS or VPN tunnel interfaces

Catalyst 6500 Series Supervisor Engine 32 Line Card Compatibility

(The figure summarizes Supervisor Engine 32 compatibility across the classic, CEF256, dCEF256, CEF720, dCEF720, SFM/SFM2, service module, DFC, OSM, and FlexWAN card families.)

The Catalyst 6500 Series Supervisor Engine 32 is a classic module, meaning that it uses a connection to the classic 32-Gb/s bus to communicate with the other line cards present in the chassis. Unlike some supervisors, the Supervisor Engine 32 has no built-in switch fabric and cannot take advantage of a separate Switch Fabric Module. This mode of operation defines the type of line cards that can work with this supervisor: any line card that does not support data transfer over the classic bus cannot interoperate with the Catalyst 6500 Series Supervisor Engine 32.

Note: Not all service and SIP modules are supported with the Catalyst 6500 Series Supervisor Engine 32. Refer to the Release Notes for support.

Catalyst 6500 Series Switch Supervisor Engine Operating System

This topic describes the key features of the operating systems available for the supervisor modules.

File System

The Cisco IOS File System (Cisco IFS) provides an interface to all file systems. The Catalyst 6500 Series Supervisor Engine 720 supports several file systems; this topic describes the purpose and use of the major file systems available to the network administrator. Use the show file systems command to view a list of the different file systems that are available on the Catalyst 6500 Series Supervisor Engine 720. Each file system is assigned a specific index prefix, and is referenced and accessed by that prefix.
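A sketch of how the prefixes are used in practice; the TFTP server address is an example, and the image name follows the native naming convention discussed in the next section:

    Router# dir disk0:
    Router# copy tftp://10.1.1.100/s72033-ipservices_wan-mz.122-33.SXH.bin disk0:
    Router# dir sup-bootflash:

Any command that takes a file argument, such as dir, copy, delete, or verify, accepts these prefixes.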
Catalyst 6500 Series Switch Operating System Overview

- Native mode: Cisco IOS Software runs on both the switch processor (SP) and route processor (RP). A single Cisco IOS image runs the SP and RP, and new feature-set enhancements and support, including Cisco IOS Software Modularity, are delivered in Cisco IOS Software.
- Hybrid mode (EOL July 31, 2007; EOS January 29, 2008): the Catalyst operating system is on the SP and Cisco IOS is on the RP. The Catalyst operating system runs the SP, and Cisco IOS Software (for example, c6msfc3-psv-mz.122-17a.SX1) runs the RP.

Native Mode

In native mode, Cisco IOS Software is installed on both the switch processor and the route processor. A single Cisco IOS image is used as the system software to run both the SP and RP. This single image provides configuration and command-line support for Layer 2, 3, and 4 functionality on the switch.

Cisco IOS Software has historically been a Layer 3 operating system on routing platforms; when installed on the supervisor engine of a Cisco Catalyst 6500 Series Switch, it has expanded capabilities that include true Layer 2 functionality. Cisco IOS mode requires that both CPUs run the full Cisco IOS Software. There is no hidden Catalyst operating system software running in the switch, and the executable images used by both CPUs run the complete Cisco IOS kernel. When both processors run Cisco IOS Software, overall system performance is enhanced. However, should the MSFC fail, all Layer 2, 3, and 4 functionality is lost.

Note: Distributed forwarding via DFCs is supported only in native mode.

Hybrid Mode

In hybrid mode, the Catalyst operating system is located on the SP and Cisco IOS Software is on the RP. A Catalyst operating system image is used as the system software to run the SP on Cisco Catalyst 6500 Series Switches, and a separate Cisco IOS image is used to run the RP. A switch with an older supervisor might run the Catalyst operating system only on the SP; it is then a Layer 2 forwarding device supporting Layer 2, 3, and 4 functionality for QoS, security, multicast, and network management on the PFC, but it does not have routing capabilities. The combination of the Catalyst operating system on the SP and Cisco IOS Software on the RP is therefore referred to as hybrid mode. The two operating systems work together to provide complete Layer 2, 3, and 4 system functionality.

Note: This software option is being deprecated, so any switch running in this mode should be converted to native mode.

Catalyst Operating System Software Mode

In Catalyst operating system software mode, the switch operates on the SP and the PFC to provide Layer 2 forwarding and Layer 3 and 4 services. If Layer 3 forwarding and routing capabilities are required, the MSFC daughter card must be present and run Cisco IOS Software (as part of the hybrid operating system) on the RP. Therefore, should the MSFC fail in a hybrid configuration, Layer 2 and PFC functionality are not affected and remain operational. In a real deployment, use the high-availability features and redundant configurations for Catalyst switch chassis and supervisors to maintain complete operational support if a hardware failure occurs.

The RP and the SP both have their own set of bootflash.
The SP bootflash is used to store the boot image and is referred to as "sup-bootflash" in native mode; the RP bootflash is referred to as "bootflash" in native mode.

Catalyst 6500 Series Supervisor Engine 720 and 32 Operating System Mode

All versions of the Catalyst 6500 Series Supervisor Engine 720 and the Catalyst 6500 Series Supervisor Engine 32 support native and hybrid modes of operation. It is important to differentiate between the native version of Cisco IOS Software and a Cisco IOS Software image that works with a Catalyst operating system software image. The filename s72033-ipservices_wan-mz.122-33.SXH is a Cisco IOS Software name identifying a native Cisco IOS version. The filename c6msfc3-psv-mz.122-17a.SX1 is a Cisco IOS Software name identifying a hybrid Cisco IOS version. A hybrid Cisco IOS Software version includes "msfc3" in the image name.

- Native model: Cisco IOS Software requires that a single image be present on a device local to the supervisor, because it is a bundled image for two processors and the SP boots first. The image can reside on sup-bootflash or on a flash card (slot0, disk0, or disk1); the image cannot reside on the MSFC bootflash. Cisco IOS system files start with "s720vw", where "v" is the MSFC version and "w" is the PFC version, or with "s32vw", where "v" is the MSFC version and "w" is the PFC version. Examples are s72033-ipservices_wan-mz.122-33.SXH (a Supervisor 720 image) and s3223-ipbase-mz.122-33.SXH (a Supervisor 32 image).
- Hybrid model: two separate image files are managed by the two different operating systems. The Catalyst operating system software images are stored on sup-bootflash or flash cards (PCMCIA for the Catalyst 6500 Series Supervisor Engine 1A and Supervisor Engine 2, and CompactFlash for the Catalyst 6500 Series Supervisor Engine 720 and Supervisor Engine 32). The Cisco IOS image for the MSFC is stored on the MSFC bootflash. The images can be moved between the active and standby supervisors using the copy command and uploaded to the switch via the TFTP application.

Note: Do not try to use a hybrid Cisco IOS image as a native Cisco IOS image.

Catalyst 6500 Series Supervisor Engine 720-10G and 32-PISA

The Catalyst 6500 Series Supervisor Engine 720-10G and Catalyst 6500 Series Supervisor Engine 32-PISA support only native mode operation.

Determining Cisco IOS Version

Native Cisco IOS mode:

    switch# show version
    Cisco IOS Software, s72033_rp Software (s72033_rp-IPSERVICESK9_WAN-M), Version 12.2(33)SXH, RELEASE SOFTWARE (fc5)

Hybrid Catalyst operating system and Cisco IOS mode:

    Router# show version
    Cisco Internetwork Operating System Software
    IOS (tm) MSFC2 Software (C6MSFC2-PSV-M), Version 12.1(19)E, EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)

To determine which operating system is running on the Cisco Catalyst 6500 Series Switch, enter show version from the Cisco IOS command line. To access Cisco IOS Layer 3 functionality in a hybrid system, enter session 15 or switch console from the command line; the console is then turned over to the RP. Take care at this point, because the native Cisco IOS and the hybrid software systems look similar. You can determine which software is running on the chassis by viewing the interfaces. For example, entering show ip interface brief on the hybrid software shows VLANs, whereas the same command on native Cisco IOS Software displays physical interfaces (for example, Gigabit Ethernet 1/1) as well as VLAN interfaces.
Cisco IOS Feature Sets

Cisco IOS Software supports a variety of platforms and thus comes in various feature sets. Five different feature sets exist for Cisco Catalyst 6500 Series Switches:
- IP Base
- IP Services
- Advanced IP Services
- Enterprise Services
- Advanced Enterprise Services

The IP Base package is the most basic package offered across the Layer 3 Cisco Catalyst switches. The IP Services package is an advanced Cisco IOS Software feature set that contains full IP routing capabilities. Three additional premium packages offer new Cisco IOS Software feature combinations that address more complex network requirements. All features merge in the most premium package, Advanced Enterprise Services, which integrates support for all routing protocols, including security with VPN.

Universal Boot Loader

Upon installation of a second supervisor:
- The UBL communicates with the supervisor already present in the chassis.
- It downloads the running image from the active supervisor.
- It reboots the new supervisor with the newly downloaded image.

Cisco IOS Image Synchronization

A new supervisor engine comes with a preloaded Cisco IOS image; this also applies to RMA-received spare supervisors. When this new supervisor is installed into a chassis with an existing supervisor for redundancy purposes, the images on the supervisor modules have to be synchronized. The Universal Boot Loader (UBL) is a minimal network-aware image that can download and install either a Cisco IOS image or a Catalyst operating system image from a running active supervisor engine in the same chassis. The UBL automatically synchronizes the new supervisor's Cisco IOS image with the image running on the existing supervisor by:
- Communicating with the supervisor present in the chassis
- Downloading the running Cisco IOS image from the active supervisor to the new standby supervisor
- Rebooting the new supervisor with the newly downloaded image

This feature requires the active supervisor engine to be running either Cisco IOS Software Release 12.2SXH or later, or Catalyst operating system software Release 8.6(1) or later.

Note: Image synchronization also works for hybrid (Catalyst operating system) images. Since hybrid mode is EOL as of July 31, 2007, it is recommended to migrate to native IOS mode.

Note: When a standby supervisor engine copies a Catalyst operating system image using UBL, the downloaded image will not include the MSFC Cisco IOS Software. A Cisco IOS Software image that is downloaded using UBL will include the MSFC software.
Summary

This topic summarizes the key points that were discussed in this lesson:
- The Cisco Catalyst 6500 Series Switch supports a 32-Gb/s shared bus and 256-Gb/s and 720-Gb/s switch fabrics.
- System bandwidth is measured in gigabits per second, whereas forwarding performance is measured in millions of packets per second.
- The position of the supervisor module depends on the Catalyst 6500 Series Switch chassis used.
- The supported forwarding architectures on the Catalyst 6500 Series Switch are classic, CEF, and dCEF.
- The Catalyst 6500 Series Supervisor Engine 720 has a 720-Gb/s integrated switch fabric and supports all available line cards.
- The MSFC3 performs control plane functionality.
- The PFC3 is a multilayer switching engine.
- Different card types can coexist in a single chassis.
- The Catalyst 6500 Supervisor Engine 720-10GE-3C/CXL enables VSS functionality.
- VSS functionality combines two individual Catalyst 6500 Series Switch chassis into one virtual switch.
- The Catalyst 6500 Supervisor Engines 32 and 32-PISA are designed for the access layer.
- The Catalyst 6500 Supervisor Engines 32 and 32-PISA support only shared-bus-enabled cards.
- The Catalyst 6500 Series Switch can run in native Cisco IOS mode or in hybrid Catalyst operating system plus Cisco IOS mode. Hybrid mode is EOL.
- The Catalyst 6500 Series Supervisor Engines 720-10G and 32-PISA require native IOS mode.

Lesson 4: Describing the Cisco Catalyst 6500 Series Switch Module and Power Supply Options

Overview

This lesson describes the line cards, service modules, and power supplies used to provision a Cisco Catalyst 6500 Series Switch.

Objectives

Upon completing this lesson, you will be able to identify and describe the Catalyst 6500 Series Switch line cards, the line card architecture and deployment considerations, service modules, and power supply options. This ability includes being able to meet these objectives:
- List the types of WAN and LAN line cards that are offered for the Catalyst 6500 Series Switch
- Describe the basic architectures of the Catalyst 6500 Series Switch line cards
- Describe the packet flow through each of the Catalyst 6500 Series Switch line card architectures
- Explain key design considerations for deploying line cards in a Catalyst 6500 Series Switch
- Explain considerations for line card interoperability
- Describe the power supply options and considerations

Cisco Catalyst 6500 Series Switch Line Cards Overview

This topic lists the types of WAN and LAN line cards that are offered for the Catalyst 6500 Series Switch.

Ethernet and WAN Line Cards

(The figure shows a CEF256 line card layout: a fabric interface ASIC, a 32-Gb/s local bus, and four port ASICs with 512-KB buffers serving ports 1-4, 5-8, 9-12, and 13-16.)

CEF256 fabric-enabled line cards have two connections: a single 8-Gb/s channel that can connect to the crossbar switch fabric, and a 32-Gb/s shared bus connection. The figure shows that these line cards have a 32-Gb/s local switching bus on the line card itself. The local switching bus is similar in function and operation to the shared 32-Gb/s bus that connects to all shared-bus-capable line cards in a Catalyst 6500 Series Switch chassis. The local switched bus on the line card is used for local switching. When a DFC is present to determine the forwarding destination, the local bus routes a locally switched packet and avoids transmitting the packet over the switch fabric. This process reduces the overall latency of switching the packet.

The CEF256 line card architecture includes a fabric interface ASIC, which is used as the interface between the ports on the line card and the switch fabric, and a number of port ASICs. On the 16-port Gigabit Ethernet line card, for example, there are four port ASICs, each of which controls four ports. Each port ASIC maintains a per-port buffer for queuing.
When a switch fabric is present in the system, a CEF256 line card uses the 32-Gb/s connection only to send or receive control information to and from the supervisor engine. The general rule for line cards in the Cisco Catalyst 6500 Series Switch is that if there is a switch fabric present, the line cards use their switch fabric channels for data transmission. When a CEF256 line card is equipped with a DFC or DFC3, the 32-Gb/s bus is never used for data communication. Knowing these facts will help when troubleshooting traffic flow issues.

dCEF256 Line Card Architecture

* Two 8-Gb/s connections to the switch fabric only.

(The figure shows a 16-port Gigabit Ethernet dCEF256 line card with an integrated DFC and port ASICs serving ports 1-4, 5-8, 9-12, and 13-16, connected to the crossbar switch fabric.)

The dCEF256 line cards have two local switching buses and two 8-Gb/s connections to the switch fabric crossbar. Each of the 32-Gb/s shared buses serves a block of ports that consists of one-half of the ports on the line card. Packets switched between each block of ports are locally switched over a 32-Gb/s bus and are not transmitted outside the line card. Packets that are switched from one block of ports to the other block of ports on the same line card are switched through the crossbar.

In the dCEF256 line card architecture, two fabric ASICs are used as the interfaces between the ports on the line card and the switch fabric. The dCEF256 line cards also have an integrated DFC or DFC3 that makes local forwarding decisions. The maximum forwarding rate for these line cards is up to 24 mpps per line card.

The dCEF256 architecture has no connection to the 32-Gb/s chassis shared bus. The dCEF256 requires that the system be running in Cisco IOS native mode and that a switch fabric is present. The dCEF256 cards are compatible with the Catalyst 6500 Series Supervisor Engine 2 with Switch Fabric Module (SFM) and SFM2, and with the Catalyst 6500 Series Supervisor Engine 720.

The WS-X6816-GBIC line card is an example of a dCEF256 line card that supports dual fabric connections and implements two local 32-Gb/s shared buses.

Note   All PFCs and DFCs must be the same version. A customer with a WS-X6816-GBIC line card in a system that they want to upgrade from a Catalyst 6500 Series Supervisor Engine 2 with SFM or SFM2 to a Catalyst 6500 Series Supervisor Engine 720 will also have to upgrade the DFC to a DFC3 (WS-F6K-DFC3).

CEF720 Line Card Architecture

(The figure shows a CEF720 line card with ports 1-12 and 13-24, a 32-Gb/s shared bus connection used for control traffic only, and connections to the crossbar switch fabric.)

The Catalyst 6500 Series Supervisor Engine 720 includes a new series of line cards that use a new architecture called the CEF720 architecture. CEF720 line cards provide the highest port density available and support two different forwarding options: the cost-effective centralized Cisco Express Forwarding architecture and the high-performance distributed Cisco Express Forwarding architecture. These line cards are designed to take advantage of the architecture extensions of the Catalyst 6500 Series Supervisor Engine 720 and the enhanced 720-Gb/s crossbar switch fabric. CEF720 line cards are designed to work only with the Catalyst 6500 Series Supervisor Engine 720 and are not compatible with the previous generations of supervisor engines.
CEF720 line cards have one or two fabric ASICs that are connected to both the 32-Gb/s shared bus and to the 720-Gb/s crossbar. Line cards with a single fabric ASIC have a single 20-Gb/s channel, and line cards with two fabric ASICs have dual 20-Gb/s channels. The figure shows a dual-channel line card.

Unlike a CEF256 card, which can use either the 32-Gb/s shared bus or the 8-Gb/s fabric channel for data transmission, the CEF720 line cards never use the 32-Gb/s shared bus for data transmission. The general rule for line cards in the Cisco Catalyst 6500 Series Switch is that if there is a switch fabric present, the line cards use their switch fabric channels for data transmission. Because a CEF720 line card requires a Catalyst 6500 Series Supervisor Engine 720, which has an integrated switch fabric, there is always a switch fabric in the chassis. Therefore, no data traffic is ever sent over the 32-Gb/s bus by a CEF720 line card. These line cards use this connection only for transmission and reception of control information to and from the Catalyst 6500 Series Supervisor Engine 720 when using centralized forwarding. A CEF720 line card can be equipped with an optional DFC3, which never uses the 32-Gb/s shared bus. Knowing these facts will help when troubleshooting traffic flow issues.

The base line card configuration uses the centralized Cisco Express Forwarding of the Catalyst 6500 Series Supervisor Engine 720-based PFC3 for all forwarding decisions. This is accomplished with a centralized forwarding card (CFC). When ordering a new CEF720 line card, a CFC is provided by default if no distributed Cisco Express Forwarding daughter card is chosen. The line card must have either a CFC or a distributed Cisco Express Forwarding daughter card in order to operate. The distributed Cisco Express Forwarding capability can be added to the CEF720 line card via the DFC3. The performance of these line cards is as follows:

■ Up to 30 mpps per system for a centralized forwarding configuration
■ Up to 24 mpps per line card for single fabric-channel line cards with DFC3
■ Up to 48 mpps per line card for dual fabric-channel line cards with DFC3

dCEF720 Line Card Architecture

* Two 20-Gb/s connections to the switch fabric only.

(The figure shows a dCEF720 line card with ports 1-12 and 13-24 and two 20-Gb/s connections to the crossbar switch fabric.)

The figure shows that the dCEF720 line cards incorporate an integrated DFC3 between the two fabric ASICs. These line cards are designed to provide optimum performance and maximize internal resources such as buffering and queues. The dCEF720 line cards support dual 20-Gb/s channels into the crossbar. The dCEF720 series line cards are designed to be the highest-performing line cards in terms of switching capacity, port buffering, and QoS. The performance of these line cards can be a sustained 48 mpps per card. As with CEF720 line cards, dCEF720 line cards require a Catalyst 6500 Series Supervisor Engine 720 to operate, and dCEF720 line cards have no connectivity to the 32-Gb/s shared bus.

Cisco Catalyst 6500 Series Switch Line Card Design Considerations

This topic describes the Catalyst 6500 Series Switch line card design considerations.
Line Card Feature Comparison

Feature | Classic | CEF256 | CEF720
System bandwidth | 32 Gb/s shared bus | 256 Gb/s with fabric | 720 Gb/s
Maximum forwarding rate | Up to 15 mpps per system | Up to 30 mpps per system; with DFC or DFC3 upgrade, up to 15 mpps sustained per slot | Up to 30 mpps per system; with DFC3, up to 48 mpps sustained per slot
Forwarding engine architecture | Centralized CEF engine (in PFC) | Centralized CEF; upgradeable to DFC (the 6816 comes with a DFC) | Centralized CEF; upgradeable to DFC3 (the 6708 and 6716 come with a DFC3)
Supported supervisors | All supervisor engines | All supervisor engines | Supervisor 720 only

The table summarizes the key features of the classic, CEF256, and CEF720 line cards in terms of performance, forwarding engine architecture, and supported supervisor engines.

Note   On a Cisco Catalyst 6513 Switch, distributed Cisco Express Forwarding and CEF720 (except the 6724) cards must be installed only in the bottom five slots. These slots have dual fabric channel connections.

Feature | Classic | CEF256 | CEF720
Fabric connections | None, only the 32-Gb/s shared bus | Single 8-Gb/s channel (a dCEF256 card has dual 8-Gb/s channels) | Single or dual 20-Gb/s channels
Chassis or slot support requirements | Any slot in any chassis | Any slot in any chassis | 6724: no slot restrictions, but no 6503 support. All others: any slot in the Catalyst 6506, 6509, and 6509-NEB-A; slots 9-13 only on the Catalyst 6513; Catalyst 6503 not supported

This table continues the comparison on the previous page, summarizing the key features of the classic, CEF256, and CEF720 line cards in terms of fabric connections and slot requirements.

Cisco Catalyst 6500 Series Switch Line Card Options

Interface types: 10BASE-FL; 10 and 100M TX; 100M FX; 10, 100, and 1000M TX; 1000M FX; 1000M GBIC; 1000M SFP; 10GE XENPAK; FlexWAN; Optical Services Module

The table shows the types of forwarding architectures that are available for the various line card interfaces. The following slides describe the line cards that are typically used in enterprise and service provider deployments today.

Note   Different service modules require different supervisors. For example, the Cisco Application Control Engine (ACE) module requires the Catalyst 6500 Series Supervisor 720.

Classic-to-Classic Centralized Forwarding

(The figure shows a Supervisor Engine 720 and two classic line cards; the source host is in the green VLAN, labeled "S," and the destination host is in the red VLAN, labeled "D." Solid arrows represent the entire packet and dashed arrows the packet header.)

The figure shows a Catalyst 6500 Series Supervisor Engine 720 and two classic line cards (for example, WS-X6148-GE-TX or WS-X6416-GBIC), although this supervisor engine could be a Catalyst 6500 Series Supervisor Engine 2 or a Catalyst 6500 Series Supervisor Engine 32. Traffic originates on the host in the green VLAN (labeled "S" in the figure) and is destined for a host in the red VLAN (labeled "D"). The Catalyst 6500 Series Supervisor Engine 720 must forward from the green to the red VLAN in the hardware.

There are four steps taken to achieve this communication:

Step 1   Classic module A receives the packet on the port ASIC and floods that packet onto the shared bus. In this way, all the devices on the bus see this packet, including the forwarding engine and the port ASICs on classic module B.

Step 2   The forwarding engine takes the frame and packet headers and performs the lookup. At the MAC layer, this packet is going to the default gateway.
This packet is destined to the route processor (RP) MAC, so a Layer 3 lookup is performed. While the headers are passed through the forwarding engine, additional processes such as QoS, access control list (ACL) enforcement, and NetFlow statistics gathering take place in parallel. In this case, the Catalyst 6500 Series Supervisor Engine 720 does a destination IP lookup. It might do a source IP lookup if the Unicast Reverse Path Forwarding (uRPF) check is configured, it will check the packet against any ACL or QoS policies that are configured, and it will ultimately arrive at a result. In the figure, the packet is forwarded into the red VLAN.

Step 3   The forwarding engine sends out a lookup result. The result is flooded onto the bus, and the result is seen by all components attached to that bus. Now all components have a copy of the original packet and all components have a copy of the result. In the example, the port ASIC on module B, which contains the red VLAN, is the only port that does anything with the packet. This port received the entire packet when the source port ASIC flooded the packet onto the bus. Based on the result received, all these devices must determine what to do with the packet.

Step 4   Based on the lookup result, the destination port ASIC with the red VLAN rewrites the original packet, assigning new MAC addresses, decrementing the Time to Live (TTL) value, and possibly assigning a new VLAN tag. The port then forwards the new packet from the interface to the destination host in the red VLAN. All other devices on the bus discard the packet and the result.

CEF256-to-CEF256 Centralized Forwarding

(The figure shows a Supervisor Engine 720 and two CEF256 line cards; in this centralized architecture, the packet headers travel over the shared bus while the data travels over the crossbar switch fabric.)

The figure shows a Catalyst 6500 Series Supervisor Engine 720 and two CEF256 line cards (for example, WS-X6548-RJ-45 or WS-X6516-GBIC), although this supervisor engine could be a Catalyst 6500 Series Supervisor Engine 2 with a Switch Fabric Module or Switch Fabric Module 2. Traffic originates on the host in the green VLAN (labeled "S" in the figure) and is destined for a host in the red VLAN (labeled "D"). The Catalyst 6500 Series Supervisor Engine 720 will have to forward from the green to the red VLAN in the hardware. There are six steps in this process:

Step 1   The packet enters the system on the CEF256 module A port ASIC and is flooded onto the local line card bus. All components on that bus see this packet.

Step 2   The fabric interface on module A receives the packet, looks at the packet, and floods only the packet header onto the shared bus. Recall that in the classic system, the entire packet is flooded onto the bus. Here, regardless of whether you have a 64-byte packet or a 9-KB frame, only packet headers are placed on the bus. The headers are seen by everyone on the bus. In this scenario, the fabric interface on CEF256 module B would see a copy, as would any other fabric interfaces in the system.

Step 3   The Supervisor Engine 720 completes the Forwarding Information Base (FIB), ACL, and QoS lookups. The forwarding engine then floods the result onto the bus. In this scenario, just this fabric interface will do something with that result. Other devices on the bus flush that packet.

Step 4   The fabric interface sends the result onto the local bus to inform the port ASICs.
The port ASICs stored the frame when it first entered the system. Therefore, the port ASICs can dismiss the frame, because the result will be transmitted over the fabric.

Step 5   The fabric interface receives the result and sends the entire packet over the fabric to CEF256 module B.

Step 6   After module B has received the packet, the module floods the entire packet and result information onto its local bus. After the port ASICs receive the information and determine who needs a copy of it, the egress port ASIC rewrites the packet and sends it out the interface to the destination.

Note   CEF720-to-CEF720 forwarding works in the same way that CEF256-to-CEF256 forwarding works, except that the CEF720 line card does not have a local bus on the line card itself.

(d)CEF720-to-(d)CEF720 Distributed Forwarding

(The figure shows two CEF720 line cards with DFC3s; the source is in the green VLAN, the destination is in the red VLAN, and the entire packet travels over the crossbar switch fabric.)

This figure shows a Catalyst 6500 Series Supervisor Engine 720 (a Catalyst 6500 Series Supervisor Engine 2 with a Switch Fabric Module or Switch Fabric Module 2 plus CEF256 and DFC would be the same) and two CEF720s with DFC3s installed (for example, WS-X6748-GE-TX plus DFC3B or WS-X6704-10GE plus DFC3B). Traffic originates on the host in the green VLAN (labeled "S" in the figure) and is destined for a host in the red VLAN (labeled "D"). The Supervisor Engine 720 must forward from the green to the red VLAN in the hardware. There are five steps in this process:

Step 1   The packet enters the system on the CEF720 + DFC3 module A port ASIC and is sent to the fabric interface.

Step 2   The packet headers are sent into the local DFC3 for the lookup. Remember that this DFC3 has the same information as the PFC3 and can therefore make the decision at the local line-card level. The DFC3 also applies any QoS, ACL, or NetFlow policies.

Step 3   The result of the lookup is sent back to the fabric interface.

Step 4   The packet is transmitted over the fabric to the fabric interface on CEF720 + DFC3 module B.

Step 5   The destination fabric interface sends the packet to the egress port ASIC, which rewrites the packet and sends it out the interface.

Note   The central supervisor engine is not involved in the forwarding process in this packet flow. This packet flow is the same for dCEF256, dCEF720, or CEF256 plus DFC or DFC3.

Examining Module Types

* Use the show platform hardware capacity system command to display installed module information.

(The figure shows sample output of the show platform hardware capacity system command, listing the system resources, the PFC operating mode, the supervisor redundancy mode, and the switching resources per module.)

To examine the information about the installed modules in the chassis, the administrator can use the show platform hardware capacity system command. The command displays the following:

■ The list of installed modules with their part numbers
■ The switching architecture of each module (CEF256, CEF720, classic)
■ The PFC operating mode
■ The forwarding architecture of each module
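The following is an illustrative sketch of that output; the exact fields and values depend on the installed hardware and software release, and the slot and part numbers shown here are examples only:

6500# show platform hardware capacity system
System Resources
  PFC operating mode: PFC3B
  Supervisor redundancy mode: administratively sso, operationally sso
  Switching resources: Module   Part number        Series       CEF mode
                            5   WS-SUP720-3B       supervisor   CEF
                            9   WS-X6748-GE-TX     CEF720       CEF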
Examining Module Information

* Examine the individual module information with the show module [slot] command.

(The figure shows sample output of the show module command.)

The individual module information can also be examined with the show module command, which, among other things, displays the hardware, firmware, and software versions and the type of forwarding card on the module:

■ CFC: Centralized forwarding, which utilizes the PFC on the supervisor
■ DFC: Distributed forwarding, utilizing the DFC on the module

Examining Interface Information

(The figure shows sample output of the show interface gigabitethernet 9/1 capabilities command, listing the port model, its 10/100/1000BASE-T type, the supported speed and duplex settings, flow control options, and SPAN source/destination capability.)

To examine the capabilities of the interfaces on individual modules, use the show interface capabilities command.

Oversubscription Considerations

* Applications require different oversubscription ratios.
* Analyze traffic patterns for oversubscription requirements:
– Peak traffic flows per application
– Lateral vs. vertical traffic flows
* 10 Gigabit Ethernet uplinks are often oversubscribed 2:1 in a single-supervisor configuration.

(The figure shows an access layer of Catalyst 6500 Series Switches with 10 Gigabit Ethernet uplinks and an HPC cluster, with a total core-edge oversubscription of 10:1.)

Based on the traffic patterns and number of users of enterprise applications, packets are generated at an average frequency and in an average size. Interactive applications such as conferencing tend to generate high packet rates with small packet sizes. For this combination of rate and size, the packet-per-second limit of the Layer 3 switches is more critical than the throughput in terms of bandwidth. Applications such as file repositories move large amounts of data and transmit a high percentage of full-length packets. For this rate and size combination, uplink bandwidth and oversubscription ratios become key factors in the overall design. Actual switching capacities and bandwidths vary based on the applications in use.

Gigabit Ethernet-to-desktop deployments have grown to several million ports. This broad adoption has significantly increased the oversubscription ratios of the rest of the network. 10 Gigabit Ethernet can help bring these oversubscription ratios back in line with network design best practices. Also, server adapter and PCI bus advancements have enabled servers to generate more than 7 Gb/s of traffic, which increases the demand for 10 Gigabit Ethernet connectivity to servers. New applications are accelerating the need for 10 Gigabit Ethernet performance throughout the campus, within a data center, and between data centers.

As Gigabit Ethernet services are implemented in desktop systems, the oversubscription ratios have expanded to 48:1 or 96:1 even when the wiring closet uplinks have been increased to two or four Gigabit Ethernet channels. This ratio is far over the standard target of 20:1. Deploying 10 Gigabit Ethernet uplinks with switching solutions that are now available can help bring the wiring closet oversubscription ratios back in line with network design best practices and scale bandwidth capacity for future requirements.
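As a rough worked example of these ratios (the port counts are illustrative only): a wiring closet switch with 48 Gigabit Ethernet access ports and two Gigabit Ethernet uplinks is oversubscribed 48:2, or 24:1, which already exceeds the 20:1 design target. Replacing the two Gigabit Ethernet uplinks with two 10 Gigabit Ethernet uplinks changes the ratio to 48:20, or 2.4:1, leaving ample headroom for future growth.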
Deploying Catalyst 6500 Series Switch Line Cards

The Catalyst 6500 Series Supervisor Engine 720 is designed to support three generations of line cards: classic, CEF256 (and dCEF256), and CEF720 (and dCEF720). This feature provides flexibility in network design and investment protection. This topic describes factors to consider when different line cards need to interoperate in a Catalyst 6500 Series Switch.

Line Card Interoperability Considerations

* Verify the performance capability of each line card in the chassis.
* Consider the potential traffic problems with line card interoperability.
* Take into account the total network traffic expected in the network.
* Verify that the line card is supported by the supervisor.

The Catalyst 6500 Series Supervisor Engine 720 is designed to support three different generations of line cards:

■ Classic
■ xCEF256
■ xCEF720

This feature provides flexibility in network design and investment protection. When you consider a deployment of Catalyst 6500 Series Switch line cards, ensure that the demands of the network can be met in the case of the lowest-performance feature set. Also, when you mix line cards, consider the potential traffic problems in the design, the total traffic that you expect in the network, and the line card support in the installed supervisor engine.

Example 1

A customer has an existing system with a Catalyst 6500 Series Supervisor Engine 2 with Multilayer Switch Feature Card 2 (MSFC2), a Switch Fabric Module, and WS-X6516-GBIC line cards. The customer is upgrading the supervisor and switch fabric to a Catalyst 6500 Series Supervisor Engine 720 and some WS-X6748-SFP line cards. The existing system is designed to handle 8 Gb/s per slot. The new Cisco Catalyst 6748 line cards were purchased because of their higher port density. In this case, it is not advisable to use the WS-X6516-GBIC line card (CEF256 series) as an uplink from the WS-X6748-SFP line cards, because the WS-X6748-SFP line cards have the potential to forward more data than the WS-X6516-GBIC line card can handle. With knowledge of the performance capabilities of each line card and the network traffic forecast, the network designer should be able to properly scale the system performance requirements. Given the number of classic and CEF256 series line cards in the installed system base, this example is very realistic.

Example 2

A customer has a 10 Gigabit Ethernet link using the WS-X6704-10GE line card and needs to add a firewall. The Cisco Catalyst 6500 Series Firewall Services Module (FWSM) can handle 5 Gb/s of bandwidth. With this information, a network designer has several options (including load balancing, multiple line cards, and so on) to handle the possible 10 Gb/s of traffic traversing that link. Because the performance of the Catalyst 6500 Series FWSM is known, the fact that these line cards are of different generations is secondary. A viable solution is available.

Catalyst 6500 Series Supervisor Engine 720 Performance with Mixed Line Cards

The figure shows the impact on communication between fabric-enabled line cards when mixing fabric-enabled line cards with non-fabric-enabled line cards in the same chassis. When mixing line cards, the centralized forwarding rate for CEF256 line cards and CEF720 line cards is reduced from 30 mpps to 15 mpps.
However, the bandwidth is not affected. It is important to differentiate between these two units of measurement: mpps refers to the number of packets per second, while bandwidth refers to the total number of bits per second. These two units of measurement are not related proportionally, due to the variance in the size of the packets.

Note   The forwarding rate of DFC- and DFC3-equipped line cards is not affected, because forwarding decisions are made locally.

Switch Fabric Modes

* When transferring data between line cards, the switch fabric operates in one of three modes.
* The modes are determined by the combination of line cards installed in the chassis, and the module the traffic is sourced from and destined to.

Bus | Used for traffic between non-fabric-enabled modules, and for traffic between a non-fabric-enabled and a fabric-enabled line card.
Compact | Used when all modules are fabric enabled. This mode passes only the packet header over the Dbus, after compressing it for transmission.
Truncated | Used when fabric-enabled line cards, a crossbar switch fabric, and classic line cards are installed in the chassis.

The bus and fabric ASICs that are present on the CEF256 and CEF720 line cards support three switching modes. These modes determine the header format that is used to transmit data across the data bus (Dbus) when communicating with the other CEF256 and CEF720 line cards. These modes do not apply to line cards that use a DFC.

Bus or Flow-Through Mode

Bus or flow-through mode is used by the CEF256 modules when no crossbar switch fabric is present (for example, a Catalyst 6500 Series Supervisor 2 without the SFM module). In this mode, the CEF256 modules operate as if they were classic line cards, by forwarding the entire packet (header plus data) to the supervisor for processing. When bus or flow-through mode is used, performance levels up to 15 mpps can be achieved.

Compact Mode

Compact mode requires a crossbar switch fabric within the system, either as a separate module or integrated on the Catalyst 6500 Series Supervisor 720. To use compact mode, all modules within the chassis must be fabric enabled. If classic line cards are resident within the chassis, the ability of the switch to run in compact mode is negated. In compact mode, only the header is passed over the Dbus to the supervisor, after being compressed for transmission. This approach increases the bandwidth available for header transmission. The data portion of the packet is transmitted over the channels of the crossbar switch fabric. This compact mode of operation provides centralized performance of up to 30 mpps, regardless of packet size.

Truncated Mode

Truncated mode is used when CEF256 or CEF720 line cards are installed in a chassis where a classic line card and a crossbar switch fabric are resident. In truncated mode, the classic line cards transmit both the header and the data portion of the packet over the Dbus, while the CEF256 and CEF720 line cards transmit the headers over the Dbus and the data over the crossbar switch fabric. This mode of operation provides centralized performance of up to 15 mpps. Because the CEF256 and CEF720 line cards use the crossbar switch fabric to transmit data, the overall aggregate bandwidth can be higher than the 32-Gb/s shared-bus capacity.
DFC-enabled line cards are not affected by truncated mode, and performance remains the same, regardless of the line card mix within the chassis.

Examining Fabric Operation Mode

* The switch supports redundant supervisor engines:
– The administrator can inspect which of the switch fabrics is active.

Use the show fabric active command to display the switch fabric redundancy status. Use the show fabric switching-mode command to display the fabric channel switching mode of one or all modules.

Verifying Fabric Operation

* Inspecting the status of the switch fabric: show fabric status
* Inspecting utilization of the switch fabric: show fabric utilization

(The figure shows sample output of both commands; each lists the slot, channel, and speed per module, with show fabric utilization adding the ingress and egress utilization percentages.)

Use the show fabric status command to display the fabric status of one or all switching modules. Use the show fabric utilization all command to display the fabric utilization of one or all modules.

Verifying Fabric Operation (Cont.)

* Fabric operation mode and utilization can also be inspected with the show platform hardware capacity fabric command.

(The figure shows sample output of the command, listing each fabric channel along with its switching mode, such as crossbar.)

Use the show platform hardware capacity fabric command to examine the following:

■ Current and peak 32-Gb/s bus utilization
■ Current and peak ingress and egress fabric utilization per fabric channel per module
■ The switching mode used

Verifying Fabric Operation (Cont.)

* Inspecting the switch fabric transmission errors: show fabric errors

(The figure shows sample output of the command, listing module errors per slot and channel.)

Use the show fabric errors command to display fabric errors on one or all modules.

Verifying Forwarding Capacity

(The figure shows sample output of the show platform hardware capacity forwarding command.)

The show platform hardware capacity forwarding command can be used to verify usage and availability of the Layer 2 and Layer 3 forwarding resources. The command displays the capacities and utilization for the Layer 2 and Layer 3 forwarding resources of the system, focusing mainly on forwarding engine rates and forwarding table scalability. Using these statistics, a network engineer can determine how any proposed changes or upgrades would affect the Layer 2 and Layer 3 forwarding capabilities of the system. This information could also assist in troubleshooting potential forwarding issues if those issues were a result of system resources being exceeded.

■ MAC Table Usage: Shows the total available entries, used entries, and used percentage of the MAC table. This information is displayed for all supervisors and DFC-equipped modules in the chassis.
■ VPN CAM Usage: Shows the total available entries, used entries, and used percentage of the VPN CAM table. The VPN CAM table is used for MPLS VPN and VRF-lite implementations.
■ FIB Ternary Content Addressable Memory (TCAM) Usage: Shows the total available entries, used entries, and used percentage for the FIB TCAM. The entries are further broken out by protocol type.
■ Adjacency Usage: Shows the total available entries, used entries, and used percentage of the adjacency table. This is displayed for each region into which the adjacency table is divided.
■ Forwarding Engine Load: Shows the current and peak loads (in packets per second) for all supervisors and DFC-equipped modules in the chassis. The peak load time is also displayed.

WS-X6548-GE-TX (CEF256)

* Features:
– CEF256 line card
– 48 ports 10/100/1000 RJ-45
– Shared bus connection
– 8-Gb/s fabric connection
– Optional inline power
– Interoperable with all supervisors
– Does not support DFC/DFC3
* Deployment considerations:
– Gigabit Ethernet to the desktop
– Wiring closet
– No jumbo frames
– 8:1 oversubscription
– Not for data center

The WS-X6548-GE-TX is a 48-port 10/100/1000 Ethernet line card that is designed for deploying Gigabit Ethernet to the desktop and for IP telephony, video, and wireless applications. This line card features an optional field-upgradable PoE daughter card that can support IP phones, IP video cameras, and wireless access points using Cisco inline power capabilities and the IEEE 802.3af inline power standard. This CEF256 line card is compatible with all Catalyst 6500 Series Switch supervisor engines, chassis configurations, line cards, and Cisco IOS Software versions.

To enhance operational manageability, this line card also features an integrated time domain reflectometer (TDR), which allows network managers to more easily monitor and isolate faults on their copper-based wiring infrastructure and to reduce the need for costly third-party network testing equipment. With full support for the Cisco Unified Communications system, this line card is ideal for performance-oriented wiring closet deployments using the Cisco Catalyst 6500 Series Switch fabric. Due to limitations such as high oversubscription and no support for jumbo frames, this line card is suitable for wiring closets or low-end data centers.

Note   This line card is not recommended for use in data centers, especially for the Cisco Data Center 3.0 architecture. The WS-X6548-GE-TX is the only CEF256 Ethernet LAN line card that does not support a DFC or DFC3 upgrade option.

Note   Each of the 8 x Gigabit Ethernet blocks in the figure is a port ASIC that controls eight 10/100/1000 ports. Each of these port ASICs is attached to the bus and fabric ASIC via a 1-Gb/s connection. Therefore, there are 6 x 1-Gb/s connections for all 48 ports, and the line card is 8:1 oversubscribed. Although this line card has an 8-Gb/s fabric connection, no more than 6 Gb/s of traffic can be passed, because of the port ASIC to bus and fabric ASIC connections. As a result, it is not recommended that this line card be used in high-performance areas of the network such as core, distribution, or data center access.
WS-X6516A-GBIC (CEF256)

* Features:
– CEF256 line card
– 16 ports 1000BASE GBIC (copper or fiber)
– Shared bus connection
– 8-Gb/s fabric connection
– Optional DFC/DFC3 — up to 15 mpps
– Three Tx and two Rx queues per port
– Strict priority Rx and Tx queues
* Deployment considerations:
– Gigabit Ethernet in the backbone
– Server farms
– 2:1 oversubscription

The enhanced Catalyst 6500 Series Switch 16-port GBIC-based Gigabit Ethernet line card provides a cost-effective and flexible approach to deploying Gigabit Ethernet in the structure of the network, as well as in server farms. This card supports the GBIC form factor, which can be used with fiber or copper installations. The card offers higher-density coarse wavelength-division multiplexing (CWDM) and ZX-based GBIC deployments when used in Catalyst 6500 Series Switches and Cisco 7600 Series Routers. The key features of the WS-X6516A-GBIC card include the following:

■ Interfaces: SX, LX and LH, ZX, CWDM, and Category 5 RJ-45 copper GBIC interfaces
■ Backplane connections: A single 8-Gb/s connection to the switch fabric, and a 32-Gb/s shared bus connection
■ Queues per port: Three transmit (Tx), two receive (Rx)
■ Forwarding: CEF256; optional upgrade to dCEF256 using the DFC or DFC3

WS-X6748-GE-TX (CEF720)

* Features:
– CEF720 line card
– 48 ports 10/100/1000 RJ-45
– Shared bus connection
– 2 x 20-Gb/s fabric connection
– Optional DFC — up to 48 mpps
– Two Rx and four Tx queues per port
– Strict priority queue on transmit
– Weighted round robin
* Deployment considerations:
– Data center server farms
– 1.2:1 oversubscription

The WS-X6748-GE-TX is a 48-port 10/100/1000 RJ-45 interface line card. This line card is a field-upgradable CEF720 card that supports distributed forwarding when a DFC3 daughter card is added. This line card uses a new set of ASICs that provide higher port densities for high-speed Gigabit Ethernet and 10 Gigabit Ethernet interfaces than previous generations of line cards. A new fabric ASIC has been developed that replaces the fabric ASIC used in previous fabric line cards. The new fabric ASIC integrates support for multicast and SPAN replication, which was previously found in a separate ASIC.

WS-X6748-SFP (CEF720)

* Features:
– CEF720 line card
– 48 ports 1000BASE SFP
– Shared bus connection
– 2 x 20-Gb/s fabric connection
– Optional DFC — up to 48 mpps
– Two Rx and four Tx queues per port
– Strict priority queue on transmit
– Weighted round robin
* Deployment considerations:
– Data center server farms
– 1.2:1 oversubscription

The WS-X6748-SFP is a 48-port 1000BASE SFP interface line card. This line card is a field-upgradable CEF720 card that supports distributed forwarding when a DFC3 daughter card is added. It is similar in design to the WS-X6748-GE-TX line card. The line card is ideal for handling mission-critical, bursty, or low-latency traffic, such as enterprise resource planning (ERP) or voice applications. This line card scales network performance and intelligent services, and features transmit and receive packet buffers and a strict-priority transmit queue.
WS-X6724-SFP (CEF720)

* Features:
– CEF720 line card
– 24 ports 1000BASE SFP
– Shared bus connection
– 1 x 20-Gb/s fabric connection
– Optional DFC — up to 24 mpps
– Two Rx and four Tx queues per port
– Strict priority queue on transmit
– Weighted round robin
* Deployment considerations:
– Data center core
– 1.2:1 oversubscription

The WS-X6724-SFP is a 24-port 1000BASE SFP interface line card. This line card is a field-upgradable CEF720 card that supports distributed forwarding when a DFC3 daughter card is added. The line card is ideal for handling mission-critical, bursty, or low-latency traffic, such as ERP or voice applications. This line card scales network performance and intelligent services, and features transmit and receive packet buffers and a strict-priority transmit queue.

WS-X6704-10GE (CEF720)

* Features:
– CEF720 line card
– 4 ports 10 Gigabit Ethernet (XENPAK)
– Shared bus connection
– 2 x 20-Gb/s fabric connection
– Optional DFC — up to 48 mpps
– Eight Rx and Tx queues per port
– Strict priority queue on transmit
– Weighted round robin
* Deployment considerations:
– Data center core
– No oversubscription

The WS-X6704-10GE is a four-port 10 Gigabit Ethernet line card that is designed for enterprise data center, distribution, and core deployments. This line card allows up to 34 10 Gigabit Ethernet ports to be deployed in a Cisco Catalyst 6509 switch chassis. This line card is fully IEEE 802.3ae standards-compliant and uses industry-standard modular optics to meet a wide range of station-to-station deployment lengths. This CEF720 line card supports a 40-Gb/s interconnection to the integrated 720-Gb/s switch fabric of the Catalyst 6500 Series Supervisor Engine 720 and can be upgraded to the distributed Cisco Express Forwarding architecture. This upgrade delivers peak throughput of up to 48 mpps per line card.

WS-X6708-10GE-3C/CXL (dCEF720)

* Features:
– dCEF720 line card
– 8 ports 10 Gigabit Ethernet (X2)
– 2 x 20-Gb/s fabric connection
– Integrated DFC3C/3CXL
– Upgradeable to DFC3CXL
– Works with any Supervisor 720
– 64-Gb/s local switching
– All ports VSL capable
– Weighted/shaped round robin
* Deployment considerations:
– Data center core, distribution
– 2:1 oversubscription

The WS-X6708-10GE is a high-density, eight-port 10 Gigabit Ethernet line card that is designed for enterprise data center, distribution, and core deployments. This line card allows up to 66 10 Gigabit Ethernet ports to be deployed in a Cisco Catalyst 6509 switch chassis. This line card is fully 802.3ae standards-compliant and uses industry-standard modular optics to meet a wide range of station-to-station deployment lengths. This distributed Cisco Express Forwarding line card has the DFC3C/CXL integrated and supports a 40-Gb/s interconnection to the integrated 720-Gb/s switch fabric of the Catalyst 6500 Series Supervisor Engine 720. It can be upgraded to the DFC3CXL architecture if originally ordered with the DFC3C. The card has 2:1 oversubscription and 64 Gb/s of local switching capacity.

Note   When operating in a non-E Series chassis, the chassis becomes non-NEBS compliant (operating temperature up to 104°F [40°C]).
WS-X6716-10GE-3C/CXL (dCEF720)

* Features:
– dCEF720 line card
– 16 ports 10 Gigabit Ethernet (X2)
– 2 x 20-Gb/s fabric connection
– Integrated DFC3C/3CXL
– Upgradeable to DFC3CXL
– Works with any Supervisor 720
– 4 port groups
– Weighted/shaped round robin
* Deployment considerations:
– Data center core, distribution
– 4:1 oversubscription

The WS-X6716-10GE is a high-density, sixteen-port 10 Gigabit Ethernet line card that is designed for enterprise data center, distribution, and core deployments. This line card allows up to 130 10 Gigabit Ethernet ports to be deployed in a Cisco Catalyst 6509 switch chassis. This line card is fully 802.3ae standards-compliant and uses industry-standard modular optics to meet a wide range of station-to-station deployment lengths. This distributed Cisco Express Forwarding line card has the DFC3C/CXL integrated and supports a 40-Gb/s interconnection to the integrated 720-Gb/s switch fabric of the Catalyst 6500 Series Supervisor Engine 720. The card has 4:1 oversubscription and 64 Gb/s of local switching capacity.

Note   When operating in a non-E Series chassis, the chassis becomes non-NEBS compliant (operating temperature up to 40°C).

WS-X6716-10GE-3C/CXL Scalability

* Port groups operate in two modes:
– Performance (1 port, nonblocking)
– Oversubscription (4 ports, oversubscribed)
* Mixed-mode operation is supported for maximum flexibility.

The 16-port 10 Gigabit Ethernet module provides up to 130 10 Gigabit Ethernet ports in a single Catalyst 6500 Series Switch chassis and 260 10 Gigabit Ethernet ports in a virtual switching system (VSS). It consists of four port groups of four ports each. Users can operate each port group in either:

■ Oversubscription mode (two to four ports used per port group)
■ Performance mode (one port used per port group)

This allows maximum flexibility for using some ports for connection to servers in performance mode, and some other uplinks to wiring closets in oversubscription mode. When in performance mode, up to four 10 Gigabit Ethernet ports can be used to create a virtual switch link in a VSS.

In addition, the 16-port 10 Gigabit Ethernet module has reduced power consumption. It uses half the power per port compared to the 8-port 10 Gigabit Ethernet module, providing substantial power savings to the customer.

(The table lists the 10 Gigabit Ethernet optics options and their approximate maximum reaches: 10GBASE-CX4 over copper at about 49 feet (15 meters); 10GBASE-SR and 10GBASE-LX4 over multimode fiber at up to about 1,000 feet (300 meters); 10GBASE-LRM over multimode fiber at about 720 feet (220 meters); 10GBASE-LR over single-mode fiber at about 6 miles (10 kilometers); 10GBASE-ER at about 25 miles (40 kilometers); 10GBASE-ZR at about 50 miles (80 kilometers); and DWDM at up to 50 miles (80 kilometers), with 32 wavelengths over a single strand.)

The two form factors for 10 Gigabit Ethernet modules are X2 and XENPAK, the latter being larger:

■ XENPAK dimensions (D x W x H): 121 x 36 x 18 mm
■ X2 dimensions (D x W x H): 91 x 36 x 13.46 mm

Both 10 Gigabit Ethernet form factors come in a variety of types, where the achievable distance depends on the fiber used.
variety of type where distance depends of 1-166 Implementing Cisco Data Center Network Infrastructure 1 (DCNI-1) v2.0 {© 2008 Cisco Systems, nc Distributed Forwarding * Local forwarding engine is Distributed Forwarding Card (DFC) ~ Optional daughter card CEF256 and CEF720 modules Central (PFC) and distributed (DFC) engines perform different lookups independently and simultaneously — Not local switching only (destination interface Is irrelevant) ~ Deterministic performance ~ Highly scalable (not flow-based) = DFCs require native Cisco 10S mode Distributed forwarding means that one or more of the switching modules in the Catalyst 6500 Series Switch has its own forwarding engine sitting on the module itself. Such a forwarding engine, or daughter card, is called a Distributed Forwarding Card (DFC). With multiple independent forwarding engines working in parallel, each engine is processing different packets at the same time. This is the way the performance on the Cisco Catalyst 6500 Series Switch is scaled up: the performance of the forwarding engines is aggregated. Such forwarding is fully distributed forwarding, which means that the local engine on the module makes all the forwarding decisions for packets coming in on that module, A DFC has all the hardware of the PFC: the ASICs, the memories, and TCAMs. The fall bridge table, FIB, adjacencies, ACLs, and QoS are on the line card, also eliminating the need to go to the centralized forwarding engine for a lookup. DFCs do not just add capacity for local switching (from port to port on a single module); the destination interface is irrelevant and can be on any line card in the Catalyst 6500 Series Switch while lookup is still done on the ingress line card. DFC is optional on all the CEF256 and CEF720 modules and can be added when performance has to be scaled. {© 2008 Cisco Systems, ne. Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-167 DFC3C/CXL * All features of DFC3B/BXL * Optional on: WS-X6704-10GE, WS-X6724-SFP, WS-X6748-SFP, WS-X6748-GE-TX * Integrated on: WS-X6708-10G- SCICXL, WS-X6716-10G-3C/CXL vss 1 Yes macenes | 98.000 | sw | aaeavee | I 220000 entioe | ai 96,000 16 1,000,000 entree 256,000 enten Menor Prote | NetFlow Entries 120,000 enti DFC3C and DFC3CXL are the latest DFC versions. The cards support all the features of the DCF3B and DFC3BXL. They are used with the CEF720-capable 6700 line cards. 1-168 Implementing Cisco Data Center Network Infrastructure 1 (OCNI-1) v2.0, {© 2008 Cisco Systems, Inc. DFC3x and PFC3x Interoperability DFoPFE ae eee ES ARae Ronee] Oana Different versions of PFC3 and DFC3 can function in the same system. Since the DFC always works in conjunction with a corresponding supervisor (PFC on the supervisor), the use of a DFC3 requires cooperation with the equivalent PFC3 version. A mix of PFC3 and DFC3 versions will result in the Catalyst 6500 Series Switch operating at the Jowest common denominator. Note Any features specific to a more advanced PFC3 or DFC3 will be lost when it has to operate ina lower mode. Mixing different versions of PFC3 in the same chassis is not supported. ‘©2008 Cisco Systems, Inc. 
16-Way Equal-Cost Multipath

* Allows 16 equal-cost paths to be used for load sharing
* Enables higher-density High-Performance Computing (HPC) data center designs
* Configured with the maximum-paths keyword under the routing process, or via static routes
* Prerequisites:
– Cisco IOS 12.2(33)SXH
– Any PFC3 version

16-way equal-cost multipath (ECMP) load sharing enables higher-density high-performance computing (HPC) data center designs. Earlier designs were scale-limited due to the 8-way load-sharing limit prior to Release 12.2(33)SXH. All PFC3 versions provide hardware support for 16-way ECMP load sharing. The option is configured with the maximum-paths keyword under the routing process, or by using multiple static routes.

Line Card Deployment

(The figure shows 10 Gigabit Ethernet line cards deployed in the enterprise core and distribution layers, with Gigabit Ethernet line cards in the access layer.)

Core Layer

The core layer has to provide high-speed connections between the network segments. Therefore, 10 Gigabit Ethernet cards are appropriate to be used. Another option is using 1 Gigabit Ethernet cards, combining multiple interfaces into an EtherChannel (maximum of 8), and using combined Layer 3 and 4 EtherChannel load distribution among the channel interfaces.

Distribution Layer

The distribution layer is the aggregation point of the access layer switches, meaning that many ports have to be provided. 10 Gigabit Ethernet and 1 Gigabit Ethernet line cards are appropriate for this layer. When designing, do not forget about the oversubscription ratios.

Access Layer

Connecting end devices requires many ports. When a Catalyst 6500 Series Switch is used in the access layer, 1 Gigabit Ethernet line cards are most often used. Since high-performance computing and dense server racks are used, 10 Gigabit Ethernet line cards might also be a deployment option.

Managing Catalyst 6500 Modules

* Reset or shut down (disable) a module:
– Only service modules can be shut down

6500# hw-module module 2 shutdown
Proceed with shutdown of module? [confirm]
(shutdown completed for module 2)
6500# hw-module module 2 reset

To reset an individual module, the hw-module module reset command is used. Apart from resetting, a service module can also be shut down with the hw-module module shutdown command. To re-enable the module, use the reset version of the command.

Catalyst 6500 Series Switch Service Module Overview

This topic summarizes the features of the service modules that are available for the Catalyst 6500 Series Switch.

Advanced Services Modules

(The figure shows the IPsec VPN Shared Port Adapter, the Firewall Services Module, and the Intrusion Detection modules, spanning the CEF256 and CEF720 architectures.)

Service modules represent the next generation of intelligent modules for Catalyst 6500 Series Switches. Each service module provides high-performance, feature-rich deployment options for Layer 4 through 7 applications. Catalyst 6500 Series Switch service modules are used in security services, content services, network monitoring, mobile wireless, and IP telephony. The following descriptions highlight the features of the Catalyst 6500 Series Switch service modules by application. Most of the current service modules use a fabric-enabled architecture.
The Cisco Application Control Engine (ACE) module, however, is a CEF720 module with a single 20-Gb/s link to the fabric. The ACE module requires the Catalyst 6500 Series Supervisor 720.

Security Services Modules

These modules provide security services for Catalyst 6500 Series Switches:

■ Catalyst 6500 Series FWSM: This module allows any port in the chassis to operate as a firewall port. This module integrates stateful firewall security into the network infrastructure.

■ Cisco IPsec VPN Shared Port Adapter: The Cisco I-Flex design combines shared port adapters (SPAs) and SPA interface processors (SIPs) to enable service prioritization for voice, video, and data services. The Cisco IPsec VPN SPA offers next-generation encryption technology as well as a form factor designed to enable a more flexible and scalable network infrastructure. The Cisco IPsec VPN SPA delivers scalable and cost-effective VPN performance for Cisco Catalyst 6500 Series Switches and Cisco 7600 Series routers. Each slot can now support up to two Cisco IPsec VPN SPAs, and although the Cisco IPsec VPN SPA does not have physical WAN or LAN interfaces, it can take advantage of the breadth of LAN and WAN interfaces of each of the platforms.

■ Cisco Catalyst 6500 Series Intrusion Detection System Services Module 1 (IDSM-1) and IDSM-2: These modules take traffic from the switch backplane at wire speed to integrate intrusion detection functions directly into the switch. For more information on the Cisco Catalyst 6500 Series Switch IDSM and IDSM-2, go to http://www.cisco.com/en/US/products/hw/switches/ps708/products_data_sheet09186a00801e5Sdd.html.

■ Cisco Catalyst 6500 Series SSL Services Module (SSLSM): This module offloads processor-intensive tasks related to securing traffic with Secure Socket Layer (SSL), accelerating performance and increasing the security of web-enabled applications. For more information on the Catalyst 6500 Series SSLSM, go to http://www.cisco.com/en/US/products/hw/switches/ps708/products_data_sheet09186a00800c4fe9.html.

Application Networking Services Modules

These modules provide application networking services for Cisco Catalyst 6500 Series Switches:

■ Cisco Content Switching Module (CSM): This module integrates advanced content switching into the Catalyst 6500 Series Switch to provide high-performance, high-availability load balancing of caches, firewalls, web servers, and other network devices. For more information on the Cisco CSM, go to http://www.cisco.com/en/US/products/hw/switches/ps708/products_data_sheet0918640080088743.html.

■ Cisco Content Switching Module with SSL (CSM-S): This module provides a solution that combines Layer 4 through 7 content switching with SSL acceleration on a single line card for the Catalyst 6500 Series Switch. By combining these two technologies, the CSM-S offers secure end-to-end connections to applications while also providing advanced content-switching features. With this integration, the CSM-S can use information that is typically encrypted in the data field or in the header of an SSL session to make load-balancing decisions.

■ Cisco ACE: This module delivers the highest level of application infrastructure control, application performance, application security, and infrastructure simplification.
The ACE enables greater control over the application infrastructure and allows organizations to quickly deploy and migrate applications. The ACE also delivers the highest levels of service to the end user while simplifying the overall management and operation of a data center. In addition, the ACE works in conjunction with the Cisco AVS 3100 Series Application Velocity System to provide the richest solution for application security. For more information on the ACE, go to http://www.cisco.com/en/US/prod/collateral/modules/ps2706/ps6906/product_data_sheet0900aecd8045861b_ps708_Products_Data_Sheet.html.

Advanced Services Modules (Cont.)

(The figure shows the WiSM, the Communication Media Module, the NAM-1 and NAM-2 services modules, and the TAD and AG modules.)

Wireless Services Modules

These modules provide wireless services for Cisco Catalyst 6500 Series Switches:

■ Cisco Catalyst 6500 Series Wireless Services Module (WiSM): The Catalyst 6500 Series WiSM enables fast, secure, campus-wide wireless LAN (WLAN) roaming within and across IP subnets, enhances WLAN security (user-group segmentation and Cisco Catalyst integrated security services, for example), and simplifies WLAN deployment and management. For more information on the Catalyst 6500 Series WiSM, go to http://www.cisco.com/en/US/prod/collateral/modules/ps2706/ps6526/product_data_sheet0900aecd80364340_ps708_Products_Data_Sheet.html.

■ Cisco Mobile Wireless Access Module (MWAM): The MWAM delivers the performance, density, and scalability required for comprehensive IP service delivery in mobile operator networks. The MWAM is a Cisco IOS application module that can be installed in the Catalyst 6500 Series Switch. The MWAM system supports the Cisco Packet Data Serving Node (PDSN), a Gateway General Packet Radio Service (GPRS) Support Node (GGSN), and a Service Selection Gateway (SSG). The MWAM uses a base module and daughter card arrangement to provide distributed functions. For more information on the MWAM, go to http://www.cisco.com/en/US/prod/collateral/modules/ps5510/product_data_sheet0900aecd800f8965_ps708_Products_Data_Sheet.html.

■ Cisco Content Services Gateway Module (CSG): The Cisco CSG is the ideal solution for service providers seeking to apply advanced processing of IP flows through dynamic application-layer content examination, subscriber service access control, subscriber account balance enforcement, and content filtering. The Cisco CSG is a specialized line card designed for the Cisco Catalyst 6500 Series Switch and Cisco 7600 Series router. The Cisco CSG supports billing, content filtering, service control, traffic analysis, and data mining in a highly scalable, fault-tolerant package. For more information on the Cisco CSG, go to http://www.cisco.com/en/US/prod/collateral/wireless/wirelssw/ps779/product_data_sheet09186a00801abf75_ps708_Products_Data_Sheet.htm.

IP Telephony Services Modules

These modules provide voice services for Cisco Catalyst 6500 Series Switches:

■ Cisco Catalyst 6500 Series Communication Media Module (CMM): The Catalyst 6500 Series CMM provides flexible, high-density T1 and E1 gateways, FXS gateways, and transcoding services. The Catalyst 6500 Series CMM allows organizations to connect their existing time-division multiplexing (TDM) networks to their IP communications networks.
For more information on the Catalyst 6500 Series CMM, go to http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/product_data_sheet0900aecd8066426f.html.
■ Cisco Catalyst 6000 Series Voice T1/E1 and Services Module: The Voice T1/E1 and Services Module establishes the Cisco Catalyst 6000 Series Switch as the most complete campus multiservice platform available. These modules provide high-end T1 or E1 gateways to the PSTN or legacy PBXs and network-based voice services. For more information on the Voice T1/E1 and Services Module, go to http://www.cisco.com/en/US/partner/products/hw/modules/ps3115/products_data_sheet09186a00800923b8.html.

Network Monitoring Services Modules
These modules provide network monitoring services for Cisco Catalyst 6500 Series Switches:
■ Cisco Network Analysis Module (NAM-1 and NAM-2): This module provides application-level visibility into the network infrastructure for real-time traffic analysis, performance monitoring, and troubleshooting. This module also performs traffic monitoring with an embedded web-based traffic analyzer. For more information on the Cisco NAM-1 and NAM-2, go to http://www.cisco.com/en/US/prod/collateral/modules/ps2706/ps5025/product_data_sheet0900aecd804bab11_ps708_Products_Data_Sheet.html.
■ Cisco Traffic Anomaly Detector (TAD) and Anomaly Guard (AG): The TAD is an integrated services module for Catalyst 6500 Series Switches and Cisco 7600 Series Routers. The TAD protects large organizations against distributed denial of service (DDoS) and other online assaults by quickly detecting attacks and automatically activating the Cisco AG services module to initiate mitigation services before business is adversely affected. The AG module diverts and scrubs only malicious traffic addressed to targeted devices or zones without affecting other traffic. Within the module, integrated multiple layers of defense enable the AG module to identify and block malicious attack traffic while allowing legitimate transactions to continue flowing to their original destinations. For more information on the Cisco TAD, go to http://www.cisco.com/en/US/prod/collateral/modules/ps2706/ps6236/product_data_sheet0900aecd80220a6e_ps708_Products_Data_Sheet.html. For more information on the Cisco AG, go to http://www.cisco.com/en/US/prod/collateral/modules/ps2706/ps6235/product_data_sheet0900aecd80220a7e_ps708_Products_Data_Sheet.html.

Module / Part Number    Description
ACE service module      Application Control Engine 20 hardware
CMM                     Communication Media Module
WS-SVC-FWM-1-K9         Firewall blade for 6500 and 7600, VFW license separate
WS-SVC-IDS2-BUN-K9      600M IDSM-2 module for Catalyst 6500
WS-SVC-NAM-1            Catalyst 6500 Network Analysis Module-1
WS-SVC-NAM-2            Catalyst 6500 Network Analysis Module-2
WS-SVC-WISM-1-K9        Cisco Wireless Services Module (WiSM)
7600-SSC-400            Cisco 7600/Catalyst 6500 Services SPA Carrier Card
SPA-IPSEC-2G            Cisco 7600/Catalyst 6500 IPsec VPN SPA, DES/3DES/AES
WS-X6582-2PA            Cisco 7600/Catalyst 6500 Enhanced FlexWAN, fabric-enabled
7600-SIP-200            Cisco 7600 Series SPA Interface Processor-200
7600-SIP-400            Cisco 7600 Series SPA Interface Processor-400

Note: VSS currently supports only the NAM-1 and NAM-2 service modules.

The table shows the service modules supported from Cisco IOS Software Release 12.2(33)SXH onward.

Note: Please consult the release notes for your version of code to determine service module support for that release.

Note: The VSS currently supports only NAM-1 and NAM-2 service modules.
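As a quick way to see which of these service modules are installed in a given chassis, the standard show module command lists each slot's card type (including the WS-SVC part numbers from the table above) and its status. A minimal sketch; the output itself is omitted here:

6500# show module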
Cisco Catalyst 6500 Series Switch Power Supplies
This topic describes the power supply options for the Catalyst 6500 Series Switches.

Catalyst 6500 Power Supplies
* All Catalyst 6500 Series chassis options support redundant power supplies:
– Power supplies should provide the same wattage.
– Redundant and combined operational modes.
* Power supply options include both AC and DC versions:
– AC and DC power supplies can be installed at the same time.

The figure shows the location of the power supplies in a Catalyst 6509 switch.
There are two types of power supply operational modes: redundant and combined. In redundant mode, two power supplies are present, but they provide the power of only one power supply. The power supplies assume a load-sharing operation, with neither supply providing more than 60 percent of the required load.
In combined mode, the two power supplies combine their power. Combined mode does not provide power supply redundancy. Combined mode aggregates the power of two individual power supplies, but this mode does not result in a doubling of their combined power capacity. The load-sharing operation between the two supplies allows a 67 percent increase (1 2/3 times the individual amount) in power over a single supply (or dual supplies in redundant mode). For example, a single 2500 W power supply provides 55.5 A to the line cards and powered devices. A dual 2500 W power supply running in combined mode has a capacity of 92.5 A. That is, 55.5 A * 1.67 = 92.5 A.
The main purpose of supporting combined mode is to allow for online power supply upgrades or to support lab environments. You should not use combined mode in a production environment.

Caution: The Catalyst 6500 Series Switch supports the mixing of different-capacity power supplies. However, mixing different-capacity power supplies is not recommended because true power redundancy is not possible when the switch is running in non-redundant mode.

[Slide: Power Capacity per Chassis, a table comparing the approximate output power, in watts and amps at 42 V, of the original chassis options (WS-C6503, WS-C6504, WS-C6506, WS-C6509, WS-C6509-NEB, and WS-C6513) with the enhanced E-Series chassis options (for example, the WS-C6506-E and WS-C6509-E at approximately 14,500 W).]

The figure compares the original and the enhanced (E-Series) chassis options. The enhanced chassis are recommended for large PoE deployments.
All Catalyst 6500 Series Switch chassis options can support PoE. The original chassis options have a power capacity of approximately 4000 W of output power, which includes PoE as well as module power. To provide further power scalability, Cisco has developed an enhanced series of Catalyst 6500 Series Switch chassis designed specifically to scale the power capabilities for applications such as PoE. The latest chassis options, including the WS-C6506-E and WS-C6509-E, can scale up to 14,500 W of output power. These enhancements, along with larger power supply options, translate into the ability to support a larger PoE environment.
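To relate the wattage and amperage figures that appear in these chassis comparisons, note that the output current on the 42-V bus is simply the output power divided by 42 V. A worked example using the capacities quoted above (values rounded):

\[
I = \frac{P}{V}: \qquad \frac{4000\ \text{W}}{42\ \text{V}} \approx 95\ \text{A}, \qquad \frac{14{,}500\ \text{W}}{42\ \text{V}} \approx 345\ \text{A}
\]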
[Slide: two Catalyst 6509 chassis illustrating that modules can be selectively shut down; here modules 2 and 7 are shut down while the others continue to operate normally.]

The Catalyst 6500 Series Switch power management code allows individual modules to be powered on and off, or allows power to be cycled selectively. This is an important feature, especially for those service modules that must be powered down prior to removal from the chassis.
The figure shows an operational Catalyst 6509 switch on the left. Using the Catalyst 6500 Series Switch power management code, line cards 2 and 7 are shut down without affecting the overall operational status of the switch or the other line cards.

Note: If there is only one supervisor engine in the system (regardless of chassis or supervisor engine type), the redundant supervisor slot reserves enough power to satisfy the requirement of one full supervisor, provided that nothing else is installed. If there is no need for a redundant supervisor, a line card can be installed in the slot for the redundant supervisor. The line card will then use power that has already been budgeted elsewhere. You should place your highest-power line card or service module in this slot only if redundancy is not implemented on the switch.

[Slide: sample show power output, listing system power available, power capacity, power requested and allocated in watts and amps at 42 V, and administrative and operational states.]

To view the power status of a Catalyst 6500 Series Switch, enter the show power command at the command prompt. The show power command provides the following information about installed power supplies:
■ System power redundancy mode, either combined or redundant
■ Power units measured in both watts and amperage
■ System power total
■ System power used
■ System power available
■ Type of power supply
■ Power capacity
■ Power supply fan status
■ Output status
■ Operational state

Note: If the state displayed is "off (admin requested)," the administrator disabled the power for the particular module.

The show power command also reveals the following information about line card power:
■ Slot number of the line card
■ Line card type
■ Power requested by each line card in watts and amperage
■ Power allocated to each line card in watts and amperage
■ Line card administrative state
■ Operational state

Note: The Catalyst 6500 Series Switch does not have the ability to measure actual power usage with the show power command. The numbers seen here are the worst case possible given the active hardware in the system at the time. An external power measuring device is needed if real-time power usage is required.

The show platform hardware capacity power command displays information about the system power capacities and utilizations, including system and inline power consumption:

6500#show platform hardware capacity power
Power Resources
  Power supply redundancy mode: administratively combined
                                operationally combined
  System power: 1952W, 0W (0%) inline, 1638W (84%) total allocated
  Powered devices: 0 total, 0 Class3, 0 Class2, 0 Class1, 0 Class0, 0 Cisco
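Pulling the selective power-down capability and the show power verification together, a minimal sketch; the slot number is hypothetical, and the power enable and power redundancy-mode commands are formally introduced on the following page:

6500# configure terminal
6500(config)# no power enable module 3
6500(config)# end
6500# show power

In the show power output, slot 3 should now report "off (admin request)" in the Admin and Oper state columns.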
Managing System Power

6500(config)#
power redundancy-mode [redundant | combined]
* Sets the power supply operational mode (redundant or combined)

6500(config)#
[no] power enable module slot
* Powers an individual module on or off

Catalyst 6500 Series Switch power supplies can operate in either redundant (the default) or combined mode. To enable either of the modes, use the power redundancy-mode global configuration command.
The administrator can also power an individual module on or off using the [no] power enable module command. The power supply redundancy (redundancy is enabled by default) is disabled or enabled from global configuration mode.
To view the power status of a Catalyst 6500 Series Switch, issue the show power command at the command prompt. The show power command provides the following information about installed power supplies:

6500#show power
system power redundancy mode = redundant
system power redundancy operationally = non-redundant
system power total     = 2331.00 Watts (55.50 Amps @ 42V)
system power used      = 1137.78 Watts (27.09 Amps @ 42V)
system power available = 1193.22 Watts (28.41 Amps @ 42V)
                      Power-Capacity PS-Fan Output Oper
PS   Type             Watts   A @42V Status Status State
1    WS-CAC-2500W     2331.00  55.50 OK     OK     on
                      Pwr-Requested  Pwr-Allocated  Admin Oper
Slot Card-Type        Watts   A @42V Watts   A @42V State State
2    WS-X6148A-GE-TX  105.00   2.50  105.00   2.50  on    on
3    7600-SSC-400     226.80   5.40    -       -    off   off (admin request)
5    WS-SUP720-3BXL   328.44   7.82  328.44   7.82  on    on
6    WS-X6748-GE-TX   407.40   9.70  407.40   9.70  on    on
7    WS-X6724-SFP     125.16   2.98  125.16   2.98  on    on
8    WS-SVC-FWM-1     171.78   4.09  171.78   4.09  on    on

Power Supply Considerations
* Catalyst 6500 E-Series chassis support higher power capacity
* Power supply software redundancy configuration
* Processors and line cards
* Use of inline PoE when the following devices are used:
– IP phones
– Wireless applications
– Video cameras
– Building control
– Security access devices

Cisco Catalyst 6500-E Series Switch Chassis
The new Catalyst 6500-E Series Switches scale beyond a 4000 W power supply. The 6-, 9-, and 13-slot chassis can use the 8700 W power supply.

Power Supply Software Redundancy Configuration
Combined mode provides no power supply redundancy when the Catalyst 6500 Series Switch systems require more power than a single power supply can offer. In the event of a power failure, intelligent power management disables modules when their needs cannot be fulfilled. The order in which modules and devices are powered down is as follows:
■ For modules that have powered devices attached, the powered devices are powered down before any line card is powered down.
■ The powered devices are powered down beginning from the highest to the lowest port number (for example, from port 48 down to port 1).
■ Line cards are powered down from the bottom slots to the top slots until the system is using less than the power budget allowance. Supervisors, switch fabrics, and service modules are the last to be powered down in order to maintain system integrity.

Line Cards
Power requirements for a Catalyst 6500 Series Supervisor 720-based system depend on the overall system configuration.
For a Cisco Catalyst 6500 Series Switch system equipped with Catalyst 6500 Series Supervisor 720 or Catalyst 6500 Series Supervisor 32 processors, the minimum is a 2500 W power supply for the 6-, 9-, and 13-slot chassis.

In-Line PoE
To allow greater flexibility, Cisco IOS Software intelligent power management allows the nominal per-port power value to override the power values derived through the default 15.4 W IEEE 802.3af-2003 class, or the Cisco Discovery Protocol-negotiated value. If power is not being drawn, the configured power budget is returned to the power sourcing equipment overall power budget.

[Slide: Green Data Center Architecture, charting power per port across Catalyst 6500 line card generations (including the 6724, 6748, 6502, 6704, 6708, and 6716).]

The features of Catalyst 6500 Series Switches aid a green data center architecture:
■ Virtualization of firewall services with the Catalyst 6500 Series FWSM
■ Virtualization of application services with the ACE module
■ Reuse of existing chassis
■ Long chassis life (replacement is not necessary when new line cards and supervisors are introduced)
In addition, newer line cards and service modules consume less power per port or service and thus also contribute to a smaller carbon footprint and a greener architecture.

Cisco Power Calculator
* http://tools.cisco.com/cpc/launch.jsp
* Used to calculate the power requirements for various platforms and operational modes

The Cisco Power Calculator enables you to calculate the power supply requirements for a specific PoE configuration. The results show output current, output power, and system heat dissipation. The Cisco Power Calculator supports the following Cisco product switching platforms:
■ Cisco Nexus 7000 Switch
■ Cisco Catalyst 6500 Series Switch
■ Cisco Catalyst 4500 Series Switch
■ Cisco Catalyst 3750-E/3750 Switch
■ Cisco Catalyst 3560-E/3560 Switch
■ Cisco Catalyst Express 500 Series Switch
■ Cisco 7600 Series Router

To use the Cisco Power Calculator, follow these steps:
Step 1: Navigate to http://tools.cisco.com/cpc/launch.jsp and launch the Cisco Power Calculator. You might need to scroll down to see this tool.
Step 2: Read the online privacy statement and click I Agree.
Step 3: In the Product Family field, choose Cisco Catalyst 6500 Series Switch and click Next.
Step 4: In the chassis drop-down menu, choose a Cisco Catalyst 6500 Series Switch platform, the WS-C6503-E in this example. The screen updates to provide additional options.
Step 5: In the Supervisor Engine field, choose WS-Supervisor Engine 720. Notice that the fan tray drop-down menu is populated with the proposed fan tray type "WS-C6503-E-FAN."
Step 6: In the Input Voltage field, choose 100-120 Volts AC.
The Cisco Power Calculator might automatically progress to the next step in this procedure. If the progress is not automatic, click Next.
Step 7: In the Slot 2 and Slot 3 fields, choose WS-X6704-10GE for both slots. Click Next. Notice that the first field is populated with "WS-Supervisor Engine 720." In this step, optional DFCs can be specified.
Step 8: Click Next.
Step 9: The Cisco Power Calculator generates a power consumption and heat dissipation summary that can be downloaded to a Microsoft Excel file or an Adobe PDF file. Scroll down to see additional power supply design details.

Note: Keep in mind that the power consumption and heat dissipation numbers generated by the Cisco Power Calculator are worst-case scenario numbers and will be, in almost all cases, higher than the actual power draw measured between the source and the power supplies.

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
* Catalyst 6500 E-Series chassis meet the increased power needs of new line cards.
* In the Cisco Catalyst 6500 Series architecture, special-purpose modules perform separate tasks, which allows the feature set to evolve quickly and allows customers to add features and enhance performance by adding new modules.
* The types of line cards used for the Catalyst 6500 Series Switch are Classic, CEF256, dCEF256, CEF720, and dCEF720.
* The architecture of each line card is based on the capabilities of the bus connection, fabric connection, and type of forwarding engine in use.
* The packet flow across a Cisco Catalyst 6500 Series backplane is based on the type of line card.

Summary (Cont.)
* The key factors in the deployment of line cards are compatibility with supervisor engines, overall performance (oversubscription, forwarding engine type, queuing, and QoS), and port density.
* All Cisco Catalyst 6500 Series chassis options support redundant AC and DC power supplies.
* To determine the correct power supply for a Cisco Catalyst 6500 Series Switch, consider the type of chassis, the redundancy requirements, the processor and line cards to be used, and the amount of inline PoE that your applications require.
* The Cisco Catalyst 6500 Series adds to the green data center architecture.
* Use the Cisco Power Calculator to simplify the task of selecting the correct size of power supply to serve the needs of your applications.

Lesson 5
Implementing Cisco Catalyst 6500 VSS 1440

Overview
Network operators increase network reliability by configuring switches in redundant pairs and by provisioning links to both switches in the redundant pair. A Cisco Catalyst 6500 Virtual Switching System (VSS) 1440 combines a pair of Cisco Catalyst 6500 Series Switches into a single network element. The VSS 1440 manages the redundant links, which externally act as a single port channel. This lesson discusses the VSS 1440 functionality.

Objectives
Upon completing this lesson, you will be able to describe and deploy the Catalyst 6500 VSS 1440.
This ability includes being able to meet these objectives:
■ Describe the VSS 1440 functionality
■ Identify the VSS 1440 benefits
■ Explain the VSS 1440 system architecture
■ Describe the VSS 1440 operation and the protocols used to maintain its state
■ Identify the VSS 1440 deployment scenarios
■ Describe the VSS 1440 conversion process
■ Identify the VSS 1440-related commands

VSS 1440 Overview
This topic describes the VSS 1440 functionality.

VSS 1440
* VSS 1440 provides physical redundancy with a simplified control plane, providing improved flexibility in network design
* VSS 1440 addresses these challenges:
– Network complexity
– Time spent to upgrade and manage
– Resource availability
– Multiple protocols to manage
– Multiple nodes to manage

VSS functionality is used to combine two Catalyst 6500 Series Switches into a single network element using the latest supervisor engine, the VS-S720-10G-3C/CXL. This is achieved by forming a Virtual Switch Link (VSL) between the two chassis, each containing this supervisor. The interfaces used for the VSL have to be either the 10 Gigabit Ethernet interfaces on the supervisor or interfaces on the WS-X6708-10G-3C/CXL 8-port 10 Gigabit Ethernet card.

Challenges VSS 1440 Addresses
VSS 1440 addresses these challenges:
■ The need to simplify network operations and relieve the effort involved in managing multiple network resources.
■ The need for redundancy and highly available networks has increased, and thus the network topology and the backup data paths to be managed become more complicated.
■ Different protocols, like Spanning Tree Protocol (STP), Virtual Router Redundancy Protocol (VRRP), and Hot Standby Router Protocol (HSRP), are deployed to help determine the path to which traffic will switch when reconvergence is required. Because multiple protocols need to be deployed, they also have to be tuned and managed, which adds to the workload of the IT department.

Virtualization means that IT resources are used logically so that physical constraints are removed. The benefits are maximized operational efficiency, availability, and better asset utilization. Two virtualization categories exist:
■ 1:many, for example, server virtualization or network virtualization
■ many:1, for example, storage virtualization or network system virtualization
VSS 1440 increases operational efficiency by enabling multiple Catalyst 6500 Series Switches to share a single point of management, a single IP address, and a single routing instance, as well as by eliminating the dependence on STP.

VSS 1440 Benefits
[Slide: comparison of a separate Catalyst 6500 deployment with a VSS deployment.]

A VSS 1440 provides a 1.44-Tb/s system-wide backplane and can host up to 820 Gigabit Ethernet interfaces, 256 10 Gigabit Ethernet interfaces, and 128 port channels (to be scaled to 576 port channels in newer code). An access switch connects to both chassis of the VSS 1440 using one logical port channel. The VSS 1440 manages redundancy and load balancing on the port channel. This capability enables a loop-free Layer 2 network topology. The VSS 1440 also simplifies the Layer 3 network topology, because the VSS reduces the number of routing peers in the network. Because both data planes are active, there is no time delay in routing traffic. The control plane of the secondary switch is in hot standby mode and will take over from the primary switch in case of failure.
An additional benefit is that the VSS 1440 uses the existing network architecture while still reducing the set of control protocols needed.
To summarize, the benefits provided by the VSS 1440 system are:
■ Extension of the control and management planes across chassis
■ Active-active data plane
■ Stateful Switchover (SSO) across chassis
■ Single point of management and simplified distribution layer services
■ Multichassis EtherChannel (MEC) between the virtual switch and all neighbors, which removes the dependency on STP for link recovery
■ MEC to servers simplifies NIC teaming
■ Eliminates spanning tree
■ Doubles effective bandwidth by utilizing all links
■ Reduces the number of Layer 3 routing neighbors
■ Eliminates the need for HSRP, VRRP, or GLBP

VSS 1440 Architecture
This topic explains the VSS 1440 architecture.

VSS 1440 Architecture Overview
* Virtual Switch Domain:
– VS domain ID: value 1 to 255
– Switch role: active or standby
* VSL: used for control plane traffic, and available for data traffic also

Virtual Switch Domain ID
A virtual switch domain ID is allocated during the conversion process and represents the logical grouping of the two physical chassis within a VSS. It is possible to have multiple virtual switch domains throughout the network. The configurable values for the domain ID are 1-255. It is always recommended to use a unique virtual switch domain ID for each virtual switch domain throughout the network.

Virtual Switch Roles
When you create or restart a VSS, the peer chassis negotiate their roles. One chassis becomes the active chassis, and the other chassis becomes the standby.
The active chassis controls the VSS. It runs the Layer 2 and Layer 3 control protocols for the switching modules on both chassis. The active chassis also provides management functions for the VSS, such as line card online insertion and removal (OIR) and the console interface.
The active and standby chassis perform packet forwarding for ingress data traffic on their locally hosted interfaces. However, the standby chassis sends all control traffic to the active chassis for processing.

Control and Data Plane
In virtual switch mode, while only one control plane is active, both data planes (Policy Feature Cards) are active. Therefore, each can actively participate in the forwarding of data. Since both data planes are active, each switch has a full copy of the forwarding tables and security/quality of service (QoS) policies in hardware, so that each can make a fully informed local forwarding decision. Any DFC3C or DFC3CXL in either chassis in the VSS will also have a copy of this information.

Router MAC Address
In a standalone Cisco Catalyst 6500 Series Switch system, the router MAC address is derived from the chassis MAC EEPROM and is unique to each chassis.
In a VSS, since there is only a single routing entity, there is also only one router MAC address. The MAC address allocated to the VSS is negotiated at system initialization. Regardless of either switch being brought down or up, the same MAC address will be retained, so that neighboring network nodes and hosts do not need to resubmit an Address Resolution Protocol request (re-ARP) for a new address.

Virtual Switch Link
* VSL is a special link joining the two switches together:
– Extends the out-of-band channel
– Allows the active control plane to manage the hardware in the second chassis
* VSL bundle can consist of up to eight 10 Gigabit Ethernet interfaces
* VSL protocols:
– Link Management Protocol (LMP)
– Role Resolution Protocol (RRP)

In a VSS 1440, the two Supervisor 720-10G-3C/CXL engines in separate chassis are connected via a VSL to form the VSS 1440. A VSL bundle can consist of up to eight 10 Gigabit Ethernet interfaces.

Note: Since the WS-X6708 has 2:1 oversubscription, only four 10 Gigabit Ethernet ports in dedicated mode should be used.

The control plane uses the VSL for CPU-to-CPU communications, while the data plane uses the VSL to extend the internal chassis fabric to the remote chassis.

VSL Traffic
All traffic traversing the VSL is encapsulated with a 32-byte virtual switch header containing ingress and egress switch port indexes, class of service (CoS), VLAN number, and other important information from the Layer 2 and Layer 3 headers.

Virtual Switch Link Protocol
The Virtual Switch Link Protocol (VSLP) consists of several protocols that contribute to VSS initialization. The VSLP includes the following protocols:
■ Link Management Protocol: The Link Management Protocol (LMP) runs on all VSL links and exchanges the information required to establish communication between the two chassis. LMP identifies and rejects unidirectional links; the chassis that detects the condition brings the link down and up to restart the VSLP negotiation. The VSL moves the control traffic to another port if necessary.
■ Role Resolution Protocol: The peer chassis use the Role Resolution Protocol (RRP) to negotiate the role (active or standby) for each chassis. RRP also determines whether each chassis has the proper hardware and software to form a VSL.

VSL Initialization
Before the virtual switch domain can become active, the VSL must be brought online to determine the active and standby roles. The initialization process essentially consists of three steps:
■ Link bring-up determines which ports form the VSL.
■ LMP is used to track and reject unidirectional links, and to exchange the chassis ID and other information between the two switches.
■ RRP is used to determine compatible hardware and software versions to form the VSL, as well as to determine which switch becomes active and which becomes hot standby from a control plane perspective.

VSLP Ping
A new ping mechanism has been implemented in VSS mode to allow the user to objectively verify the health of the VSL itself. This is implemented as a VSLP ping. The VSLP ping operates on a per-physical-interface basis, and parameters such as count, destination, size, and timeout may also be specified.
Multichassis EtherChannel
* Improves link resiliency
* EtherChannel link bundle terminates across two physical switches
* Up to 8 links in an MEC
* Protocols supported: PAgP, 802.3ad, and manual ON
* Modified EtherChannel hash prefers a local link over the VSL

Prior to virtual switches, EtherChannels were restricted to residing within the same physical switch. In a VSS, the two physical switches form a single logical network entity; therefore, EtherChannels can now also be extended across the two physical chassis.
An EtherChannel (also known as a port channel) is a collection of two or more physical links that combine to form one logical link. Layer 2 protocols operate on the EtherChannel as a single logical entity. A Multichassis EtherChannel (MEC) is a port channel that spans the two chassis of a VSS. The access switch views the MEC as a standard port channel.
The VSS supports a maximum of 128 EtherChannels (more are to be added in future software releases). This limit applies to the combined total of regular EtherChannels and MECs. Because the VSL requires two EtherChannel numbers (one for each chassis), there are 126 user-configurable EtherChannels. MEC supports all existing EtherChannel modes: 802.3ad LACP, PAgP, or manual EtherChannel mode.

Hash Distribution Algorithm
The EtherChannel hash distribution algorithm prior to Cisco IOS Software Release 12.2(33)SXH requires 100 percent of flows to be temporarily dropped, so that duplicate frames are not sent into the network, for the duration of time it takes to reprogram the port ASICs with the new member information. A new hash distribution algorithm has been introduced with Cisco IOS Software Release 12.2(33)SXH, which allows members of a port channel to be added or removed without the requirement for all traffic on the existing members to be temporarily dropped. The new hash algorithm chooses the most optimal path from the VSS perspective; this should always be a locally attached interface, because sending traffic across the VSL link is not desirable.

VSS Operational Considerations
* Two Catalyst 6500 switches per VSS
* Single VS-S720-10G-3C/CXL in each chassis
* VSL deployed through 10G interfaces on:
– Supervisor 720-10G-3C/CXL
– WS-X6708-10G-3C/CXL line card
– WS-X6716-10G-3C/CXL line card (interfaces must be in performance mode)*
* Only NAM-1/NAM-2 service module support
* No MPLS and IPv6 (supported in a future software release)
* To be enabled in a future software release

Currently, a maximum of two Catalyst 6500 Series Switches, each with a single Catalyst 6500 Series Supervisor Engine 720-10G-3C/CXL, can be combined into a VSS.

Hardware Requirements
The VSL EtherChannel supports only 10 Gigabit Ethernet ports. The 10 Gigabit Ethernet port can be located on the supervisor engine module or on a WS-X6708-10G-3C or WS-X6708-10G-3CXL switching module. It is recommended that you use both of the 10 Gigabit Ethernet ports on the supervisor engines to create the VSL between the two chassis. You can add additional physical links to the VSL EtherChannel by using the 10 Gigabit Ethernet ports on WS-X6708-10G switching modules if your requirements for the VSL scale beyond the 20 Gb/s that the supervisor 10 Gigabit Ethernet ports can provide.

PFC and DFC Requirements
The VSS 1440 supports DFC3C or DFC3CXL hardware and does not support DFC3A/3B/3BXL hardware.
If any switching module in the VSS 1440 is provisioned with a DFC3C, the whole VSS must operate in PFC3C mode. If a 6700 series switching module with a DFC3A/3B/3BXL is inserted in a chassis of a VSS 1440, the module will remain unpowered, because the VSS 1440 supports only DFC3C and DFC3CXL. If the supervisor engines are provisioned with PFC3C, the VSS 1440 will automatically operate in 3C mode, even if some of the line cards are 3CXL. However, if the supervisor engines are provisioned with PFC3CXL, but some of the line cards are 3C, you need to configure the VSS to operate in 3C mode.

Supervisor 720-10G-3C/CXL Line Card Compatibility
* Line cards not supported from 12.2(33)SXH onward:
– WS-X6248-RJ-45
– WS-X6248-TEL
– WS-X6501-10GEX4
– WS-X6416-GE-MT
– WS-X6316-GE-TX
– WS-X6024-10FL-MT
– WS-X6224-100FX-MT
– WS-X6324-100FX-SM
* VSS-supported modules:
– 67xx series line cards
– NAM-1 and NAM-2 service modules

The Supervisor 720-10GE-3C/CXL supports all the existing line cards and service modules except the following:
■ WS-X6248-RJ-45
■ WS-X6248-TEL
■ WS-X6248A-TEL
■ WS-X6501-10GEX4
■ WS-X6416-GE-MT
■ WS-X6316-GE-TX
■ WS-X6024-10FL-MT
■ WS-X6224-100FX-MT
■ WS-X6324-100FX-SM
These line cards are not supported from Cisco IOS Software Release 12.2(33)SXH onward. Furthermore, when VSS functionality is deployed, only the following modules are currently supported:
■ 6700 series line cards
■ NAM-1 and NAM-2 service modules

VSS 1440 Operation
This topic describes how the VSS 1440 operates.

VSS 1440 Initialization
* VSL ports become operational
* VS role is negotiated: active/standby
* Active completes the boot sequence with a consistency check:
– Consistency check OK => standby in SSO
– Consistency check failure => standby in RPR mode
[Figure: virtual switch domain in which the standby chassis shows VS state standby, control plane standby, and data plane active.]

A VSS 1440 is formed when the two chassis and the VSL between them become operational. The peer chassis communicate over the VSL to negotiate the chassis roles. If only one chassis becomes operational, it assumes the active role. The VSS 1440 forms when the second chassis becomes operational and both chassis bring up their VSL interfaces.

VSL Initialization
A VSS is formed when the two chassis and the VSL between them become operational. Because both chassis need to be assigned their role (active or standby) before completing initialization, the VSL is brought online before the rest of the system is initialized. The initialization sequence is as follows:
1. The VSS initializes all cards with VSL ports, and then initializes the VSL ports.
2. The two chassis communicate over the VSL to negotiate their roles (active or standby).
3. The active chassis completes the boot sequence, including the consistency check.
4. If the consistency check completes successfully, the standby chassis comes up in Stateful Switchover (SSO) standby mode. If the consistency check fails, the standby chassis comes up in Route Processor Redundancy (RPR) mode.
5. The active chassis synchronizes configuration and application data to the standby chassis.
If you boot up only one chassis, the VSL ports remain inactive, and the chassis comes up as active, When you subsequently boot up the other chassis, the VSL links become active, and the new chassis comes up as standby. If you have configured preemption and the new chassis has the higher priority, it will initiate a switchover to become the active chassis. The formerly active chassis reloads and comes up as standby. Note Preemption is not a recommended configuration option in VSS 1440, (©2008 Cisco Systems, Inc. Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-203 VSS 1440 Redundancy * SSO requires: ~ Identical Cisco IOS Software ~ VSL configuration consistency ~ Identical supervisor engines * Mismatch puts VSS into RPR mode VS state: Active ian: Active 1} ee ey, Sona: A Control Plane: Standby Inter-chassis stateful failover results in no disruption to apy information (for example, forwarding table info, NetFlow, NAT, authentication, authorization), Unlike HSRP, it eliminates Layer 2 and Layer 3 protocol reconvergence if a virtual switch member fails, resulting in deterministic, sub-second virtual switch recovery. VSS 1440 utilizes EtherChannel for deterministic, sub-second Layer 2 link recovery, removing the dependency on STP for link recovery. Ina VSS 1440, supervisor engine redundancy operates between the active and standby chassis. The peer chassis exchange configuration and state information across the VSL and the standby supervisor engine runs in hot standby mode. The standby chassis monitors the active cha: using the VSL. If it detects failure, the standby chassis initiates a switchover and takes on the active role. When the failed chassis recovers, it takes on the standby role. If the VSL fails completely, the standby chassis assumes that the active chassis has failed, and i switchover. After the switchover, if both chassis are active, the dual-active detection feature detects this condition and initiates recovery action. A VSS operates statefuul switchover (SSO) between the active and standby supervisor engines. Compared to standalone mode, a VSS has the following important differences in its redundancy model: = The active and standby supervisor engines are hosted in separate chassis and use VSL to exchange information. = The active supervisor engine controls both chassis of the VSS. The active supervisor engine runs the Layer 2 and Layer 3 control protocols and manages the line cards on both chassis. = The active and standby chassis both perform data traffic forwarding, If the active supervisor engine fails, the standby supervisor engine initiates a switchover and assumes the active role. 1-204 Implementing Cisco Data Center Network Infrastructure 1 (OCNI-1) v2.0 (© 2008 Cisco Systems, nc RPR and SSO Redundancy ‘With SSO redundancy, the standby supervisor engine is always ready to assume control following a fault on the active supervisor engine. Configuration, forwarding, and state information are synchronized from the active supervisor engine to the redundant supervisor engine at startup, and whenever changes to the active supervisor engine configuration occur. If ‘a switchover occurs, traffic disruption is minimized. If a VSS 1440 does not meet the requirements for SSO redundancy, the VSS 1440 will use Route Processor Redundancy (RPR). With RPR, the active supervisor engine does not synchronize configuration changes or state information with the standby. 
The standby supervisor engine is only partially initialized, and the switching modules on the standby chassis are not powered up. If a switchover occurs, the standby supervisor engine completes its initialization and powers up the switching modules. Traffic is disrupted for approximately 2 minutes.
The VSS 1440 normally runs stateful switchover (SSO) between the active and standby supervisor engines. For the VSS 1440 to operate with SSO redundancy, it must meet the following conditions:
■ Identical software versions: Both supervisor engine modules in the VSS 1440 must be running the identical software version.
■ VSL configuration consistency: During the startup sequence, the standby chassis sends virtual switch information from the startup-config file to the active chassis.
The active chassis ensures that the following information matches correctly on both chassis:
■ Switch virtual domain
■ Switch virtual node
■ Switch priority
■ Switch preempt
■ VSL port channel: switch virtual link identifier
■ VSL ports: channel-group number, shutdown, total number of VSL ports
■ Power redundancy mode
■ Power enable on VSL modules

Note: If the VSS 1440 detects a mismatch, it prints an error message on the active chassis console, and the standby chassis comes up in RPR mode.

VSS 1440 Dual-Active Detection
* VSL failure causes a dual-active state:
– Both switches become active
– Both share the same network configuration
– Can cause communication problems through the network
* Two mechanisms to overcome the dual-active state:
– Enhanced Port Aggregation Protocol (PAgP+)
– IP Bidirectional Forwarding Detection (IP-BFD)

If the VSL fails, the standby chassis cannot determine the state of the active chassis. To ensure that switchover occurs without delay, the standby chassis assumes the active chassis has failed and initiates switchover to take over the active role.
If the originally active chassis is still operational, both chassis are now active. This situation is called a dual-active scenario. Dual-active scenarios can have adverse effects on network stability, because both chassis use the same IP addresses, SSH keys, and STP bridge ID. The VSS 1440 must detect dual-active scenarios and take a recovery action. The VSS 1440 supports two methods for detecting dual-active scenarios:
■ Enhanced Port Aggregation Protocol (PAgP), which uses the MEC links to communicate between the two chassis.
■ Dual-active detection with IP Bidirectional Forwarding Detection (IP-BFD) messaging over a backup Ethernet connection.

Note: You can configure both detection methods to be active at the same time. The PAgP method takes priority, because it detects a dual-active scenario much more quickly than the IP-BFD method.

Dual-Active Detection Using Enhanced PAgP
Port Aggregation Protocol (PAgP) is a Cisco-proprietary protocol for managing EtherChannels. If a VSS 1440 MEC terminates on a Cisco switch, you can run the PAgP protocol on the MEC. If PAgP is running on the MECs between the VSS 1440 and an upstream or downstream switch, the VSS 1440 can use PAgP to detect a dual-active scenario. The MEC must have at least one port on each chassis of the VSS 1440.
In virtual switch mode, PAgP messages include a new type-length-value (TLV) that contains the ID of the active switch.
Only switches in virtual switch mode send the new TLV. For dual-active detection to operate successfully, one or more of the connected switches needs to be able to process the new TLV. Catalyst 6500 Series Switches with the Catalyst 6500 Series Supervisor Engine 32 and the Catalyst 6500 Series Supervisor Engine 720 have this capability if they are running Cisco IOS Software Release 12.2(33)SXH or newer (support is planned for other platforms; please check the release notes). When the standby chassis detects a VSL failure, it initiates SSO and becomes active. Subsequent PAgP messages to the connected switch from the newly active chassis contain the new active ID. The connected switch sends PAgP messages with the new active ID to both of the VSS chassis. If the formerly active chassis is still operational, it detects the dual-active scenario because the active ID in the PAgP messages changes. This chassis initiates recovery actions.

Dual-Active Detection Using IP-BFD
To use the IP-BFD detection method, you must provision a direct Ethernet connection between the two switches. A regular Layer 3 ping will not function correctly on this connection, as both chassis have the same IP address. The VSS instead uses the Bidirectional Forwarding Detection (BFD) protocol. If the VSL goes down, both chassis create BFD neighbors and try to establish adjacency. If the original active chassis receives an adjacency message, it realizes that this is a dual-active scenario and initiates recovery.

Note: If flex links are configured on the VSS 1440, it is recommended to use the PAgP detection method. Do not configure flex links and BFD dual-active detection on the same VSS 1440. Refer to Cisco.com for information about flex links.

VSS 1440 Dual-Active Recovery
* VSL is restored:
– VSLP detects this and reloads the formerly active switch to renegotiate the active/standby roles
* Upon boot, the role is restored:
– SSO standby mode is built
– Interfaces are brought up
– Traffic resumes 100% capacity

The chassis shuts down all of its non-VSL interfaces (except interfaces configured to be excluded from shutdown) to remove itself from the network, and waits in recovery mode until the VSLs have recovered. User intervention may be required to fix the VSL failure. When both chassis detect that the VSL is operational again, the previously active chassis reloads, so that it can renegotiate the active/standby role after bootup, and comes into service as the standby chassis.

Note: After the role has been resolved and SSO hot standby mode is possible, interfaces will be brought up and traffic will resume back to 100 percent capacity.

Deploying VSS 1440
This topic describes how the VSS 1440 is deployed.

Slot and Port Numbering
* Chassis ID is part of the interface name
* Chassis ID is always either 1 or 2

In VSS 1440 mode, interfaces are specified using the switch number (the chassis ID) in addition to slot and port, because the same slot numbers are used on both chassis. For example, the interface GigabitEthernet 1/2/4 command specifies port four of the Gigabit Ethernet switching module in slot two of switch one. The interface GigabitEthernet 2/2/4 command specifies port four on the Gigabit Ethernet switching module in slot two of switch two.
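A short sketch of the three-part numbering in practice, using the two example interfaces above (the descriptions are illustrative):

6500(config)# interface gigabitethernet 1/2/4
6500(config-if)# description switch 1, slot 2, port 4
6500(config-if)# exit
6500(config)# interface gigabitethernet 2/2/4
6500(config-if)# description switch 2, slot 2, port 4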
VSS File System
* The file system is completely managed from the active supervisor
* Previous file system names: disk0:, bootflash:, sup-bootdisk:

The file systems in a VSS 1440 environment are completely managed from the console of the active switch. Thus, all file system activities take place at a single, centralized location. There are some extensions to the naming conventions; some file system names have remained the same, while others have changed. To access the active supervisor (chassis) file systems, you have to concatenate the switch number, slot number, and the file system.

Configuration Guidelines
* General recommendation: the VSL should use at least two links
* VSS guidelines:
– The VSS configuration on the two switches must match
– Changes take effect after a reload
* MEC guidelines:
– An MEC must terminate on both chassis in the VSS
– An MEC can be deployed between two VSSs
* Dual-active detection guidelines:
– PAgP should be used if flex links are used

VSS 1440 configuration guidelines and restrictions include:
■ The VSS 1440 configurations in the startup-config file must match on both chassis.
■ If you configure new values for switch priority or preemption, the change only takes effect after you save the configuration file and perform a restart.
MEC configuration guidelines and restrictions include:
■ All links in an MEC must terminate locally on the active and standby chassis of the same virtual domain.
■ For an MEC using LACP, "minlinks" defines the minimum number of physical links in each chassis for the MEC to be operational.
■ For an MEC using LACP, "maxbundle" defines the maximum number of links in the MEC across the VSS 1440.
■ MEC supports LACP 1:1 redundancy.
Dual-active detection configuration guidelines and restrictions include:
■ If flex links are configured on the VSS 1440, it is recommended to use the PAgP detection method. Do not configure flex links and BFD dual-active detection on the same VSS.

Note: It is always recommended to deploy the VSL with two or more links. If using more than two links, distribute those interfaces across multiple modules to ensure the greatest redundancy.

Configuring VSS 1440
This topic describes how the VSS 1440 is configured.

VSS Conversion Process
Conversion steps:
1. Standalone configuration backup
2. VSD and switch number assignment
3. VSL port channel and port configuration
4. Conversion of the chassis to virtual switch mode

Virtual Switch Domain (VSD)
By default, the Catalyst 6500 Series Switch is configured to operate in standalone mode (the switch is a single chassis). The VSS 1440 combines two standalone switches into one VSS, operating in virtual switch mode.
To convert two standalone chassis into a VSS 1440, you perform the following major activities:
■ Save the standalone configuration files
■ Configure each chassis as a VSS
■ Convert to VSS
■ Configure the peer VSL information
In virtual switch mode, both chassis use the same configuration file. When you make configuration changes on the active chassis, these changes are automatically propagated to the standby chassis.
Saving Standalone Configurations
Save the configuration files for both chassis operating in standalone mode using the copy startup-config disk0:old-startup-config command. You need these files to revert to standalone mode from virtual switch mode.

Assigning Virtual Switch Domain and Switch Number

6500(config)#
switch virtual domain number
* Sets the virtual switch domain to a number between 1 and 255

6500(config-vs-domain)#
switch 1 | 2
* Sets the virtual switch number to 1 or 2

The same virtual switch domain number has to be configured on both chassis of the VSS. The virtual switch domain is a number between 1 and 255, and must be unique for each VSS in your network (the domain number is incorporated into various identifiers to ensure that these identifiers are unique across the network). Within the VSS, a unique switch number must be configured for each chassis.

Note: The switch number is not stored in the startup or running configuration, because both chassis use the same configuration file (but must not have the same switch number).

VSL Port Channel and Port Configuration

6500(config)#
interface port-channel number
* Creates a port channel interface

6500(config-if)#
switch virtual link 1 | 2
* Sets the VSL number to the virtual switch number (1 or 2)

6500(config-if)#
channel-group number mode on
* Manually adds the physical interface to the port channel

Create a port channel interface on both switches with the interface port-channel command. Next, set the VSL number with the switch virtual link command. The value used corresponds to the number the switch is assigned in the VSS domain. Finally, add the VSL physical interfaces (either on the Catalyst 6500 Series Supervisor 720-10G-3C/CXL or on a WS-X6708 or WS-X6716 line card) to the port channel with the channel-group mode on interface configuration command.

Converting Chassis to VSS Mode

6500-2#show platform hardware pfc mode
PFC operating mode : PFC3C

6500(config)#
platform hardware vsl pfc mode pfc3c
* Sets the PFC operational mode

6500#
switch convert mode virtual
* Converts a chassis to VSS mode

Validating the PFC Operational Mode
Conversion to virtual switch mode requires a restart for both chassis. After the reboot, commands that specify interfaces with module/port now include the switch number. For example, a port on a switching module is specified by switch/module/port. Prior to the restart, the VSS converts the startup configuration to use the switch/module/port convention. A backup copy of the startup configuration file is saved on the RP. This file is assigned a default name, but you are also prompted to override the default name if you want to change it.
Prior to the conversion, ensure that the Policy Feature Card (PFC) operating mode matches on both chassis. If they do not match, the VSS comes up in RPR redundancy mode. Enter the show platform hardware pfc mode command on each chassis to display the current PFC mode. If only one of the chassis is in PFC3CXL mode, you can configure it to use PFC3C mode with the platform hardware vsl pfc mode pfc3c command.

Converting Switches
Both switches that will be part of the VSS domain have to be converted using the switch convert mode virtual privileged command.
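Putting the commands from this topic together, a minimal pre-conversion sketch for switch 1; the domain number, port channel number, and interface numbers are illustrative, and switch 2 would mirror this configuration with switch 2, its own port channel number, and switch virtual link 2:

6500-1# copy startup-config disk0:old-startup-config
6500-1# configure terminal
6500-1(config)# switch virtual domain 100
6500-1(config-vs-domain)# switch 1
6500-1(config-vs-domain)# exit
6500-1(config)# interface port-channel 1
6500-1(config-if)# switch virtual link 1
6500-1(config-if)# no shutdown
6500-1(config-if)# exit
6500-1(config)# interface range tengigabitethernet 5/4 - 5
6500-1(config-if-range)# channel-group 1 mode on
6500-1(config-if-range)# no shutdown
6500-1(config-if-range)# end
6500-1# switch convert mode virtual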
Note: After you confirm the command (by entering "yes" at the prompt), the running configuration is automatically saved as the startup configuration and the chassis reboots. After the reboot, the chassis is in virtual switch mode, so you must specify interfaces with three identifiers (switch/module/port).

Merging Configurations
This final, critical step is applicable only for a first-time conversion. If the switch has already been converted, or partially converted, you cannot apply this step. When the standby virtual switch is in SSO hot mode, you must execute the switch accept mode virtual command to automatically configure the standby virtual switch configuration on the active virtual switch. This command prompts you to accept all standby virtual switch VSL-related configurations and also updates the startup configuration with the new merged configurations.

Note: Only VSL-related configurations are merged in this step; all other configurations will be lost and require manual intervention.

6500#switch accept mode virtual
This command will bring in all VSL configurations from the standby switch and populate it into the running configuration. In addition the startup configurations will be updated with the new merged configurations.
Do you want to proceed? [yes/no]: yes
Merging the standby VSL configuration...
Building configuration...
[OK]

[Slide: conversion console output, including %VSL_BRINGUP-6-MODULE_UP: VSL module in slot 5 switch 1 brought up, and "Initializing as Virtual Switch active."]

After the switch convert mode virtual command is used, the switch reboots and comes back up in VSS mode:

6500-2#switch convert mode virtual
This command will convert all interface names to naming convention "interface-type switch-number/slot/port", save the running config to startup-config and reload the switch.
Do you want to proceed? [yes/no]: y
Converting interface names
Building configuration...
[OK]
3w0d: %SYS-SP-3-LOGGER_FLUSHING: System pausing to ensure console debugging output.
3w0d: %OIR-SP-6-CONSOLE: Changing console ownership to switch processor
3w0d: %SYS-SP-3-LOGGER_FLUSHED: System was paused for 00:00:00 to ensure console debugging output.
*** --- SHUTDOWN NOW ---
%SYS-SP-5-RELOAD: Reload requested
%OIR-SP-6-CONSOLE: Changing console ownership to switch processor
<...part of the output omitted...>
System detected Virtual Switch configuration...
Interface TenGigabitEthernet 2/5/4 is member of PortChannel 20
00:00:06: %SYS-3-LOGGER_FLUSHING: System pausing to ensure console debugging output.
00:00:04: IFS: Opening: file nvram:/startup-config, flags 1, mode 0
00:00:04: NV: Opening: file /startup-config
00:00:04: NV: Opened: file /startup-config, fd 21
00:00:04: IFS: Opened: file nvram:/startup-config as fd 21
00:00:04: VS_PARSE_DBG: vsl_mgr_parse_config_file: Open Succeeded for startup config nvram:/startup-config
Firmware compiled 16-Aug-07 11:57 by integ Build [100]
Earl Card Index = 259
00:00:06: %PFREDUN-6-ACTIVE:
Initializing as ACTIVE processor for this switch
<...part of the output omitted...>
00:00:12: %VSL_BRINGUP-6-MODULE_UP: VSL module in slot 5 switch 2 brought up
Initializing as Virtual Switch standby
<...part of the output omitted...>
00:00:43: %VSLP-5-RRP_ROLE_RESOLVED: Role resolved by VSLP
00:00:43: %VSL-5-VSL_CNTRL_LINK: vsl_new_control_link NEW VSL Control Link 5/4
00:00:43: %VSL-2-VSL_STATUS: ======== VSL is UP ========
00:01:47: %OIR-SW2_SPSTBY-6-CONSOLE: Changing console ownership to route processor
<...part of the output omitted...>
STANDBY
Press RETURN to get started!
6500-1-sdby>
Standby console disabled

(Slide note: the complete VSL bringup takes approximately 10 minutes.)

The two chassis now form a VSS. In virtual switch mode, you enter all configuration commands on the active chassis. The startup configuration file is automatically synchronized to the standby chassis. For the VSS to operate correctly, the active chassis needs the configuration information for the other end of the VSL link (on the standby chassis). Enter the switch accept mode virtual command to automatically copy the VSL link configuration from the standby chassis onto the active chassis. The updated configuration is automatically saved to the startup configuration file on both the active and standby chassis. The switch accept mode virtual command performs this action only the first time that the chassis come up as a VSS.

To examine the VSS, use the commands listed in the table.

VSS Examination Commands

Command | Description
show switch virtual | Shows brief information about the VSS domain and member switches.
show switch virtual link | Shows information related to the VSL.
show switch virtual link port-channel | Shows information related to the VSL port channel.
show switch virtual role | Shows information about the member switches and their role in the VSS.
show switch virtual redundancy | Shows information about the VSS redundancy mode.

Examining VSS Mode (Cont.)

(Slide: sample output of the show switch virtual redundancy command, showing the switch number, image version, and the ACTIVE control plane state.)

Examine the information about the VSS redundancy mode with the show switch virtual redundancy command.

Examining VSS Mode (Cont.)

(Slide: continuation of the output, showing the standby switch image version, for example "Image Version = Cisco IOS Software, s72033_sp Software ...".)

The output shows the continuation of the show switch virtual redundancy command output.

Fine-Tuning VSS

6500(config-vs-domain)#
switch {1 | 2} priority priority_num
* Set the chassis VSS priority; higher is better, and the default is 100

The VSS switch priority defines the priority for the chassis. The switch with the higher priority assumes the active role. The range is 1 to 255; the default is 100.

Note: The new priority value only takes effect after you save the configuration and perform a reload of the VSS. If the higher-priority switch is currently in the standby state, you can make it the active switch by initiating a switchover. Enter the redundancy force-switchover command.

Note: The priority should be set upon converting to the VSS; later, the priority should rarely be changed.
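For example, to make switch 1 the preferred active chassis, you might configure the following (the domain number 100 and the priority values are illustrative; remember that the new priorities take effect only after the configuration is saved and the VSS is reloaded):

6500(config)# switch virtual domain 100
6500(config-vs-domain)# switch 1 priority 110
6500(config-vs-domain)# switch 2 priority 100
6500(config-vs-domain)# end
6500# copy running-config startup-config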
Configuring PAgP Dual-Active Detection

6500(config-vs-domain)#
dual-active detection pagp
* Enable PAgP dual-active detection (default)

6500(config-vs-domain)#
dual-active detection pagp trust channel-group group_number
* Enable trust mode for the specified port channel

interface port-channel 10
 shutdown
switch virtual domain 100
 dual-active detection pagp trust channel-group 10
interface port-channel 10
 no shutdown

If enhanced PAgP is running on the MECs between the VSS and its access switches, the VSS can use enhanced PAgP messaging to detect a dual-active scenario. By default, PAgP dual-active detection is enabled. However, the enhanced messages are only sent on port channels with trust mode enabled. You must configure trust mode on the port channels that will perform PAgP dual-active detection. By default, trust mode is disabled.

Note: To use PAgP dual-active detection, the neighboring switch also has to be dual-active capable; that is, it must support enhanced PAgP. To verify that a neighbor switch is PAgP dual-active capable, enter the show switch virtual dual-active pagp command.

6500#show switch virtual dual-active pagp
PAgP dual-active detection enabled: Yes
PAgP dual-active version: 1.1

Channel group 10 dual-active detect capability w/nbrs
Dual-Active trusted group: No
         Dual-Active    Partner       Partner  Partner
Port     Detect Capable Name          Port     Version
Gi1/8/1  No             SAL0802SHG    5/2      N/A
Gi2/8/1  No             SAL0802SHG    5/1      N/A

Channel group 20 dual-active detect capability w/nbrs
Dual-Active trusted group: Yes
         Dual-Active    Partner       Partner  Partner
Port     Detect Capable Name          Port     Version
Te1/1/1  Yes            vs-access-2   Te5/1    1.1
Te2/1/1  Yes            vs-access-2   Te5/2    1.1

Note: Before changing the PAgP dual-active detection configuration, ensure that all port channels with trust mode enabled are in the administrative down state. Use the shutdown command in interface configuration mode for the port channel. Remember to use the no shutdown command to reactivate the port channel when you are finished configuring dual-active detection.

Configuring BFD Dual-Active Detection

6500(config-vs-domain)#
dual-active detection bfd
* Enable BFD dual-active detection

6500(config-vs-domain)#
dual-active pair interface int_1 interface int_2 bfd
* Configures the dual-active pair of directly connected interfaces (single Layer 3 hop)

interface gigabitethernet 1/9/48
 no switchport
switch virtual domain 100
 dual-active pair interface gi1/9/48 interface gi2/1/48 bfd

For BFD dual-active detection, you must configure dual-active interface pairs that will act as BFD messaging links, and enable the BFD dual-active detection mechanism (a sketch follows this list). When you configure the dual-active interface pairs, note the following information:

* The individual ports must be configured first, with both an IP address and a BFD configuration. The configuration is validated when you add the dual-active interface pair.
* The IP addresses assigned to the dual-active pair must be from two different networks or subnetworks.
* The BFD timers must be configured with the same values on the ports at both ends of the link to ensure proper operation of Layer 3 BFD dual-active detection.
* The MAC address cannot be specified on the interface.
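A minimal sketch of such a configuration, using the hypothetical interfaces Gi1/9/48 and Gi2/1/48 from the slide; the IP subnets and BFD timer values are illustrative placeholders (note that the two interfaces are addressed from different subnets, as required):

6500(config)# interface gigabitethernet 1/9/48
6500(config-if)# no switchport
6500(config-if)# ip address 10.9.9.1 255.255.255.0
6500(config-if)# bfd interval 100 min_rx 100 multiplier 3
6500(config-if)# interface gigabitethernet 2/1/48
6500(config-if)# no switchport
6500(config-if)# ip address 10.10.10.2 255.255.255.0
6500(config-if)# bfd interval 100 min_rx 100 multiplier 3
6500(config-if)# exit
6500(config)# switch virtual domain 100
6500(config-vs-domain)# dual-active detection bfd
6500(config-vs-domain)# dual-active pair interface gi1/9/48 interface gi2/1/48 bfd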
Note: It is recommended that you configure a short BFD interval and a small multiplier value (such as 50 to 100 ms for the interval and 3 as the multiplier value). If the interval and multiplier values are large, there is a long delay before the system initiates dual-active mode recovery. A VSS operating in dual-active mode can cause network instability.

The operation can be examined with the show switch virtual dual-active bfd command.

6500#show switch virtual dual-active bfd
Bfd dual-active detection enabled: Yes
Bfd dual-active interface pairs configured:
interface1 Gi1/9/48 interface2 Gi2/1/48

Summary

* VSS 1440 virtualizes the Cisco Catalyst 6500 Series Switches.
* Two Catalyst 6500 chassis are required to form a VSS 1440.
* The Supervisor 720-10G-3C/CXL is mandatory for VSS 1440 deployment.
* The VSL can be formed using 10 Gigabit Ethernet interfaces on the Supervisor, the WS-X6708-10G, or the WS-X6716-10G (future) only.
* Both data planes of the Catalyst 6500 switches in a VSS 1440 are operational, giving 1.44 Tb/s total throughput.
* Consult the release notes and configuration guides for the latest hardware and feature support.

Lesson 6

Upgrading Cisco IOS Software Using Software Modularity

Overview

Any network device today needs to be able to offer maximum uptime to provide service to traffic. When building the network, high availability is usually built in to the core and distribution layers through redundant systems, so that a single networking device is resilient to changes and does not become a single point of failure for connected devices. The Cisco Catalyst 6500 Series Switch with Cisco IOS Software Modularity minimizes downtime and boosts operational efficiency through evolutionary software infrastructure advancements. This lesson describes the software modularity features available on the Catalyst 6500 Series Switch.

Objectives

Upon completing this lesson, you will be able to describe the use of software modularity on the Catalyst 6500 Series Switch. This ability includes being able to meet these objectives:

* Explain the reliability benefits of the Cisco Catalyst 6500 Series Switch with Cisco IOS Software Modularity
* Describe how to upgrade a system to support patching
* Describe the process for installing maintenance packs
* Describe system tags, repackaging, and rollback

Cisco Catalyst 6500 Series Switch Modular Cisco IOS Overview

This topic describes Cisco IOS Software Modularity.

Before Cisco IOS Software Modularity

* Cisco IOS Software is made up of a large number of subsystems:
  - A subsystem is a grouping of functions
  - Subsystems are assembled at build time to create a Cisco IOS image
  - Updates require a reload or a switchover to a redundant route processor
* Cisco IOS software uses a run-to-completion model and shared memory between processes
* Engineered for maximum packet forwarding performance

Minimizing downtime and boosting operational efficiency through an evolutionary software infrastructure is available on the Cisco Catalyst 6500 Series Switch with Cisco IOS Software Modularity. This feature minimizes downtime through modular Cisco IOS software subsystems, which run as independent processes.
This approach simplifies software changes through subsystem In Service Software Upgrades (ISSU) and enables process-level automated policy control.

(Slide: Cisco IOS tasks are converted into POSIX-style processes running over a microkernel.)

Cisco IOS software consists of hundreds of subsystems. Each subsystem is like a small programming file that has the code for part of a feature. For example, Open Shortest Path First (OSPF) could be built with several subsystems. Each of these subsystems shares the same memory space and can overwrite the data written previously by the others. This sharing means that a fault in a single subsystem can bring down the entire system.

Various subsystems, such as routing and Cisco Discovery Protocol, have been removed from the subsystem structure and now run in their own protected memory space, using Portable Operating System Interface (POSIX) styled processes with clean APIs. Other components, such as TCP, have been written from scratch. The remaining subsystems exist in what is called the Cisco IOS core, or base. The Cisco IOS core and these new modularized processes (there are about 20 in the first Cisco IOS software release) have been placed on top of a high-availability subsystem that is responsible for restarting processes or causing switchovers. The secure, lightweight microkernel underlying these modularized processes is extremely stable and allows process prioritization, thus facilitating a real-time multitasking operating system.

Software Modularity: Benefits

The benefits provided by these infrastructure changes include:

* Each of these new modularized processes has protected memory that is isolated from the other modularized processes and the Cisco IOS core. This isolation prevents memory corruption errors and faults.
* These modularized processes are all restartable. If a process experiences an error during run time, the process is restarted to clear the state and recover the process, without restarting the switch.
* If a fault occurs inside a modularized process, it does not affect the other modularized processes. For example, Cisco Discovery Protocol failures do not impact any other components of the system.
* Cisco IOS Software Modularity provides the ability to patch any modularized subsystem. The only two components that cannot be patched are the installer (the patching process itself) and the microkernel. If a patch is required in the Cisco IOS core, the system must be restarted. If the patch is for a modularized process, the patch is applied in-service, in a process called In Service Software Patching (ISP).
* Because Software Modularity represents Cisco IOS software with new functionality, no features are lost. Multiprotocol Label Switching (MPLS) and IPv6 are supported only from 12.2(33)SXH onwards.
* Because Software Modularity is Cisco IOS software, customers do not need to learn a new operating system. Software Modularity operates the same as Cisco IOS software without the modularity. A few new commands are all that is needed to enable new functions such as patching.

Software Modularity: Unplanned Downtime
(Slide: 1. A fault occurs in a modular process. 2. The process restarts, with its state checkpointed; graceful restart kicks in if needed. 3. If the failure occurs in the Cisco IOS core, a switchover to a redundant supervisor takes place.)

This figure depicts an unplanned downtime scenario, possibly triggered by bugs that cause hang states and software crashes.

If a fault occurs in a modular process, the process restarts automatically. Alternatively, the user can force the process to restart, if it is in a hung state, to recover from the fault. This user-initiated restart happens in-service and no packets are lost. If the fault occurs in the Cisco IOS core or the microkernel, the software forces a switchover to the redundant supervisor. This switchover causes a reload if redundancy is not provided.

(Slide: 1. The ISSU feature allows PSIRT patches to be applied with zero service disruption. 2. A PSIRT patch in the Cisco IOS core requires a system restart but avoids code drag. 3. Regular rebuilds with other bug fixes can be loaded through a reload (same as the present process). The figure shows the secondary supervisor used for such restarts.)

In planned downtime scenarios, phase 1 of Cisco IOS Software Modularity offers only Cisco Product Security Incident Response Team (PSIRT) patches. This limited offer is designed to help you become familiar with the patching process. PSIRT patches are those released, when necessary, by the Cisco internal security team. The volume of released patches is low, and any portion of the software can be patched. If a patch is required within a modular process, it can be applied in-service. If the patch is in a Cisco IOS core process, a switch restart is required.

The benefit of using a PSIRT patch is that a specific bug can be corrected by applying a specific patch. This process is different from a new software rebuild, where several bugs are corrected at the same time. The patching process is designed to expedite the process of code qualification, while ensuring faster delivery of patches.

(Slide: hardware and software support for the initial release; *Sup720 only.)

This figure outlines the hardware and software support for the initial release of Cisco IOS Software Modularity. The modularized processes include the routing process, the Internet daemon, raw IP processing, the TCP process, the User Datagram Protocol (UDP) process, the Cisco Discovery Protocol process, the syslog daemon, all Embedded Event Manager components, IP file system components, file system drivers, the installer, and so on.

Software Modularity features include:

* Ability to patch individual pieces of modular Cisco IOS software
* Patches for publicly announced security vulnerabilities (PSIRT)
* Patch Navigator to support management of patches
* CiscoWorks support to be added to the CiscoWorks Resource Manager Essentials (RME)

Because Cisco IOS Software Modularity is an evolving feature, more subsystems are modularized over time, and additional hardware support is added.

Note: Please consult the Cisco IOS Software Modularity release notes for the latest support information.

Implementing Cisco IOS Software Modularity

An important part of Cisco IOS Software Modularity is the ability to patch. Patching is performed at a subsystem level, where multiple subsystems make up a process; the restart then occurs at the process level. This topic describes how to upgrade a system to support patching.
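As background, a modular process can also be restarted manually, for example to clear a hung state. A brief sketch, assuming the process restart privileged command available in Cisco IOS Software Modularity releases; the process instance name (cdp2.iosproc) is taken from the examples later in this lesson:

6500#show process cpu | include cdp
<...output omitted...>
6500#process restart cdp2.iosproc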
Cisco IOS Software Modularity-Capable?

* The first step is to determine whether your Cisco IOS version supports patching:

6500#show version
Cisco Internetwork Operating System Software
<...output omitted...>
Patching is not available since the system is not running from an installed image.
To install, please use the 'install file' command.

The ability to patch is an important feature of Cisco IOS Software Modularity. Cisco IOS images for the Cisco Catalyst 6500 Series Switch with Cisco IOS Software Modularity are delivered as a single binary file. It is possible to benefit from the process modularity and restart capability of processes by simply running the binary image downloaded from Cisco.com. However, if there is a requirement to add or remove patches and get line card modules loaded at run time, then the image needs to be installed. When installing an image, a directory structure is created on CompactFlash. The installed image is larger than the binary image and must be installed to a CompactFlash card or to an upgraded internal CompactFlash adapter, WS-CF-UPG, which upgrades the internal CompactFlash from the 64-MB sup-bootflash: to a 512-MB or 1-GB sup-bootdisk:.

The patching concept requires intelligence within the operating system to ensure that only applicable patches are installed on the individual version of the operating system. In addition, there is an add-on to the patching functionality called patch rollback, so that if unexpected behavior or problems are introduced, one can get back to a last known good state. This is referred to as rollback. A system tag can be defined at any time, and is a snapshot of the current system. It is this tag (snapshot) that allows rolling back to a defined status.

The first step in defining this process is to install a patch. This introduces the directory structure. After installation, the next step is to activate the changes and force the affected processes to be restarted.

The show version command identifies whether the Cisco IOS image loaded on the Cisco Catalyst switch is capable of patching, and if so, whether the image has been installed.

Noticeable Changes

If a system is running Cisco IOS Software Modularity, certain outputs reflect that change.

* System without Cisco IOS Software Modularity:

(Slide: output of the show process cpu command on a system not running Cisco IOS Software Modularity.)

When a system is running Cisco IOS Software Modularity, there are noticeable differences in the output of some of the commands. The example depicts the output of the show process cpu command on a system not running Cisco IOS Software Modularity.

Noticeable Changes (Cont.)

* System with Cisco IOS Software Modularity:

(Slide: output of the show process cpu command on a switch running Cisco IOS Software Modularity; most process names carry a .proc or .iosproc suffix.)

The figure depicts the output of the show process cpu command on a switch running Cisco IOS Software Modularity. In a system running Cisco IOS Software Modularity, most of the processes show up as either ".proc" or ".iosproc", indicating that they are individual processes. Processes that are listed as ".proc" have been rewritten, while those listed as ".iosproc" have been ported from previously existing Cisco IOS code.
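For instance, the modular processes can be picked out of the process listing with the standard output filter; a sketch (the process names shown are examples that appear elsewhere in this lesson):

6500#show process cpu | include iosproc
<...output omitted...>
  iprouting.iosproc
  cdp2.iosproc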
Noticeable Changes (Cont.)

* Further details regarding individual processes can be displayed as follows:

6500#show process detailed iprouting.iosproc
<...output omitted...>

To get more details on individual processes, Cisco IOS Software Modularity is required. The output of the show process detailed iprouting.iosproc command is shown in the figure. Here you can see information such as:

* How many times the process has been restarted
* How many times the process has been restarted since its last patch
* When the process was last restarted
* The user ID of the user that last restarted the process

Upgrade to Support Patching

* Use these steps to upgrade the system to support patching:
  - Step 1: dir filesys
  - Step 2: install file filesys:/filename filesys:/sys*
  - Step 3: dir filesys:/sys*
  - Step 4: remove old boot string (config mode)
  - Step 5: install bind filesys:/sys* (config mode)
  - Step 6: verify
  - Step 7: reload

*The installation path should be of the form device:/{sys | newsys | oldsys}[/...]

Use the following seven steps to upgrade the system to support patching:

Step 1: Verify that there is enough space on the CompactFlash for the directory structure that is to be created:

6500#dir disk0:

Step 2: Expand the original binary file into a treed subdirectory file structure that permits patching. The expanded structure is approximately 20 percent larger than the original binary file for systems based on the Cisco Catalyst 6500 Series Supervisor Engine 720:

6500#install file disk0:/filename disk0:/sys

Step 3: Verify that the install process has completed by checking the directory structure on the CompactFlash:

6500#dir disk0:/sys

Step 4: To tell the system to run from the installed image, the system must be bound to the new file system directory. Remove any existing boot string commands:

6500(config)#no boot system location:filename

Step 5: Tell the system to bind:

6500(config)#install bind disk0:/sys

Step 6: Verify that the install bind command added the correct boot variables to your configuration:

6500#show running-config | include boot
boot system disk0:/sys/filename

Step 7: Save the configuration and verify the boot variables. It is important that the boot variables look similar to the output below; otherwise the system is left in ROM monitor (ROMMON) mode:

6500#copy running-config startup-config
6500#show bootvar
BOOT variable = disk0:/sys/s72033_rp-ADVENTERPRISEK9_wan-vz.122-18.SXF4
CONFIG-FILE variable does not exist
BOOTLDR variable =
Configuration register is 0x2102

Reload the system:

6500#reload

Note: The installation path should be of the form device:/{sys | newsys | oldsys}[/...]

Upgrade to Support Patching (Cont.)

(Slide: sample output of the upgrade, showing "Verifying checksums of extracted files" and "Verifying installation compatibility", the binding in config mode followed by a reload, and the "Proceed with reload? [confirm]" prompt.)

The example shows the process of enabling patching.

Confirmation

* The show version command should now verify that the system is running from installed software.

6500#show version
<...output omitted...>
Implementation 1284, Rev 1.2, 512KB L2 Cache
Bridging software.
X.25 software, Version 3.0.0.
SuperLAT software (copyright 1990 by Meridian Technology Corp).
TN3270 Emulation software.
2 Virtual Ethernet/IEEE 802.3 interfaces
49 Gigabit Ethernet/IEEE 802.3 interfaces
1917K bytes of non-volatile configuration memory.
Configuration register is 0x2102
System is currently running from installed software.
For further information use 'show install running'

The output of the show version command should now verify that the installed software is running.

Verifying Cisco IOS Software Modularity

(Slide: sample output of the show install running command, showing the software running on the card installed at location s72033_rp - Slot 5, with the base image under disk0:/sys/s72033/base/s72033-adventerprisek9_wan-vz, Version 12.2(18)SXF, and any installed patches.)

The output of the show install running command shows the installed Cisco IOS Software Modularity image and eventual patches.

Using Cisco IOS Software Modularity

Installing maintenance packs is important if the system needs individual processes to be upgraded. Cisco IOS Software Modularity can roll back to a set of installed files defined by tags, similar to a database rollback. This topic explains how to install a maintenance pack and use system tags, repackaging, and rollback.

Patch Navigator

* Tool for downloading maintenance packs and patches: www.cisco.com/go/pn
* Patches can be searched based on:
  - Bug ID (DDTS)
  - Base image
  - Platform

The Patch Navigator tool on http://www.cisco.com/go/pn is used to get the Cisco IOS patches.

Install a Maintenance Pack

* Installing a maintenance pack is a 4-step process:
  - Step 1: install file filesys:/filename filesys:/sys*
  - Step 2: verify: show install running
  - Step 3: activate: install activate filesys:/sys*
  - Step 4: verify: show install running

*The installation path should be of the form device:/{sys | newsys | oldsys}[/...]

Installing a maintenance pack is a four-step process:

Step 1: Issue the following command to install the maintenance pack:

6500#install file disk0:/filename disk0:/sys

Step 2: Verify that the maintenance pack has been installed:

6500#show install running

Step 3: Activate the maintenance pack:

6500#install activate disk0:/sys

Step 4: Verify that activation has occurred:

6500#show install running

Note: Step 1 and Step 3 are done at the top-level directory. The user does not need to designate exactly where the maintenance pack updates go within the directory structure. This is done automatically.

(Slide: sample output of the install file command, listing the changeset currently pending for this location, noting that activation of the pending changes listed above will affect the named processes, and finalizing the installation.)

Proceed with Step 1: Install the maintenance pack with the install file disk0:/filename disk0:/sys command.

Note: The installation path should be of the form device:/{sys | newsys | oldsys}[/...]
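As a concrete sketch, installing and activating a maintenance pack downloaded from Patch Navigator might look like the following, reusing the pack filename from the directory listing later in this lesson as an example:

6500#install file disk0:/s72033-XNA0001.122-18.SXF4 disk0:/sys
6500#show install running
6500#install activate disk0:/sys
6500#show install running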
Verifying Maintenance Pack Installation

* The installed maintenance pack is in the PendInst state

(Slide: show install running output; the base image under disk0:/sys/s72033/base remains Active, while the newly installed patch, patch-AA3379, is listed in the PendInst state.)

Proceed with Step 2: Verify that the maintenance pack has been installed with the show install running command.

(Slide: sample output of the install activate command, updating the installer meta-data and prompting "Do you want to continue with activating this change set...? [yes/no]:".)

Proceed with Step 3: Activate the maintenance pack with the install activate disk0:/sys command.

Note: The installation path should be of the form device:/{sys | newsys | oldsys}[/...]

Verifying Maintenance Pack Activation

* The installed maintenance pack is now in the active state

(Slide: show install running output; the patch disk0:/sys/s72033_rp/patch/patch-AA3379-patch-cdp is now Active.)

Proceed with Step 4: Verify that activation has occurred with the show install running command.

Default Tags

* There are three Cisco-defined tags created by default:
  - CISCO_BASE: Base image with no patches or other tags
  - CISCO_LATEST: Remove one level of install files
  - CISCO_LATEST_ACTIVATE: Remove one level of install activation
* Do not use these tags for user tags.

Tags are used to capture the system at a defined point in time. If a maintenance pack is installed that introduces further problems into the system, a predefined tag can be used to roll back the system to a known good working state. Any processes affected by the rollback are restarted, and those processes then continue to use the software that was present at the time the tag was created. Tags can be deleted, at which point the system removes any installation files associated with that tag.

There are three Cisco-defined tags created by default:

* CISCO_BASE: This tag is defined as the base image with no patches or other tags. Using this tag with the install rollback command takes the system back to the installed base image.
* CISCO_LATEST: This tag is defined as removing one level of the install file. Using this tag with the install rollback command removes the set of files that were added with the last install file command. The software rolls back to the most recently installed patch, whether active or not. If the patch is in an active state, the rollback sets the patch to a PendRoll state, meaning that the changes do not take place until the install activate command is used. If the patch has been installed but not activated, the install rollback removes the installed patch.
* CISCO_LATEST_ACTIVATE: This tag removes one level of install activation. Using this tag with the install rollback command removes the set of files that were most recently activated by the install activate command. If multiple maintenance packs or patches were installed with the install file command, and were then activated simultaneously with a single install activate command, all files are marked to be rolled back.
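For example, backing the system out to the unpatched base image could be sketched as follows, using the CISCO_BASE tag and the disk0:/sys path from the earlier examples, followed by an activation to apply the pending rollback:

6500#install rollback disk0:/sys CISCO_BASE
6500#install activate disk0:/sys
6500#show install running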
Manipulate Tags

6500#
install commit disk0:/sys tag_name
* Set a tag upon installation

6500#
show install tags running
* Verify the tags

6500#
install prune disk0:/sys tag_name
* Delete a tag

To create a tag defining the last stable patching configuration, use the following command:

6500#install commit disk0:/sys tag_name

To verify which tags have been set so far, use the following command:

6500#show install tags running

To delete a tag, use the following command:

6500#install prune disk0:/sys tag_name

The slide shows an example of defining a tag upon installation of a maintenance pack.

Rollback and Repackaging

6500#
install rollback disk0:/sys tag_name
show install running
install activate disk0:/sys
* Roll back to a predefined tag

6500#
install repackage disk0:/sys disk0:/filename.bin
* Transfer a base image with all maintenance packs and patch history information

(Slide: an example of the install repackage command creating a binary file such as modular-PATCH.bin.)

Rollback

To roll back a system to the point before a maintenance pack was installed, first run the install rollback command to identify which processes are affected by the process:

6500#install rollback disk0:/sys tag_name
Gathering information for location s72033_rp - Slot 1
<...part of the output omitted...>
Activation will affect the following processes:
cdp2.iosproc
[OK]

Repackaging

The base image with all maintenance packs, along with patch history information, can be transferred to another system using the repackaging feature. This feature copies all necessary files to a single binary file that can be loaded as a binary or installed on another system with the file system created.

To create the new binary file, execute the following command:

6500#install repackage disk0:/sys disk0:/filename.bin

Use this command to verify that the file has been created:

6500#dir disk0:
Directory of disk0:/
1 -rwx 69213304 Apr 25 2006 20:08:28 +00:00 s72033-ipservicesk9-vz.sxf4-demo
2 -rwx 636416 Apr 25 2006 20:10:44 +00:00 s72033-XNA0001.122-18.SXF4
3 -rwx 769024 Apr 25 2006 20:11:20 +00:00 s72033-XNA0002.122-18.SXF4
4 -rwx 1287268 Apr 25 2006 20:11:46 +00:00 s72033-XNA0003.122-18.SXF4
5 drwx 1 May 9 2006 18:08:48 +00:00 sys
81 -rwx 69963776 May 9 2006 19:44:08 +00:00 lab_repackage.bin

512065536 bytes total (278913024 bytes free)

This file can now be distributed to all switches in the network, and would have the patches applied.

Note: Repackaged images based on 12.2(18)SXF4 are not bootable. You cannot copy the repackaged image onto a disk and boot the system from there, but it can be placed on a central server and be installed from there.

Rollback to a Defined Tag

(Slide: sample output of rolling back to a defined tag, showing the rollback changeset currently pending for each location, the gathering of information for each slot, and the staging of the rollback patches.)
As the rollback occurs, the printout shows which processes will be affected.

Summary

This topic summarizes the key points that were discussed in this lesson.

* Cisco IOS Software Modularity minimizes downtime and boosts operational efficiency through evolutionary software infrastructure advancements.
* A few new commands are all that is needed to enable new functions such as patching.
* Installing a maintenance pack is a four-step process.
* Cisco IOS Software Modularity lets you roll back to a set of installed files defined by tags, similar to a database rollback.
* The repackaging feature copies all necessary files to a single binary file that can be loaded as a binary or installed on another system with the file system created.
* Rollback is used to revert a system to a point in time before a maintenance pack was installed.

Lesson 7

Implementing NetFlow

Overview

This lesson introduces NetFlow and NetFlow Data Export (NDE) and explains the parameters required to configure these Layer 3 enhancements. Cisco Catalyst 6500 Series Switches provide Layer 3 switching with Cisco Express Forwarding for the Cisco Catalyst 6500 Series Supervisor Engine 32 and the Catalyst 6500 Series Supervisor Engine 720 with Policy Feature Card 3 (PFC3). NDE is an enhancement that can be used to monitor all Layer 3-switched traffic through the Multilayer Switch Feature Card (MSFC).

NDE is a Layer 3 enhancement that collects global statistics from traffic that flows through the switch and stores those statistics in the NetFlow table. NDE supports bridged IP traffic for a system running in PFC3B, PFC3BXL, PFC3C, or PFC3CXL mode.

Objectives

Upon completing this lesson, you will be able to describe how NetFlow and NDE work on the PFC3 and MSFC3. This includes being able to meet these objectives:

* Describe the use of NetFlow and NDE on the Catalyst 6500 Series Switch
* Explain the process of configuring NetFlow and NDE

NetFlow and NDE Overview

This topic explains the use of NetFlow and NDE on the Cisco Catalyst 6500 Series Switch. NetFlow services provide access to IP flow information from data networks. Exported NetFlow data can be used for a variety of purposes, including network management and planning, enterprise accounting and departmental chargebacks, Internet service provider (ISP) billing, data warehousing, and data mining for marketing purposes.

Understanding NetFlow and NDE

* NetFlow collects statistics on traffic that flows through the switch.
* Statistics are kept in the NetFlow table on the PFC3.
* NetFlow Data Export (NDE) is the process that exports these statistics to a collector for report creation.

1. Traffic flows through the switch.
2. A flow record is created for each unique flow.
3. Statistics for each flow are updated in the flow record.
4. At certain intervals, NDE pushes completed flow records to an external collector for later reporting and analysis.

NetFlow collects global statistics from traffic flowing through the switch, and stores those statistics in the NetFlow table.
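The contents of the hardware NetFlow table can be inspected directly from the CLI; a brief sketch using the show mls netflow ip command (the exact columns vary by release, and the entries themselves are placeholders):

6500#show mls netflow ip
Displaying Netflow entries in Supervisor Earl
DstIP           SrcIP           Prot:SrcPort:DstPort  Pkts  Bytes
<...output omitted...>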
In all PFC3 modes except PFC3A, with release 12.2(18)SXE and later, NDE can be configured to collect statistics for both routed and switched traffic. In earlier releases, and in PFC3A mode, NDE collected statistics for routed traffic only.

NDE makes the routed-traffic statistics available for analysis by an external data collector. Two external data collector addresses can be configured to provide redundant data streams, improving the probability of receiving complete NetFlow data.

NetFlow and Cisco NAM

* Cisco NAM provides application-level visibility into network traffic:
  - Capture
  - Reduce
  - Analyze
  - Trend
* NetFlow provides:
  - Metering traffic statistics
  - Delivery through the NAM

The Cisco Network Analysis Module (NAM) is an integrated traffic monitoring solution for the Catalyst 6500 Series Switches, Cisco 7600 Series routers, and some branch routers. Cisco NAM enables network managers to gain application-level visibility into network traffic to improve performance and reduce failures. Cisco NAM facilitates the following:

* Capture: Performs raw data capture
* Reduce: Reduces captured data to useful information
* Analyze: Assists in drawing conclusions about reduced data
* Trend: Maintains ongoing statistics on incremental data captures for long-term planning

NetFlow technology provides the metering base for a key set of applications, including network traffic accounting, usage-based network billing, network planning, denial of service (DoS) monitoring, network monitoring, outbound marketing, and data mining. Cisco provides a set of NetFlow applications to collect NetFlow export data, perform data volume reduction, and handle post-processing. Cisco NAM and NetFlow work together: NetFlow traffic statistics are exported to the Cisco NAM without impacting network device performance, and the Cisco NAM performs data reduction.

NetFlow Options

* Two forms of NetFlow statistics are collected: sampled NetFlow and NetFlow aggregation.
  - Sampled NetFlow only collects statistics on a sample of flow packets, consuming less memory.
  - NetFlow aggregation collects information about classes of flows, reducing the amount of data to be exported.

Two NetFlow options are available to reduce the volume of statistics being collected:

* Sampled NetFlow reduces the number of statistics collected.
* NetFlow aggregation merges collected statistics.

Sampled NetFlow

The sampled NetFlow feature captures a subset of the traffic in a flow on Layer 3 interfaces, instead of all packets within the flow. Sampled NetFlow substantially decreases the supervisor engine CPU utilization. When using the PFC3, sampled NetFlow uses the full-interface flow mask. Sampled NetFlow can be configured to use time-based sampling or packet-based sampling.

NetFlow Aggregation

The NetFlow aggregation feature allows limited aggregation of NetFlow data export streams on the Catalyst 6500 Series Switch. This is achieved by maintaining one or more extra flow caches called aggregation caches. Benefits of using NetFlow aggregation include:

* Reduced bandwidth requirement: NetFlow aggregation caches reduce the bandwidth required between the switch and the NetFlow management station.
* Reduced NetFlow workstation requirements: NetFlow aggregation caches reduce the number of NetFlow management workstations required.
* Improved scalability: NetFlow aggregation caches improve scalability for high-flow-per-second devices such as the Catalyst 6500 Series Switch.

Each aggregation cache can be configured with its own individual cache size, cache ager timeout parameter, export destination IP address, and export destination User Datagram Protocol (UDP) port.

* The NetFlow cache on the PFC3 supports sampled NetFlow and NetFlow aggregation for flows switched in hardware.
* NetFlow records are based on the flow mask defined in the system.

Flow Mask | Description
SRC-ONLY | One NetFlow entry for each unique SRC IP address
DEST-ONLY | One NetFlow entry for each unique DEST IP address
DEST-SRC | One NetFlow entry for each SRC-DEST IP address pair
DEST-SRC-INT | Adds the VLAN SNMP ifIndex to the DEST-SRC record
FULL | One NetFlow entry for each unique SRC-DEST IP address pair with port numbers
FULL-INT | Adds the VLAN SNMP ifIndex to the FULL record

The NetFlow cache on the PFC3 captures statistics for flows forwarded in hardware. It supports sampled NetFlow and NetFlow aggregation.

The PFC3 can use one of several flow masks to create NetFlow entries:

* Source-only: A less specific flow mask where the Policy Feature Card (PFC) maintains one entry for each source IP address. All flows from a given source IP address use this entry.
* Destination: A less specific flow mask where the PFC maintains one entry for each destination IP address. All flows to a given destination IP address use this entry.
* Destination-source: A more specific flow mask where the PFC maintains one entry for each source/destination IP pair. All flows between the same source and destination IP addresses use this entry.
* Destination-source-interface: A more specific flow mask, which adds the source VLAN Simple Network Management Protocol (SNMP) ifIndex to the information in the destination-source flow mask.
* Full: A more specific flow mask where the PFC creates and maintains a separate cache entry for each IP flow. A full flow mask includes the source and destination IP addresses, the protocol, and the port numbers.
* Full-interface: The most specific flow mask, which adds the source VLAN SNMP ifIndex to the information in the full flow mask.

Additionally, one flow mask can be used for all statistics.

NetFlow Table Utilization

PFC | Recommended NetFlow Table Utilization | Total NetFlow Table Capacity
PFC3CXL | 235,520 (230 K) entries | 262,144 (256 K) entries
PFC3C | 117,760 (115 K) entries | 131,072 (128 K) entries
PFC3BXL | 235,520 (230 K) entries | 262,144 (256 K) entries
PFC3B | 117,760 (115 K) entries | 131,072 (128 K) entries
PFC3A | 65,536 (64 K) entries | 131,072 (128 K) entries

NetFlow on the MSFC3

* NetFlow records can also be exported for flows that are processed by the MSFC.
* The feature must explicitly be enabled in the CLI.
* NetFlow aggregation must be enabled on the MSFC3 for it to be enabled on the PFC3.
The NetFlow cache on the MSFC3 captures statistics for flows routed in software, and supports NetFlow aggregation for traffic routed in software.

Note: NetFlow aggregation must be enabled on the MSFC3 for it to be enabled on the PFC3.

NDE on the PFC3: NetFlow Record Types

* NDE on the PFC3 supports NetFlow v5 and v7 records for statistics captured on the PFC3.
* NetFlow v8 records are used when NetFlow aggregation is used.

The NetFlow v5 and v7 header and record layouts can be found in the NDE chapter of the Catalyst 6500 Release 12.2SX Software Configuration Guide on Cisco.com.

NetFlow exports flow information in UDP datagrams in one of two formats: version 5 and version 7. The datagram consists of a header and one or more flow records. The first field of the header contains the version number of the export datagram. Typically, a receiving application that accepts either format allocates a buffer large enough for the largest possible datagram from either format, and uses the version number from the header to determine how to interpret the datagram. The second field in the header is the number of records in the datagram, and should be used to index through the records.

Because NetFlow export uses UDP to send exported datagrams, it is possible for datagrams to be lost in the network. To determine whether flow export information has been lost, the header format includes a flow sequence number that is equal to the sequence number of the previous datagram plus the number of flows in the previous datagram. On receiving a new datagram, the receiving application subtracts the expected sequence number from the sequence number in the header to identify whether any flows have been missed.

The latest version of NetFlow is version 9. This version is designed to support template-based records, and it complies with the Internet Engineering Task Force (IETF) standards for NetFlow exporting. The format is flexible and extensible, which provides the versatility needed to support new fields and record types, and it accommodates new NetFlow-supported technologies such as multicast, Multiprotocol Label Switching (MPLS), Network Address Translation (NAT), and Border Gateway Protocol (BGP) next hop.

NetFlow version 9 is a flexible and extensible means to carry NetFlow records from a network node to a collector. NetFlow version 9 has definable record types and is self-describing for easier NetFlow Collection Engine configuration. In NetFlow version 9:

* Record formats are defined using templates.
* Template descriptions are communicated from the router to the NetFlow Collection Engine.
* Flow records are sent from the router to the NetFlow Collection Engine with minimal template information so that the NetFlow Collection Engine can relate the records to the appropriate template.
* NetFlow version 9 is independent of the underlying transport (UDP, TCP, Stream Control Transmission Protocol [SCTP], and so on).

Note: NetFlow version 9 is not backward-compatible with version 5 or version 8. If you need version 5 or version 8, then you must configure version 5 or version 8.

Note: Export bandwidth increases for NetFlow version 9 (because of template flowsets) versus version 5.
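On the MSFC side, the export format for software-routed flows is selected with the standard ip flow-export version command; a short sketch (the optional origin-as keyword is shown purely as an illustration):

6500(config)#ip flow-export version 9
! or, for a version 5 export carrying BGP origin-AS information:
6500(config)#ip flow-export version 5 origin-as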
NDE on the PFC3: NetFlow Filters

Normally, NDE exports all expired records. NDE filters limit which records are exported to the collector: a filter identifies the expired and purged records that are sent. Filters can be applied on the source and destination addresses, port numbers, or specific TCP/UDP ports.

By default, all expired flows are exported until a filter is configured. After the filter is configured, only expired and purged flows matching the specified filter criteria are exported. Filter values are stored in NVRAM and are not cleared when NDE is disabled.

NetFlow Aging Time

* A default 300-second aging timer is applied to all NetFlow cache entries to age out old entries.
  - The aging timer can be changed depending on the utilization of the NetFlow table.
  - A lower value means less time an entry stays in the table.
* NetFlow table size: PFC3A/3B/3C, 128 K entries; PFC3BXL/CXL, 256 K entries.

Aging Mechanism | Description
Normal | Defines the time before aging out entries.
Fast | Defines a threshold and a time. After the time expires, this mechanism checks flows to see whether they have switched the threshold number of packets; if not, the entry is aged out.
Long | Deletes entries that have been in the cache for the defined period of time.

The default 300-second multilayer switching (MLS) aging time is applied to all NetFlow cache entries. This aging time can be configured between 32 and 4092 seconds. A lower value means an entry stays in the table for less time, thus freeing up space more regularly. To keep the NetFlow cache size below the recommended utilization level, the network administrator can configure one of three aging options:

* Normal: Configures the wait before aging out and deleting shortcut entries.
* Fast aging: Configures an efficient process to age out entries created for flows that only switch a few packets and then are never used again.
* Long: Configures entries for deletion after a specified period of time, regardless of their usage; long aging ensures accurate statistics by preventing counter wraparound.

Cache entries typically removed are those for a Domain Name System (DNS) or TFTP server, where entries are created but often not used again. By aging out these entries, the PFC can save space in the NetFlow cache for other data.

Cisco IOS 12.2SXH NetFlow Enhancements

* NetFlow per-interface support
  - Reduces data export volumes
  - ip flow ingress
* NetFlow for IPv6 unicast traffic
* NetFlow MIB and Top Talkers
* NetFlow multicast IP support
  - Export of multicast data in a v9 NetFlow record

Per-Interface NetFlow Support

Before Cisco IOS Software Release 12.2SXH, hardware IP version 4 (IPv4) NetFlow creation was global:

* When mls flow ip is configured, NetFlow entries are created for all flows on all interfaces.
* When mls nde sender is enabled, NetFlow entries are exported for all flows on all interfaces.

With per-interface NetFlow support, the user explicitly chooses the interfaces that will create and export NetFlow entries; only interfaces configured with ip flow ingress will create NetFlow entries (see the sketch below). The feature can help decrease hardware NetFlow table utilization and reduce CPU load.
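A minimal sketch of per-interface NetFlow, enabling flow creation only on two selected interfaces (the interface names are illustrative):

6500(config)#mls flow ip interface-full
6500(config)#interface vlan 200
6500(config-if)#ip flow ingress
6500(config-if)#interface gigabitethernet 3/1
6500(config-if)#ip flow ingress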
NetFlow for IPv6

IP version 6 (IPv6) NetFlow support is based on NetFlow version 9. It is used for both ingress and egress IPv6 traffic, and it currently supports full flows only (no sampling is supported). The IPv6 flow records are exported over IPv4 only.

NetFlow Top Talkers

The flows that are generating the heaviest traffic in the cache are known as the "top talkers". Examining top talkers allows flows to be sorted by the total number of packets in each top talker. The match criteria for the top talkers work like a filter. The top talkers can be retrieved via the CISCO-NETFLOW-MIB. A new, separate cache is used to provide the information, with the following characteristics:

* Similar to the output of the show ip cache flow or show ip cache verbose flow command
* Generated on the fly
* Frozen for the cache-timeout value

NetFlow IPv4 Multicast Support

IPv4 multicast NDE is supported using NetFlow version 9 and includes the following:

* Support for ingress and egress multicast statistics
* Supported since Release 12.2(18)SXF
* Ingress NetFlow tracks multicast traffic input on an interface
* Egress NetFlow tracks multicast traffic replicated (output) on an interface
* Egress accounting requires PFC3B/3C/3BXL/3CXL
* Egress accounting with ingress replication mode
* Egress accounting with egress replication mode
* Support for Reverse Path Forwarding (RPF) fail accounting

NDE Default Configuration

Feature | Default Value
NDE | Disabled
NDE source address | None
NDE data collector address and UDP port | None
NDE filters | Disabled
Sampled NetFlow | Disabled
NetFlow aggregation | Disabled

This table describes the default configuration values for NDE.

Sampled NetFlow Rates

* Sampled NetFlow uses different sampling intervals.
* Sampling intervals can be defined by the administrator.

Sampling Rate | Sampling Time (ms) | Interval (ms)
1 in 64 | 64 | 8,192
1 in 128 | 32 | 8,192
1 in 256 | 16 | 8,192
1 in 512 | 8 | 8,192
1 in 1,024 | 4 | 8,192
1 in 2,048 | 4 | 8,192
1 in 4,096 | 4 | 16,384
1 in 8,192 | 4 | 32,768

This table shows the different sampling intervals that can be defined by the network administrator. For example, if the configured rate is 64, sampled NetFlow uses traffic from the first 128 milliseconds of a flow every 8192 milliseconds. If the configured rate is 2048, the sampled NetFlow feature uses traffic from the first 4 milliseconds of a flow every 8192 milliseconds.

Sampled NetFlow can also be packet-based. With packet-based sampling, a flow with a packet count of n is sampled n/m times, where m is the sampling rate. This example shows how to enable packet-based NetFlow sampling and set the sampling rate and interval:

6500(config)#mls sampling packet-based 1024 8192

VSS NetFlow Support

* Both data planes are active.
* NetFlow data collection is performed on each supervisor's PFC.
* NetFlow export is only performed by the control plane on the VSS active switch.

Active VSS: NetFlow collection active, NetFlow export active.
Standby VSS: NetFlow collection active, NetFlow export inactive.

NetFlow operation in a virtual switch is similar to the way in which NetFlow operates in a single chassis with a distributed forwarding card (DFC) present.
Just as each DFC maintains its own NetFlow information, each PFC in a Cisco Catalyst 6500 Series Virtual Switching System 1440 (VSS 1440) maintains its own NetFlow information. Thus, the load on the active switch for NDE will be higher than with a single chassis.

Configuring NetFlow and NDE

This topic explains how to configure NetFlow and NDE on the Catalyst 6500 Series Switch.

Enabling NetFlow

* Enable NetFlow globally:

6500(config)#mls netflow

* Enable time-based sampled NetFlow:

6500(config)#mls sampling time-based ?
  64 ...
  <output removed>
6500(config)#mls sampling time-based 512

* Enable packet-based sampled NetFlow:

6500(config)#mls sampling packet-based 256 ?
  <8000-16000>  sampling interval in milliseconds
6500(config)#mls sampling packet-based 256 8192

Use the following command to enable NetFlow on the PFC:

6500(config)#mls netflow

Use the following command to configure sampled NetFlow globally:

6500(config)#mls sampling {time-based rate | packet-based rate [interval]}

When configuring sampled NetFlow globally, note that the valid values for rate are 64, 128, 256, 512, 1024, 2048, 4096, and 8192, and the valid values for the packet-based export interval are from 8000 through 16,000.

Setting the Flow Mask

* Set the minimum NetFlow mask that can be used on the system:

6500(config)#mls flow ?
  ip    ip flowmask keyword
  ipv6  ipv6 flowmask keyword
6500(config)#mls flow ip ?
  destination                   destination flow keyword
  destination-source            destination-source flow keyword
  full                          full flow keyword
  interface-destination-source  interface-destination-source flow keyword
  interface-full                interface full flow keyword
  source                        source only flow keyword
6500(config)#mls flow ip destination-source

Use the following command to set the minimum flow mask for the NetFlow cache on the PFC:

6500(config)#mls flow ip {source | destination | destination-source | interface-destination-source | full | interface-full}

At a minimum, the actual flow mask used will incorporate the specifics configured by the mls flow ip command.

Use the following command to verify the IP MLS flow mask being used:

6500#show mls netflow flowmask

Setting the NetFlow Aging Time

* To change the default aging time from 300 seconds to another value, set the normal aging timeout:

6500(config)#mls aging ?
  fast    fast aging keyword
  long    long aging keyword
  normal  normal aging keyword
  slb     SLB connection aging keyword
6500(config)#mls aging normal ?
  <32-4092>  L3 aging timeout in seconds

* Specifying long aging timers:

6500(config)#mls aging long ?
  <64-1920>  long aging timeout

Use the following command to configure the MLS aging time:

6500(config)#mls aging {fast [threshold 1-128 | time 1-128] | long 64-1920 | normal 32-4092}

The following example shows how to configure the MLS fast aging time with a timer of 30 seconds and a threshold of 64 packets:

6500(config)#mls aging fast threshold 64 time 30

Use the following command to display the MLS aging-time configuration:

6500#show mls netflow aging
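Putting the three aging mechanisms together, a hedged tuning sketch; the values are illustrative choices within the ranges shown above, not recommendations:

6500(config)#mls aging normal 120
6500(config)#mls aging long 900
6500(config)#mls aging fast threshold 64 time 30
6500(config)#end
6500#show mls netflow aging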
Enabling NetFlow per Interface
* NetFlow can be enabled on the MSFC3 on a per-interface basis
6500(config)#interface vlan 300
6500(config-if)#ip route-cache ?
  cef             Cisco Express Forwarding
  flow            flow-based switching cache
  policy          policy switching cache for outgoing packets
  same-interface  same-interface keyword
6500(config-if)#ip route-cache flow
OR
6500(config-if)#ip flow ingress

To enable NetFlow on the MSFC, use the following commands for each Layer 3 interface:
6500(config)#interface {vlan vlan_id | type slot/port | port-channel number}
6500(config-if)#ip flow ingress          (Cisco IOS Release 12.2(18)SXD or later)
6500(config-if)#ip route-cache flow      (Cisco IOS Release 12.2(18)SXD or earlier)

Setting the NetFlow Record Type
* Enabling NDE and setting the NetFlow record type used for export:
6500(config)#mls nde sender ?
  version  version keyword
6500(config)#mls nde sender version 5
* Configuring additional fields—enables populating the IP address of the next-hop router and the egress SNMP ifIndex in the NDE packets:
6500(config)#mls nde interface

Use the following command to enable NDE from the PFC and, optionally, set the NDE version:
6500(config)#mls nde sender [version {5 | 7}]
NDE can be configured to populate additional fields in the NDE packet, such as:
= IP address of the next-hop router
= Egress interface SNMP ifIndex
Use the following command to populate the additional fields in NDE packets:
6500(config)#mls nde interface

NDE Export Destination
* Defining the destination address to which the export records are to be sent:
6500(config)#ip flow-export ?
  destination  Specify the destination IP address
  source       Specify the interface for the source address
  version      Specify the version number
6500(config)#ip flow-export destination ?
  Hostname or A.B.C.D  Destination IP address
6500(config)#ip flow-export destination 195.111.23.40 ?
  <1-65535>  UDP port number
6500(config)#ip flow-export destination 195.111.23.40 2002
* The example sets the export destination as IP address 195.111.23.40 using UDP port number 2002.

Use the following command to configure the destination IP address and UDP port to receive the NDE statistics:
6500(config)#ip flow-export destination ip_address udp_port_number
Redundant NDE data streams can be configured to improve the probability of receiving complete NetFlow data. Enter the ip flow-export destination command twice to accomplish this task.
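For example, a sketch of such a redundant export configuration (the second collector address, 195.111.23.41, is a hypothetical value used only for illustration):

6500(config)#ip flow-export destination 195.111.23.40 2002
6500(config)#ip flow-export destination 195.111.23.41 2002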
Building NDE Filters
* A number of NDE filters can be defined.
* In the example, NDE is enabled to filter records based on the destination port number:
6500(config)#mls nde flow ?
  exclude  exclude keyword
  include  include keyword
6500(config)#mls nde flow include ?
  destination  destination keyword
  dest-port    dest-port keyword
  protocol     protocol keyword
  source       source keyword
  src-port     src-port keyword
6500(config)#mls nde flow include dest-port ?
  <1-65535>  dest-port number
6500(config)#mls nde flow include dest-port 23

Use the following command to configure a destination or source port flow filter:
6500(config)#mls nde flow {exclude | include} {dest-port number | src-port number}
The following example shows how to configure a port flow filter so that only expired flows to destination port 23 are exported (assuming the flow mask is set to full):
6500(config)#mls nde flow include dest-port 23

Building NDE Filters (Cont.)
* Filtering NDE records based on source address or protocol:
6500(config)#mls nde flow include source ?
  A.B.C.D  source ip address
6500(config)#mls nde flow include source 10.1.1.0 ?
  A.B.C.D  source ip address mask bits
6500(config)#mls nde flow include source 10.1.1.0 255.255.255.0
6500(config)#mls nde flow include protocol ?
  tcp  tcp protocol keyword
  udp  udp protocol keyword
6500(config)#mls nde flow include protocol tcp ?
  dest-port  dest-port keyword
  src-port   src-port keyword
6500(config)#mls nde flow include protocol tcp dest-port ?
  <1-65535>  dest-port number
6500(config)#mls nde flow include protocol tcp dest-port 443

This slide provides additional examples for configuring NDE.

Summary
This topic summarizes the key points that were discussed in this lesson.
* NetFlow provides an effective mechanism for collecting statistics on traffic that flows through the switch.
* Exported NetFlow data can be used for network management and planning, enterprise accounting and departmental chargebacks, Internet service provider (ISP) billing, data warehousing, and data mining for marketing purposes.

Lesson 8
Implementing QoS

Overview
This lesson discusses the operation of quality of service (QoS) within the Cisco Catalyst 6500 Series Switch and identifies the parameters required to enable congestion management to support latency-sensitive applications within the network.
Switches have large backplanes and are able to switch millions of packets per second, yet congestion can still occur at any time within the network. If congestion management features are not in place, packets received during congested periods will be dropped, causing unnecessary retransmissions to occur. Retransmissions increase network load, and performance degrades in a downward spiral.
As latency-sensitive traffic increases within the network, congestion management schemes are required to ensure that an acceptable level of performance is available at all times. Congestion management methods rely on traffic classification, buffer management, and scheduling techniques to switch important packets from queues as quickly as possible.

Objectives
Upon completing this lesson, you will be able to describe packet processing in hardware on the Cisco Catalyst 6500 Series Switch, and explain how it can perform QoS functions on packets in hardware and software.
This ability includes being able to meet these objectives:
= Describe how QoS is processed in Catalyst 6500 Series Switches
= Describe the basics of QoS and ToS
= Describe ingress QoS processing
= Describe QoS policing and policers
= Describe the process of egress policing
= Explain VLAN-based QoS
= Explain the Modular QoS CLI
= Describe the process of configuring QoS
= Describe CoPP
= Describe QoS on the Cisco Catalyst 6500 VSS 1440

Cisco Catalyst 6500 Series Switch QoS Overview
To support QoS levels, several features have been incorporated into the hardware of the Catalyst 6500 Series Switch. These features include the Multilayer Switch Feature Card (MSFC), the Policy Feature Card (PFC), and the port ASICs incorporated into the line cards.

Understanding PFC QoS
* The PFC provides classification, policing, and marking for packets it processes.
* All of these functions are performed in hardware.

The PFC on the Catalyst 6500 Series Switch is known for its ability to provide hardware acceleration of Layer 3 switching. The PFC also supports QoS at a hardware level. The PFC has the ability to push QoS policies down to a Distributed Forwarding Card (DFC) to support distributed enforcement of these policies. The QoS functions are performed by the ASICs to yield high levels of performance. Additional performance gains can be realized by adding DFCs.
To support local switching, the DFC must also support the QoS policies that have been defined for the switch. Although the network administrator cannot configure the DFC directly, it does come under the control of the master MSFC or PFC on the active supervisor. When the PFC pushes down the Forwarding Information Base (FIB) table, it also pushes down a copy of the QoS policies that are local to the line card. Thus, the DFC is able to make local switching decisions based on the local FIB table and the QoS policies, providing hardware QoS processing speeds and yielding higher levels of performance.
Other QoS Elements in Hardware
* Each line card also implements a series of QoS features.
* These features are designated ingress and egress QoS in this presentation.
* The QoS features described are implemented in the port ASICs found on line cards.
* The actual QoS features found on each line card are line card-dependent.

Each of the line cards on the switch also implements a number of ASICs. These ASICs implement the queues, buffering, and thresholds used for temporary storage of frames as they transit the switch.
The QoS functions in the Catalyst 6500 Series Switch are performed by the following hardware components:
= Input scheduling: Performed by port ASICs; available at Layer 2 only, with or without the PFC
= Classification: Performed by the PFC
= Policing: Performed by the PFC via the Layer 3 forwarding engine
= Marking: Performed by port ASICs
= Output scheduling: Performed by port ASICs

QoS on the Catalyst 6500 Series Switch
Actions at ingress:
= Scheduling: queue and threshold based on incoming CoS
= Received CoS can be rewritten if the port is untrusted
Actions at forwarding (PFC):
= Classification at Layer 2/3/4 via ACL
= Assign trust via ACL map
= Police traffic based on byte rate or burst (token bucket)
= Exceed action on police is drop or mark down priority
Actions at egress:
= Rewrite ToS header and CoS
= Scheduling: queue and threshold based on CoS
= Each queue has a configurable size and threshold
= WRED and tail-drop congestion management
= De-queue using WRR/DWRR or strict priority
= Traffic shaping on selected line cards

Several QoS elements can be offered at both Layer 2 and Layer 3, including classification, input queue scheduling, policing, rewriting, and output queue scheduling. In the Catalyst 6500 Series Switch, these QoS elements are applied by a Layer 2 engine that has insight into Layer 3 and Layer 4 details, as well as Layer 2 header information.
A frame enters the switch and is initially processed by the port ASIC that received the frame, which places the frame into a receive (Rx) queue. Depending on the line card, there will be one or two Rx queues. The port ASIC will use the CoS or differentiated services code point (DSCP) bits to indicate into which queue to place the frame (if multiple input queues are present). If the port is classified as untrusted, the port ASIC can overwrite the existing CoS bits with a predefined value.
The frame is then passed to the Layer 2 and Layer 3 forwarding engine, which classifies and optionally polices the frame. Classification assigns a DSCP value to the frame, which is used internally by the switch. The DSCP is derived from one of the following:
Step 1: An existing DSCP value, set before the frame entered the switch.
Step 2: The received IP precedence bits already set in the IP version 4 (IPv4) header. Because there are 64 DSCP values and only eight IP precedence values, the administrator can configure a mapping that is used by the switch to derive the DSCP. Default mappings are in place to assist this process.
Step 3: The received CoS bits, already set before the frame entered the switch. Similar to IP precedence, there are a maximum of eight CoS values, each of which is mapped to one of 64 DSCP values. This map can be configured, or the switch can use the default map.
Step 4: A DSCP default value set for the frame, typically assigned through an access control list (ACL) entry.
The frame is then placed into a transmit queue based on its CoS of DSCP valuc, and is ready for transmission. While the frame is in the queue, the port ASIC monitors the buffers and implements weighted random early detection (WRED) to keep the buffers from overflowing. A weighted round robin (WRR), deficit weighted round robin (DWRR) or shaped round robin (SRR) scheduling algorithm is then used to schedule and transmit frames from the egress port. Ingress queuing based on DSCP values, egress queuing based on DSCP values and SRR are supported on the following line cards and interfaces: = WS-X6708-10G-3C/CXL = WS-X6716-10G-3C/CXL = VS-S720-10G-3C/CXL uplinks = Cisco Catalyst 6500 Series Supervisor Engine 32 (all versions) uplinks {©2008 Cisco Systems, Inc. Implementing the Cisco Catalyst 8500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-267 QoS in the Catalyst 6500 Series Switch * QoS processing occurs in three different places in the Catalyst 6500 Series Switch Ingress Qos is performed on the Ingress linecard port. Includes port trust, re-marking, Classiicaton, queue scheduling and ‘congestion avoidance. PFC QoS features include QoS ACLS, ‘marking, classification and policing, Egress Qos is performed on the ‘egress linecard port. Includes queue scheduling, congestion avoidance and in some line cards, shaping. PFC QoS features are applied as follows. Ingress port PFC QoS features: = Port trust state: In PFC QoS, trust means to accept as valid, and use as the basis of the internal DSCP value. By default, ports are untrusted, which sets the initial internal DSCP value to zero. Ports can be configured to trust received CoS, IP precedence or DSCP values. m= Layer 2 CoS remarking: PFC QoS applies Layer 2 CoS remarking to incoming frames with the port CoS in the following situations: — If aportis configured as untrusted — Ifaport is configured as trusted but the traffic is not an ISL, IEEE 802.1Q or IEEE 802.1p frame = Congestion avoidance: if an Ethemet LAN port is configured to trust CoS, then QoS. classifies the traffic on the basis of its Layer 2 CoS value and assigns it to an ingress queue to provide congestion avoidance. If an Ethernet LAN port 1s contigured to trust DSCP and it supports DSCP-queue mapping (see the previous page for a list interfaces that support this), then QoS classifies the traffic on the basis of its DSCP value and assigns it to an ingress queue to provide congestion avoidance. PFC and DFC QoS features: = Internal DSCP: On the PFC and DFCs, QoS associates an internal DSCP value for all to classify this traffic for processing through the system. The initial internal DSCP value is based on the traffic trust state and a final internal DSCP. The final DSCP can be the same as the initial value, or a Modular QoS Command-Line Interface (MQC) policy map can set it to a different value 4-288 Implementing Cisco Data Center Network infrastructure 1 (OCNI-1) v2.0 ‘© 2008 Cisco Systems, Ine. = -MQC policy maps: MQC policy maps can perform one or more of the following operations: — Change the trust state of the traffic — Set the initial internal DSCP value — Mark the traffic — Police the traffic Egress Ethernet LAN port QoS features: = Layer 3 DSCP marking with the final intemal DSCP = Layer 2 CoS marking mapped from the final internal DSCP m= Layer 2 CoS-based or Layer 3 DSCP-based (for those interfaces that support it) congestion avoidance ‘© 2008 Cisco Systems, Inc. 
= MQC policy maps: MQC policy maps can perform one or more of the following operations:
— Change the trust state of the traffic
— Set the initial internal DSCP value
— Mark the traffic
— Police the traffic
Egress Ethernet LAN port QoS features:
= Layer 3 DSCP marking with the final internal DSCP
= Layer 2 CoS marking mapped from the final internal DSCP
= Layer 2 CoS-based or Layer 3 DSCP-based (for those interfaces that support it) congestion avoidance

Ingress QoS Processing
This topic explains ingress QoS processing. Ingress QoS requires the switch to identify the incoming port as trusted or untrusted. The parameters applied by the switch depend on the incoming port value and type.

The Elements: Setting Trust
* When an incoming packet is already marked with a priority, the switch must decide whether to keep this setting or change it.
* The decision is based on the port's trust setting.

Trust Setting         Result
Untrusted             CoS/ToS set to zero
Trust CoS             CoS/ToS maintained
Trust IP Precedence   CoS/ToS maintained
Trust DSCP            CoS/ToS maintained

* Trust settings define what to do with the priority setting in the incoming packet.
* Note: The values of the CoS and ToS may differ on egress, depending on map settings. Maps are discussed later in this lesson.

Any port on the Catalyst 6500 Series Switch can be configured as trusted or untrusted. This trust state dictates how a port marks, classifies, and schedules a frame as it transits the switch.

Untrusted Ports (Default)
If a port is configured as untrusted, the CoS or ToS value of any frame entering that port is reset to zero, giving the frame the lowest priority of service on its path through the switch. Alternatively, the administrator can reset the CoS value of any Ethernet frame that enters an untrusted port to a predetermined value. Setting a port as untrusted prevents the switch from performing congestion avoidance. Congestion avoidance causes the switch to drop frames based on their CoS values after the thresholds defined for that queue are exceeded.

Trusted Ports
When Ethernet frames enter the switch, they can have a CoS or ToS setting that is to be maintained as the frame transits the switch. To support this functionality, the administrator defines the entry port as trusted. As the frame enters a trusted port, the port uses an internal DSCP value to assign a predetermined level of service to that frame. The administrator can configure the port to look at the existing CoS, IP precedence, or DSCP value as the basis for setting the internal DSCP value. Alternatively, the administrator can set a predefined DSCP for every packet entering the trusted port.

The Elements: Ingress Priority to DSCP Maps
* The switch uses an internal DSCP value to assign service levels to the frame as it transits the switch.
* The internal DSCP is derived from the ingress CoS or IP precedence value, using a CoS-to-DSCP or IP-precedence-to-DSCP map.

The internal DSCP value is derived from the corresponding map, depending on whether the port is configured to trust CoS, IP precedence, or ingress DSCP.

The Elements: Setting Extended Trust
* Extended trust allows the switch to instruct an attached IP phone to re-tag the CoS value of packets from a PC attached to the phone.
* A switch port with a phone attached normally has the trust setting enabled. This ensures that QoS priority integrity is not compromised when a downstream device is connected to the network.

Extended trust occurs when the switch instructs an attached IP phone to re-tag the CoS value of packets coming from a downstream PC that is attached to the switch via the IP phone. Switch ports with IP phones attached normally have the trust setting enabled to ensure that QoS priority integrity is not compromised when a downstream device is connected to the network.
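A minimal sketch of such a port follows; the interface and the CoS value of 3 for PC traffic are illustrative assumptions (the mls qos trust commands are covered in detail later in this lesson):

6500(config)#interface gigabitethernet 2/1
6500(config-if)#mls qos trust cos
6500(config-if)#mls qos trust extend cos 3

The first trust command accepts the CoS marked by the phone itself; the extend command instructs the phone to re-tag traffic from the attached PC to CoS 3.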
Scheduling: Ingress Queue Mapping
* Ingress packets are placed into a queue based on ingress CoS (all LAN interfaces) or DSCP (select LAN interfaces) values.
* Frames marked with CoS 5 are placed into the strict priority (SP) input queue, if one is present; other CoS values are placed into the normal queue.
* Within the normal queue, drop thresholds are used to indicate which CoS-tagged packets can be dropped once the queue has filled beyond a certain threshold.

When a frame comes into the switch, it is placed into an Rx queue. The port ASIC implements thresholds on each Rx queue (the number of these thresholds depends on the line card) and uses these thresholds to identify frames to be dropped after the thresholds are exceeded. This action prevents buffer overflows. The port ASIC checks the CoS value (all LAN interfaces) or DSCP value (select LAN interfaces) of the frame to determine which frames should be dropped. Frames with a higher priority are allowed to remain in the buffer for a longer period when congestion occurs. The queue mappings are configurable.
Input queue structures define the number of standard and strict priority queues available on a line card, and the number of thresholds available per standard queue. For more information on the default queue mappings, go to http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/qos.html#wp1478881.

Congestion Avoidance: Tail Drop
* Specify which CoS-tagged packets can be dropped from the queue when a drop threshold has been exceeded.
* After a threshold has been reached, the tail-drop mechanism drops all incoming packets that have a CoS mapped to that threshold until the queued packets drop below the threshold.

Threshold 3: Drop ALL packets with CoS = 4 and 5 (DSCP = 32-47)
Threshold 2: Drop ALL packets with CoS = 2 and 3 (DSCP = 16-31)
Threshold 1: Drop ALL packets with CoS = 0 and 1 (DSCP = 0-15)

Note: Threshold levels are configurable by the administrator.
Note: Threshold counts and CoS-to-receive-queue threshold maps differ across line cards.

After a packet is dropped during normal data transmission, it is retransmitted to support TCP flows. During times of congestion, this action can add to network load and increase buffer overload conditions. Thresholds are fill levels assigned by the switch or network administrator to define the point at which congestion management algorithms can start dropping data from the queue. In QoS, thresholds are deployed as a means of telling the switch which frames can be dropped. For more information on the drop thresholds, go to http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/qos.html#wp1478881. A brief tuning sketch follows.
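Receive-queue thresholds and CoS mappings can be tuned per interface. The sketch below assumes a port type with four thresholds on receive queue 1; queue and threshold counts, exact syntax, and supported values vary by line card, so the numbers here are purely illustrative:

6500(config)#interface gigabitethernet 1/1
6500(config-if)#rcv-queue threshold 1 50 60 80 100
6500(config-if)#rcv-queue cos-map 1 1 0 1

The first command sets the four drop thresholds of receive queue 1 to 50, 60, 80, and 100 percent of the queue; the second maps CoS values 0 and 1 to queue 1, threshold 1.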
Congestion Avoidance: WRED
* Weighted random early detection (WRED) randomly starts to drop packets marked with a particular CoS value when a threshold has been reached, as opposed to tail drop, which drops all packets mapped to that threshold.
* WRED takes advantage of the TCP windowing mechanism to gradually reduce the arrival rate of packets for a particular flow.
* When Threshold 1 is hit, packets with CoS 0 and 1 (DSCP = 0-15) are randomly dropped according to the mark probability; when Threshold 2 is hit, packets with CoS 2 and 3 (DSCP = 16-31) are randomly dropped.

WRED minimizes the impact of dropping high-priority traffic in times of congestion, because it takes the priority of the frames into consideration. The administrator assigns frames with certain CoS values (all LAN interfaces) or DSCP values (select LAN interfaces) to specific thresholds, thereby specifying the frames that are eligible to be dropped. Frames with CoS or DSCP values assigned to higher thresholds are kept in the queue, allowing higher-priority flows to remain intact and minimizing the latency involved in getting the packets from the sender to the receiver.

Ingress QoS Policing
This topic describes the purpose and use of policing and policers. Policing is the ability of the PFC to rate limit traffic coming into the switch, reducing the flow of traffic to a predetermined limit.

The Elements: Policing
* A process of limiting traffic to a prescribed rate.
* Allows the definition of a rate and a burst:
— Rate defines the amount of traffic that is sent per given interval. After that amount has been sent, no more traffic is sent for that interval.
— Burst defines the amount of traffic that can be held in readiness for sending. Traffic in excess of the burst can either be dropped or have its priority setting reduced.

Policing uses a defined rate and burst value to limit traffic to a prescribed rate. The rate value defines the amount of traffic that can be sent during any given interval, and the burst value defines the amount of traffic that can be held in readiness for transmission. After the rate value is reached, no more traffic is sent within that defined interval. Traffic in excess of the burst value can either be dropped or have its priority setting reduced.

A Policing Example
* The token bucket depth and replenishment rate are calculated as follows:
police 100000000 26000 conform-action set-dscp-transmit exceed-action drop
(rate = 100000000, burst = 26000)
* The example specifies a policed rate of 100 Mb/s; the rest is calculated as follows:
— Tokens added per interval = rate / intervals per second = 100,000,000 / 4000 = 25,000 tokens every 1/4000 of a second
— Bucket depth = burst = 26,000 tokens

Policing uses a token bucket to store policing data. Data can only be sent when tokens exist in the bucket. The command shown in this example defines how the token bucket depth and replenishment rate are calculated. Policing is accomplished as follows:
Step 1: At time interval T0, the bucket is loaded with a full complement of tokens.
Step 2: When a packet arrives at the PFC, the number of bits that make up the packet is counted.
Step 3: The PFC checks the token bucket.
Step 4: If the number of bits in the packet is less than or equal to the number of tokens in the bucket, the packet can be forwarded; otherwise, the packet is dropped.
Step 5: The tokens are removed from the bucket.
Step 6: The packet is sent by the PFC to its destination; other packets will also be forwarded within that time interval if enough tokens exist.
Step 7: At the end of the time interval, the token bucket is replenished with a new complement of tokens.
Step 8: The next packet is only forwarded if there are enough tokens in the bucket, and the policing cycle continues.

The Elements: Dual Leaky Bucket Policers
* The PFC3 implements a dual leaky bucket algorithm, allowing the definition of:
— a rate and an eRate (extended rate)
— a burst and an eBurst (extended burst)
* Bucket 1 defines the amount of traffic that can be received in a given interval, set by the burst parameter.
* Bucket 2 defines the amount of traffic that can be received in a given interval, set by the eBurst parameter.

The dual leaky bucket algorithm adds two policing levels: the normal policing level and the excess policing level. The normal policing level equates to the first bucket and defines parameters specifying the depth of the bucket (burst) and the rate at which data should be sent from the bucket (rate). The excess policing level equates to a second bucket and defines parameters specifying the depth of the bucket (eBurst) and the rate at which data should be sent from the bucket (eRate).
In this process, the PFC accepts incoming streams of data and fills the first bucket to a level that is less than or equal to the assigned depth (burst value). Data that overflows the first bucket can be marked down and passed to the second bucket. The second bucket accepts an incoming rate of data from bucket one to a level that is less than or equal to the eBurst value. Data from the second bucket is sent at a rate defined by the eRate parameter minus the rate parameter. Data overflowing the second bucket can be marked down or dropped.

Types of Policers: Aggregate
* Supervisor Engine 720 and Supervisor Engine 32 support an aggregate policer, which can be applied to:
— a port or group of ports
— a VLAN or group of VLANs
* When applied to multiple ports or VLANs, the policed rate for all traffic across those ports is limited to the stated policed rate.
* An aggregate applies a policing rule to a port or VLAN; it polices all the traffic coming into the port or VLAN and applies the policed rate to that traffic.

An aggregate can also be used to rate limit traffic. The aggregate policer applies to all traffic inbound on a port or VLAN that matches a specified QoS ACL. PFC QoS applies the defined aggregate bandwidth limits in a cumulative manner to all flows in matched traffic. For example, if an aggregate policer is configured to allow 1 Mb/s for all TFTP traffic flows on VLAN 1 and VLAN 3, it effectively limits the combined TFTP traffic for all flows on VLAN 1 and VLAN 3 to 1 Mb/s. Up to 1023 aggregate policers are supported. A brief configuration sketch follows.
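A minimal sketch of defining a named aggregate policer and referencing it from a policy map class (the policer name, class name, and values are illustrative; both commands are covered in detail later in this lesson):

6500(config)#mls qos aggregate-policer TFTP-1MB 1000000 32000 conform-action transmit exceed-action drop
6500(config)#policy-map LIMIT-TFTP
6500(config-pmap)#class TFTP-CLASS
6500(config-pmap-c)#police aggregate TFTP-1MB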
Types of Policers: Microflow
* All flows coming into the ports associated with the microflow policer are policed down to the stated rate. Each flow is limited to the rate specified in the microflow.
* Note: A flow is defined by the flow mask in use by the system.

A microflow defines the policing of a single flow. A flow is defined by a session with a unique source and destination MAC address, source and destination IP address, and TCP/User Datagram Protocol (TCP/UDP) port numbers. For each new flow initiated through a port or VLAN, the microflow can be used to limit the amount of data received for that flow by the switch. Packets exceeding the prescribed rate limit can either be dropped or have their DSCP value marked down.
For example, a microflow policer configured to limit the TFTP traffic to 1 Mb/s on VLAN 1 and VLAN 3 means that 1 Mb/s is allowed for each flow in VLAN 1 and 1 Mb/s for each flow in VLAN 3. Up to 63 microflow policers are supported.

Egress QoS Policing
This topic explains the policing that can be implemented on egress ports. Traffic on a switch is not restricted to ingress ports. Egress ports must also be considered, as they too can benefit from policing functionality.

Egress Policing
* The PFC3 also supports egress policing.
* An egress policer can be applied only to a VLAN or a routed interface:
— The physical egress port is not known when the egress policing function is performed; the only known factor is the VLAN ID (from the internal header).
— Both VLAN interfaces and routed interfaces have VLAN IDs. Routed interfaces are assigned internal VLANs.

To support the policing of egress traffic, the PFC uses a configurable map to derive a CoS value from the final internal DSCP value associated with the traffic. PFC QoS sends the derived CoS value to the egress LAN ports for use in scheduling and for rewriting ISL and 802.1Q frames. Egress policing can only be applied to a VLAN or routed interface, because the physical egress port is not known when the egress policing function is performed.

Port- vs. VLAN-Based QoS
This topic describes the differences between port-based QoS and VLAN-based QoS.

VLAN-Based QoS
* By default, the PFC uses policy maps assigned to LAN ports.
* Layer 2 ports (switch ports) can be told to use the policy map attached to their parent VLAN interface (VLAN-based QoS).
* With port-based QoS, a policy map is applied to a physical switch interface, and it manages traffic only on that switch port. With VLAN-based QoS, the policy map is applied to the VLAN interface, and traffic through all associated switch ports is managed by that policy map.

PFC QoS uses policy maps attached to LAN ports by default. The default applies the policy map to a specific physical switch interface and manages traffic only on that switch port. With VLAN-based QoS, the policy map is applied to the VLAN interface, so all traffic associated with that VLAN is managed by the policy map, as shown in the sketch below.
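A minimal sketch of switching a Layer 2 port to VLAN-based QoS (the interface, VLAN, and the hypothetical policy map name MARK-VLAN10 are illustrative):

6500(config)#interface gigabitethernet 1/1
6500(config-if)#switchport
6500(config-if)#mls qos vlan-based
6500(config-if)#exit
6500(config)#interface vlan 10
6500(config-if)#service-policy input MARK-VLAN10

With mls qos vlan-based configured, the port ignores any port-level policy and is instead governed by the policy attached to interface VLAN 10.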
Modular QoS CLI
This topic explains how the MQC provides a framework for applying QoS on the Catalyst 6500 Series Switch.

Modular QoS CLI
* Policing and classification policies on an interface are applied with the Modular QoS CLI (MQC).
* MQC is a modular framework for applying QoS and defines:
— a standard CLI across all Cisco IOS based platforms
— the use of the policy map and class map for defining QoS actions on the Catalyst 6500
* A class map refers to a set of classification criteria. A policy map ties class maps to one or more actions, such as trusting, marking, or policing. The service-policy command binds the policy map, and its classification and action rules, to an interface.

MQC provides the framework for applying QoS and defines a standard CLI for the application of QoS across all Cisco IOS Software-based platforms. On the Catalyst 6500 Series Switch, MQC defines the use of policy and class maps when defining QoS actions.

VSS 1440 and QoS
This topic describes the VSS 1440-specific QoS features.

VSS QoS — Aggregate Policers
* Handled by PFC QoS and applied on Layer 3 interfaces
(The slide shows an aggregate policer applied on the Layer 3 interfaces GigabitEthernet 2/2/10 and GigabitEthernet 2/2/39 of the VSS.)

Classifiers and policers are all handled by the PFC or DFCs. Classification and policing are executed either by the PFC on the active and hot-standby supervisor, or by the ingress line card DFC. Classification and policing functions are handled by PFC QoS. Policers and classifiers must be applied on either:
= Layer 3 interfaces (VLAN or physical interfaces)
= Port channels
Note: Policies on Layer 2 interfaces are currently not supported.
Aggregate policers that are applied on VLANs or port channels that have interfaces distributed across multiple forwarding engines are subject to distributed policing caveats.

QoS on the VSL
* QoS is enabled by default on the VSL
* Currently, it is not configurable
* VSLP and control frames are always handled as priority packets

The Virtual Switch Link (VSL) carries both control plane and data plane traffic. The VSL is an important link; it must therefore ensure that control plane traffic has guaranteed bandwidth while data plane traffic is still handled correctly. The control plane traffic is critical because it maintains the communication between the members of the VSS 1440. The VSS 1440 protocols—Virtual Switch Link Protocol (VSLP), Link Management Protocol (LMP), Role Resolution Protocol (RRP), and internal protocols such as the Secure Copy Protocol (SCP)—are all carried across the VSL to communicate between the two chassis in the VSS 1440.
Note: Even if QoS is not enabled, the VSL interfaces are automatically provisioned with queues and buffers to ensure that the VSL always has its queues provisioned.
The following pertains to VSL QoS:
= The VSL has QoS provisioned by default (this cannot be changed).
= The VSL is always provisioned with trust CoS.
= Trust CoS mode allows for ingress and egress classification and prioritization of the traffic on the VSL.
= VSLP and other control frames are always marked as priority packets and are always queued and classified as such.
= Service policies are not supported on the VSL.
= CoS maps, thresholds, and queues are not configurable on the VSL.
Note: Any change to the trust CoS setting is currently not allowed.
Configuring QoS
This topic describes QoS configuration and explains the options used in the process.

Globally Enabling QoS
* The QoS engine must be enabled prior to configuring QoS.
6500#show mls qos
  QoS is disabled globally
6500(config)#mls qos
7w3d: %SYS-5-CONFIG_I: Configured from console by console
6500#show mls qos
  QoS is enabled globally
  Microflow policing is enabled globally
  QoS ip packet dscp rewrite enabled globally
  Vlan or Portchannel(Multi-Earl) policies supported: Yes
  Egress policies supported: Yes
<output omitted>

The QoS engine must be enabled before you configure QoS.

Preserving Incoming ToS
* To preserve the incoming ToS on the egress packet, IP DSCP rewrite must be disabled from its default enabled state.
6500#show mls qos
  QoS is enabled globally
  Microflow policing is enabled globally
  QoS ip packet dscp rewrite enabled globally    <- default state
  Vlan or Portchannel(Multi-Earl) policies supported: Yes
  Egress policies supported: Yes
6500(config)#no mls qos rewrite ip dscp
7w3d: %SYS-5-CONFIG_I: Configured from console by console
6500#show mls qos
  QoS is enabled globally
  Microflow policing is enabled globally
  QoS ip packet dscp rewrite disabled globally
  Vlan or Portchannel(Multi-Earl) policies supported: Yes

DSCP transparency preserves the received Layer 3 ToS byte. QoS uses the marked or marked-down CoS value derived from the ToS byte for egress queuing and egress-tagged traffic.

Setting Ingress Trust
* The trust setting of a port always defaults to untrusted. To change it, use the following command:
6500(config-if)#mls qos trust ?
  cos            cos keyword
  dscp           dscp keyword
  extend         extend keyword
  ip-precedence  ip-precedence keyword
* If the port is set to untrusted, it uses the default port CoS to tag the packet.
* The default port CoS is initially set to zero. If this value needs to be changed, use the mls qos cos command.

The default trust setting on any port is untrusted. Use the following command to change this to a trusted port:
6500(config-if)#mls qos trust {cos | dscp | extend | ip-precedence}
The extend option provides the IP phone with the CoS value for the downstream PCs connected through the phone.
Use the following command to change the default port CoS tag from zero, which is initially set for untrusted ports:
6500(config-if)#mls qos cos {0 - 7}
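Putting these commands together, a minimal sketch for a trusted port (the interface and values are illustrative):

6500(config)#interface gigabitethernet 3/1
6500(config-if)#mls qos trust dscp
6500(config-if)#mls qos cos 2

Here the received DSCP is used as the basis of the internal DSCP, while CoS 2 is applied as the default port CoS for frames that arrive without a usable tag.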
Configuring PFC QoS
* Marking and policing on the PFC can be disabled globally.
* In this mode, all ports default to a configuration of trust CoS, and a default port CoS is applied to all ingress packets on ports that cannot be set to trust CoS.
6500(config)#mls qos queueing-only
* By default, microflow policing is enabled for routed (Layer 3 switched) traffic only.
* Microflow policing can also be enabled for bridged traffic (this is disabled by default); it is enabled in interface VLAN configuration mode.
6500(config)#interface vlan 300
6500(config-if)#mls qos ?
  bridged        bridged keyword
  dscp-mutation  dscp mutation keyword
  exp-mutation   exp mutation keyword
  loopback       Loopback cable between LAN and WAN port

Use the following global command to disable global marking and policing, and to configure all ports to trust Layer 2 CoS:
6500(config)#mls qos queueing-only
By default, microflow policers affect only routed traffic. Use the following commands to enable microflow policing of bridged traffic on a specific VLAN:
6500(config)#interface vlan vlan_id
6500(config-if)#mls qos bridged

Named Aggregate Policers
* A named aggregate is an aggregate policer that is referenced within a policy map.
* It defines a policed aggregate rate and burst, and contains action settings that define what to do with packets that conform to the specified rate, and what to do with packets that are outside the stated burst values.
6500(config)#mls qos aggregate-policer XYZ ?
  <32000-4000000000>  Bits per second
* Enter the policer name (XYZ in this example), and then use bits per second to define the rate that will be allowed.
* Note: The configuration parameters used for this example do not reflect stated guidelines from Cisco. They are simply used to show how a named aggregate is built.

Aggregate policers can be applied to ingress interfaces on multiple modules. Aggregate policing works independently on each DFC-equipped switching module, and independently on the PFC, which supports any non-DFC-equipped switching modules. The named aggregate policer is referenced within a policy map, and defines the policed aggregate rate and burst. It specifies the action settings that define what to do with traffic that conforms to the specified rate and what to do with traffic that is outside of the stated burst values.

Named Aggregate Policers (Cont.)
* Specify the normal burst value to define the depth of the token bucket:
6500(config)#mls qos aggregate-policer XYZ 100000000 ?
  <2000-31250000>  normal burst bytes
  conform-action   action when rate is not exceeded
  pir              peak information rate
  violate-action   action when rate is violated
6500(config)#mls qos aggregate-policer XYZ 100000000 10000 ?
* Specify the maximum burst value to define the depth of the second token bucket:
6500(config)#mls qos aggregate-policer XYZ 100000000 10000 ?
  conform-action  action when rate is not exceeded
  pir             peak information rate
  violate-action  action when rate is violated
6500(config)#mls qos aggregate-policer XYZ 100000000 10000 20000

A normal burst value defines the depth of the first token bucket, and the maximum burst value defines the depth of the second token bucket.

Named Aggregate Policers (Cont.)
* Specify the peak information rate for the second leaky bucket. This value must be greater than or equal to the first rate.
* Specify the conform action; this is what the policer should do with traffic within the stated rate.
6500(config)#mls qos aggregate-policer XYZ 100000000 10000 20000 pir 150000000 ?
  conform-action  action when rate is not exceeded
  exceed-action   action when rate is exceeded
  violate-action  action when rate is violated

The peak information rate (PIR) value is defined for the second leaky bucket and must be greater than or equal to the rate of the first token bucket. The conform-action option defines the action for traffic that conforms to the specified rates.

Named Aggregate Policers (Cont.)
* Specify the exceed action; this is the action to be taken when the normal rate is exceeded.
6500(config)#mls qos aggregate-policer XYZ 100000000 10000 20000 pir 150000000 conform-action transmit ?
  exceed-action   action when rate is exceeded
  violate-action  action when rate is violated
6500(config)#mls qos aggregate-policer XYZ 100000000 10000 20000 pir 150000000 conform-action transmit exceed-action policed-dscp-transmit ?
  violate-action  action when rate is violated
6500(config)#mls qos aggregate-policer XYZ 100000000 10000 20000 pir 150000000 conform-action transmit exceed-action policed-dscp-transmit violate-action ?
  drop                   drop packet
  policed-dscp-transmit  change dscp per policed-dscp map and transmit packet

The exceed-action option defines the action taken for traffic exceeding the normal rate burst (the burst value for the first token bucket). The violate-action option defines the action taken for traffic exceeding the PIR (the burst value for the second token bucket).

Defining Policy Maps
* The process of creating a policy map starts by first creating the class map.
6500(config)#class-map ?
  WORD       class-map name
  match-all  Logical-AND all matching statements under this classmap
  match-any  Logical-OR all matching statements under this classmap
6500(config)#class-map abc123
6500(config-cmap)#
* In the example, the class map abc123 has been created.
* The first command puts the administrator into class map configuration mode.
* Following this, a series of match statements must be defined to classify traffic associated with this class map.

A policy map is configured by first defining a class map. The class map classifies and associates traffic types with policy maps.

Defining Policy Maps (Cont.)
* The match statement identifies and associates traffic types with the class map.
6500(config-cmap)#match ?
  cos                  IEEE 802.1Q/ISL class of service
  destination-address  Destination address
  fr-dlci              Match on fr-dlci
  input-interface      Select an input interface to match
  ip                   IP specific values
  mpls                 Multi Protocol Label Switching specific values
  protocol             Protocol
  <output omitted>

This example shows how the match command is used. Class map command restrictions include:
= PFC QoS supports the match any class map command.
= PFC QoS supports class maps that contain a single match command.
= PFC QoS does not support these class map commands:
— match cos
— match class-map
— match destination-address
— match input-interface
— match qos-group
— match source-address

Defining Policy Maps (Cont.)
* After the class map is defined, create the policy map.
6500(config)#policy-map ABC123
6500(config-pmap)#?
QoS policy-map configuration commands:
  class        policy criteria
  description  Policy-Map description
  exit         Exit from QoS policy-map configuration mode
  no           Negate or set default values of a command
  rename       Rename this policy-map
* The last command in the example places the administrator into policy map configuration mode.

After the class map is defined, the policy map is created and the class map applied.

Defining Policy Maps (Cont.)
* The next step is to enter the name of the class previously defined.
6500(config-pmap)#class abc123
6500(config-pmap-c)#?
QoS policy-map class configuration commands:
  bandwidth       Bandwidth
  police          Police
  priority        Strict Scheduling Priority Queue
  queue-limit     Max Threshold for Tail Drop
  random-detect   Enable Random Early Detection as drop policy
  service-policy  Configure QoS Service Policy
  set             Set QoS values
  trust           Set trust value for the class
* This places the administrator into policy map class configuration mode. From this mode, a variety of class-related actions can be configured. Several of the actions listed (bandwidth, priority, queue-limit, random-detect, and service-policy) are not supported in PFC hardware.

After the class map is applied to the policy map, the administrator specifies the action to be taken when that traffic type is identified.
Policy Map Command Restrictions
PFC QoS does not support these policy map commands:
= class class_name destination-address
= class class_name input-interface
= class class_name protocol
= class class_name qos-group
= class class_name source-address

Policy Map Class Command Restrictions
PFC QoS does not support these policy map class commands:
= bandwidth
= priority
= queue-limit
= random-detect
= set qos-group
= service-policy

Defining Policy Maps (Cont.)
* One class map action that is normally configured is the set command.
6500(config-pmap-c)#set ?
  atm-clp     Set ATM CLP bit to 1
  cos         Set IEEE 802.1Q/ISL class of service/user priority
  dscp        Set DSCP in IP(v4) and IPv6 packets
  precedence  Set precedence in IP(v4) and IPv6 packets
  qos-group   Set QoS Group
6500(config-pmap-c)#set ip ?
  dscp        Set IP DSCP (DiffServ CodePoint)
  precedence  Set IP precedence
* Use the set ip command to reset the ToS bits in the packets to the values defined in this class map. This method is known as marking.

The set parameter allows the network administrator to reset the ToS bits according to the value set in the class map. This action is called "marking."

Configuring Policy Map Class Actions
Policy map class action restrictions include the following:
= PFC QoS does not support the set qos-group policy map class command.
= PFC QoS supports the set ip dscp and set ip precedence policy map class commands for IPv4 traffic:
— You can use the set ip dscp and set ip precedence commands on non-IP traffic to mark the internal DSCP value, which is the basis of the egress Layer 2 CoS value.
— The set ip dscp and set ip precedence commands are saved in the configuration file as set dscp and set precedence commands.
= PFC QoS supports the set dscp and set precedence policy map class commands for IPv4 and IPv6 traffic.
= You cannot do all three of the following in a policy map class:
— Mark traffic with the set commands
— Configure the trust state
— Configure policing
In a policy map class, you can either mark traffic with the set commands or do one or both of the following:
= Configure the trust state
= Configure policing

Defining Policy Maps (Cont.)
* A previously defined named aggregate can also be referenced within the class map.
6500(config)#policy-map ABC123
6500(config-pmap)#class abc123
6500(config-pmap-c)#police ?
  <32000-4000000000>  Bits per second
  aggregate           Choose aggregate policer
6500(config-pmap-c)#police aggregate ?
  WORD  enter aggregate-policer name
6500(config-pmap-c)#police aggregate XYZ
* A policer can also be defined within the class map by using the police command.
6500(config-pmap-c)#police 50000000 10000 26000 pir 100000000 conform-action transmit exceed-action policed-dscp-transmit violate-action drop
* This example sets a rate of 50 Mb/s for transmitted traffic; an extra 50 Mb/s over and above that is to be marked down, and anything in excess of 100 Mb/s is to be dropped.

A previously defined named aggregate policer can be referenced within the class map. A consolidated end-to-end example follows.
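Pulling these pieces together, a minimal end-to-end MQC sketch (the class name, match criterion, policing values, and interface are illustrative):

6500(config)#class-map match-all abc123
6500(config-cmap)#match ip dscp 26
6500(config-cmap)#exit
6500(config)#policy-map ABC123
6500(config-pmap)#class abc123
6500(config-pmap-c)#police 50000000 10000 conform-action transmit exceed-action drop
6500(config-pmap-c)#exit
6500(config-pmap)#exit
6500(config)#interface gigabitethernet 1/1
6500(config-if)#service-policy input ABC123

The class map selects traffic marked DSCP 26, the policy map polices it to 50 Mb/s, and the service-policy command binds the policy to the ingress direction of the interface.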
Mapping Received CoS to Internal DSCP
* If a port is set to trust CoS, the received CoS will be used to map to the internal DSCP. The default map can be changed.
6500(config)#mls qos map cos-dscp ?
  <0-63>  dscp values separated by spaces (8 values total)
6500(config)#mls qos map cos-dscp 0 7 18 22 30 46 52 61
6500#show mls qos maps | begin Cos-dscp
  Cos-dscp map:
    cos:   0  1  2  3  4  5  6  7
    dscp:  0  7 18 22 30 46 52 61
* This command allows the specification of 8 DSCP values that are mapped directly to CoS values 0 through 7.

Use the mls qos map cos-dscp command to change the default map. This command allows the specification of eight DSCP values, which are mapped directly to CoS values 0 through 7.

Mapping Received IP Precedence to Internal DSCP
* If a port is set to trust IP precedence, the received IP precedence values are used to map to the internal DSCP. The default map can be changed.
6500(config)#mls qos map ip-prec-dscp ?
  <0-63>  dscp values separated by spaces (8 values total)
6500(config)#mls qos map ip-prec-dscp 0 10 13 26 37 46 53 60
6500#show mls qos maps | begin IpPrecedence-dscp
  IpPrecedence-dscp map:
    ipprec:  0  1  2  3  4  5  6  7
    dscp:    0 10 13 26 37 46 53 60
* This command allows the specification of 8 DSCP values which are mapped directly to IP precedence values 0 through 7.

If the port is set to trust IP precedence, the received IP precedence value is used to map to the internal DSCP. This map can be changed by using the mls qos map ip-prec-dscp command. The command allows the specification of eight DSCP values, which are mapped directly to IP precedence values 0 through 7.

Control Plane Policing and CPU Rate Limiting
This topic describes Control Plane Policing (CoPP) and CPU rate limiting on the Catalyst 6500 Series Switch.

Multilevel Hardware and Software Protection
The Catalyst 6500 Series Switch with PFC3 is equipped with multiple levels of hardware and software control plane protection mechanisms, which keep the switch CPU running at reasonable levels in the event of an intended or unintended denial of service (DoS) attack against the switch CPU. Traffic punted to the switch processor (SP) and route processor (RP) is managed via a series of dedicated hardware rate limiters. In addition to the hardware rate limiters, hardware QoS policers are applied to ingress traffic that is not managed by the rate limiters.
For more information on CoPP and hardware rate limiters, go to http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper0900aecd802ca5d6.html.

Control Plane Policing (CoPP)
* Applies Catalyst hardware QoS policies to traffic punted to the CPU
* Customizable traffic control and CPU protection
* Logical control plane interface
* Provides the ability to rate limit the total traffic volume destined to the control plane
* Hardware-based CoPP adds a very granular and tunable mechanism to identify and manage the volume of traffic sent to the switch CPU.

With CoPP, a new interface called the control plane interface is defined as effectively being the interface to the CPU. A QoS policy can be applied to that control plane interface to classify and rate limit the amount of traffic going to the CPU.
The mechanisms are hardware-based and utilize the familiar MQC interface to define the CoPP rules. The Catalyst 6500 Series Switch supports CoPP in hardware on the Catalyst 6500 Series Supervisor Engine 32 and Catalyst 6500 Series Supervisor Engine 720, starting with Cisco IOS Software Release 12.2(18)SXD1.
CoPP is actually applied at two different levels on the Catalyst 6500 Series Switch:
= The first level is the hardware-based forwarding engine mitigation.
= The second level is the software CoPP.
All forwarding engines are programmed with the same global CoPP policy, even though they each police traffic independently, so the route processor CPU could ultimately be presented with N times the configured traffic rate, where N denotes the number of forwarding engines (active PFCs and DFCs) present in a Catalyst 6500 Series Switch chassis. Thus, after each forwarding engine has independently mitigated a line-rate attack in hardware, CoPP is enforced in software at interrupt level to make sure that only the exact rate configured in the control-plane policy is processed by the route processor. This should be taken into account when configuring a control-plane policer.

Configuring CoPP
* Enable MLS QoS (if not already enabled)
* Create a class map matching the desired traffic
* Create a policy map and apply policing to the created class map
* Apply the policy map to the control plane interface
policy-map control-plane-policy
 class reporting
  police 100000 conform-action transmit exceed-action drop
control-plane
 service-policy input control-plane-policy

CoPP allows filtering and rate limiting of traffic sent to the route processor. This CoPP capability is achieved by using existing QoS policers and applying them to a new interface, the control plane interface. This interface is attached to the route processor. As a result, a control plane policy protects traffic inbound to the route processor CPU (CoPP only affects input packets, not output packets), and it can thus prevent DoS traffic from congesting the route processor CPU.
CoPP policies depend heavily on the customer environment and on where the switch is used in that environment. The following methodology can be used to determine the right CoPP policies for a given switch:
= Determine the classification scheme for your network: enumerate the known types of traffic that access the route processor and divide them into categories (classes). Examples of categories include an exterior gateway protocol (EGP) class, interior gateway protocol (IGP) class, management class, reporting class, monitoring class, critical application class, undesirable class, and default class.
= Classify traffic going to the route processor CPU using ACLs. For each category identified in the first step, different types of traffic can be further categorized using granular access control entries.
= Review identified traffic, adjust classification, and apply liberal CoPP policies for each class of traffic. It is essential to apply a corresponding policing action for each class, because the Catalyst 6500 Series Switch will ignore a class that does not have a corresponding policing action. If the traffic in a given class should not be rate limited, configure a transmit policing conform-action with a high rate and a policing exceed-action of drop (for example, "police 31500000 conform-action transmit exceed-action drop").
Verifying CoPP
(Figure: sample show policy-map control-plane output with per-class conform and exceed byte counts and actions)

Use the following commands to examine and verify the operation of CoPP:

■ show policy-map control-plane [input [class class-name]]
■ show mls qos ip
■ show access-list

Enabling CPU Rate Limiters

6500(config)#
mls rate-limit {all | layer2 | multicast | unicast} ...

* Enable the desired CPU rate limiter

The Catalyst 6500 Series Supervisor Engine 32 and Catalyst 6500 Series Supervisor Engine 720 support platform-specific, hardware-based rate limiters for special networking scenarios resembling DoS attacks. These hardware CPU rate limiters are called special-case rate limiters because they cover a specific predefined set of IPv4, IPv6, unicast, and multicast DoS scenarios. These DoS scenarios identify special cases where traffic needs to be processed by the switch processor or route processor CPU. Examples include multicast traffic for which a destination prefix cannot be found in the routing table, dropped traffic that needs to be processed by the CPU to send an Internet Control Message Protocol (ICMP) unreachable message back to the source, and special packet types that cannot be identified with an ACL.

The special-case rate limiters do not provide the same level of granularity as CoPP and are thus especially useful for cases where hardware CoPP cannot be used to classify particular types of traffic. Such special packet types include packets with a Time to Live (TTL) equal to 1, packets that fail the maximum transmission unit (MTU) check, packets with IP options, and IP packets with errors. Other examples of DoS scenarios not covered by CoPP include CPU protection against line-rate attacks using multicast packets, and switch processor CPU protection. CoPP and the special-case rate limiters should be used together.

Note    Special-case rate limiters will override the hardware CoPP policy for packets matching the rate limiter criteria.

The Catalyst 6500 Series Supervisor Engine 32 and Catalyst 6500 Series Supervisor Engine 720 forwarding engines provide 10 hardware registers to be used for special-case rate limiters. Eight of these registers are present in the Layer 3 forwarding engine and two are present in the Layer 2 forwarding engine. The registers are assigned on a first-come, first-served basis, and some rate limiters share one register. Should all registers be used, the only means to configure another special-case rate limiter is to free one register, so plan carefully before committing all ten special-case rate limiter hardware resources.
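As a hedged illustration (the limiter choices and rates are examples only, and keyword availability varies by supervisor and software release), two commonly used special-case rate limiters could be enabled like this:

6500(config)# mls rate-limit all ttl-failure 100 10
6500(config)# mls rate-limit unicast ip options 100 10
6500(config)# end
6500# show mls rate-limit

The first command limits packets failing the TTL check to 100 packets per second with a burst of 10; the second does the same for IPv4 packets carrying IP options. The show mls rate-limit command then lists all limiters and their current state.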
These rate limiters are supported in all available Cisco Catalyst operating system and Cisco IOS Software releases for the Catalyst 6500 Series Supervisor Engine 720 and Catalyst 6500 Series Supervisor Engine 32, and rate limiters have been added over time. An exhaustive list of special-case rate limiters can be obtained by issuing the show mls rate-limit command.

Summary
This topic summarizes the key points that were discussed in this lesson.

* QoS classifies and schedules traffic according to priority during times of network congestion.
* QoS can be applied as an ingress or egress parameter.
* The ToS field is used to indicate IP precedence.
* Policing is performed by the PFC or DFC. Egress policing can be applied only to a VLAN or routed interface, because the physical egress port is not known when the egress policing function is performed.
* With VLAN-based QoS, applying the policy map to the VLAN interface means that all traffic associated with that VLAN is managed by the policy map.
* MQC provides the framework for applying QoS and defines a standard CLI for the application of QoS across all Cisco IOS-based platforms.
* The QoS engine must be enabled before you configure QoS.
* CoPP is used to protect CPU resources.
* VSS has QoS enabled by default on the VSL, and this cannot be changed.

Lesson 9
Implementing EEM

Overview
This lesson explains the Cisco IOS Embedded Event Manager (EEM) functionality and architecture, and how EEM can be used for automating tasks and troubleshooting.

Objectives
Upon completing this lesson, you will be able to describe and configure EEM. This ability includes being able to meet these objectives:
■ Describe the EEM functionality and usage
■ Understand the EEM architecture
■ Describe the EEM event detectors
■ Identify the EEM applets and Tcl scripts
■ Identify hardware and software requirements
■ Describe the EEM policy configuration steps

EEM Overview
This topic describes the EEM functionality and architecture, and lists the event detectors.

Embedded Event Manager
* Cisco IOS Software enhancement that is available on the Cisco Catalyst 6500 Series Switch
* A combination of processes designed to monitor key system parameters such as:
  - CPU utilization
  - Interface counters
  - SNMP
  - Syslog events
* Acts on specific events, or on thresholds/counters that are exceeded
* http://www.cisco.com/go/eem

EEM is a generic framework that detects faults such as the following:
■ CPU hog detection
■ Memory use detection
■ Watchdog mechanism
■ Memory leak detection
■ Link failures
■ Soft high availability test failures
■ Cisco Generic Online Diagnostics (GOLD) test failures

EEM allows you to script your own responses to detected events within the system. An event does not have to be a fault; for example, an event can be a counter incrementing to a certain value or the specified result of a GOLD test. A network engineer can use Tool Command Language (Tcl) to execute any command-line interface (CLI) command based on a detected event.
EEM offers the ability to monitor events and take informational or corrective action when the monitored events occur or when a threshold is reached. It extends the basic notification methods of syslog and Simple Network Management Protocol (SNMP) by allowing prescripted actions based on monitored events. For example, if EEM is programmed to monitor a specific interface and detects that the interface is down, it can execute a script that performs a "shut" followed by a "no shut" to try to reset the interface. The script could then send an e-mail to the network administrators announcing this event and the attempted fix. EEM can monitor syslog messages and take actions based on the text of a particular syslog message. EEM can also be used to craft custom SNMP traps for unique environments.

Hardware and Software Requirements
Hardware and software requirements include:
■ Minimum Cisco IOS Software Release 12.2(18)SXF5
■ All Cisco Catalyst 6500 Series Supervisor Engines 720 and 32
■ All service modules and WAN modules
■ All line card types: classic, CEF256, dCEF256, CEF720, dCEF720
■ All Cisco Catalyst 6500 Series Switch chassis, legacy and E-Series

Note    EEM is included in Cisco IOS Software, with no additional licensing required.

Basic EEM Architecture
EEM is a policy-driven process by which faults in the Cisco IOS software system are reported through a defined API. The EEM policy engine receives notifications when faults and other events occur. EEM policies implement recovery on the basis of the current state of the system and the actions specified in the policy for a given event. Recovery actions are triggered when the policy is run.

EEM consists of:
■ Event detectors (publishers)
■ Event manager
■ Event manager policy engine (subscriber)

The policy engine drives two types of policies that users can configure:
■ CLI applet policies
■ Tcl policies

The following event detectors are among those available:
■ Application: Allows Cisco IOS applications or EEM policies to publish application-specific events
■ CLI: Parses CLI commands for regular expression matches and publishes an event on a successful match
■ Counter: Maintains persistent EEM counters that can be set by policies; a policy can be triggered when a specific counter crosses a threshold
■ GOLD: Provides a generic hardware fault detection framework for customers to define their own fault coverage and corrective action
■ Interface Counter: Generates an event when a specific IDB port generic statistics counter crosses a threshold (above or below)
■ SYS Manager: Generates an event for Cisco IOS modular process start, normal or abnormal stop, and restart events
■ SYS Monitor: Generates an event when Cisco IOS memory leaks occur, or when deadlocks or infinite loops are detected in Cisco IOS tasks (processes)

EEM uses software programs known as event detectors to determine when an EEM event occurs. The event detectors are Cisco IOS processes that run all the time. Some event detectors are available in every Cisco IOS software release, but most event detectors have been introduced in a specific release.

Note    Refer to the Catalyst 6500 IOS Configuration Guide and Release Notes to determine the current event detector list.
Each event detector monitors a subset of the operational state of the switch. Upon detecting certain events, the job of the event detector is to raise an alert and provide information about the event that just occurred. The following describes some of the available event detectors.

Application Event Detector
Administrator-configured policies registered to the EEM subsystem can publish their own events using this event detector; this gives a policy the ability to trigger another policy to execute.

CLI Event Detector
When a CLI command entered from the console matches a predefined CLI command defined by the administrator, this event detector can generate an event. This detector typically uses a pattern match to look for the specific command in order to trigger an event.

Counter Event Detector
Should the value of a designated counter identified within a policy change, this event detector can generate an event. For example, policy A increments a counter, and when that counter exceeds a threshold, policy B is invoked.

GOLD Event Detector
The GOLD event detector publishes an event when a GOLD failure event is detected on a specified module.

Interface Counter Event Detector
When a threshold (absolute or incremental) for a specific port counter is crossed, this event detector can generate an event. This provides an easier way to track interface statistics. The interface counters that are supported include: Input Errors, Input Errors CRC, Input Errors Frame, Input Errors Overrun, Input Packets Dropped, Interface Resets, Output Buffer Failures, Output Buffers Swapped Out, Output Errors Underrun, Output Errors, Output Packets Dropped, Receive Broadcasts, Receive Giants, Receive Rate PPS, Receive Rate BPS, Receive Runts, Receive Throttle, Reliability, RX Load, TX Load, Transmit Rate PPS, and Transmit Rate BPS.

SYS Manager Event Detector
The system manager event detector generates events for Cisco IOS Software Modularity process start, normal or abnormal stop, and restart events. The events generated by the system manager allow policies to change the default behavior of the process restart.

SYS Monitor Event Detector
Should a Cisco IOS software memory leak occur, or a deadlock or loop occur in a Cisco IOS software task (that is, a Cisco IOS Software Modularity process), this detector will generate an event.

Further event detectors include:
■ Syslog: Generates an event when a specific syslog message is generated; the match is determined using a regular expression
■ Timer: Generates an event at a specific time or after a specific period (countdown)
■ IOS Watchdog: Generates an event when process or total system CPU utilization or memory use crosses a threshold
■ None: Used as a placeholder for policies that are manually triggered via the event manager run policy-name command
■ OIR: Generates an event when a line card is inserted into or removed from the chassis
■ RF: Generates an event for all redundancy framework notifications and state transitions
■ SNMP: Generates an event when a specific SNMP counter crosses a threshold

Syslog Event Detector
This event detector will generate an event when a set syslog message is generated. Regular expressions can be used to match on part of a syslog message to generate the event.
This detector also allows a match on a number of pattern occurrences before generating an event (for example, if syslog message X occurs a given number of times within 5 minutes, then generate an event).

Timer Event Detector
Used to generate an event based on one of the following four timer events:
■ An absolute time-of-day timer
■ A countdown timer that publishes an event when the value hits 0
■ A watchdog timer that publishes an event when the timer counts down to 0, upon which it resets itself and begins the cycle again
■ A cron timer that uses a UNIX-based cron specification to indicate when an event should be published

Cisco IOS Software Watchdog Event Detector
This event detector publishes an event when one of the following occurs:
■ CPU utilization for a Cisco IOS software process crosses a threshold
■ Memory use for a Cisco IOS software process crosses a threshold
■ Total available system memory crosses a threshold
■ Total used system memory crosses a threshold
■ Total system CPU utilization crosses a threshold

None Event Detector
This event detector is used as a placeholder for policies that are manually triggered through the event manager run command on the switch CLI.

OIR Event Detector
This event detector monitors the system for hardware (such as line cards) that is inserted or removed and, should this occur, generates an event.

Redundancy Framework Event Detector
Hardware or software high availability events related to a stateful switchover (SSO) failover, or any redundancy framework state transition, cause this event detector to generate an event.

SNMP Event Detector
Allows an SNMP object to be polled at a regular interval; when the value of the object matches a specified value, an event is generated.

EEM Policies
* Defined via:
  - The CLI: an applet
  - A Tcl script, loaded onto a local file system
* Multiple concurrent policies:
  - Multiple policy execution threads
* Policies can generate a variety of actions:
  - Execute CLI commands
  - Send an e-mail
  - Generate an SNMP trap
  - Generate a syslog message

An EEM policy is an entity that defines an event and the actions to be taken when that event occurs. There are two types of EEM policies: applets and scripts. EEM policies are managed by the policy director. Applets and Tcl scripts are policies written by users to apply a set of actions when a given event occurs. The creation of an EEM policy involves:
■ Selecting the event for which the policy is run
■ Defining the event detector options associated with logging and responding to the event
■ Defining the environment variables, if required
■ Choosing the actions to be performed when the event occurs

EEM Policy Actions
The CLI-based corrective actions that are taken when event detectors report events enable a powerful on-device event-management mechanism. Some actions are available in every Cisco IOS software release, but most actions have been introduced in a specific release. EEM policies can take a number of actions (a short applet sketch illustrating several of them follows the list):
■ Execute a Cisco IOS CLI command
■ Increment or decrement an EEM counter
■ Force an SSO
■ Request system information
■ Send an e-mail
■ Run another EEM policy
■ Reload the switch
■ Generate an SNMP trap
■ Generate a syslog message
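For instance, here is a minimal hedged sketch of an applet that combines a syslog action with CLI actions; the applet name, match pattern, and message text are hypothetical:

6500(config)# event manager applet WATCH-CONFIG
6500(config-applet)# event syslog pattern "%SYS-5-CONFIG_I"
6500(config-applet)# action 1.0 syslog msg "EEM: configuration change detected"
6500(config-applet)# action 2.0 cli command "enable"
6500(config-applet)# action 3.0 cli command "show clock"

Action labels such as 1.0 determine the execution order. The policy types described next explain how such a policy is stored and registered.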
Applet
An applet is a simple form of policy that is defined within the CLI configuration. The applet becomes part of the Cisco IOS configuration file and is persistent across system reboots.

Tcl Script
A script is a form of policy that is written in Tool Command Language (Tcl). Tcl scripts cannot be built from the switch CLI. This form of script offers a more flexible and powerful option for network administrators to apply actions on a given event occurrence. Like the applet, a registered Tcl script is persistent across system reboots.

EEM policies use the full range of the capabilities of the Tcl language. However, enhancements to the Tcl language in the form of Tcl command extensions that facilitate the writing of EEM policies are provided. The main categories of Tcl command extensions identify the detected event, the subsequent action, utility information, counter values, and system information. Tcl processes can be configured to prevent the process from tying up the CPU via the event register process command. Resources such as memory and CPU utilization are critical to router operation, since they are finite. The Tcl interpreter has a built-in throttle that periodically suspends execution.

Policy Director
The policy director manages the active scripts on the system.

Tcl is a string-based command language that is interpreted at run time. Scripts are defined using an ASCII editor on another device, not on the networking device. The script is then copied to the networking device and registered with EEM. As an enforced rule, EEM policies are short-lived run-time routines that must be interpreted and executed in less than 20 seconds of elapsed time. If more than 20 seconds of elapsed time are required, the maxrun parameter may be specified in the event_register statement to specify any desired value.

Tcl scripts can operate in one of two modes on the switch:
■ Full mode: Cisco-defined scripts run in full Tcl mode
■ Safe mode: User-defined scripts run in safe Tcl mode

Safe mode is a safety mechanism that allows untrusted Tcl scripts to run in an interpreter that was created in the safe mode. The safe interpreter has a restricted set of commands that prevent accessing some system resources and harming the host and other applications. For example, it does not allow commands to access critical Cisco IOS file system directories. For a list of restricted Tcl commands, refer to the Cisco IOS Configuration Guide.

Default Policies in Cisco IOS Software
* Built-in Tcl scripts are supplied in Cisco IOS Software
* They can be enabled by the network administrator

The Cisco Catalyst 6500 Series Switch has built-in Tcl scripts that can be used. The list of those scripts can be examined with the show event manager policy available command.
For example, the sl_intf_down.tcl script monitors for syslog interface-down messages and uses the following environment variables to invoke a CLI response:
■ _syslog_pattern: The syslog message to monitor
■ _config_cmd1: The first CLI command to invoke
■ _config_cmd2: The second CLI command to invoke

For more information on EEM, go to http://www.cisco.com/go/eem and http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper0900aecd805457c3.html. To acquire EEM scripts, go to http://forums.cisco.com/eforum/servlet/EEM?page=main.

EEM Scripting Community on CCO
* http://forums.cisco.com/eforum/servlet/EEM?page=main

EEM policies (applets and Tcl scripts) developed and shared by different developers are available on the EEM Scripting Community web page at http://forums.cisco.com/eforum/servlet/EEM?page=main.

How to Use EEM?
EEM can be used to simplify and automate different tasks, such as sending an e-mail alert when a configuration change occurs, generating custom syslog messages, or reacting to GOLD test results. The figure shows some examples.

Example: Simplified Network Troubleshooting
* Upon the syslog message "LINK-3-UPDOWN," take the following actions:
  - Display counter error statistics for the link that went down
  - Start a TDR test
  - Start a GOLD loopback test
  - Send the results to a user-configurable address

The figure shows an example of an interface down event being detected with the syslog EEM event detector, which causes certain actions to be taken:
■ Take the error counter statistics for the interface that triggered the "LINK-3-UPDOWN" syslog message
■ Start the time domain reflectometer (TDR) test for the interface
■ Start the GOLD loopback test for the interface
■ Send the results of the tests, in a predefined form, to a user-configurable address

Example: EEM and Cisco IOS Software Modularity
* Restart a process upon a fault
* Generate diagnostics
* Activate the previous maintenance pack upon consecutive failures

Cisco IOS Software Patching
EEM can be used to automate the patching process. A script periodically polls a central server for new patches. If a patch is present, the script installs it and optionally activates it at a specific time of the day. The following has to be determined prior to using EEM for Cisco IOS patching:
■ Which devices should get which patches
■ Whether patches should be installed only, or also activated

Faulty Process
A faulty process can cause one of the following:
■ Exceeding a CPU threshold
■ Exceeding a memory utilization threshold
■ Hangs

Actions taken to remedy this could be:
■ Nonstop Forwarding (NSF)/SSO failover to the redundant supervisor
■ Restart the process, generate diagnostics, and e-mail the IT administrator
■ Roll back to a previous Cisco IOS Software Modularity maintenance pack
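As another usage sketch, the built-in sl_intf_down.tcl policy described earlier could be enabled as follows; the pattern and commands shown are hypothetical placeholder values:

6500(config)# event manager environment _syslog_pattern .*UPDOWN.*GigabitEthernet3/1.*
6500(config)# event manager environment _config_cmd1 interface GigabitEthernet3/1
6500(config)# event manager environment _config_cmd2 no shutdown
6500(config)# event manager policy sl_intf_down.tcl type system

With these variables set, the policy runs the configured CLI commands whenever a matching link-down syslog message appears.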
Example: Identifying Troublesome Hosts
* NetFlow collects inbound traffic flow statistics; EEM counts flows every 5 minutes
* Upon a server or port exceeding a flow threshold, EEM triggers an action:
  - Apply a rate-limiting policy to limit the traffic flow's impact on the network
  - Apply an access list to stop the traffic flow
  - Alert the administrator
  - Send all flows via ERSPAN to the IDSM for intrusion detection/prevention

The figure shows an example of using EEM to identify troublesome hosts. The following configuration is applied to the Catalyst 6500 Series Switch:
■ The Catalyst 6500 Series Switch is running NetFlow, collecting traffic statistics (for example, version 9 source prefix aggregation records)
■ EEM is set up as a cron process that counts flows every 5 minutes

Upon the event of a given source exceeding a set flow threshold, EEM triggers an action. Possible actions taken could be:
■ Apply rate limiting to limit the flows' impact on the network
■ Apply an access list to stop the traffic
■ Alert the network operator
■ Send the flows to the Intrusion Detection System Module (IDSM) to act upon, utilizing Encapsulated Remote Switched Port Analyzer (ERSPAN) for a distant IDSM

Example: Automate Switch Configuration
* A user moves to a new desk and plugs in the IP phone; the switch port is not yet configured
* EEM detects the port linkup, runs show cdp neighbor to detect the device type, and applies the appropriate configuration

Another example of EEM usage: upon connecting an IP phone, EEM detects the device type that was connected and applies a standard quality of service (QoS) configuration on the interface.

Event:
■ A new phone is plugged in
■ EEM detects the switch port linkup status

Actions:
■ Run Cisco Discovery Protocol to detect the device type
■ Apply AutoQoS to apply a best-practice configuration, or configure location-specific commands

VSS 1440 EEM Support
* EEM is supported in the VSS 1440 system
* EEM scripts have to exist on both supervisors

The Cisco Catalyst 6500 Series Virtual Switching System 1440 (VSS 1440) supports EEM for automating network management. The only prerequisite is to put the EEM scripts on both supervisor modules.

Configuring EEM
This topic describes the EEM configuration steps and some EEM usage examples.

Configuring EEM Applet

6500(config)#
event manager applet applet-name

* Define the applet

6500(config-applet)#
event {application | cli | counter | ...}
action label {cli | info | mail | ...}

* Define the desired event and actions

An example applet body (fragment) that backs up the running configuration:

action 3.0 cli command "file prompt quiet"
action 4.0 cli command "end"
action 5.0 cli command "copy running-config disk0:running-config"
action 6.0 cli command "no file prompt quiet"

While an applet's functionality does not match what a Tcl script can do, it does provide a simple way for a policy to be created from the switch CLI and registered with the switch. An applet is entered from the switch CLI and, once entered, becomes part of the switch configuration.

The applet has three parts:
■ Applet name
■ Event statement
■ Action statement

For more information on writing the CLI applet for EEM, refer to "Writing Embedded Event Manager Policies Using the Cisco IOS CLI" in the IOS Configuration Guide.

Configuring EEM Applet
The applet is initially created using the event manager applet command. For an applet (or Tcl script) to function, it must be registered with the EEM policy director. Creating an applet from the CLI using the event manager applet command also inherently registers the applet with EEM at the same time. Once this command is entered, the system moves the user into applet configuration mode, where additional applet commands can be entered. There are essentially three configuration commands that can then be entered:
■ event event-type
■ action label action-type
■ set label var-name value

For any configured applet, only one event command can be entered. The event command identifies the event detector that this applet is working with. If there is no event command configured, a warning message is posted when exiting applet configuration mode, and the applet is not registered. The action statement indicates the action that should be invoked should there be a match to the configured event. The set command can be used to set environment variables that might be referenced in this applet.

Environment Variables
Environment variables are defined outside of the script (via the CLI) and can be referenced by multiple scripts. The event manager environment command is used to set environment variables on the switch. All Cisco-defined environment variables start with an underscore ("_"). This is a reserved character and cannot be used by users when defining their own variables.

There are a number of Cisco-defined environment variables available. Note that the following list is not complete; refer to the EEM documentation on Cisco.com for a more exhaustive list.

■ Send an e-mail from within a script:
  - _email_server: Identifies the IP address of the SMTP server used when e-mails are sent from within a script
  - _email_to: Identifies the recipient of the sent e-mail
  - _email_from: Identifies the originator of the sent e-mail
■ Inspect a counter:
  - _counter_name: Identifies the name of the counter event detector counter
  - _counter_value: Checks the reference point value of the counter
■ Check an SNMP MIB object:
  - _snmp_oid: Identifies the MIB object being interrogated
  - _snmp_oid_value: Contains the value of the SNMP MIB object
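A brief hedged sketch of setting the e-mail variables listed above (the server address and mail addresses are placeholders):

6500(config)# event manager environment _email_server 10.1.1.25
6500(config)# event manager environment _email_from eem@example.com
6500(config)# event manager environment _email_to noc@example.com
6500(config)# end
6500# show event manager environment

The show event manager environment command lists the variables currently defined, so a policy that references them can be verified before it is triggered.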
Example: Interface Stability Applet
The applet is created and registered with the name INTERFACE-STABILITY:

event manager applet INTERFACE-STABILITY
 event interface name GigabitEthernet3/1 parameter reliability entry-op lt entry-val 100 entry-val-is-increment false exit-op eq exit-val 255 exit-val-is-increment false poll-interval 60
 action 1.0 cli command "enable"
 action 2.0 cli command "config t"
 action 3.0 cli command "int g3/1"
 action 4.0 cli command "shutdown"
 action 5.0 syslog msg "Interface G3/1 shutdown due to instability"

In this sample EEM applet, interface GigabitEthernet3/1 is monitored every 60 seconds, and in the case of bad reliability, the interface is shut down and a syslog message is generated.

First, the CLI applet is created by configuring the applet name; at the same time, the applet is also registered with EEM. The event manager applet name command places the user into applet configuration mode, where subsequent configuration commands can be entered.

Next, in applet configuration mode, the event that will trigger the actions is specified: the reliability index of interface GigabitEthernet3/1 is polled every 60 seconds and compared against the configured entry value of 100 and exit value of 255.

Finally, in applet configuration mode, the actions taken upon the event trigger are specified: interface GigabitEthernet3/1 is shut down, and a syslog message alerting the administrator that the interface was shut down due to instability is generated.

Writing Tcl Script
* Register the event
* Define environment variables
* Import the namespaces for the additional command set
* Create the Tcl script

While applets provide a simple and effective method for adding basic scripts to the system, Tcl scripts are where the true power and flexibility of EEM become evident. Tcl is a string-based command language that is interpreted at runtime (in much the same way as the BASIC programming language), rather than being compiled in a traditional programming sense. The EEM subsystem support for Tcl is based on Tcl version 8.3.4 and contains the full complement of commands available with that release, along with a number of Tcl command extensions designed specifically for the Catalyst 6500 Series Switch.

EEM allows you to write and implement your own policies using Tcl. Writing an EEM script involves:
■ Selecting the event Tcl command extension that establishes the criteria used to determine when the policy is run
■ Defining the event detector options associated with detecting the event
■ Choosing the actions to implement recovery or respond to the detected event

A Tcl script has four parts that must be completed, where the environment variables are the only optional component. The script must be syntactically correct and contain the basic elements before it can be successfully registered on the switch.

Event Register Keyword
The event register keyword is the first line that must exist in the script. It defines the event detector that this script will be using, and the parameters that define the specifics of the event to be monitored. Each of the event detectors has a respective event_register keyword: application, CLI, counter, interface, IOSWDSYSMON, NONE, OIR, PROCESS, RF, and so on.
Importing Namespaces
Two namespace keywords should be specified in every script that you load onto the Catalyst 6500 Series Switch:
■ namespace import ::cisco::eem::*
■ namespace import ::cisco::lib::*

These two commands import the EEM extensions into the Tcl operating environment.

Tcl Script
A script consists of:
■ Code entry (optional): Performs checks to determine whether the code body should be executed
■ Code body (required): Performs the processing of the actions required for the given event
■ Code exit (optional): Confirms that the actions have been successfully processed

Note    Writing Tcl scripts requires Tcl knowledge.

Tcl Script Elements
Tcl scripts are composed of different elements, or language sentences. Some examples include:
■ Variables: set x 1
■ Comments: # This next block of code is used to ...
■ Delays: after 5000
■ Concatenation: set z [concat $x.$y]
■ Expressions: set x [expr 7+9]
■ If-then-else: if { $name == "cisco" } { set init 1 }
■ Conditional loops

For more information on using Tcl scripts with EEM, refer to the section "Writing Embedded Event Manager Policies Using Tcl" in the Catalyst 6500 IOS Configuration Guide.

Storing Tcl Scripts
* A Tcl script can be stored in any file system
* Create a directory for EEM scripts:

6500# mkdir EEM

* Register the script directory with EEM
* Register the Tcl script:

6500(config)#
event manager policy script-name type user

After creation, the directory must be registered with the event manager directory user policy filesystem:directory command and verified with the show event manager directory user policy command. Each Tcl script must be loaded into a registered directory in order to be successfully invoked, and it must also be registered with the event manager policy script-name type user command.

Note    In the VSS 1440, this process must be done manually on each of the switches.

Examining Registered Tcl Scripts
Scripts that have been registered can be viewed using the show event manager policy registered command. The output lists, for each policy, its type (applet or script), the registration time, the registered event, and options such as nice 0, queue-priority normal, and maxrun.

As an enforced rule, EEM policies are short-lived run-time routines that must be interpreted and executed in less than 20 seconds of elapsed time. If more than 20 seconds of elapsed time are required, the maxrun parameter may be specified in the event_register statement to specify any desired value. The nice parameter determines whether a Tcl script releases the CPU before completion to other contestants or keeps the CPU to itself until it has completed. A value of 0 means that it will not release the CPU, and a value of 1 means that it will release it upon request.
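Putting the storage and registration steps together, here is a hedged end-to-end sketch; the file system, directory, server address, and script name are placeholders:

6500# mkdir disk0:EEM
6500# copy tftp://10.1.1.10/myscript.tcl disk0:EEM/
6500# configure terminal
6500(config)# event manager directory user policy disk0:EEM
6500(config)# event manager policy myscript.tcl type user
6500(config)# end
6500# show event manager policy registered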
The content of registered Tcl scripts can be viewed with the show event manager policy available detailed command. The following excerpt shows the beginning of one of the shared scripts, including its header comments and namespace imports:

# Add Static DHCP Snooping DB entries
# December 2006 - Carl Solder (csolder@cisco.com)
# Copyright (c) 2006 by Cisco Systems, Inc. All rights reserved.
...
namespace import ::cisco::eem::*
namespace import ::cisco::lib::*
# open pipe to CLI and enter enable mode
 --More--

Note    To create your Tcl scripts, do not use Microsoft-based editors, as they add carriage return characters to the script that may cause problems. Use another plain-text editor, for example metapad.exe.

Summary
This topic summarizes the key points that were discussed in this lesson.

* EEM is a generic framework for management automation.
* EEM can use applets or Tcl scripts.
* Different EEM event detectors exist.
* Various EEM actions can be taken.
* VSS requires the EEM scripts to be put on both supervisor engines.

Lesson 10
Utilizing Automated Diagnostics

Overview
The Cisco Catalyst 6500 Series Switch is a modular LAN switch that delivers highly available and secure converged network services throughout enterprise and service provider networks. High availability and reliability features are integrated technologies on the Catalyst 6500 Series Switch, and the platform offers integral components to deliver maximum uptime and fault detection. When problems are identified within the system, fault detection mechanisms trigger fault recovery mechanisms. Although keepalives can be used as a general means of intersystem fault detection, an internal resiliency mechanism is also required to guarantee that a given system is healthy and functioning. This functionality has converged into a generic diagnostics framework known as Generic Online Diagnostics (GOLD). This lesson focuses on the fault management tools found within GOLD.

Objectives
Upon completing this lesson, you will be able to describe the fault management tools that are available for the Catalyst 6500 Series Switch. This includes being able to meet these objectives:
■ Describe the fault management software features available on the Catalyst 6500 Series Switch
■ Identify the line cards that support the time domain reflectometer (TDR) and discuss their advantages
■ Explain GOLD functionality
■ Describe soft high availability
■ Identify the enhanced troubleshooting and debug information available on the Catalyst 6500 Series Switch

Automated Diagnostics Overview
This topic explains the fault management software features found on the Catalyst 6500 Series Switch.
Fault Management on the Catalyst 6500
* Improving resiliency in redundant and nonredundant deployments with:
  - Software enhancements for better fault detection
  - Mechanisms to detect and correct soft failures in the system
  - Proactive fault detection and isolation
  - Routines to detect failures that the runtime software may not be able to detect
(Figure: misconfigured systems, memory corruption, software inconsistency, and hardware faults are detected, yielding enhanced network stability and enhanced system stability)

Fault management capabilities provide the following features on the Catalyst 6500 Series Switch:
■ Software enhancements for better fault detection
■ Mechanisms to detect and correct soft failures in the system
■ Proactive fault detection and isolation
■ Routines to detect failures that the runtime software might not be able to detect

Information developed by the fault management software is used to trigger software and user responses and to promote high availability.

The fault management framework on the Cisco Catalyst 6500 Series Switch consists of automated and administrator-initiated tools in three areas:

■ GOLD: Minimizes downtime by detecting hardware problems at bootup time and proactively during normal operation. GOLD provides a common framework for diagnostics operations across Cisco platforms: a common concept and a common command-line interface (CLI). GOLD allows fault coverage for most hardware components and verifies proper operation at bootup time and during runtime.
■ Soft high availability: Proactively identifies soft failures in a system and alerts the administrator. Soft failures are transient failures that have not yet manifested themselves in a catastrophic event. These could be software failures or hardware failures, but not broken or dead hardware. Examples of soft failures include detecting a memory shortage, preventing CPU hogs, verifying hardware and software table consistency, and checking the operation of the Ethernet out-of-band channel (EOBC).
■ Troubleshooting: Enhanced troubleshooting capabilities help to avoid problems, confine problems quickly, or log information about faults to aid in problem resolution. Information can be provided to the administrator through Smart Call Home, system log (syslog) messages, Simple Network Management Protocol (SNMP) traps, and command responses. Fault responses can be automated with applets or Tool Command Language (Tcl) scripts via the Cisco Embedded Event Manager (EEM).

Time Domain Reflectometer
* TDR can be used to check the status of copper cables
* A cable fault is detected by sending a signal through the cable and reading the signal that is reflected back

TDR can be used to check the status of copper cables. The TDR detects a cable fault by sending a signal through the cable and reading the signal that is reflected back. All or part of the signal can be reflected back by any number of cable defects or by the end of the cable itself.

If you are unable to establish a link, use TDR to determine whether the cabling is at fault. This test is especially important when replacing an existing switch, when upgrading to Gigabit Ethernet, or when installing new cables. Before running the TDR test, ensure that the port is up and running. Line cards that support TDR include:
■ WS-X6148-GE-TX
■ WS-X6148-GE-45AF
■ WS-X6548-GE-TX
■ WS-X6548-GE-45AF
■ WS-X6748-GE-TX
■ WS-X6148A-GE-TX
■ WS-X6148A-GE-45AF
■ WS-X6148A-RJ-45
■ WS-X6148A-45AF
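A short hedged sketch of running a TDR test from the CLI (the interface is an example, and command syntax can vary slightly by software release):

6500# test cable-diagnostics tdr interface gigabitethernet 3/1
6500# show cable-diagnostics tdr interface gigabitethernet 3/1

The test takes a few seconds to complete; the show command then reports per-pair cable length and status, which helps distinguish a cabling fault from a port fault.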
Generic Online Diagnostics Overview
* GOLD implements a number of health checks, both at system startup and while the system is running
* GOLD complements existing HA features like NSF/SSO by running in the background and alerting the HA features when disruption occurs
(Figure: on-demand diagnostics are triggered by an administrator, scheduled diagnostics run at a specific time, health monitoring runs in the background, and failures are reported with syslog messages such as %DIAG-SP-3-MAJOR: Module 2 Online Diagnostics detected a Major Error)

GOLD defines a common framework for diagnostics operations across Cisco platforms running Cisco IOS Software. The GOLD framework interacts with platform-dependent diagnostics operations and network-management systems for centralized and distributed systems. Included within the framework are common CLI diagnostics and platform-dependent fault-detection mechanisms for bootup and runtime diagnostics. Within the platform-specific diagnostics are hardware-specific fault-detection tests that can take appropriate action in response to diagnostic test results.

Using GOLD
* Fault detection framework for high availability:
  - Proactive diagnostics serve as high availability triggers and take faulty hardware out of service
* Troubleshooting tools

GOLD is a suite of tests that run automatically to detect hardware faults in the Cisco Catalyst 6500 Series Switch. GOLD provides functional testing combined with component monitoring to detect faults in both passive components (connectors, solder joints, and so on) and active components (application-specific integrated circuits, programmable logic devices, and so on). GOLD can also run on demand or on a scheduled basis, under administrator control. Automated tests proactively check for hardware faults before they cause a system problem. On-demand and scheduled diagnostics can be used in troubleshooting scenarios to verify the status of the hardware.

Bootup Diagnostics
When bootup diagnostics detect a failure on the Catalyst 6500 Series Switch, the failing module is shut down. The network administrator can configure the level of diagnostics to be minimal, complete, or disabled. The default on the Catalyst 6500 Series Switch is minimal, permitting the system to come online faster, although it is recommended that you use the complete option instead:

6500(config)#diagnostic bootup level complete

Health Monitoring Diagnostics
These tests are nondisruptive and can run in the background while the system is operational. Online diagnostics health monitoring proactively detects hardware failures in a live network environment. The administrator can determine the number of health-monitoring checks to run and define their execution intervals. These tests do not affect system performance, as the software restricts the interval to a minimum threshold to prevent degradation. Health monitoring tests include data and control plane verification and validation of hardware registers.

Use the show diagnostic content command at the CLI to identify the tests that are available. The tests marked as nondisruptive (N) in this listing can be configured as health monitoring tests.
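A hedged sketch of tuning one health monitoring test (the module, test number, and interval are illustrative; use show diagnostic content to find the actual test IDs and confirm the exact interval syntax on your release):

6500(config)# diagnostic monitor interval module 2 test 2 00:00:30 0 0
6500(config)# diagnostic monitor module 2 test 2
6500(config)# end
6500# show diagnostic content module 2

The first command sets how often the test runs (here every 30 seconds), and the second enables monitoring for that test.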
On-Demand Diagnostics
When a network administrator issues a diagnostic start command, an on-demand diagnostic test is triggered. These tests are useful for troubleshooting when a hardware fault is suspected. On the Catalyst 6500 Series Switch, an on-demand diagnostic test does not reset or power down the faulty hardware. The following commands trigger two iterations of an on-demand module memory test (test number 12) on a module in slot 2; if the first memory test fails, no further testing is performed:

6500#diagnostic ondemand iterations 2
6500#diagnostic ondemand action-on-failure stop
6500#diagnostic start module 2 test 12

Scheduled Diagnostics
Scheduled diagnostic tests can run at a specified time or on a periodic basis. This option is useful when scheduling disruptive tests, which commonly run during maintenance windows. Use the show diagnostic result command to display the diagnostic results. The Catalyst 6500 Series Switch will not reset or power down the faulty modules that are detected. The following command schedules a loopback test (test number 1) on a module situated in slot 2 to run every Monday at 3 a.m.:

6500(config)#diagnostic schedule module 2 test 1 weekly Mon 03:00

All tests generate syslog messages when they detect hardware faults.

GOLD Test Suite
GOLD provides the following tests:
■ Bootup diagnostics:
  - Enhanced Address Recognition Logic (EARL) learning tests (supervisor and Distributed Forwarding Card [DFC])
  - Layer 2 tests (Channel, BPDU, Capture)
  - Layer 3 tests (IP version 4 [IPv4], IP version 6 [IPv6], Multiprotocol Label Switching [MPLS])
  - Switched Port Analyzer (SPAN) and multicast tests
  - Content addressable memory (CAM) lookup tests (FIB, NetFlow, quality of service [QoS])
  - Port loopback test (all cards)
  - Fabric snake tests
■ Health monitoring diagnostics:
  - Switch processor (SP)-route processor (RP) in-band ping test (supervisor SP and RP, EARL, rewrite engine)
  - Fabric channel health test (fabric-enabled line cards)
  - MacNotification test (DFC line cards)
  - Nondisruptive loopback test
  - Scratch registers test (programmable logic devices [PLDs] and ASICs)
41-368 Implementing Cisco Data Center Network infrastructure 1 (DCNI-1) v2.0 ‘© 2008 Cisco Systems, Ine VSS GOLD Support * Distributed GOLD environment * Local GOLD is active on both supervisors = Centrally managed by the active supervisor VS state: Active "[Vs State: Standby Local GOLD: Active Virtual Switch Link (sty ==) ‘Virtual Switch Domain Distributed GOLD Manager ‘Some enhancements to the GOLD framework have been implemented in a Virtual Switching, ‘System (VSS) environment, which leverages a Distributed GOLD environment. In this case, each supervisor runs an instance of GOLD, but is centrally managed by the active supervisor in the active chassis Additionally, four new GOLD tests available in VSS mode were added: m= TestVSLLocalloopback = TestVSLBridgeLink = TestVSLStatus = TestVSActiveToStandbyLoopback {© 2008 Cisco Systems, Inc. Implementing the Cisco Catalyst 6500 Series, Cisco Catalyst 4900 Series, and Blade Switches 1-369 Enhanced Features in 12.2(33)SXH = GOLD enhanced features: Single command to run tests successively in an order that wont cause false failures ~ Simulate diagnostics test failures ~ Non-disruptive test for unused port loopback test Port Tx monitoring to detect packet buffer, memory latch up issue ~ Error counter monitoring for all ASICs Integration with Call Home * On-Board Failure Logging (OBFL): Captures and tracks critical information specific to the line card Uptime, temperature sensors, critical errors, = System Event Archive (SEA): Logs abnormal events forthe entire system, larger storage capacity than OBFL a The enhancements available from Cisco IOS Software Release 12.2(33)SXH onwards are as follows: = GOLD enhanced features: — Single command to run tests successively in an order that will not cause false failures — Simulate diagnostics test failures — _ Nondisruptive test for unused port loopback test — Port transfer (Tx) monitoring to detect packet buffer, memory latch up issue — Error counter monitoring for all ASICs — Integration with Call Home = On-Board Failure Logging (OBFL): — Captures and tracks critical information specific to the line card — Uptime, temperature sensors, critical errors System Event Archive (SEA): — Logs abnormal events for the entire system — Larger storage capacity than OBFL 4-370 Implementing Cisco Data Center Network Infrastructure 1 (OCNI-1) v2.0 (© 2008 Cisco Systems, nc. On-Board Failure Logging * Performs like a black box on an airplane * Stores critical diagnostic information on the line card * Available on 67XX-series line cards and Cisco |S 12.2(33) SXH * Currently it provides support for Cisco TAC and Engineering commands 3 days 3 hours 80 aint Saye 8 hours “0 min humoer of slot enanges © cays 21 hours ¢ minutos ular OBEL is similar to a black box on an airplane. where the critical information about a parti line card is kept. OBFL provides key data for troubleshooting and failure analysis. Currently it provides support for the Cisco Technical Assistance Center (TAC) and engineering commands. ‘The information among other includes: = Total uptime = Boot time = Historical temperature information on all the different temperature sensors in that particular board Environmental voltage = Diagnosties failures Note BFL is available on 6700 series line cards and since Cisco IOS Software Release 12.2(33)SXH. ‘The OBFL can be examined with the show logging onboard module number command. 
6500#show logging onboard module 3

PID: WS-X6748-GE-TX , VID: V02, SN: SAL10403VVD

UPTIME SUMMARY INFORMATION
--------------------------
First customer power on : 02/22/2007 14:44:53
Total uptime            : 0 years 6 weeks 5 days 22 hours 30 minutes
Total downtime          : 0 years 0 weeks 5 days 1 hours 22 minutes
Number of resets        : 116
Current reset timestamp : 04/15/2008 14:35:45
Current slot            : 3
Current uptime          : 0 years 0 weeks 0 days 0 hours 30 minutes

Reset | Reason | Count |
 0x01 |        |    15 |

ENVIRONMENT SUMMARY INFORMATION
-------------------------------
MM/DD/YYYY HH:MM:SS  Ins count  Rem count  VID  PID  TAN  Serial no
No environment summary data to display

TEMPERATURE SUMMARY INFORMATION
-------------------------------
Number of sensors       : 12
Sampling frequency      : 5 minutes
Maximum time of storage : 120 minutes

Sensor  |  ID     | Maximum Temperature (C)
MB-Out     930201   52
MB-In      930202   35
MB         930203   40
MB         930204   49
<...rest of the output omitted...>

System Event Archive
* A secure file (sea_log.dat) that resides on the active supervisor
* Line cards communicate messages to the active SP via the EOBC
* Allows all modules in the chassis to record events
* Running log retained across reboots
* Supports message throttling

The supervisor module can use the SEA to maintain critical information about the line cards. The SEA does not keep everything that onboard failure logging keeps, but only a subset of critical information, even on classic line cards. The SEA file consumes 10 percent of flash space, but no more than 32 MB. The SEA information can be examined with the show logging system command:

6500#show logging system
SEQ: MM/DD/YY HH:MM:SS SW/MOD/SUB: SEV, COMP, MESSAGE
1: 04/15/08 14:40:03  2/5/-1: MAJ, GOLD, test_rp_fib_sc[5]: incorrect ping l2 entry [0]
2: 04/15/08 14:40:03  2/5/-1: MAJ, GOLD, diag_get_fabric_status[3]: diag_hit_sys_limit: Test skipped!
3: 04/15/08 14:40:03  2/5/-1: MAJ, GOLD, diag_hit_sys_limit[3/1]: sp_netint_thr[0]
4: 04/15/08 14:40:03  2/5/-1: MAJ, GOLD, diag_hit_sys_limit[3/1]: SP[9%], Tx_rate[1430280359], Rx_rate[1430679189]
5: 04/15/08 14:40:03  2/5/-1: MAJ, GOLD, diag_get_fabric_status[4]: diag_hit_sys_limit: Test skipped!
6: 04/15/08 14:40:03  2/5/-1: MAJ, GOLD, diag_hit_sys_limit[4/1]: sp_netint_thr[0]
7: 04/15/08 14:40:03  2/5/-1: MAJ, GOLD, diag_hit_sys_limit[4/1]: SP[9%], Tx_rate[1430280359], Rx_rate[1430679189]

Health Monitoring Test — SP/RP In-Band Ping
* Monitors the forwarding data path between the SP, the RP, and the EARL
* Runs periodically every 15 seconds after the system is online (configurable)
* 10 consecutive failures are treated as FATAL and will trigger a failover by crashing the supervisor
* The test is enabled by default (configurable)

The TestInbandSPRPing test uses diagnostic packets to verify the control and data path between the switch processor and the route processor through the forwarding engine. The in-band parameter refers to the internal data-path communication through the port ASIC and the fabric interfaces, as opposed to the EOBC.
What Is Soft HA?
* Software enhancements for better fault detection
* Mechanism to proactively detect and correct all soft failures in the system:
  - Complete coverage for in-band health check (RP/SP ping)
  - Proactive software consistency checks (LTL, CBL, FPOE)
  - Better error handling for memory leaks
  - Corrective action initiated by diagnostics
  - Reporting and isolation of OIR failure events
  - Fault isolation for EOBC events

Detecting failures before they become catastrophic is important within any network. Software high availability proactively identifies and alerts the administrator to soft failures within the system. Soft failures are faults that have not yet become catastrophic. Examples of soft failures include memory shortages, CPU hogs, hardware and software table inconsistencies, and faulty operation of the Ethernet out-of-band channel (EOBC).

Other software high availability features include these:
- Complete coverage for in-band health check (RP/SP ping)
- Proactive software consistency checks: local target logic (LTL), color-blocking logic (CBL), fabric port of exit (FPOE)
- Limiting recovery attempts during persistent failures
- Better error handling for memory leak situations
- Corrective action initiated by diagnostics
- Reporting and isolation of online insertion and removal (OIR) failure events
- Fault isolation for EOBC events

Note The vast majority of these tests are not user-configurable. They are additions to the software as part of the ongoing quality initiatives of Cisco.
Automated System Configuration Check
* Administrator-initiated automatic configuration check
* Detects common configuration errors
* Recommends changes
* Available in Cisco IOS 12.2(18)SXE

The command to run the automated system configuration check is show diagnostic sanity. The following is an example of the output produced by the show diagnostic sanity command:

6500#show diagnostic sanity
Pinging default gateway 172.26.197.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.26.197.1, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

Please check the confreg value : 0x0

Could not verify boot image "disk0:s72033-js-mz" specified in the boot string.

UDLD has been disabled globally - port-level UDLD sanity checks are being bypassed.

The following ports have trunk mode set to on: Po101

The following ports with mode set to desirable are not trunking: Gi3/1

The following ports have receive flow control disabled: Gi3/1, Gi3/2, Gi3/3, Gi3/4, Gi3/5, Gi3/6, Gi3/7, Gi3/8, Gi3/9, Gi3/10, Gi3/11, Gi3/12, Gi3/13, Gi3/14, Gi3/15, Gi3/16, Gi3/17, Gi3/18, Gi3/19, Gi3/20, Gi3/21, Gi3/22, Gi3/23, Gi3/24, Gi3/25, Gi3/26, Gi3/27, Gi3/28, Gi3/29, Gi3/30, Gi3/31, Gi3/32, Gi3/33, Gi3/34, Gi3/35, Gi3/36, Gi3/37, Gi3/38, Gi3/39, Gi3/40, Gi3/41, Gi3/42, Gi3/43, Gi3/44, Gi3/45, Gi3/46, Gi3/47, Gi3/48, Gi6/1, Gi6/2, Po101

The value for Community-Access on read-only operations for SNMP is the same as default. Please verify that this is the best value from a security point of view.

The value for Community-Access on write-only operations for SNMP is the same as default. Please verify that this is the best value from a security point of view.

Automated System Health Check
* Administrator-initiated automatic health check
* Detects all ASIC errors in the system
* Available in Cisco IOS Software Release 12.2(18)SXH
* Checks include:
  - ASIC error registers for nonzero error counters on all installed modules
  - Port-level nonzero error counters
  - CPU and memory utilization

To run a full diagnostic test on the system, use the diagnostic start system test all command.

Caution This will run all diagnostic tests available for the system as it is configured. Many of these tests are very intrusive and will cause prolonged outages. Do not run this on a production system.

Cisco IOS Configuration Rollback and Replace
* Rapid and accurate configuration rollback for configuration changes
* Replace the running configuration with an archived configuration without rebooting
* Combined with automatic configuration archive
* Allows restoration to a well-known state without the need for manual reconfiguration

The Cisco IOS configuration rollback and replace features can be used for the following:
- Rapid and accurate rollback of Cisco IOS configuration changes
- Replacing the running configuration with any archived Cisco IOS configuration without rebooting

Cisco IOS configuration rollback and replace work by applying only delta changes, which allows very rapid configuration application. When combined with automatic configuration archive, this allows automated configuration management and control, which improves accuracy by allowing restoration to a well-known state without the need for manual reconfiguration.

Smart Call Home

(Figure: a Catalyst 6500 in the customer network sends Smart Call Home messages to Cisco TAC; available from Cisco IOS 12.2(33)SXH.)

The Smart Call Home feature enables Catalyst 6500 Series Switches to send diagnostic information directly to Cisco TAC, significantly reducing the time to solve minor hardware problems and shortening the RMA cycle.
Smart Call Home is available on Catalyst 6500 Series Switches from Cisco IOS Software Release 12.2(33)SXH onwards.

Smart Call Home provides the capability for a customer to configure call home profiles that define:
- Destination
- Transport
- Events of interest

For example, a customer might configure a profile to allow an individual to be paged at home via a short text e-mail when a major diagnostic failure occurs. Or, all syslog events might be sent via HTTPS to a network management station.

Smart Call Home and Cisco TAC

Certain events can send call home messages via HTTPS (or e-mail) to Cisco TAC. These cases are covered in the Smart Call Home feature by including a default call home profile for Cisco TAC. The events of interest are:
- Diagnostics
- Environmental
- High-severity syslog
- Inventory and configuration

Note Any of these message types can be removed by customers. In addition, if customers choose to send configuration, then sensitive details such as passwords should be removed.

Upon receipt of a Smart Call Home message at Cisco, the first step is entitlement processing. Customers need to have a standard Cisco SMARTnet support contract to be entitled to the Smart Call Home service.

Next, the message is passed to the rules processor, which inspects the message and determines what next steps to take. If the situation is serious enough (module failure or fan failure, for example), a service request will be raised directly with Cisco TAC and routed to the correct team to handle the problem.

Note To avoid raising service requests when they are not necessary, GOLD knows the difference between modules failing and modules being removed.

If a service request is not raised, then the message is stored along with the associated analysis of the problem for a customer or TAC engineer to use as part of their troubleshooting.

Smart Call Home then has the option of proactively notifying the customer of problems which are likely to be emerging issues, rather than issues with which the TAC can deal (for example, high-temperature alarms independent of any fan failures, or accumulating single-bit memory errors). If Smart Call Home does not notify the customer, then the customer or TAC engineer will be able to access all messages along with Cisco analysis on the Smart Call Home web application.

Also available on the Smart Call Home web application are reports on the device hardware, software, and configuration cross-referenced against any field notices, security alerts, and any end-of-life notifications of which we are aware, specific to the hardware and software on the device.

Using Diagnostics for Troubleshooting

This topic describes how diagnostic tools are used for troubleshooting.

Time Domain Reflectometer

6500# test cable-diagnostics tdr interface type number
* Uses TDR to test the copper cable

6500# show interfaces transceiver
* Examine the optical transceiver information

The TDR test guidelines are as follows:
- TDR can test cables up to a maximum length of 115 meters.
- The TDR test is supported on Cisco 7600 Series Routers running Release 12.2(17a)SX and later releases on specific modules.
- See the release notes for Cisco IOS Software Release 12.2SX on the Catalyst 6500 Series Switch and Cisco 7600 Series Router Supervisor Engine 720, Supervisor Engine 32, and Supervisor Engine 2 for the list of the modules that support TDR.
- The valid values for interface type are Fast Ethernet and Gigabit Ethernet.
- Do not start the test at the same time on both ends of the cable. Starting the test at both ends of the cable at the same time can lead to false test results.
- Do not change the port configuration during any cable diagnostics test. This action may result in incorrect test results.
- The interface must be up before running the TDR test. If the port is down, the test cable-diagnostics tdr command is rejected and the following message is displayed:

  6500# test cable-diagnostics tdr interface gigabitethernet2/12
  % Interface Gi2/12 is administratively down
  % Use 'no shutdown' to enable interface before TDR test start.

- If the port speed is 1000 and the link is up, do not disable the auto-MDIX feature.
- For fixed 10/100 ports, before running the TDR test, disable auto-MDIX on both sides of the cable. Failure to do so can lead to misleading results.
- For all other conditions, you must disable the auto-MDIX (medium dependent interface crossover) feature on both ends of the cable with the no mdix auto command. Failure to disable auto-MDIX will interfere with the TDR test and generate false results.
- If a link partner has auto-MDIX enabled, this will interfere with the TDR cable diagnostics test and the test results will be misleading. The workaround is to disable auto-MDIX on the link partner.
- If you change the port speed from 1000 to 10/100, enter the no mdix auto command before running the TDR test. Note that entering the speed 1000 command enables auto-MDIX, regardless of whether the no mdix auto command has been run.

The show interfaces transceiver command displays information about the transceiver modules that are present in any module, with the temperature, voltage, current, and transmit (Tx) and receive (Rx) power for each individual transceiver.
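As a quick reference, this is a minimal sketch of a TDR run on a copper port; the interface numbers are placeholders:

! Run the TDR test on a copper port, then read the results
6500# test cable-diagnostics tdr interface gigabitethernet2/1
6500# show cable-diagnostics tdr interface gigabitethernet2/1
! For optical ports, check the transceiver monitoring data instead
6500# show interfaces gigabitethernet6/1 transceiver detail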
GOLD Complete System Test

(Figure: running diagnostic start system test all; the system warns that the diagnostic system test will disrupt normal system operation and that the system requires a RESET after the test is done, line cards are powered down during the test, and diagnostic stop test all is used to stop the test.)

GOLD can be used to perform a complete system test with the diagnostic start system test all command.

Note Avoid using the diagnostic start system test all command during business hours.

Prior to 12.2(33)SXH, a complete system test with GOLD had to be performed manually, test by test. A manual approach could produce issues with multiple tests running at the same time, which could cause false failures.

To stop the complete GOLD system test, use the diagnostic stop test all command.

6500#show diagnostic result module all
Current bootup diagnostic level: minimal
Diagnostic[Module 1]: Diagnostic handle is not found for the card.
Diagnostic[Module 2]: Diagnostic handle is not found for the card.
Module 3: CEF720 48 port 10/100/1000mb Ethernet  SerialNo : SAL1E9051K7
  Diagnostic level at card bootup: minimal
  Test results: (. = Pass, F = Fail, U = Untested)
    1) TestLoopback:
<...Rest of the output omitted...>

To examine and verify the results of the complete GOLD system tests, issue the show diagnostic result module all command. The command shows the results of the tests run on each individual module.

GOLD TestUnusedPortLoopback

(Figure: sample syslog output of a TestUnusedPortLoopback run on module 2.)

The TestUnusedPortLoopback test periodically verifies the data path between the supervisor engine and the network ports of a module at runtime. In this test, a Layer 2 packet is flooded onto a VLAN of which only the test port and the in-band port of the supervisor are members. The packet loops back in the port and returns to the supervisor on that same VLAN. It is similar to TestLoopback, but it only runs on unused (administratively down) network ports, and only on one unused port per port ASIC. It substitutes for the lack of a nondisruptive loopback test in current ASICs. This test runs every 60 seconds.

Note Newer line cards have port ASICs that are capable of filtering the diagnostic packet and can loop back only this diagnostic packet during the existing TestPortLoopback test. Therefore, the test is nondisruptive only on the newer line cards; hence, TestUnusedPortLoopback was developed for legacy line cards.

On-Board Failure Logging

(Figure: sample show logging onboard module 3 output for a WS-X6748-GE-TX line card, including the PID, VID, and serial number, the current reset timestamp, and the current slot.)

OBFL captures and stores hardware failure and environmental information in nonvolatile memory. OBFL permits improved accuracy in hardware troubleshooting and root cause isolation analysis. Stored OBFL data can be retrieved in the event of a line card crash or failure and is accessible even if the line card does not boot.
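When only part of the OBFL record is of interest, the output can be narrowed down. This is a minimal sketch; the keyword set after the module number (for example, uptime and temperature) varies by release, so check the CLI help on your software first:

6500# show logging onboard module 3 ?
6500# show logging onboard module 3 uptime
6500# show logging onboard module 3 temperature
! Output filtering works regardless of release
6500# show logging onboard module 3 | begin TEMPERATURE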
System Event Archive (SEA)

(Figure: sample show logging system output; GOLD components log major events such as test_LoopbackALL loopback results, mon_cnt_mem tests, check_udp_packet failures with "newpak is NULL", and "pak test failed for queue 0".)

The primary method of discovering the cause of a system failure is system messages. When system messages do not provide the information needed to determine the cause of a failure, you can enable debug traces and attempt to recreate the failure. However, there are several situations in which neither of these methods provides an optimum solution:
- Reviewing a large number of system messages can be an inefficient method of determining the cause of a failure.
- Debug trace is usually not configured by default.
- You cannot recreate the failure while using debug trace.
- Using debug trace is not an option if the switch on which the failure has occurred is part of your critical network.

The SEA enables each of the CPUs on a switch to report events to the management processor using an out-of-band interface. Each event is logged in nonvolatile memory with a time stamp.

You can retrieve the event log by accessing the bootflash on the device, or you can copy the log to another location, such as a removable storage device.

The SEA is a file of up to 32 MB that stores the most recent messages recorded to the log.

System Capacity Planning
* Provides a dashboard view of system hardware capacity, as well as the current utilization of the system

The show platform hardware capacity command provides the user with a summary of the current system utilization of platform-specific hardware resources on the Catalyst 6500 Series Switch. This command displays the current system utilization of the hardware resources and displays a list of the currently available hardware capacities, including the following:
- Hardware forwarding table utilization
- Switch fabric utilization
- CPU(s) utilization
- Memory device (flash, DRAM, NVRAM) utilization
- The various engines (Policy Feature Cards [PFCs], Distributed Forwarding Cards [DFCs], and centralized forwarding cards [CFCs])

The intended user of the show platform hardware capacity command is a network engineer or network architect who can use the command for capacity planning. The output of the command can be used to compare the current hardware utilization with the maximum hardware capacities so that the user can make informed network design decisions based on the usage of the product. This command can also be helpful in initial troubleshooting efforts.

(Figure: sample show platform hardware capacity acl output showing ACL and QoS TCAM utilization per module: ingress and egress labels, ACL entries and masks, QoS entries and masks, and LOU, ANDOR, and ORAND resources.)

The show platform hardware capacity acl command displays the capacities and utilization for access control list (ACL) and QoS TCAM resources for the system. This information is important to engineers who may want to deploy voice and video applications in their networks (QoS) or who may want to deploy security in their networks (ACL).

The show platform hardware capacity cpu command displays the capacities and utilization for CPU resources of the system. These metrics can assist network administrators in determining whether or not they have the overhead to add additional CPU-based features, or whether they need to examine their network for possible security threats that could be impacting their performance.

The show platform hardware capacity eobc command displays the capacities and utilization for the EOBC resources for the SP and RP, and any CFCs or DFCs in the chassis. The EOBC is the channel over which the supervisor communicates with the system line cards to perform such functions as hardware table updates (on DFCs), hardware management, and statistics collection.

The show platform hardware capacity fabric command displays the capacities and utilization for the switch fabric and/or Cisco Catalyst bus resources for the system. This information can aid network engineers in several ways:
- When determining how much more bandwidth can be added to the system
- When determining if an upgrade to the system is needed to support more bandwidth
- When deciding whether to change from centralized forwarding to distributed forwarding

The show platform hardware capacity flash command displays the capacities and utilization for the flash devices and NVRAM resources of the system.
This can assist network administrators in determining when, or if, they need to consider upgrading their flash devices. NVRAM is not upgradeable, but knowing the utilization is helpful when considering the configuration of the system.

The show platform hardware capacity forwarding command displays the capacities and utilization for the Layer 2 and Layer 3 forwarding resources of the system, focusing mainly on forwarding-engine rates and forwarding-table scalability. Using these statistics, a network engineer can determine how any proposed changes or upgrades would affect the Layer 2 and Layer 3 forwarding capabilities of the system. They could also assist in troubleshooting any potential forwarding issues, if those issues were a result of system resources being exceeded.

The show platform hardware capacity interface command displays the capacities and utilization for interface resources of the system. This information can aid network administrators when they need to troubleshoot network issues, or when they need to consider how to deploy QoS within their infrastructure. With the availability of the Tx and Rx buffers for each interface, a network administrator can decide how much space to allocate for each class of traffic.

The show platform hardware capacity monitor command displays the capacities and utilization for SPAN, Remote Switched Port Analyzer (RSPAN), and Encapsulated RSPAN (ERSPAN) resources for the Catalyst 6500 Series Switch. Network engineers can use this information to help plan how they are going to utilize the SPAN, RSPAN, and ERSPAN capabilities of the Catalyst 6500 Series Switch. Since some of the service modules require SPAN/RSPAN/ERSPAN sessions in order to capture traffic, it is important that these parameters be fully understood when planning to deploy these modules.

The show platform hardware capacity multicast command displays the capacities and utilization for the Layer 3 multicast resources for the Catalyst 6500 Series Switch. This information is important to IT staff when they are preparing to implement or increase their use of multicast applications (such as video, teleconferencing, or financial institution trading).

The show platform hardware capacity netflow command displays the capacities and utilization for the NetFlow resources of the Catalyst 6500 Series Switch. This information can be used when deciding whether or not the NetFlow resources for the current supervisor are adequate for the current network environment. It can also help users who may be troubleshooting NetFlow issues in their infrastructure.

The show platform hardware capacity pfc command displays the capacities and utilization for all of the PFC3s, including Layer 2 and Layer 3 forwarding, NetFlow, CPU rate limiters, and ACL/QoS TCAM resources. This is a useful command if a user is trying to get a general picture of the overall forwarding utilization of the system.

The show platform hardware capacity power command displays the capacities and utilization of the power resources for the Catalyst 6500 Series Switch. This information can help network engineers determine if they have the proper power capacity to add new line cards or deploy new equipment such as IEEE 802.3af inline power devices for IP telephony, video surveillance, or wireless networking.

The show platform hardware capacity qos command displays the capacities and utilization for the QoS policer resources of the Catalyst 6500 Series Switch. This information is useful when troubleshooting QoS policing or when determining how many more policers can be added to a system.

The show platform hardware capacity rate-limit command displays the capacities and utilization for the CPU rate limiter functionality on the Catalyst 6500 Series Switch. The information relayed by this command can assist network planners in deciding how to deploy security services to help mitigate denial of service (DoS) attacks against their network infrastructure.

The show platform hardware capacity system command displays the capacities and utilization for the system resources of the Catalyst 6500 Series Switch. This information can be used by network administrators to troubleshoot system operational issues and can be used to plan for system upgrades.

The show platform hardware capacity vlan command displays the capacities and utilization for the VLAN resources of the Catalyst 6500 Series Switch. With this information, network engineers can plan VLAN utilization for any new services that they need to deploy.
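A periodic baseline sweep of these resources makes utilization trends easy to spot. The following is a minimal sketch using the keywords described above; the command can also be issued without a keyword to display the whole family at once:

6500# show platform hardware capacity
6500# show platform hardware capacity fabric
6500# show platform hardware capacity forwarding
6500# show platform hardware capacity power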
Configuring Smart Call Home: Contact Info
* Enter Call Home configuration mode

6500(cfg-call-home)#
customer-id customer
site-id site
contract-id Cisco_contract_ID

* Set the contact information

How you configure Smart Call Home depends on how you intend to use the feature. Some information to consider before you configure Smart Call Home includes this:
- At least one destination profile (predefined or user-defined) must be configured. Which destination profiles are used depends on whether the receiving entity is a pager, e-mail, or an automated service such as Smart Call Home:
  - If the destination profile uses e-mail message delivery, you must specify a Simple Mail Transfer Protocol (SMTP) server.
  - If the destination profile uses secure HTTP (HTTPS) message transport, you must configure a Trustpoint certificate authority (CA).
- The contact e-mail, phone, and street address information should be configured so that the receiver can determine the origin of messages received.
- The switch must have IP connectivity to an e-mail server or to the destination HTTP server.
- If Smart Call Home is used, an active service contract must cover the device being configured.

These are the configuration steps for the Call Home feature:
Step 1 Configure the contact information of your site.
Step 2 Configure destination profiles for each of your intended recipients.
Step 3 Subscribe each destination profile to one or more alert groups, and set alert options.
Step 4 Configure e-mail settings or HTTPS settings (including the CA certificate), depending on the transport method.
Step 5 Enable the Call Home feature.
Step 6 Test Call Home messages.

For more information on configuring Call Home, refer to the Catalyst 6500 IOS Configuration Guide or go to http://www.cisco.com/go/smartcall
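Step 1 might look like the following minimal sketch; all identifiers and addresses below are placeholders for illustration:

6500# configure terminal
6500(config)# call-home
6500(cfg-call-home)# contact-email-addr netadmin@example.com
6500(cfg-call-home)# phone-number +1-408-555-0100
6500(cfg-call-home)# street-address "170 W. Tasman Drive, San Jose, CA 95134"
6500(cfg-call-home)# customer-id Customer1234
6500(cfg-call-home)# site-id Site1
6500(cfg-call-home)# contract-id Cisco1234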
Configuring Smart Call Home: Profile

6500(cfg-call-home)#
profile name

* Define the destination profile

6500(cfg-call-home-profile)#
destination transport-method {email | http}
destination address {email email-address | http url}
destination preferred-msg-format {long-text | short-text | xml}
active

* Enter the profile information

A destination profile contains the required delivery information for an alert notification. At least one destination profile is required, but multiple destination profiles of one or more types can be configured.

For the transport method, the destination e-mail address or URL to which Call Home messages will be sent has to be configured.

Note When entering a destination URL, include either http:// or https://, depending on whether the server is a secure server. If the destination is a secure server, a Trustpoint CA must also be configured.

The message format by default is XML.

Verify Smart Call Home Profile
* The default Cisco TAC Call Home profile

6500-1#show call-home profile all
Profile name: CiscoTAC-1
Profile status: INACTIVE
HTTP address(es): https://tools.cisco.com/its/service/oddce/services/DDCEService
Periodic configuration info message is scheduled every 8 day of the month at 14:52
Periodic inventory info message is scheduled every 9 day of the month at 13:52
Alert-group         Severity
diagnostic          minor
environment         minor
Syslog-Pattern      Severity
<...Rest of the output omitted...>

The profile configuration can be checked with the show call-home profile {name | all} command.

Configuring Smart Call Home: Alert Groups

6500(cfg-call-home)#
alert-group {all | configuration | diagnostic | environment | inventory | syslog}

* Enable the desired alert group

6500(cfg-call-home-profile)#
subscribe-to-alert-group configuration [periodic {daily hh:mm | monthly date hh:mm | weekly day hh:mm}]

* Subscribe the destination profile to the configuration group
* The destination profile can be subscribed to any of the defined alert groups

6500(cfg-call-home-profile)#
subscribe-to-alert-group all

* Subscribe the destination profile to all available alert groups

An alert group is a predefined subset of Call Home alerts supported in all switches. Different types of Call Home alerts are grouped into different alert groups depending on their type. These alert groups are available:
- Configuration
- Diagnostic
- Environment
- Inventory
- Syslog

The triggering events for each alert group are listed in the Alert Group Trigger Events and Commands section, and the contents of the alert group messages are listed in the Message Contents section of the Catalyst 6500 Release 12.2SXH and Later Software Configuration Guide, Network Management section, Configuring Call Home subsection.

The configuration steps are as follows:
Step 1 Enable the desired (or all) alert groups.
Step 2 Subscribe the selected profiles to the alert group.

Enabling Smart Call Home

6500(cfg-call-home)#
mail-server {ipv4-address | name} priority number

* Specify the mail server

6500(config)#
service call-home

* Enable the Call Home service

To use the e-mail message transport, at least one SMTP server address has to be configured (up to four can be configured for backup purposes). From and reply-to e-mail addresses can also be configured, and a rate limit on e-mail or HTTP messages can be set.

Finally, the configured Call Home has to be enabled with the service call-home command.
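Putting the profile, alert-group, and enablement steps together, a minimal sketch might look like this; the profile name, addresses, and severity level are placeholders:

6500(config)# call-home
6500(cfg-call-home)# mail-server 192.168.10.25 priority 1
6500(cfg-call-home)# alert-group diagnostic
6500(cfg-call-home)# profile NOC-email
6500(cfg-call-home-profile)# destination transport-method email
6500(cfg-call-home-profile)# destination address email noc@example.com
6500(cfg-call-home-profile)# destination preferred-msg-format xml
6500(cfg-call-home-profile)# subscribe-to-alert-group diagnostic severity major
6500(cfg-call-home-profile)# active
6500(cfg-call-home-profile)# exit
6500(cfg-call-home)# exit
6500(config)# service call-home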
Verifying Smart Call Home Configuration

6500#
call-home test ["test-message"] profile name

* Send a Call Home test message

6500#
call-home send alert-group diagnostic {module number | slot/subslot | slot/bay number} [profile name]

* Send a specific alert group message

A test message can be manually triggered with the call-home test command, and a specific alert group message to a selected profile can be triggered with the call-home send alert-group command.

Cisco IOS Configuration Archiving

6500(config)#
archive
 path filesystem:filename
 maximum number-of-backup-copies

* Enable the archive functionality; specify the archive configuration file and the maximum number of backup files

6500#
archive config

* Archive the current configuration

6500#
show archive

* Verify the archive:

6500#show archive
The maximum archive configurations allowed is 14.
The next archive file will be named disk0:backup-config-2
 Archive #  Name
   1        disk0:backup-config-1 <- Most Recent

The Cisco IOS configuration archive is intended to provide a mechanism to store, organize, and manage an archive of Cisco IOS configuration files in order to enhance the configuration rollback capability provided by the configure replace command. The configuration replacement and configuration rollback feature provides the capability to automatically save copies of the running configuration to the Cisco IOS configuration archive. These archived files serve as checkpoint configuration references and can be used by the configure replace command to revert to previous configuration states.

The archive config command allows you to save Cisco IOS configurations in the configuration archive using a standard location and filename prefix that is automatically appended with an incremental version number (and optional timestamp) as each consecutive file is saved. This functionality provides a means for consistent identification of saved Cisco IOS configuration files. You can specify how many versions of the running configuration will be kept in the archive. After the maximum number of files has been saved in the archive, the oldest file will be automatically deleted when the next, most recent file is saved.

The show archive command displays information for all configuration files saved in the Cisco IOS configuration archive.
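A minimal sketch of the archive setup described above; the path and limits are placeholders, and time-period adds an automatic periodic archive on top of the manual archive config:

6500(config)# archive
6500(config-archive)# path disk0:backup-config
6500(config-archive)# maximum 5
6500(config-archive)# time-period 1440
6500(config-archive)# end
6500# archive config
6500# show archive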
* Activate the configuration from the file, with an optional listing of the commands that are applied:

6500-1#configure replace disk0:backup-config-1 list
This will apply all necessary additions and deletions
to replace the current running configuration with the
contents of the specified configuration file, which is
assumed to be a complete configuration, not a partial
configuration. Enter Y if you are sure you want to proceed. ? [no]: y
Total number of passes: 1
Rollback Done

The configuration replacement and configuration rollback features provide the capability to replace the current running configuration with any saved Cisco IOS configuration file. This functionality can be used to revert to a previous configuration state, effectively rolling back any configuration changes that were made since the previous configuration state was saved.

When using the configure replace command, you must specify a saved Cisco IOS configuration as the replacement configuration file that will be used to replace the current running configuration. The replacement file must be a complete configuration generated by a Cisco IOS device (for example, a configuration generated by the copy running-config destination-url command), or, if generated externally, the replacement file must comply with the format of files generated by Cisco IOS devices.

When the configure replace command is entered, the current running configuration is compared with the specified replacement configuration and a set of differences is generated. The algorithm used to compare the two files is the same as that employed by the show archive config differences command. The resulting differences are then applied by the Cisco IOS parser in order to achieve the replacement configuration state. Only the differences are applied, avoiding potential service disruption from reapplying configuration commands that already exist in the current running configuration. This algorithm effectively handles configuration changes to order-dependent commands (such as ACLs) through a multiple-pass process. Under normal circumstances, no more than three passes are needed to complete a configuration replacement operation, and a limit of five passes is performed to preclude any looping behavior.
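In practice, the replace operation is often paired with a safety timer. The following is a minimal sketch; the archive filename is a placeholder, list echoes the applied commands, and time 120 rolls the configuration back automatically unless it is confirmed within two minutes:

6500# configure replace disk0:backup-config-2 list time 120
6500# configure confirm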
Summary

This topic summarizes the key points that were discussed in this lesson.

Summary
* The fault management framework on the Cisco Catalyst 6500 Series Switch consists of automated and administrator-initiated tools.
* GOLD minimizes downtime by detecting hardware problems at boot time and proactively during normal operation.
* Enhanced troubleshooting capabilities help to avoid problems, confine problems quickly, or log information about faults to aid in problem resolution.
* Information can be provided to the administrator through Call Home, syslog messages, SNMP traps, and command responses.
* TDR can be used to check the status of copper cables.
* Cisco IOS configuration archive and replace and rollback functionality achieve configuration versioning and enable reverting to an older configuration.

Lesson 11

Implementing SPAN, RSPAN, and ERSPAN

Overview

Switched Port Analyzer (SPAN), Remote SPAN (RSPAN), and Encapsulated RSPAN (ERSPAN) sessions allow the network administrator to monitor and analyze traffic locally or remotely. Local SPAN, RSPAN, and ERSPAN sessions all send traffic to a network analyzer such as a sniffer device or other Remote Monitoring (RMON) probe. This lesson provides an overview of SPAN, RSPAN, and ERSPAN and discusses how these sessions are configured and used on the Cisco Catalyst 6500 Series Switch.

Objectives

Upon completing this lesson, you will be able to describe how to use and configure SPAN, RSPAN, and ERSPAN sessions. This includes being able to meet these objectives:
- Describe how to use SPAN sessions within the network
- Describe how to configure SPAN sessions
- Describe how to use RSPAN sessions within the network
- Describe how to configure an RSPAN session
- Describe how to use ERSPAN sessions within the network
- Describe how to configure ERSPAN sessions

SPAN Overview

This topic describes the purpose and use of local SPAN sessions on the network.

Understanding SPAN
* Facility that nondisruptively copies traffic from a source port to a destination port for analysis
* Analysis is commonly performed by a network sniffer or RMON probe

(Figure: traffic on a SPAN source port is copied to the SPAN destination port, where the copies are received by the analyzer.)

Local SPAN copies traffic from one or more source ports in any VLAN, or from one or more VLANs, to a destination port for analysis. A local SPAN session is an association of source ports and source VLANs with one or more destination ports.

A local SPAN session is configured on a single switch and does not have separate source and destination sessions.

Note Each local SPAN session can have either ports or VLANs as sources, but not both.

Local SPAN sessions do not copy locally sourced RSPAN VLAN traffic from source trunk ports that carry RSPAN VLANs. Additionally, SPAN sessions do not copy locally sourced ERSPAN Generic Routing Encapsulation (GRE) traffic from source ports.

SPAN Port Types
* Local source ports, local VLANs, and local destination ports are all supported by the SPAN process.
* Traffic flowing in either or both directions on a source SPAN port is copied to a SPAN destination port.

(Figure: three cases, receive traffic only, transmit traffic only, or both directions, each copied from the source SPAN port to the SPAN destination port.)

A SPAN port can be either a source or a destination SPAN port:
- A source interface, which is a monitored port, can be a switched port, a routed port, or a port channel. A single SPAN session can monitor one or multiple source ports, which can be in any VLAN. Trunk and nontrunk ports can be intermixed. SPAN does not copy the encapsulation from a source trunk port. The source port cannot be an active member of a port channel.
- A source VLAN is a monitored VLAN and, thus, effectively makes all the ports in the source VLAN source ports. A source VLAN only monitors traffic leaving or entering Layer 2 ports in the VLAN. Routed traffic through the source VLAN is not monitored.
- A destination port is a Layer 2 or Layer 3 LAN port to which local SPAN sends traffic for analysis. After a port is configured as a destination port, it no longer receives external traffic and is dedicated for use by the SPAN feature only. The only traffic forwarded by a SPAN destination port is the traffic required for the SPAN session. If a source VLAN is used and the destination port resides in the same VLAN, it gets two copies (one for ingress and one for egress traffic on the port).

Trunk ports configured as SPAN destination ports can transmit encapsulated traffic. With Cisco IOS Software Release 12.2(18)SXD and later, you can configure per-VLAN filtering on destination trunk ports using allowed VLAN lists for local SPAN. This is known as Virtual SPAN.

You can monitor ingress (receive-only), egress (transmit-only), or both (receive [Rx] and transmit [Tx]) traffic types on a source SPAN port.
SPAN Guidelines
* L2 and L3 interfaces can be configured as SPAN ports
* A previous SPAN session has to be explicitly deleted
* Multiple source SPAN interfaces can belong to different VLANs
* By default, ingress and egress traffic is mirrored
* A source SPAN port does not copy the source trunk 802.1Q or ISL tag
* An interface can be the destination SPAN port for one SPAN session only
* A destination SPAN port does not participate in STP; BPDUs seen are from the source SPAN port
* An individual session can have either (no intermixing):
  - Source interfaces
  - Source VLANs
  - Filter VLANs

The following restrictions must be considered when using SPAN (a configuration example combining several of these rules follows this list):
- A SPAN destination port that is copying traffic from a single egress SPAN source port sends only egress traffic to the network analyzer. From Cisco IOS Software Release 12.2(18)SXE and later, if you configure more than one egress SPAN source port, then the traffic that is sent to the network analyzer includes the following types of ingress traffic from the egress SPAN source port:
  - Any unicast traffic that is flooded on the VLAN
  - Broadcast and multicast traffic
- Both Layer 2 and Layer 3 ports can be configured as sources or destinations.
- Individual source ports and source VLANs cannot be mixed in a session.
- Multiple ingress source ports can belong to different VLANs.
- Source VLANs and filter VLANs cannot be mixed in the same session.
- If neither ingress nor egress is configured, both are assumed.
- SPAN does not copy source trunk port Inter-Switch Link (ISL) or IEEE 802.1Q tags; you should configure a trunk as the destination port to send locally tagged traffic to the traffic analyzer.
- A port specified as a destination port in one SPAN session cannot be a destination port for another SPAN session.
- A port configured as a destination port cannot be configured as a source port.
- Destination ports do not participate in any spanning tree instance; any bridge protocol data units (BPDUs) seen on the destination port are from the source port. RSPAN does not support BPDU monitoring.
- All packets sent through the switch for transmission from a port configured as an egress source are copied to the destination port, including packets that do not exit the switch through the port because Spanning Tree Protocol (STP) has put the port into the blocking state, or, on a trunk port, because STP has put the VLAN into the blocking state on the trunk port.
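Because filter VLANs narrow what a trunk source port mirrors, a session combining these rules might look like the following minimal sketch; the interface and VLAN numbers are placeholders:

6500(config)# monitor session 2 source interface gigabitethernet2/1 rx
6500(config)# monitor session 2 filter vlan 100 - 102
6500(config)# monitor session 2 destination interface gigabitethernet3/2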
SPAN, NAM, and IDSM
* SPAN: Copy packets to destination ports (NAM) on the same switch from:
  - source ports,
  - VLANs, or
  - EtherChannels
* RSPAN: SPAN from a remote switch
* VACL: Filter traffic based on an ACL and send it to the NAM

The Network Analysis Module (NAM) analyzes traffic for Catalyst 6500 Series Switches using RMON, RMON2, and other MIBs. SPAN selects network traffic and directs it to the NAM. TrafficDirector (or any other IETF-compliant RMON application) can analyze link characteristics, aid capacity planning, departmental accounting, and deployment of differentiated services policies, and filter or capture packets for debugging.

The Cisco Catalyst 6500 Series Intrusion Detection System Services Module 2 (IDSM-2) analyzes traffic for Cisco Catalyst 6500 Series Switches and acts upon detected attacks. SPAN is used to send the traffic to be monitored to the Catalyst 6500 Series IDSM-2.

SPAN, Supervisor 720, and VSS
* Supervisor 720 and Supervisor 32 SPAN, RSPAN, and ERSPAN session limits with Cisco IOS 12.2(33)SXH onwards
* VSS domain:
  - The number of SPAN sessions is limited to the VSS active supervisor module.
  - SPAN management is active on the VSS active supervisor module.
* 12.2SXH IOS:
  - The number of local egress SPAN sessions is increased to 14.
  - CPU interfaces (SP and RP) can be used as SPAN source ports.

SPAN, RSPAN, and ERSPAN have limitations in the number of sessions when used on the Cisco Catalyst 6500 Series Supervisor Engine 720.

When SPAN is used in the Cisco Catalyst 6500 Series Virtual Switching System 1440 (VSS 1440) domain, the number of available SPAN sessions is limited to the capability of the active supervisor, and the active supervisor is also the one that handles SPAN management.

With Cisco IOS Software Release 12.2SXH, some enhancements were introduced:
- The number of local egress SPAN sessions is increased to 14.
- Switch processors (SPs) and route processors (RPs) can be used as SPAN source ports, thus allowing control traffic to be monitored.

Configuring SPAN

This topic describes the SPAN configuration steps.

Configuring SPAN Source Port

6500(config)#
monitor session number source {interface interface(s) | vlan vlan(s)} [rx | tx | both]

* Define the SPAN session number and source port

* Set the session number:
6500(config)#monitor session ?
  <1-66>         SPAN session number
  servicemodule  Use SPAN to enable service module

* Choose the SPAN source option:
6500(config)#monitor session 1 ?
  destination  SPAN destination interface, VLAN
  filter       SPAN filter VLAN
  source       SPAN source interface, VLAN

* Choose the traffic source (interface/VLAN):
6500(config)#monitor session 1 source ?
  interface  SPAN source interface
  remote     SPAN source Remote
  vlan       SPAN source VLAN

Local SPAN does not use separate source and destination sessions. When configuring a local SPAN session, the monitor session source interface command is used.

A SPAN session has a number between 1 and 66. The same SPAN session number is used to configure the local SPAN source and destination ports.

A source SPAN port is configured by choosing:
- The port type, which can be interfaces or VLANs
- The actual interface or VLAN
- The traffic type: Rx, Tx, or both

Configuring SPAN Source Port (Cont.)
* Set the actual source interface or VLAN:

6500(config)#monitor session 1 source interface ?
  FastEthernet     FastEthernet IEEE 802.3
  GigabitEthernet  GigabitEthernet IEEE 802.3z
  Port-channel     Ethernet Channel of interfaces

6500(config)#monitor session 1 source vlan ?
  <1-4094>  SPAN source VLAN

* Choose the traffic direction:

6500(config)#monitor session 1 source interface gigabitethernet2/1 ?
  ,     Specify another range of interfaces
  -     Specify a range of interfaces
  both  Monitor received and transmitted traffic
  rx    Monitor received traffic only
  tx    Monitor transmitted traffic only

Choose the actual interface or range of interfaces (physical, port channel, VLAN) and the type of traffic (Rx, Tx, or both).

Configuring SPAN Destination Port

6500(config)#
monitor session number destination interface(s)

* Define the SPAN session number and destination port

* Set the actual interface for the SPAN destination:
6500(config)#monitor session 1 destination ?
  analysis-module             SPAN destination analysis module
  interface                   SPAN destination interface
  intrusion-detection-module  SPAN destination intrusion detection module
  remote                      SPAN destination Remote

6500(config)#monitor session 1 destination interface ?
  FastEthernet     FastEthernet IEEE 802.3
  GigabitEthernet  GigabitEthernet IEEE 802.3z

6500(config)#monitor session 1 destination interface Gi3/1

A destination SPAN port is configured by choosing:
- The port type, which can be an interface or the NAM/IDSM (keep in mind that remote is used for RSPAN)
- The actual destination interface

Setting SPAN Destination for Cisco NAM or IDSM
* Select the NAM or IDSM as the SPAN destination:

6500(config)#monitor session 1 destination ?
  analysis-module             SPAN destination analysis module
  interface                   SPAN destination interface
  intrusion-detection-module  SPAN destination intrusion detection module
  remote                      SPAN destination Remote

6500(config)#monitor session 1 destination analysis-module ?
  <1-6>  SPAN destination analysis-module number

* Select one of the available data ports:

6500(config)#monitor session 1 destination intrusion-detection-module 4 data-port ?
  <1-2>  Port number

To configure the NAM or Catalyst 6500 Series IDSM-2 as the destination port for a SPAN session, follow the steps listed in the figure above. The difference between this and a regular SPAN session is in the destination port type: use either the analysis-module or the intrusion-detection-module keyword.

Verifying SPAN Session

6500#
show monitor session number [detail]

* Examine the configured SPAN session:

6500#show monitor session 1
Session 1
---------
Type              : Local Session
Source Ports      :
    Both          : Gi2/1
Destination Ports : Gi3/1
Filter VLANs      : None

A SPAN session can be verified with the show monitor session command.

The command shows:
- The source SPAN port type and number (either physical, port channel, or VLAN)
- The type of the traffic mirrored
- The destination port
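Tying the steps together, a complete local SPAN session and its verification might look like this minimal sketch; the interface numbers are placeholders:

6500# configure terminal
6500(config)# monitor session 1 source interface gigabitethernet2/1 both
6500(config)# monitor session 1 destination interface gigabitethernet3/1
6500(config)# end
6500# show monitor session 1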
RSPAN Overview

This topic describes the RSPAN configuration.

Understanding RSPAN
* Similar to SPAN in functionality
* Source and destination ports are located on different switches

(Figure: traffic from an RSPAN source port on one switch is carried across the network to an RSPAN destination port on a remote switch.)

RSPAN copies traffic from one or more source ports in any VLAN, or from one or more VLANs, on one switch to a destination port on a remote switch. RSPAN supports source ports and source VLANs on one switch, and destination ports on a different switch.

With RSPAN, an administrator can remotely monitor multiple switches across the network from one central location.

Understanding RSPAN VLAN
* A separate VLAN must be defined to support the transmission of RSPAN traffic across the network.
* The RSPAN VLAN carries only RSPAN traffic.

(Figure: the RSPAN VLAN carries the mirrored traffic from the source switch through an intermediate switch to the destination port.)

RSPAN consists of an RSPAN source session, an RSPAN VLAN, and an RSPAN destination session. The RSPAN source and destination sessions are configured on different switches. When configuring an RSPAN source session on one switch, a set of source ports or VLANs is associated with an RSPAN VLAN. The RSPAN destination session on another switch associates the destination port with the RSPAN VLAN.

Traffic for each RSPAN session is carried as Layer 2 nonroutable traffic over a user-specified RSPAN VLAN that is dedicated to that session in all participating switches.

Participating switches must be trunk-connected at Layer 2. The participating switches do not need to have RSPAN configured on them. Only the switches with the RSPAN source and destination will have RSPAN configured on them. In the example here, this means that the middle switch will just need the RSPAN VLAN configured on it, while the left-most switch will have the RSPAN source session plus the RSPAN VLAN and the right-most switch will have the RSPAN destination session plus the RSPAN VLAN.

RSPAN Guidelines
* All switches (source, intermediate, and destination) between the RSPAN source and RSPAN destination port:
  - Must be trunk-connected at Layer 2
  - Must be configured with all RSPAN VLANs
* Original VLAN information is not maintained with RSPAN
* Only trunk ports should be members of the RSPAN VLAN
* RSPAN does not support BPDU monitoring
* RSPAN destination ports do not participate in STP
* An RSPAN VLAN cannot be a source VLAN for SPAN

The following guidelines and restrictions must be considered when using RSPAN:
- Participating switches (source, intermediate, and destination) must be trunk-connected at Layer 2.
- Any network device that supports the RSPAN VLAN(s) can be an RSPAN intermediate device.
- There is no limit to the number of RSPAN VLANs that the network can carry.
- Intermediate network devices might impose limits on the number of RSPAN VLANs that they can support.
- RSPAN VLANs must be configured in all source, intermediate, and destination network devices.
- VLAN Trunking Protocol (VTP) can propagate the configuration of VLANs numbered 1 through 1024 as RSPAN VLANs.
- RSPAN VLANs are only used for RSPAN traffic. Do not configure a VLAN used to carry management traffic as an RSPAN VLAN.
- Do not assign access ports to RSPAN VLANs. Ensure that trunk ports are configured to carry RSPAN VLANs.
- MAC address learning is disabled in the RSPAN VLAN.
- RSPAN does not support BPDU monitoring.

Configuring RSPAN

This topic describes the RSPAN configuration.

Configuring RSPAN on Source Switch
* Source switch configuration steps:
  - Define the RSPAN VLAN
  - Configure the source and destination RSPAN session
  - Configure a trunk on the port pointing towards the destination switch

6500(config)#
vlan number
 remote-span

* Set the VLAN to be RSPAN

6500(config)#
monitor session number destination remote vlan rspan-vlan

* Set the RSPAN VLAN as the destination for the session

RSPAN does not use separate source and destination sessions. An RSPAN session is configured on the source and on the destination switch.

An RSPAN session on the source switch is configured using the following steps:
Step 1 Define the chosen VLAN to be an RSPAN VLAN with the remote-span command.
Step 2 Configure the interface towards the destination switch to be a trunk with the switchport mode trunk command.
Step 3 Configure the RSPAN session source port with the monitor session source interface command.
Step 4 Configure the RSPAN session destination with the monitor session destination remote vlan command.

Note If there are intermediate switches between the source and destination switches, they have to be configured with the RSPAN VLAN, and the links connecting the switches have to be trunk links with the RSPAN VLAN allowed.
Configuring RSPAN on Source Switch (Cont.)
* Define VLAN 400 to be used for the RSPAN session on switch A
* Define the RSPAN source and destination session for switch A
* Define GigabitEthernet 4/1 as a trunk port

vlan 400
 remote-span
!
interface GigabitEthernet4/1
 switchport mode trunk
!
monitor session 1 source interface Gi2/1 both
monitor session 1 destination remote vlan 400

(Figure: the RSPAN source port is Gi2/1; the destination for the session is RSPAN VLAN 400, carried over the Gi4/1 trunk towards the destination switch.)

In the example, VLAN 400 is configured to be the RSPAN VLAN on the source switch. Interface GigabitEthernet 4/1 is configured to be a trunk port.

The source for the RSPAN session is the GigabitEthernet 2/1 interface, with Rx and Tx traffic being spanned. The destination for the RSPAN session is set to VLAN 400.

If the interface GigabitEthernet 4/1 on the source switch has any VLAN restrictions, RSPAN VLAN 400 must be allowed.

Configuring RSPAN on Destination Switch
* Destination switch configuration steps:
  - Define the RSPAN VLAN
  - Configure the source and destination RSPAN session
  - Configure a trunk on the port pointing towards the source switch

6500(config)#
monitor session number source remote vlan rspan-vlan

* Set the RSPAN VLAN as the source for the session

The RSPAN session on the destination switch is configured using the following steps:
Step 1 Define the chosen VLAN to be RSPAN with the remote-span command.
Step 2 Configure the interface towards the source switch to be a trunk with the switchport mode trunk command.
Step 3 Configure the RSPAN session source with the monitor session source remote vlan command.
Step 4 Configure the RSPAN session destination with the monitor session destination interface command.

Note If there are intermediate switches between the source and destination switches, they have to be configured with the RSPAN VLAN, and the links connecting the switches have to be trunk links with the RSPAN VLAN allowed.
Verifying RSPAN Session

6500# show monitor session 1 detail

[The slide shows the show monitor session 1 detail output on the source and destination switches; among other fields, it displays the session type (Remote Source Session or Remote Destination Session), the source port Gi2/1, the destination port Gi3/1, the filter VLANs, and the destination RSPAN VLAN.]

An RSPAN session can be verified with the show monitor session command.

The command shows:
- The source RSPAN port type and number (either physical, port channel, or VLAN)
- The type of the traffic mirrored
- The destination port or VLAN

The example shows the output for the RSPAN session on the source and destination switches.

ERSPAN Overview

This topic discusses the ERSPAN functionality.

Understanding ERSPAN

- Similar to SPAN in functionality
- Source and destination ports are located on different switches, with any Layer 3 routed network in between

[Figure: an ERSPAN source port on one switch, GRE-encapsulated traffic crossing a routed network, and an ERSPAN destination port on another switch.]

Like the other forms of SPAN, ERSPAN copies traffic from the SPAN source to the SPAN destination. But in ERSPAN, the SPAN destination is the IP address of another device in the network. While RSPAN can carry monitored traffic only over a Layer 2 infrastructure, ERSPAN overcomes this limitation, carrying the monitored traffic across an arbitrary IP cloud to an analyzer or other device connected anywhere in the network. At the same time, like RSPAN, ERSPAN provides the ability to aggregate captured traffic from multiple network devices and deliver it to a single centralized system, eliminating the need to distribute analyzers throughout the network.

ERSPAN consists of:
- An ERSPAN source session
- Routable ERSPAN GRE traffic
- An ERSPAN destination session

ERSPAN source and destination sessions are configured on different switches. An ERSPAN source session is configured by associating a set of source ports or VLANs with a destination IP address, an ERSPAN ID number, and, optionally, a virtual routing and forwarding (VRF) name. On the destination switch, an ERSPAN destination session is configured by associating a destination port with the source IP address, the ERSPAN ID, and, optionally, a VRF name. Each ERSPAN session can have either ports or VLANs as sources, but not both.

ERSPAN source sessions copy traffic from the source ports or source VLANs and forward that traffic through the network using routable GRE packets to the ERSPAN destination session. The ERSPAN destination session then switches that traffic to the destination port.

Note: Certain versions of the Catalyst 6500 Series Supervisor Engine 720 with PFC3A do not support ERSPAN. For more details, see ERSPAN Guidelines and Restrictions at http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/span.html#wp1059619.

Note: Cisco IOS Software Release 12.2(18)SXE and later releases support ERSPAN.

ERSPAN Guidelines

- Release 12.2(18)SXE and later support ERSPAN
- PFC3B and PFC3BXL support ERSPAN
- Regardless of any configured MTU, ERSPAN creates Layer 3 packets that can be as long as 9,202 bytes
- All participating switches must be connected at Layer 3, and the network path must support the size of the ERSPAN traffic
- ERSPAN does not support packet fragmentation
- The Protocol Type field value in the GRE header is 0x88BE

The following restrictions must be taken into consideration when using ERSPAN:

- Cisco IOS Software Release 12.2(18)SXE and later support ERSPAN.
- PFC3B and PFC3BXL support ERSPAN.
- WS-SUP720 (the supervisor engine with PFC3A) hardware version 3.2 or higher supports ERSPAN; use the show module version command to find the supervisor engine hardware version.
- PFC2 does not support ERSPAN.
- The Protocol Type field in the GRE header of ERSPAN packets is 0x88BE.
- The payload of a Layer 3 ERSPAN packet is a Layer 2 Ethernet frame, excluding any ISL or 802.1Q tag.
- ERSPAN adds a 50-byte header to each copied Layer 2 Ethernet frame and replaces the 4-byte cyclic redundancy check (CRC) trailer.
- ERSPAN supports jumbo frames that contain Layer 3 packets of up to 9,202 bytes. If the length of the copied Layer 2 Ethernet frame is greater than 9,170 bytes (a 9,152-byte Layer 3 packet), ERSPAN truncates the copied Layer 2 Ethernet frame to create a 9,202-byte ERSPAN Layer 3 packet.
- Regardless of any configured maximum transmission unit (MTU) size, ERSPAN creates a Layer 3 packet that can be as long as 9,202 bytes; if any interface in the network enforces an MTU size smaller than 9,202 bytes, the ERSPAN traffic will be dropped (see the sketch below).
- All participating switches and the Layer 3 network paths must support the size of the ERSPAN traffic.
- ERSPAN does not support packet fragmentation.
- The ERSPAN packet IP precedence or differentiated services code point (DSCP) value can be set to prioritize ERSPAN traffic for quality of service (QoS).
- The ERSPAN destination session must be on the PFC3.
- All ERSPAN source sessions on a switch must use the same origin IP address.
- All ERSPAN destination sessions on a switch must use the same IP address on the same destination interface.
- The ERSPAN ID differentiates ERSPAN traffic arriving at the same destination IP address from the various different ERSPAN source sessions.

Note: To verify that the supervisor in a Catalyst 6500 Series Switch supports ERSPAN, use the show asic-version slot command. Note that HYPERION version 2.0 and higher supports ERSPAN.
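Because ERSPAN traffic is dropped rather than fragmented, the MTU restriction above is the one most often hit in practice. A minimal sketch, assuming a hypothetical transit interface on the Layer 3 path between the ERSPAN source and destination:

interface TenGigabitEthernet1/1
 ! 9,216 bytes leaves room for the largest possible 9,202-byte ERSPAN packet
 mtu 9216

Every routed interface along the path needs a similar setting; a single interface with a smaller MTU silently drops the mirrored traffic.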
Configuring ERSPAN

This topic describes the ERSPAN configuration.

Configuring ERSPAN on Source Switch

6500(config)# monitor session number type erspan-source
- Set the source ERSPAN session type and number

6500(config-mon-erspan-src)# source interface interface(s) | vlan vlan(s) {rx | tx | both}
- Choose the ERSPAN session source port and traffic type

6500(config-mon-erspan-src)# destination
6500(config-mon-erspan-src-dst)# ip address ip_address
6500(config-mon-erspan-src-dst)# erspan-id erspan_flow_id
6500(config-mon-erspan-src-dst)# origin ip address ip_address
- Set the ERSPAN session destination parameters: the local (origin) and remote (destination) IP addresses and the flow ID

ERSPAN uses separate source and destination sessions. The ERSPAN source and destination sessions are configured on different switches.

The ERSPAN session on the source switch is configured using the following steps:

Step 1: Define the ERSPAN session number and type with the monitor session type erspan-source command. The session number can be between 1 and 66.

Step 2: Choose the source port and traffic type with the source command. The same rules as for the RSPAN and SPAN source port apply; that is, the source port can be a single interface, a list of interfaces, a VLAN, or a list of VLANs. The source session can look at traffic entering and/or leaving the source port.

Step 3: Enter the session destination parameters:
- The destination IP address (a routable IP address configured on the destination switch) with the ip address command. The same IP address has to be configured in the ERSPAN destination session.
- The local IP address with the origin ip address command (this can be a loopback interface).
- The ERSPAN session flow ID with the erspan-id command. The ID number is used by the source and destination sessions to identify the ERSPAN traffic and must also be entered in the ERSPAN destination session configuration. The value can be between 1 and 1023.

Optionally, you can also define (these options are pulled together in the sketch after this list):
- An ERSPAN session description with the description command.
- A source VLAN filter, if the ERSPAN source port is a trunk interface, with the filter command.
- The IP Time to Live (TTL) value of the ERSPAN traffic packets with the ip ttl command. The value can be between 1 and 255.
- The IP precedence value of the packets in the ERSPAN traffic with the ip prec command. The value can be between 0 and 7.
- The IP DSCP value of the packets in the ERSPAN traffic with the ip dscp command. The value can be between 0 and 63.
- A VRF to be used, instead of the global routing table, with the vrf command.

Note: The destination IP address must be routable, and thus reachable, through Layer 3.
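The following sketch pulls the required and optional source-session parameters together. It is illustrative only: the session number, interface, addresses, flow ID, and the VRF name "mgmt" are all hypothetical, and it assumes that GigabitEthernet 2/1 is a trunk (otherwise the filter command does not apply):

monitor session 2 type erspan-source
 description Mirror server traffic towards the central analyzer
 source interface GigabitEthernet2/1 rx
 ! Only copy traffic belonging to VLAN 100 on the trunk
 filter vlan 100
 destination
  ip address 10.2.2.254
  origin ip address 10.1.1.254
  erspan-id 20
  ! Optional handling for the GRE-encapsulated copies
  ip ttl 32
  ip prec 3
  vrf mgmt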
Configuring ERSPAN on Destination Switch

6500(config)# monitor session number type erspan-destination
- Set the destination ERSPAN session type and number

6500(config-mon-erspan-dst)# destination interface interface(s)
- Choose the ERSPAN session destination port

6500(config-mon-erspan-dst)# source
6500(config-mon-erspan-dst-src)# ip address ip_address
6500(config-mon-erspan-dst-src)# erspan-id erspan_flow_id
- Set the ERSPAN session source parameters: the local destination IP address and the flow ID

The ERSPAN session on the destination switch is configured using the following steps:

Step 1: Define the ERSPAN session number and type with the monitor session type erspan-destination command.

Note: The session number can be different from the one defined on the source switch.

Step 2: Choose the destination port with the destination command. The same rules apply as for the RSPAN and SPAN destination interface.

Step 3: Enter the session source parameters:
- The local destination IP address with the ip address command. This is a routable local IP address on the switch, which must match the IP address configured in the ERSPAN source session.
- The ERSPAN session flow ID with the erspan-id command. The flow ID must match the one configured in the ERSPAN source session.

Optionally, you can also define (see the sketch after this list):
- An ERSPAN session description with the description command.
- A VRF to be used, instead of the global routing table, with the vrf command.

Note: The local destination IP address must be routable, and thus reachable, through Layer 3 from the source switch.
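The worked example that follows uses only the required parameters. For completeness, here is a sketch of a destination session that also sets the optional description and VRF; the names and numbers are hypothetical and would have to match a corresponding source session:

monitor session 2 type erspan-destination
 description Analyzer attached to Gi3/1
 destination interface GigabitEthernet3/1
 source
  ip address 10.2.2.254
  erspan-id 20
  ! Resolve the session addresses in the "mgmt" VRF instead of the global table
  vrf mgmt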
ERSPAN Configuration Example

Switch A (ERSPAN source session):

monitor session 1 type erspan-source
 source interface GigabitEthernet2/1 rx
 destination
  ip address 10.2.2.254
  origin ip address 10.1.1.254
  erspan-id 10

Switch B (ERSPAN destination session):

monitor session 1 type erspan-destination
 destination interface GigabitEthernet3/1
 source
  ip address 10.2.2.254
  erspan-id 10

In the example, an ERSPAN session is configured between switches A and B.

The following pertains to the switch A configuration:
- This is a source ERSPAN session.
- Traffic received is monitored on GigabitEthernet 2/1.
- The destination IP address of the ERSPAN session is the Loopback0 interface on switch B.
- The origin of the session is set to the Loopback0 interface on switch A.
- The ERSPAN flow ID is set to 10.

The following pertains to the switch B configuration:
- This is a destination ERSPAN session.
- A protocol analyzer or RMON probe is connected to interface GigabitEthernet 3/1.
- The local IP address, which is the ERSPAN destination, is Loopback0 on switch B.
- The same ERSPAN flow ID is used as on switch A.

Note: Loopback0 on switch B has to be reachable at Layer 3 from switch A.

Verifying ERSPAN Session

[The slide shows the show monitor session output on the source and destination switches; among other fields, it displays the session type, the source port Gi2/1, the destination IP address, the ERSPAN ID, and the destination port Gi3/1.]

An ERSPAN session can be verified with the show monitor session command.

The command shows:
- The source ERSPAN port type and number (either physical, port channel, or VLAN) or IP address
- The type of the traffic mirrored
- The destination port, VLAN, or IP address

The example shows the output for the ERSPAN session on the source and destination switches.

Summary

This topic summarizes the key points that were discussed in this lesson.

- SPAN sessions forward monitored traffic to a destination port for analysis.
- SPAN copies packets from source ports, VLANs, or EtherChannels to destination ports on the same switch.
- RSPAN is SPAN from a remote switch; the source and destination ports are located on different switches.
- ERSPAN is similar to SPAN, except that the source and destination ports are on different switches, and a routable GRE header is applied for transport over intermediate Layer 3 networks.
- RSPAN operates at Layer 2, while ERSPAN functions at Layer 3.
